[node.js] Node.js Hostname/IP doesn't match certificate's altnames

I have code:

var r = require('request');
r({
  method: 'POST',
  url: 'https://api.dropbox.com'},
  function() { console.log(arguments)  } )

When I run it on desktop with Node 0.9.4, I get this in the console:

{ '0': [Error: Hostname/IP doesn't match certificate's altnames] }

When I run it on a netbook with Node 0.6.12, it all works without error (a 302 response - I think that's right).

In the question Node.js hostname/IP doesnt match certificates altnames, Rojuinex wrote: "Yeah, browser issue... sorry". What does "browser issue" mean?

UPD. This problem was resolved after rolling back to Node v0.8.



To fix the issue for the http-proxy package:

1) HTTP (localhost) accessing HTTPS: set changeOrigin to true.

const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer();

proxy.web(req, res, {
  changeOrigin: true,
  target: 'https://example.com:3000',
});

2) HTTPS accessing HTTPS: you should also include the SSL certificate

const fs = require('fs');
const httpProxy = require('http-proxy');

httpProxy.createServer({
  ssl: {
    key: fs.readFileSync('valid-ssl-key.pem', 'utf8'),
    cert: fs.readFileSync('valid-ssl-cert.pem', 'utf8')
  },
  target: 'https://example.com:3000',
  secure: true
}).listen(443);

Another way to fix this, in other circumstances, is to set NODE_TLS_REJECT_UNAUTHORIZED=0 as an environment variable:

NODE_TLS_REJECT_UNAUTHORIZED=0 node server.js

WARNING: This is a bad idea security-wise


A slightly updated answer (since I ran into this problem in different circumstances).

When you connect to a server using SSL, the first thing the server does is present a certificate which says "I am api.dropbox.com." The certificate has a "subject" and the subject has a "CN" (short for "common name".) The certificate may also have one or more "subjectAltNames". When node.js connects to a server, node.js fetches this certificate, and then verifies that the domain name it thinks it's connecting to (api.dropbox.com) matches either the subject's CN or one of the altnames. Note that, in node 0.10.x, if you connect using an IP, the IP address has to be in the altnames - node.js will not try to verify the IP against the CN.
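For example, here is a minimal sketch (using Node's built-in https module, with api.dropbox.com as a placeholder host) that prints the subject CN and subjectAltNames a server actually presents:

var https = require('https');

// Dump the certificate the server presents, so you can see which CN and
// subjectAltNames node will try to match the hostname against.
https.get('https://api.dropbox.com/', function (res) {
  var cert = res.socket.getPeerCertificate();
  console.log('subject CN:', cert.subject && cert.subject.CN);
  console.log('altnames:', cert.subjectaltname);
  res.resume(); // discard the response body
}).on('error', function (err) {
  console.error(err);
});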

Setting the rejectUnauthorized flag to false will get around this check, but: first, if the server is giving you different credentials than you are expecting, something fishy is going on; second, this also bypasses other certificate checks. It's not a good idea if you're connecting over the Internet.

If you are using node >= 0.11.x, you can also specify a checkServerIdentity: function(host, cert) function to the tls module, which should return undefined if you want to allow the connection and throw an exception otherwise (although I don't know if request will proxy this flag through to tls for you.) It can be handy to declare such a function and console.log(host, cert); to figure out what the heck is going on.
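A minimal sketch of that idea against the tls module directly (the host and port are placeholders; newer Node versions also let you return an Error instead of throwing to reject):

var tls = require('tls');

var socket = tls.connect({
  host: 'api.dropbox.com',
  port: 443,
  servername: 'api.dropbox.com',
  checkServerIdentity: function (host, cert) {
    // Log what we are being asked to verify, then fall back to the
    // default hostname check.
    console.log(host, cert.subject, cert.subjectaltname);
    return tls.checkServerIdentity(host, cert);
  }
}, function () {
  console.log('authorized:', socket.authorized);
  socket.end();
});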


I was getting this when streaming to Elasticsearch from a Lambda function in AWS. I smashed my head against a wall trying to figure it out. In the end, when setting request.headers['Host'] I was including the https:// in the ES domain - changing it to [es-domain-name].eu-west-1.es.amazonaws.com (without https://) worked straight away. Below is the code I used to get it working; hopefully it saves someone else from smashing their head against a wall...

import path from 'path';
import AWS from 'aws-sdk';

const { region, esEndpoint } = process.env;
const endpoint = new AWS.Endpoint(esEndpoint);
const httpClient = new AWS.HttpClient();
const credentials = new AWS.EnvironmentCredentials('AWS');

/**
 * Sends a request to Elasticsearch
 *
 * @param {string} httpMethod - The HTTP method, e.g. 'GET', 'PUT', 'DELETE', etc
 * @param {string} requestPath - The HTTP path (relative to the Elasticsearch domain), e.g. '.kibana'
 * @param {string} [payload] - An optional string (e.g. serialized JSON) to send as the HTTP request body
 * @returns {Promise} Promise - object with the result of the HTTP response
 */
export function sendRequest ({ httpMethod, requestPath, payload }) {
    const request = new AWS.HttpRequest(endpoint, region);

    request.method = httpMethod;
    request.path = path.join(request.path, requestPath);
    request.body = payload || ''; // fall back to an empty string so Buffer.byteLength below doesn't throw when payload is omitted
    request.headers['Content-Type'] = 'application/json';
    request.headers['Host'] = '[es-domain-name].eu-west-1.es.amazonaws.com';
    request.headers['Content-Length'] = Buffer.byteLength(request.body);
    const signer = new AWS.Signers.V4(request, 'es');
    signer.addAuthorization(credentials, new Date());

    return new Promise((resolve, reject) => {
        httpClient.handleRequest(
            request,
            null,
            response => {
                const { statusCode, statusMessage, headers } = response;
                let body = '';
                response.on('data', chunk => {
                    body += chunk;
                });
                response.on('end', () => {
                    const data = {
                        statusCode,
                        statusMessage,
                        headers
                    };
                    if (body) {
                        data.body = JSON.parse(body);
                    }
                    resolve(data);
                });
            },
            err => {
                reject(err);
            }
        );
    });
}

I know this is old, BUT for anyone else looking:

Remove https:// from the hostname and add port 443 instead.

{
  method: 'POST',
  hostname: 'api.dropbox.com',
  port: 443
}
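With the built-in https module (the path is a placeholder), that looks like:

var https = require('https');

// Note: the hostname has no scheme, and the port is given separately
// instead of being part of a URL.
var req = https.request({
  method: 'POST',
  hostname: 'api.dropbox.com',
  port: 443,
  path: '/'
}, function (res) {
  console.log(res.statusCode);
  res.resume();
});

req.on('error', console.error);
req.end();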

After verifying that the certificate is issued by a known Certificate Authority (CA), the Subject Alternative Names are checked (or, failing that, the Common Name) to verify that the hostname matches. This happens in the checkServerIdentity function. If the certificate has Subject Alternative Names and the hostname is not listed among them, you'll see the error message described:

Hostname/IP doesn't match certificate's altnames

If you have the CA cert that was used to issue the certificate you're connecting to (usually the case when using self-signed certificates), you can provide it:

var fs = require('fs');
var r = require('request');

var opts = {
    method: "POST",
    ca: fs.readFileSync("ca.cer")
};

r('https://api.dropbox.com', opts, function (error, response, body) {
    // do something
});

This will verify that the certificate is issued by the CA provided, but hostname verification will still be performed. Just supplying the CA is enough if the cert contains the hostname in its Subject Alternative Names. If it doesn't, and you also want to skip hostname verification, you can pass a no-op function as checkServerIdentity:

var fs = require('fs');
var r = require('request');

var opts = {
    method: "POST",
    ca: fs.readFileSync("ca.cer"),
    agentOptions: { checkServerIdentity: function() {} }
};

r('https://api.dropbox.com', opts, function (error, response, body) {
    // do something
});

If you are going to trust a sub-domain, for example aaa.localhost, please don't do it like mkcert localhost *.localhost 127.0.0.1 - this will not work, since some browsers don't accept wildcard subdomains.

Maybe try mkcert localhost aaa.localhost 127.0.0.1.
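A minimal sketch of serving the generated certificate with Node's https module (the file names follow mkcert's usual localhost+2.pem / localhost+2-key.pem pattern - adjust them to whatever mkcert actually printed):

const fs = require('fs');
const https = require('https');

// Serve HTTPS with the mkcert-generated certificate, then open
// https://aaa.localhost:8443/ to check that the name matches.
https.createServer({
  key: fs.readFileSync('localhost+2-key.pem'),
  cert: fs.readFileSync('localhost+2.pem')
}, (req, res) => {
  res.end('ok\n');
}).listen(8443);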


We don't have this problem if we test our client request against a localhost destination address (host or hostname in node.js) and the server cert's common name is CN = localhost. But as soon as we replace localhost with 127.0.0.1 or any other IP, we get the error Hostname/IP doesn't match certificate's altnames in node.js, or SSL handshake failed in Qt.

I had the same issue with my server certificate on my client request. To solve it for my node.js client app, I needed to put a subjectAltName in my server_extension section with the following value:

[ server_extension ]
       .
       .
       .

subjectAltName          = @alt_names_server

[alt_names_server]
IP.1 = x.x.x.x

and then I use -extensions when I create and sign the certificate.

example:

In my case, I first export the issuer's config file, because this file contains the server_extension section:

export OPENSSL_CONF=intermed-ca.cnf

so I create and sign my server cert:

openssl ca \
    -in server.req.pem \
    -out server.cert.pem \
    -extensions server_extension \
    -startdate `date +%y%m%d000000Z -u -d -2day` \
    -enddate `date +%y%m%d000000Z -u -d +2years+1day`   

It works fine for node.js clients making https requests, but it doesn't work for Qt clients using QSsl when we set sslConfiguration.setPeerVerifyMode(QSslSocket::VerifyPeer); it only works with QSslSocket::VerifyNone. Using VerifyNone makes our app skip the peer certificate check entirely, so it accepts any cert. So, to solve it, I also needed to change my server's common name in its cert and set it to the IP address where my server is running.

for example:

CN = 127.0.0.1
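For reference, a minimal node.js client sketch for this setup (the port, CA file name, and IP are placeholders for whatever your server and issuing CA actually use):

const fs = require('fs');
const https = require('https');

// Connect by IP; hostname verification only passes if the server cert
// carries that IP in its subjectAltName (or, as above, in its CN).
https.get({
  host: '127.0.0.1',
  port: 8443,
  path: '/',
  ca: fs.readFileSync('intermed-ca.cert.pem')
}, (res) => {
  console.log('status:', res.statusCode);
  res.resume();
}).on('error', console.error);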


I had the same issue using the request module to proxy a POST request from somewhere else, and it was because I had left the Host header in place (I was copying the headers from the original request).
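A minimal sketch of the fix, assuming an Express app proxying with request (the target URL is a placeholder):

const express = require('express');
const request = require('request');

const app = express();

app.post('/proxy', (req, res) => {
  const headers = Object.assign({}, req.headers);
  delete headers.host; // don't forward the original Host header

  // Stream the incoming body to the target and the response back out.
  req.pipe(request({
    method: 'POST',
    url: 'https://api.example.com' + req.url,
    headers: headers
  })).pipe(res);
});

app.listen(3000);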


For developers using the Fetch API in a Node.js app, this is how I got this to work using rejectUnauthorized.

Keep in mind that using rejectUnauthorized is dangerous: it opens you up to security risks, because it accepts a problematic certificate without verification.

const fetch = require("node-fetch");
const https = require('https');

const httpsAgent = new https.Agent({
  rejectUnauthorized: false,
});

async function getData() {
  const resp = await fetch(
    "https://myexampleapi.com/endpoint",
    {
      agent: httpsAgent,
    },
  )
  const data = await resp.json()
  return data
}