I'm using Node.js with request.js to reach an API, and I'm getting this error:
[Error: UNABLE_TO_VERIFY_LEAF_SIGNATURE]
All of my credentials are accurate and valid, and the server is fine: I made the same request successfully with Postman.
request({
  "url": domain + "/api/orders/originator/" + id,
  "method": "GET",
  "headers": {
    "X-API-VERSION": 1,
    "X-API-KEY": key
  }
}, function (err, response, body) {
  console.log(err);
  console.log(response);
  console.log(body);
});
This code is just running in an executable script, e.g. node ./run_file.js. Is that why? Does it need to run on a server?
Note: the following is dangerous, and will allow API content to be intercepted and modified between the client and the server.
Setting this environment variable also worked:
process.env['NODE_TLS_REJECT_UNAUTHORIZED'] = '0';
It's not an issue with the application, but with the certificate, which is signed by an intermediate CA.
If you accept that fact and still want to proceed, add the following to request options:
rejectUnauthorized: false
Full request:
request({
  "rejectUnauthorized": false,
  "url": domain + "/api/orders/originator/" + id,
  "method": "GET",
  "headers": {
    "X-API-VERSION": 1,
    "X-API-KEY": key
  }
}, function (err, response, body) {
  console.log(err);
  console.log(response);
  console.log(body);
});
The Secure Solution
Rather than turning off security, you can add the necessary certificates to the chain. First, install the ssl-root-cas package from npm:
npm install ssl-root-cas
This package contains many intermediate certificates that browsers trust but Node doesn't.
var sslRootCAs = require('ssl-root-cas/latest');
sslRootCAs.inject();
This will add the missing certificates. See here for more info:
https://git.coolaj86.com/coolaj86/ssl-root-cas.js
CoolAJ86's solution is correct, and it does not compromise your security like disabling all checks with rejectUnauthorized or NODE_TLS_REJECT_UNAUTHORIZED does. Still, you may need to inject an additional CA's certificate explicitly.
I first tried the root CAs included in the ssl-root-cas module:
require('ssl-root-cas/latest')
.inject();
I still ended up with the UNABLE_TO_VERIFY_LEAF_SIGNATURE error. Then I found out, using the COMODO SSL Analyzer, who issued the certificate for the website I was connecting to, downloaded that authority's certificate, and tried to add only that one:
require('ssl-root-cas/latest')
.addFile(__dirname + '/comodohigh-assurancesecureserverca.crt');
I ended up with another error: CERT_UNTRUSTED. Finally, I injected the additional root CAs and included "my" (apparently intermediate) CA, which worked:
require('ssl-root-cas/latest')
.inject()
.addFile(__dirname + '/comodohigh-assurancesecureserverca.crt');
For Create React App (where this error occurs too, and this question is the #1 Google result), you are probably using HTTPS=true npm start and a proxy (in package.json) that points to an HTTPS API which itself uses a self-signed certificate in development.
If that's the case, consider changing the proxy like this:
"proxy": {
"/api": {
"target": "https://localhost:5001",
"secure": false
}
}
The secure option decides whether the webpack dev-server proxy verifies the certificate chain. Disabling it means the API's self-signed certificate is not verified, so you still get your data.
It may be very tempting to do rejectUnauthorized: false or process.env['NODE_TLS_REJECT_UNAUTHORIZED'] = '0';, but don't do it! It exposes you to man-in-the-middle attacks.
The other answers are correct in that the issue lies in the fact that your cert is "signed by an intermediary CA." There is an easy solution to this, one which does not require a third-party library like ssl-root-cas or injecting any additional CAs into Node.
Most HTTPS clients in Node support options that allow you to specify a CA per request, which will resolve UNABLE_TO_VERIFY_LEAF_SIGNATURE. Here's a simple example using Node's built-in https module.
import { readFileSync } from 'fs';
import https from 'https';

const options = {
  host: '<your host>',
  defaultPort: 443,
  path: '<your path>',
  // assuming the bundle file is co-located with this file
  ca: readFileSync(__dirname + '/<your bundle file>.ca-bundle'),
  headers: {
    'content-type': 'application/json',
  }
};

https.get(options, res => {
  // do whatever you need to do
});
If, however, you can configure the SSL settings on your hosting server, the best solution is to add the intermediate certificates to your hosting provider. That way the client doesn't need to specify a CA, since it's included in the server itself. I personally use Namecheap + Heroku. The trick for me was to create one .crt file with cat yourcertificate.crt bundle.ca-bundle > server.crt. I then opened this file and added a newline after the first certificate. You can read more at
https://www.namecheap.com/support/knowledgebase/article.aspx/10050/33/installing-an-ssl-certificate-on-heroku-ssl
You can also try setting strictSSL to false, like this:
{
  url: "https://...",
  method: "POST",
  headers: {
    "Content-Type": "application/json"
  },
  strictSSL: false
}
I had the same issue. I followed ThomasReggi's and CoolAJ86's solutions and they worked well, but I wasn't satisfied with them, because the "UNABLE_TO_VERIFY_LEAF_SIGNATURE" issue happens at the certificate-configuration level.
I accept thirdender's solution, but it's only a partial one. As per the official nginx documentation, the certificate file should be a combination of the server certificate and the chained (intermediate) certificates.
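For example, with nginx the combined file can be produced by concatenating the server certificate and the intermediate bundle, and then pointing ssl_certificate at it. This is only a sketch; the file names and paths below are placeholders, not from any specific setup:
# server certificate first, then the intermediate chain
cat your_domain.crt intermediate_bundle.crt > your_domain.chained.crt
# then, in the nginx server block:
ssl_certificate     /etc/nginx/ssl/your_domain.chained.crt;
ssl_certificate_key /etc/nginx/ssl/your_domain.key;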
Just putting this here in case it helps someone. My case was different and a bit of an odd mix: I was getting this on a request made via superagent. The problem had nothing to do with certificates (which were set up properly) and everything to do with the fact that I was passing the superagent result through the async module's waterfall callback. To fix it, pass only result.body through the waterfall's callback instead of the entire result.
The following commands worked for me:
> npm config set strict-ssl false
> npm cache clean --force
The problem is that you are attempting to install a module from a repository with a bad or untrusted SSL (Secure Sockets Layer) certificate. Once you clean the cache, this problem will be resolved. You might need to turn strict-ssl back to true later on.
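To turn it back on once you're done:
> npm config set strict-ssl true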
Another approach to solving this securely is to use the following module.
node_extra_ca_certs_mozilla_bundle
This module can work without any code modification by generating a PEM file that includes all root and intermediate certificates trusted by Mozilla. You can use the following environment variable (works with Node.js v7.3+):
NODE_EXTRA_CA_CERTS
To generate the PEM file used with the above environment variable, install the module with:
npm install --save node_extra_ca_certs_mozilla_bundle
and then launch your Node script with the environment variable:
NODE_EXTRA_CA_CERTS=node_modules/node_extra_ca_certs_mozilla_bundle/ca_bundle/ca_intermediate_root_bundle.pem node your_script.js
Other ways to use the generated PEM file are available at:
https://github.com/arvind-agarwal/node_extra_ca_certs_mozilla_bundle
NOTE: I am the author of the above module.
I had an issue with my Apache configuration after installing a GoDaddy certificate on a subdomain. I originally thought it might be an issue with Node not sending a Server Name Indicator (SNI), but that wasn't the case. Analyzing the subdomain's SSL certificate with https://www.ssllabs.com/ssltest/ returned the error Chain issues: Incomplete.
After adding the GoDaddy provided gd_bundle-g2-g1.crt file via the SSLCertificateChainFile Apache directive, Node was able to connect over HTTPS and the error went away.
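A minimal sketch of how that directive fits into an Apache virtual host (the paths and file names below are illustrative, not my exact config):
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile      /etc/apache2/ssl/subdomain.example.com.crt
    SSLCertificateKeyFile   /etc/apache2/ssl/subdomain.example.com.key
    SSLCertificateChainFile /etc/apache2/ssl/gd_bundle-g2-g1.crt
</VirtualHost>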
If you come to this thread because you're using the node postgres / pg module, there is a better solution than setting NODE_TLS_REJECT_UNAUTHORIZED or rejectUnauthorized, which will lead to insecure connections.
Instead, configure the "ssl" option to match the parameters for tls.connect:
{
  ca: fs.readFileSync('/path/to/server-ca.pem').toString(),
  cert: fs.readFileSync('/path/to/client-cert.pem').toString(),
  key: fs.readFileSync('/path/to/client-key.pem').toString(),
  servername: 'my-server-name' // e.g. my-project-id/my-sql-instance-id for Google SQL
}
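For example, a sketch of wiring that ssl object into a node-postgres Pool (the fs import and the host/user/database values are assumptions added for illustration):
const fs = require('fs');
const { Pool } = require('pg');

const pool = new Pool({
  host: 'my-db-host',     // placeholder
  user: 'my-db-user',     // placeholder
  database: 'my-db',      // placeholder
  ssl: {
    ca: fs.readFileSync('/path/to/server-ca.pem').toString(),
    cert: fs.readFileSync('/path/to/client-cert.pem').toString(),
    key: fs.readFileSync('/path/to/client-key.pem').toString(),
    servername: 'my-server-name'
  }
});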
I've written a module to help with parsing these options from environment variables like PGSSLROOTCERT, PGSSLCERT, and PGSSLKEY:
https://github.com/programmarchy/pg-ssl
Hello, just a small addition to this subject, since in my case
require('ssl-root-cas/latest')
.inject()
.addFile(__dirname + '/comodohigh-assurancesecureserverca.crt');
didn't work out for me; it kept returning an error that the file could not be downloaded. I was a couple of hours into researching this particular error when I ran into this response: https://stackoverflow.com/a/65442604
In my application we have a proxy for some of our requests (a security requirement of some of our users). I found that if you are consuming an API that has this issue and you can access the API URL through your browser, you can proxy your request, and it might fix the [Error: UNABLE_TO_VERIFY_LEAF_SIGNATURE] issue.
An example of how I use my proxy:
await axios.get(url, {
  timeout: TIME_OUT,
  headers: {
    'User-Agent': 'My app'
  },
  params: params,
  proxy: {
    protocol: _proxy.protocol,
    host: _proxy.hostname,
    port: _proxy.port,
    auth: {
      username: _proxy_username,
      password: _proxy_password
    }
  }
});
I had the same problem and was able to fix it in the following way: use the full-chain (or at least the chain) certificate instead of just the leaf certificate. That is all.
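For instance, on a Node HTTPS server that means handing the full chain to createServer; a minimal sketch with placeholder file names:
const https = require('https');
const fs = require('fs');

const server = https.createServer({
  key: fs.readFileSync('/etc/ssl/private/example.com.key'),
  // fullchain.pem = leaf certificate followed by the intermediate certificates
  cert: fs.readFileSync('/etc/ssl/certs/example.com.fullchain.pem')
}, (req, res) => {
  res.end('ok');
});

server.listen(443);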
This same error can be received when trying to install a local git shared repo from npm.
The error will read: npm ERR! code UNABLE_TO_VERIFY_LEAF_SIGNATURE
Apparently there is an issue with the certificate; however, what worked for me was changing the link to my shared repo in the package.json file from:
"shared-frontend": "https://myreposerver"
to:
"shared-frontend": "git+https://myreposerver"
In short, just adding git+ to the link solved it.
Another reason Node could print that error is that a backend connection/service is misconfigured.
Unfortunately, the Node error doesn't say which certificate it was unable to verify (feature request!).
Your server may have a perfectly good certificate chain installed for clients to connect to, and even show a nice padlock in the browser's URL bar, but when the server tries to connect to a backend database using a different, misconfigured certificate, it can raise an identical error.
I had this issue in some vendor code for some time. Changing a backend database connection from a self-signed certificate to an actual certificate resolved it.
You have to include the intermediate certificate on your server. This solves the [Error: UNABLE_TO_VERIFY_LEAF_SIGNATURE].
Related
I’ve been building a React app for a while now and have been testing responsiveness across multiple devices.
The React app itself works perfectly fine on my local machine. When accessing the React instance over the network, all HTTP requests fail because the app sends them to port 3000 instead of port 5000, which is where my Node.js server is running.
[1] Compiled successfully!
[1]
[1] You can now view client in the browser.
[1]
[1] Local: http://localhost:3000
[1] On Your Network: http://192.168.1.122:3000
[0] [nodemon] starting `node server.js`
[1] Compiled successfully!
[1] webpack compiled successfully
[0] Server is running on port 5000
[0] MongoDB Connected!
Example of a request in the React app
// Submit application to database
const storeAndSubmit = (formData) => {
  try {
    // eslint-disable-next-line
    const res = axios({
      method: 'post',
      headers: {
        'Content-Type': 'application/json',
      },
      url: 'http://localhost:5000/api/applications',
      data: formData,
    });
    dispatch({
      type: APPLICATION_SUCCESS,
      payload: formData.pageNumber,
    });
  } catch (err) {
    dispatch({
      type: APPLICATION_FAIL,
    });
  }
};
Because I don’t know what IP address React will choose when running my start script, I can’t just hard code the IP address into the request. Is there a React environment variable that can be accessed that has the current local network IP address after the start script has been run? If so, I can use that in my HTTP requests and I think that might work.
Error example over the network
xhr.js:210 POST http://192.168.1.122:3000/api/applications 404 (Not Found)
One thing you can do is proxy your requests by adding the following to your package.json file: "proxy": "http://localhost:5000"
Now, in your fetch or Axios calls, you can use the URL '/api/applications' instead of 'http://localhost:5000/api/applications'.
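For example, a sketch of the question's storeAndSubmit request with the proxy in place (only the URL changes; the relative path is resolved against whatever host and port served the page, and the dev server forwards it to localhost:5000):
const res = await axios.post('/api/applications', formData, {
  headers: { 'Content-Type': 'application/json' }
});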
You are having a networking issue, so let’s go over it in detail.
You have two processes running on your development machine:
a Node.js HTTP server serving the HTML file which loads the React app on localhost:3000. Let’s call this AppServerProcess
a Node.js HTTP server running a controller/driver for the database on localhost:5000. Let’s call this DbServerProcess
So what happens when you are requesting the web application from another device in the network?
The device will make an HTTP request to http://192.168.1.122:3000, where the AppServerProcess will handle the request and respond with the HTML content and the relevant scripts and assets, which will be loaded by the browser. The JavaScript code (the web application) contains the fetch code with a URI of http://localhost:5000, which the device will resolve to itself, where it will fail to find anything.
Now, the computer running both processes (DbServerProcess and AppServerProcess) has at least one IP address on the local network, which is 192.168.1.122. This means that if the DbServerProcess is running on localhost:5000, it should also be reachable at 192.168.1.122:5000, so the URI that should be hardcoded in fetch is http://192.168.1.122:5000/api/applications.
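A sketch based on the question's request, with only the URL changed:
const res = await axios({
  method: 'post',
  headers: { 'Content-Type': 'application/json' },
  // LAN address of the machine running DbServerProcess
  url: 'http://192.168.1.122:5000/api/applications',
  data: formData
});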
Note that this will also work when working locally, as the IP address will resolve to itself.
Also note that if the computer running both processes has DHCP configured, this IP address may change subject to that configuration. You seem to have a misconception of what is happening here: it's not React that chooses the address, and it's not even the AppServerProcess; it's the OS, which has at least one network interface with a local IP address assigned by the DHCP server running on the router. If you want this address to be static, that is an OS configuration (pretty straightforward on Windows, macOS, and Linux).
Look for "setting static IP address on {operating_system_name}".
I assume that if you specify the React port while starting your app, you will be able to solve this issue; correct me if I am wrong.
You can just do this on Linux:
PORT=3006 react-scripts start
Or this on Windows:
set PORT=3006 && react-scripts start
Check this answer.
Look into the cors npm package for your backend server, because that may be the reason it is not connecting.
Later, you could use a Cloudflare Tunnel for your web server and use that address to access it in the React app. The link can be provided to React as an environment variable; see how to set up React environment variables.
Firstly, test whether the backend API is working via Postman.
http://192.168.1.122:5000/ should work in your case.
After confirming the backend works correctly with Postman, try this code on your frontend:
const api = axios.create({
  baseURL: "http://192.168.1.122:5000/api/", // PLEASE CONFIRM IP.
  headers: {
    'Content-Type': 'application/json',
  },
});

const submitApi = async (formData) => api.post('/applications', formData);

const storeAndSubmit = async (formData) => {
  try {
    const { data } = await submitApi(formData);
    if (!!data) {
      dispatch({
        type: APPLICATION_SUCCESS,
        payload: formData.pageNumber,
      });
    } else {
      dispatch({
        type: APPLICATION_FAIL,
      });
    }
  } catch (e) {
    dispatch({
      type: APPLICATION_FAIL,
    });
  }
};
I think you should check the response in the browser on the device you want to connect to the app from.
With the 404 code, a potential reason may be that the device and your computer are not connected to the same Wi-Fi network.
You should not use a domain at all:
url: '/api/applications',
or
url: 'api/applications',
The former dictates that the API be served from the domain's root, and the latter requires the API to be served relative to the current page's path. In both cases, the scheme, domain, and port are inherited from the current page's URL.
Details are in RFC 2396.
This allows you to use your code without changes on any domain and any port, as the hosting architecture is not the frontend app's concern.
Make a .env file and maybe try this:
REACT_APP_API_ENDPOINT=http://192.168.1.blablabla
You can also do:
"scripts" : {
"start": "REACT_APP_API_ENDPOINT=http://localhost.whatever npm/yarn start"
"start-production": "REACT_APP_API_ENDPOINT=http://production.whatever npm/yarn start"
}
It was taken from How can I pass a custom server hostname using React?.
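A sketch of reading the variable in the app (Create React App only exposes variables prefixed with REACT_APP_, and they are baked in when the dev server or build starts):
const apiUrl = process.env.REACT_APP_API_ENDPOINT;

axios.post(`${apiUrl}/api/applications`, formData, {
  headers: { 'Content-Type': 'application/json' }
});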
In my Expo React Native app, I've been trying to fetch data from my Ruby on Rails API, but it seems that HTTP connections are not allowed.
All the solutions I've found add configuration to AndroidManifest.xml on Android or Info.plist on iOS, which I can't access, and I'd rather keep developing in Expo instead of ejecting. I've tried changing localhost to 127.0.0.1, 10.0.2.2, and my machine's IP as well, but none worked. I've looked at the documentation for configuration with app.json and networking, but I can't find anything about configuring App Transport Security within Expo. I'm currently running Expo on Android 9.0 Pie.
fetch(`http://localhost:3000/api/v1/trainers`, {
  headers: {
    Accept: 'application/json',
    "Content-Type": "application/json"
  },
}).then(res => res.json())
  .catch(err => console.log(err));
TypeError: Network request failed
at XMLHttpRequest.xhr.onerror (420454d9-7271-44d5-b…-3a968d130699:48065)
at XMLHttpRequest.dispatchEvent (420454d9-7271-44d5-b…-3a968d130699:53513)
at XMLHttpRequest.setReadyState (420454d9-7271-44d5-b…-3a968d130699:52368)
at XMLHttpRequest.__didCompleteResponse (420454d9-7271-44d5-b…-3a968d130699:52195)
at 420454d9-7271-44d5-b…-3a968d130699:52305
at RCTDeviceEventEmitter.emit (420454d9-7271-44d5-b…-3a968d130699:22482)
at MessageQueue.__callFunction (420454d9-7271-44d5-b…-3a968d130699:22097)
at 420454d9-7271-44d5-b…-3a968d130699:21854
at MessageQueue.__guard (420454d9-7271-44d5-b…-3a968d130699:22051)
at MessageQueue.callFunctionReturnFlushedQueue (420454d9-7271-44d5-b…-3a968d130699:21853)
What domain is your REST API on?
I was having the same problem fetching from a React Native app (running on an emulator) to a Node.js/Express REST API. I couldn't find a workaround within React.
I found a solution with ngrok (https://ngrok.com). It allows you to expose a specific local domain:port to the outside. With this method I was able to fetch from my app to my REST api.
To be clear, I'm not sure the problem is on React-Native's end. I suspect it was my firewall not allowing the connection to go through. Instead of messing with all of those settings, I simply used ngrok.
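For example, a sketch of the question's fetch once ngrok is forwarding to the Rails server; the subdomain below is a placeholder for whatever forwarding URL ngrok http 3000 prints:
fetch('https://abc123.ngrok.io/api/v1/trainers', {
  headers: {
    Accept: 'application/json',
    'Content-Type': 'application/json'
  }
}).then(res => res.json())
  .catch(err => console.log(err));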
I have created a React app with the create-react-app CLI. Now I use an external library to do some requests (I pass the library my backendUrl and it makes the request).
localhost:3000 My ReactJS App (Webpack)
localhost:8081 My Backend Server
Now this leads to an error saying the Access-Control-Allow-Origin header was not sent.
So I looked into how I can activate this with webpack. What I have done:
Ejected the webpack config to get access to the dev server properties with:
npm run eject
Added the following part in webpack.config.js:
devServer: {
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers":
      "Origin, X-Requested-With, Content-Type, Accept"
  }
}
--> Did nothing
I then tried to use the proxy mechanism:
"proxy": {
"/api": {
"target": "<put ip address>",
"changeOrigin" : true
}
}
But since I use an external library which doesn't use fetch and uses the whole URL to make API calls, this also doesn't work, since from my understanding the proxy only handles relative requests like fetch("api/items"), for example.
I am a bit confused, since I can't find anything online. Maybe I put the things above in the wrong configuration file or line?
There is also a webpackDevServer.config.js, but I can't find anything online about it, and as soon as I add something to it, it produces errors.
CORS needs to be turned on server-side, not client-side. You can download Chrome extensions to bypass CORS, but those are for development purposes only.
The reason you're getting CORS errors is that you're jumping from 3000 to 8081, which are two different origins. There are multiple ways to enable CORS depending on what you're using on the backend.
In some cases this is in fact just a development issue, as at runtime it's all running on the same origin. Most stacks give you the ability to proxy calls into place. For example, here is how .NET SPA Services / Angular 7 do it:
Angular running on :4200
.NET running on :5000
spa.UseProxyToSpaDevelopmentServer("http://localhost:4200");
https://learn.microsoft.com/en-us/aspnet/core/client-side/spa-services?view=aspnetcore-2.2
I am not sure what you're using on the backend, so here is a general article if you're not using .NET
https://enable-cors.org/server.html
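If your backend happens to be Node/Express, here is a minimal sketch using the cors middleware (npm install cors); the allowed origin and the example route are assumptions for local development:
const express = require('express');
const cors = require('cors');

const app = express();

// allow the CRA dev server origin during development
app.use(cors({ origin: 'http://localhost:3000' }));

app.get('/api/items', (req, res) => {
  res.json([{ id: 1 }]);
});

app.listen(8081);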
I have a really strange problem. I have a Sails app running on my local machine and want to access it from another computer on the same network, but I get a timeout.
Not that I think it should be necessary, but I even configured CORS in config/cors.js:
allRoutes: true,
origin: '*',
credentials: true,
methods: 'GET, POST, PUT, DELETE, OPTIONS, HEAD',
headers: 'content-type'
My project is pretty empty; I just created a few models/controllers and that's it.
I didn't find anything about this problem in the manual, so I guess I must have done something wrong during project setup. Does anyone have an idea how to fix this?
Edit: just to make sure I didn't do something completely wrong, I created a new project:
sails new foo
cd foo
sails lift
-> http://192.168.1.12:1337
It works from localhost but gets a timeout from any other host on the network.
I'm taking for granted that you already tried to ping your host machine from the other machine and successfully got a response.
If so, I suggest the ngrok package to give other users access to your local machine. ngrok exposes your localhost to the web, and it has an npm wrapper that is simple to install and start:
$ npm install ngrok -g
$ ngrok http 1337
The last command will print in the console the new address that points to localhost:1337
Edit: by the way, I'm pretty sure your problem is about network configuration and not Sails configuration.
I'm trying to run socket.io and I'm getting a bunch of these:
http://domain.com:8080/socket.io/?EIO=2&transport=polling&t=1401421022966-0 400 (Bad Request)
This is the response I'm getting:
{"code":0,"message":"Transport unknown"}
I can't find any reason. I read somewhere that it might be misinterpreting the client, but that's about as far as I could get.
I had the same issue after upgrading from 0.9.x to 1.x.x. To cut a long story short, I had set transports to ['websocket', 'polling'] on the server only, and then got the error.
When you configure your server to use specific transports, you should set the same config on the client side too:
server
var io = require('socket.io')(server, {'transports': ['websocket', 'polling']});
client
var socket = io(serverUri, {'transports': ['websocket', 'polling']});
I had the same issue after upgrading from 0.9.x. It turns out my server config was set to ['websocket', 'jsonp-polling'], which was valid in 0.9, but the default config for both the client and server is now ['polling', 'websocket'].
Removing my server config got me up and running.
The config is now documented in engine.io (https://github.com/automattic/engine.io), the new transport layer introduced in 1.0 - in particular this line:
transports (String): transports to allow connections to (['polling', 'websocket'])
I had the same issue. Getting the latest socket.io client script and using that file on the client side solved the problem for me.
This happened to me when I served the socket.io.js script myself.
I had to go copy node_modules/socket.io/node_modules/socket.io-client/socket.io.js to where I was serving it up.
Try this configuration on server side
const io = require('socket.io')(server, {
  cors: {
    origin: "http://localhost:8100",
    methods: ["GET", "POST"],
    transports: ['websocket', 'polling'],
    credentials: true
  },
  allowEIO3: true
});
My solution was to upgrade Node.js to the latest version (0.12.0 at the time of this post). Originally, Node.js was installed as part of a bundle. Once I uninstalled the Node.js that came from that bundle (the Aptana 3 bundle; its Node.js was somewhat behind) and installed the latest from the Node.js website, things finally started working.
I was experimenting with React.js. I spent several hours debugging this phenomenon and found build errors in socket.io, specifically in socket.io-client, which tried to invoke the Visual Studio MSBuild unsuccessfully. Sadly, the error occurred with node-gyp too. Apparently socket.io-client is not needed to run/serve my examples, and it seems these unfortunate errors (which lured me into an endless forest) can be ignored.
(I also noticed, while installing webpack-dev-server, a module which is Darwin-only (a.k.a. macOS). Fortunately, that's an optional dependency. It's frightening, though: I know that Apple is very hip, but the majority of the world is not on a Mac.)
I fixed it by changing my server.js.
The io instance is initialized as follows:
const io = socket.listen(httpServer, { serveClient: true })
Before, I had { serveClient: false }, because otherwise I was getting an error. But actually, if you want your client to take io from the Node instance, you have to serve it.
UPDATE: in the end you simply want const io = socket.listen(httpServer)
That way serveClient is true by default.