get IP address of node app running in docker container - javascript

I have a Node Express app running in a Docker container, and in the app I'm trying to log the IP address of each incoming request. Since everything runs behind a firewall I used something like "req.headers['x-forwarded-for'] || req.connection.remoteAddress", but it logs the same IP address every time regardless of where the request comes from, i.e. I always see the same IP even when requests are made from different IPs.
Is there an elegant way to log the client IP address from a Node app running in a Docker container? Would using this package help: https://www.npmjs.com/package/ip
If not, please suggest a way to capture the IP address.

I know this is old, but this might still help.
What worked for me was a combination of my Express container and NGINX running as a reverse proxy on the host machine.
For the NGINX, I added the following settings:
proxy_pass http://localhost:9461; # <-- put the exposed port of the Docker Express container
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
With the above in place, once I access my Express container (via my specified NGINX server_name), the IP of the request is available in req.headers['x-forwarded-for']:
console.log("IP of this request is:", req.headers['x-forwarded-for']);
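If you are on Express, a related option is to let Express resolve the client address for you. A minimal sketch, assuming Express 4+ and the nginx headers shown above (port 9461 is just the exposed container port from this example):
const express = require('express');
const app = express();

// Tell Express it sits behind a reverse proxy, so req.ip is derived from
// the X-Forwarded-For header that nginx sets.
app.set('trust proxy', true);

app.get('/', (req, res) => {
  console.log('IP of this request is:', req.ip);
  res.send(req.ip);
});

app.listen(9461);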

const { exec } = require('child_process');
let hostname = 'localhost';
// Read the container's default gateway; inside Docker this is normally
// the bridge/host address, not the address of the remote client.
exec("ip route | awk '/default/ {print $3}'", (error, stdout) => {
  if (error) {
    console.error('cannot get outer ip address for wrapper service', error);
  } else {
    hostname = stdout.replace(/\n/, '');
  }
});
After that, the hostname variable contains the host IP. Note that exec is asynchronous, so the value is only set once the callback has run.
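Because of that asynchrony, a promise-based variant can be easier to work with. This is just a sketch with a hypothetical helper name, using the same ip route trick as above:
const { promisify } = require('util');
const exec = promisify(require('child_process').exec);

// Hypothetical helper: returns the container's default gateway, which is
// normally the Docker bridge/host address (not the client's IP).
async function getDockerHostIp() {
  const { stdout } = await exec("ip route | awk '/default/ {print $3}'");
  return stdout.trim();
}

getDockerHostIp()
  .then(ip => console.log('host IP:', ip))
  .catch(err => console.error('cannot get outer ip address', err));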

Related

Client unable to hit socket.io server

The project is hosted on DigitalOcean.
On the client side, it's throwing a 404 error:
GET http://134.209.147.204/socket.io/?EIO=3&transport=polling&t=NKKWF-X //404
Here is the nginx config file:
server {
    listen 80;
    root /var/www/html;

    location / {
        proxy_pass http://127.0.0.1:5000; # where the frontend is running
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /socket/ {
        proxy_pass http://localhost:3001; # where the socket.io server is running
    }
}
Frontend
socket = io('/socket/')
Both the frontend and the backend run without any errors and can be accessed from the browser.
After days of hacking, I was able to make it work!
nginx config
upstream websocket {
    server 127.0.0.1:3001;
}

server {
    listen 80;
    root /var/www/html;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /ws/ {
        proxy_pass http://websocket/socket.io/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
socket.io server
const app = require('express')();
const server = app.listen(3001);
const io = require('socket.io').listen(server);
socket.io client
socket = io.connect('http://yourdomain/', {path: '/ws/'})
I faced the same problem when trying to connect a socket.io application to a Node.js server that sits behind an Apache2 server; I access the Node.js server under /video/. I read this answer and didn't get it. Dummy me! But just in case I'm not alone, I'll attempt to clarify it further here.
I ended up having to follow the socket.io code to understand what the documentation means. I'm such a dummy. The documentation says:
A new Socket instance is returned for the namespace specified by the pathname in the URL, defaulting to /. For example, if the url is http://localhost/users, a transport connection will be established to http://localhost and a Socket.IO connection will be established to /users.
After following the code, the meaning (in my dummy's mind) became clear. The Socket.IO connection is made to the namespace given by the string I pass, e.g. socket = io("url/video/"), but the transport connection is attempted against just the "url" part. To change the transport connection path, you need to set the path option described in the documentation for the Manager class; the Manager options are the same as the options accepted by io().
Here is a link to the pertinent documentation; you need to read the io and the Manager headings.
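To make the distinction concrete, here is a minimal sketch with socket.io-client (the domain and the /video/ proxy prefix are placeholders from my setup, not values from the question):
const io = require('socket.io-client');

// The URL's pathname picks the *namespace*; the `path` option picks the
// HTTP endpoint used for the transport connection (default: /socket.io).
// Behind a proxy that maps /video/ to the Node server, the transport has
// to go through /video/socket.io:
const socket = io('http://example.com/', { path: '/video/socket.io' });

// By contrast, this joins the /users namespace but still uses the
// default transport path /socket.io:
const usersSocket = io('http://example.com/users');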

Nginx only serving the first 72 KB of my JS files

Nginx appears to be serving only the first 72 KB of my JavaScript files. I've searched all around my nginx config files and cannot find any setting that would cause this. I've added things like:
location / {
    ...
    proxy_max_temp_file_size 1m;
    ...
}
and
location / {
    ...
    sendfile on;
    sendfile_max_chunk 1m;
    ...
}
But I'm still unable to override whatever setting is allowing only the first part of each file to load.
The connection uses nginx proxy_pass to forward port 80 to Kibana's port 5601. I suspect there is a setting that limits file transfer over the proxy, I'm just not sure where to find it.
The proxy_pass configuration looks like this:
server {
    listen 80;
    server_name logs.mydomain.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
And my default nginx settings are posted here:
http://www.heypasteit.com/clip/OEKIR
It would appear that when disk space runs out, nginx ends up serving only part of each proxied file.
When I checked the VM's health I noticed that the disk drive was full. Elasticsearch was logging a couple of GB worth of text into its error logs each day; I still have not fully identified why Elasticsearch is flooding the error logs.
But I think the excessive disk usage contributed to this error: with no disk space left, nginx cannot spill the proxied response into a temporary file, so only what fits in its in-memory proxy buffers (roughly 72 KB with the default proxy_buffer_size and proxy_buffers) is served per file.
Once I cleared out the excessive logs, nginx began serving the full JS files again, without needing to restart nginx or Kibana.

Proxy a WS (web socket) connection?

I have a connection to a WebSocket, but it is only reachable from the intranet.
It looks like this:
socket = new WebSocket("ws://xx.xxx.6.98:8200/demo/");
Is it possible to proxy it from the server side so it becomes accessible from the outside world?
Something like:
socket = new WebSocket("ws://mysite.com/getWebSocket/demo/");
With nginx you can forward the traffic to your destination server. Here is an example of how to proxy a Socket.IO WebSocket:
location /socket.io/ {
    proxy_pass http://xx.xxx.6.98:8200;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
Useful link: https://www.nginx.com/blog/websocket-nginx/
Sure. What I do with SignalR is run it on a server and expose it to the internet with ProxyPass on Apache.
So what you need is a reverse proxy. You can also do it with nginx, but this is how it looks with Apache (ProxyPass to a ws:// backend requires mod_proxy_wstunnel to be enabled):
ProxyPass /ws ws://xx.xxx.6.98:8200/demo/
With this on a domain or server IP, when you go to ipaddressordomain.com/ws you actually reach ws://xx.xxx.6.98:8200/demo/
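If you would rather keep everything in Node instead of nginx/Apache, the third-party http-proxy package can forward WebSocket upgrades as well. A rough sketch (the target is the intranet address from the question):
const http = require('http');
const httpProxy = require('http-proxy');

// ws: true makes the proxy handle WebSocket traffic.
const proxy = httpProxy.createProxyServer({
  target: 'ws://xx.xxx.6.98:8200',
  ws: true
});

const server = http.createServer((req, res) => proxy.web(req, res));

// Forward the HTTP Upgrade handshake so WebSocket connections pass through.
server.on('upgrade', (req, socket, head) => proxy.ws(req, socket, head));

server.listen(80);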

http to a Node server over https nginx website

Is it possible to have an HTTP connection to a Node.js server on a website that is otherwise secured by HTTPS?
How do you suggest combining a Node connection with a website that operates over HTTPS?
As already mentioned in the comments, it is useful to create a proxy to your Node.js application.
A good tutorial is, for instance, the one from DigitalOcean: https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-14-04
The tutorial shows that the host configuration can look like this:
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://APP_PRIVATE_IP_ADDRESS:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
In this case, a reverse proxy to port 8080 is created. Generally, this method can be used for any Node.js application.
In your case: add the location block to your https server block and adjust the route.
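For completeness, the Node side stays plain HTTP; nginx terminates TLS and proxies to it. A minimal sketch (port 8080 matches the proxy_pass line above):
const http = require('http');

http.createServer((req, res) => {
  // The browser talks https to nginx; nginx talks plain http to this server.
  res.end('hello from the Node app behind the https proxy');
}).listen(8080);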
Support for https was just added to node-http-server; it can spin up both http and https servers at the same time. It also ships some test certs you can use and a cert-generation guide for self-signing.
Note: if you want to listen on ports 80 and 443 you'll need to run with sudo.
An example of spinning up the same code on both http and https:
var server = require('node-http-server');

server.deploy({
    port: 8000,
    root: '~/myApp/',
    https: {
        privateKey: `/path/to/your/certs/private/server.key`,
        certificate: `/path/to/your/certs/server.pub`,
        port: 4433
    }
});
It's super simple and has pretty good documentation, plus several examples in the examples folder.

Meteor WebSocket connection to 'ws://.../websocket' failed: Error during WebSocket handshake: Unexpected response code: 400

I am brand new to things like Meteor.js and was wondering about this error. I started the test project (with the button-click counter) and it works, but then I open the browser console and see:
WebSocket connection to 'ws://shibe.ninja/sockjs/243/5gtde_n9/websocket' failed: Error during WebSocket handshake: Unexpected response code: 400
I don't know how to fix it.
Thanks
Maybe a little late, but in case you're still stuck on this: I got the same issue when deploying the app and using nginx as a proxy. Adding the WebSocket upgrade headers to the proxied location fixed it:
location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}
Check also the nginx docs here: http://nginx.com/blog/websocket-nginx/
I bumped into this problem myself, but I already had my proxy headers set correctly and it was still not working. Apparently Cloudflare was causing issues. Here is a great article on the subject: https://meteorhacks.com/cloudflare-meets-meteor
As far as I've found, there are three solutions:
Option 1: Use CloudFlare enterprise, which supports sockets.
Option 2: Disable Meteor WebSockets, which will affect your performance as it falls back to using SockJS as a replacement. To do this, just set your Meteor environment like this:
export DISABLE_WEBSOCKETS=1
Option 3: In Cloudflare, create a ddp subdomain for the websocket (ddp.yourdomain.com), then disable Cloudflare on the new subdomain. After that set your meteor environment like this:
export DDP_DEFAULT_CONNECTION_URL=http://ddp.example.com
After this my nginx config needed some adjustments, as this has now become a cross-origin (CORS) setup. This is my new nginx config:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80 proxy_protocol;
    listen [::]:80 proxy_protocol;
    server_name mydomain.com ddp.mydomain.com;

    ## This allows the CORS setup to work
    add_header Access-Control-Allow-Origin 'http://example.com';

    ## This hides the CORS header sent back by the Meteor server
    ## (without this the header is added twice, not sure why?)
    proxy_hide_header Access-Control-Allow-Origin;

    ## Ideally the two options above should be disabled and this one
    ## used instead, but that caused issues in my setup.
    # proxy_set_header Access-Control-Allow-Origin 'http://example.com';

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;                      # pass the host header
        proxy_set_header Upgrade $http_upgrade;           # allow websockets
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Real-IP $remote_addr;          # preserve client IP
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_http_version 1.1;

        # Meteor browser cache settings (the root path should not be cached!)
        if ($uri != '/') {
            expires 30d;
        }
    }
}
Finally, remember to restart nginx.
