Socket.io not disconnecting through Cloudflare/Nginx - javascript

I have a small web app served with Express.js that connects to the backend over Socket.IO. To make it public, I'm using Nginx as a reverse proxy with Cloudflare at the very front. The app relies on the disconnect event firing when the tab is closed or reloaded to keep track of online users, among other things. When going through Nginx and Cloudflare, the disconnect event never fires on the backend. It does when developing locally.
Here's my Nginx config file:
server {
    listen 80;
    server_name colab.gq;
    server_name www.colab.gq;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header X-Forwarded-For $remote_addr;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/colab.gq/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/colab.gq/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
And here's a snippet of my server-side code:
io.on('connection', (socket) => {
    // Stuff here

    socket.on('disconnect', () => {
        console.log('disconnected!')
        // Stuff that relies on the disconnect event
    })
})
When the user closes the tab or reloads the page the disconnect event should fire, but it never does when passing the connection through Nginx and Cloudflare. Thanks in advance for any help!
UPDATE: It seems that a few seconds after reloading/closing, the disconnect event finally registers.

You have to add the WebSocket upgrade headers to your Socket.IO path in the Nginx configuration, like this:
location ~* \.io {
    # ... your configuration
    proxy_pass http://localhost:3000;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
Since you asked for an extension of the answer: the first thing, which you may already know, is that Socket.IO uses the WebSocket protocol under the hood. By convention, both WebSocket and HTTP traffic are served on the same port, 80 or 443. The default protocol is HTTP; if a client wants to speak the WebSocket protocol, it has to send an Upgrade request switching from HTTP to WS, which involves a key-based handshake and a few extra steps.
That's why you need those headers in your Nginx configuration.
Refer to this if you need more information about the protocol upgrade mechanism.
Even though, in my opinion, this is not an exact duplicate of this question, I feel obligated to give credit to @Paulo for providing the perfect answer, even though it is not accepted.

What is probably happening is two things.
1st: Socket.IO has a long timeout (hence the several seconds) and tries to reconnect before declaring the client disconnected. Check for a reconnect_attempt or reconnecting event (note: I have not used Socket.IO in a while, so I could be wrong on the timing for this).
2nd: You are not telling Socket.IO that it is disconnecting when the browser window closes.
I would recommend adding an event listener to your JavaScript that tells your server the client is disconnecting on window close.
window.addEventListener('beforeunload', () => {
    socket.disconnect();
});

Related

NGINX : upstream timed out (110: Connection timed out) but not when URL queried directly

I'm using Ubuntu 20.04 to host a number of websites, with Nginx as a gateway server in front of my Apache webserver.
The problem I'm facing is that my website won't load one of its components, which is fetched by the JavaScript it loads (via AJAX from jQuery). It's a simple list of background URLs as JSON data that should normally take under a second to load. And it does, when queried directly in the browser. But it won't load at all when requested by the HTML + JavaScript of the website itself. :(
/etc/nginx/sites-enabled/00-default-ssl.conf :
# HTTPS iRedMail
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name mail.mydomain.com;
    root /var/www/html;
    index index.php index.html;

    include /etc/nginx/templates/misc.tmpl;
    include /etc/nginx/templates/ssl.tmpl;
    include /etc/nginx/templates/iredadmin.tmpl;
    include /etc/nginx/templates/roundcube.tmpl;
    include /etc/nginx/templates/sogo.tmpl;
    include /etc/nginx/templates/netdata.tmpl;
    include /etc/nginx/templates/php-catchall.tmpl;
    include /etc/nginx/templates/stub_status.tmpl;
}

# HTTPS my own server
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name mydomain.com;
    #root /home/myrealname/data1/htdocs/nicer.app;

    ssl_certificate /home/myrealname/data1/certificates/other-ssl/all.crt;
    ssl_certificate_key /home/myrealname/data1/certificates/other-ssl/mydomain.com.key;
    ssl on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_ciphers 'kEECDH+ECDSA+AES128 kEECDH+ECDSA+AES256 kEECDH+AES128 kEECDH+AES256 kEDH+AES128 kEDH+AES256 DES-CBC3-SHA +SHA !aNULL !eNULL !LOW !kECDH !DSS !MD5 !RC4 !EXP !PSK !SRP !CAMELLIA !SEED';
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/dhparam.pem;

    location / {
        proxy_pass https://192.168.178.55:444/;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
        proxy_connect_timeout 159s;
        proxy_send_timeout 60;
        proxy_read_timeout 60;
        send_timeout 60;
        resolver_timeout 60;
    }
}
/etc/apache2/sites-enabled/00-default-ssl.conf is your typical SSL-enabled Apache config, serving a VirtualHost on *:444.
UPDATE: the problem appears to be independent of running php7.4-fpm on either unix sockets or a TCP port, or plain apache2 with SSL and PHP enabled.
UPDATE2: the problem is caused by multiple AJAX requests coming from the JavaScript at the same time, something which is largely beyond my control to fix.
Please check whether you can load the direct URL while the website is loading.
The data you request, is it filtered by PHP?
If so, which PHP and Apache are you using: plain vanilla, or php-fpm?
One of the things my webpage does is load up JSON data from a CouchDB server, which I necessarily serve with SSL.
In all the confusion, I had forgotten to re-enable the Nginx forward to CouchDB over http (not https). Simply copying its config file from /etc/nginx/sites-enabled.bak/couchdb to /etc/nginx/sites-enabled/couchdb.conf fixed all my problems :)

Error on Let's encrypt auto renewal (Nginx)

I am trying to set up greenlock-express to run behind nginx proxy.
Here is my nginx config
...
# redirect
server {
    listen 80;
    listen [::]:80;
    server_name mydomain.com;

    location / {
        return 301 https://$server_name$request_uri;
    }
}

# serve
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name mydomain.com;

    # SSL settings
    ssl on;
    ssl_certificate C:/path/to/mydomain.com/fullchain.pem;
    ssl_certificate_key C:/path/to/mydomain.com/privkey.pem;

    # enable session resumption to improve https performance
    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    # enables server-side protection from BEAST attacks
    ssl_prefer_server_ciphers on;
    # disable SSLv3 (enabled by default since nginx 0.8.19) since it's less secure than TLS
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    # ciphers chosen for forward secrecy and compatibility
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';

    # enable OCSP stapling (mechanism by which a site can convey certificate revocation information to visitors in a privacy-preserving, scalable manner)
    resolver 8.8.8.8 8.8.4.4;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate C:/path/to/mydomain.com/chain.pem;

    # config to enable HSTS (HTTP Strict Transport Security) https://developer.mozilla.org/en-US/docs/Security/HTTP_Strict_Transport_Security
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";

    # added to make handshake take less resources
    keepalive_timeout 70;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass https://127.0.0.1:3001/;
        proxy_redirect off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
I have a node server running on port 3000 (http) and port 3001 (https). Everything else seems to be working, but the certificates are not renewed and expire after 3 months.
If I close nginx and run the node server on port 80 (http) and port 443 (https), then it renews the certs.
I made sure that .well-known/acme-challenge is forwarded to the node server, i.e. when I go to the url http(s)://mydomain.com/.well-known/acme-challenge/randomstr I get the following response:
{
    "error": {
        "message": "Error: These aren't the tokens you're looking for. Move along."
    }
}
The easy way is to separate out the webroot used for ACME authentication.
Create a webroot directory for ACME authentication:
C:\www\letsencrypt\.well-known
In the nginx configuration, point the webroot for ACME authentication at the directory you just created:
http://example.com/.well-known/acme-challenge/token -> C:/www/letsencrypt/.well-known/acme-challenge/token
server {
    listen 80;
    listen [::]:80;
    server_name mydomain.com;

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root C:/www/letsencrypt;
    }

    location / {
        return 301 https://$server_name$request_uri;
    }
}
Restart nginx.
You can change the webroot in certbot to run the authentication again:
certbot certonly --webroot -w C:\www\letsencrypt\ -d example.com --dry-run
First, test it with the --dry-run option added. Otherwise, you may hit the rate limit on failed authentication attempts.
The error you are seeing means that when a token is placed in your
webroot/.well-known/acme-challenge/token
Let's Encrypt then tries to verify it from the internet. Going to http://yourdomain/.well-known/acme-challenge/token, it gets a 404 error, page not found. Exactly why it gets a 404 I can't be certain. If you place a file there yourself, is it reachable from the internet?
If you are wondering, there are a couple of automatic ways to renew your certificates without restarting nginx. The one most nginx users seem to prefer is the webroot plugin: first, obtain a new cert using something like:
certbot certonly --webroot -w /path/to/your/webroot -d example.com --post-hook="service nginx reload"
Then set up a cron job to run certbot renew once or twice a day; it will only run the post-hook when it actually renews the certificate. You can also use the --pre-hook flag if you prefer to stop nginx and run certbot in standalone mode.
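The cron entry might look something like this (the schedule and hook command are illustrative assumptions; adapt them to your setup):

```
# /etc/crontab — attempt renewal twice a day; certbot only renews
# certificates that are close to expiry, and the post-hook reloads
# nginx only when a renewal actually happened.
0 3,15 * * * root certbot renew --post-hook "service nginx reload"
```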
There's also a full nginx plugin, which you can activate with --nginx. It's still being tested, so experiment at your own risk and report any bugs.
Note:
The --post-hook flag will take care of reloading nginx upon renewal of your certs.

TLS Websocket forwarding not working as expected

I have a working websocket which can be reached from my clientside code like this (not using Socket.IO or any library, just plain HTML5 WebSockets):
ws://localhost:9000/socket/connect
The Websocket is implemented in Java Play Framework
public WebSocket connect() {
    return WebSocket.Text.accept(request ->
        ActorFlow.actorRef(out -> SocketActor.props(out, jsonWebToken),
            actorSystem, materializer
        )
    );
}
Anyway, this is working fine. However, I now want to set up https for the website and am doing this with nginx. Switching to https, I also need to use the wss protocol instead of ws, so I want to proxy wss calls through nginx as well, and here I am facing issues.
I have configured my nginx websocket proxy as described here: https://www.nginx.com/blog/websocket-nginx/ My complete config looks like this:
upstream backend {
    server backend:9000;
}

server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name legato;

    ssl_certificate /etc/nginx/ssl/legato.crt;
    ssl_certificate_key /etc/nginx/ssl/legato.key;

    location /api {
        return 302 /api/;
    }

    location /api/ {
        proxy_pass http://backend/;
    }

    location /statics {
        root /var/www;
    }

    location /socket/connect {
        proxy_pass http://backend/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    location / {
        root /var/www/frontend;
        index index.html;
        try_files $uri /index.html;
    }
}
so the /socket/connect route is forwarded to the websocket backend route.
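One thing worth double-checking (an observation based on the nginx.com article referenced above, not a confirmed diagnosis for this question): $connection_upgrade is not a built-in nginx variable. The article defines it with a map block at http level; if that map is missing, the Connection header sent upstream is empty and the handshake can fail:

```
# At http level (from the nginx.com WebSocket proxying article):
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```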
From my clientside javascript code I am opening the websocket now like this
socket = new WebSocket('wss://192.168.155.177/socket/connect');
which gives me the error
WebSocket connection to 'wss://192.168.155.177/socket/connect' failed: Error during WebSocket handshake: Unexpected response code: 200
Can somebody explain to me what is wrong in this case?
I found a little workaround for this issue; I'm going to leave it here for anybody in the future.
So I was trying to:
Implement SSL termination for the WebSockets in NGINX. This didn't work because NGINX would connect to the websocket via plain http again. Since I am not using socket.io, this didn't work, as the backend does not support it.
Implement SSL on the backend side. This was rather tricky for me. Switching to https in Java involves using keystores; furthermore, How to use TLS in Play!Framework WebSockets ("wss://") suggests that even if I could get https working, wss still might not work. So after a few failed attempts I ditched this.
The solution was stunnel, which allowed me to open up a port, encrypt any incoming traffic, and forward it to another port. So I am now SSL-encrypting traffic on port 9433 and forwarding it to the backend's unencrypted ws endpoint, which made it work.
If you want to use this in an actual production environment, you should do some research on the scalability of this method, especially stunnel.
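A minimal stunnel configuration for this setup might look like the following (the section name, certificate path, and backend address are assumptions based on the description above; only the ports 9433 and 9000 come from the answer):

```
; stunnel.conf — terminate TLS on 9433 and forward the decrypted
; stream to the plain ws endpoint on 9000
[wss]
accept = 9433
connect = 127.0.0.1:9000
cert = /etc/stunnel/legato.pem
```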

Nginx only serving first 72kbs of my JS files

Nginx appears to be serving only the first 72 KB of my JavaScript files. I've searched all around my nginx config files and cannot find this setting anywhere. I've added things like
location / {
    ...
    proxy_max_temp_file_size 1m;
    ...
}
and
location / {
    ...
    sendfile on;
    sendfile_max_chunk 1m;
    ...
}
But still I'm unable to override this weird setting that only allows the first part of the file to load.
The connection uses nginx proxy_pass to forward port 80 to Kibana's port 5601. I feel like there could be a setting that limits file transfer over the proxy? Just not sure where to find it.
The proxy_pass connection looks like:
server {
    listen 80;
    server_name logs.mydomain.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
And my default nginx settings are posted here:
http://www.heypasteit.com/clip/OEKIR
It would appear that when disk space is low, NGINX falls back to minimal safe settings.
When I checked the VM's health, I noticed that the disk drive was full. Elasticsearch was logging a couple of GBs worth of text into error logs each day. I still have not fully identified why Elasticsearch is flooding the error logs.
But I think the excessive disk space usage contributed to this error. Nginx could be detecting this and switching over to a minimal safe configuration which would only allow 72 KB of data to be served per file.
Once I cleared out the excessive logs, Nginx began serving the full JS files again, without needing to restart nginx or Kibana.
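One plausible mechanism behind this (an assumption, not something confirmed by the logs above): nginx buffers proxied responses to temporary files on disk once its in-memory proxy buffers fill up. With a full disk those writes fail, so only the first few tens of KB held in memory ever reach the client. Disabling buffering for the proxied location would sidestep that failure mode:

```
location / {
    proxy_pass http://localhost:5601;
    # Stream the upstream response directly instead of spooling it
    # to temp files on disk; avoids truncation when the disk is full.
    proxy_buffering off;
}
```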

http to a Node server over https nginx website

Is it possible to have a http connection to a node.js server over a website which is generally secured by https ?
how do you suggest to combine a node connection with a website that is operating on https .
As already mentioned in the comments, it is useful to create a proxy to your node.js applications.
A good tutorial is, for instance, from DigitalOcean: https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-14-04
In this tutorial, it shows that the host configuration can look like this:
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://APP_PRIVATE_IP_ADDRESS:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
In this case, a reverse proxy to the port 8080 was created. Generally, this method can be used for every node.js application.
In your case: add the location block in your https server block and modify the route.
Support for https was just added to node-http-server; it can spin up both http and https servers at the same time. It also has some test certs you can use and a cert-generation guide for self-signing.
Note: if you want to spin up on ports 80 and 443 you'll need to run with sudo.
An example for spinning up the same code on both http and https :
var server = require('node-http-server');

server.deploy({
    port: 8000,
    root: '~/myApp/',
    https: {
        privateKey: `/path/to/your/certs/private/server.key`,
        certificate: `/path/to/your/certs/server.pub`,
        port: 4433
    }
});
It's super simple and has pretty good documentation and several examples in the examples folder.