Error on Let's encrypt auto renewal (Nginx) - javascript

I am trying to set up greenlock-express to run behind an nginx proxy.
Here is my nginx config:
...
# redirect
server {
    listen 80;
    listen [::]:80;
    server_name mydomain.com;

    location / {
        return 301 https://$server_name$request_uri;
    }
}

# serve
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name mydomain.com;

    # SSL settings
    ssl on;
    ssl_certificate C:/path/to/mydomain.com/fullchain.pem;
    ssl_certificate_key C:/path/to/mydomain.com/privkey.pem;

    # enable session resumption to improve https performance
    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    # enables server-side protection from BEAST attacks
    ssl_prefer_server_ciphers on;
    # disable SSLv3 (enabled by default since nginx 0.8.19) since it's less secure than TLS
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    # ciphers chosen for forward secrecy and compatibility
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';

    # enable OCSP stapling (mechanism by which a site can convey certificate revocation information to visitors in a privacy-preserving, scalable manner)
    resolver 8.8.8.8 8.8.4.4;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate C:/path/to/mydomain.com/chain.pem;

    # config to enable HSTS (HTTP Strict Transport Security) https://developer.mozilla.org/en-US/docs/Security/HTTP_Strict_Transport_Security
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";

    # added to make the handshake take fewer resources
    keepalive_timeout 70;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass https://127.0.0.1:3001/;
        proxy_redirect off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
...
I have a node server running on port 3000 (http) and port 3001 (https). Everything else seems to work, but the certificates are never renewed, so they expire after 3 months.
If I stop nginx and run the node server directly on port 80 (http) and port 443 (https), the certificates do renew.
I made sure that .well-known/acme-challenge is forwarded to the node server, i.e. when I go to http(s)://mydomain.com/.well-known/acme-challenge/randomstr I get the following response:
{
    "error": {
        "message": "Error: These aren't the tokens you're looking for. Move along."
    }
}

The easy way is to separate out the webroot used for ACME authentication.
Create a webroot directory for ACME authentication:
C:\www\letsencrypt\.well-known
In the nginx configuration, point the ACME authentication location at the previously created directory, so that
http://example.com/.well-known/acme-challenge/token -> C:/www/letsencrypt/.well-known/acme-challenge/token
server {
    listen 80;
    listen [::]:80;
    server_name mydomain.com;

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root C:/www/letsencrypt;
    }

    location / {
        return 301 https://$server_name$request_uri;
    }
}
Restart nginx.
You can now point certbot at the new webroot to authenticate again:
certbot certonly --webroot -w C:\www\letsencrypt\ -d example.com --dry-run
First, test it with the --dry-run option; otherwise you may run into Let's Encrypt's limits on failed authentication attempts.

The error you are seeing means that when a token is placed in your
webroot/.well-known/acme-challenge/token
Let’s Encrypt then tries to verify it from the internet. Going to http://yourdomain/.well-known/acme-challenge/token, it gets a 404 error (page not found). Exactly why it gets a 404 I can’t be certain. If you place a file there yourself, is it reachable from the internet?
If you are wondering, there are a couple of automatic ways to renew your SSL certificates without restarting nginx. The one most nginx users seem to prefer is the webroot plugin: first, obtain a new cert using something like:
certbot certonly --webroot -w /path/to/your/webroot -d example.com --post-hook="service nginx reload"
Then set up a cron job to run certbot renew once or twice a day; it will only run the post-hook when it actually renews the certificate. You can also use the --pre-hook flag if you prefer to stop nginx and run certbot in standalone mode.
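For example, a crontab entry along these lines (the schedule and the reload command are illustrative; adjust both to your system):

```
# attempt renewal twice a day; certbot only renews certs close to expiry,
# and --post-hook runs only when a renewal actually happened
0 3,15 * * * certbot renew --quiet --post-hook "service nginx reload"
```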
There’s also a full nginx plugin, which you can activate with --nginx. It’s still being tested, so experiment at your own risk and report any bugs.
Note:
The --post-hook flag will take care of reloading nginx upon renewal of your certs.

Related

NGINX : upstream timed out (110: Connection timed out) but not when URL queried directly

I'm using Ubuntu 20.04 to host a number of websites, and I'm using nginx as a gateway server to my Apache webserver.
The problem I'm facing is that my website won't load one of the components that is loaded by its JavaScript (via AJAX from jQuery). It's the loading of a simple list of background URLs as JSON data, which should normally take under a second. And it does, when queried directly in the browser. But it won't load at all when loaded by the HTML + JavaScript of the website itself. :(
/etc/nginx/sites-enabled/00-default-ssl.conf :
# HTTPS iRedMail
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name mail.mydomain.com;
    root /var/www/html;
    index index.php index.html;

    include /etc/nginx/templates/misc.tmpl;
    include /etc/nginx/templates/ssl.tmpl;
    include /etc/nginx/templates/iredadmin.tmpl;
    include /etc/nginx/templates/roundcube.tmpl;
    include /etc/nginx/templates/sogo.tmpl;
    include /etc/nginx/templates/netdata.tmpl;
    include /etc/nginx/templates/php-catchall.tmpl;
    include /etc/nginx/templates/stub_status.tmpl;
}
# HTTPS my own server
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name mydomain.com;
    #root /home/myrealname/data1/htdocs/nicer.app;

    ssl_certificate /home/myrealname/data1/certificates/other-ssl/all.crt;
    ssl_certificate_key /home/myrealname/data1/certificates/other-ssl/mydomain.com.key;
    ssl on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_ciphers 'kEECDH+ECDSA+AES128 kEECDH+ECDSA+AES256 kEECDH+AES128 kEECDH+AES256 kEDH+AES128 kEDH+AES256 DES-CBC3-SHA +SHA !aNULL !eNULL !LOW !kECDH !DSS !MD5 !RC4 !EXP !PSK !SRP !CAMELLIA !SEED';
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/dhparam.pem;

    location / {
        proxy_pass https://192.168.178.55:444/;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
        proxy_connect_timeout 159s;
        proxy_send_timeout 60;
        proxy_read_timeout 60;
        send_timeout 60;
        resolver_timeout 60;
    }
}
/etc/apache2/sites-enabled/00-default-ssl.conf is your typical SSL-enabled Apache config, serving a VirtualHost on *:444.
UPDATE: the problem appears to be independent of running php7.4-fpm either on unix sockets or on a TCP port, or plain apache2 with SSL and PHP enabled.
UPDATE 2: the problem is caused by multiple AJAX requests coming from the javascript at the same time, something which is largely beyond my control to fix.
Please check if you can load the direct URL while the website loads.
The data you request, is it filtered by PHP?
If so, which PHP and Apache are you using? Plain vanilla, or php-fpm?
One of the things my webpage does is load JSON data from a CouchDB server, which I necessarily serve with SSL.
In all the confusion, I had forgotten to re-enable the nginx forward to CouchDB over http (not https). Simply copying its config file from /etc/nginx/sites-enabled.bak/couchdb to /etc/nginx/sites-enabled/couchdb.conf fixed all my problems :)

How to fix http 414 Request-URI Too Large error on nginx?

I want all traffic going to my web server to be redirected to https on Nginx. However, when I go to my website at http://www.example.com, I get the error "414 Request-URI Too Large" and the URL is ridiculously long -- http://www.example.com/http:://www.example.com/http:: -- and it goes on for a while, which I am assuming is what is giving me this error. But I don't know how to fix this redirection error because my config file for Nginx doesn't contain a $request_uri parameter.
Here's the Nginx config file:
server {
    server_name example.com;
    return 301 https://www.example.com;

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
server {
    root /var/www/html;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;
    server_name www.example.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_pass http://localhost:8080; # whatever port your app runs on
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
server {
    if ($host = www.example.com) {
        return 301 https://www.example.com;
    }
    if ($host = example.com) {
        return 301 https://www.example.com;
    }

    listen 80 default_server;
    server_name example.com www.example.com;
    return 404;
}
Any help would be greatly appreciated!
I fixed this issue with:
Open your nginx.conf file /etc/nginx/conf.d/nginx.conf
Add this code inside the http block:
    fastcgi_buffers 8 16k;
    fastcgi_buffer_size 32k;
    client_max_body_size 24M;
    client_body_buffer_size 128k;
    client_header_buffer_size 5120k;
    large_client_header_buffers 16 5120k;
Reload nginx with service nginx reload
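For context on what those numbers buy you (illustrative arithmetic only, not an endorsement of such large buffers): nginx returns 414 when the request line does not fit into a single large_client_header_buffers buffer, so it is the per-buffer size, not the buffer count, that bounds URI length.

```javascript
// large_client_header_buffers 16 5120k; => 16 buffers of 5120 KiB each.
// The request line (method + URI + protocol) must fit in ONE buffer,
// or nginx answers 414 Request-URI Too Large.
const buffers = 16;
const bufferBytes = 5120 * 1024;
console.log(bufferBytes);           // 5242880 - cap on a single request line
console.log(buffers * bufferBytes); // 83886080 - total capacity across headers
```

That said, enlarging the buffers only masks the symptom here; the runaway redirect that keeps appending to the URL is still worth fixing.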
I know this is an old question, but for anyone looking for a solution, clearing the cache and cookies from my browser was what worked.
Open Chrome; at the top right, click the three vertical dots icon.
Click More tools, then Clear browsing data.
At the top, choose a time range. (I deleted the last hour.)
Click Clear data.

Socket.io not disconnecting through Cloudflare/Nginx

I have a small web app that's served with Express.js and connects to the backend with Socket.io. To make it public, I'm using Nginx as a reverse proxy and then Cloudflare at the very front. The app relies on the disconnect event firing when the tab is closed or reloaded to keep track of online users, among other things. When going through Nginx and Cloudflare, the disconnect event never fires on the backend. It does when developing locally.
Here's my Nginx config file:
server {
    listen 80;
    server_name colab.gq;
    server_name www.colab.gq;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header X-Forwarded-For $remote_addr;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/colab.gq/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/colab.gq/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
And here's a snippet of my server-side code:
io.on('connection', (socket) => {
    // Stuff here
    socket.on('disconnect', () => {
        console.log('disconnected!')
        // Stuff that relies on the disconnect event
    })
})
When the user closes the tab or reloads the page the disconnect event should fire, but it never does when passing the connection through Nginx and Cloudflare. Thanks in advance for any help!
UPDATE: It seems that a few seconds after reloading/closing, the disconnect event finally registers.
You have to add the upgrade headers to your socket.io path in the nginx configuration, like this:
location ~* \.io {
    # ... your configuration
    proxy_pass http://localhost:3000;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
Since you asked for an extension of the answer: first, something you may already know is that socket.io uses the WebSocket protocol under the hood. By standard, both the WebSocket and HTTP protocols listen on the same port, 80 or 443. The default protocol is HTTP; if the client wants to use the WebSocket protocol, it has to send an upgrade request from HTTP to WS, which involves key authentication and a few steps.
That's why you need to put those headers in your nginx configuration.
Refer to this if you need more information about the protocol upgrade mechanism.
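For reference, the upgrade handshake looks roughly like this (header values taken from the WebSocket spec's canonical example; the /socket.io/ path and query string are what socket.io typically adds):

```
GET /socket.io/?transport=websocket HTTP/1.1
Host: example.com
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Upgrade and Connection are hop-by-hop headers, so nginx drops them unless the proxy_set_header lines explicitly pass them through; without them the upgrade never reaches node.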
Even though in my opinion this is not an exact duplicate of that question, I feel obligated to give credit to @Paulo for providing the perfect answer, even though it is not accepted.
What is probably happening is one of 2 things.
1st: Socket.io has a long timeout (hence the several seconds) and is looking to reconnect before declaring that it has been disconnected. Check for a reconnect_attempt or reconnecting event (note: I have not used socket.io in a while, so I could be wrong on the timing for this).
2nd: You are not telling socket.io that it is disconnecting upon closing the browser window.
I would recommend adding an event listener to your javascript that will tell your server it is being disconnected upon window close:
window.addEventListener('beforeunload', () => {
    socket.disconnect();
});
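If the delay itself matters more than the missed event, note that the lag is governed by Engine.IO's heartbeat: the server only declares a silent client gone after pingInterval + pingTimeout elapse without a pong. A sketch (pingInterval and pingTimeout are real socket.io server option names; the values here are illustrative, not required):

```javascript
// Heartbeat options control how long the server waits before deciding a
// silent client has disconnected. Values below are illustrative.
const heartbeatOptions = {
  pingInterval: 25000, // ms between server-initiated pings
  pingTimeout: 5000,   // ms to wait for the client's pong before dropping it
};

// Hypothetical wiring; requires the socket.io package and an http server:
// const io = require('socket.io')(httpServer, heartbeatOptions);

// Worst-case delay before the server-side 'disconnect' event fires:
console.log(heartbeatOptions.pingInterval + heartbeatOptions.pingTimeout); // 30000
```

Lowering these tightens disconnect detection at the cost of more heartbeat traffic.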

Nginx only serving first 72kbs of my JS files

Nginx appears to only be loading the first 72 KB of my JavaScript files. I've searched all around my nginx config files and cannot find this setting anywhere. I've added things like
location / {
    ...
    proxy_max_temp_file_size 1m;
    ...
}
and
location / {
    ...
    sendfile on;
    sendfile_max_chunk 1m;
    ...
}
But still I'm unable to override this weird setting that only allows the first part of the file to load.
The connection uses nginx proxy_pass to forward port 80 to Kibana's port 5601. I feel like there could be a setting that limits file transfer over the proxy? I'm just not sure where to find it.
The proxy_pass connection looks like:
server {
    listen 80;
    server_name logs.mydomain.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
And my default nginx settings is posted here:
http://www.heypasteit.com/clip/OEKIR
It would appear that when disk space is low, NGINX falls back to minimal safe settings.
When I checked the VM's health, I noticed that the disk drive was full. Elasticsearch was logging a couple of GBs worth of text into error logs each day. I still have not fully identified why Elasticsearch is flooding the error logs.
But I think excessive disk space usage contributed to this error: Nginx could be detecting it and switching over to a minimal safe configuration which would only allow 72 KB of data to be served per file.
Once I cleared out the excessive logs, Nginx began serving the full JS files again, without needing to restart nginx or Kibana.
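A possible mechanism behind the 72 KB cutoff, offered as a hypothesis rather than a confirmed diagnosis: nginx buffers proxied responses, and anything beyond its in-memory buffers is spooled to temporary files on disk. With the disk full, that spill fails, so only the buffered head of each response gets through. If that is what happened, these directives (a sketch; the path is illustrative) control the behavior:

```
location / {
    proxy_pass http://localhost:5601;
    # stream the response instead of spooling it to disk
    proxy_buffering off;
    # or keep buffering but forbid temp files entirely:
    # proxy_max_temp_file_size 0;
    # proxy_temp_path /var/cache/nginx/proxy_temp;
}
```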

only websocket and xhr working with socket.io and nginx proxy pass

I have a problem that might or might not be related to nginx proxy_pass.
socket.io runs on port 7022, but nginx manages all requests on port 80.
It works very, very well with websocket and at least half-well with xhr.
I never get flashsocket or htmlfile to work; maybe not jsonp either.
I say it may be related to proxying in nginx because I removed flashsocket months before the socket.io proxy alternative became available in nginx.
I often read on forums that socket.io simply works: "I never change anything on the server", "you should not need to... socket.io does it all for you... just leave it as it is."
My transport list is down to
io.set('transports', ['websocket', 'xhr-polling', 'jsonp-polling']);
while the below is the recommended:
io.set('transports', [
    'websocket'
    , 'flashsocket'
    , 'htmlfile'
    , 'xhr-polling'
    , 'jsonp-polling'
]);
This is my nginx proxy configuration:
location /socket.io {
    proxy_pass http://localhost:7022;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
My nginx server is also serving some static html, as well as php via a FastCGI server listening on 127.0.0.1:9000.
Additional nginx settings:
server {
    listen 80;
    #server blah blah
    client_max_body_size 100M;
    client_body_buffer_size 5M;

    location / {
        try_files $uri $uri/ #rewrites;

        # This block will catch static file requests, such as images, css, js
        location ~* \.(?:ico|css|gif|jpe?g|png)$ {
            # Some basic cache-control for static files to be sent to the browser
            expires max;
            add_header Pragma public;
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
            fastcgi_buffer_size 2M;
            fastcgi_intercept_errors off;
            fastcgi_buffers 4 2M;
            fastcgi_read_timeout 200;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;
        }

        # this prevents hidden files (beginning with a period) from being served
        location ~ /\. {
            deny all;
        }

        # opt-in to the future
        add_header "X-UA-Compatible" "IE=Edge,chrome=1";
    }
}
Anything obvious??
Anybody else with transport issues with socket.io? Maybe in combination with nginx routing?
