Hi, I've previously tried to do this with Apache with no success, so I've decided to try Nginx instead.
I'm trying to establish the following:
client <--wss--> Nginx <--ws--> Node.js
It seems like a simple thing to do, but I'm not having any success; I continuously get a 301 error.
My client side is simple:
const connection = new WebSocket('wss://' + location.host + '/ws');
The server side is:
const WebSocket = require('ws');
const ws = new WebSocket.Server({port: 8080});
The Nginx config file is:
server {
server_name example.com;
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
location / {
proxy_pass http://localhost:3000;
}
location /ws {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_http_version 1.1; # Needed
proxy_set_header Upgrade $http_upgrade; # Needed
proxy_set_header Connection "upgrade"; # Needed
proxy_pass http://localhost:3000;
}
}
server {
if ($host = example.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
server_name example.com;
return 404; # managed by Certbot
}
I've seen numerous posts about setting up the WebSocket config, and the one I have now should definitely work. However, no matter how hard I try, it isn't working.
I figured it out:
The problem was that I set up the WebSocket server on port 8080, while my proxy_pass points to port 3000.
The solution was to put them both on the same port.
For the app server:
const ws = new WebSocket.Server({port: 3001});
and have Nginx use the same port under /ws:
proxy_pass http://localhost:3001;
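For completeness, here is a minimal sketch of the matching Node side, assuming the ws package; only the port (3001) comes from the fix above, and the echo handler is purely illustrative.
const WebSocket = require('ws');

// Listen on the same port the Nginx /ws location proxies to.
const wss = new WebSocket.Server({ port: 3001 });

wss.on('connection', (socket) => {
  // Simple echo handler, just to show the round trip.
  socket.on('message', (message) => {
    socket.send(`echo: ${message}`);
  });
});
Note that ws accepts connections on any path by default, so the /ws prefix that Nginx forwards is not a problem; if you set the server's path option, it has to match the proxied path.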
Related
I have a reverse proxy with nginx:
server {
server_name CENSURED;
access_log /var/log/nginx/CENSURED.access.log;
error_log /var/log/nginx/CENSURED.error_log;
location / {
proxy_buffering off;
proxy_request_buffering off;
# proxy all HTTP traffic to the Node app on localhost:9090
proxy_pass http://localhost:9090;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# WebSocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_cache_bypass $http_upgrade;
try_files $uri $uri/ =404;
proxy_read_timeout 86400;
}
listen [::]:443 ssl; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/CENSURED/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/CENSURED/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
The server side is made with node:
var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ port: 9090 });
wss.on('connection', function (connection) {
// connection handling omitted in the question
});
Client side:
new WebSocket("ws:"+URL)
I get this server log, which seems to point to a client-side error, but I can't figure out how to fix it:
2022/07/04 12:09:19 [crit] 1809447#1809447: *45 SSL_do_handshake() failed (SSL: error:142090BA:SSL routines:tls_early_post_process_client_hello:bad cipher) while SSL handshaking, client: CENSURED, server: 0.0.0.0:443
Do I have to add something on the server side for the socket to work with SSL? I've already tried a bunch of things and nothing seems to work, and it's strange that it works without any issue with wscat.
Specifying ssl_protocols TLSv1.2 TLSv1.3; and making the server block the default server solved the problem.
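For reference, a rough sketch of that change in the 443 server block above; the listen lines are the ones from the config, and default_server plus the ssl_protocols line are the additions described.
listen [::]:443 ssl default_server; # managed by Certbot
listen 443 ssl default_server; # managed by Certbot

# Pin the protocols; if the Certbot include (options-ssl-nginx.conf) already sets
# ssl_protocols, change it there instead to avoid a duplicate-directive error.
ssl_protocols TLSv1.2 TLSv1.3;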
I want all traffic going to my web server to be redirected to HTTPS on Nginx. However, when I go to my website at http://www.example.com, I get the error "414 Request-URI Too Large", and the URL is ridiculously long (http://www.example.com/http:://www.example.com/http:: and so on for a while), which I assume is what is giving me this error. But I don't know how to fix this redirection error, because my Nginx config file doesn't contain a $request_uri parameter.
Here's the Nginx config file:
server {
server_name example.com;
return 301 https://www.example.com;
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
server {
root /var/www/html;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;
server_name www.example.com;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
proxy_pass http://localhost:8080; #whatever port your app runs on
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
server {
if ($host = www.example.com) {
return 301 https://www.example.com;
}
if ($host = example.com) {
return 301 https://www.example.com;
}
listen 80 default_server;
server_name example.com www.example.com;
return 404;
}
Any help would be greatly appreciated!
I fixed this issue as follows:
Open your nginx.conf file (/etc/nginx/conf.d/nginx.conf).
Add this code inside the http block (a placement sketch follows the list):
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
client_max_body_size 24M;
client_body_buffer_size 128k;
client_header_buffer_size 5120k;
large_client_header_buffers 16 5120k;
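Placed in context, the http block would look roughly like this; only the directives above come from this answer, the surrounding lines are just the usual skeleton.
http {
    # ... existing settings ...

    fastcgi_buffers 8 16k;
    fastcgi_buffer_size 32k;
    client_max_body_size 24M;
    client_body_buffer_size 128k;
    client_header_buffer_size 5120k;
    large_client_header_buffers 16 5120k;

    # ... server blocks / includes ...
}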
Reload your nginx with service nginx reload
I know this is an old question, but for anyone looking for a solution, clearing the cache and cookies in my browser was what worked.
Open Chrome, at the top right, click the three vertical dots icon.
Click More tools and then Clear browsing data.
At the top, choose a time range. (I deleted the last hour.)
Click Clear data.
Here is my setup: I have HTML on / and the Node app on /app. My Nginx config file is below.
server {
listen 80;
listen 443 ssl;
listen [::]:443 ssl;
server_name localhost;
ssl_certificate "/etc/letsencrypt/live/xxxxx/fullchain.pem";
ssl_certificate_key "/etc/letsencrypt/live/xxxxxxx/privkey.pem";
# It is *strongly* recommended to generate unique DH parameters
# Generate them with: openssl dhparam -out /etc/pki/nginx/dhparams.pem 2048
#ssl_dhparam "/etc/pki/nginx/dhparams.pem";
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!RSAPSK:!aD$
ssl_prefer_server_ciphers on;
location / {
# This would be the directory where your frontend code resides
root /usr/share/nginx/html;
index index.html;
try_files $uri $uri/ =404;
}
location /api {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $http_host;
proxy_redirect off;
}
}
server {
listen 80;
listen [::]:80;
server_name localhost;
return 302 https://xxxxx.com;
}
I have also forwarded port 3000 to port 80.
I want to run multiple apps on this server on different ports.
What config should I have for the other apps?
The question is what exactly you want to achieve.
Scenario: Different domains
Add more config files to Nginx with different server_name values (server_name app1.com, server_name app2.com, etc.). Each config should forward to a different Node app on a different port.
Scenario: One domain, different paths
Add more location blocks. E.g., add a new location /app1 block to the existing config file which forwards to a Node app on a different port, as sketched below.
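A rough sketch of that second scenario; the /app1 path and port 3001 are placeholders, not something from the question.
# Added alongside the existing location /api block; the first app stays on port 3000.
location /app1 {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_pass http://127.0.0.1:3001;
    proxy_redirect off;
}
For the different-domains scenario, each app instead gets its own server block with its own server_name and a location / that proxies to that app's port.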
I am trying to set up greenlock-express to run behind an Nginx proxy.
Here is my nginx config:
...
# redirect
server {
listen 80;
listen [::]:80;
server_name mydomain.com;
location / {
return 301 https://$server_name$request_uri;
}
}
# serve
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name mydomain.com;
# SSL settings
ssl on;
ssl_certificate C:/path/to/mydomain.com/fullchain.pem;
ssl_certificate_key C:/path/to/mydomain.com/privkey.pem;
# enable session resumption to improve https performance
ssl_session_cache shared:SSL:50m;
ssl_session_timeout 1d;
ssl_session_tickets off;
# enables server-side protection from BEAST attacks
ssl_prefer_server_ciphers on;
# disable SSLv3 (enabled by default since nginx 0.8.19) since it's less secure than TLS
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# ciphers chosen for forward secrecy and compatibility
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
# enable OCSP stapling (mechanism by which a site can convey certificate revocation information to visitors in a privacy-preserving, scalable manner)
resolver 8.8.8.8 8.8.4.4;
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate C:/path/to/mydomain.com/chain.pem;
# config to enable HSTS(HTTP Strict Transport Security) https://developer.mozilla.org/en-US/docs/Security/HTTP_Strict_Transport_Security
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";
# added to make handshake take less resources
keepalive_timeout 70;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass https://127.0.0.1:3001/;
proxy_redirect off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
...
I have a Node server running on port 3000 (HTTP) and port 3001 (HTTPS). Everything else seems to be working, but the certificates do not renew and expire after 3 months.
If I close Nginx and run the Node server on port 80 (HTTP) and port 443 (HTTPS), then the certs do renew.
I made sure that .well-known/acme-challenge is forwarded to the Node server, i.e. when I go to the URL http(s)://mydomain.com/.well-known/acme-challenge/randomstr I get the following response:
{
"error": {
"message": "Error: These aren't the tokens you're looking for. Move along."
}
}
The easy way is to separate the webroot for ACME authentication.
Create a webroot directory for ACME authentication.
C:\www\letsencrypt\.well-known
In the nginx configuration, set the webroot for ACME authentication to the previously created directory.
http://example.com/.well-known/acme-challenge/token -> C:/www/letsencrypt/.well-known/acme-challenge/token
server {
listen 80;
listen [::]:80;
server_name mydomain.com;
location ^~ /.well-known/acme-challenge/ {
default_type "text/plain";
root C:/www/letsencrypt;
}
location / {
return 301 https://$server_name$request_uri;
}
}
Restart nginx.
You can change the webroot in certbot to authenticate again:
certbot certonly --webroot -w C:\www\letsencrypt\ -d example.com --dry-run
First, test it by adding the --dry-run option; otherwise, you may run into the rate limit on authentication attempts.
The error you are seeing means that when a token is placed in your
webroot/.well-known/acme-challenge/token
Let's Encrypt tries to verify it from the internet. Going to http://yourdomain/.well-known/acme-challenge/token, it gets a 404 error (page not found). Exactly why it gets a 404 I can't be certain. If you place a file there yourself, is it reachable from the internet?
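A quick way to check that, as a sketch (the webroot path and domain are placeholders):
# Drop a test file into the ACME challenge directory of your webroot...
mkdir -p /path/to/your/webroot/.well-known/acme-challenge
echo ok > /path/to/your/webroot/.well-known/acme-challenge/test.txt
# ...then fetch it from outside; a 404 here means the challenge directory
# is not actually being served at that URL.
curl http://yourdomain/.well-known/acme-challenge/test.txt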
If you are wondering, there are a couple of automatic ways to renew your SSL certificates without restarting nginx. The one most nginx users seem to prefer is the webroot plugin: first, obtain a new cert using something like:
certbot certonly --webroot -w /path/to/your/webroot -d example.com --post-hook="service nginx reload"
Then set up a cron job to run certbot renew once or twice a day; it will only run the post-hook when it actually renews the certificate. You can also use the --pre-hook flag if you prefer to stop nginx and run certbot in standalone mode.
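As a sketch, such a cron entry might look like this; the schedule and the reload command are examples, not a prescription.
# Attempt renewal twice a day; the post-hook only runs when a certificate is actually renewed.
0 3,15 * * * certbot renew --quiet --post-hook "service nginx reload"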
There’s also a full nginx plugin, which you can activate with --nginx. It’s still being tested, so experiment at your own risk and report any bugs.
Note:
The --post-hook flag will take care of reloading nginx upon renewal of your certs.
I need help in fixing this issue.
I am trying to add SSL to the domain my.domain.com.
The front end is Angular and the back end is Meteor.
I was able to create the SSL certificates properly and get the secure HTTPS label when loading the domain, but the page was not rendering because of the error
Uncaught TypeError: a._qs.unescape is not a function
from the build file in the .build/dist/bundle/programs/web.browser
Request URL:https://my.domain.com/5a0c202b90aa3cc1c9414b703c4e1f343fb0dd4e.js?meteor_js_resource=true
The WebSocket request below remains pending with status 101:
wss://my.domain.com/sockjs/362/4q059yw7/websocket
I have not written any code in Meteor to run it over HTTPS; I am trying to handle it through Nginx.
From Angular, after adding the SSL certificates, I am trying to connect to Meteor through wss://localhost/ instead of ws://localhost:3000/.
Please find my nginx file below.
events {
}
http {
server {
listen 80;
listen [::]:80 default_server ipv6only=on;
server_name my.domain.com;
root /client;
index index.html;
location / {
rewrite ^ https://$server_name$request_uri? permanent;
}
}
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
# Enable HTTP/2
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name my.domain.com;
root /client;
index index.html;
ssl_certificate /etc/letsencrypt/live/my.domain.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/my.domain.com/privkey.pem; # managed by Certbot
ssl_dhparam /etc/ssl/certs/dhparam.pem;
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade; # allow websockets
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-For $remote_addr;
}
location /api {
proxy_pass http://localhost:3000;
}
location /uploadFile {
proxy_pass http://localhost:3000;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /client;
}
}
}
Any leads would be appreciated.
I figured out the issue I had.
The issue was in the line below in nginx:
proxy_pass http://localhost:3000;
I fixed it by proxying to http://localhost:3000/websocket; with the location set to location /websocket.
The snippet is below:
location /websocket {
proxy_pass http://localhost:3000/websocket;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade; # allow websockets
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-For $remote_addr;
}