Nginx appears to be loading only the first 72 KB of my JavaScript files. I've searched all through my nginx config files and cannot find this setting anywhere. I've added things like
location / {
...
proxy_max_temp_file_size 1m;
...
}
and
location / {
...
sendfile on;
sendfile_max_chunk 1m;
...
}
But I still can't override this weird setting that only allows the first part of the file to load.
The connection uses nginx proxy_pass to forward port 80 to Kibana's port 5601. I feel like there could be a setting that limits file transfers over the proxy? Just not sure where to find it.
The proxy_pass connection looks like:
server {
listen 80;
server_name logs.mydomain.com;
auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/htpasswd.users;
location / {
proxy_pass http://localhost:5601;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
And my default nginx settings are posted here:
http://www.heypasteit.com/clip/OEKIR
It would appear that when disk space runs out, nginx can only serve what it buffers in memory.
When I checked the VM's health I noticed that the disk drive was full. Elasticsearch was logging a couple of GB worth of text into its error logs each day. I still have not fully identified why Elasticsearch is flooding the error logs.
But I think the excessive disk usage contributed to this error. Nginx buffers proxied responses, and with no disk space left it could apparently serve only the first ~72 KB of each file.
Once I cleared out the excessive logs, Nginx began serving the full JS files again without needing to restart nginx or kibana.
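For what it's worth, the 72 KB figure lines up with nginx's default in-memory proxy buffers: proxy_buffer_size plus eight proxy_buffers of one memory page each, which adds up to about 72 KB on platforms with 8 KB pages. That would be all nginx can deliver when it cannot spool the rest of a response to its temp files on a full disk. A sketch of sizing these buffers explicitly; the values here are illustrative, not recommendations:

```nginx
location / {
    proxy_pass http://localhost:5601;

    # Defaults are one memory page each: proxy_buffer_size 8k; proxy_buffers 8 8k;
    # (4k on 4 KB-page systems), i.e. roughly 72 KB held in memory per response.
    proxy_buffer_size 16k;
    proxy_buffers 16 16k;

    # Alternatively, avoid disk temp files entirely; the response then
    # streams to the client at the client's own read speed:
    # proxy_buffering off;
}
```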
I'm using Ubuntu 20.04 to host a number of websites, and I'm using nginx as a gateway server in front of my Apache web server.
The problem I'm facing is that my website won't load one of the components fetched by its JavaScript (via AJAX from jQuery). It's a simple list of background URLs as JSON data, which should normally take under a second to load. And it does, when queried directly in the browser. But it won't load at all when requested by the HTML + JavaScript of the website itself. :(
/etc/nginx/sites-enabled/00-default-ssl.conf :
# HTTPS iRedMail
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name mail.mydomain.com;
root /var/www/html;
index index.php index.html;
include /etc/nginx/templates/misc.tmpl;
include /etc/nginx/templates/ssl.tmpl;
include /etc/nginx/templates/iredadmin.tmpl;
include /etc/nginx/templates/roundcube.tmpl;
include /etc/nginx/templates/sogo.tmpl;
include /etc/nginx/templates/netdata.tmpl;
include /etc/nginx/templates/php-catchall.tmpl;
include /etc/nginx/templates/stub_status.tmpl;
}
# HTTPS my own server
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name mydomain.com;
#root /home/myrealname/data1/htdocs/nicer.app;
ssl_certificate /home/myrealname/data1/certificates/other-ssl/all.crt;
ssl_certificate_key /home/myrealname/data1/certificates/other-ssl/mydomain.com.key;
ssl on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
ssl_ciphers 'kEECDH+ECDSA+AES128 kEECDH+ECDSA+AES256 kEECDH+AES128 kEECDH+AES256 kEDH+AES128 kEDH+AES256 DES-CBC3-SHA +SHA !aNULL !eNULL !LOW !kECDH !DSS !MD5 !RC4 !EXP !PSK !SRP !CAMELLIA !SEED';
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/dhparam.pem;
location / {
proxy_pass https://192.168.178.55:444/;
proxy_redirect off;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Ssl on;
proxy_connect_timeout 159s;
proxy_send_timeout 60;
proxy_read_timeout 60;
send_timeout 60;
resolver_timeout 60;
}
}
/etc/apache2/sites-enabled/00-default-ssl.conf is your typical SSL-enabled Apache config, serving a VirtualHost on *:444.
UPDATE: the problem appears to be independent of running php7.4-fpm on either a unix socket or a TCP port, or plain apache2 with SSL and PHP enabled.
UPDATE 2: the problem is caused by multiple AJAX requests coming from the JavaScript at the same time, something which is largely beyond my control to fix.
Please check if you can load the direct URL while the website loads.
The data you request, is it filtered by PHP?
If so, which PHP and Apache are you using? Plain vanilla, or php-fpm?
One of the things my webpage does is load JSON data from a CouchDB server, which I necessarily serve with SSL.
In all the confusion, I had forgotten to re-enable the nginx forward to CouchDB over http (not https). Simply copying its config file from /etc/nginx/sites-enabled.bak/couchdb to /etc/nginx/sites-enabled/couchdb.conf fixed all my problems :)
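For anyone curious, the restored forward was just a plain reverse-proxy block, roughly like the sketch below (the server_name is hypothetical here; 5984 is CouchDB's default http port):

```nginx
# /etc/nginx/sites-enabled/couchdb.conf (reconstructed sketch, not the literal file)
server {
    listen 80;
    server_name couchdb.mydomain.com;   # hypothetical name

    location / {
        # CouchDB listens on plain http on 5984 by default
        proxy_pass http://127.0.0.1:5984;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```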
As described in the problem statement, I am running openSUSE (Leap 42.3). I have a node app running on localhost:3000 which I would like to publish (make it available to people outside my network). I already have a domain, and currently the files are being served by apache2 on port 80. I have found tons of similar problems online and solutions for them, but none are specific to mine (I think because of the operating system). Can anyone give me a step-by-step solution?
The first solution I found told me to change the configuration file, and this is what I have right now:
<VirtualHost *:80>
ServerName test.mytestsitebyrichard.com
ServerAlias *.test.mytestsitebyrichard.com
DocumentRoot /srv/www/htdocs
#ProxyRequests on <-- currently commented out, but this is what online tutorials told me to do. However, it crashes apache2
#ProxyPass /cs/ http://localhost:3000/
</VirtualHost>
Do I need to enable anything? I have enabled ProxyPass and ProxyPassReverse from the configuration menu. Any help would be appreciated. Thank you.
Note: please refer to the screenshots below.
You can achieve this in nginx with a reverse proxy.
In your /etc/nginx/sites-enabled/ directory, add a new configuration file (e.g. myapp.conf) with the following configuration:
server {
listen 80;
server_name yoururl.com;
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
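Alternatively, since apache2 is already bound to port 80 in your setup (nginx can't listen on the same port at the same time), you can do the reverse proxy in Apache itself. A sketch based on your VirtualHost; it assumes mod_proxy and mod_proxy_http are enabled (a2enmod proxy proxy_http, then restart apache2) and that the node app stays on localhost:3000:

```apache
<VirtualHost *:80>
    ServerName test.mytestsitebyrichard.com
    ServerAlias *.test.mytestsitebyrichard.com
    DocumentRoot /srv/www/htdocs

    # Keep ProxyRequests off: "On" turns Apache into a *forward* proxy,
    # which is not what you want here (and without mod_proxy loaded the
    # directive is unknown, which is likely why apache2 crashed).
    ProxyRequests Off
    ProxyPreserveHost On

    ProxyPass        /cs/ http://localhost:3000/
    ProxyPassReverse /cs/ http://localhost:3000/
</VirtualHost>
```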
I am trying to set up greenlock-express to run behind nginx proxy.
Here is my nginx config
...
# redirect
server {
listen 80;
listen [::]:80;
server_name mydomain.com;
location / {
return 301 https://$server_name$request_uri;
}
}
# serve
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name mydomain.com;
# SSL settings
ssl on;
ssl_certificate C:/path/to/mydomain.com/fullchain.pem;
ssl_certificate_key C:/path/to/mydomain.com/privkey.pem;
# enable session resumption to improve https performance
ssl_session_cache shared:SSL:50m;
ssl_session_timeout 1d;
ssl_session_tickets off;
# enables server-side protection from BEAST attacks
ssl_prefer_server_ciphers on;
# disable SSLv3(enabled by default since nginx 0.8.19) since it's less secure then TLS
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# ciphers chosen for forward secrecy and compatibility
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
# enable OCSP stapling (mechanism by which a site can convey certificate revocation information to visitors in a privacy-preserving, scalable manner)
resolver 8.8.8.8 8.8.4.4;
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate C:/path/to/mydomain.com/chain.pem;
# config to enable HSTS(HTTP Strict Transport Security) https://developer.mozilla.org/en-US/docs/Security/HTTP_Strict_Transport_Security
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";
# added to make handshake take less resources
keepalive_timeout 70;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass https://127.0.0.1:3001/;
proxy_redirect off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
...
I have a node server running on port 3000 (http) and port 3001 (https). Everything else seems to be working, but the certificates do not renew, and they expire after 3 months.
If I close nginx and run the node server on port 80 (http) and port 443 (https), then it renews the certs.
I made sure that .well-known/acme-challenge is forwarded to the node server, i.e. when I go to the URL http(s)://mydomain.com/.well-known/acme-challenge/randomstr I get the following response:
{
"error": {
"message": "Error: These aren't the tokens you're looking for. Move along."
}
}
The easy way is to separate the webroot used for ACME authentication.
Create a webroot directory for ACME authentication.
C:\www\letsencrypt\.well-known
In the nginx configuration, set the webroot for ACME authentication to the previously created directory.
http://example.com/.well-known/acme-challenge/token -> C:/www/letsencrypt/.well-known/acme-challenge/token
server {
listen 80;
listen [::]:80;
server_name mydomain.com;
location ^~ /.well-known/acme-challenge/ {
default_type "text/plain";
root C:/www/letsencrypt;
}
location / {
return 301 https://$server_name$request_uri;
}
}
Restart nginx.
You can point certbot at the new webroot to authenticate again:
certbot certonly --webroot -w C:\www\letsencrypt\ -d example.com --dry-run
First, test with the --dry-run option; otherwise you may run into the rate limit on failed authentication attempts.
The error you are seeing is that when a token is placed in your
webroot/.well-known/acme-challenge/token
Let's Encrypt tries to verify it from the internet. Going to http://yourdomain/.well-known/acme-challenge/token, it gets a 404 error (page not found). Exactly why it gets a 404 I can't be certain. If you place a file there yourself, is it reachable from the internet?
If you are wondering, there are a couple of automatic ways to renew your SSL certificates without restarting nginx. The one most nginx users seem to prefer is the webroot plugin. First, obtain a new cert using something like:
certbot certonly --webroot -w /path/to/your/webroot -d example.com --post-hook="service nginx reload"
Then set up a cron job to run certbot renew once or twice a day; it will only run the post-hook when it actually renews the certificate. You can also use the --pre-hook flag if you prefer to stop nginx and run certbot in standalone mode.
There’s also a full nginx plugin, which you can activate with --nginx. It’s still being tested, so experiment at your own risk and report any bugs.
Note:
The --post-hook flag will take care of reloading nginx upon renewal of your certs.
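A minimal crontab entry for that, for reference (the schedule is arbitrary; certbot renew is a no-op unless a certificate is actually close to expiry):

```
# run twice a day; --quiet suppresses output unless something fails
17 3,15 * * * certbot renew --post-hook "service nginx reload" --quiet
```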
Is it possible to have an http connection to a node.js server on a website which is generally secured by https?
How do you suggest combining a node connection with a website that is operating on https?
As already mentioned in the comments, it is useful to create a proxy to your node.js applications.
A good tutorial is, for instance, from DigitalOcean: https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-14-04
This tutorial shows that the host configuration can look like this:
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://APP_PRIVATE_IP_ADDRESS:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
In this case, a reverse proxy to port 8080 was created. Generally, this method can be used for any node.js application.
In your case: add the location block in your https server block and modify the route.
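A sketch of what that looks like, with placeholders throughout (example.com, the certificate paths, the /node/ route, and port 8080 are all assumptions to adapt):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/example.com/fullchain.pem;  # placeholder path
    ssl_certificate_key /etc/ssl/example.com/privkey.pem;    # placeholder path

    # Browsers speak https to nginx; nginx speaks plain http to node
    # on the loopback interface, so there are no mixed-content issues.
    location /node/ {
        proxy_pass http://127.0.0.1:8080/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
    }
}
```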
Support for https was just added to node-http-server; it can spin up both http and https servers at the same time. It also ships some test certs you can use and a cert-generation guide for self-signing.
Note: if you want to listen on ports 80 and 443 you'll need to run with sudo.
An example of spinning up the same code on both http and https:
var server=require('node-http-server');
server.deploy(
{
port:8000,
root:'~/myApp/',
https:{
privateKey:`/path/to/your/certs/private/server.key`,
certificate:`/path/to/your/certs/server.pub`,
port:4433
}
}
);
It's super simple and has pretty good documentation, with several examples in the examples folder.
I've been reading more than ever lately. This will be my first webpage, so I decided to build it on node.js. I made the app very quickly and tested it on localhost:9000.
Now I want to run several apps on a VPS. I searched for information and found two options.
First, use nginx to proxy the apps:
upstream example1.com {
server 127.0.0.1:3000;
}
server {
listen 80;
server_name www.example1.com;
rewrite ^/(.*) http://example1.com/$1 permanent;
}
# the nginx server instance
server {
listen 80;
server_name example1.com;
access_log /var/log/nginx/example1.com/access.log;
# pass the request to the node.js server with the correct headers and much more can be added, see nginx config options
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://example1.com;
proxy_redirect off;
}
}
I don't understand this config file because I've never used nginx, so I looked for a second option:
using vhost from express()
express()
.use(express.vhost('m.mysite.com', require('/path/to/m').app))
.use(express.vhost('sync.mysite.com', require('/path/to/sync').app))
.listen(80)
I'm using expressjs and I understand how to configure it, but I have doubts about which is the best option: with express() I have one app managing multiple apps, which I suspect is bad practice and a waste of resources.
From this post, David Ellis says:
If you don't need to use WebSockets (or any HTTP 1.1 feature, really), you can use NginX as your proxy instead.
The advantage is the total load NginX can handle versus Node is higher (being statically compiled and specialized for this sort of thing, basically), but you lose the ability to stream any data (sending smaller chunks at a time).
For a smaller site, or if you're unsure what features you'll need in the future, it's probably better to stick with node-http-proxy and only switch to NginX if you can demonstrate the proxy is the bottleneck on your server. Fortunately NginX isn't hard to set up if you do need it later.
And from this post I read an example of configuring nginx with many apps, but I don't understand how to apply it to my case:
upstream example1.com {
server 127.0.0.1:3000;
}
server {
listen 80;
server_name www.example1.com;
rewrite ^/(.*) http://example1.com/$1 permanent;
}
# the nginx server instance
server {
listen 80;
server_name example1.com;
access_log /var/log/nginx/example1.com/access.log;
# pass the request to the node.js server with the correct headers and much more can be added, see nginx config options
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://example1.com;
proxy_redirect off;
}
}
upstream example2.com {
server 127.0.0.1:1111;
}
server {
listen 80;
server_name www.example2.com;
rewrite ^/(.*) http://example2.com/$1 permanent;
}
# the nginx server instance
server {
listen 80;
server_name example2.com;
access_log /var/log/nginx/example2.com/access.log;
# pass the request to the node.js server with the correct headers and much more can be added, see nginx config options
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://example2.com;
proxy_redirect off;
}
}
So the question is: which is the best option, nginx or express vhost?
And if I should use nginx, is there any tutorial on how to configure it to serve many node.js apps?
Thanks all.
Your nginx config example seems to be what you're looking for. You should create your config files under /etc/nginx/sites-available and then create a symbolic link in /etc/nginx/sites-enabled for each one you want to enable.
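For example, assuming your first site's config file is named example1.conf (the name is arbitrary):

```shell
sudo ln -s /etc/nginx/sites-available/example1.conf /etc/nginx/sites-enabled/example1.conf
sudo nginx -t                 # validate the configuration first
sudo service nginx reload     # apply without dropping connections
```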
Maybe this will help you: http://blog.dealspotapp.com/post/40184153657/node-js-production-deployment-with-nginx-varnish