How to install multiple Node.js apps with multiple domains? - javascript

I've read more than ever lately; this will be my first webpage, so I decided to build it on Node.js. I put the app together quickly and tested it at localhost:9000.
Now I want to run several apps on a VPS. I searched for information and found two options.
The first is to use nginx to proxy the apps:
upstream example1.com {
    server 127.0.0.1:3000;
}

server {
    listen 80;
    server_name www.example1.com;
    rewrite ^/(.*) http://example1.com/$1 permanent;
}

# the nginx server instance
server {
    listen 80;
    server_name example1.com;
    access_log /var/log/nginx/example1.com/access.log;

    # pass the request to the node.js server with the correct headers;
    # much more can be added, see nginx config options
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://example1.com;
        proxy_redirect off;
    }
}
I don't understand this config file because I have never used nginx, so I looked for a second option:
using vhost from Express
express()
    .use(express.vhost('m.mysite.com', require('/path/to/m').app))
    .use(express.vhost('sync.mysite.com', require('/path/to/sync').app))
    .listen(80)
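(Side note: in Express 4 and later, express.vhost was removed from core; the equivalent today uses the separate vhost middleware package. A minimal sketch, assuming vhost is installed:)

// same idea as above, with vhost() mounting a sub-app per hostname
var express = require('express');
var vhost = require('vhost');

express()
    .use(vhost('m.mysite.com', require('/path/to/m').app))
    .use(vhost('sync.mysite.com', require('/path/to/sync').app))
    .listen(80);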
I'm using Express and I understand how to configure this, but I have doubts about which is the better option, because with express() I have one app managing multiple apps, which I think is bad practice and a waste of resources.
From this post, David Ellis says:
If you don't need to use WebSockets (or any HTTP 1.1 feature, really), you can use NginX as your proxy instead.
The advantage is the total load NginX can handle versus Node is higher (being statically compiled and specialized for this sort of thing, basically), but you lose the ability to stream any data (sending smaller chunks at a time).
For a smaller site, or if you're unsure what features you'll need in the future, it's probably better to stick with node-http-proxy and only switch to NginX if you can demonstrate the proxy is the bottleneck on your server. Fortunately NginX isn't hard to set up if you do need it later.
And from this post I read an example of configuring nginx with many apps, but I don't understand how to apply it to my case:
upstream example1.com {
    server 127.0.0.1:3000;
}

server {
    listen 80;
    server_name www.example1.com;
    rewrite ^/(.*) http://example1.com/$1 permanent;
}

# the nginx server instance
server {
    listen 80;
    server_name example1.com;
    access_log /var/log/nginx/example1.com/access.log;

    # pass the request to the node.js server with the correct headers;
    # much more can be added, see nginx config options
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://example1.com;
        proxy_redirect off;
    }
}

upstream example2.com {
    server 127.0.0.1:1111;
}

server {
    listen 80;
    server_name www.example2.com;
    rewrite ^/(.*) http://example2.com/$1 permanent;
}

# the nginx server instance
server {
    listen 80;
    server_name example2.com;
    access_log /var/log/nginx/example2.com/access.log;

    # pass the request to the node.js server with the correct headers;
    # much more can be added, see nginx config options
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://example2.com;
        proxy_redirect off;
    }
}
So the question is: which is the better option, nginx or vhost?
If I have to use nginx, is there any tutorial on how to configure it to serve many Node.js apps?
Thanks all.

Your example Nginx config seems to be what you're looking for. You should create your config files under /etc/nginx/sites-available and then create a symbolic link in /etc/nginx/sites-enabled for each one you want to enable.
Maybe this will help you: http://blog.dealspotapp.com/post/40184153657/node-js-production-deployment-with-nginx-varnish
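On the Node side, each site is just a separate app bound to its own local port, matching the upstream blocks in the question (a minimal Express sketch; 3000 and 1111 are the ports from the config above):

// app for example1.com -- nginx's "upstream example1.com" points at this port
var express = require('express');
var app = express();

app.get('/', function (req, res) {
    res.send('Hello from example1.com');
});

app.listen(3000, '127.0.0.1');
// the example2.com app is the same idea in its own process, listening on 127.0.0.1:1111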

Related

How do I use NGINX or Apache2 on openSUSE to re-route my port 80 to localhost:3000 so I can run my Node.js app

As described in the problem statement, I am running openSUSE (Leap 42.3). I have a Node app running on localhost:3000 which I would like to publish (make available to people outside my network). I already have a domain and currently the files are being served by apache2 on port 80. I have found tons of similar problems online and solutions for them, but none are specific to mine (I think it is because of the operating system). Can anyone give me a step-by-step solution?
The first solution I found told me to change the configuration file and this is what I have right now:
<VirtualHost *:80>
    ServerName test.mytestsitebyrichard.com
    ServerAlias *.test.mytestsitebyrichard.com
    DocumentRoot /srv/www/htdocs
    # Currently commented out; this is what online tutorials told me to do, but it crashes apache2:
    #ProxyRequests on
    #ProxyPass /cs/ http://localhost:3000/
</VirtualHost>
Do I need to enable anything? I have enabled the ProxyPass and ProxyPassReverse from the configuration menu. Any help would be appreciated. Thank you.
You can achieve this in Nginx with a reverse proxy.
In your /etc/nginx/sites-enabled/ directory, add a new configuration file (e.g. myapp.conf) with the following configuration:
server {
    listen 80;
    server_name yoururl.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

http to a Node server over https nginx website

Is it possible to have an HTTP connection to a Node.js server from a website that is otherwise secured by HTTPS?
How do you suggest combining a Node connection with a website that operates over HTTPS?
As already mentioned in the comments, it is useful to create a proxy to your Node.js applications.
A good tutorial is, for instance, the one from DigitalOcean: https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-14-04
This tutorial shows that the host configuration can look like this:
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://APP_PRIVATE_IP_ADDRESS:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
In this case, a reverse proxy to port 8080 is created. Generally, this method can be used for any Node.js application.
In your case: add the location block inside your https server block and modify the route.
Support for https was just added to node-http-server; it can spin up both http and https servers at the same time. It also has some test certs you can use and a cert generation guide for self-signing.
Note: if you want to spin up on ports 80 and 443 you'll need to run with sudo.
An example of spinning up the same code on both http and https:
var server = require('node-http-server');

// Serve ~/myApp over plain HTTP on port 8000 and HTTPS on port 4433
server.deploy(
    {
        port: 8000,
        root: '~/myApp/',
        https: {
            privateKey: `/path/to/your/certs/private/server.key`,
            certificate: `/path/to/your/certs/server.pub`,
            port: 4433
        }
    }
);
It's super simple and has pretty good documentation, with several examples in the examples folder.

How to access api using different port from any network?

In my use case I am calling some APIs from the frontend JavaScript. These API endpoints use a port other than 80.
My application folder structure looks like this:
app/
    view-events.js
    view.html
There are two servers, running on ports 80 and 8081 respectively. The domain.com/view/ page is served from port 80, and view-events.js calls APIs such as http://domain.com:8081/api/services/featured-data/ .
So when I open http://www.domain.com/view/ the page loads, but the components from http://domain.com:8081/api/services/featured-data/ do not. This happens only on a few networks [mostly corporate networks], because ports other than 80 are blocked on some corporate networks.
How can I get rid of this problem?
Could someone help me with this? Thanks
The usual way to do this is to have some kind of reverse proxy (usually Apache, nginx, or HAProxy) running on port 80 and have it route the requests to the appropriate API (since only one service can run on one port, your APIs will then run on, for example, 8081, 8082, etc.). (This is a common setup even with one service that is not running on port 80; for example you can often find Apache running on 80 in front of Tomcat running on 8080.)
The example .conf for Apache could be:
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/html/static-html-for-example.com-if-needed

    <Directory /var/www/html/static-html-for-example.com-if-needed>
        Options +Indexes
        Order allow,deny
        Allow from all
    </Directory>

    #LogLevel debug
    ErrorLog logs/error_log
    CustomLog logs/access_log common

    ProxyPass /api1 http://127.0.0.1:8081/api1
    ProxyPassReverse /api1 http://127.0.0.1:8081/api1
    ProxyPass /api2 http://127.0.0.1:8082/api2
    ProxyPassReverse /api2 http://127.0.0.1:8082/api2
    # the backend api does not even have to be on the same server, it just has to be reachable from Apache
    ProxyPass /api3 http://10.10.101.16:18009/api3
    ProxyPassReverse /api3 http://10.10.101.16:18009/api3
</VirtualHost>
And here is a sample nginx conf for app and api [two servers]:
upstream app {
    server 127.0.0.1:9090;
    keepalive 64;
}

upstream api {
    server 127.0.0.1:8081;
    keepalive 64;
}

#
# The default server
#
server {
    listen 80;
    server_name _;

    location /api {
        rewrite /api/(.*) /$1 break;
        proxy_pass http://api;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        real_ip_header X-Forwarded-For;
        real_ip_recursive on;
        #proxy_set_header Connection "";
        #proxy_http_version 1.1;
    }

    location / {
        rewrite /(.*) /$1 break;
        proxy_pass http://app;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        real_ip_header X-Forwarded-For;
        real_ip_recursive on;
        #proxy_set_header Connection "";
        #proxy_http_version 1.1;
    }

    # redirect not found pages to the static page /404.html
    error_page 404 /404.html;
    location = /404.html {
        root /usr/share/nginx/html;
    }

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
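With either proxy in place, the frontend calls the API through port 80 on the same origin instead of :8081. A minimal sketch of the client-side change (the exact path depends on how the rewrite above maps onto your backend routes):

// Browser-side: request the API through the port-80 proxy, so corporate
// firewalls that block non-standard ports are no longer a problem.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://www.domain.com/api/services/featured-data/');
xhr.onload = function () {
    console.log(JSON.parse(xhr.responseText));
};
xhr.send();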
The best solution is to move the API to port 80, if only for security reasons.
If not, you can solve this with JSONP: when you call the 8081 API, you make a cross-domain call from the web page.
If you decide to use JSONP, here is a detailed solution:
How to make a JSONP request from Javascript without JQuery?
This is tedious, though, and I don't suggest you do it.
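For reference, the JSONP pattern without jQuery looks roughly like this (hypothetical callback name; the server on :8081 would have to wrap its JSON in the callback for this to work):

// Sketch of a JSONP call: the response must be JavaScript of the form
// handleFeatured({...json...}); which the injected script tag then executes.
function jsonp(url, callbackName) {
    window[callbackName] = function (data) {
        console.log(data);            // handle the API response here
        delete window[callbackName];
    };
    var script = document.createElement('script');
    script.src = url + (url.indexOf('?') === -1 ? '?' : '&') + 'callback=' + callbackName;
    document.head.appendChild(script);
}

jsonp('http://domain.com:8081/api/services/featured-data/', 'handleFeatured');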

Is there a way to deal with CORS without having access to the backend?

I think I've gone through 100 Stack Overflow posts about enabling CORS with AngularJS. Most of them reference adding headers to the API.
Is there any way to solve this issue if you don't have direct access to edit the API you're using?
If you need CORS headers but the original API doesn't provide them, you can set up a small proxy server and call through that. Using HAProxy or Nginx, you can restrict the destinations to just the API and add headers on the way through. You may also be able to set up the proxy on a path under the site's origin and avoid headers altogether.
To set up your own CORS proxy with HAProxy, you just need a few config lines:
listen http-in
    mode http
    bind *:80

    # Add CORS headers when an Origin header is present
    capture request header origin len 128
    http-response add-header Access-Control-Allow-Origin %[capture.req.hdr(0)] if { capture.req.hdr(0) -m found }
    rspadd Access-Control-Allow-Headers:\ Origin,\ X-Requested-With,\ Content-Type,\ Accept if { capture.req.hdr(0) -m found }
I prefer to use HAProxy, because it's incredibly resource-efficient.
With Nginx, you need a little bit more configuration as the headers and proxy are set up separately:
upstream api_server {
    server apiserver.com;
}

server {
    charset UTF-8;
    listen 80;
    root /home/web/myclient;
    index index.html;
    server_name myclient.com;

    location /api/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://api_server/;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }

    location ~ /\. {
        deny all;
    }

    location / {
        try_files $uri =404;
    }
}
Both examples are copied from the linked blog posts in case they ever vanish. I did not write either of them.
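If you would rather keep the whole stack in Node, the same idea can be sketched with nothing but the core http module (apiserver.com is a placeholder for the real API host, and there is no production hardening here):

// Tiny CORS proxy sketch: forwards every request to the upstream API and
// adds the Access-Control-Allow-Origin header the original API is missing.
var http = require('http');

http.createServer(function (req, res) {
    var upstream = http.request({
        host: 'apiserver.com',                                       // placeholder API host
        path: req.url,
        method: req.method,
        headers: Object.assign({}, req.headers, { host: 'apiserver.com' })
    }, function (apiRes) {
        res.setHeader('Access-Control-Allow-Origin', '*');
        res.writeHead(apiRes.statusCode, apiRes.headers);
        apiRes.pipe(res);
    });
    upstream.on('error', function (err) {
        res.writeHead(502);
        res.end('proxy error: ' + err.message);
    });
    req.pipe(upstream);
}).listen(8080);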
If you use Mozilla Firefox, there is an add-on called "CORS Everywhere". If you install it, you can enable or disable it on any site you want, and it works like a charm.

Meteor WebSocket connection to 'ws://.../websocket' failed: Error during WebSocket handshake: Unexpected response code: 400

I am brand new to things like Meteor.JS, and was wondering about this error. I started the test project (with the button click meter) and it works, but then I go into the console and see
WebSocket connection to 'ws://shibe.ninja/sockjs/243/5gtde_n9/websocket' failed: Error during WebSocket handshake: Unexpected response code: 400
I don't know how to fix it.
Thanks
Maybe a little late, but in case you're still stuck on this: I got the same issue when deploying the app and using nginx as a proxy. The fix was to forward the WebSocket upgrade headers in the location block:
location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}
Check also the nginx docs here: http://nginx.com/blog/websocket-nginx/
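A quick way to sanity-check the proxy after this change, assuming the ws npm package is installed (example URL from the error message; /websocket is Meteor's raw DDP WebSocket endpoint): the handshake should now succeed instead of failing with a 400.

// Minimal handshake test against the proxied Meteor server
var WebSocket = require('ws');
var socket = new WebSocket('ws://shibe.ninja/websocket');

socket.on('open', function () {
    console.log('handshake ok');
    socket.close();
});
socket.on('error', function (err) {
    console.log('handshake failed: ' + err.message);
});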
I bumped into this problem myself, but I already had my proxy headers set correctly and it was still not working. Apparently Cloudflare was causing issues. Here is a great article on the subject: https://meteorhacks.com/cloudflare-meets-meteor
As far as I've found, there are three solutions:
Option 1: Use CloudFlare enterprise, which supports sockets.
Option 2: Disable Meteor WebSockets, which will affect your performance as it falls back to sock.js as a replacement. To do this, just set your Meteor environment like this:
export DISABLE_WEBSOCKETS=1
Option 3: In Cloudflare, create a ddp subdomain for the websocket (ddp.yourdomain.com), then disable Cloudflare for the new subdomain. After that, set your Meteor environment like this:
export DDP_DEFAULT_CONNECTION_URL=http://ddp.example.com
After this my nginx config needed some adjustments, as this has now become a cross-origin (CORS) setup. This is my new nginx config:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80 proxy_protocol;
    listen [::]:80 proxy_protocol;
    server_name mydomain.com ddp.mydomain.com;

    ## This allows the CORS setup to work
    add_header Access-Control-Allow-Origin 'http://example.com';
    ## This hides the CORS setup from the Meteor server.
    ## Without this the header is added twice, not sure why?
    proxy_hide_header Access-Control-Allow-Origin;
    ## Ideally the two options above should be disabled,
    ## then this one used instead, but that caused issues in my setup.
    # proxy_set_header Access-Control-Allow-Origin 'http://example.com';

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;                 # pass the host header
        proxy_set_header Upgrade $http_upgrade;      # allow websockets
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Real-IP $remote_addr;     # preserve client IP
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_http_version 1.1;

        # Meteor browser cache settings (the root path should not be cached!)
        if ($uri != '/') {
            expires 30d;
        }
    }
}
Finally, remember to restart nginx.
