In my use case I am calling some APIs from the frontend JavaScript. These API endpoints use a port other than port 80.
My application folder structure looks like this:
app/
view-events.js
view.html
There are two servers running on ports 80 and 8081 respectively. The domain.com/view/ page is served from port 80, and view-events.js calls APIs such as http://domain.com:8081/api/services/featured-data/ .
So when I open http://www.domain.com/view/ the page loads, but the components from http://domain.com:8081/api/services/featured-data/ do not load. This happens only on a few networks [mostly corporate networks], because ports other than 80 are blocked on some corporate networks.
How can I get rid of this problem?
Could someone help me with this? Thanks
The usual way to do this is to have some kind of reverse proxy (usually Apache, nginx, or HAProxy) running on port 80 and have it route the requests to the appropriate API (since only one service can run on one port, your APIs will then run on, for example, 8081, 8082, etc.). (This is a common setup even with a single service that is not running on port 80; for example, you will often find Apache running on 80 in front of Tomcat running on 8080.)
An example .conf for Apache could be:
<VirtualHost *:80>
    ServerName example.com

    DocumentRoot /var/www/html/static-html-for-example.com-if-needed
    <Directory /var/www/html/static-html-for-example.com-if-needed>
        Options +Indexes
        Order allow,deny
        Allow from all
    </Directory>

    #LogLevel debug
    ErrorLog logs/error_log
    CustomLog logs/access_log common

    ProxyPass /api1 http://127.0.0.1:8081/api1
    ProxyPassReverse /api1 http://127.0.0.1:8081/api1

    ProxyPass /api2 http://127.0.0.1:8082/api2
    ProxyPassReverse /api2 http://127.0.0.1:8082/api2

    # the backend api does not even have to be on the same server, it just has to be reachable from Apache
    ProxyPass /api3 http://10.10.101.16:18009/api3
    ProxyPassReverse /api3 http://10.10.101.16:18009/api3
</VirtualHost>
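Note that for the ProxyPass and ProxyPassReverse directives to work, the mod_proxy and mod_proxy_http modules need to be enabled in Apache.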
And here is a sample nginx conf for an app and an API [two servers]:
upstream app {
    server 127.0.0.1:9090;
    keepalive 64;
}

upstream api {
    server 127.0.0.1:8081;
    keepalive 64;
}

#
# The default server
#
server {
    listen 80;
    server_name _;

    location /api {
        rewrite /api/(.*) /$1 break;
        proxy_pass http://api;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        real_ip_header X-Forwarded-For;
        real_ip_recursive on;
        #proxy_set_header Connection "";
        #proxy_http_version 1.1;
    }

    location / {
        rewrite /(.*) /$1 break;
        proxy_pass http://app;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        real_ip_header X-Forwarded-For;
        real_ip_recursive on;
        #proxy_set_header Connection "";
        #proxy_http_version 1.1;
    }

    # redirect not found pages to the static page /404.html
    error_page 404 /404.html;
    location = /404.html {
        root /usr/share/nginx/html;
    }

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
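With a setup like this in place, the frontend no longer needs to reference port 8081 at all; it can call the API through port 80 on the same host. A minimal sketch of such a call (hypothetical path, assuming the /api location above routes to the backend that serves featured-data):

// hypothetical frontend call once the proxy is in place: everything goes through port 80
fetch('http://domain.com/api/services/featured-data/')
    .then(function (response) { return response.json(); })
    .then(function (data) {
        // use the featured data in the page components
        console.log(data);
    });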
The best solution is to move the API to port 80, also for security reasons.
If you can't, you can work around this with JSONP: when you call the 8081 API, you just make a cross-domain call from the web page.
If you decide to use JSONP, the details are here:
How to make a JSONP request from Javascript without JQuery?
but that approach is tedious and I don't suggest you do it.
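For reference, a bare-bones JSONP sketch without jQuery could look like the following (the callback name and the callback query parameter are hypothetical; the API on port 8081 would have to support JSONP and wrap its response in the named function):

// hypothetical JSONP sketch: the server must respond with handleFeaturedData({...})
function handleFeaturedData(data) {
    console.log(data);
}
var script = document.createElement('script');
script.src = 'http://domain.com:8081/api/services/featured-data/?callback=handleFeaturedData';
document.head.appendChild(script);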
Related
I have a frontend in Angular which scrapes data from other pages. Because of CORS, I created a proxy server, which also runs on my VPS. If I start Angular on localhost and my proxy on my VPS, I can use my proxy via the IP address:port of my VPS. That's all good, but I need to use HTTPS, because the whole page runs on HTTPS. Here comes nginx with my domain, which does not work. I set everything up, but it still won't work; it seems like I have a misconfiguration somewhere.
My nginx configuration:
server {
    server_name domain.com www.domain.com;
    root /var/www/docs;

    location / {
        try_files $uri $uri/ /index.html =404;
    }

    location /api {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /proxy { # HERE COMES MY PASS TO MY PROXY
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
.... SSL CONFIGURATION
Is the proxy_pass perhaps wrong here?
This is my nodejs proxy:
require('dotenv').config();

// Listen on a specific host via the HOST environment variable
var host = process.env.HOST || 'domain.com';
// Listen on a specific port via the PORT environment variable
var port = process.env.PORT || 8080;

var cors_proxy = require('cors-anywhere');
cors_proxy.createServer({
    originWhitelist: [], // Allow all origins
    requireHeader: ['origin', 'x-requested-with'],
    removeHeaders: ['cookie', 'cookie2']
}).listen(port, host, function() {
    console.log('Running CORS Anywhere on ' + host + ':' + port);
});
If I try '0.0.0.0' instead of my domain name, it works with http://ip:port/target, but it does not work with domain.com/proxy/target. Then I get an "invalid host" error.
Fixed the problem myself.
The problem was a missing / at the end of my proxy_pass.
The solution is: "proxy_pass http://localhost:8080/;" instead of "proxy_pass http://localhost:8080;"
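For reference, the corrected block would look roughly like this (same port and headers as above). When proxy_pass contains a URI part (here the trailing /), nginx replaces the portion of the request URI that matched the location prefix with that URI, so the /proxy prefix itself is no longer forwarded to the CORS proxy:

location /proxy {
    # trailing slash: the matched /proxy prefix is replaced before proxying
    proxy_pass http://localhost:8080/;
    proxy_http_version 1.1;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}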
I am running a dApp on a cloud server, using nginx and the Parity client with websocket enabled on it.
I installed a certbot certificate for the HTTPS domain.
Now I am having the problem that, while accessing my website over HTTPS, Chrome gives this error:
web3-providers.umd.js:1269 Mixed Content: The page at 'https://www.chain.com/' was loaded over HTTPS, but attempted to connect to the insecure WebSocket endpoint 'ws://40.138.47.154:7546/'. This request has been blocked; this endpoint must be available over WSS.
Then I added the reverse proxy to the nginx config file as
location / {
    # switch off logging
    access_log off;

    proxy_pass http://localhost:7556; # Port for parity websocket
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # WebSocket
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
and then it gives the error
"WebSocket interface is active. Open WS connection to access RPC."
What is the problem here and what should I try?
Thanks
HTTPS won't allow loading insecure content on the page.
One possible solution is to put an SSL/TLS terminator between the application server and the client.
From the official nginx docs, the relevant part of the config file could look like this:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream websocket {
    server localhost:7546;
}

server {
    listen 443;

    location / {
        proxy_pass http://websocket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }

    ssl on;
    # specify cert and key
}
Inside the dApp, change 'ws://40.138.47.154:7546/' to wss://40.138.47.154.
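As a rough sketch of that change in the dApp (assuming the web3.js 1.x browser bundle; the exact provider constructor depends on the web3 version in use):

// hypothetical sketch: connect over wss so the HTTPS page no longer blocks the socket
var web3 = new Web3(new Web3.providers.WebsocketProvider('wss://40.138.47.154'));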
I need help fixing this issue.
I am trying to implement SSL for the domain my.domain.com
The front end is Angular and the back end is Meteor.
I was able to create the SSL certificates properly and got the secure https label on loading the domain, but the page was not rendering because of the error
Uncaught TypeError: a._qs.unescape is not a function
from the build file in the .build/dist/bundle/programs/web.browser
Request URL:https://my.domain.com/5a0c202b90aa3cc1c9414b703c4e1f343fb0dd4e.js?meteor_js_resource=true
The websocket request below remains pending with status 101
wss://my.domain.com/sockjs/362/4q059yw7/websocket
I have not written any code in Meteor to run it on https; I am trying to handle it through nginx.
From Angular, after adding the SSL certificates, it tries to connect to Meteor through wss://localhost/ instead of ws://localhost:3000/
Please find my nginx file below.
events {
}

http {
    server {
        listen 80;
        listen [::]:80 default_server ipv6only=on;

        server_name my.domain.com;
        root /client;
        index index.html;

        location / {
            rewrite ^ https://$server_name$request_uri? permanent;
        }
    }

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    server {
        # Enable HTTP/2
        listen 443 ssl http2;
        listen [::]:443 ssl http2;

        server_name my.domain.com;
        root /client;
        index index.html;

        ssl_certificate /etc/letsencrypt/live/my.domain.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/my.domain.com/privkey.pem; # managed by Certbot
        ssl_dhparam /etc/ssl/certs/dhparam.pem;

        location / {
            proxy_pass http://localhost:3000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade; # allow websockets
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header X-Forwarded-For $remote_addr;
        }

        location /api {
            proxy_pass http://localhost:3000;
        }

        location /uploadFile {
            proxy_pass http://localhost:3000;
        }

        error_page 500 502 503 504 /50x.html;
        location = /51x.html {
            root /client;
        }
    }
}
Any leads would be appreciated.
I figured out the issue I had.
The issue was in the line below in nginx:
proxy_pass http://localhost:3000;
I fixed it by redirecting it to http://localhost:3000/websocket; with the location set to location /websocket.
The snippet is below.
location /websocket {
    proxy_pass http://localhost:3000/websocket;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade; # allow websockets
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header X-Forwarded-For $remote_addr;
}
I am trying to configure my ExpressJS app for an HTTPS connection. The Express server runs at localhost:8080 and the secure one at localhost:8443.
Here is the server.js code related to https:
var express = require('express');
var fs = require('fs');
var https = require('https');

var app = express();

const options = {
    cert: fs.readFileSync('/etc/letsencrypt/live/fire.mydomain.me/fullchain.pem'),
    key: fs.readFileSync('/etc/letsencrypt/live/fire.mydomain.me/privkey.pem')
};

app.listen(8080, console.log("Server running"));
https.createServer(options, app).listen(8443, console.log("Secure server running on port 8443"));
And here is my Nginx configuration:
server {
    listen 80;
    listen [::]:80;
    server_name fire.mydomain.me;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

server {
    listen 443;
    listen [::]:443;
    server_name fire.mydomain.me;

    location / {
        proxy_pass https://localhost:8443;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
What I did:
Generated an SSL certificate with the Letsencrypt certonly tool for the domain fire.mydomain.me.
Configured nginx.
Configured the server.js node app.
Added TCP rules for port 443 in Ufw.
What I tried:
Commenting out the non-SSL server line in server.js to force connections to go through the SSL configuration: this serves the page when I go to fire.mydomain.me:443 but not to https://fire.mydomain.me. In both cases, no SSL. Trying to go to https://fire.mydomain.me generates the message "This website doesn't provide a secure connection" in Google Chrome.
I followed this tutorial in the first place to set up my SSL node config:
https://medium.com/#yash.kulshrestha/using-lets-encrypt-with-express-e069c7abe625#.93jgjlgsc
You don't need to use HTTPS between your nginx reverse proxy and a Node app running on the same host. You can proxy both the HTTP requests on port 80 and the HTTPS requests on port 443 to the same port in your Node app - 8080 in this case - and then you don't need to configure TLS certificates in the Node app at all.
You can change your server.js file to:
var express = require('express');
var app = express();
app.listen(8080, console.log("Server running"));
and use an nginx config that has proxy_pass http://localhost:8080; for both HTTP on port 80 and HTTPS on port 443.
This is how it is usually done. Encrypting traffic on the loopback interface doesn't add any security because to sniff the traffic you need root access to the box and when you have it then you can read the certs and decrypt the traffic anyway. Considering the fact that most of the posts on https://nodejs.org/en/blog/vulnerability/ are related to OpenSSL, one could argue that using SSL in Node can make it less secure in that particular case of encrypting loopback interface traffic. See this discussion on the Node project on GitHub for more info.
Thanks to @rsp's solution, here is the working nginx configuration:
server {
    listen 80;
    listen 443 ssl;
    server_name fire.mydomain.me;

    ssl_certificate /etc/letsencrypt/live/fire.mydomain.me/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/fire.mydomain.me/privkey.pem;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
I have read more than ever during this time; this will be my first webpage, so I decided to build it on Node.js. I made the app very quickly and tested it on localhost:9000.
Now I want to put more apps running on a VPS. I searched for information and I have two options.
First, use nginx to proxy the apps...
upstream example1.com {
    server 127.0.0.1:3000;
}

server {
    listen 80;
    server_name www.example1.com;
    rewrite ^/(.*) http://example1.com/$1 permanent;
}

# the nginx server instance
server {
    listen 80;
    server_name example1.com;
    access_log /var/log/nginx/example1.com/access.log;

    # pass the request to the node.js server with the correct headers
    # and much more can be added, see nginx config options
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://example1.com;
        proxy_redirect off;
    }
}
I don't understand this config file because I have never used nginx, so I looked for the second option:
using vhost from expressjs()
express()
    .use(express.vhost('m.mysite.com', require('/path/to/m').app))
    .use(express.vhost('sync.mysite.com', require('/path/to/sync').app))
    .listen(80)
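Note: express.vhost() was bundled with Express 3; on Express 4 the equivalent middleware lives in the separate vhost package, so the same setup would look roughly like this:

var express = require('express');
var vhost = require('vhost'); // npm install vhost

express()
    .use(vhost('m.mysite.com', require('/path/to/m').app))
    .use(vhost('sync.mysite.com', require('/path/to/sync').app))
    .listen(80);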
I am using expressjs and I understand how to configure it, but I have some questions about which is the best option, because using express() I would have one app managing multiple apps, which I think is not good practice and a waste of resources.
In this post, David Ellis says:
If you don't need to use WebSockets (or any HTTP 1.1 feature, really), you can use NginX as your proxy instead.
The advantage is the total load NginX can handle versus Node is higher (being statically compiled and specialized for this sort of thing, basically), but you lose the ability to stream any data (sending smaller chunks at a time).
For a smaller site, or if you're unsure what features you'll need in the future, it's probably better to stick with node-http-proxy and only switch to NginX if you can demonstrate the proxy is the bottleneck on your server. Fortunately NginX isn't hard to set up if you do need it later.
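For context, the node-http-proxy approach mentioned in that quote would look roughly like this (hostnames and ports are illustrative, using the http-proxy package):

var http = require('http');
var httpProxy = require('http-proxy'); // npm install http-proxy

var proxy = httpProxy.createProxyServer({});

// route incoming requests to the right local app based on the Host header
http.createServer(function (req, res) {
    var host = req.headers.host || '';
    if (host.indexOf('example1.com') !== -1) {
        proxy.web(req, res, { target: 'http://127.0.0.1:3000' });
    } else if (host.indexOf('example2.com') !== -1) {
        proxy.web(req, res, { target: 'http://127.0.0.1:1111' });
    } else {
        res.writeHead(404);
        res.end();
    }
}).listen(80);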
And in this post I read an example of configuring nginx with many apps, but I don't understand how to apply it to my case:
upstream example1.com {
    server 127.0.0.1:3000;
}

server {
    listen 80;
    server_name www.example1.com;
    rewrite ^/(.*) http://example1.com/$1 permanent;
}

# the nginx server instance
server {
    listen 80;
    server_name example1.com;
    access_log /var/log/nginx/example1.com/access.log;

    # pass the request to the node.js server with the correct headers
    # and much more can be added, see nginx config options
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://example1.com;
        proxy_redirect off;
    }
}

upstream example2.com {
    server 127.0.0.1:1111;
}

server {
    listen 80;
    server_name www.example2.com;
    rewrite ^/(.*) http://example2.com/$1 permanent;
}

# the nginx server instance
server {
    listen 80;
    server_name example2.com;
    access_log /var/log/nginx/example2.com/access.log;

    # pass the request to the node.js server with the correct headers
    # and much more can be added, see nginx config options
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://example2.com;
        proxy_redirect off;
    }
}
So the question is: which is the best option, nginx or vhost?
If I have to use nginx, is there any tutorial on how to configure nginx to serve many Node.js apps?
Thanks all
Your example nginx config seems to be what you're looking for. You should create your config files under /etc/nginx/sites-available and then create a symbolic link in /etc/nginx/sites-enabled for those you want to enable.
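After linking a site into sites-enabled, you can check the configuration with nginx -t and reload nginx so that the new server blocks take effect.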
Maybe this will help you - http://blog.dealspotapp.com/post/40184153657/node-js-production-deployment-with-nginx-varnish