I am trying to get a brand-new cloud-based server, running a default Ubuntu 20.04 Server install, working with Apache and Node. The Node server appears to be running without issues and reports that port 4006 is open. However, I believe my Apache config is not right. Requests hang for a very long time. No errors are displayed in the Node terminal, so the fault must lie in my Apache config, seeing as we are getting the Apache errors below and no JS errors.
Request error after some time
502 proxy error
Apache Error Log
[Sun Oct 17 20:58:56.608793 2021] [proxy:error] [pid 1596878] (111)Connection refused: AH00957: HTTP: attempt to connect to [::1]:4006 (localhost) failed
[Sun Oct 17 20:58:56.608909 2021] [proxy_http:error] [pid 1596878] [client 207.46.13.93:27392] AH01114: HTTP: failed to make connection to backend: localhost
vhost
<VirtualHost IP_ADDRESS:80>
ServerName api.aDomain.com
Redirect permanent / https://api.aDomain.com/
</VirtualHost>
<IfModule mod_ssl.c>
<VirtualHost IP_ADDRESS:443>
ServerName api.aDomain.com
ProxyRequests on
LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so
ProxyPass / http://localhost:4006/
ProxyPassReverse / http://localhost:4006/
#certificates SSL
SSLEngine on
SSLCACertificateFile /etc/ssl/api.aDomain.com/apimini.ca
SSLCertificateFile /etc/ssl/api.aDomain.com/apimini.crt
SSLCertificateKeyFile /etc/ssl/api.aDomain.com/apimini.key
ErrorLog ${APACHE_LOG_DIR}/error_api.aDomain.com.log
CustomLog ${APACHE_LOG_DIR}/access_api.aDomain.com.log combined
</VirtualHost>
</IfModule>
terminal output
[nodemon] 1.19.4
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `babel-node -r dotenv/config --inspect=9229 index.js`
Debugger listening on ws://127.0.0.1:9229/c1fcf271-aea8-47ff-910e-fe5a91fce6d2
For help, see: https://nodejs.org/en/docs/inspector
Browserslist: caniuse-lite is outdated. Please run next command `npm update`
🚀 Server ready at http://localhost:4006
Node server
import cors from 'cors'
import scrape from './src/api/routes/scrape'
const express = require('express')
const { ApolloServer, gql } = require('apollo-server-express')
const { postgraphile } = require('postgraphile')
const ConnectionFilterPlugin = require('postgraphile-plugin-connection-filter')
const dbHost = process.env.DB_HOST
const dbPort = process.env.DB_PORT
const dbName = process.env.DB_NAME
const dbUser = process.env.DB_USER
const dbPwd = process.env.DB_PWD
const dbUrl = dbPwd
  ? `postgres://${dbUser}:${dbPwd}@${dbHost}:${dbPort}/${dbName}`
  : `postgres://${dbHost}:${dbPort}/${dbName}`
var corsOptions = {
origin: '*',
optionsSuccessStatus: 200, // some legacy browsers (IE11, various SmartTVs) choke on 204
}
async function main() {
// Construct a schema, using GraphQL schema language
const typeDefs = gql`
type Query {
hello: String
}
`
// Provide resolver functions for your schema fields
const resolvers = {
Query: {
hello: () => 'Hello world!',
},
}
const server = new ApolloServer({ typeDefs, resolvers })
const app = express()
app.use(cors(corsOptions))
app.use(
postgraphile(process.env.DATABASE_URL || dbUrl, 'public', {
appendPlugins: [ConnectionFilterPlugin],
watchPg: true,
graphiql: true,
enhanceGraphiql: true,
})
)
server.applyMiddleware({ app })
//Scraping Tools
scrape(app)
const port = 4006
await app.listen({ port })
console.log(`🚀 Server ready at http://localhost:${port}`)
}
main().catch(e => {
console.error(e)
process.exit(1)
})
Apache Mods Enabled
/etc/apache2/mods-enabled/proxy.conf
/etc/apache2/mods-enabled/proxy.load
/etc/apache2/mods-enabled/proxy_http.load
Updated Error Logs
[Thu Oct 21 10:59:22.560608 2021] [proxy_http:error] [pid 10273] (70007)The timeout specified has expired: [client 93.115.195.232:8963] AH01102: error reading status line from remote server 127.0.0.1:4006, referer: https://miniatureawards.com/
[Thu Oct 21 10:59:22.560691 2021] [proxy:error] [pid 10273] [client 93.115.195.232:8963] AH00898: Error reading from remote server returned by /graphql, referer: https://miniatureawards.com/
In the majority of situations this is caused by SELinux (when you have RHEL or CentOS):
# setsebool -P httpd_can_network_connect 1
link: https://unix.stackexchange.com/questions/8854/how-do-i-configure-selinux-to-allow-outbound-connections-from-a-cgi-script
Also check (a quick sketch of these checks follows below):
connectivity between the machines
the back-end port is open
use a static IP address (IPv4) or a host name that is in your /etc/hosts file
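A quick sketch of those checks from the Apache host itself (the port and address match the setup in the question; adjust as needed):

# is anything listening on the backend port?
sudo ss -tlnp | grep 4006

# can Apache reach the backend over IPv4?
curl -v http://127.0.0.1:4006/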
I cannot say exactly what happened; it could be that the NodeJS app crashed and is no longer running, or that the Apache files are misconfigured. But I strongly believe this scenario will be solved by redoing things from the top.
These steps go through updating the Ubuntu packages, installing the needed applications, configuring the Apache files and setting up a reverse proxy with NodeJS and Apache.
Just don't touch your NodeJS files and other code-related applications and they will be safe. You may also make a backup just to be sure. Other applications running on that Ubuntu server, for example a database application like MySQL, will be just fine and will still be running.
1. First we need to update the Ubuntu packages and install Apache and NodeJS
$ sudo apt update
$ sudo apt install apache2 npm
2. Run this command to enable us to use Apache as a reverse proxy server
sudo a2enmod proxy proxy_http rewrite headers expires
3. Create an Apache virtual host file.
This command will open the nano text editor in the Ubuntu terminal; follow the prompts from the terminal to write the file.
NOTE:
Replace "yourSite.com" with your site's domain. The file name itself isn't really important, but I think it's better to name it after your site domain so you can recognize it.
$ sudo nano /etc/apache2/sites-available/yourSite.com.conf
4. Use the nano editor to write the Apache config file for your site.
Notice: This part is critical, so please pay attention.
Change the ServerName and ServerAlias to your site's domain name.
The ProxyPass and ProxyPassReverse directives each take two parameters.
The first one is a forward slash "/": the URL path on your site that should be proxied; a single slash means every request to the site is forwarded.
The second one is the URL "http://127.0.0.1:3000/" of your NodeJS application. Pay attention to its PORT "3000"; you may need to replace it with the port you use in your NodeJS app.
<VirtualHost *:80>
# replace this with your site domain name without www at the beginning
ServerName example.com
# replace this with your site domain name beginning with www. + yourdomainname + .com
ServerAlias www.example.com
ProxyRequests Off
ProxyPreserveHost On
ProxyVia Full
<Proxy *>
Require all granted
</Proxy>
ProxyPass / http://127.0.0.1:3000/
ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
5. Disable the default Apache site and enable the new one.
$ sudo a2dissite 000-default
$ sudo a2ensite example.com.conf
6. Restart your Apache Server to apply the changes
sudo systemctl restart apache2
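As a hedged aside (not part of the original steps), you can sanity-check things here: verify the Apache config syntax now, and, once the NodeJS app from step 7 below is running, confirm it answers locally on the proxied port.

# check the Apache configuration for syntax errors
sudo apachectl configtest

# once your NodeJS app is running, confirm it answers on the port used in ProxyPass
curl -I http://127.0.0.1:3000/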
We could be done at this point, as we have finished setting up Apache as a reverse proxy, but we also need to install your project's npm packages and then run your NodeJS application.
7. The rest of the steps are all related to NodeJS deployment. You may already know these steps.
# install npm packages
npm install
# for a better experience using NodeJS in production, install pm2 globally
npm install -g pm2
# then run your NodeJS application using the pm2 command
# (you should be at the root of your NodeJS project folder when running this)
pm2 start
# run these pm2 commands to make sure your NodeJS app is restarted automatically after a reboot or crash
$ pm2 save
$ pm2 startup
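If pm2 start on its own does not pick up your app, an explicit ecosystem file is a common alternative. The following is only a sketch; the app name, entry file index.js and port are assumptions you should adjust to your project:

// ecosystem.config.js - minimal pm2 ecosystem sketch (names and port are placeholders)
module.exports = {
  apps: [
    {
      name: 'my-node-app',   // any label you like
      script: 'index.js',    // your app's entry point
      env: {
        NODE_ENV: 'production',
        PORT: 3000           // keep in sync with the ProxyPass port above
      }
    }
  ]
}

Start it with pm2 start ecosystem.config.js, then pm2 save as above.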
Your Apache and NodeJS server is up and running now.
Try to access your site by entering your site's domain name in the browser's address bar,
e.g. http://yourSite.com
If you use Docker for your Node server, then it might be set up incorrectly.
I'm not an expert on this topic, but I have a similar setup; I use socket.io to serve WebSockets...
From your posts it seems you don't need to proxy WebSockets as well; the one shown in your logs seems to be only for debugging purposes (please correct me if I'm wrong).
Here is the core of my Apache configuration:
RewriteEngine On
RewriteCond %{REQUEST_URI} ^/socket.io [NC]
RewriteCond %{QUERY_STRING} transport=websocket [NC]
RewriteRule /(.*) ws://127.0.0.1:4006/$1 [P,L]
<Location />
ProxyPass http://127.0.0.1:4006/ retry=2
ProxyPassReverse http://127.0.0.1:4006/
</Location>
Another couple of suggestions.
Warning
Do not enable proxying with ProxyRequests until you have secured your server. Open proxy servers are dangerous both to your network and to the Internet at large.
Source: https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxyrequests
I don't know what the IPv6 setup on your host looks like; you could try to use 127.0.0.1 rather than localhost in your Apache configuration, to force Apache to use IPv4.
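For example, the relevant lines of the vhost above would become (only a sketch; the port stays 4006 as in your config):

ProxyPass / http://127.0.0.1:4006/
ProxyPassReverse / http://127.0.0.1:4006/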
Related
My development environment is this:
OS: Microsoft Windows 10
PHP framework: Laravel 8.0
PHP version 7.4
Websocket server: cboden/ratchet 0.4.3
WAMP server 3.2.0 (Apache 2.4.41)
Firefox 91.0.1 (64-bit) / chrome
I created a new Laravel app to implement a Secure Websocket Server and get connected to it using plain javascript on the client side (Laravel blade file).
The websocket server works fine, as far as I can see it running, but the web browser is not able to connect.
I have tried using different URLs, with and without port number, but to no avail.
I created a SSL certificate and private key files, using openssl.exe tool, and put them in the command folder for testing purposes.
This is my handle code for the Secure Websocket Server:
public function handle()
{
$loop = Factory::create();
$webSock = new SecureServer(
new Server('0.0.0.0:8090', $loop),
$loop,
array(
'local_cert' => 'certificate.crt',
'local_pk' => 'private.key',
'allow_self_signed' => TRUE,
'verify_peer' => FALSE
)
);
// Ratchet magic
$webServer = new IoServer(
new HttpServer(
new WsServer(
new WebSocketController()
)
),
$webSock
);
$loop->run();
}
My virtual host in httpd-ssl.conf file:
<VirtualHost *:443>
ServerName ssa
DocumentRoot "d:/web/app/ssa/public"
SSLEngine on
SSLCertificateFile "${SRVROOT}/conf/certificate.crt"
SSLCertificateKeyFile "${SRVROOT}/conf/private.key"
SSLVerifyClient none
SSLVerifyDepth 10
<Directory "d:/web/app/ssa/public">
Options +Indexes +Includes +FollowSymLinks +MultiViews
AllowOverride All
Require local
</Directory>
ProxyRequests Off
ProxyPass /wss/ ws://ssa:8090
</VirtualHost>
The Apache modules proxy_module, proxy_http_module and proxy_wstunnel_module are loaded.
The web app is running in HTTPS.
Before, it was running over HTTP and WS and everything worked perfectly, but I need to secure this app and I am having issues to connect to the secure websocket server.
Am I missing something?
Is there something wrong with my Websocket server or Apache configuration?
You are surely trying to connect to the wrong destination. It says wss:///ssa/wss/, but probably it should be wss://your.site.domain/ssa/wss/ .
So let's look at front end code and find out what's wrong with it.
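For reference, a plain-JavaScript client along those lines might look like this; the host and path are assumptions based on the URL suggested above, so adjust them to your real domain:

// connect through the HTTPS site so Apache can proxy the request to the backend
var socket = new WebSocket("wss://your.site.domain/ssa/wss/");

socket.onopen = function () {
  console.log("websocket connected");
};
socket.onerror = function (err) {
  console.error("websocket error", err);
};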
Ok, as @apokryfos pointed out, I tried to proxy the websocket server through HTTPS but I was doing it in the wrong way.
I changed my websocket server to a non-secure one and did the following change to my virtual host:
<VirtualHost *:443>
ServerName ssa
DocumentRoot "d:/web/app/ssa/public"
SSLEngine on
SSLCertificateFile "${SRVROOT}/conf/certificate.crt"
SSLCertificateKeyFile "${SRVROOT}/conf/private.key"
SSLVerifyClient none
SSLVerifyDepth 10
<Directory "d:/web/app/ssa/public">
Options +Indexes +Includes +FollowSymLinks +MultiViews
AllowOverride All
Require local
</Directory>
Redirect /wss /wss/
ProxyPass /wss/ ws://127.0.0.1:8090/
ProxyPassReverse /ws/ wss://127.0.0.1:8090/
</VirtualHost>
On the client side, the browser can now contact the backend WS server through the HTTPS port:
// The connection to the WebSocket Server.
var socket = new WebSocket("wss://ssa:443/wss/");
I got this solution from
Apache Config: Websockets Proxy WSS request to WS backend
Now I got my non-secure Websocket server sending/receiving through HTTPS.
This is, for sure, not the solution I expected to apply to my needs but it certainly works.
I still hope to find a formal solution to connecting plain JavaScript client to a Secure Websocket Server (wss://) without using a proxy mechanism.
In order not to complicate my first answer with more information, here I provide the answer that really worked for me after all.
I created the Secure Websocket Server as follows:
public function handle() {
$loop = Factory::create();
$webSock = new SecureServer(
new Server('0.0.0.0:8443', $loop),
$loop,
array(
'local_cert' => 'C:/wamp64/bin/apache/apache2.4.41/conf/server.crt',
'local_pk' => 'C:/wamp64/bin/apache/apache2.4.41/conf/server.key',
'allow_self_signed' => TRUE,
'verify_peer' => FALSE
)
);
$webServer = new IoServer(
new HttpServer(
new WsServer(
new WebSocketController()
)
),
$webSock
);
$loop->run();
}
Note I changed the port number to 8443 (I don't think this has anything to do with it) and also changed the certificate and key files for the new ones, generated as follows:
openssl req -config config.conf -new -x509 -out server.crt -days 3650
And the config.conf file is:
[req]
default_bits = 2048
encrypt_key = no
default_md = sha256
default_keyfile = server.key
distinguished_name = req_distinguished_name
prompt = no
[req_distinguished_name]
C = KH
ST = Siem Reap
L = SR
O = AHC
OU = IT
CN = localhost
[bs_section]
CA=false
All the difference lies in the last line, CA=false, to indicate that I did not sign as or act as a Certificate Authority (CA).
This gets rid of the MOZILLA_PKIX_ERROR_CA_CERT_USED_AS_END_ENTITY message.
Then, I got rid of the lines that defined the proxy in my httpd-ssl.conf file:
<VirtualHost *:443>
ServerName ssa
DocumentRoot "d:/web/app/ssa/public"
SSLEngine on
SSLCertificateFile "${SRVROOT}/conf/server.crt"
SSLCertificateKeyFile "${SRVROOT}/conf/server.key"
SSLVerifyClient none
SSLVerifyDepth 10
<Directory "d:/web/app/ssa/public">
Options +Indexes +Includes +FollowSymLinks +MultiViews
AllowOverride All
Require local
</Directory>
#Redirect /wss /wss/
#ProxyPass /wss/ ws://127.0.0.1:8090/
#ProxyPassReverse /ws/ wss://127.0.0.1:8090/
</VirtualHost>
Please notice that for this virtual host I used the same certificate and key files I used for the Secure Websocket Server.
Ok, that was it for my certificate issue.
Now everything works as expected.
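For completeness, with the proxy lines commented out the browser presumably connects straight to the websocket port; a minimal sketch, assuming the same host name and the 8443 port from the handler above:

// direct connection to the secure websocket server, no Apache proxy involved
var socket = new WebSocket("wss://ssa:8443/");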
I'm running Skaffold with a few apps in Development :
Skaffold.yaml
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
kubectl:
manifests:
- ./infra/k8s/*
build:
local:
push: false
artifacts:
- image: MYDOCKERID/client
context: client
docker:
dockerfile: Dockerfile
sync:
manual:
- src: '**/*.js'
dest: .
Dockerfile of client :
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
client-depl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: client-depl
spec:
replicas: 1
selector:
matchLabels:
app: client
template:
metadata:
labels:
app: client
spec:
containers:
- name: client
image: MYDOCKERID/client
---
apiVersion: v1
kind: Service
metadata:
name: client-srv
spec:
selector:
app: client
ports:
- name: client
protocol: TCP
port: 3000
targetPort: 3000
When executing skaffold dev from the command line everything is compiled perfectly:
[client-depl-5bdc8cffcd-s9z9r client] event - compiled successfully
[client-depl-5bdc8cffcd-s9z9r client] wait - compiling...
[client-depl-5bdc8cffcd-s9z9r client] Attention: Next.js now collects completely anonymous telemetry regarding usage.
[client-depl-5bdc8cffcd-s9z9r client] This information is used to shape Next.js' roadmap and prioritize features.
[client-depl-5bdc8cffcd-s9z9r client] You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
[client-depl-5bdc8cffcd-s9z9r client] https://nextjs.org/telemetry
[client-depl-5bdc8cffcd-s9z9r client]
[client-depl-5bdc8cffcd-s9z9r client] event - compiled successfully
I've added the domain to the hosts file in the Windows etc folder:
# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
# 102.54.94.97 rhino.acme.com # source server
# 38.25.63.10 x.acme.com # x client host
# localhost name resolution is handled within DNS itself.
# 127.0.0.1 localhost
# ::1 localhost
127.0.0.1 ticketing.dev
However, when I type ticketing.dev in Chrome I get a browser security warning.
How can I run the app in Chrome and overcome this message?
You are missing a certificate to have your connection secure. You will also need to configure the ingress to use the cert that you create.
You should read Manage TLS Certificates in a Cluster.
Kubernetes provides a certificates.k8s.io API, which lets you provision TLS certificates signed by a Certificate Authority (CA) that you control. These CA and certificates can be used by your workloads to establish trust.
You can have a look at a nice guide on Adding SSL/TLS support to applications in a Kubernetes-native way.
You can create a self-signed certificate; this Medium article shows how to do that on Windows.
On Linux you can do following:
[root]# mkdir certs
[root]# openssl req -nodes -newkey rsa:2048 -keyout certs/ticketing.key -out certs/ticketing.csr -subj "/C=/ST=/L=/O=/OU=/CN=default"
[root]# openssl x509 -req -sha256 -days 365 -in certs/ticketing.csr -signkey certs/ticketing.key -out certs/ticketing.crt
This will create a cert that is valid for 365 days.
Then create a secret which will hold your cert:
kubectl create secret generic ticketing-certs --from-file=certs -n default
Once the cert and the secret are ready you should create an ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example1-ingress
spec:
  tls:
  - hosts:
    - www.ticketing.dev
    secretName: ticketing-certs
  rules:
  - host: www.ticketing.dev
    http:
      paths:
      - path: /
        backend:
          serviceName: client-srv
          servicePort: 3000
Let me know if you need anything more.
I assume you are using the project for development purposes. If you want to run the app in Chrome and bypass this security warning, just type this blindly on the webpage showing the warning:
thisisunsafe
This is my first time deploying SSL. I have an Express Node.js module running at localhost:4000. I have generated a self-signed certificate and installed it in the server, and it is working. Now, I have my AngularJS frontend running at localhost:3000 (I am using http-server to run the Angular code).
To make my point more clearer, here's is the code on the server side:-
// Import node js modules
var https = require('https')
var fs = require('fs')
var express = require('express')
var bodyParser = require('body-parser')
var cors = require('cors')
// Load App configuration
var config = require('./config/config')
// Database Integration Here(mongodb)
// Initialize the express app
var app = express()
// App express Configuration
// parse application/json
app.use(bodyParser.json())
// parse application/x-www-form-urlencoded
app.use(bodyParser.urlencoded({ extended: true}))
app.use(cors())
app.set('serverHost', config.server.host)
app.set('serverPort', config.server.port)
app.set('serverUrl', config.server.url)
// Initializing various app modules
// Initialize the components
//Initialize the route(controller)
// Start the app with a given port no and mode
var env = process.env.NODE_ENV || 'development'
var httpsOptions = {
key: fs.readFileSync(__dirname + '/cert/server.key'),
cert: fs.readFileSync(__dirname + '/cert/server.crt')
}
https.createServer(httpsOptions, app).listen(app.get('serverPort'), function () {
// Server and mode info
console.log('The homerungurus backend running on server: '
+ app.get('serverHost')
+ ' and the port is: '
+ app.get('serverPort'))
console.log("The mode is: " + env)
})
As you can see I have installed the certs in the server.
I don't need an http-proxy because I will deploy the Angular webserver on the standard port 443.
I am not able to understand a few things:
How to enable and set the SSL certificate in my Angular module so that Express and Angular can communicate over SSL.
How will I display the cert of my backend Express node to the browser?
I hope I have made my point clearer.
Any help is appreciated.
Ok, where do we start...
You have a backend (express node js) running on port 4000, and a frontend (angularjs with http-server) running on port 3000, so you basically have two independent webservers running. When you say you "installed" the ssl certificate on the server, I assume you have it sitting in some directory but not actually installed on one of your servers.
You have several options to deploy your code, together with your SSL certificate. The best approach would be to separate the frontend from the backend by URLs.
That would mean that your frontend gets served from: https://frontend.example.com
and your backend gets served from https://backend.example.com (you can change the urls to whatever you want, so something like https://example.com or https://www.example.com is fine as well)
As far as I recall, if you have https:// on your frontend, you also need https:// on your backend, otherwise you will have problems with browser security policies. You might also have to look into the same-origin policy, and allow on your server that https://frontend.example.com can access https://backend.example.com, but for that open a new ticket if you need it :D
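As a loose illustration of that allow-step on the Express side (a sketch only, not your exact code; the frontend URL is a placeholder):

var express = require('express')
var cors = require('cors')

var app = express()

// only allow the frontend origin to call this backend
app.use(cors({
  origin: 'https://frontend.example.com',
  optionsSuccessStatus: 200
}))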
The user would see the green symbol from https://frontend.example.com
I assume you know how you would change the backend url so your angular code would use https://backend.example.com instead of http://localhost:4000
To now serve your existing servers on port 443 (that is the default port for HTTPS and is always used if you say https://... but do not specify a port), you need an HTTP proxy.
As an HTTP proxy (you can google for reverse proxy) you can take either Apache or nginx; both are very common.
There are a couple of OS-specific tutorials out there on how to set up nginx/apache, but I'm sure you will manage. Don't forget to enable mod_ssl and mod_proxy_http for Apache (I don't remember if nginx needs something specific as well).
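On Debian/Ubuntu style Apache installs that would roughly be (a hedged sketch; module handling differs on other distributions):

sudo a2enmod ssl proxy proxy_http rewrite headers
sudo systemctl restart apache2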
A typical config for an apache reverse proxy would look like this:
<VirtualHost *:80>
# this part redirects all traffic from normal http to https
ServerName frontend.example.com
ServerSignature Off
RewriteEngine on
RewriteCond %{HTTPS} !=on
RewriteRule .* https://%{SERVER_NAME}%{REQUEST_URI} [NE,R,L]
</VirtualHost>
<virtualhost *:443>
# this is the actual part with some security enhancements
ServerName frontend.example.com
ServerAdmin webmaster@localhost
# be careful with HSTS, it might break your setup if you
# do not know what you do. If you are not sure, do not
# comment the next line in
# Header always add Strict-Transport-Security "max-age=15768000"
# Enable SSL
SSLEngine on
# only strong encryption ciphers
# for reference https://community.qualys.com/blogs/securitylabs/2013/08/05/configuring-apache-nginx-and-openssl-for-forward-secrecy
# and no RC4 according to https://community.qualys.com/blogs/securitylabs/2013/03/19/rc4-in-tls-is-broken-now-what
SSLProtocol all -SSLv2 -SSLv3
SSLHonorCipherOrder on
SSLCipherSuite "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4"
SSLCompression Off
SSLCertificateFile /path/to/cert.pem
SSLCertificateKeyFile /path/to/privkey.pem
# this next line is not needed if you have a self signed cert
SSLCertificateChainFile /path/to/chain.pem
ServerSignature Off
RequestHeader set X-FORWARDED-PROTOCOL https
RequestHeader set X-Forwarded-Ssl on
ProxyPreserveHost On
# Ensure that encoded slashes are not decoded but left in their encoded state.
# http://doc.gitlab.com/ce/api/projects.html#get-single-project
AllowEncodedSlashes NoDecode
<Location />
# New authorization commands for apache 2.4 and up
# http://httpd.apache.org/docs/2.4/upgrading.html#access
Require all granted
ProxyPassReverse http://127.0.0.1:3000
ProxyPassReverse http://frontend.example.com/
</Location>
#apache equivalent of nginx try files
# http://serverfault.com/questions/290784/what-is-apaches-equivalent-of-nginxs-try-files
# http://stackoverflow.com/questions/10954516/apache2-proxypass-for-rails-app-gitlab
RewriteEngine on
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
RewriteRule .* http://127.0.0.1:3000%{REQUEST_URI} [P,QSA]
RequestHeader set X_FORWARDED_PROTO 'https'
</virtualhost>
You will need the exact same thing twice: once for the frontend as shown above, and once for the backend, where you replace port 3000 with 4000 and frontend.example.com with backend.example.com.
I hope this helps you a bit. It's not as complete as it could be, but it should give you a hint on how to set up your two HTTP servers behind an HTTP proxy to serve your SSL certificate.
The above comment made by @chickahoona is more than enough. My solution is as follows:
I removed http-server and used nginx for my frontend, because I wanted to have html5 mode and for that I needed URL rewriting.
I have used nginx as a proxy server rather than Apache.
That's it, and everything else is the same as @chickahoona has pointed out.
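For anyone wanting a starting point, a minimal nginx sketch of that idea could look like the following; the domain names, certificate paths, web root and backend port are assumptions, not the exact config I used:

# frontend: static Angular files with html5-mode fallback
server {
    listen 443 ssl;
    server_name frontend.example.com;

    ssl_certificate     /path/to/cert.pem;
    ssl_certificate_key /path/to/privkey.pem;

    root  /var/www/frontend;
    index index.html;

    location / {
        # html5 mode: unknown paths fall back to index.html
        try_files $uri $uri/ /index.html;
    }
}

# backend: reverse proxy to the Express app on port 4000
server {
    listen 443 ssl;
    server_name backend.example.com;

    ssl_certificate     /path/to/cert.pem;
    ssl_certificate_key /path/to/privkey.pem;

    location / {
        proxy_pass https://127.0.0.1:4000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}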
I am trying to redirect certain subdomains to a specific port on my Ubuntu AWS EC2 virtual server. I already tried it with DNS and that wouldn't work, so based on the following topics, Default route using node-http-proxy? and How do I use node.js http-proxy for logging HTTP traffic in a computer?, I was trying to create a Node.JS proxy server with logging. That said, I mixed them together a bit (I'm new to Node.JS, still learning) and made the following script:
var httpProxy = require('http-proxy');
var PORT = 80;
logger = function() {
return function (request, response, next) {
// This will run on each request.
console.log(JSON.stringify(request.headers, true, 2));
next();
}
}
var options = {
// this list is processed from top to bottom, so '.*' will go to
// 'http://localhost:3000' if the Host header hasn't previously matched
router : {
'dev.domain.com': 'http://localhost:8080',
'beta.domain.com': 'http://localhost:8080',
'status.domain.com': 'http://localhost:9000',
'health.domain.com': 'http://localhost:9000',
'log.domain.com': 'http://localhost:9615',
'^.*\.domain\.com': 'http://localhost:8080',
'.*': 'http://localhost:3000'
}
};
// Listen to port 80
httpProxy.createServer(logger(), options).listen(PORT);
console.log("Proxy server started, listening to port" + PORT);
Well, what happens is that I keep getting the following error and can't figure out how to make this work:
$node proxyServer.js
Proxy server started, listening to port80
events.js:72
throw er; // Unhandled 'error' event
^
Error: listen EACCES
at errnoException (net.js:904:11)
at Server._listen2 (net.js:1023:19)
at listen (net.js:1064:10)
at Server.listen (net.js:1138:5)
at ProxyServer.listen (/home/ubuntu/QuantBull-Project/node_modules/http-proxy/lib/http-proxy/index.js:130:16)
at Object.<anonymous> (/home/ubuntu/QuantBull-Project/proxyServer.js:28:43)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
In short, I'm trying to receive HTTP requests on port 80; if a request came from sub1.domain.com it should be redirected to portA, and if it came from sub2.domain.com it should be redirected to portB, on the same IP address, with both ports open to the public.
Can someone explain how to fix this and why it happens?
Port Access:
As mentioned by the previous answer and comments, ports below 1024 can't be opened by a regular user. This can be overcome by following these instructions:
If cat /proc/sys/net/ipv4/ip_forward returns 0 uncomment net.ipv4.ip_forward at the file /etc/sysctl.conf and enable these changes: sudo sysctl -p /etc/sysctl.conf, if it returns 1, skip this step;
Set up forwarding from port 80 to one desired above 1024 (i.e. port 8080): sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080;
Open up the Linux firewall to allow connections on port 80: sudo iptables -A INPUT -p tcp -m tcp --sport 80 -j ACCEPT and sudo iptables -A OUTPUT -p tcp -m tcp --dport 80 -j ACCEPT
Note: To make these changes stick even when restarting the server, you may want to check this out.
http-proxy's route feature is removed:
After taking care of the port access, the proxy server still did not work, so after opening an issue it turned out that the routing feature had been removed because, according to Nodejitsu Inc.:
The feature was removed due to simplicity. It belongs in a separate module and not in http-proxy itself as http-proxy is just responsible for the proxying bit.
So they recommended to use http-master.
Using http-master:
As described in http-master's README section, node.js is required and we need to run npm install -g http-master (it may need to be run as root depending on your setup). Then we create the config file, i.e. http-master.conf, where we add our routing details; for this specific question, the config file is as follows:
{
# To detect changes made to the config file:
watchConfig: true,
# Enable logging to stdout:
logging: true,
# Here is where the magic happens, definition of our proxies:
ports: {
# because we defined that Port 80 would be redirected to port 8080 before,
# we listen here to that port, could be added more, i.e. for the case of a
# secure connections trough port 443:
8080 : {
proxy: {
# Proxy all traffic for monitor subdomains to port 9000
'status.domain.com' : 9000,
'health.domain.com' : 9000,
# Proxy all traffic for logger subdomains to port 9615
'log.domain.com' : 9615,
# Proxy all traffic from remaining subdomains to port 8000
'*.domain.com' : 8000
},
redirect: {
# redirect .net and .org requests to .com
'domain.net': 'http://domain.com/[path]',
'domain.org': 'http://domain.com/[path]'
}
}
}
}
And we are almost done, now we just run it with: http-master --config http-master.conf and our subdomain routing should be working just fine.
Note: If you want to run the proxy server in the background, I recommend using a tool like forever or pm2, and in the case of using pm2 I recommend reading this issue.
If you are running your proxy as a regular user (not root), you can't open ports below 1024. There may be a way to do this as a normal user but usually I just run such things as root.
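One hedged alternative on Linux, if you would rather not run the whole proxy as root, is to grant the node binary the capability to bind low ports:

# allow node to bind ports below 1024 without running as root
sudo setcap 'cap_net_bind_service=+ep' "$(readlink -f "$(which node)")"

Note that this applies to every Node process started from that binary, so weigh it against simply proxying or using the iptables redirect from the other answer.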
I tried to run "Hello world" server on AWS t1.micro instance. What I done:
I installed Node on aws
Wrote something like this:
require("http").createServer(function(request, response){
response.writeHeader(200, {"Content-Type": "text/plain"});
response.write("Hello World!");
response.end();
}).listen(8080);
Ran it on AWS: node test_server.js
Now I try to send a request from my local machine to the server like this:
curl http://NAME:8080 where NAME is the public DNS name from the AWS console, but nothing happens.
What did I forget? Or what did I do wrong?
I tried to look for some kind of tutorial, but they all describe how to run this on a local machine or propose setting up Nginx. I am looking for a minimalist example.
You need to tell Amazon to authorize inbound traffic on the 8080 port to your instance. See the documentation for the step by step instructions.
In short:
Go to the Amazon EC2 console, click on Instance and open the Security Group preference pane
Add a new rule authorizing inbound traffic from any IP (0.0.0.0/0) on port 8080
Apply changes: the Node web server should now be able to serve HTTP requests (a CLI version of the same rule is sketched below).
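If you prefer the command line, a rough AWS CLI equivalent might be the following; the security group ID is a placeholder you would replace with the group attached to your instance:

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 8080 \
    --cidr 0.0.0.0/0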
@Paul is right, but that was only a part of the solution for me. I was still getting "connection refused" from the local machine (remote curl was fine). So, another part of the puzzle was solved by turning off the Linux firewall (in addition to the AWS security group configs), i.e., iptables!
Are you running CentOS? Try this:
$ service iptables save
$ service iptables stop
$ chkconfig iptables off
Of course, turning off the firewall and opening up the AWS console security groups is not a good long-term idea. The proper way is to use iptables, open port 80 for inbound and outbound, and then reroute 80 to the Node.js port (3000 or 1337 or something else):
$ sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000
$ sudo iptables -A INPUT -p tcp -m tcp --sport 80 -j ACCEPT
$ sudo iptables -A OUTPUT -p tcp -m tcp --dport 80 -j ACCEPT
You can also use Nginx to do the redirect. Varnish Cache is a good tool to have as well, because it dramatically decreases the load on Node.js processes if you have a lot of users hitting one resource/page.
Further reading about AWS and Node.js:
http://www.lauradhamilton.com/how-to-set-up-a-nodejs-web-server-on-amazon-ec2
How to disable iptables on Linux:
http://www.cyberciti.biz/faq/turn-on-turn-off-firewall-in-linux/
Same on CentOS and Fedora:
http://www.cyberciti.biz/faq/disable-linux-firewall-under-centos-rhel-fedora/