HTTP response at 400 or 500 level - javascript

I'm a novice with gRPC. My program is written with Nuxt.js and is a simple login page that receives a username and password and sends them to the server using gRPC.
Everything is fine when I submit a request with BloomRPC. But when using the browser, the request is not sent to the server.
My auth class is as follows:
// auth.js
export default class {
  constructor(vars) {
    this.tokenKey = vars.tokenKey
    this.proto = vars.proto
    this.client = new vars.proto.AuthenticationClient('http://127.0.0.1:50051', null, null)
  }

  async loginRequest(user) {
    const request = new this.proto.LoginRequest()
    request.setUsername(user.username.trim().toLowerCase())
    request.setPassword(user.password.trim())
    return await this.client.login(request, {})
  }
}
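(For context: a client class like AuthenticationClient is typically generated from the .proto file with protoc and the grpc-web plugin. A minimal sketch, assuming a file named auth.proto and an output folder ./proto; your names will differ:)

protoc -I=. auth.proto \
  --js_out=import_style=commonjs:./proto \
  --grpc-web_out=import_style=commonjs,mode=grpcwebtext:./proto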
This error is shown when requesting the server from the browser, whether the server is up or not:
net::ERR_CONNECTION_REFUSED
message: 'Http response at 400 or 500 level'
...
Do I have to do some specific configuration? I just want a hint for configuring it.
UPDATE:
This link says that you should use Envoy. But why do we need it? And how do I configure it?
A BloomRPC screenshot (omitted here) shows, on the right side of the window, the answer being returned correctly.

Where was the problem?
The problem is that requests do not reach the server. So it does not matter if the server is up or down.
> Short Answer:
I needed a proxy to receive requests from the browser, so I used Envoy. nginx receives the request from the browser and then sends it to a port (for example 5000). Envoy listens on port 5000 and then sends the request to the gRPC server running on port 50051.
This is how I designed the routing of a gRPC connection.
> Long Answer:
1. Generate the HTML/CSS/JS files by building the Nuxt project.
I put my website files in the /root/site folder.
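(A minimal sketch of this step, assuming the default Nuxt scripts; the exact command and output folder depend on the project:)

npm run generate          # builds the static site, by default into ./dist
cp -r dist/* /root/site/  # copy the output to the folder nginx will serve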
2. envoy config:
I configured the envoy.yaml file as follows, according to what the grpc-web documentation says:
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 5000 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              config:
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route:
                            cluster: sample_cluster
                            max_grpc_timeout: 0s
                      cors:
                        allow_origin_string_match:
                          - prefix: "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: "1728000"
                        expose_headers: grpc-status,grpc-message
                http_filters:
                  - name: envoy.filters.http.grpc_web
                  - name: envoy.filters.http.cors
                  - name: envoy.filters.http.router
  clusters:
    - name: sample_cluster
      connect_timeout: 0.25s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      hosts: [{ socket_address: { address: 0.0.0.0, port_value: 50051 }}]
This YAML file has two parts: listeners and clusters. In the first part (listeners), we specify which address and port to listen on (here, address 0.0.0.0 and port 5000), and in the second part (clusters), we tell it where to send the requests (here, address 0.0.0.0 and port 50051). To use this file, we give it to Docker, so I create a Dockerfile.
# Dockerfile
FROM envoyproxy/envoy:v1.14.3
COPY ./envoy.yaml /etc/envoy/envoy.yaml
EXPOSE 5000
CMD /usr/local/bin/envoy -c /etc/envoy/envoy.yaml
To build the image, I use the following commands inside the Dockerfile's folder, and then run it:
docker build -t my-grpc-container:1.0.0 .
docker run -d --net=host my-grpc-container:1.0.0
By executing the above commands, Envoy is up; it listens on port 5000 and sends requests to port 50051.
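(A quick sanity check that the proxy is actually listening; this is my own suggestion, not from the original answer. Any HTTP status back from port 5000, even an error status, means Envoy is up, whereas a refused connection means it is not:)

docker ps --filter ancestor=my-grpc-container:1.0.0   # is the container running?
curl -v http://127.0.0.1:5000/                        # expect an HTTP response, not "connection refused"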
For more information about Envoy, read this nice blog on the role of Envoy:
"Envoy translates the HTTP/1.1 calls produced by the client into HTTP/2 calls that can be handled by those services" (gRPC uses HTTP/2 for transport).
3. nginx config:
I went to the /etc/nginx/sites-enabled/ folder, opened the default file, and set the path of my website files (see step 1) as follows:
# server block
root /root/site;
Then I told nginx that if a request URL starts with /rpc/, it should be sent to port 5000 (where Envoy is listening):
# server block
location /rpc/ {
    proxy_http_version 1.1;
    proxy_pass http://127.0.0.1:5000/;
}
Then I restarted nginx:
sudo systemctl restart nginx
By doing this part, nginx is up and sends requests that start with /rpc/ to port 5000, where Envoy is ready to receive them.
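(One detail implied by this setup: the browser client must now target the nginx /rpc/ route instead of the gRPC port directly. A hedged sketch of the change in auth.js, where https://example.com is a placeholder for your own domain:)

// before: new vars.proto.AuthenticationClient('http://127.0.0.1:50051', null, null)
// after: point the generated client at the path nginx proxies to Envoy
this.client = new vars.proto.AuthenticationClient('https://example.com/rpc', null, null)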
In summary:
The server is running on port 50051.
nginx sends requests to port 5000.
Envoy, as an intermediary between nginx and the server, receives requests on port 5000 and sends them to port 50051.
Done

According to the Chrome screenshot, you are trying to access port 5005 in JS, but according to the BloomRPC screenshot, your service is listening on port 50051.

Related

Docker compose, JS: Issue while connecting to doc. container PHP Apache with socket.io from localhost

Windows 10, Docker Desktop + VS;
Docker-compose.yml has images:
node.js with socket.io
php 7.4-apache with socket.io
I need to connect socket.io within php-apache (the website) to socket.io within node.js (a server handling data from php-apache), all in one docker-compose.yml, to run it on a remote VM (hah, sounds like it is possible for an ordinary mortal).
First, I tried to connect 2 containers within docker-compose (node.js to node.js) to be sure the Docker ports + socket.io ports were set correctly (successfully).
(Server side: socket.io listens on port 90, Docker 90:90 (the service name in docker-compose is node_server) || Client side: io('http://node_server:90'), Docker 85:85.)
Having confirmed that the 2 containers were linked and the ports were set correctly, I started to make the "php apache - node.js" socket.io docker-compose.yml.
My php-apache container, with the socket.io client inside, is linked as "http://node_client:80".
When I tried to reach the PHP container at "localhost:80", everything loaded fine, but an error occurred: net::ERR_NAME_NOT_RESOLVED (polling-xhr.js:206).
The docker php-apache container is mapped as 80:80 in docker-compose.yml.
I can connect to Apache (it opens the page and all the HTML written in index.php), but I get an error (ERR_NAME_NOT_RESOLVED), as if the client-side socket.io (php-apache) can't make its request.
I checked ping: both have a connection (pinged node_server:90 / client:80 from each terminal).
From inside the php-apache container I checked curl "http://node_server:90/socket.io/?EIO=4&transport=polling" and it also returned information.
I do understand that some trouble is to be expected (because I connect from localhost to the Docker PHP container, and the socket.io client in that container doesn't know which IP to use: 172.0.x.x, 192.168.x.x, etc.), but I have no idea how to solve it (I need to somehow reach both the index.php Apache page and socket.io).
I need to use php-apache and connect it to node.js. I confirmed socket.io worked node.js-to-node.js, but in php-apache -> node.js something happens when connecting via localhost.
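(An illustration of the suspicion above, not part of the original question: the script in index.php runs in the browser on the host, so the hostname passed to io() must be resolvable from the host, not from inside the Docker network. Since the compose file publishes the server as 90:90, a sketch of a client that targets the published port would be:)

// hypothetical client-side sketch: the browser cannot resolve the compose
// service name "node_server", but it can reach the port published on the host
const socket = io('http://localhost:90');
socket.on('connect', () => console.log(socket.id));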
docker-compose.yml:
version: "3"
services:
node_client:
container_name: client
image: img_client
ports:
- "80:80"
networks:
- test
node_server:
container_name: server
image: img_server
ports:
- "90:90"
networks:
- test
networks:
test:
external: true
Docker.client:
FROM php:7.4-apache
COPY ./client /var/www/html
EXPOSE 80
#(in ./client -> index.php)
./client/index.php:
<script src="https://cdn.socket.io/3.1.3/socket.io.min.js" integrity="sha..." crossorigin="anonymous"></script>
<script>
  const socket = io(`http://node_server:90`, {
    //secure: true,
    //transport: ['websocket'],
    //upgrade: false,
    //rejectUnauthorized: false,
  });
  socket.on('connect', () => { console.log(socket.id) });
</script>
Docker.server:
FROM node:alpine
COPY ./server /app
WORKDIR /app
COPY package*.json ./
COPY . .
CMD [ "node", "./server.mjs" ]
EXPOSE 90
//(in ./server -> server.mjs + node_modules)
./server/server.mjs:
import { createRequire } from 'module';
const require = createRequire(import.meta.url);
const express = require('express');
const app = express();
const cors = require('cors');
//app.use(cors({}))
const server = require('http').createServer(app);
const { Server } = require('socket.io');
const io = new Server(server, {
  //rejectUnauthorized: false,
  cors: {
    origin: '*',
    //methods: ["GET", "POST"],
    //credentials: true,
  }
});
server.listen(90, () => { console.log("Server is Ready!"); });
// Also tried (90, '0.0.0.0', () => ...), but there would be CORS troubles, I believe; anyway, I can't even determine the problem, and socket.io in "node.js + node.js" worked.
//server.listen(PORT_SERVER, '0.0.0.0', () => { console.log("Server is Ready!");});
io.on('connection', (socket) => { console.log('Connected!') });
Heh... I tried to understand what a proxy is and whether nginx could be used somehow to connect the first Docker php-apache container with the CDN module (client) + the second Docker node.js container (server), all within docker-compose.yml, but gave up.

mosquitto+mqtt.js got "Connection refused: Not authorized"

I set up mosquitto on CentOS 7 and a node.js client based on mqtt.js, installing with:
yum install mosquitto mosquitto-clients
The local test
> mosquitto_sub -h localhost -t test
> mosquitto_pub -h localhost -t test -m "hello world"
works fine, but when I ran:
var mqtt = require('mqtt')
var client = mqtt.connect('mqtt://192.168.1.70')
client.on('connect', function () {
  client.subscribe('presence')
  client.publish('presence', 'Hello mqtt')
})
client.on('message', function (topic, message) {
  // message is Buffer
  console.log(message.toString())
  client.end()
})
I got Error: Connection refused: Not authorized
The mosquitto.conf is like:
pid_file /var/run/mosquitto.pid
persistence true
persistence_location /var/lib/mosquitto/
log_dest file /var/log/mosquitto/mosquitto.log
allow_anonymous true
and I used systemctl restart mosquitto to restart it several times, which doesn't help. The firewall is down and the log file stays empty.
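(One way to narrow this down, assuming mosquitto-clients is available on the client machine, is to repeat the local test against the LAN address; if this also fails, the problem is on the broker/port side rather than in mqtt.js:)

mosquitto_sub -h 192.168.1.70 -t test    # remote variant of the local test above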
Can anyone help please?
UPDATE:
It turns out that the mosquitto service is somehow broken as the status shows Active: active (exited).
I used the mosquitto -p 1884 -v command to run another mosquitto process on port 1884, and it works fine. Then I tried to reload the conf using /etc/init.d/mosquitto reload. It gives me:
Reloading mosquitto configuration (via systemctl): Job for mosquitto.service invalid.
[FAILED]
So there IS something wrong with the mosquitto service.
Not a final solution, but I managed to fix this with a remove-reboot-reinstall process; the status went green.
SOLUTION
I managed to find out the reason it doesn't work: I had installed RabbitMQ on my server, and it uses its "rabbitmq_mqtt" plugin, which consumes port 1883. Reassigning the port solves this problem. The problem was simple, but yeah, the CLI should have given me more information.
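(A quick way to see which process actually owns port 1883, assuming ss from iproute2 is installed; lsof -i :1883 works too:)

sudo ss -tlnp | grep 1883    # shows the process bound to the default MQTT port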
You need to add the authorization information to the mqtt connect method, just like this:
var client = mqtt.connect("ws://192.168.1.1", {
  username: "yourUsername",
  password: "yourPassword"
})
Add the authorization details for the client to connect:
var mqtt = require('mqtt')
var client = mqtt.connect('mqtt://192.168.1.70', {
  username: '<username>',
  password: '<password>'
});
client.on('connect', function () {
  client.subscribe('presence')
  client.publish('presence', 'Hello mqtt')
})
client.on('message', function (topic, message) {
  // message is Buffer
  console.log(message.toString())
  client.end()
})

How to run multiple Node apps on the same port using Restify?

I need to run multiple Node apps on the same port. I've found out that I can run multiple Node apps using one port, thanks to this SO question: Running multiple Node (Express) apps on same port. But it's not working for me, probably because I'm using Restify, unless I did something wrong somewhere.
I already have "app1" running on this one port using PM2 built using Restify. I've made another app "app2". The paths are like these:
/var/www/app1
/var/www/app2
with each app having common routes like these:
app.get('/', func...);
app.get('/about', func...);
app.post('/foo', func...);
app.post('/bar', func...);
I've set up "app1"'s last lines of code as: exports.app = app instead of app.listen(8080, function() { ... });
and, where app is
var app = restify.createServer({
  name: 'app1'
});
"app2" is the same as well...
My main.js file (which is saved in /var/www/) is also built on Restify:
main
  .use('/app', require('./app1/index').app)
  .listen(8080);
where main is
var main = restify.createServer({
  name: 'main'
});
But I'm getting an error such as this when I type node main.js (I haven't tried with PM2 yet):
/var/www/node_modules/restify/node_modules/assert-plus/assert.js:45
throw new assert.AssertionError({
^
AssertionError: handler (function) is required
at process (/var/www/node_modules/restify/lib/server.js:76:24)
at argumentsToChain (/var/www/node_modules/restify/lib/server.js:84:13)
at Server.use (/var/www/node_modules/restify/lib/server.js:625:6)
at Object.<anonymous> (/var/www/main.js:47:8)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)
at startup (node.js:119:16)
Note: I've turned off all the apps running under PM2. There are no node apps running on any port.
The only way to do this effectively is to run an HTTP proxy configured to answer requests on a single port and pass them, based upon URL patterns, to servers running on other ports, a simple example of which can be found at A HTTP Proxy Server in 20 Lines of node.js Code.
In essence, your publicly visible proxy server runs on port 80 and you run other servers to handle specific requests.
So for example, if you run three HTTP servers, one as a forwarding proxy and two for specific functions such that:
proxy on port 80
server2 on port 8080 for requests matching regexp:/^\/first(?:\/.*)?$/
server3 on port 8081 for requests matching regexp:/^\/second(?:\/.*)?$/
where the only server that has a public connection is your proxy.
When the proxy receives a request for /first or /first/index.html, it forwards the request to server2 which returns a result document that the proxy then sends back to the original requester.
When it receives a request for /second/foo/bar/page.html, it does the same but using server3 to produce a result.
http-proxy is an implementation of this strategy which uses the http-proxy-rules plugin to process and forward requests based on URL patterns.
UPDATE
For the purposes of clarity, we assume proxy, server2, and server3 above represent individual node HTTP server instances listening on a single IP address but separate ports on the same machine.
Example:
var http = require('http'),
    httpProxy = require('http-proxy'),
    HttpProxyRules = require('http-proxy-rules');

// Set up proxy rules instance
// where
//   any request for /hostname/app1 will be proxy-ed via SERVER2
//   any request for /hostname/app2 will be proxy-ed via SERVER3
var proxyRules = new HttpProxyRules({
  rules: {
    '.*/app1/': 'http://localhost:8080', // TO SERVER2
    '.*/app2/': 'http://localhost:8081'  // TO SERVER3
  }
});

// Create reverse proxy instance
var proxy = httpProxy.createProxy();

// Create http server on hostname:80 that leverages reverse
// proxy instance and proxy rules to proxy requests to
// a different one of two target servers
http.createServer(function(req, res) { // PROXY
  // a match method is exposed on the proxy rules instance
  // to test a request to see if it matches against one
  // of the specified rules
  var target = proxyRules.match(req);
  if (target) {
    return proxy.web(req, res, {
      target: target
    });
  }
  res.writeHead(500, { 'Content-Type': 'text/plain' });
  res.end('No rule found for this request');
}).listen(80);

// create a new HTTP server on localhost:8080 to process
// requests sent from the proxy
http.createServer(function (req, res) { // SERVER2
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  var headers = JSON.stringify(req.headers, true, 2);
  res.write('request successfully proxy-ed to SERVER2!' + '\n' + headers);
  res.end();
}).listen(8080, 'localhost');

// create a new HTTP server on localhost:8081 to process
// requests sent from the proxy
http.createServer(function (req, res) { // SERVER3
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  var headers = JSON.stringify(req.headers, true, 2);
  res.write('request successfully proxy-ed to SERVER3!' + '\n' + headers);
  res.end();
}).listen(8081, 'localhost');
Using this setup:
only the proxy server will be available externally on port 80
the servers running on ports 8080 & 8081 are only available on the local machine
requests received on the proxy at hostname:80 that match the /app1 path (and descendants) will be proxy-ed by the server running on localhost:8080
requests received on the proxy at hostname:80 that match the /app2 path (and descendants) will be served by the server running on localhost:8081
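(A hedged usage check, assuming the example above is running on the same machine: requests to each path should reach the corresponding backend and echo which server handled them:)

curl http://localhost/app1/   # expect: request successfully proxy-ed to SERVER2!
curl http://localhost/app2/   # expect: request successfully proxy-ed to SERVER3!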

How to put haproxy in front of socket.io with SSL? Getting Mixed content warning

I have been using this setting to put haproxy in front of node.js and socket.io.
Some of the code in my haproxy settings:
frontend wwws
    bind 0.0.0.0:443 ssl crt /etc/haproxy/ovee.pem
    timeout client 1h
    default_backend www_backend
    acl is_websocket hdr(Upgrade) -i WebSocket
    use_backend websocket_backend if is_websocket
    tcp-request inspect-delay 500ms
    tcp-request content accept if HTTP
    use_backend flashsocket_backend if !HTTP

frontend flash_policy
    bind 0.0.0.0:843
    timeout client 5s
    default_backend nodejs_flashpolicy

backend www_backend
    balance roundrobin
    option forwardfor
    server apache2 apache-backend:3001 weight 1 maxconn 1024 check
    timeout connect 5s
    timeout http-request 3s
    timeout server 25s

backend websocket_backend
    mode http
    option forwardfor
    option http-server-close
    option forceclose
    no option httpclose
    server server1 socket-backend:3000 weight 1 maxconn 16384 check
nodeServer.js
var fs = require('fs');
var express = require('express'),
    app = express(),
    http = require('http').createServer(app),
    io = require("socket.io").listen(http);
It seems to work well on the first connection, but then the browser blocks all the socket connection attempts as Mixed Content:
The page at 'https://domain.com/391' was loaded over HTTPS,
but requested an insecure XMLHttpRequest endpoint
'http://domain.com:3000/socket.io/?EIO=3&transport=polling&t=1456666556035-0'.
This request has been blocked; the content must be served over HTTPS.
I know it's because my connection to socket.io is via HTTP instead of HTTPS.
client.js
socket = io.connect('http://domain.com:3000', {
  'force new connection': false,
  'reconnection delay': 500,
  'max reconnection attempts': 10
});
I have tried using SSL for nodeServer.js, but that doesn't connect to the socket, and I don't think it is necessary since I'm using haproxy to do all the forwarding:
In nodeServer.js, I have changed to:
var fs = require('fs'),
    sslOption = {
      key: fs.readFileSync('/ssl/crt/zz.key'),
      cert: fs.readFileSync('/ssl/crt/zz.crt'),
      ca: fs.readFileSync('/ssl/crt/zz-ca.crt'),
      requestCert: true
    },
    express = require('express'),
    app = express(),
    // pass sslOption so the HTTPS server actually uses the certificates
    https = require('https').createServer(sslOption, app),
    io = require("socket.io").listen(https);
client.js
socket = io.connect('https://domain.com:3000', {
  'force new connection': false,
  'reconnection delay': 500,
  'max reconnection attempts': 10
});
Has anyone put haproxy in front of node.js and socket.io? What should I use to connect to socket.io from the client.js?
I have somehow solved the problem. I shouldn't have used https://domain.com:3000 in client.js; I should let haproxy do the SSL forwarding. Here are my settings to make SSL work with socket.io. I hope someone who stumbles upon this will correct anything that is outdated or wrong:
frontend www
    bind 0.0.0.0:80
    mode http
    timeout client 5s
    redirect scheme https if !{ ssl_fc }
    acl is_socket hdr(Upgrade) -i WebSocket
    acl is_socket hdr_beg(Host) -i wss
    use_backend socket if is_socket

frontend www-https
    bind 0.0.0.0:443 ssl crt /ssl/domain.pem
    timeout client 1h
    ### Set up a check to identify the connection URL initiated by socket.io
    ### (https://domain.com/socket.io/?EIO=3&transport=polling&t=1441877956651);
    ### if matched, forward to the node backend
    acl is_node path_beg /socket.io/
    use_backend node if is_node
    ### I'm not sure if this one is necessary for wss://domain.com
    acl is_websocket2 hdr(Upgrade) -i WebSocket
    acl is_websocket2 hdr_beg(Host) -i ws
    use_backend socket if is_websocket2
    default_backend apache2
    tcp-request inspect-delay 500ms
    tcp-request content accept if HTTP
    use_backend flashsocket_backend if !HTTP

frontend flash_policy
    bind 0.0.0.0:843
    timeout client 5s
    default_backend nodejs_flashpolicy

### setting for apache, skipped because it's not the focus of the question
backend apache2
    ###

backend flashsocket_backend
    server server1 ipaddress:3000 weight 1 maxconn 16384 check

backend nodejs_flashpolicy
    server server1 ipaddress:10843 maxconn 16384 check

backend node
    mode http
    timeout server 1h
    timeout connect 1s
    option httpclose
    option forwardfor
    server server1 ipaddress:3000 weight 1 maxconn 1024 check

backend socket
    balance roundrobin
    option forwardfor
    option httpclose
    timeout queue 5000
    timeout server 86400000
    timeout connect 86400000
    server server1 127.0.0.1:9000 weight 1 maxconn 1024 check
Then in client.js, I have changed the connection address to:
socket = io.connect('https://domain.com', {
  'force new connection': false,
  'reconnection delay': 500,
  'max reconnection attempts': 10,
});
Use 'https://domain.com' instead of 'http://domain.com:3000' so that the connection first passes through haproxy.
Nothing needs to change in nodeServer.js. Keep using http, and haproxy will do all the SSL forwarding for you.
var fs = require('fs'),
    express = require('express'),
    app = express(),
    http = require('http').createServer(app),
    io = require("socket.io").listen(http);

Can't access to socket.io.js on Raspberry Pi with Lighttpd [Node.JS & Socket.IO]

I'm completely new to Node.JS and Socket.IO; I only started yesterday.
I'm trying to make Node.JS and Socket.IO work on my Raspberry Pi, but it doesn't seem to. I can't access <myip>:1337/socket.io/socket.io.js.
I have followed this tutorial, so my lighttpd.conf file looks like this:
$HTTP["host"] == "<myURLtomywebsite>" {
    proxy.server = ("" => ((
        "host" => "<myIP>",
        "port" => 1337
    )))
}
My server.js looks like so:
var http = require('http');
httpServer = http.createServer(function(req, res) {
  res.end('Hello World!');
});
httpServer.listen(1337);

var io = require('socket.io').listen(httpServer);
var clients = 0;
io.sockets.on('connection', function(socket) {
  ++clients;
  socket.on('disconnect', function(data) {
    --clients;
    io.sockets.emit('disusr', clients);
  });
  io.sockets.emit('newusr', clients);
});
And I bind to the disusr and newusr events in my client.js to display the number of connected users in a div.
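(For context, a minimal sketch of what that client.js binding might look like; this is hypothetical, since the original file isn't shown, and the div id is an assumption:)

<script type="text/javascript" src="/socket.io/socket.io.js"></script>
<script>
  var socket = io.connect();
  // update the div with the connected-user count on each event
  socket.on('newusr', function (clients) {
    document.getElementById('users').textContent = clients;
  });
  socket.on('disusr', function (clients) {
    document.getElementById('users').textContent = clients;
  });
</script>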
Everything looks fine on my localhost but, in the production environment, I cannot link to my socket.io.js file on port 1337. To be honest, I'm not even sure which address to use (the URL of my website with :1337 appended? localhost? some other address I should have created?).
Any help would be much appreciated. Thanks!
I resolved my problem!
I linked socket.io.js like so : <script type="text/javascript" src="/socket.io/socket.io.js"></script>
I used HAProxy instead of Lighttpd mod_proxy as specified in this question
Here is my conf file (amend <...> per your configuration):
# this config needs haproxy-1.1.28 or haproxy-1.2.1
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    uid 99
    gid 99
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option http-use-proxy-header
    option redispatch
    option http-server-close
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend public
    bind *:80
    acl is_example hdr_end(host) -i <URL.toyourwebsite.com>
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket path_beg -i /websockets
    use_backend ws if is_websocket is_example
    default_backend www

backend ws
    balance roundrobin
    option forwardfor # This sets X-Forwarded-For
    timeout queue 5000
    timeout server 86400000
    timeout connect 86400000
    server apiserver localhost:<PORT> weight 1 maxconn 1024 check
And I made Lighttpd listen on port 8080 (otherwise HAProxy wouldn't start).
Remember, there is no need to use mod_proxy, as it is known to be incompatible with websockets. Use HAProxy instead.
