Cannot create PeerJS Server on Port 443 - javascript

My server has an SSL certificate, and the domain works fine over HTTPS. I ran "npm install peer" and then ran this JavaScript:
const { PeerServer } = require('peer');

const peerServer = PeerServer({
  port: 9000,
  path: '/myapp'
});
It worked, but only over HTTP, and only with secure: false in the client. This URL returned the correct JSON:
http://www.example.com:9000/myapp
I then shut down the port-9000 PeerJS server and ran this JavaScript, hoping it would work with secure: true:
const fs = require('fs');
const { PeerServer } = require('peer');

const peerServer = PeerServer({
  port: 443,
  ssl: {
    key: fs.readFileSync('my.key'),
    cert: fs.readFileSync('my.crt')
  },
  path: '/myapp2'
});
This did not work. The following URL returned a blank directory, and no JSON:
https://www.example.com:443/myapp2
My server is running CentOS v. 7.8 64 bit, and it uses httpd.
Would it be easier to configure a port that is not "well-known" (i.e. not 443) for HTTPS communication with the PeerJS server, so I can set "secure: true" in the client script?
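One thing worth checking: since httpd is already bound to port 443 on this box, a second process usually cannot listen on that port at the same time, which could explain the blank response. A sketch of the non-well-known-port approach, assuming the 'peer' package and the my.key/my.crt files from the question are in place:

```javascript
const fs = require('fs');
const { PeerServer } = require('peer');

// Sketch only: serve TLS from PeerServer on a non-privileged port,
// so it cannot collide with httpd's bind on 443. The key/cert paths
// are the ones from the question.
const peerServer = PeerServer({
  port: 9000,
  ssl: {
    key: fs.readFileSync('my.key'),
    cert: fs.readFileSync('my.crt')
  },
  path: '/myapp2'
});
```

The client would then connect with something like new Peer({ host: 'www.example.com', port: 9000, path: '/myapp2', secure: true }). Another common approach is to keep PeerServer on plain HTTP and let httpd terminate TLS and reverse-proxy the /myapp2 path.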


Docker compose, JS: Issue while connecting to doc. container PHP Apache with socket.io from localhost

Windows 10, Docker Desktop + VS.
My docker-compose.yml has two images:
node.js with socket.io
php:7.4-apache with socket.io
I need to connect the socket.io client inside php-apache (the website) to the socket.io server inside node.js (which handles data from php-apache), all in one docker-compose.yml, so I can run it on a remote VM (hah, sounds like it should be possible for an ordinary mortal).
First, I connected two node.js containers within docker-compose to be sure the Docker ports and socket.io ports were set correctly (that worked).
(Server side: socket.io listens on port 90, docker 90:90, service name in docker-compose is node_server. Client side: io(http://node_server:90), docker 85:85.)
Having confirmed the two containers were linked and the ports set correctly, I started on the "php-apache - node.js" socket.io docker-compose.yml.
My php-apache container, whose page pulls in the socket.io client, is at "http://node_client:80".
When I open "localhost:80" (the docker php-apache container) in the browser, everything loads fine, but this error occurs: net::ERR_NAME_NOT_RESOLVED (polling-xhr.js:206).
The docker php-apache container is published as 80:80 in docker-compose.yml.
I can connect to Apache (it renders all the HTML written in index.php), but I get the error (ERR_NAME_NOT_RESOLVED), as if the socket.io client side (php-apache) can't make its request.
Checked with ping: both containers can reach each other (pinged node_server:90 / node_client:80 from each container's terminal).
From the php-apache container I also checked curl "http://node_server:90/socket.io/?EIO=4&transport=polling", and it returned a response.
I understand some trouble is to be expected (I connect from localhost to the docker PHP container, and the socket.io client in that container doesn't know which IP to use: 172.0.x, 192.168.x, etc.), but I have no idea how to solve it (I need to reach both index.php on Apache and socket.io).
I need to use php-apache and connect it to node.js. I confirmed socket.io works node.js-to-node.js, but with php-apache to node.js something goes wrong when connecting via localhost.
docker-compose.yml:
version: "3"
services:
  node_client:
    container_name: client
    image: img_client
    ports:
      - "80:80"
    networks:
      - test
  node_server:
    container_name: server
    image: img_server
    ports:
      - "90:90"
    networks:
      - test
networks:
  test:
    external: true
Docker.client:
FROM php:7.4-apache
COPY ./client /var/www/html
EXPOSE 80
#(in ./client -> index.php)
./client/index.php:
<script src="https://cdn.socket.io/3.1.3/socket.io.min.js" integrity="sha..." crossorigin="anonymous"></script>
<script>
  const socket = io(`http://node_server:90`, {
    //secure: true,
    //transport: ['websocket'],
    //upgrade: false,
    //rejectUnauthorized: false,
  });
  socket.on('connect', () => { console.log(socket.id) });
</script>
Docker.server:
FROM node:alpine
COPY ./server /app
WORKDIR /app
COPY package*.json ./
COPY . .
CMD [ "node", "./server.mjs" ]
EXPOSE 90
//(in ./server -> server.mjs + node_modules)
./server/server.mjs:
import { createRequire } from 'module';
const require = createRequire(import.meta.url);
const express = require('express')
const app = express();
const cors = require("cors")
//app.use(cors({}))
const server = require('http').createServer(app);
const { Server } = require("socket.io");
const io = new Server(server, {
  //rejectUnauthorized: false,
  cors: {
    origin: '*',
    //methods: ["GET", "POST"],
    //credentials: true,
  }
});
server.listen(90, () => { console.log("Server is Ready!");});
// Also tried server.listen(90, '0.0.0.0', () => ...), but I believe there would be CORS troubles; anyway I can't even pin down the problem, though socket.io worked in "node.js + node.js"
//server.listen(PORT_SERVER, '0.0.0.0', () => { console.log("Server is Ready!");});
io.on('connection', (socket) => { console.log(`Connected!`)});
Heh... I tried to understand what a proxy is and whether nginx could somehow connect the first docker container (php-apache with the CDN client) to the second docker container (node.js server), all within docker-compose.yml, but I gave up.
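For what it's worth, the ERR_NAME_NOT_RESOLVED is consistent with one detail of the setup: the name node_server only resolves inside the Docker network, while the script in index.php executes in the browser on the host, which knows nothing about Compose service names. A sketch of the client line under that assumption, using the host-published port from the compose file above:

```javascript
// index.php runs in the *browser on the host*, so the socket.io client
// must use an address the host can resolve, not the Compose service
// name. Port 90 is published as 90:90 in docker-compose.yml, so the
// server is reachable from the host at localhost:90.
const socket = io('http://localhost:90');
socket.on('connect', () => console.log(socket.id));
```

Service names such as node_server keep working for container-to-container traffic (the curl test from inside php-apache), which is why that check succeeded while the browser failed.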

Http response at 400 or 500 level

I'm a novice with gRPC. My program is written with nuxtjs and is a simple login page that receives a username and password and sends them to the server using gRPC.
Everything is fine when I submit a request with BloomRPC. But when using the browser, the request is not sent to the server.
My auth class is as follows:
// auth.js
export default class {
  constructor(vars) {
    this.tokenKey = vars.tokenKey
    this.proto = vars.proto
    this.client = new vars.proto.AuthenticationClient('http://127.0.0.1:50051', null, null)
  }

  async loginRequest(user) {
    const request = new this.proto.LoginRequest()
    request.setUsername(user.username.trim().toLowerCase())
    request.setPassword(user.password.trim())
    return await this.client.login(request, {})
  }
}
This error is shown when requesting the server from the browser, whether the server is up or not:
net ERROR_CONNECTION_REFUSED
message: 'Http response at 400 or 500 level'
...
Chrome Screenshot:
Do I have to do a specific configuration?
I just want a hint for configuring.
UPDATE:
This link says that you should use Envoy. But why do we need it? And how do I configure it?
BloomRPC screenshot:
As you can see on the right side of the image, the answer is returned correctly.
Where was the problem?
The problem is that requests do not reach the server. So it does not matter if the server is up or down.
> Short Answer:
I needed a proxy to receive requests from the browser, so I used envoy proxy. nginx receives the request from the browser and sends it to a port (for example, 5000); envoy listens on port 5000 and sends the request on to the gRPC server running on port 50051.
This is how I set up the path a gRPC connection follows.
> Long Answer:
1. Generate the html/css/js files by building the nuxt project.
I put my website files in the root/site folder.
2. envoy config:
I configured the envoy.yaml file as follows, according to the grpc-web documentation.
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 5000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: sample_cluster
                  max_grpc_timeout: 0s
              cors:
                allow_origin_string_match:
                - prefix: "*"
                allow_methods: GET, PUT, DELETE, POST, OPTIONS
                allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                max_age: "1728000"
                expose_headers: grpc-status,grpc-message
          http_filters:
          - name: envoy.filters.http.grpc_web
          - name: envoy.filters.http.cors
          - name: envoy.filters.http.router
  clusters:
  - name: sample_cluster
    connect_timeout: 0.25s
    type: logical_dns
    http2_protocol_options: {}
    lb_policy: round_robin
    hosts: [{ socket_address: { address: 0.0.0.0, port_value: 50051 }}]
This yaml file has two parts, listeners and clusters. In the first part (listeners), we specify which address and port to listen on (here, address 0.0.0.0 and port 5000); in the second part (clusters), we tell it where to send the requests (here, address 0.0.0.0 and port 50051). To use this file, we give it to Docker, so I created a Dockerfile.
# Dockerfile
FROM envoyproxy/envoy:v1.14.3
COPY ./envoy.yaml /etc/envoy/envoy.yaml
EXPOSE 5000
CMD /usr/local/bin/envoy -c /etc/envoy/envoy.yaml
To build the image and run the container, I use the following commands inside the Dockerfile's folder.
docker build -t my-grpc-container:1.0.0 .
docker run -d --net=host my-grpc-container:1.0.0
By executing the above commands, envoy is up, listening on port 5000 and forwarding requests to port 50051.
For more information about envoy read this nice blog:
The role of Envoy:
Envoy translates the HTTP/1.1 calls produced by the client into HTTP/2 calls that can be handled by those services
(gRPC uses HTTP/2 for transport).
3. nginx config:
I went to the /etc/nginx/sites-enabled/ folder, opened the default file, and set the path of my website files (see section 1) as follows:
# server block
root /root/site;
Then I told nginx that if a request URL starts with /rpc/, it should be sent to port 5000 (where envoy is listening):
# server block
location /rpc/ {
  proxy_http_version 1.1;
  proxy_pass http://127.0.0.1:5000/;
}
Then I restart nginx:
sudo systemctl restart nginx
With this done, nginx is up and sends requests that start with /rpc/ to port 5000, where envoy is ready to receive them.
In summary:
The server is running on port 50051.
nginx sends requests to port 5000.
envoy, as an intermediary between nginx and the server, receives requests on port 5000 and sends them to port 50051.
Done
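Each hop of this chain can be checked in isolation with curl (the ports and the /rpc/ prefix are the ones configured above; run these on the server itself):

```shell
# Hit envoy directly on its listener; the grpc-web route should answer,
# not refuse the connection:
curl -i http://127.0.0.1:5000/

# Hit nginx; the trailing slash in proxy_pass strips the /rpc/ prefix
# before forwarding to envoy on port 5000:
curl -i http://127.0.0.1/rpc/
```

If the first command works but the second does not, the problem is in the nginx location block rather than in envoy or the gRPC server.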
According to the Chrome screenshot, you are trying to access port 5005 in JS, but according to the BloomRPC screenshot, your service is listening on 50051.

Knex Heroku Error: self signed certificate

I keep getting this error:
Error: self signed certificate
When running this command in the terminal:
knex migrate:latest --env production
My knexfile.js
require('dotenv').config();

module.exports = {
  development: {
    client: "pg",
    connection: {
      host: "localhost",
      database: "my-movies"
    }
  },
  production: {
    client: "pg",
    connection: process.env.DATABASE_URL
  }
};
My .env file:
DATABASE_URL=<my_database_url>?ssl=true
Heroku app info:
Addons: heroku-postgresql:hobby-dev
Auto Cert Mgmt: false
Dynos:
Git URL: https://git.heroku.com/path-name.git
Owner: xxxxxxxxx#xxxx.com
Region: us
Repo Size: 0 B
Slug Size: 0 B
Stack: heroku-18
Web URL: https://my-appname.herokuapp.com/
I've tried putting an ssl: true key/value pair in the production section of the knexfile and I get the same error. I've done it this way many, many times in the past and have never had this issue. I wonder whether Heroku has changed anything, but while searching their docs I couldn't find anything.
The following config in knexfile.js worked for me.
...
production: {
  client: 'postgresql',
  connection: {
    connectionString: process.env.DATABASE_URL,
    ssl: { rejectUnauthorized: false }
  }
}
...
where DATABASE_URL is what you get by running heroku config --app yourAppName
This is due to a breaking change in pg@^8 (2020-02-25); cf. this Heroku help forum.
You can read the full pg@^8 announcement, but here is the relevant passage:
Now we will use the default ssl options to tls.connect which includes rejectUnauthorized being enabled. This means your connection attempt may fail if you are using a self-signed cert.
And it seems Heroku is using self-signed certificates somewhere.
Possible solutions:
downgrade to pg@^7
instruct pg@^8 to ignore problematic certificates with ssl: { rejectUnauthorized: false } (see the announcement linked above)
find a way to download and trust the certificate (instructions)
The ssl: { rejectUnauthorized: false } pg config isn't working for me at the moment either, but I found a temporary (maybe permanent) solution via the Heroku docs.
Set the following config var:
heroku config:set PGSSLMODE=no-verify
If you are using a config like:
...
production: {
  client: 'postgresql',
  connection: {
    connectionString: process.env.DATABASE_URL,
    ssl: { rejectUnauthorized: false }
  }
}
...
...and it still isn't working for you, make sure you don't have ?ssl=true or an sslmode parameter set in your DB connection string.
If ssl is set in your connection string, it will override the ssl part of your config, making the behavior equivalent to:
...
production: {
  client: 'postgresql',
  connection: {
    connectionString: process.env.DATABASE_URL,
    ssl: true
  }
}
...
Removing the ssl entry from your connection string will fix the problem.
What worked for me was not using just a connection string but also adding the CA from my database as an option to the connection object in knex.
production: {
  client: 'postgresql',
  connection: {
    connectionString: process.env.DATABASE_URL,
    ssl: {
      rejectUnauthorized: false,
      ca: process.env.POSTGRES_CA,
    }
  }
}

Error when browser-sync is proxied with local https server

I have:
1. A REST API server running at https://localhost:7001. This one uses a JKS keystore for configuring https.
2. A combination of gulp + browser-sync + proxy-middleware that spins up a server serving static content at https://localhost:3000. All requests to https://localhost:3000/api are proxied to https://localhost:7001/api.
However, I got this error:
Error: self signed certificate
at Error (native)
at TLSSocket.<anonymous> (_tls_wrap.js:1060:38)
at emitNone (events.js:86:13)
at TLSSocket.emit (events.js:185:7)
at TLSSocket._finishInit (_tls_wrap.js:584:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:416:38)
It seems to me that the error happens because the REST API server in (1.) and the static server in (2.) use different certificates for their HTTPS configs.
The REST API server in (1.) uses a self signed JKS keystore for https.
I believe the static server in (2.) uses a default self-sign certificate that comes with browser sync.
So the two certificates are different. This is my guess for the cause of the error.
Any idea how to fix the problem? And what is the real cause of the error if my guess is wrong?
I could not find any instructions for using a JKS keystore as the certificate for the browser-sync static server. In https://browsersync.io/docs/options there is an instruction for passing a certificate to browser-sync, but that is a different type of certificate, and there is no field for a password:
// Enable HTTPS mode with custom certificates
browserSync({
  server: "./app",
  https: {
    key: "path-to-custom.key",
    cert: "path-to-custom.crt"
  }
});
so I am still clueless.
Any suggestions are greatly appreciated.
My gulp.js file for reference:
var gulp = require('gulp');
var browserSync = require('browser-sync').create();
var url = require('url');
var proxy = require('proxy-middleware');

gulp.task('browserSync', function() {
  var proxyOptions = url.parse('https://localhost:7001/api/');
  proxyOptions.route = '/api';
  // requests to `https://localhost:3000/api/x/y/z` are proxied to `https://localhost:7001/api/x/y/z`
  browserSync.init({ // initialize a server with the given directory
    open: true,
    port: 3000,
    https: true,
    server: {
      baseDir: '.',
      middleware: [proxy(proxyOptions)]
    }
  });
});

gulp.task('watch', ['browserSync'], function() {
  gulp.watch('app/*.html', browserSync.reload);
  gulp.watch('app/**/*.html', browserSync.reload);
  gulp.watch('app/**/*.js', browserSync.reload);
});
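As a diagnostic (not a fix), Node honors the NODE_TLS_REJECT_UNAUTHORIZED environment variable, which disables certificate verification for the whole process; running the gulp task with it set to 0 can at least confirm that the self-signed upstream certificate is the cause:

```shell
# Diagnostic only, never for production: all TLS verification in this
# Node process is switched off, so the proxy will accept the REST API's
# self-signed certificate on port 7001.
NODE_TLS_REJECT_UNAUTHORIZED=0 gulp watch
```

If the error disappears with this set, a cleaner follow-up is to make only the proxied upstream trusted (or unverified), for example via a request agent with rejectUnauthorized: false, rather than disabling verification globally.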

RethinkDB: trouble with basic JavaScript example

I have the server running on port 8080, and I can see the web interface.
When I try to run this example from the command line, like this: node test.js (node version: 4.1.0), I get:
playground/rethink/node_modules/rethinkdb/node_modules/bluebird/js/main/async.js:43
fn = function () { throw arg; };
^
ReqlTimeoutError: Could not connect to localhost:8080, operation timed out.
Why?
I installed RethinkDB via Homebrew and I'm on Mavericks.
I assume you changed the port number in the demo code to 8080?
r.connect({ host: 'localhost', port: 28015 }, ...
Don't do that.
Port 8080 is reserved for HTTP administrative connections (the web UI), while client driver connections go through port 28015. Leave it at port 28015 and try again.
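A minimal connect call matching that advice, assuming the rethinkdb driver from the example is installed:

```javascript
// Client drivers talk to 28015; 8080 serves only the web UI.
const r = require('rethinkdb');

r.connect({ host: 'localhost', port: 28015 }, (err, conn) => {
  if (err) throw err;
  console.log('connected');
  conn.close();
});
```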
