I'm working on a Google Chrome app that contains the following code:
var socket = chrome.sockets.udp;
var PORT = 5005;
var HOST = '127.0.0.1';
socket.create({}, function(sockInfo){
  socket.bind(sockInfo.socketId, HOST, PORT, function(result){
    socket.onReceive.addListener(function(info){
      // do stuff with the packet data
    });
  });
});
This mechanism worked perfectly when I was sending data over the loopback from localhost. However, when I try to send from a remote machine, the onReceive.addListener callback never gets called. At first I thought this might be a local network issue, but when I run tcpdump -vv udp port 5005 it shows the data I'm sending, so I know it's reaching my machine. This leads me to believe it's a Chrome issue: in my manifest.json file I've set universal "bind" and "send" permissions for UDP, so I don't see why this isn't working. Any thoughts?
François Beaufort's now deleted answer provided a useful suggestion, sadly in a way that was more appropriate for a comment. I'm making this a community wiki so that he does not feel robbed of reputation.
The HOST part indicates which interface you're listening on for data. Setting it to 127.0.0.1 means that you're only listening on the loopback interface, which is not accessible from outside the machine.
You could provide the explicit IP address of one of your network interfaces, but an easier solution is to say "I want to listen on all of them".
Quote from the docs:
Use "0.0.0.0" to accept packets from all local available network interfaces.
That should solve it in your case. In general, one also needs to check that the firewall is letting the requests through, which you have already done.
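For completeness, here is the snippet from the question with only the HOST changed (the result check in the bind callback is an addition for illustration):
var socket = chrome.sockets.udp;
var PORT = 5005;
var HOST = '0.0.0.0'; // all local network interfaces

socket.create({}, function(sockInfo){
  socket.bind(sockInfo.socketId, HOST, PORT, function(result){
    if (result < 0) {
      console.log('Bind failed: ' + result);
      return;
    }
    socket.onReceive.addListener(function(info){
      // info.data is an ArrayBuffer; info.remoteAddress and info.remotePort
      // identify the sender
    });
  });
});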
I am using the free TURN server provided by https://numb.viagenie.ca/. The STUN servers are also public.
I am using the following configuration:
const iceConfiguration = {
  iceServers: [
    {
      url: 'stun:stunserver.stunprotocol.org'
    },
    {
      url: 'stun:stun.sipgate.net:10000'
    },
    {
      url: 'turn:numb.viagenie.ca',
      credential: 'mypassword',
      username: 'myemail'
    }
  ]
}
I create an offer, send it to the other peer (different NAT) and then attempt to set the remote description with the answer. Upon calling myConnection.setRemoteDescription(answer), it keeps pending indefinitely and does not get resolved. Also, the remote peer can set its remote description without any issues. It all works fine for devices in the same network. So, I guess the problem lies in the relay server.
If so, should I ditch the public Numb server and opt towards using Coturn with DigitalOcean hosting or am I doing something totally wrong here?
Before setting up a brand new TURN server you can try to understand what's actually happening: if you take a trace on the computer with an application like Wireshark, and filter for stun messages, you should be able to see the browser sending Binding Request and Allocate Request methods towards the TURN server.
A missing response from the server may mean that the server is not available, the port is wrong, or a firewall prevents the browser from reaching the TURN server.
If instead the credentials are wrong, the browser will receive a 401 error in response to the Allocate Request that carries the message-integrity attribute.
You can also verify the TURN URL and credentials by running the WebRTC sample application that deals with ICE candidate gathering at https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/ .
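If you prefer checking from code rather than the sample page, a minimal sketch along these lines (using the iceConfiguration object from the question; everything else is illustrative) gathers candidates and prints them, so you can see whether a relay candidate ever appears, which means the TURN allocation succeeded:
const pc = new RTCPeerConnection(iceConfiguration);

pc.onicecandidate = (event) => {
  if (!event.candidate) {
    console.log('Candidate gathering finished.');
    return;
  }
  // Relay candidates contain "typ relay"; their presence means the TURN
  // server accepted the Allocate request.
  console.log(event.candidate.candidate);
};

// A data channel is enough to trigger gathering without audio/video.
pc.createDataChannel('probe');
pc.createOffer()
  .then((offer) => pc.setLocalDescription(offer))
  .catch((err) => console.error(err));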
It seems as though the Numb TURN servers do not actually work. No idea why. But they do show up in the WebRTC trickle ICE sample application.
I have a Node.js based server that uses a middleware which redirects the user to a CAS server to handle authentication. The CAS server responds with a ticket, and finally my Node.js server trades the ticket for a user object with the CAS and stores it in a session.
This process works perfectly fine without clustering.
Today I wanted to clusterize my Node.js server using https://nodejs.org/api/cluster.html (something I have already done before without any problem).
So instead of having:
let server = http.createServer(app);
server.listen(PORT, HOST, () => {
  // Log that the server has been launched
  logger.log(`Server running at http://${HOST}:${PORT}/`);
});
Where everything was working fine, I now have:
if (cluster.isMaster) {
  // This is the master process
  // Let's create the workers
  for (let i = 0; i < NB_WORKERS; i++) {
    cluster.fork();
  }
  // When a worker crashes, restart it
  cluster.on('exit', (worker, code, signal) => {
    logger.log("info", "A worker died (PID: %s). It was killed by code or signal %s. Restarting...", worker.process.pid, code || signal);
    cluster.fork();
  });
} else {
  // This is a worker
  // Create the server, based on http
  let server = http.createServer(app);
  server.listen(PORT, HOST, () => {
    // Log that the server has been launched
    logger.log(`Server running at http://${HOST}:${PORT}/`);
  });
}
When I launch the server, it actually starts NB_WORKERS workers, as expected. But when I try to access the app served by my Node server from my browser, I get the following error:
XMLHttpRequest cannot load https://localhost:8443/cas/login?
service=http://localhost:3000. No 'Access-Control-Allow-Origin' header is
present on the requested resource. Origin 'http://localhost:3000' is
therefore not allowed access
https://localhost:8443 is where my CAS server is running, and http://localhost:3000 is where my Node server is running.
Note that if I set NB_WORKERS to 1, everything works fine again.
I understand that setting the 'Access-Control-Allow-Origin' header in my CAS server config would probably make everything work fine, but I don't understand why it works with one worker and not with two or more.
What am I missing?
I finally managed to make it work, so I'm posting the solution here in case someone comes across a similar issue.
About Node session
As I said, my Node.js server stores data in a session. In this case, it was a simple express-session with the default MemoryStore, since I'm still in development.
When clustering, the express-session default store is NOT shared between workers. This means that requests that were supposed to be identified by the session sometimes were not, depending on which worker handled the request. This caused the authentication middleware to ask the CAS server again.
To make it work, I had to use a persistent store for my session, such as Redis.
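For illustration, a minimal sketch of such a shared store (this assumes express-session with connect-redis v6 and a node-redis v3 client; newer versions of these packages have a slightly different setup):
const express = require('express');
const session = require('express-session');
const RedisStore = require('connect-redis')(session); // connect-redis v6 style
const redis = require('redis');

const app = express();
const redisClient = redis.createClient({ host: '127.0.0.1', port: 6379 });

app.use(session({
  // Every worker talks to the same Redis instance, so the session
  // is found no matter which worker handles the request.
  store: new RedisStore({ client: redisClient }),
  secret: 'replace-me', // placeholder
  resave: false,
  saveUninitialized: false
}));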
About the CORS issue
I'm not really sure what caused this strange issue, but here is what I thought about:
Since my CAS server uses HTTPS, some handshakes are made between both servers because of TLS (Hello, Certificate exchange, Key exchange). Maybe these broke when more than one worker was involved (one worker sends the Hello, then the response is handled by another worker => this can't work).
Still, it's very strange, because this is very low-level, and almost no clustered Node app would work if this were not managed internally.
In order to make those handshakes work, I guess that each worker must be identified somehow. A likely way to do it is to assign a custom port to each worker: the process still listens on port 3000, but each worker uses its own port to communicate.
Maybe this was causing my issue, but I can't tell precisely.
So, I just had to manage my session store correctly to make it work. But please let me know if I was wrong somewhere about the CORS issue.
Is there a way to set the source port for a node js https request? I am not asking about the destination, rather the source, ie the port used to send the request.
The context is I am trying to send https requests from a specific port, rather than random ports, thus allowing for locking down iptables. Node is not running as root, thus the port is not 443.
Update:
It appears there is a bug in Node. The options localAddress and localPort do not work, at least with a TLS socket.
Update:
Found a similar question from last year. The answers were "don't do that", which seems dumb given that Node is supposed to be a generic tool: Nodejs TCP connection client port assignment
The feature appears to be undocumented, but you can achieve this by setting BOTH the localAddress and localPort parameters for the options argument in https.request.
For more information, see the code here:
https://github.com/nodejs/node/blob/b85a50b6da5bbd7e9c8902a13dfbe1a142fd786a/lib/net.js#L916
A basic example follows:
var https = require('https');

var options = {
  hostname: 'example.com',
  port: 8443,
  localAddress: '192.168.0.1',
  localPort: 8444
};

var req = https.request(options, function(res) {
  console.log(res);
});
req.end();
Unfortunately, it looks like Node does not support binding a client port. Apparently this isn't a feature that is used much, but it is possible in general. This link explains port binding fairly well: https://blog.cloudflare.com/cloudflare-now-supports-websockets/. Not sure how to get the Node.js people to consider this change.
You can use the localPort option in the http.request options to configure a source port. It seems to be available since v14.x LTS. I tested it out with Node v18.12.0 and it works. :)
https://nodejs.org/docs/latest-v14.x/api/http.html#http_http_request_url_options_callback
options.localPort <number> Local port to connect from.
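A minimal sketch of that option (example.com and the port values are placeholders; the same options object also works with https.request, as in the earlier answer):
const https = require('https');

const req = https.request({
  hostname: 'example.com',
  port: 443,
  path: '/',
  method: 'GET',
  localAddress: '0.0.0.0', // optional: pick the outgoing interface
  localPort: 8444          // source port the connection is made from
}, (res) => {
  console.log('status:', res.statusCode);
  res.resume();
});

req.on('error', (err) => console.error(err));
req.end();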
I need to share the same HTTP server between socket.io and websocket (from the 'ws' package) handlers.
Unfortunately, despite the fact that they listen on different prefixes (the first on /socket.io and the second on /websocket URLs), for some reason the websocket does not work properly when they run on the same server.
I did some debugging, and it seems that the requests are properly handled by both libraries, but in the end only socket.io works properly.
Any idea how to solve that?
The way sockets work in node.js is quite a bit different from the way normal requests work. There is no routing, so rather than listening to a url, you have to listen to all sockets. The default behavior of socket.io is to close any socket connections that it doesn't recognize. To fix this, you'll need to add the flag 'destroy upgrade': false to the options (server is an express server):
require('socket.io').listen(server, {'destroy upgrade': false, ...})
You'll also need to check the url when a client connects (in the code handling /websocket) and ignore it if it looks like it belongs to socket.io. You can find the url from the client object (passed in to the on connection handler) as client.upgradeReq.url.
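For example, a sketch of that check (assuming wss is your WebSocketServer instance; note that newer releases of 'ws' removed upgradeReq and pass the request as the second argument of the 'connection' handler instead):
wss.on('connection', function (client) {
  var url = client.upgradeReq.url;
  if (url.indexOf('/socket.io') === 0) {
    // Looks like a socket.io connection; ignore it here.
    return;
  }
  // Handle /websocket clients as usual.
  client.on('message', function (msg) {
    console.log('received: %s', msg);
  });
});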
OK, the solution is simple (unfortunately it took half a day of debugging, and now it looks simple :)).
There is an option, 'destroy upgrade', for upgrade requests coming from non-socket.io clients. Since the WebSocket server (module 'ws') uses the same upgrade requests, some of them might be meant for 'ws' rather than for 'socket.io', so this option should be disabled:
io = require('socket.io').listen(server, { log: false, 'resource':'/socket.io' });
io.disable('destroy upgrade')
Update for 2016:
io.disable('destroy upgrade');
seems not to be available anymore.
But I succeeded by assigning the websocket module ws a path (using Express):
var wss = new WebSocketServer({ server: server, path: '/ws' }); // does not interfere with socket.io
Of course the client uses the same path: ws://theserver.com/ws
I did not have to alter the socket.io side at all.
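Putting this together, a rough sketch of the path-based setup (the socket.io constructor call is written for newer socket.io releases, so treat the exact API as an assumption and adapt it to your version):
const http = require('http');
const express = require('express');
const WebSocket = require('ws');

const app = express();
const server = http.createServer(app);

// socket.io keeps its default /socket.io path.
const io = require('socket.io')(server);
io.on('connection', (socket) => {
  console.log('socket.io client connected');
});

// 'ws' only handles upgrades for /ws, so it does not interfere with socket.io.
const wss = new WebSocket.Server({ server: server, path: '/ws' });
wss.on('connection', (ws) => {
  console.log('plain websocket client connected');
});

server.listen(3000);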
Trying to get Adobe's Socket object up and running. I have this code:
var reply = "";
var conn = new Socket;
// access Adobe’s home page
if (conn.open("www.adobe.com:80"))
{
// send a HTTP GET request
conn.write ("GET /index.html HTTP/1.0\n\n");
// and read the server’s reply
reply = conn.read(999999);
conn.close();
alert(reply.toString());
}
else
{
alert(conn.error);
}
And it doesn't work. conn.error is fired, so I know the problem is with conn.open. This is the error message I get:
Tried a bunch of other sites too; nothing worked. But if I switch www.adobe.com:80 to localhost:8080, everything works as expected.
EDIT
I've definitely narrowed it down to a proxy problem. But I don't know what to do about it, whether I have to fix it in my script or talk to IT about allowing proxy connections.
Here's where I'm at with my code:
if (conn.open("proxyserver.com:port"))
{
conn.write ("CONNECT www.adobe.com:443 HTTP/1.0\n\n");
reply = conn.read(999999);
alert(reply.toString());
}
This gets me a response from the proxy, but I'm not able to do anything beyond that. I can only use port 443 (HTTPS, I think); port 80 doesn't work on any site. I think this is more of a proxy problem than a script problem. And when I do use port 443 and get a connection, I don't know how to do anything with that connection. I tried sending a GET request afterwards and it returned blank.
Make sure your firewall isn't blocking access to your proxy server - the particular port may need to be opened, too.
Or maybe the proxy is not set up to handle port 80? If port 443 is used, isn't that SSL, and do you need a certificate? It looks like your proxy only accepts SSL connections.
Maybe try with an IP address instead of a domain name: 192.150.14.12:80 is the one the Adobe PDF provides.
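If you want to try the direct-IP route, here is a sketch in the same shape as the original snippet (the IP comes from the Adobe documentation mentioned above and may be out of date; the Host header is added because the request is sent to an IP rather than a host name):
var reply = "";
var conn = new Socket();

if (conn.open("192.150.14.12:80"))
{
    // Host header is needed because we connected by IP address.
    conn.write("GET /index.html HTTP/1.0\nHost: www.adobe.com\n\n");
    reply = conn.read(999999);
    conn.close();
    alert(reply.toString());
}
else
{
    alert(conn.error);
}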