I am new to websockets.
In my setup I have a trivial websocket server written in Go (playground).
I make a WebSocket object, set up its onmessage callback and call its send method to test.
var w = new WebSocket("ws://localhost:12345/echo")
w.onmessage = (msg) => {
console.log(msg.data)
}
w.onopen = () => {
w.send("Hello") // this fires OK
}
What I expect to happen, based on the server code, is for the server to receive the "Hello" message and then keep sending "yahoo" to the client every 1.5s. What actually happens is that "Hello" is sent, but none of the "yahoo"s make it through. It seems that somewhere along the way WebSocket.readyState becomes 3 (CLOSED).
To clarify: the server receives and prints "Hello", and then actually fires a "yahoo" message every 1.5s, but the connection is closed by then, so the onmessage callback never fires.
Am I missing or misunderstanding anything?
EDIT: Ran across a comparison of github.com/gorilla vs. golang.org/x/net, which claims the golang.org/x/net websocket implementation does not support pong. This may confirm it.
EDIT: Package golang.org/x/net/websocket closes the websocket connection when the handler's ServeHTTP function returns. By default, a websocket connection is tied to an instance of the handler.
When the handler function returns (in your case EchoServer), the socket will be automatically closed by the http framework.
Since you start a goroutine for the loop that writes the "yahoo" response to the client, the EchoServer function will terminate (thereby closing the socket) before the goroutine has time to send a response.
The solution is to remove the spawning of the goroutine and just do the loop inside EchoServer.
Let's say I am building a social app. I want to log into multiple accounts (one per browser instance) without a user interface (all via Node), calling all the respective endpoints to log in and start chatting.
The important part is to test what happens when a user closes the tab, logs out, or leaves the group, and the websocket connection therefore closes.
If I understand you correctly, you would like a server-side event to fire whenever a client connects or disconnects, without any HTML, CSS, or any other user interface.
You can do it like this in Node. For connection you use:
Server.on("connection,function(ws){some stuff...})
The function that is called on connection gets the websocket that connected as a parameter by default. I just use an anonymous function here; you can also pass a named one, which will then get the websocket as its parameter.
For disconnection, you put a handler inside the Server.on callback to monitor when the client disconnects, like this:
Server.on("connection,function(ws){
ws.onclose = function (ws) {
some stuff...
}
})
Here too, you can replace the anonymous function with a named one.
Server in my case is equal to this:
const wsserver = require("ws").Server
const server = new wsserver({ port: someport })
But it can vary.
All you need to do apart from that is connect the client.
I do it like this but it can vary as well.
const ws = new WebSocket("ws://localhost:someport");
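Putting those pieces together, a minimal self-contained sketch (the port number and log messages are placeholders of my choosing):

const WebSocketServer = require("ws").Server
const server = new WebSocketServer({ port: 8080 })

server.on("connection", function (ws) {
    // fires whenever a client connects
    console.log("client connected")
    ws.onclose = function () {
        // fires when the client closes the tab, logs out, etc.
        console.log("client disconnected")
    }
})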
I hope this is helpful.
I'm experimenting with Node and its child_process module.
My goal is to create a server which will run on a maximum of 3 processes (1 main and optionally 2 children).
I'm aware that the code below may be incorrect, but it displays interesting results.
const app = require("express")();
const {fork} = require("child_process")
const maxChildrenRunning = 2
let childrenRunning = 0
app.get("/isprime", (req, res) => {
if (childrenRunning + 1 <= maxChildrenRunning) {
childrenRunning+=1;
console.log(childrenRunning)
const childProcess = fork('./isprime.js');
childProcess.send({"number": parseInt(req.query.number)})
childProcess.on("message", message => {
console.log(message)
res.send(message)
childrenRunning-=1;
})
}
})
function isPrime(number) {
...
}
app.listen(8000, ()=>console.log("Listening on 8000") )
I'm launching 3 requests with numbers around 5*10^9.
After 30 seconds I receive 2 responses with correct results.
The CPU stops doing hard work and goes idle.
Surprisingly, after another 1 minute 30 seconds, one process starts working on the still-pending 3rd request and finishes after another 30 seconds with the correct answer. Console log shown below:
> node index.js
Listening on 8000
1
2
{ number: 5000000029, isPrime: true, time: 32471 }
{ number: 5000000039, isPrime: true, time: 32557 }
1
{ number: 5000000063, isPrime: true, time: 32251 }
Either express checks pending requests once in a while, or my browser re-sends the actual request every so often while it is pending. Can anybody explain what is happening here and why? How can I correctly achieve my goal?
The way your server code is written, if you receive a /isprime request and two child processes are already running, your request handler for /isprime does nothing. It never sends any response. You don't pass that first if test and then nothing happens afterwards. So, that request will just sit there with the client waiting for a response. Depending upon the client, it will probably eventually time out as a dead/inactive request and the client will shut it down.
Some clients (like browsers) may assume that something just got lost in the network and may retry the request by sending it again. My guess is that this is what is happening in your case. The browser eventually times out and then resends the request. By the time it retries, there are fewer than two child processes running, so it gets processed on the retry.
You could verify that the browser is retrying automatically by going to the network tab in the Chrome debugger, watching exactly what the browser sends to your server, and observing that third request: see it time out and see whether it is the browser retrying the request.
Note, this code seems to be only partially implemented because you initially start two child processes, but you don't reuse those child processes. Once they finish and you decrement childrenRunning, your code will then start another child process. Probably what you really want to do is keep track of the two child processes you started and, when one finishes, add it to an array of "available child processes" so that when a new request comes in, you can just use an existing child process that is already started, but idle.
You also need to either queue incoming requests when all the child processes are busy or send some sort of error response to the http request. Never sending an http response to an incoming request is a poor design that just leads to great inefficiencies (connections hanging around much longer than needed that never actually accomplish anything).
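A minimal sketch of the queueing idea, assuming (as in the original setup) that isprime.js exits on its own after replying; the names taskQueue and runTask are mine, and a fuller version would reuse idle children as described above rather than forking fresh ones:

const app = require("express")();
const { fork } = require("child_process");

const maxChildrenRunning = 2;
let childrenRunning = 0;
const taskQueue = []; // requests waiting for a free slot

function runTask(req, res) {
    childrenRunning += 1;
    const childProcess = fork("./isprime.js");
    childProcess.send({ number: parseInt(req.query.number) });
    childProcess.on("message", (message) => {
        res.send(message);
        childrenRunning -= 1;
        const next = taskQueue.shift(); // serve the next queued request, if any
        if (next) runTask(next.req, next.res);
    });
}

app.get("/isprime", (req, res) => {
    if (childrenRunning < maxChildrenRunning) {
        runTask(req, res);
    } else {
        taskQueue.push({ req, res }); // queue instead of silently dropping the request
    }
});

app.listen(8000, () => console.log("Listening on 8000"));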
I built a server-sent-event endpoint with Spring WebFlux. My JavaScript app subscribes to that endpoint and receives the published events correctly. BUT when I call EventSource.close(), it seems the publisher is not informed that the client closed the connection.
Hence, the publisher still thinks that there is a subscription and keeps publishing events to it.
The event stream is potentially never-ending, so I never get a complete signal on the publisher side, which means the client has to be the one that closes the connection.
I think that this happens because the close() call can't reach the underlying flux.
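For illustration, the client side is essentially this (a sketch; the endpoint URL and the dealId variable are made up):

// subscribe to the SSE endpoint
const source = new EventSource("/instance-updates/" + dealId);
source.onmessage = (event) => console.log(event.data);

// later, when the user is done:
source.close(); // the server-side Flux is never told about this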
Do you have any solution or workaround for this problem?
@GetMapping(value = INSTANCE_UPDATE_ENDPOINT + "/{dealId}", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<ServerSentEvent<InstanceUpdatedQuery>> getInstanceUpdateEventStreamForDeal(@PathVariable final String dealId) {
    return this.instanceUpdatedApplicationEventFlux
        .filter(instanceUpdatedEvent -> instanceUpdatedEvent.getDealId().equals(dealId))
        .map(instanceUpdatedEvent -> ServerSentEvent.<InstanceUpdatedQuery>builder()
            .id(instanceUpdatedEvent.getId())
            .retry(Duration.ofSeconds(1))
            .data(new InstanceUpdatedQuery(instanceUpdatedEvent))
            .build());
}
You should add a handler that catches the stream's signals, detect the close signal, and respond to it. In Reactor, doOnCancel (or doFinally) fires when the subscription is cancelled, which is what happens when the client closes the connection.
Goal: To have a Node.js server where only one connection is active at a time.
I can temporarily remove the connection event listener on the server, or only set it up once in the first place by calling once instead of on, but then any connection that gets made while there is no connection event listener seems to get lost. From strace, I can see that Node is still accept(2)ing on the socket. Is it possible to get it to not do that, so that the kernel will instead queue up all incoming request until the server is ready to accept them again (or the backlog configured in listen(2) is exceeded)?
Example code that doesn’t work as I want it to:
#!/usr/bin/node
const net = require("net");
const server = net.createServer();
function onConnection(socket) {
socket.on("close", () => server.once("connection", onConnection));
let count = 0;
socket.on("data", (buffer) => {
count += buffer.length;
if (count >= 16) {
socket.end();
}
console.log("read " + count + " bytes total on this connection");
});
}
server.once("connection", onConnection);
server.listen(8080);
Connect to localhost, port 8080, with the agent of your choice (nc, socat, telnet, …).
Send less than 16 bytes, and witness the server logging to the terminal.
Without killing the first agent, connect a second time in another terminal. Try to send any number of bytes – the server will not log anything.
Send more bytes on the first connection, so that the total number of bytes sent there exceeds 16. The server will close this connection (and again log this to the console).
Send yet more bytes on the second connection. Nothing will happen.
I would like the second connection to block until the first one is over, and then to be handled normally. Is this possible?
.. so that the kernel will instead queue up all incoming request until the server is ready to accept them again (or the backlog configured in listen(2) is exceeded)?
...
I would like the second connection to block until the first one is over, and then to be handled normally. Is this possible?
Unfortunately, it is not possible without catching the connection events that are sent and managing the accepted connections in your application rather than relying on the OS backlog. Node calls libuv with an OnConnection callback that will try to accept all connections and make them available in the JS context.
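One application-level approach is to accept every connection, pause the ones you are not ready for, and queue them yourself. A sketch of that, reusing the byte-counting logic from the question (the pause/resume bookkeeping is my own illustration, not part of the answer):

const net = require("net");
const server = net.createServer();

const waiting = []; // sockets accepted by the OS but not yet served
let active = null;  // the single connection currently being served

function handle(socket) {
    active = socket;
    let count = 0;
    socket.on("data", (buffer) => {
        count += buffer.length;
        if (count >= 16) {
            socket.end();
        }
        console.log("read " + count + " bytes total on this connection");
    });
    socket.on("close", () => {
        active = null;
        const next = waiting.shift();
        if (next) handle(next); // serve the next queued connection
    });
    socket.resume(); // deliver any data buffered while the socket was paused
}

server.on("connection", (socket) => {
    if (active) {
        socket.pause(); // buffer incoming data instead of emitting it
        waiting.push(socket);
    } else {
        handle(socket);
    }
});

server.listen(8080);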
I am looking at node-xmpp and node-simple-xmpp and I am trying to make a simple client.
Everything works fine, except the disconnect.
I have made the following file, based on the example from simple-xmpp:
var xmpp = require('simple-xmpp');
xmpp.on('online', function() {
console.log('Yes, I\'m connected!');
xmpp.send('test2@example.com', 'Hello test');
// OK UNTIL HERE, DISCONNECT NOW
});
xmpp.connect({jid: 'test@example.com/webchat', password: 'test', reconnect: 'false'});
But I don't know how to disconnect. I tried to send a stanza with unavailable type:
stanza = new xmpp.Element('presence', {from: 'test@example.com', type: 'unavailable'});
xmpp.conn.send(stanza);
delete xmpp;
This causes the client to go temporarily offline, but the problem is that it reconnects after a few seconds and keeps sending 'presence' stanzas.
I have also tried calling xmpp.conn.end(), which also disconnects but it gives an error afterwards:
node_modules/simple-xmpp/node_modules/node-xmpp/lib/xmpp/connection.js:100
if (!this.socket.writable) {
^
TypeError: Cannot read property 'writable' of undefined
So, what am I doing wrong? I am sure there is an easy way to disconnect.
In the first case, <presence type='unavailable'/> does not always actually disconnect you; on your server it looks like it might, but your client is auto-reconnecting. delete xmpp does not actually cause your object to be cleaned up; it just removes it from the local namespace.
In the second case, send() isn't writing your stanza to the underlying socket immediately. If you close the socket with end() right afterwards, the socket is already closed by the time the write actually happens.
If you add a short timeout after you call send(), before calling end(), it will work. To make it robust, you'll want your library developers to give you a callback for when send() has actually written to the socket.
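As a stopgap, that could look like this (a sketch; the 500 ms delay is arbitrary):

var stanza = new xmpp.Element('presence', {from: 'test@example.com', type: 'unavailable'});
xmpp.conn.send(stanza);
setTimeout(function () {
    // by now the unavailable presence has (hopefully) been flushed to the socket
    xmpp.conn.end();
}, 500);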
You should also check out https://www.rfc-editor.org/rfc/rfc7395
In short, if the client used <open/> to open the connection, it should use <close/> to kill that same connection.
To be more precise:
<close xmlns="urn:ietf:params:xml:ns:xmpp-framing" />
Solution of "Joe Hildebrand" also solved my problem, but this seemed more proper way for me.