How to send WebSocketSubject messages only from client to server? - javascript

I created an observable for my WebSocket connection using WebSocketSubject from rxjs. So far, so good: the server-client communication is working. The problem is that I can't distinguish between the origins of a message in my client. I send messages by calling next() on the subject, but all subscriptions on the client receive those messages too. How can I send messages only to the server instead?
The implementation mainly stems from this article: https://medium.com/factory-mind/angular-websocket-node-31f421c753ff
My code:
socket$: WebSocketSubject<any>;

constructor() {
  this.socket$ = WebSocketSubject.create(SOCKET_URL);
  this.socket$.subscribe(
    (message) => console.log('<-- ' + message),
    (err) => console.error('Error on WebSocket:', err),
    () => console.warn('Completed!')
  );
}

send(message: SocketMessage) {
  const tmp: any = {};
  tmp.type = message.type;
  tmp.payload = message.payload;
  // This will be received by the server but also by client subscriptions = bad
  this.socket$.next(JSON.stringify(tmp));
}

I found an answer while trying to reproduce the behavior you describe:
RxJS 6's WebSocketSubject documentation states that:
Calling next does not affect subscribers of WebSocketSubject - they have no information that something was sent to the server (unless of course the server responds somehow to a message).
Thus, by using RxJS 6 instead of RxJS 5, you should no longer see the behavior you describe.
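For reference, a minimal sketch of the RxJS 6 equivalent of the service above, assuming SOCKET_URL and SocketMessage are the same as in the original code:
import { webSocket, WebSocketSubject } from 'rxjs/webSocket';

const socket$: WebSocketSubject<any> = webSocket(SOCKET_URL);

socket$.subscribe(
  (message) => console.log('<-- ', message),          // only server messages arrive here
  (err) => console.error('Error on WebSocket:', err),
  () => console.warn('Completed!')
);

function send(message: SocketMessage) {
  // In RxJS 6, next() serializes with JSON.stringify by default and,
  // per the documentation quoted above, does not re-emit to local subscribers.
  socket$.next({ type: message.type, payload: message.payload });
}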

Related

How to resolve a buffered values issue in Websocket Rxjs? Sending a message doesn't go to the server but gets stored in buffer

I am using WebSocket RxJS in my application. My connection gets established with the server, and after subscribing to it I receive all the data in an array. Now when I try to send some data back to the server, it just doesn't send; it gets stored in the buffer array of the destination object of the WebSocket observable (screenshot below). I am sharing the snippet of the code also.
import { webSocket } from 'rxjs/webSocket';

const subject = webSocket('ws://localhost:8081');

subject.subscribe({
  next: msg => console.log('message received: ' + msg),
  error: err => console.log(err),
  complete: () => console.log('complete')
});

// Upon clicking a button I send this to the server. You can see it in the screenshot.
subject.next({
  "action": "read",
  "id": 1595
});
My connection remains active, though. It doesn't get closed, but I am still facing this issue. What could be causing it? Is it something with the backend? If yes, then what could it be? Any help will be appreciated. Thank you. :)
It seems that the problem is in your WebSocket server, but to be sure, try connecting to a test server instead, such as Postman's echo server:
wss://ws.postman-echo.com/raw
Each time the client sends a message to this server, the server sends it straight back to the client.
By the way: Postman now has the ability to connect to your WebSocket server and test it.
Here you can read how to do that:
https://blog.postman.com/postman-supports-websocket-apis/
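For example, a quick sketch that points the same client code at the echo server; if this round-trips but your own server stays silent, the problem is server-side:
import { webSocket } from 'rxjs/webSocket';

const subject = webSocket('wss://ws.postman-echo.com/raw');

subject.subscribe({
  next: msg => console.log('echoed back:', msg),
  error: err => console.log(err),
  complete: () => console.log('complete')
});

// The same payload as before; the echo server should return it immediately.
subject.next({ action: 'read', id: 1595 });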

SSE/Redis - how to recover messages sent when SSE goes offline

On a website I have a very simple Live chat setup that uses SSE/Redis and pub/sub structure.
The basic setup (without going into details) is:
Client-side using EventSource
Opens SSE connection and subscribes to live events sent by SSE daemon. Sends messages to an API endpoint
connect(hash, eventListener) {
  // The original snippet interpolated `url` into itself; a separate
  // base-URL constant is assumed here.
  const sseUrl = `${SSE_BASE_URL}?client=${hash}`;
  sseSource = new EventSource(sseUrl);
  sseSource.onopen = (e) => {
    reconnectFrequencySeconds = 1;
  };
  sseSource.onerror = err => {
    this.closeSSEStream();
    this.reconnectSSEStream(hash, eventListener);
  };
  sseSource.addEventListener('messages', event => {
    const messages = JSON.parse(event.data);
    eventListener(messages);
  });
},
API endpoint
That stores message in the database and pushes it to a Redis channel.
Redis DB
That keeps and serves the messages.
Server-side SSE daemon
Subscribes client to a channel in a Redis DB and forwards messages to the subscribers using SSE stream.
const subscriber = redis.createClient();
subscriber.select(config.redisDatabase);

subscriber.on('message', function (channel, message) {
  log(connectionId, 'Redis: new msg on channel: ' + channel, message);
  let event = {
    event: 'messages',
    data: message
  };
  currentClient.connection.write(event);
});
The whole thing works pretty well, however, it is one tweak away from perfection.
During deploys we restart our workers (including the SSE daemon), and while it is offline users do not receive LIVE updates. It reconnects just fine, but messages sent during the downtime are lost (as the daemon starts listening for messages only on reconnect).
My only idea for a workaround involves an overengineered solution where "lost" messages are collected with a separate API endpoint on reconnect and displayed to the user.
Is there an out-of-the-box way to receive messages that have been stored to Redis BEFORE subscribing to a channel? E.g. "pop" unprocessed messages or something like that?
When you have reconnected, send a request asking whether there are new messages, passing the time of the last message you received;
and if there are newer messages, include them in the response to that request, to avoid a second round trip.
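A hypothetical sketch of that idea on the client side; the /api/messages endpoint and the timestamp field are assumed names, not part of the original setup:
let lastMessageTime = 0;

function connectWithCatchUp(url: string, eventListener: (msgs: any[]) => void) {
  const sseSource = new EventSource(url);

  sseSource.addEventListener('messages', (event: any) => {
    const messages = JSON.parse(event.data);
    for (const m of messages) {
      // Remember the newest message we have seen so far
      lastMessageTime = Math.max(lastMessageTime, m.timestamp);
    }
    eventListener(messages);
  });

  sseSource.onopen = async () => {
    // On every (re)connect, recover anything published while the daemon
    // (or this client) was down.
    const res = await fetch(`/api/messages?since=${lastMessageTime}`);
    eventListener(await res.json());
  };
}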

Socket.io disconnected unexpectedly

I have a Node.js service and an Angular client that use socket.io to transport messages during a long-running HTTP request.
Service:
export const socketArray: SocketIO.Socket[] = [];
export let socketMapping: {[socketId: string]: number} = {};

const socketRegister: hapi.Plugin<any> = {
  register: (server) => {
    const io: SocketIO.Server = socket(server.listener);
    // Whenever a session connects, create a socket object and add it to the socket array
    io.on("connection", (socket) => {
      console.log(`socket ${socket.id} connected`);
      logger.info(`socket ${socket.id} connected`);
      // Only put the socket object into the array once the init message is received
      socket.on("init", msg => {
        logger.info(`socket ${socket.id} initialized`);
        socketArray.push(socket);
        socketMapping[socket.id] = msg;
      });
      // Remove the socket object from the socket array on disconnect
      socket.on("disconnect", (reason) => {
        console.log(`socket ${socket.id} disconnected because: ${reason}`);
        logger.info(`socket ${socket.id} disconnected because: ${reason}`);
        for (let i = 0; i < socketArray.length; i++) {
          if (socketArray[i] === socket) {
            socketArray.splice(i, 1);
            return;
          }
        }
      });
    });
  },
  name: "socketRegister",
  version: "1.0"
};
export const socketSender = async (socketId: string, channel: string, content: SocketMessage) => {
  try {
    // Add message to db here
    // await storeMessage(socketMapping[socketId], content);
    // Find corresponding socket and send message
    logger.info(`trying sending message to ${socketId}`);
    for (let i = 0; i < socketArray.length; i++) {
      if (socketArray[i].id === socketId) {
        socketArray[i].emit(channel, JSON.stringify(content));
        logger.info(`socket ${socketId} send message to ${channel}`);
        if (content.isFinal === true) {
          // TODO: delete all messages of the process if isFinal is true
          await deleteProcess(content.processId);
        }
        return;
      }
    }
  } catch (err) {
    logger.error("Socket sender error: ", err.message);
  }
};
Client:
connectSocket() {
if (!this.socket) {
try {
this.socket = io(socketUrl);
this.socket.emit('init', 'some-data');
} catch (err) {
console.log(err);
}
} else if (this.socket.disconnected) {
this.socket.connect();
this.socket.emit('init', 'some-data');
}
this.socket.on('some-channel', (data) => {
// Do something
});
this.socket.on('disconnect', (data) => {
console.log(data);
});
}
They usually work fine but randomly produce a disconnection error. From my log file, we can see this:
2018-07-21T00:20:28.209Z[x]INFO: socket 8jBh7YC4A1btDTo_AAAN connected
2018-07-21T00:20:28.324Z[x]INFO: socket 8jBh7YC4A1btDTo_AAAN initialized
2018-07-21T00:21:48.314Z[x]INFO: socket 8jBh7YC4A1btDTo_AAAN disconnected because: ping timeout
2018-07-21T00:21:50.849Z[x]INFO: socket C6O7Vq38ygNiwGHcAAAO connected
2018-07-21T00:23:09.345Z[x]INFO: trying sending message to C6O7Vq38ygNiwGHcAAAO
And at the same time as the disconnect message, the front end also saw a disconnect event saying transport close.
From the log, we can reconstruct the workflow:
The front end started a socket connection and sent an init message to the back end. It also saved the socket.
The back end detected the connection and received the init message.
The back end put the socket into the array so that it can be used anytime, anywhere.
The first socket was disconnected unexpectedly, and another connection was established without the front end's awareness, so the front end never sent a message to initialize it.
Since the front end's saved socket never changed, it used the old socket id when making HTTP requests. As a result, the back end sent messages over the old socket, which had already been removed from the socket array.
The situation doesn't happen frequently. Does anyone know what could cause the disconnect and the unnoticed reconnect?
It really depends what "long time http request" is doing. node.js runs your Javascript as a single thread. That means it can literally only do one thing at a time. But, since many things that servers do are I/O related (read from a database, get data from a file, get data from another server, etc...) and node.js uses event-driven asynchronous I/O, it can often have many balls in the air at the same time so it appears to be working on lots of requests at once.
But, if your complex http request is CPU-intensive, using lots of CPU, then it's hogging the single Javascript thread and nothing else can get done while it is hogging the CPU. That means that all incoming HTTP or socket.io requests have to wait in a queue until the one node.js Javascript thread is free so it can grab the next event from the event queue and start to process that incoming request.
We could only really help you more specifically if we could see the code for this "very complex http request".
The usual way around CPU-hogging things in node.js is to offload CPU-intensive stuff to other processes. If it's mostly just this one piece of code that causes the problem, you can spin up several child processes (perhaps as many as the number of CPUs you have in your server) and then feed them the CPU-intensive work and leave your main node.js process free to handle incoming (non-CPU-intensive) requests with very low latency.
If you have multiple operations that might hog the CPU, then you either have to farm them all out to child processes (probably via some sort of work queue) or you can deploy clustering. The challenge with clustering is that a given socket.io connection will be to one particular server in your cluster and if it's that process that just happens to be executing a CPU-hogging operation, then all the socket.io connections assigned to that server would have bad latency. So, regular clustering is probably not so good for this type of issue. The work-queue and multiple specialized child processes to handle CPU-intensive work are probably better because those processes won't have any outside socket.io connections that they are responsible for.
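For illustration, a minimal sketch of the child-process approach; cpu-worker.js, doHeavyComputation, and the message shapes are hypothetical, and a real setup would add a work queue:
import { fork } from 'child_process';
import * as path from 'path';

// The parent process stays free to service socket.io heartbeats
// while the worker burns CPU.
const worker = fork(path.join(__dirname, 'cpu-worker.js'));

function runHeavyJob(jobData: object): Promise<unknown> {
  return new Promise((resolve, reject) => {
    // One job at a time, for simplicity
    worker.once('message', resolve);
    worker.once('error', reject);
    worker.send(jobData);
  });
}

// cpu-worker.js, in its entirety:
// process.on('message', (jobData) => {
//   const result = doHeavyComputation(jobData); // the CPU-intensive part
//   process.send(result);
// });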
Also, you should know that if you're using synchronous file I/O, that blocks the entire node.js Javascript thread. node.js cannot run any other Javascript during a synchronous file I/O operation. node.js gets its scalability and its ability to have many operations in flight at the same time from its asynchronous I/O model. If you use synchronous I/O, you completely break that and ruin scalability and responsiveness.
Synchronous file I/O belongs only in server startup code or in a single purpose script (not a server). It should never be used while processing a request in a server.
Two ways to make asynchronous file I/O a little more tolerable are by using streams or by using async/await with promisified fs methods.
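For example, a small sketch of the promisify pattern (the file path and function name are illustrative):
import * as fs from 'fs';
import { promisify } from 'util';

const readFileAsync = promisify(fs.readFile);

async function loadTemplate(filePath: string): Promise<string> {
  // Asynchronous read: the event loop stays free to handle socket.io
  // traffic while the I/O is in flight.
  return readFileAsync(filePath, 'utf8');
}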

how to send a message on a typescript websocket

I have an angular application needing to subscribe to a websocket for incoming data.
On the angular side, in a service I have
setContextChannel(channel) {
  this.contextWS = new WebSocket(channel);
  const that = this;
  this.contextWS.onmessage = function(event) {
    console.log('ws', event.data);
    that.patientID = event.data;
  };
  this.contextWS.onerror = function(event) {
    console.log('ws error', event);
  };
}
and on the mock server side, I have a TypeScript node server that creates the socket as follows:
import { Server } from "ws";

const wsServer: Server = new Server({ port: 8085 });
console.log('ws on 8085');
wsServer.on('connection', websocket => websocket.send('first pushed message'));
My question is: how do I use wsServer to send messages?
I'm not sure what you are asking about. This line is correct:
wsServer.on('connection',websocket => websocket.send('first pushed message'));
If you want to keep sending messages to all connected clients, you can either use the wsServer.clients property to send a message to each connected client, or store a reference to each connected client (the websocket variable in your code) in an array and then send messages to each one using forEach().
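A self-contained sketch of the first approach, using the same port as the question (the five-second heartbeat is just for illustration):
import { Server } from 'ws';

const wsServer: Server = new Server({ port: 8085 });

// wsServer.clients is a Set of all currently tracked sockets
function broadcast(message: string) {
  wsServer.clients.forEach(client => {
    if (client.readyState === 1) { // 1 === OPEN; skip closing/closed sockets
      client.send(message);
    }
  });
}

// Push a heartbeat to every connected client every five seconds
setInterval(() => broadcast(JSON.stringify({ now: Date.now() })), 5000);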
Take a look at this code sample: https://github.com/Farata/angular2typescript/blob/master/chapter8/http_websocket_samples/server/bids/bid-server.ts

Access grpc stream variable for long-running process in Node

I am using Node.js to connect via gRPC to a server that performs a long-running task.
The server sends a unidirectional stream to the client (the Node.js app) while the job is in progress. I need to implement a Stop button and am told that closing the gRPC stream will stop the job in progress.
This is currently my code:
let express = require('express'),
    router = express.Router(),
    grpc = require('grpc'),
    srv = grpc.load(__dirname + '/job_handler.proto').ns;

let startJob = (jobID, parameters) => srv.createJob(jobID, parameters);

router.post('/jobs', (req, res) => {
  let lengthyOperation = startJob(jobID, parameters);
  lengthyOperation.on('data', (data) => {
    console.log(`Data from lengthy operation: ${data}`);
  });
  lengthyOperation.on('end', () => {
    console.log('Lengthy operation completed');
  });
  res.setHeader('Location', `/jobs/${jobID}`);
  res.status(202).send();
});
As you can see, I send an HTTP 202 response to the client upon creating the job and it continues asynchronously in the background.
Questions:
How do I close the stream?
How do I access the lengthyOperation variable to do so?
The lengthyOperation object has a cancel method that cancels the call. So, when you want to stop the stream, just call lengthyOperation.cancel().
Note that when you do this, it will cause the call to end with an error. I would recommend adding a lengthyOperation.on('error', ...) handler to handle that error.
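To make the call reachable from another request, one option is to keep a map of in-flight calls keyed by job ID. A sketch building on the code above; the activeJobs map and the DELETE route are illustrative, not part of the original app, and a JSON body parser is assumed:
const activeJobs = new Map();

router.post('/jobs', (req, res) => {
  const { jobID, parameters } = req.body;
  const lengthyOperation = startJob(jobID, parameters);
  activeJobs.set(jobID, lengthyOperation);
  lengthyOperation.on('end', () => activeJobs.delete(jobID));
  lengthyOperation.on('error', (err) => console.log(`Call ended: ${err}`)); // cancel() surfaces here
  res.setHeader('Location', `/jobs/${jobID}`);
  res.status(202).send();
});

// The Stop button hits this endpoint
router.delete('/jobs/:jobID', (req, res) => {
  const call = activeJobs.get(req.params.jobID);
  if (call) {
    call.cancel(); // closes the stream, which stops the job in progress
    activeJobs.delete(req.params.jobID);
    res.status(204).send();
  } else {
    res.status(404).send();
  }
});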
