I have a JavaScript client which has to communicate with more than one WebSocket server. One of these servers sends small, high-frequency payloads that I can process quickly, while the other sends larger, low-frequency data that takes a long time to process:
this.hifreq = new WebSocket("ws://192.168.1.2:4646/hi");
this.hifreq.onmessage = this.onHighfreqMessage;
this.lofreq = new WebSocket("ws://192.168.1.3:4646/lo");
this.lofreq.onmessage = this.onLowfreqMessage;
I cannot find any precise documentation indicating how the threading model works. Everybody seems to be saying that the browser model is single-threaded, so there is no way I can receive two payloads and work on them simultaneously, but I can't find a single piece of concrete documentation that says so. Is that correct? And if so, is there a way to handle the messages on different threads?
I want to make the page as responsive as possible, and my current understanding is that once I start processing the large payload, I cannot update the page in the background with the high frequency data (which I can process almost instantaneously).
I am coming from a C++/Java background, so I am trying to understand what my options are here.
You can use a Web Worker to do the heavy background task. Note that JavaScript still behaves as single-threaded within each context. You have no access to the window object of the page from the worker thread; you should use postMessage and onmessage on the DedicatedWorkerGlobalScope to communicate with the main script.
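For the question above, a minimal sketch of that pattern might look like the following. The worker file name lowfreq-worker.js and the heavyParse/updatePage functions are placeholders for your own processing and UI code:
// main script: both sockets stay here, but the slow processing is pushed to a worker
var lowfreqWorker = new Worker("lowfreq-worker.js"); // hypothetical file name

this.hifreq = new WebSocket("ws://192.168.1.2:4646/hi");
this.hifreq.onmessage = this.onHighfreqMessage; // fast path stays on the main thread

this.lofreq = new WebSocket("ws://192.168.1.3:4646/lo");
this.lofreq.onmessage = function (e) {
    lowfreqWorker.postMessage(e.data); // heavy payload is handed to the worker
};

lowfreqWorker.onmessage = function (e) {
    updatePage(e.data); // hypothetical cheap UI update with the already-processed result
};

// lowfreq-worker.js
onmessage = function (e) {
    postMessage(heavyParse(e.data)); // hypothetical long-running processing, off the main thread
};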
I am writing a multi-threaded program. The main thread constantly receives network data, and the amount of data is relatively large, so sub-threads are used to process it.
The received data comes in 100-byte packets. Each time I receive a packet, I create a 100-byte SharedArrayBuffer and send it to the child thread via postMessage(). But the main thread receives data very fast, so it has to call postMessage very frequently to notify the sub-thread, which leads to high CPU usage and affects the responsiveness of the main thread.
So I was thinking: if the SharedArrayBuffer could grow dynamically, the received data could be continuously appended at its end and I would only have to notify the child thread once, and the child thread could still access all the data.
I would like to ask how to dynamically increase the length of a SharedArrayBuffer. I have tried to implement it in a chained way, storing a SharedArrayBuffer object inside another SharedArrayBuffer, but the browser does not allow this.
I would like to ask how to dynamically increase the length of SharedArrayBuffer.
From MDN web docs (emphasis mine).
"The SharedArrayBuffer object is used to represent a generic, fixed-length raw binary data buffer, similar to the ArrayBuffer object, but in a way that they can be used to create views on shared memory."
Fixed-length means you can't resize it...so it's not surprising it doesn't have a resize() method.
(Note: One thing that does cross my mind though is I believe there is a very new ability for SharedArrayBuffer to be used in WebAssembly as "linear memory" which has a grow_memory operator. I would imagine taking advantage of this would be very difficult, if it is possible at all, and likely would not be supported in many browsers if it was.)
I have tried to implement it in a chained way, storing a SharedArrayBuffer object in another SharedArrayBuffer object, but the browser does not allow this.
Nope. You can only write numbers.
It might seem that you could use a number to index into a table of SharedArrayBuffers, and link them that way. But then you have to worry about how to share that table between threads--same problem.
So no matter what you do, whichever thread makes the decision to update the shared buffering structure will need to notify the others of the update somehow. For that notification to be able to transfer SharedArrayBuffers, it will have to use postMessage to do it.
Have you considered experimenting with allocating a larger SharedArrayBuffer to start with and treating it like a circular buffer, so that the consuming thread reads out of what the producing thread is writing, in a "producer/consumer" pattern?
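A rough sketch of that circular-buffer idea follows; the sizes and file name are made up, wrap-around/overflow handling is omitted, and Atomics.wait is only legal in a worker, which is why the blocking side lives there:
// main thread (producer): one SharedArrayBuffer, posted to the worker exactly once
var PACKET_SIZE = 100;
var CAPACITY = 1024; // number of packets the ring can hold (arbitrary)
var sab = new SharedArrayBuffer(8 + CAPACITY * PACKET_SIZE);
var ctrl = new Int32Array(sab, 0, 2); // ctrl[0] = write counter, ctrl[1] = read counter
var data = new Uint8Array(sab, 8);
var worker = new Worker("consumer.js"); // hypothetical file name
worker.postMessage(sab);

function push(packet) { // packet is a 100-byte Uint8Array
    var w = Atomics.load(ctrl, 0);
    data.set(packet, (w % CAPACITY) * PACKET_SIZE);
    Atomics.store(ctrl, 0, w + 1);
    Atomics.notify(ctrl, 0); // wake the consumer if it is waiting
}

// consumer.js (worker): blocks until the write counter moves past the read counter
onmessage = function (e) {
    var PACKET_SIZE = 100, CAPACITY = 1024;
    var ctrl = new Int32Array(e.data, 0, 2);
    var data = new Uint8Array(e.data, 8);
    while (true) {
        var w = Atomics.load(ctrl, 0);
        var r = Atomics.load(ctrl, 1);
        if (r === w) { Atomics.wait(ctrl, 0, w); continue; } // sleep until notified
        var offset = (r % CAPACITY) * PACKET_SIZE;
        var packet = data.subarray(offset, offset + PACKET_SIZE);
        // ...process packet...
        Atomics.store(ctrl, 1, r + 1);
    }
};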
If you insist on implementing resizes, you might consider having some portion of the buffer hold an indicator that it is "stale" and a new one must be requested from the thread that resized it. You'll have to control that with synchronization. If you make a small sample that does this, it would probably make a good technical article...and if you have trouble with the small sample, it would be a good basis for further questions here.
There is no way to resize, only to copy into a new, larger buffer via a typed array (a sketch of that follows the snippet below).
But no RAM is actually allocated until the memory is actually touched. Under Node.js (v14.14.0) you can see how the RAM usage gradually increases as the buffer is filled, or how it is basically allocated instantly if array.fill is used:
const sharedBuffer = new SharedArrayBuffer(512 * 1024 * 1024)
const array = new Uint8Array(sharedBuffer)
// array.fill(1) // Causes ram to be allocated right away
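For completeness, the copy-via-typed-array route mentioned above looks roughly like this (a sketch; the doubling factor is arbitrary, and the new buffer still has to be postMessage'd to every thread that needs it):
function growSharedBuffer(oldSab, newByteLength) {
    var newSab = new SharedArrayBuffer(newByteLength);
    new Uint8Array(newSab).set(new Uint8Array(oldSab)); // byte-wise copy of the old contents
    return newSab;
}

var bigger = growSharedBuffer(sharedBuffer, sharedBuffer.byteLength * 2);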
I have a performance problem in JavaScript that has lately been causing crashes at work. With the objective of modernising our applications, we are looking into running them as webservers, to which our clients would connect via a browser (Chrome, Firefox, ...), with all our interfaces running as HTML+JS webpages.
To give you an overview of our performance needs, our applications run image processing from camera sources, in some cases at more than 20 fps, but in most cases around 2-3 fps max.
Basically, we have a webserver written in C++ which handles HTTP requests and provides the user with the HTML pages of the interface and the corresponding JS scripts of the application.
In order to simplify the communication between the two applications, I then open a WebSocket between the webpage and the C++ server to send formatted messages back and forth. These messages can be pretty big, up to several MB.
It all works pretty well as long as the FPS stays relatively low. When the fps increases, one of the following two things happens.
Either the C++ webserver's memory footprint increases pretty fast and it crashes when no more memory is available. After investigation, this happens when the network is saturated and the WebSocket send cache fills up. I think this is due to the TCP/IP nature of WebSockets, as the socket must wait for a message to be sent and received before sending the next one.
Or the browser crashes after a while, showing the "Aw, Snap!" screen. In that case more or less the same thing seems to happen, but this time apparently due to the garbage collection strategy: a screenshot of the memory usage while the application is running clearly shows a sawtooth pattern, which seems to indicate that garbage collection runs at intervals that get further and further apart.
I have tracked the problem down to very big messages (>100 KB) being sent at a fast rate, and the bigger the message, the sooner it happens. In order to use the messages I receive, I start a web worker, pass the Blob I received to it, the web worker uses a FileReaderSync to convert the message into an ArrayBuffer, and passes it back to the main thread. I expect this to involve quite a lot of copies under the hood, but I am not well versed enough in JS yet to be sure of that. Also, I initially did the same thing without the web worker (using FileReader), but the framerate and CPU usage were really bad...
Here is the code I call to decode the messages:
function OnDataMessage(msg)
{
    var webworkerDataMessage = new Worker('/js/EDXLib/MessageDecoderEvent.js'); // please no comments about this, it's actually a bit nicer on the CPU than reusing the same worker :-)
    webworkerDataMessage.onmessage = MessageFileReaderOnLoadComManagerCBack;
    webworkerDataMessage.onerror = ErrorHandler;
    webworkerDataMessage.postMessage(msg.data);
}

function MessageFileReaderOnLoadComManagerCBack(e)
{
    comManager.OnDataMessageReceived(e.data);
}
and the webworker code:
function DecodeMessage(msg)
{
    var retMsg = new FileReaderSync().readAsArrayBuffer(msg);
    postMessage(retMsg);
}

function receiveDecodingRequest(e)
{
    DecodeMessage(e.data);
}

addEventListener("message", receiveDecodingRequest, true);
My questions are the following:
Is there a way to make the GC not have to collect so much memory, for instance by telling some of the parts I use to reuse buffers instead of recreating them, or by keeping the GC work intervals fixed? This is something I know how to do in C++, but how about in JS?
Is there another method I should use for my big payloads? Keep in mind that the transmission should be as fast as possible.
Is there another method for reading Blob data as ArrayBuffers that would be faster than what I did?
I thank you in advance for your help/comments.
As it turns out, the memory problem was due to the new Worker(...) line and the new FileReaderSync() line in the web worker.
Removing these greatly improved the performances!
Also, it turns out that this decoding operation is not necessary at all if I receive the WebSocket data as an ArrayBuffer directly. I just need to set the binaryType attribute of the WebSocket to "arraybuffer"...
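For reference, that fix is a one-liner on the socket; the URL here is a placeholder and comManager.OnDataMessageReceived is the same handler as in the code above:
var socket = new WebSocket("ws://myserver:8080/data"); // placeholder URL
socket.binaryType = "arraybuffer"; // frames now arrive as ArrayBuffer instead of Blob
socket.onmessage = function (msg) {
    comManager.OnDataMessageReceived(msg.data); // no worker or FileReaderSync step needed
};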
So all in all, a very simple solution to a pain in the *** problem :-)
When using Server-Sent Events, should the client establish multiple connections to receive the different events it is interested in, or should there be a single connection with the client indicating what it is interested in via a separate channel? IMO the latter seems preferable, although to some it might make the client code more complex. The spec supports named events (events that relate to a particular topic), which to me suggests that a Server-Sent Events connection should be used as a single channel for all events.
The following code illustrates the first scenario, where multiple Server-Sent Event connections are initiated:
var eventSource1 = new EventSource("events/topic1");
eventSource1.addEventListener('topic1', topic1Listener, false);
var eventSource2 = new EventSource("events/topic2");
eventSource2.addEventListener('topic2', topic2Listener, false);
eventSource1 would receive "topic1" events and eventSource2 would receive "topic2" events. Whilst this is pretty straightforward, it is also pretty inefficient, with a hanging GET occurring for each topic you are interested in.
The alternative is something like the following:
var eventSource3 = new EventSource("/events?id=1234");
eventSource3.addEventListener('topic3', topic3Listener, false);
eventSource3.addEventListener('topic4', topic4Listener, false);

var subscription = new XMLHttpRequest();
subscription.open("PUT", "/events/topic3?id=1234", true);
subscription.send();
In this example a single EventSource would exist, and interest in a particular event would be specified by a separate request, with the Server-Sent Event connection and the registration being correlated by the id param. topic3Listener would receive "topic3" events and topic4Listener would not. Whilst requiring slightly more code, the benefit is that only a single connection is made, yet events can still be identified and handled differently.
There are a number of examples on the web that show the use of named events, but it seems the event names (or topics) are known in advance, so there is no need for the client to register interest with the server (example). Whilst I am yet to see an example showing multiple EventSource objects, I also haven't seen an example showing a client using a separate request to register interest in a particular topic, as I am doing above. My interpretation of the spec leads me to believe that indicating interest in a certain topic (or event name) is entirely up to the developer, and that it can be done statically, with the client knowing the names of the events it is going to receive, or dynamically, with the client alerting the server that it is interested in receiving particular events.
I would be pretty interested in hearing other people's thoughts on the topic. NB: I am usually a Java dev so please forgive my mediocre JS code.. :)
I would highly recommend, IMHO, that you have one EventSource object per SSE-providing service, and then emit the messages using different types.
Ultimately, though, it depends on how similar the message types are. For example, if you have 5 different types of messages related to users, have a user EventSource and differentiate with event types.
If you have one event type about users, and another about sandwiches, I'd say keep them in different services, and thus EventSources.
It's a good idea to think of breaking up EventSources the same way you would a restful service. If you wouldn't get two things from the same service with AJAX, you probably shouldn't get them from the same EventSource.
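To make that concrete, a single EventSource per service with named event types could look like this on the client, with the server tagging each message via the event: field (the URL and event names are only examples):
var users = new EventSource("/events/users"); // one connection for all user-related events
users.addEventListener('user-created', function (e) { console.log(JSON.parse(e.data)); }, false);
users.addEventListener('user-deleted', function (e) { console.log(JSON.parse(e.data)); }, false);

// corresponding frames on the wire:
// event: user-created
// data: {"id": 42, "name": "Alice"}
//
// event: user-deleted
// data: {"id": 17}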
In response to vague and permissive browser standard interpretation*, browser vendors have inconsistently implemented restrictions to the number of persistent connections allowed to a single domain/port. As each event receiver to an async context assumes a single persistent connection allocation for as long as that receiver is open, it is crucial that the number of the EventSource listeners be strictly limited in order to avoid exceeding the varying, vendor-specific limits. In practice this limits you to about 6 EventSource/async context pairs per application. Degradation is graceful (e.g. additional EventSource connection requests will merely wait until a slot is available), but keep in mind there must be connections available for retrieving page resources, responding to XHR, etc.
*The HTTP/1.1 specification (RFC 2616) contains language with respect to persistent connections: "… SHOULD limit the number of simultaneous connections…" The use of SHOULD means the requirement is not mandatory, so vendor compliance is variable.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.1.4
I'm experimenting with web workers, and was wondering how well they would deal with embarrassingly parallel problems. I therefore implemented Conway's Game of Life. (To have a bit more fun than doing a blur, or something. The problems would be the same in that case, however.)
At the moment I have one web worker performing iterations and posting back new ImageData for the UI thread to place in my canvas. Works nicely.
My experiment doesn't end there, however, because I have several CPUs available and would like to parallelize my application.
So, to start off simply, I split my data in two, down the middle, and make two workers, each dealing with one half. The problem is of course the split: worker A needs one column of pixels from worker B and vice versa. Now, I can clearly fix this by letting my UI thread hand that column down to the workers, but it would be much better if my workers could pass it to each other directly.
When splitting further, each worker would only have to keep track of its neighbouring workers, and the UI thread would only be responsible for updating the UI (as it should be).
My problem is, I don't see how I can achieve this worker-to-worker communication. I tried handing the neighbours to each other by way of an initialization postMessage, but that would copy my worker rather than hand down a reference, which Chrome luckily warned me about being impossible:
Uncaught Error: DATA_CLONE_ERR: DOM Exception 25
Finally I see that there's something called a SharedWorker. Is this what I should look into, or is there a way to use the Worker that would solve my problem?
You should be able to use channel messaging:
var channel = new MessageChannel();
worker1.postMessage({code:"port"}, [channel.port1]);
worker2.postMessage({code:"port"}, [channel.port2]);
Then in your worker threads:
var xWorkerPort;

onmessage = function(event) {
    if (event.data.code == "port") {
        xWorkerPort = event.ports[0];
        xWorkerPort.onmessage = function(event) { /* do stuff */ };
    }
};
There's not much documentation around, but you could try this MS summary to get started.
So I have this seriously recursive function that I would like to use in my code. The issue is that it doesn't really take advantage of dual-core machines because JS is single-threaded. I have tried using web workers but don't really know much about multicore programming. Would someone point me to some material that could explain how it is done? I googled and found this sample link, but it's not really much help without documentation! =/
I would be glad if someone could show me how this could be done without webworkers though! That would be just awesome! =)
I came across this link on WHATWG. This is really weird, because it explains how to use multicore programming with web workers etc., but executing it in my Chrome browser throws errors. The same goes for other browsers.
Uncaught ReferenceError: Worker is not defined in worker.js
UPDATE (2018-06-21): For people coming here in search of multi-core programming in JavaScript, not necessarily browser JavaScript (for that, the answer still applies as-is): Node.js now supports multi-threading behind a feature flag (--experimental-workers): release info, relevant issue.
Writing this off the top of my head, no guarantees for source code. Please go easy on me.
As far as I know, you cannot really program with threads in JavaScript. Web workers are a form of multi-programming, yet JavaScript itself is by nature single-threaded (based on an event loop).
A web worker is a separate thread of execution in the sense that it doesn't share anything with the script that started it; there is no reference to the script's global object (typically called "window" in the browser), and no reference to any of your main script's variables other than the data you send to the thread.
Think of the web worker as a little "server" that gets asked a question and provides an answer. You can only send strings to that server, and it can only parse the string and send back what it has computed.
// in the main script, one starts a worker by passing the file name of the
// script containing the worker to the constructor.
var w = new Worker("myworker.js");

// you want to react to the "message" event, if your worker wants to inform
// you of a result. The function typically gets the event as an argument.
w.addEventListener("message",
    function (evt) {
        // process evt.data, which is the message from the
        // worker thread
        alert("The answer from the worker is " + evt.data);
    });
You can then send a message (a string) to this thread using its postMessage() method:
w.postMessage("Hello, this is my message!");
A sample worker script (an "echo" server) can be:
// this is another script file, like "myworker.js"
self.addEventListener("message",
    function (evt) {
        var data = JSON.parse(evt.data);
        /* as an echo server, we send this right back */
        self.postMessage(JSON.stringify(data));
    });
Whatever you post to that thread will be decoded, re-encoded, and sent back. Of course you can do whatever processing you want in between. That worker will stay active; you can end it by calling terminate() on it in your main script (that'd be w.terminate()), or by calling self.close() in your worker.
To summarize: what you can do is you zip up your function parameters into a JSON string which gets sent using postMessage, decoded, and processed "on the other side" (in the worker). The computation result gets sent back to your "main" script.
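Applied to the recursive-function case from the question, the shape of it would be roughly the following; fib and the file name fibworker.js are just stand-ins for your own recursive function:
// main script
var w = new Worker("fibworker.js"); // hypothetical file name
w.addEventListener("message",
    function (evt) {
        var result = JSON.parse(evt.data);
        alert("fib(" + result.n + ") = " + result.value);
    });
w.postMessage(JSON.stringify({ n: 40 }));

// fibworker.js
function fib(n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); } // stand-in heavy recursion
self.addEventListener("message",
    function (evt) {
        var args = JSON.parse(evt.data);
        self.postMessage(JSON.stringify({ n: args.n, value: fib(args.n) }));
    });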
To explain why this is not easier: More interaction is not really possible, and that limitation is intentional. Because shared resources (an object visible to both the worker and the main script) would be subject to two threads interfering with them at the same time, you would need to manage access (i.e., locking) to that resource in order to prevent race conditions.
The message-passing, shared-nothing approach is not that well-known mainly because most other programming languages (C and Java for example) use threads that operate on the same address space (while others, like Erlang, for instance, don't). Consider this:
It is really hard to code a larger project with mutexes (a mutual exclusion mechanism) because of the associated deadlock/race condition complexities. This is stuff that can make grown men cry!
It is really easy in comparison to do message-passing, shared-nothing semantics. The code is isolated; you know exactly what goes into your worker and what comes out of your worker. Deadlocks and race conditions are impossible to achieve!
Just try it out; it is capable of doing interesting things, probably everything you want. Bear in mind that, as far as I know, it is still implementation-defined whether it actually takes advantage of multiple cores.
NB. I just got informed that at least some implementations will handle JSON encoding of messages for you.
So, to give an answer to your question (it's all above; tl;dr version): No, you cannot do this without web workers. But there is nothing really wrong about web workers aside from browser support, as is the case with HTML5 in general.
As far as I remember this is only possible with the new HTML5 standard. The keyword is "Web-Worker"
See also:
HTML5: JavaScript Web Workers
JavaScript Threading With HTML5 Web Workers
Web workers are the answer on the client side. For Node.js there are many approaches. The most popular is to spawn several processes with pm2 or a similar tool, or to run a single process and spawn/fork child processes yourself. You can google around these and will find a lot of samples and tactics.
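A minimal sketch of the fork-a-child-process approach in Node.js (the file names and the work done are placeholders):
// parent.js
const { fork } = require("child_process");
const child = fork("./worker.js"); // separate Node.js process, so it can run on another core
child.on("message", (msg) => console.log("result:", msg));
child.send({ numbers: [1, 2, 3] });

// worker.js
process.on("message", (msg) => {
    const result = msg.numbers.reduce((a, b) => a + b, 0); // stand-in for real work
    process.send(result);
});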
Web workers are already well supported by all browsers. https://caniuse.com/#feat=webworkers
API & samples: https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers