I have an application that does the following:
WebSocket.onmessage:
    put bytes into a queue (Array)
on requestAnimationFrame:
    flush the queue, rendering the received bytes using canvas/webgl
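In rough outline (a simplified sketch of the above; socket and render stand for the real code):

var queue = [];

socket.onmessage = function (msg) {
    queue.push(new Uint8Array(msg.data)); // msg.data is an ArrayBuffer when binaryType is "arraybuffer"
};

function frame() {
    while (queue.length > 0) {
        render(queue.shift()); // draw the bytes with canvas/webgl
    }
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);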
The application receives and plots realtime data. It has a bit of jerk/jank, and while profiling my rendering code, I noticed that while the actual rendering seems to execute quickly, there are large chunks of idle time during the WebSocket.onmessage handler.
I tried shrinking the window, as mentioned in Nat Duca's post, to check whether I am "GPU bound". But even with a small window, the timeline gives pretty much the same results.
What I am suspecting now is Garbage Collection. The application reads data from the WebSocket, plots it, then discards it. So to me, it seems unsurprising that I have a saw-tooth memory profile:
So my question is now two-fold:
1) In the browser, is it even possible to avoid this memory footprint? In other languages, I know I'd be able to have a buffer that was allocated once and read from the socket straight into it. But with the WebSocket interface, this doesn't seem possible; I'm getting a newly allocated buffer of bytes that I use briefly and then no longer need.
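The closest I can get seems to be copying each message into a buffer allocated once, which keeps my own code from allocating but doesn't avoid the browser's per-message buffer (a sketch; the scratch size is arbitrary):

var scratch = new Uint8Array(1024 * 1024); // allocated once up front
var scratchLength = 0;

socket.onmessage = function (msg) {
    var bytes = new Uint8Array(msg.data); // the browser's per-message allocation is unavoidable
    scratch.set(bytes, 0);                // but from here on, only the preallocated buffer is touched
    scratchLength = bytes.length;
};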
Update: Per pherris' suggestion, I removed the WebSocket from the equation, and while I see an improvement, the issue still seems to persist. See the screenshots below:
2) Is this even my main issue? Are there other things in an application like this that I can do to avoid this blocking/idle time?
Related
I am writing a multi-threaded program. The main thread is constantly receiving network data, and the amount of data is relatively large, so sub-threads are used to process the data.
The received data is a 100-byte packet. Each time I receive a packet, I create a 100-byte SharedArrayBuffer and send it to the child thread via postMessage(). But the main thread receives the data very fast, so it is necessary to call postMessage frequently to notify the sub-thread, which leads to high CPU usage and affects the response speed of the main thread.
So I was thinking: if the SharedArrayBuffer could grow dynamically, the received data could be continually appended at its end, and I would only need to notify the child thread once for it to access all the data.
I would like to ask how to dynamically increase the length of SharedArrayBuffer. I have tried to implement it in a chained way, storing a SharedArrayBuffer object in another SharedArrayBuffer object, but the browser does not allow this.
I would like to ask how to dynamically increase the length of SharedArrayBuffer.
From MDN web docs (emphasis mine).
"The SharedArrayBuffer object is used to represent a generic, fixed-length raw binary data buffer, similar to the ArrayBuffer object, but in a way that they can be used to create views on shared memory."
Fixed-length means you can't resize it...so it's not surprising it doesn't have a resize() method.
(Note: One thing that does cross my mind though is I believe there is a very new ability for SharedArrayBuffer to be used in WebAssembly as "linear memory" which has a grow_memory operator. I would imagine taking advantage of this would be very difficult, if it is possible at all, and likely would not be supported in many browsers if it was.)
I have tried to implement it in a chained way, storing a SharedArrayBuffer object in another SharedArrayBuffer object, but the browser does not allow this.
Nope. You can only write numbers.
It might seem that you could use a number to index into a table of SharedArrayBuffers, and link them that way. But then you have to worry about how to share that table between threads--same problem.
So no matter what you do, whichever thread makes the decision to update the shared buffering structure will need to notify the others of the update somehow. For that notification to be able to transfer SharedArrayBuffers, it will have to use postMessage to do it.
Have you considered experimenting with allocating a larger SharedArrayBuffer to start with, and treating it like a circular buffer, so that one thread reads out of what the other is writing, in a "producer/consumer" pattern?
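To illustrate, a minimal sketch of that pattern, assuming the fixed 100-byte packets from the question; the SharedArrayBuffer is posted to the worker once up front, and overflow/wraparound handling is omitted:

// Layout: two Int32 slots (write offset, read offset), then the data area.
// No overflow protection here: a real implementation must stop the writer
// from lapping the reader, and handle offset wraparound.
const PACKET_BYTES = 100;
const DATA_BYTES = PACKET_BYTES * 10000;
const sab = new SharedArrayBuffer(8 + DATA_BYTES);
const header = new Int32Array(sab, 0, 2); // [0] = write offset, [1] = read offset
const data = new Uint8Array(sab, 8);

// Producer (main thread): copy one packet in, advance the write offset, wake the consumer.
function produce(packet) {
    const w = Atomics.load(header, 0);
    data.set(packet, w % DATA_BYTES);
    Atomics.store(header, 0, w + PACKET_BYTES);
    Atomics.notify(header, 0);
}

// Consumer (worker): block until the write offset moves past our read offset.
function consume() {
    const r = Atomics.load(header, 1);
    while (Atomics.load(header, 0) === r) {
        Atomics.wait(header, 0, r); // Atomics.wait is only allowed in workers
    }
    const start = r % DATA_BYTES;
    const packet = data.slice(start, start + PACKET_BYTES);
    Atomics.store(header, 1, r + PACKET_BYTES);
    return packet;
}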
If you insist on implementing resizes, you might consider having some portion of the buffer hold an indicator that it is "stale" and a new one must be requested from the thread that resized it. You'll have to control that with synchronization. If you make a small sample that does this, it would probably make a good technical article...and if you have trouble with the small sample, it would be a good basis for further questions here.
There is no way to resize, only copy via a typed array.
But no RAM is actually allocated until the memory is actually used. Under Node.js (v14.14.0) you can see how the RAM usage gradually increases as the buffer is filled, or how it is used basically instantly if array.fill is used.
const sharedBuffer = new SharedArrayBuffer(512 * 1024 * 1024)
const array = new Uint8Array(sharedBuffer)
// array.fill(1) // Causes RAM to be allocated right away
console.log(process.memoryUsage().rss) // Watch the resident set size grow as pages are touched
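For completeness, the "copy via a typed array" approach would look something like this (a sketch; growShared is a made-up helper, and other threads still need the new buffer sent to them via postMessage):

// Emulate a resize: allocate a bigger SharedArrayBuffer and copy the old contents over.
function growShared(oldSab, newByteLength) {
    const newSab = new SharedArrayBuffer(newByteLength)
    new Uint8Array(newSab).set(new Uint8Array(oldSab)) // copy via typed arrays
    return newSab
}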
I have a performance problem in JavaScript that has lately been causing a crash at work. With the objective of modernising our applications, we are looking into running them as webservers, to which our clients would connect via a browser (Chrome, Firefox, ...), with all our interfaces running as HTML+JS webpages.
To give you an overview of our performance needs, our applications run image processing from camera sources, in some cases at more than 20 fps, but in most cases around 2-3 fps max.
Basically, we have a webserver written in C++, which handles HTTP requests and provides the user with the HTML pages of the interface and the corresponding JS scripts of the application.
In order to simplify the communication between the two applications, I then open a WebSocket between the webpage and the C++ server to send formatted messages back and forth. These messages can be pretty big, up to several MB.
It all works pretty well as long as the FPS stays relatively low. When the FPS increases, one of the following two things happens.
Either the C++ webserver's memory footprint increases pretty fast, and it crashes when no more memory is available. After investigation, this happens when the network usage is saturated and the WebSocket cache fills up. I think this is due to the TCP/IP way WebSockets work, as the socket must wait for one message to be sent and received before sending the next one.
Or the browser crashes after a while, showing the "Aw, Snap!" screen (see figure below). In that case, more or less the same thing seems to happen, but this time due to the garbage collection strategy. The other figure below shows a screenshot of the memory usage while the application is running, clearly showing a saw-tooth pattern. It seems to indicate that garbage collection is doing its work at intervals that are further and further apart.
I have tracked the problem down to very big messages (>100 KB) being sent at a fast rate, and the bigger the message, the faster it happens. In order to use the messages I receive, I start a web worker, pass the blob I received to the web worker, the web worker uses a FileReaderSync to convert the message to an ArrayBuffer, and passes it back to the main thread. I expect this to involve quite a lot of copies under the hood, but I am not so well versed in JS yet as to be sure of this statement. Also, I initially did the same thing without the web worker (using FileReader), but the framerate and CPU usage were really bad...
Here is the code I call to decode the messages:
function OnDataMessage(msg)
{
    var webworkerDataMessage = new Worker('/js/EDXLib/MessageDecoderEvent.js'); // please no comments about this, it's actually a bit nicer on the CPU than reusing the same worker :-)
    webworkerDataMessage.onmessage = MessageFileReaderOnLoadComManagerCBack;
    webworkerDataMessage.onerror = ErrorHandler;
    webworkerDataMessage.postMessage(msg.data);
}

function MessageFileReaderOnLoadComManagerCBack(e)
{
    comManager.OnDataMessageReceived(e.data);
}
and the webworker code:
function DecodeMessage(msg)
{
    var retMsg = new FileReaderSync().readAsArrayBuffer(msg);
    postMessage(retMsg);
}

function receiveDecodingRequest(e)
{
    DecodeMessage(e.data);
}

addEventListener("message", receiveDecodingRequest, true);
My questions are the following:
Is there a way to make the GC not have to collect so much memory, for instance by telling some of the parts I use to reuse buffers instead of recreating them, or by keeping the GC work intervals fixed? This is something I know how to do in C++, but in JS?
Is there another method I should use for my big payloads? Keep in mind that the transmission should be as fast as possible.
Is there another method for reading blob data as ArrayBuffers that would be faster than what I did?
I thank you in advance for your help/comments.
As it turns out, the memory problem was due to the new WebWorker line and the new FileReaderSync line in the WebWorker.
Removing these greatly improved the performance!
Also, it turns out that this decoding operation is not necessary if I receive the WebSocket data as an ArrayBuffer directly. I just need to set the binaryType attribute of the WebSocket to "arraybuffer"...
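Something like this (assuming the socket variable and the comManager from the code above):

socket.binaryType = "arraybuffer";
socket.onmessage = function (msg) {
    comManager.OnDataMessageReceived(msg.data); // msg.data is already an ArrayBuffer, no FileReader needed
};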
So all in all, a very simple solution to a pain in the *** problem :-)
I am using the hapi.js framework with Node.js and created a proxy. By proxy, I mean that I am just maintaining sessions in Redis; other than that, I am not doing anything in the code. Maybe the only thing is that I am using setInterval to log my process.memoryUsage() every 3 minutes.
My Questions:
Why does my memory keep on increasing?
Will it go down?
Does this occur because setInterval keeps logging the process usage?
Does this occur because of the console logging of every request and response?
My Redis connection is kept open until my server crashes; does this cause it?
Do I need to use a process manager like New Relic or StrongLoop to identify this?
How long will this memory keep increasing? At some point it must stop (I want to know which point that is).
I am using Sequelize for MSSQL transactions with connection pooling. Does the pooling cause this?
P.S. I am new to Node.js.
Why does my memory keep on increasing?
You've got a memory leak.
Will it go down?
Sometimes the GC kicks in and cleans up some things (the ones that are not leaking).
Does this occur because setInterval keeps logging the process usage?
Usually not, but without seeing the code I can't say for sure.
Does this occur because of the console logging of every request and response?
Usually not, but without seeing the code I can't say for sure.
My Redis connection is kept open until my server crashes; does this cause it?
That should not be a problem.
Do I need to use a process manager like New Relic or StrongLoop to identify this?
It is one way to do it... but there are also others.
How long will this memory keep increasing? At some point it must stop (I want to know which point that is).
That depends on the server setup: how much RAM, what else is running, etc.
I am using Sequelize for MSSQL transactions with connection pooling. Does the pooling cause this?
Usually not, but without seeing the code I can't say for sure.
Maybe this post helps you find the leak:
https://www.nearform.com/blog/how-to-self-detect-a-memory-leak-in-node/
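As a first pass, logging the trend yourself can already show whether the heap really keeps growing (a sketch; the interval is arbitrary):

// Log heap usage periodically; a heapUsed value that keeps climbing even while the
// server is idle is a strong hint of a leak.
setInterval(() => {
    const { rss, heapTotal, heapUsed } = process.memoryUsage()
    const mb = (n) => (n / 1024 / 1024).toFixed(1) + ' MB'
    console.log(`rss=${mb(rss)} heapTotal=${mb(heapTotal)} heapUsed=${mb(heapUsed)}`)
}, 3 * 60 * 1000)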
No problems recording the microphone, connecting the analyser for a nice VU meter, re-sampling the massive amount of data to something we can handle (8 kHz, mono) using 'xaudio.js' from the speex.js lib, and wrapping it into an appropriate WAV envelope.
Stopping the recorder seems to be a different story, because the recording process severely lags behind the onaudioprocess callback. But even this is not a problem, as I can calculate the missing samples and wait for them to arrive before I actually store the data.
But what now? How do I stop the audio process from calling onaudioprocess? Disconnecting all nodes doesn't make a difference. How am I able to re-initialize all buffers to create a clean and fresh jump-in point for the next recording? Should I destroy the AudioContext? How would I do that? Or is it enough to 'null' the createMediaStreamSource?
What needs to be done to truly set everything up for sequential independent recordings?
Any hint is appreciated.
I'm not sure of all your code structure; personally, I'd try to hang on to the AudioContext and the input stream (from the getUserMedia callback), even if I removed the MediaStreamSourceNode.
To get rid of the ScriptProcessor, though - set the .onaudioprocess in the script processor node to null. That will stop it calling the callback - then if you disconnect it and release all references, it should be garbage collected as usual.
[edit] Oh, and the only way to delete an AudioContext is to get rid of any processing that's happening (disconnect all nodes, remove any onaudioprocess), remove any references to it, and wait for it to be garbage-collected.
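In code, the teardown described above would look roughly like this (a sketch; scriptNode and sourceNode stand for your actual node references):

scriptNode.onaudioprocess = null; // stop the callback from firing
scriptNode.disconnect();          // detach the processor from the graph
sourceNode.disconnect();          // detach the MediaStreamSource
scriptNode = null;                // drop references so both can be garbage-collected
sourceNode = null;
// Keep the AudioContext and the getUserMedia stream around for the next recording.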
I'm having some issues with garbage collection in Chrome. I have some AJAX code that retrieves a large number of objects from a web service (in the tens of thousands), and it then transforms the data into various objects. At some point shortly after the response is received, the JS hangs for around 7 seconds while Chrome does garbage collection.
I want to delay GC until after my code finishes running. I thought saving a reference to the original JSON objects returned by the service and disposing of them later would do the trick, but it hasn't had any effect; the GC still occurs right after the AJAX response arrives. When I try to take a heap snapshot to verify this is what's causing the GC, Chrome crashes (something it's really good at doing, I might add...).
A couple related questions:
Does Chrome not use a separate thread for GC?
Is there anything I can do to delay the GC until after my code has finished running?