Garbage Collection in Chrome - javascript

I'm having some issues with garbage collection in Chrome. I have some AJAX code that retrieves a large number of objects (tens of thousands) from a web service and then transforms the data into various objects. Shortly after the response is received, the JS hangs for around 7 seconds while Chrome does garbage collection.
I want to delay GC until after my code finishes running. I thought saving a reference to the original JSON objects returned by the service and disposing of it later would do the trick, but it has had no effect: the GC still runs right after the AJAX response arrives. When I try to take a heap snapshot to verify this is what's causing the GC, Chrome crashes (something it's really good at doing, I might add...).
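For context, the retain-and-release approach I tried looks roughly like this (a simplified sketch; the function and variable names are illustrative, not my actual code):

```javascript
// Sketch of "keep a reference, dispose later". While retainedResponse
// points at the parsed objects they stay reachable, so the GC cannot
// reclaim them until releaseResponse() is called.
let retainedResponse = null;

function onAjaxResponse(jsonText) {
  retainedResponse = JSON.parse(jsonText);   // hold on to the raw objects
  return transform(retainedResponse);        // build the real model
}

function transform(objects) {
  return objects.map(o => ({ id: o.id }));   // placeholder transformation
}

function releaseResponse() {
  retainedResponse = null;                   // originals become collectible
}
```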
A couple related questions:
Does Chrome not use a separate thread for GC?
Is there anything I can do to delay the GC until after my code has finished running?

Related

Garbage collection takes long [How to debug what is being collected?]

I have an issue where the main JavaScript thread is being blocked due to garbage collection (see screenshot below). The use case is that I am streaming JSON from a server with oboe.js, processing some nodes, putting the nodes into a WebGL renderer, and rendering them while they are streaming. This works pretty well, except that the garbage collector drops the frame rate to 2 fps because it collects 9 MB blocks, which takes ~500 ms.
The problem is that I do not know what is being collected and how I can prevent it from being collected. So my question is twofold:
How do I either reduce the size of the blocks the garbage collector collects or postpone the moment the garbage collector runs?
How to debug what is being collected?
A snippet of what is happening:
I found the issue after all. There was a parsing process that spit out a variable that was a very complex nested object.
someModule.parse(someString, function(result){
    // result = complex deep nested object
    // process result
});
After the "process result" stage, the result variable became eligible for garbage collection, and it was collected immediately. I cached the result in a temporary variable and released it 'manually' later, which solved the problem.
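In code, the caching workaround looks roughly like this (someModule.parse is stubbed with JSON.parse here so the sketch is self-contained; the names are assumptions, not the original code):

```javascript
// Stand-in for the real streaming parser, so this sketch runs on its own.
const someModule = {
  parse: function (str, cb) { cb(JSON.parse(str)); }
};

let cachedResult = null;  // keeps the nested object alive past the callback

someModule.parse('{"a":{"b":[1,2,3]}}', function (result) {
  cachedResult = result;  // cache instead of letting it die with the callback
  // ... process result here ...
});

// Later, once processing is really finished, release it "manually" so the
// GC can reclaim it at a moment that does not stall the render loop:
function releaseCache() {
  cachedResult = null;
}
```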

Performance in Websockets blob decoding causing memory problems

I have a performance problem in JavaScript that has lately been causing crashes at work. With the objective of modernising our applications, we are looking into running them as web servers, to which our clients would connect via a browser (Chrome, Firefox, ...), with all our interfaces running as HTML+JS web pages.
To give you an overview of our performance needs, our applications run image processing from camera sources, in some cases at more than 20 fps, but in most cases around 2-3 fps max.
Basically, we have a web server written in C++ which handles HTTP requests and provides the user with the HTML pages of the interface and the corresponding JS scripts of the application.
In order to simplify the communication between the two applications, I then open a websocket between the webpage and the C++ server to send formatted messages back and forth. These messages can be pretty big, up to several MB.
It all works pretty well as long as the FPS stays relatively low. When the fps increases, one of the following two things happens.
Either the C++ webserver's memory footprint increases quickly, and it crashes when no more memory is available. After investigation, this happens when the network is saturated and the websocket cache fills up. I think this is due to websockets running over TCP/IP, as the socket must wait for a message to be sent and received before sending the next one.
Or the browser crashes after a while, showing the "Aw, Snap!" screen (see figure below). In that case more or less the same thing seems to happen, but this time apparently due to the garbage collection strategy. The other figure below shows a screenshot of the memory usage while the application is running, clearly showing a sawtooth pattern. It seems to indicate that garbage collection is doing its work at intervals that are further and further apart.
I have tracked the problem down to very big messages (>100 KB) being sent at a fast rate. And the bigger the message, the faster it happens. In order to use a message I receive, I start a web worker and pass it the blob I received; the web worker uses a FileReaderSync to convert the message to an ArrayBuffer and passes it back to the main thread. I expect this to involve quite a lot of copies under the hood, but I am not yet well versed enough in JS to be sure of that. Also, I initially did the same thing without the web worker (using FileReader), but the framerate and CPU usage were really bad...
Here is the code I call to decode the messages:
function OnDataMessage(msg)
{
    // please no comments about this, it's actually a bit nicer on the CPU
    // than reusing the same worker :-)
    var webworkerDataMessage = new Worker('/js/EDXLib/MessageDecoderEvent.js');
    webworkerDataMessage.onmessage = MessageFileReaderOnLoadComManagerCBack;
    webworkerDataMessage.onerror = ErrorHandler;
    webworkerDataMessage.postMessage(msg.data);
}

function MessageFileReaderOnLoadComManagerCBack(e)
{
    comManager.OnDataMessageReceived(e.data);
}
and the webworker code:
function DecodeMessage(msg)
{
    var retMsg = new FileReaderSync().readAsArrayBuffer(msg);
    postMessage(retMsg);
}

function receiveDecodingRequest(e)
{
    DecodeMessage(e.data);
}

addEventListener("message", receiveDecodingRequest, true);
My questions are the following:
Is there a way to make the GC not have to collect so much memory, for instance by telling some of the parts I use to reuse buffers instead of recreating them, or by keeping the GC work intervals fixed? This is something I know how to do in C++, but how do I do it in JS?
Is there another method I should use for my big payloads? Keep in mind that the transmission should be as fast as possible.
Is there another method for reading blob data as ArrayBuffers that would be faster than what I did?
I thank you in advance for your help/comments.
As it turns out, the memory problem was due to the new Worker line and the new FileReaderSync line in the web worker.
Removing these greatly improved performance!
Also, it turns out that this decoding operation is not necessary if I receive the websocket data as an ArrayBuffer in the first place. I just need to set the websocket's binaryType attribute to "arraybuffer"...
So all in all, a very simple solution to a pain in the *** problem :-)
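For anyone landing here, the one-line fix looks roughly like this (wrapped in a hypothetical helper so it can be shown self-contained; the URL and callback wiring are illustrative):

```javascript
// Configure a websocket to deliver ArrayBuffers directly, so no
// Blob -> FileReaderSync -> worker round-trip is needed at all.
function configureSocket(ws, onBuffer) {
  ws.binaryType = "arraybuffer";      // the default is "blob"
  ws.onmessage = function (msg) {
    onBuffer(msg.data);               // msg.data is already an ArrayBuffer
  };
  return ws;
}

// In the real application this would be something like:
//   configureSocket(new WebSocket("ws://myserver/data"),
//                   comManager.OnDataMessageReceived);
```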

angularjs 1.5 : How to identify what is getting leaked and fix the leak?

Tested in latest Chrome and other browsers. This page starts a timer to refresh every 60 seconds. On init() and on every refresh(), it gets data from the server and displays it in the page. We see that it leaks many MB on every refresh.
Now, how do I identify the specific objects and/or DOM nodes that are being leaked?
Once I identify the objects/nodes from #1, how do I go about fixing the leaks?
Are there any books or good tutorials that cover the above for AngularJS 1.5?
You probably already found https://developers.google.com/web/tools/chrome-devtools/memory-problems/ and http://www.dwmkerr.com/fixing-memory-leaks-in-angularjs-applications/, as there are no more detailed resources out there.
A DOM node can only be garbage collected when there are no references to it from either the page's DOM tree or JavaScript code. A node is said to be "detached" when it's removed from the DOM tree but some JavaScript still references it. Detached DOM nodes are a common cause of memory leaks.
If you're not holding a reference to the timer but creating a new timer on every refresh, that's a leak, solvable by reusing the $timeout.
Check out (Ctrl+F) "$scope is retained by a context for a closure" in the second link provided. The use case explained there is very similar to yours. Further on in the article:
We can open the function and examine it for issues. There's an $http.get which has a closure which uses $scope, but alarmingly there is an $interval registered to run every 10 seconds, which is never deregistered. The interval callback uses another $http.get, with a closure that uses $scope. This is the problem.
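The cure for that pattern is to cancel the interval on teardown. Here is a framework-free sketch with assumed names (in AngularJS 1.5 the analogues are $interval, $interval.cancel(handle) and $scope.$on('$destroy', ...)):

```javascript
// Start a polling loop and return a teardown function. Whoever creates the
// component must call stop() on teardown, otherwise the callback's closure
// keeps the component (the $scope, in Angular) reachable forever.
function startPolling(poll, intervalMs, schedule, cancel) {
  schedule = schedule || setInterval;   // injectable for testing
  cancel = cancel || clearInterval;
  const handle = schedule(poll, intervalMs);
  return function stop() {
    cancel(handle);                     // analogue of $interval.cancel(handle)
  };
}

// Angular equivalent (sketch):
//   var handle = $interval(fetchData, 10000);
//   $scope.$on('$destroy', function () { $interval.cancel(handle); });
```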
If none of the above applies, then here's the list of open issues in AngularJS with memory leak as a keyword:
https://github.com/angular/angular.js/issues?utf8=%E2%9C%93&q=is%3Aopen%20memory%20leak
I am not sure if this will help you or not (maybe you've already looked into it), but it is worth mentioning. I had a similar issue with a previous application where objects were continuously being duplicated during every AJAX request. So on page load I would be using about 50 MB of memory, but after making 10-15 AJAX calls the memory would skyrocket to >1 GB.
I was able to identify and resolve the issue using Chrome DevTools -> Memory tab. There you can record allocation profiles and take heap snapshots. So, for your situation, I might cut the timer down to 5 or 10 seconds for testing purposes, then run these profilers. You will be able to get a view of which methods are being called and at what cost.
Hope this helps.

How to free memory of previous stack frame in Javascript

I have a number of functions calling the next one in a chain, processing a rather large set of data to an equally large set of different data:
function first_step(input_data, second_step_callback)
{
    var result = ... // do some processing
    second_step_callback(result, third_step);
}

function second_step(intermediate_data, third_step_callback)
{
    var result = ... // do some processing
    third_step_callback(result);
}

function third_step(intermediate_data) { }

first_step(huge_data, second_step);
In third_step I am running out of memory (Chrome seems to kill the tab when memory usage reaches about 1.5 GB).
I think, when reaching third_step(), the input_data from first_step() is still retained, because first_step() is on the call stack, isn't it? At least when the debugger is running, I can see the data.
Obviously I don't need it anymore. In first_step() there is no code after second_step_callback(result, third_step);. Maybe if I could free that memory, my tab might survive processing a data set of this size. Can I do this?
Without seeing a lot more of what you're really doing that is using memory, it's hard for us to tell whether you're just using too much memory or whether you just need to let earlier memory get freed up.
And memory in JavaScript is not "owned" by stack frames, so the premise of the question rests on a slightly wrong assumption. Memory in JavaScript is garbage collected; it becomes eligible for GC when no live, reachable code still holds a reference to it, and it will actually be collected the next time the garbage collector runs (during JS idle time).
That said, if you have code that makes a succession of nested function calls like your question shows, you can reduce the amount of memory usage, by doing some of these things:
Clear variables that hold large data (just set them to null) that are no longer needed.
Reduce the use of intermediate variables that hold large data.
Reduce the copying of data.
Reduce string manipulations with intermediate results, because each one creates a block of memory that then has to be reclaimed.
Clear the stack by using setTimeout() to run the next step in the chain, giving the garbage collector a chance to do its thing on earlier temporary variables.
Restructure how you process or store the data to fundamentally use less memory.
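Points 1 and 5 can be combined in a sketch like this (the process() body and the injectable defer parameter are illustrative assumptions, not the asker's code):

```javascript
// Null the large input as soon as it is consumed, and hop to the next step
// via setTimeout so the earlier stack frame is gone (and the GC can run)
// before the next step starts.
function firstStep(inputData, done, defer) {
  defer = defer || setTimeout;          // injectable for testing
  const result = process(inputData);    // do some processing
  inputData = null;                     // drop this frame's reference early
  defer(function () { secondStep(result, done, defer); }, 0);
  // once this frame returns, only the scheduled closure still holds result
}

function secondStep(intermediateData, done, defer) {
  const result = process(intermediateData);
  defer(function () { done(result); }, 0);
}

function process(data) {
  return data.slice();                  // placeholder for real processing
}
```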

JavaScript WebSocket Idle time in DevTools timeline

I have an application that does the following:
WebSocket.onmessage
put bytes into a queue (Array)
on requestAnimationFrame
flush queue, rendering the received bytes using canvas/webgl
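The steps above can be sketched as follows (names are illustrative; the browser wiring is left as comments so the sketch stays self-contained). The point is that onmessage does nothing but enqueue, and all drawing happens in one place per frame:

```javascript
const queue = [];                 // bytes waiting to be drawn

function onMessage(event) {
  queue.push(event.data);         // cheap: just store the payload
}

function flush(render) {
  while (queue.length > 0) {
    render(queue.shift());        // draw everything that has arrived
  }
}

// Browser wiring (sketch):
//   ws.onmessage = onMessage;
//   (function loop() { flush(draw); requestAnimationFrame(loop); })();
```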
The application receives and plots realtime data. It has a bit of jerk/jank, and while profiling my rendering code I noticed that while the actual rendering seems to execute quickly, there are large chunks of idle time during the WebSocket.onmessage handler.
I tried shrinking the window, as mentioned in Nat Duca's post, to check whether I am "GPU bound". But even with a small window, the timeline gives pretty much the same results.
What I am suspecting now is Garbage Collection. The application reads data from the WebSocket, plots it, then discards it. So to me, it seems unsurprising that I have a saw-tooth memory profile:
So my question is now two-fold:
1) In the browser, is it even possible to avoid this memory footprint? In other languages, I know I'd be able to have a buffer that was allocated once and read from the socket straight into it. But with the WebSocket interface this doesn't seem possible; I get a newly allocated buffer of bytes that I use briefly and then no longer need.
Update: Per pherris' suggestion, I removed the WebSocket from the equation, and while I see improvement, the issue still seems to persist. See the screenshots below:
2) Is this even my main issue? Are there other things in an application like this that I can do to avoid this blocking/idle time?
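On question 1: one mitigation worth trying (an assumption on my part, not something the post confirms helps) is to set binaryType to "arraybuffer" and copy each incoming message into a single preallocated buffer, so your own code stops allocating per message; the browser still allocates the incoming buffer, but short-lived allocations are cheap for a generational GC. The pool size and function name below are illustrative:

```javascript
// One big preallocated pool, reused for every message (size it from your
// worst-case message; 4 MB here is an assumption).
const POOL_BYTES = 4 * 1024 * 1024;
const pool = new Uint8Array(POOL_BYTES);
let writeOffset = 0;

function stash(arrayBuffer) {
  const bytes = new Uint8Array(arrayBuffer);
  if (writeOffset + bytes.length > POOL_BYTES) {
    writeOffset = 0;                       // wrap; older views become stale
  }
  pool.set(bytes, writeOffset);            // copy into the reused pool
  const view = pool.subarray(writeOffset, writeOffset + bytes.length);
  writeOffset += bytes.length;
  return view;                             // valid until overwritten
}
```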
