I'm working on a 3D game that's built in JavaScript (on top of THREE.js, among other things). We've reached the point of profiling to get our framerate up, and seem to be memory bound. When I use the JavaScript console in Chrome (ctrl-shift-j), and use that to profile my memory usage, it shows that there are about 200 MB in heap allocations - which would be just fine. However, when I hit shift-ESC or go to chrome://memory-internals, it reports that I'm using about 1.34 GB of memory, and another 814 MB on the GPU.
To be clear, I don't seem to be leaking memory (my footprint is stable), I'm just using a lot of it, and I want to know where it's allocated so I can know where to optimize.
So... obviously, optimizing for what's on the heap isn't going to help me, since it's only about 10% of my memory usage. How can I get a handle on where all the rest of that memory is being allocated so that I can trim it down to size (besides taking shots in the dark, of course)?
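For what it's worth, THREE.js keeps its own counts of what it has handed to the GPU, which is the closest thing I've found to a breakdown of that 814 MB. A minimal sketch (assuming a single WebGLRenderer instance named renderer; the Chrome-only performance.memory check is just for comparison against the DevTools number):

```javascript
// Rough bookkeeping of what THREE.js believes is alive on the GPU.
// `renderer` is assumed to be the app's THREE.WebGLRenderer instance.
function logMemoryStats(renderer) {
  const info = renderer.info;
  console.log('geometries:', info.memory.geometries);
  console.log('textures:  ', info.memory.textures);
  console.log('programs:  ', info.programs ? info.programs.length : 0);
  console.log('draw calls:', info.render.calls);

  // Chrome-only, non-standard: the JS heap as the DevTools profiler sees it.
  if (performance.memory) {
    console.log('JS heap (MB):', (performance.memory.usedJSHeapSize / 1048576).toFixed(1));
  }
}
```

It doesn't give bytes per texture, but watching the geometry/texture counts while loading scenes at least tells you which assets dominate.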
Related
I am developing a voxel engine in JavaScript using Three.js, and I've run into some memory usage problems. The intended environment is an Electron application, which means I can launch the program with flags (--expose-gc) that enable manual garbage collection with gc().

I noticed that when running the gc() function, much less memory was freed compared to when I clicked the "collect garbage" button in the Memory section of Chrome's developer tools. For example, if the program initially used 2000 MB of memory, after gc() it would free up to 300 MB, but after clicking the aforementioned button, it would free up to 600 MB more, making it about half of what it originally was.
After some digging, I also found out about the --gc-global flag. This would've been perfect, since it behaved in the same way as a manual gc did, but it ran multiple times per second, making my program extremely laggy. I tried controlling the gc rate with --gc-interval, but it didn't affect it at all.
I also found this answer that talked about ProfilerAgent.collectGarbage();, but it always threw a reference error. Additionally, the flag it provided (--debug-devtools-frontend) wasn't recognised by Chrome. That, and the fact that I could find very little info on it outside of the answer, led me to believe that it's an obsolete feature.
I also profiled the memory usage from the developer tools, and surprisingly, it was much lower compared to the value that Task Manager showed me: it seemed to be unaffected by the leaking memory. This has made it nearly impossible to determine where the memory could be leaking from.
I should also make it clear that while Chrome does garbage-collect on its own, it does so far too infrequently, and it rarely does a full gc, meaning that memory usage is about twice what it could be.
In summary, is there a way to trigger a full garbage collection from either Chrome or nodejs/Electron? Obviously, I'm not concerned about cross-browser support, so any advice would be appreciated.
It turns out that self.gc only collects garbage in the thread/worker it was called from. Running the function in all of my workers freed as much memory as the developer tools garbage collector did.
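Roughly, the fix looks like the sketch below (the 'force-gc' message name is just a placeholder, and it assumes every worker was started with gc exposed, as described above):

```javascript
// Main thread: collect locally, then ask every worker to collect its own garbage.
// Assumes `workers` is the array of Worker instances the engine already keeps.
function collectEverywhere(workers) {
  if (typeof gc === 'function') gc();           // main/renderer thread
  for (const worker of workers) {
    worker.postMessage({ type: 'force-gc' });   // hypothetical message name
  }
}

// Inside each worker script:
self.addEventListener('message', (event) => {
  if (event.data && event.data.type === 'force-gc' && typeof self.gc === 'function') {
    self.gc();                                  // only frees garbage owned by this worker
  }
});
```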
I run a simple CRUD process using puppeteer for my website. I can see the heap memory growing gradually over time, and Chrome's internal objects such as (system), (array), (string), and (compiled code) also grow with time. In a 301 MB heap snapshot, approx. 148 MB is allocated to (array) and 59 MB to (system). Is there any way I can stop this from growing, or is this Chrome's default behaviour?
If you see unbounded memory growth (i.e. more and more usage until the tab eventually crashes with out-of-memory), then you most likely have a memory leak in your code. You can use DevTools to investigate it; if you need help, there is e.g. a detailed article at https://developers.google.com/web/tools/chrome-devtools/memory-problems/.
If you see a "sawtooth" pattern (i.e. memory usage slowly grows for a while (usually seconds or minutes), then suddenly drops, then slowly grows again, etc), then you're probably just seeing the garbage collector in action. Finding and freeing unused memory takes time, so V8 tries to cleverly balance how much of that it does. When there is lots of free memory available, V8 (within certain bounds) prioritizes executing your code over collecting garbage. When memory gets scarce or when there is idle time, it will spend more time cleaning up.
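Since the page is driven by puppeteer anyway, you can log the heap size periodically and see which of the two patterns you actually have before reaching for snapshots. A rough sketch (page.metrics() is a standard puppeteer call; the 5-second interval is arbitrary):

```javascript
// Logs the JS heap every few seconds so a rising baseline (leak) is easy to
// tell apart from a sawtooth (normal GC). `page` is the puppeteer Page object
// the CRUD process already uses.
async function watchHeap(page) {
  setInterval(async () => {
    const metrics = await page.metrics();
    const usedMb = (metrics.JSHeapUsedSize / 1048576).toFixed(1);
    const totalMb = (metrics.JSHeapTotalSize / 1048576).toFixed(1);
    console.log(`heap: ${usedMb} MB used / ${totalMb} MB total`);
  }, 5000);
}
```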
This happens over a period of roughly 90 seconds. I'm trying to isolate the cause and can't even figure out where to start, and I'm now at the point where I'm questioning whether this is even a problem: it seems more like Chrome is just good at managing memory than like we're doing something right. I'd like to decrease our JS heap size in general, but I don't know where to begin.
In summary:
Does this look like a memory leak or performance problem?
I've read articles and watched a bunch of videos about FINDING memory leaks, but I have yet to find a good example of how to isolate and solve them. Any resources (preferably from the Google team) would be super helpful.
Without knowing anything about your application it is hard to tell, but in general, 100 MB of used heap space does not necessarily indicate a memory leak. The points where the spike drops are just the JavaScript engine's garbage collector kicking in and freeing all the memory that is no longer used. For comparison: we have a simple desktop application in development that already uses 75 MB of heap space just to hold its state while idling, without doing any re-rendering.
You can also check resources like
https://auth0.com/blog/four-types-of-leaks-in-your-javascript-code-and-how-to-get-rid-of-them/
and see whether you are doing any of the things that can cause memory leaks (one common example is sketched below).
Check also:
Finding JavaScript memory leaks with Chrome
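As one concrete example of the kind of leak that article covers (a sketch, not code from your app): a timer that is never cleared keeps everything it closes over alive.

```javascript
// Classic leak from the "timers and callbacks" category: as long as the
// interval runs, `hugeBuffer` can never be collected, even after the feature
// that created it is gone.
function startPolling() {
  const hugeBuffer = new Array(1e6).fill('some state');
  const id = setInterval(() => {
    console.log('still holding', hugeBuffer.length, 'entries');
  }, 1000);
  return () => clearInterval(id); // calling this releases the closure and the buffer
}

const stop = startPolling();
// ... later, when the feature is torn down:
stop();
```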
I have recorded the performance of an Angular 4.4 app, and I think that what the Chrome dev tools show about the JS heap could be worrying, but I honestly lack experience with this subject.
I don't understand the straight drop at ~20000ms, the straight line soon after, and the other drop at ~60000ms: what are they due to? Are these behaviours normal, or do they mean that something should be fixed?
The incline means that the page was allocating memory in the JS heap. This is normal.
The drops mean that the browser freed up memory in the JS heap that was no longer needed. This is called garbage collection. That's normal, too. Nothing alarming about that.
In general, if you see that the total amount of memory is progressively increasing after each garbage collection event, then that's a warning sign that you have a memory leak. The memory leak pattern usually looks like this:
[Graph: heap usage whose baseline keeps climbing after each garbage collection]
As you can see from the graph, if you leave the page running for long enough, eventually it will use up all of the computer's memory, causing the computer to run slowly, or crash.
See Fix Memory Problems for more techniques for analyzing memory usage.
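If you want to watch for that pattern outside the Performance panel, one rough way (Chrome-only, since performance.memory is non-standard; the window size is arbitrary) is to track the low-water mark of the heap over time: the post-GC floor should stay flat, and a floor that keeps rising is the leak pattern described above.

```javascript
// Samples the JS heap once a second and reports the lowest value seen in each
// one-minute window. A steadily rising floor suggests a leak.
let floor = Infinity;

setInterval(() => {
  if (!performance.memory) return;             // Chrome-only API
  floor = Math.min(floor, performance.memory.usedJSHeapSize);
}, 1000);

setInterval(() => {
  console.log('heap floor this minute:', (floor / 1048576).toFixed(1), 'MB');
  floor = Infinity;                            // reset for the next window
}, 60000);
```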
I've tried so many profilers for node I've lost count. I've never seen a profiler that gives you this:
This image shows second-by-second usage of CPU (top and center) and memory (bottom). I can click on a single "frame" (a fraction of a second) to see exactly which functions executed in that frame and what memory was allocated and deallocated (GC'd). This is Adobe Scout for Flash/AS3.
I need to find a ghost (a memory leak :), and I've successfully used the above interface hundreds of times to eliminate unwanted allocations and debug why memory doesn't get freed when it should.
How do I find which part of my app is allocating memory on a visual timeline? I need a timeline to see specifically which part of my app is allocating memory and why. Right now everything happens so fast I can't use the "objects currently in memory" panel to do anything useful. And comparing "heap snapshots" is harder than using a timeline. Web-based or app is fine. I use Windows 7.
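A rough fallback using nothing but Node built-ins (process.memoryUsage and v8.writeHeapSnapshot; the intervals are arbitrary) is to sample memory over time and periodically dump heap snapshots that can be diffed in Chrome DevTools. It's not a real timeline, but it narrows down when the growth happens:

```javascript
// Poor man's timeline: sample memory once a second and write a heap snapshot
// every minute. Snapshots can be loaded and compared in the DevTools Memory tab.
const v8 = require('v8');

setInterval(() => {
  const { rss, heapUsed, external } = process.memoryUsage();
  console.log(
    `rss ${(rss / 1048576).toFixed(1)} MB, ` +
    `heap ${(heapUsed / 1048576).toFixed(1)} MB, ` +
    `external ${(external / 1048576).toFixed(1)} MB`
  );
}, 1000);

setInterval(() => {
  const file = v8.writeHeapSnapshot();   // requires Node >= 11.13
  console.log('wrote', file);
}, 60000);
```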
I use pm2 as a process manager, and they have a dashboard service, Keymetrics. You may have a look to see if it matches your needs. :)