I'm trying to measure the memory usage of my site with the memory section of the Timeline tab in Chrome developer tools.
At various points I hit the garbage-can button to force a garbage collection. The problem is that the graph suddenly goes flat and stops all measurements. Eventually, after I start doing other things, it starts measuring again, but I never see the exact spot/value on the graph where I hit the GC button.
The first two downward slopes start immediately after I hit the garbage collect button, and they later just sort of connect to a new current value after I've been working.
Question is:
Is there a way to force this graph to keep measuring, or to start measuring again? Alternatively, is there a simple way in JavaScript to console.log the current memory usage value?
As a related question, is there a way to point to a spot on the graph and see the exact memory usage at that point?
Timeline records events that happen on the renderer side. Each event record also has a "memory usage" field, and Timeline uses these numbers for the memory graph. So if there are no events in a time interval, the memory graph shows nothing.
On the other hand, if the renderer does nothing, the memory usage doesn't change either.
If you are absolutely sure that you need the memory data, you can set up a timer that does nothing.
For example, you can execute setInterval(function() {}, 1000); in the console.
In that case Timeline will receive timer events with memory usage data and will draw the memory graph.
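Building on that, here is a minimal sketch that keeps the Timeline sampling and also logs the current heap size. Note that performance.memory is a non-standard, Chrome-only API, so guard for its absence:

setInterval(function () {
  // The timer keeps renderer events flowing so the memory graph updates.
  if (performance.memory) {
    // Non-standard Chrome API; values may be coarse-grained.
    console.log('used JS heap:', performance.memory.usedJSHeapSize, 'bytes');
  }
}, 1000);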
Related
I have a simple single-page app that scrolls a bunch of photos indefinitely, meant to be run on displays for days at a time.
Because of the large number of scrolling pics, the memory use in Chrome keeps growing. I want a way to programmatically reduce the memory consumption at intervals (every few hours).
When I stop the animation programmatically, the memory footprint still doesn't go down. Even when I reload the page with location.reload(), it doesn't go down.
Is there a way to do this? How can I "clear everything" programmatically, with the same effect as closing the tab?
FYI, in Firefox there isn't a memory issue. Just in Chrome. The code is super simple, uses requestAnimationFrame to animate two divs across the screen constantly. I'm not accumulating references anywhere. My question isn't about my code specifically, but rather about general ways to reset the tab's memory if it can be done.
Please use the Chrome or Firefox memory debugger to find out where the memory leak is. Then, once you have found it, think about how you can clean up those objects.
Reasonable causes of memory leaks are:
You are loading big images; you need to resize them on the server, or simply draw them to smaller canvases.
You have too many DOM elements (for example, more than 10,000).
You have some JS objects that keep growing and that you never clean up.
In the task manager you will see that memory usage is very high. In Firefox you can see what is going on with memory by typing about:memory into the address bar and pressing the "Measure" button.
You can use location.reload(true); this clears the memory (caches etc.). Moreover, if you want to clear the browser's storage further, you can use localStorage.clear() in your JavaScript code. This clears the items stored in localStorage, if you have saved something like localStorage.myItem = "something".
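For the display-kiosk use case above, a hypothetical scheduled reset could look like the sketch below. Note that the boolean argument to location.reload() is deprecated and ignored by modern browsers, so this is a best-effort reset, not a guaranteed one:

var RESET_INTERVAL_MS = 3 * 60 * 60 * 1000; // assumed interval: every three hours
setInterval(function () {
  localStorage.clear(); // drop anything saved like localStorage.myItem = "something"
  location.reload(true); // the "force" flag is deprecated; most browsers ignore it
}, RESET_INTERVAL_MS);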
I am developing a web application, in which a div is created on every mousemove event.
When profiling with the Chrome Developer Tools Timeline, I see an increase in the Node count (green line), but a really small number of detached DOM trees, when moving the mouse.
When the mouse is not being moved, the Node count is steady and never increases or decreases.
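For context, a hypothetical reduction of the setup described above might look like the following (the question does not include the actual code):

document.addEventListener('mousemove', function (e) {
  // Hypothetical: create a small div at the cursor position on every mousemove...
  var dot = document.createElement('div');
  dot.style.position = 'absolute';
  dot.style.left = e.clientX + 'px';
  dot.style.top = e.clientY + 'px';
  document.body.appendChild(dot);
  // ...and detach it shortly afterwards, so the node becomes collectable.
  setTimeout(function () {
    document.body.removeChild(dot);
  }, 500);
});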
I would like to know:
How exactly does the Node count (green line) work? Does it report memory information cumulatively with respect to the start of the recording?
I suspected a DOM memory leak, but taking a heap snapshot I see few detached DOM trees. What could be the cause of a steady increase in the Node count?
Does the Node count affect the total memory of the JS application?
What is the difference between "Document DOM tree / xxx entries" and "Object Count"?
EDIT:
After some research, I suspect that the Node count going up does not necessarily represent a memory leak in this case (also, running Chrome's Task Manager, I see the JS memory stable and not continuously increasing).
It most probably reflects internal memory usage by the browser: when I do not move the mouse for 30 seconds, or open another tab/window, the garbage collector kicks in and the memory is cleared, as in the next image.
By the way any expert advice on this is very very welcome :)
Interesting:
Javascript memory and leak problems
https://developers.google.com/web/tools/chrome-devtools/profile/memory-problems/memory-diagnosis
I want to know if this is normal in a single page app
If you go to: http://todomvc.com/examples/backbone/ and take some heap snapshots while adding and removing todos -- even if I remove all previously added todos, the memory in the heap snapshot increases every time.
Is this normal?
Should this go back to the initial value if I remove all todos?
Thank you
Should this go back to the initial value if I remove all todos?
Yes and no.
It should go back to its initial value (or close), but that won't happen until a garbage collection is actually triggered, which it doesn't look like has happened in your case. You can trigger one manually in the "Timeline" tab by clicking the little trashcan icon.
Do that while recording a timeline (and check the memory checkbox) to see the heap usage drop down again.
You'll notice that the number of nodes doesn't drop all the way down to where it was when the page initially loaded, and it keeps rising if you keep adding/removing todos and triggering garbage collection several times. That can be an indication of a small leak and could be something to investigate further.
It can be a leak, or it can be that the application just caches some data structures. Heap usage may also grow as V8 produces more code, e.g. as a result of function optimizations. Such code may survive several GC phases before it gets collected, even if the function is not called any more.
This is complicated, and in general you shouldn't worry about it, as the VM should take care of it. Just keep in mind that the VM may allocate some internal data structures for its own needs; these are usually collected over time. To see whether the JavaScript heap growth is valid, you can record a heap allocations timeline ("Profiles" panel > "Record Heap Allocations"). This will allow you to filter objects by allocation time and then decide whether those objects should have survived or not.
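As a hypothetical illustration of "caching, not leaking": with a memo cache like the one below, the heap grows as more distinct inputs are seen, yet every retained object is still intentionally reachable:

var cache = {};
function compile(source) {
  if (!cache[source]) {
    // Stand-in for an expensive step whose result is deliberately kept forever.
    cache[source] = { source: source, tokens: source.split(/\s+/) };
  }
  return cache[source];
}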
I simply want to restart a canvas on a website. I do not want to reload the page, so the restart has to be done dynamically. Problem: after a few restarts the animation gets slower and the memory usage increases.
Actually I face a more complex situation: I use a JavaScript framework to load pages, MeteorJS with IronRouter. Changing the URL only changes the DOM (nodes are removed), but the 'page' is not really left. So I thought that coming back to my page with the canvas can be treated as a dynamic canvas restart, since I have no solution for that either.
That is why I describe the problem from two points of view (the same problem in both):
first, a simple page where a button asks the canvas to restart;
then, applying the solution to MeteorJS with IronRouter and trying to come back to my page many times.
Note: my canvas content is some WebGL (in ThreeJS).
1. Deleting and re-initializing canvas dynamically.
In this case I face these two problems:
animation not smooth
memory increase. To be precise:
[Memory graph screenshot: 1. Page loaded, playing with canvas / 2. Restarting the canvas 30-40 times (I did not stop restarting, so I do not know why the increase is not linear) / 3. Playing with canvas / 4. Closing the tab (reloading the page only frees a small fraction of what has been mobilized)]
A few questions on SO have already asked how to solve these issues, but reproducing the solutions only partially solves the problem. I have read two main things:
the canvas-related parts (nodes, variables, functions) have to be cleaned from the DOM (that is something Meteor does, but in this first part it stays relevant, and yet it is not enough to prevent the issues);
if requestAnimationFrame has been called, its return value needs to be assigned to a variable so that cancelAnimationFrame(variable) can stop it, as sketched below. That indeed seems to prevent the frame-rate drop: while memory stays high, at least the movements remain fluent.
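A minimal sketch of that second point, assuming a render() function that draws one frame:

var rafId = null; // keep the handle so the loop can actually be cancelled
function loop() {
  render(); // assumed: draws one frame of the animation
  rafId = requestAnimationFrame(loop);
}
function start() {
  if (rafId === null) rafId = requestAnimationFrame(loop);
}
function stop() {
  if (rafId !== null) {
    cancelAnimationFrame(rafId);
    rafId = null;
  }
}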
With those solutions the memory problem still happens.
Here is an example: http://codepen.io/Astrak/pen/RNYLxd
2. In MeteorJS: I cannot get this partial solution to work.
Here is what I wrote in my app, with id = requestAnimationFrame(myAnimation) declared globally:
Template.myTemplate.destroyed = function () {
  cancelAnimationFrame(id);
  document.body.removeChild(document.querySelector('canvas')); // useful in Meteor?
};
Despite those instructions, my animation slows down much more quickly with MeteorJS than in the previous example: it becomes unusable after 3-4 page changes. And the same memory problem occurs.
Thanks for every comment and answer and happy coding :)
Related questions :
Performance drop on multiple restart of scene from threejs
Performance drops when trying to reset whole scene with Three.js
How do we handle webgl context lost event in Three.js
Pause, resume and restart Canvas animations with JS
How can I restart my function?
JavaScript restart canvas script
For the memory problem, I'd recommend Chrome's JavaScript profiler - you can compare snapshots taken before and after the canvas is reloaded and see which objects are not freed from memory.
The problem usually comes from event handlers and/or closures. If you, for example, hook a function to the window object's onresize event, all objects referenced in that function will stay in memory for as long as the window exists. So always remove event handlers from global objects if you want to refresh page content without actually refreshing the browser session.
It is similar with closures. They create a link between a function's scope and the variables it references, preventing the GC from collecting them. Check this: https://www.meteor.com/blog/2013/08/13/an-interesting-kind-of-javascript-memory-leak
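For the Three.js case specifically, here is a hedged sketch of an explicit teardown. The dispose() methods on the renderer, geometries and materials exist in recent Three.js releases; rafId, onResize, renderer, scene and camera are assumed to have been created during initialization:

function teardown() {
  cancelAnimationFrame(rafId); // stop the render loop
  window.removeEventListener('resize', onResize); // drop references held by global handlers
  scene.traverse(function (obj) {
    if (obj.geometry) obj.geometry.dispose(); // free GPU-side buffers
    if (obj.material) obj.material.dispose();
  });
  renderer.dispose(); // release WebGL resources held by the renderer
  if (renderer.domElement.parentNode) {
    renderer.domElement.parentNode.removeChild(renderer.domElement);
  }
  renderer = scene = camera = null; // let the GC reclaim the JS-side objects
}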
I've been updating a Node.JS FFI to SDL to use SDL2. (https://github.com/Freezerburn/node-sdl/tree/sdl2) And so far, it's been going well and I can successfully render 1600+ colored textures without too much issue. However, I just started running into an issue that I cannot seem to figure out, and does not seem to have anything to do with the FFI, GC, speed of Javascript, etc.
The problem is that when I call SDL_RenderPresent with VSYNC enabled, occasionally, every few seconds, this call will take 20-30 or more milliseconds to complete. And it looks like this is happening 2-3 times in a row. This causes a very brief, but noticeable, visual hitch in whatever is moving on the screen. The rest of the time, this call will take the normal amount of time to display whatever was drawn to the screen at the correct time to be synced up with the screen, and everything looks very smooth.
You can see this in action if you clone the repository mentioned above. Build it with node-gyp, then just run test.js. (I can embed the test code into StackOverflow, but I figured it would be easier to just have the full example on GitHub) Requires SDL2, SDL2_ttf, SDL2_image to be in /Library/Frameworks. (this is still in development, so there's nothing fancy put together for finding SDL2 automatically, or having the required code in the repository, or pulled from somewhere, etc.)
EDIT: This should likely go under the gamedev StackExchange site. I don't know if it can be moved/linked or not.
Doing some more research online, I've discovered what the "problem" was. This was something I'd (somehow) never really encountered before, so I thought it was some obvious problem where I was not using SDL correctly.
Turns out, graphics being "jittery" is a problem every game can and does face, and there are common ways to get around it. Basically, the problem is that a CPU cannot run every process/thread in the OS completely in parallel. Sometimes a process has to be paused in order to run something else. When this happens during a frame update, it can cause that frame to take up to twice as long as normal to actually be pushed to the screen. This is where the jitter comes from. It became most obvious that this was the problem after reading a Unity question about a similar jitter, where a commenter pointed out that running something such as Activity Monitor on OS X will cause the jitter to happen regularly, every couple of seconds, which is about the same interval at which Activity Monitor polls all the running processes for information. Killing Activity Monitor made the jitter much less regular.
So there is no real way to guarantee that your code will be run every 16 milliseconds on the dot, and that it will always ever be another 16 milliseconds before your code gets run again. You have to separate the timing for code that handles events, movement, AI, etc. from the timing for when a new frame will be rendered in order to get a perfectly smooth experience. This generally means that you will run all your logic fewer times per second than you will be drawing frames, and then predicting where every object will be in between actual updates, and draw the object in that spot. See deWiTTERS game loop article for some more concrete details on this, on top of a fantastic overview of game loops in general.
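A compact sketch of that approach, assuming update(dt) advances the game state and draw(alpha) renders with interpolation between the last two states:

var TICK = 1000 / 25; // run the logic at 25 Hz, independent of the frame rate
var last = performance.now();
var accumulator = 0;
function frame(now) {
  accumulator += now - last;
  last = now;
  while (accumulator >= TICK) {
    update(TICK); // assumed: moves objects, runs AI, resolves collisions
    accumulator -= TICK;
  }
  draw(accumulator / TICK); // assumed: renders, interpolating by alpha in [0, 1)
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);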
Note that this prediction method of delivering a smooth game experience does not come without problems. The main one is that if you display an object at a predicted location without actually doing the full collision detection on it, that object can very easily clip into other objects for a few frames. In the pong clone I am writing to test the SDL bindings, with the predicted object drawing, if I hold right while up against a wall, the paddle will repeatedly clip into the wall before popping back out, as its location is predicted to be further than it is allowed to go. This is a separate problem that has to be dealt with in a different way; I am just letting the reader know about it.