How to prevent a javascript stack overflow?

I've been building Conway's Game of Life with JavaScript / jQuery in order to run it in a browser Here. Chrome, Firefox, Opera and Safari run it pretty fast, so preferably don't use IE for this; IE9 is OK, though.
While generating the new generations of Life, I am storing the previous generations in order to be able to walk back through the history. This works fine until a certain point, when memory fills up and the browser (tab) crashes.
So my question is: how can I detect when memory is filling up? I am storing an array for each generation in an array which forms the history of generations. This takes massive amounts of memory, which crashes the browser after a few thousand generations, depending on available memory.
I am aware of the fact that JavaScript can't check the amount of available memory, but there must be a way...

I doubt that there is a way to do it. Even if there is, it would probably be browser-specific. I can suggest a different way, though.
Instead of storing all the data for each generation, store snapshots taken every once in a while. Since Conway's Game of Life is deterministic, you can easily re-generate future frames from a given snapshot. You'll probably want to keep a buffer of a few frames so that you can make rewinding nice and smooth.
In reality, this doesn't actually solve the problem, since you'll run out of space eventually. However, if you store every nth frame, your application will last n times longer, which might just be long enough. I would recommend that you impose some hard limit on how far into the past you can rewind so that you have a cap on how much you have to store. Determine how many frames that would be (10 minutes at 30 FPS = 18,000 frames). Then divide that by how many frames you can actually store (profile various web browsers to figure this out), and that is the interval between snapshots you should use.
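A minimal sketch of the snapshot approach, assuming a nextGeneration(grid) function that computes one step (the function name and grid representation are placeholders):

    // Keep one snapshot every SNAPSHOT_INTERVAL generations instead of every one.
    var SNAPSHOT_INTERVAL = 100;
    var snapshots = [];
    var generation = 0;

    function step(grid) {
        if (generation % SNAPSHOT_INTERVAL === 0) {
            snapshots.push(grid);      // grids are treated as immutable here
        }
        generation++;
        return nextGeneration(grid);   // assumed: computes the next grid
    }

    // To rewind to generation g, replay forward from the nearest earlier snapshot.
    function gridAt(g) {
        var index = Math.floor(g / SNAPSHOT_INTERVAL);
        var grid = snapshots[index];
        for (var i = index * SNAPSHOT_INTERVAL; i < g; i++) {
            grid = nextGeneration(grid);
        }
        return grid;
    }

Keeping a small buffer of recently replayed grids around gridAt's result is what makes stepping backwards one frame at a time smooth, as described above.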

Dogbert pretty much nailed it. You can't know exactly how much available memory there is but you can know how potentially large your dataset will be.
So, take the size of each object stored in the array, multiply it by the array dimensions, and that's the size of one iteration. Multiply that by the desired number of iterations to see how much space it will take in total, and adjust accordingly.
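For example, a rough back-of-the-envelope version of that calculation (the 8 bytes per cell is an assumption; real per-object overhead varies by engine):

    // bytes per cell × cells per generation × generations kept
    var BYTES_PER_CELL = 8;            // assumption; actual overhead varies
    var width = 200, height = 200;     // example grid dimensions
    var maxGenerations = 5000;

    var bytesPerGeneration = width * height * BYTES_PER_CELL;
    var totalBytes = bytesPerGeneration * maxGenerations;
    console.log((totalBytes / (1024 * 1024)).toFixed(1) + " MB for full history");
    // roughly 1.5 GB here, so cap maxGenerations (or snapshot, as above) accordingly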
Or, inspired by Travis, simply run the pattern in reverse from the last known array. It is deterministic after all.

Related

How to clear browser memory with Javascript

I have a simple single-page app that scrolls a bunch of photos indefinitely, meant to be run on displays for days at a time.
Because of the large number of scrolling pics, the memory use in Chrome keeps growing. I want a way to programmatically reduce the memory consumption at intervals (every few hours).
When I stop the animation programmatically, the memory footprint still doesn't go down. Even when I reload the page with location.reload();, it doesn't go down.
Is there a way to do this? How can I "clear everything" programmatically in a way that has the same effect as closing the tab?
FYI, in Firefox there isn't a memory issue. Just in Chrome. The code is super simple, uses requestAnimationFrame to animate two divs across the screen constantly. I'm not accumulating references anywhere. My question isn't about my code specifically, but rather about general ways to reset the tab's memory if it can be done.
Please use the Chrome or Firefox memory debugger to find out where the memory leak is. Then, once you have found it, think about how you can clean those objects up.
Reasonable causes of memory leaks are:
- You are loading big images; resize them on the server, or simply draw them onto smaller canvases.
- You have too many DOM elements (for example, more than 10000).
- You have some JS objects that keep growing and are never cleaned up.
In the task manager you will see that memory usage is very high. In Firefox you can see what is going on with memory by typing about:memory into the address bar and then pressing the "Measure" button.
You can use location.reload(true); this clears all of the memory (caches etc.). Moreover, if you want to clear the browser's storage further, you can use localStorage.clear() in your JavaScript code. This clears the items stored in localStorage, i.e. anything you have saved like localStorage.myItem = "something".
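Putting the two together (note that the boolean argument to location.reload is non-standard; modern browsers generally ignore it):

    localStorage.clear();     // drop everything persisted via localStorage
    location.reload(true);    // 'true' (bypass cache) is non-standard and is
                              // treated as a plain reload by modern browsers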

Javascript - optimizing node hierarchy updates for 3D animations

I am using WebGL to do hardware skinning, however updating my model node hierarchies is causing a huge hit for performance.
Every node needs to query its current location/rotation/scale keyframes, construct a local matrix with them, and multiply it with its parent's world matrix if it has a parent.
The matrix math itself is as optimized as it gets (special variants of matrix construction based on gl-matrix).
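For reference, the per-node update described above looks roughly like this with gl-matrix (the node layout and field names here are assumptions, not the asker's actual code):

    import { mat4 } from "gl-matrix";

    // node.rotation is a quat, node.position and node.scale are vec3s,
    // all sampled from the current keyframes before this runs.
    function updateNode(node, parentWorld) {
        mat4.fromRotationTranslationScale(
            node.local, node.rotation, node.position, node.scale);
        if (parentWorld) {
            mat4.multiply(node.world, parentWorld, node.local);
        } else {
            mat4.copy(node.world, node.local);
        }
        for (let i = 0; i < node.children.length; i++) {
            updateNode(node.children[i], node.world);
        }
    }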
Still, if I update many models, with tens of nodes each (some even with hundreds, sadly), this hogs all of the execution time of the browser.
I have tried using a dirty state for when nodes don't actually need updating, but simply checking if their local data changed (mostly just checking if the location or rotation changed) actually causes the same amount of processing as just calculating the matrices.
WebCL would have been ideal, but that seems to have gone nowhere since 2014.
I am starting to think of running it all in a shader, but I can't quite wrap my head around how to design it (e.g. storing the keyframes, which are a map of frame → data, or how to write the data back).
Another way is to cache all of the animation transformations in a texture, but this doesn't scale well. For models with a low enough amount of keyframes, this is ok, but for ones with long animations, this turns to hundreds of megabytes very fast.
This is mostly because I can't think of any way to store sparse data. If that were possible, I could store only as many transformations as there are keyframes, which would not take much memory (right now, I store the transformations for every single frame).
Granted, this would require matrix interpolation, and I am not sure how reliable that is.
Does anyone have any ideas?
I don't think it's practical to offload the entire node-hierarchy calculation to the GPU. The best you can do is upload the two absolute world-transformation keyframes to the GPU and let the GPU interpolate between them. But I am not sure whether the interpolated world transformation is the same as what you would get by actually calculating it via the node hierarchy. If it is, then that would be a feasible solution. Note that you cannot interpolate between matrices, either; you need to convert them to a form that supports interpolation, such as quaternions plus additional data.
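A sketch of what "quaternions plus additional data" can look like with gl-matrix, assuming keyframes already decomposed into rotation/translation/scale (the keyframe layout is an assumption):

    import { quat, vec3, mat4 } from "gl-matrix";

    // Interpolate between keyframes a and b at t in [0, 1], then rebuild
    // a matrix. Rotations need slerp; translation and scale can just lerp.
    function interpolateTRS(out, a, b, t) {
        const r = quat.create(), p = vec3.create(), s = vec3.create();
        quat.slerp(r, a.rotation, b.rotation, t);
        vec3.lerp(p, a.position, b.position, t);
        vec3.lerp(s, a.scale, b.scale, t);
        return mat4.fromRotationTranslationScale(out, r, p, s);
    }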
I actually ran into this problem as well in my project. I found the transformation update to be the most time-consuming operation, despite also having a full collision system (plus response) running.
I solved the problem by reducing the number of times this transformation update needs to be called. For example, if you can conclude that an entire model is outside your view frustum, then you don't need to calculate its transformations at all. This may cut the amount of calculation you need to do to between ~1/6 and ~1/4.
Secondly, for distant objects, you don't need to update their transformations every frame; just update them every few frames or so. Remember, there are games that shipped at only 30 FPS, so a few skipped frames for distant objects may not be noticeable.
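A minimal sketch of that staggering, assuming distanceToCamera and updateTransforms helpers (both placeholders):

    var frame = 0;
    var DISTANT = 50;   // assumed distance threshold
    var SKIP = 4;       // distant models update every 4th frame

    function updateModels(models) {
        frame++;
        for (var i = 0; i < models.length; i++) {
            var m = models[i];
            // (frame + i) spreads the distant updates across frames
            if (distanceToCamera(m) < DISTANT || (frame + i) % SKIP === 0) {
                updateTransforms(m);
            }
        }
    }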
Finally, and this may not work for JavaScript at all so I haven't done it (yet): you should store and access your data in a cache-coherent manner. See these slides; it could give a 10x performance increase. But again, it may not work for JavaScript, because JavaScript arrays are not guaranteed to pack their data sequentially.
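One caveat to that last point: typed arrays do give you densely packed storage, so keeping, say, all world matrices in one Float32Array (16 floats per node) gets you closer to the layout those slides describe. gl-matrix functions accept subarray views directly:

    var nodeCount = 1000;
    var worldMatrices = new Float32Array(nodeCount * 16);   // one contiguous buffer

    // node i owns floats [i * 16, i * 16 + 16); pass this view to mat4.* calls
    function worldMatrixOf(i) {
        return worldMatrices.subarray(i * 16, i * 16 + 16);
    }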

Pushing objects into javascript array performance weirdness

I'm creating a flock simulation using html5 canvas and I'm trying to squeeze every bit of performance out of javascript. I've noticed a "weird" performance boost/blow depending on when/where I add boids to the system.
If I fill the boids array with 400 objects first and then start the animation (using requestAnimationFrame) I get a very decent 40-50fps in Chrome and Safari, and around 30fps in Firefox.
However, if the boids array gets filled with objects (400 of them again) during the animation (for example by dragging on the screen), then no matter the browser, the performance always drops to about 15-20fps.
In both cases I use boids.push( new Boid() ); to fill the boids array. In the first case I do it from within a for loop and in the second I do it from the mouseDown event handler.
Any idea why the performance of the first would be so much better?
You can find both of the examples here:
Version A and Version B
If your requestAnimationFrame callback takes longer than ~10 ms to execute, the frame rate drops and you will see jank. The first method probably meets that budget (or occasionally exceeds it slightly, since you are not getting ~60fps), but the second method definitely exceeds the ~10 ms on that frame, and the browser has to skip 2-3 frames to finish executing that callback; therefore you get ~20fps.
Check out this article on Google Web Fundamentals: Rendering Performance.
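One way to stay inside that budget, assuming the boids array and requestAnimationFrame loop from the question: queue the insertions and construct only a few boids per frame, so no single callback does all 400 constructions at once.

    var pendingBoids = 0;
    var BOIDS_PER_FRAME = 10;   // assumed budget; tune against your frame time

    function onMouseDown() {
        pendingBoids += 400;    // just record the request; do the work later
    }

    function animate() {
        // drain a few queued boids each frame
        for (var i = 0; i < BOIDS_PER_FRAME && pendingBoids > 0; i++, pendingBoids--) {
            boids.push(new Boid());
        }
        // ...update and draw the flock as before...
        requestAnimationFrame(animate);
    }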

How to measure memory usage and efficiency?

I have a web app that uses a lot of JavaScript and is intended to run non-stop (for days/weeks/months) without a page reload.
However, Chrome is crashing after a few hours. Safari doesn't crash as often, but it does slow down considerably.
How can I check whether or not the issues are with my code, or with the browser itself? And what can I do to resolve these issues?
Using the Chrome Developer Tools profiler, you can see what's using your CPU and take memory snapshots.
Take two snapshots. Select the first one and switch to the comparison view.
The triangle column header is the mathematical symbol delta, meaning change. So if your deltas are positive, you are creating more objects in memory. I would then take another snapshot after a given period of time, say 5 minutes, and compare the results again, looking at the deltas.
If your deltas are constant, you are doing a good job at memory management. If they are negative, your code is clean and your used objects are being properly collected; again, a great job.
If your deltas keep increasing, you probably have a memory leak.
Also,
document.getElementsByTagName('*'); // a count of all DOM elements
would be useful to see if your DOM element count is steadily increasing.
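A simple way to watch that over time (the interval length is arbitrary):

    // Log the DOM element count periodically; a steadily climbing number
    // points at leaked or accumulating nodes.
    setInterval(function () {
        console.log('DOM elements:', document.getElementsByTagName('*').length);
    }, 5 * 60 * 1000);   // every 5 minutes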
Chrome also has the "about:memory" page, but I agree with IAbstractDownVoteFactory - developer tools are the way to go!

Disable JavaScript function based on the user's computer's performance

My website has a jQuery script (from Shadow animation jQuery plugin) which constantly changes the colour of box shadow of various <div>s on the home page.
The animation is not essential, but it does take up a lot of CPU time on slower machines.
Is it possible to find out if the script will run 'too slowly'? I can then disable it before it impacts performance.
Is this even a good idea? If not, is there an easy way to break up the jQuery animate?
This may indirectly solve your problem. Pick a few algorithms and performance tests from this site http://dromaeo.com/ that seem similar to your jQuery plugin. Don't run comprehensive tests as they do on the site. Instead, pick fairly small and fast algorithms, and run them for an unnoticeable period of time.
Use a tiny predefined time span to limit how long these tests are allowed to run. Let's say that span is 200 ms: if on a fast machine with browser A you get 100 iterations, while some random user's machine can only complete 5 iterations, then you may want to consider disabling the animation on that user's machine. Tweak and tweak until you find the optimal numbers.
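A minimal sketch of such a test, using the example numbers above (the inner workload and the 100-iteration threshold are assumptions to tune):

    // Count how many iterations of a small workload fit in a fixed span.
    function quickBenchmark(spanMs) {
        var end = Date.now() + spanMs;
        var iterations = 0;
        while (Date.now() < end) {
            for (var i = 0, s = 0; i < 10000; i++) { s += Math.sqrt(i); }
            iterations++;
        }
        return iterations;
    }

    // Block for ~200 ms once at load; disable the animation on slow machines.
    var enableAnimation = quickBenchmark(200) >= 100;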
As a bonus, send all test results back to your server so you have a better idea of where your users lie in the speed spectrum. If a big majority of users are using slower computers and older browsers, then it just may make sense to remove the thing altogether.
You could probably do it by timing a few times round a loop which did some intensive processing on page load, but that's going to slow the page and add even further to CPU load, so it doesn't seem like a great solution.
A compromise I've used in the past, though, was to make the decision based on browser version, for example, Internet Explorer 6 users get simpler content whereas newer browsers with better JavaScript performance get the animation. That seemed to work pretty well at a practical level. In practice, browser choice is a big factor in JavaScript performance and you might get a 90% fit with what you want very simply just by taking that into account.
You could do something like $(window).width() to get the browser width. Using this you could make the assumption that anything < 1024px wide is likely to be either a netbook, smartphone or old computer.
This wouldn't be nearly as accurate as timing a loop, but it is much more efficient.
Obviously this rule is a generalisation, and there will be slow computers with screens wider than 1024px. But in general, a 1024px+ computer will typically be able to handle a fair bit of JavaScript (until the owner loads it up with software, virus scanners and browser toolbars!).
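As a sketch, the whole check is one line of gating (startShadowAnimation is a placeholder for whatever starts the plugin):

    // Crude but cheap: only start the animation on wider viewports.
    if ($(window).width() >= 1024) {
        startShadowAnimation();
    }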
Hope this is useful!
