I'm working on a memory-intensive web app (on the order of several hundred megabytes to a gigabyte or more) and I'm trying to figure out what my options are for memory management. There doesn't seem to be any way to figure out if my application is approaching the memory limit of the browser / JavaScript engine, and once the app exceeds that limit the browser just crashes. I could just be super conservative with the amount of memory I use in order to support browsers running on low-end machines, but that will sacrifice performance on higher-end machines. I know JavaScript was never designed to be able to use large amounts of memory, but here we are now with HTML5, canvas, WebGL, typed arrays, etc., and it seems a bit short-sighted that there isn't also a way in JavaScript to determine how much memory a script is able to use. Will something like this be added to browsers in the future, or is there a browser-specific API available now? What are my options here?
Update
I'm not sure that it matters, but for what it's worth I'm displaying and manipulating hundreds of large images, in file formats not supported by web browsers, so I have to do all of the decompression in JavaScript and cache the decompressed pixel data. The images will be displayed on a canvas one at a time and the user can scroll through them.
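Roughly, the caching pattern looks something like this (decode() is a stand-in for the custom decompressor and the element ID is made up):

// Rough sketch: decode once, cache the pixel data, and blit the current
// image onto the canvas. decode() must return width * height * 4 RGBA
// bytes in a typed array.
var canvas = document.getElementById('viewer'); // made-up ID
var ctx = canvas.getContext('2d');
var cache = {}; // filename -> ImageData

function getDecodedImage(name, rawBytes, width, height) {
  if (!cache[name]) {
    var pixels = decode(rawBytes, width, height); // hypothetical decoder
    var imageData = ctx.createImageData(width, height);
    imageData.data.set(pixels);
    cache[name] = imageData;
  }
  return cache[name];
}

function show(name, rawBytes, width, height) {
  ctx.putImageData(getDecodedImage(name, rawBytes, width, height), 0, 0);
}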
Related
I am trying to understand why it's such a hard task for browsers to fully render the DOM many times per second, the way game engines render to their canvas. Game engines can perform many calculations each frame, computing light, shadows, physics, etc., and still keep a seamless frame rate.
Why can't browsers do the same, allowing full re-rendering of the DOM many times per second, seamlessly?
I understand that rendering a DOM and rendering a game scene are two completely different tasks, but I don't understand why the latter is so much harder in terms of performance.
Please try to focus on specific aspects of rendering a DOM, and explain why game engines don't face the same problems. For example: "browsers need to parse the HTML, while all the code of the game is pre-compiled and ready to run".
EDIT: I edited my question because it was marked as opinionated. I am not asking for opinions here, only facts. I am asking why browsers can't fully re-render the DOM 60 frames per second like game engines render their canvas. I understand that browsers face a more difficult task, but I don't understand why exactly. Please stick with informative answers only, and avoid opinions.
Games are programs written to do operations specific to themselves. They are written in low-level languages (asm/C/C++), or at least languages that have access to machine-level operations. When it comes to graphics, games are able to push programs into the graphics card for rendering: drawing vectors and colouring / rasterization.
https://en.wikipedia.org/wiki/OpenGL
https://en.wikipedia.org/wiki/Rasterisation
They also have optimised memory, CPU usage, and IO.
Browsers, on the other hand, are applications that have many requirements.
They are primarily designed to render HTML documents, via the creation of objects which represent the HTML elements. Browsers have a more complex job, as they support multiple versions of the DOM and document types (DTDs), and the associated security required by each DTD.
https://en.wikipedia.org/wiki/Document_type_declaration
They have to support rendering a very generic set of documents: one page is not the same as another. They have to have libraries for IO, CSS parsing, image parsing (JPEG, PNG, BMP, etc.), movie players and their associated codecs, audio players and their codecs, and webcam support. Additionally they support the JavaScript code environment (not just the language, but IO and event handling), and have historic support for COM and Java applets.
This makes them very versatile tools, but heavyweight: they carry a lot of baggage.
The graphics aspects can never be quite as performant as a dedicated program, as the API they provide for such operations is always running at a higher level.
Even the Canvas API (as the name suggests) is a layer of abstraction above the lower-level rendering libraries, and each layer of abstraction adds a performance hit.
https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API
For better graphics performance there is now a new standard available in browsers called WebGL. However, this is still an API and runs in a sandbox, so it will still not be as performant as dedicated code.
https://en.wikipedia.org/wiki/WebGL
Even games using game engines (Unity, Unreal) will be accessing graphical features, CPU, memory, and IO in a much more dedicated fashion than browsers would, as the game engines themselves provide dedicated rendering and rasterization functions that the developer can use in their games for optimised graphical features. Browsers can't, as they have to cover many generic cases rather than specific requirements.
https://docs.unrealengine.com/en-US/Engine/index.html
https://learn.unity.com/tutorial/procedural-sky-19-1
First of all, games on the Web don't use the DOM much. They use the faster Canvas API. The DOM is made for changing content on a document (that's what the D in DOM stands for), so it is a really bad fit for games.
How is it possible that my crappy phone can run Call Of Duty seamlessly, but it's so hard to write a big webpage that will run smoothly on it?
I never had performance problems with the DOM. Of course, if you update the whole <body> with a single .innerHTML assignment 60 times a second, I wouldn't be surprised if the performance is bad, because the browser needs to:
Parse the HTML and construct the DOM tree;
Apply styles and calculate the position of each element;
Render the elements.
Each of those steps is a lot of work for the CPU, and the process is mostly single-threaded in most browsers.
You can improve the performance by:
Never using .innerHTML. .innerHTML makes the browser parse HTML into a DOM tree (and serialize the tree back into HTML when read). Use document.createElement() and .appendChild() instead.
Avoiding changes to the DOM structure. Change only the CSS styles, if possible (see the sketch below).
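As a minimal sketch of both points (the element ID and class name are made up):

// Build nodes directly instead of re-parsing HTML strings.
var list = document.getElementById('scores');   // hypothetical container

function addScore(name, points) {
  var item = document.createElement('li');
  item.textContent = name + ': ' + points;
  list.appendChild(item);                       // no HTML parsing involved
}

// Prefer changing styles/classes over restructuring the DOM:
function highlight(element) {
  element.classList.add('highlight');           // triggers a restyle, not a rebuild
}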
Generally, it depends on the game. The most powerful games are developed in C or C++ engines, so they work directly with memory and use the full power of the processor.
Web pages based on the DOM, by contrast, are written in an interpreted language like JavaScript. The problem can also come from the server side, if the web page is not deployed correctly or is hosted on a slow server.
Do JavaScript variables have a storage capacity limit?
I'm designing a YUI datatable where I fetch the data from the database and store it in a JS object, and wherever required I extract it and update the YUI datatable. Right now in dev I have very few records and it's storing them correctly. In production I may have thousands of records; is this JS object capable of storing all those thousands of records?
If it's not capable, I'll create a hidden textarea in the JSP and store the data there.
Yes, objects and arrays have storage limits. They are sufficiently large to be, for most purposes, theoretical. You will be more limited by the VM than the language.
In your specific case (sending thousands of items to a client), you will run into the same problem whether it is JSON, JavaScript, or plain text on the JSP page: client memory. The client is far more likely to run out of usable system memory than you are to run into a language restriction. For thousands of small objects, this shouldn't be an issue.
Arrays have a limit of 4.2 billion items, shown in the spec at 15.4.2.2, for example. This is caused by the length being a 32-bit counter. Assuming each element is a single integer, that allows you to store 16GB of numeric data in a single array.
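A quick way to see that ceiling in practice (length is an unsigned 32-bit value, so 2^32 - 1 is the maximum):

var a = new Array(Math.pow(2, 32) - 1); // largest allowed length: 4,294,967,295
try {
  var b = new Array(Math.pow(2, 32));   // one past the limit
} catch (e) {
  console.log(e instanceof RangeError); // true: "Invalid array length"
}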
The semantics on objects are more complex, but most functions to work with objects end up using arrays, so you're limited to 4.2 billion keys in most practical scenarios. Again, that's over 16GB of data, not counting the overhead to keep references.
The VM, and probably the garbage collector, will start to hang for long periods of time well before you get near the limits of the language. Some implementations will have smaller limits, especially older ones or interpreters. Since the JS spec does not specify minimum limits in most cases, those may be implementation-defined and could be much lower (this question on the maximum number of arguments discusses that).
A good optimizing VM that tries to track the structures you use will, at that size, incur enough overhead that it will probably fall back to using maps for your objects (it's theoretically possible to define a struct representing that much data, but not terribly practical). Maps have a small amount of overhead, and lookup times get longer as size increases, so you will see performance implications: just not at any reasonable object size.
If you run into another limit, I suspect it will be 65k elements (2^16), as discussed in this answer. Finding an implementation that supports less than 65k elements seems unlikely, as most browsers were written after 32 bit architectures became the norm.
There isn't such a limit per se.
It looks like there is a limit at around 16 GB, but you can read some tests below or in @ssube's answer.
But probably when your object/JSON is around 50 MB you'll encounter strange behaviour.
For JSON, here is an interesting article: http://josh.zeigler.us/technology/web-development/how-big-is-too-big-for-json/
For JS objects there is more information here: javascript object max size limit (which says there isn't such a limit, but strange behaviour is encountered at around 40 MB).
The limit depends on the available memory of the browser. So every PC, Mac, Mobile setup will give you a different limit.
I don't know how much memory one of your records needs, but I would guess that 1000 records should work on the most machines.
But: you should avoid storing massive amounts of data in plain variables; depending on the records' memory footprint, it slows down the whole website's behaviour. Users with average computers may see ugly scrolling, delayed hover effects, and so on.
I would recommend you use local storage. I'm sorry, I don't know the YUI library, but I am pretty sure you can point your datatable source at that storage.
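A minimal sketch of that idea (the storage key is made up; the records are serialised with JSON):

// Persist the records in localStorage instead of a plain in-memory
// variable or a hidden textarea.
function saveRecords(records) {
  localStorage.setItem('datatable-records', JSON.stringify(records));
}

function loadRecords() {
  var raw = localStorage.getItem('datatable-records');
  return raw ? JSON.parse(raw) : [];
}

saveRecords([{ id: 1, name: 'first row' }]);
console.log(loadRecords().length); // 1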
There is a limit on JavaScript objects (see max js object limit). What I would suggest using is session objects, because that's what it sounds like you're trying to do anyway.
Say somebody were to create a large application based on WebGL. Let's say that it's a 3D micro-management game which by itself takes approximately 700 megabytes in files to run.
How would one deal with the loading of the assets? I would have assumed that it would have to be done asynchronously, but I am unsure how exactly it would work.
P.S. I am thinking RollerCoaster Tycoon as an example, but really it's about loading large assets from server to browser.
Well, first off, you don't want your users to download 700 megabytes of data, at least not all at once.
One should try to keep as many resources (geometry, textures) as possible procedural.
All data that needs to be downloaded should be loaded progressively/on demand using multiple web workers, since one will probably still need to process the data with JavaScript, which can become quite CPU-heavy when there are many resources.
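A minimal sketch of the on-demand/worker approach (the file names and asset URL are made up; the ArrayBuffer is transferred back rather than copied):

// loader-worker.js: the main thread posts a URL, the worker downloads and
// (optionally) decodes it off the main thread, then transfers the bytes back.
self.onmessage = function (e) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', e.data.url);
  xhr.responseType = 'arraybuffer';
  xhr.onload = function () {
    // Any heavy parsing/decompression would happen here, off the main thread.
    self.postMessage({ url: e.data.url, buffer: xhr.response }, [xhr.response]);
  };
  xhr.send();
};

// main.js
var loader = new Worker('loader-worker.js');
loader.onmessage = function (e) {
  console.log('loaded', e.data.url, e.data.buffer.byteLength, 'bytes');
};
loader.postMessage({ url: 'assets/level1.pack' }); // hypothetical asset URL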
Packing the data into larger packages may also be advisable to prevent request overhead.
Sure thing, one would gzip all resources and try to preload data as soon as the user hits the website. When using image textures and/or text content, embedding it into the HTML (using <img> and <script> tags) allows you to exploit the browser cache to some extent.
Using WebSQL/IndexedDB/localStorage can be done, but due to the currently very low quotas and the flaky/non-existent implementation of the quota management API, it's not a feasible solution right now.
I'm working on extremely performance-constrained devices. Because of the overhead of AJAX requests, I intend to aggressively cache text and image assets in the browser, but I need to configure the cache size per-device to as low as 1MB of text and 9MB of images -- quite a challenge for a multi-screen, graphical application.
Because the device easily hits the memory limit, I must be very cautious about how I manage my application's size: code file size, # of concurrent HTTP requests, # of JS processor cycles upon event dispatch, limiting CSS reflows, etc. My question today is how to develop a size-restrained cache for text assets and images.
For text, I've rolled my own cache using JSON.encode().length for objects and 'string'.length to approximate size. The application manually gets/sets cache entries. Upon hitting a configurable upper limit, the class garbage collects itself from gcLimit to gcTarget sizes, giving weight to the last-accessed properties (i.e., if something has been accessed recently, skip collecting that object the first time around).
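For illustration, a minimal sketch of a cache along those lines (the gcLimit/gcTarget names follow the description above; the eviction heuristic is simplified to least-recently-accessed):

function TextCache(gcLimit, gcTarget) {
  this.gcLimit = gcLimit;     // size at which collection starts
  this.gcTarget = gcTarget;   // size to shrink down to
  this.size = 0;
  this.entries = {};          // key -> { value, size, lastAccess }
}

TextCache.prototype.set = function (key, value) {
  if (this.entries[key]) this.size -= this.entries[key].size; // replacing an entry
  var size = typeof value === 'string'
    ? value.length
    : JSON.stringify(value).length;                           // rough size estimate
  this.entries[key] = { value: value, size: size, lastAccess: Date.now() };
  this.size += size;
  if (this.size > this.gcLimit) this.collect();
};

TextCache.prototype.get = function (key) {
  var entry = this.entries[key];
  if (!entry) return undefined;
  entry.lastAccess = Date.now();
  return entry.value;
};

TextCache.prototype.collect = function () {
  // Evict the least-recently-accessed entries until we reach gcTarget.
  var self = this;
  var keys = Object.keys(this.entries).sort(function (a, b) {
    return self.entries[a].lastAccess - self.entries[b].lastAccess;
  });
  for (var i = 0; i < keys.length && this.size > this.gcTarget; i++) {
    this.size -= this.entries[keys[i]].size;
    delete this.entries[keys[i]];
  }
};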
For images, I intend to preload interface elements and let the browser deal with garbage collection itself by removing DOM elements and never persistently storing Image() objects. For preloading, I will probably roll my own again -- I have examples to imitate like FiNGAHOLiC's ImgPreloader and this. I need to keep in mind features like "download window size" and "max cache requests" to ensure I don't inadvertently overload the device.
This is a huge challenge working in such a constrained environment, and common frameworks like Backbone don't support "max Collection size". Elsewhere on SO, users quote limits of 5MB for HTML5 localStorage, but my goal is not session persistence, so I don't see the benefit.
I can't help feeling there might be better solutions. Ideas?
Edit: #Xotic750: Thanks for the nod to IndexedDB. Sadly, this app is a standard web page built on Opera/Presto. Even better, the platform offers no persistence. Rock and a hard place :-/.
localStorage and sessionStorage (DOM Storage) limits do not apply (or can be overridden) if the application is a browser extension (you don't mention what your application is).
localStorage is persistent
sessionStorage is sessional
Idea
Take a look at IndexedDB; it is far more flexible, though not as widely supported yet.
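A minimal IndexedDB sketch, in case it helps (the database and store names are made up):

var open = indexedDB.open('asset-cache', 1);

open.onupgradeneeded = function () {
  open.result.createObjectStore('assets', { keyPath: 'url' });
};

open.onsuccess = function () {
  var db = open.result;
  var tx = db.transaction('assets', 'readwrite');
  var store = tx.objectStore('assets');

  store.put({ url: '/img/logo.png', bytes: new Uint8Array([1, 2, 3]) });

  store.get('/img/logo.png').onsuccess = function (e) {
    console.log('cached bytes:', e.target.result.bytes.length);
  };

  tx.oncomplete = function () { db.close(); };
};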
Also, some references to Chrome storage
Managing HTML5 Offline Storage
chrome.storage
With modern JavaScript engines, CPU/GPU performance is not an issue for most apps (except games, heavy animation or Flash), even on low-powered devices, so I suspect your primary issues are memory and IO. Optimising for one typically harms the other, but I suspect the issues below will be your primary concern.
I'm not sure you have any control over the cache usage of the browser. You can limit the memory taken up by the JavaScript app using methods like those you've suggested, but the browser will still do its own thing, and that is probably the primary issue in terms of memory. By creating your own caches you will probably be duplicating data that is already cached by the browser, and so exacerbate the memory problem. The browser (unless you're using something obscure) will normally do a better job of caching than is possible in JavaScript. In any case, I would strongly recommend letting the browser take care of garbage collection, as there is no way in JavaScript to actually force browsers to free up memory (they do garbage collection when they want, not when you tell them to). Do you have control over which browser is used on the device? If you do, then changing that may be the best way to reduce memory usage (also, can you limit the size of the browser cache?).
For AJAX requests, ensure you fully understand the difference between GET and POST, as this has big implications for caching on the browser and on the proxies routing messages around the web (and therefore also affects the logic of your app). See if you can minimise the number of requests by grouping them together (JSON helps here). It is normally latency rather than bandwidth that is the issue for AJAX requests (but don't go too far, as most browsers can do several requests concurrently). Ensure you construct your AJAX manager to allow prioritisation of requests (i.e. stuff that affects what the user sees is prioritised over preloading, which is prioritised over analytics -- half the web makes a Google Analytics call as the first thing after page load, even before ads and other content are loaded).
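For illustration, a minimal sketch of such a prioritised request queue (the priorities, URLs, and concurrency cap are all made up):

var MAX_CONCURRENT = 2;  // illustrative cap on parallel requests
var queue = [];
var active = 0;

function request(url, priority, callback) {
  queue.push({ url: url, priority: priority, callback: callback });
  queue.sort(function (a, b) { return a.priority - b.priority; }); // lowest number first
  pump();
}

function pump() {
  while (active < MAX_CONCURRENT && queue.length) {
    var job = queue.shift();
    active++;
    var xhr = new XMLHttpRequest();
    xhr.open('GET', job.url);
    xhr.onload = xhr.onerror = (function (job, xhr) {
      return function () {
        active--;
        job.callback(xhr.responseText);
        pump();                      // start the next queued request
      };
    })(job, xhr);
    xhr.send();
  }
}

request('/api/screen-data', 0, function (body) { console.log('render', body.length); });   // user-visible first
request('/api/next-screen', 1, function (body) { console.log('precache', body.length); }); // preloading second
request('/api/analytics', 2, function () {});                                              // analytics last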
Beyond that, I would suggest that images are likely to be the primary contributor to memory issues (I doubt code size even registers, but you should ensure code is minified with e.g. Google Closure). Reduce image resolutions to the bare minimum and experiment with file formats (e.g. GIF or PNG might be significantly smaller than JPEG for some images (cartoons, logos, icons) but much larger for others (photos, gradients)).
10MB of cache in your app may sound small but it is actually enormous compared with most apps out there. The majority leave caching to the browser (which in any case will probably still cache the data whether you want it to or not).
You mention Image objects which suggests you are using the canvas. There is a noticeable speed improvement if you create a new canvas to store the image (after which you can discard the Image object). You can use this canvas as the source of any image data you later need to copy to a canvas and as no translation between data types is required this is much faster. Given canvas operations often happen many times a frame this can be a significant boost.
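A minimal sketch of that canvas-caching trick (the element ID and asset path are made up):

// Copy a loaded Image onto an offscreen canvas once, then use that canvas
// as the drawImage() source and let the Image object be collected.
function toCanvas(img) {
  var c = document.createElement('canvas');
  c.width = img.width;
  c.height = img.height;
  c.getContext('2d').drawImage(img, 0, 0);
  return c;                      // canvases are valid drawImage() sources
}

var img = new Image();
img.onload = function () {
  var sprite = toCanvas(img);    // keep this; drop the reference to img
  img = null;
  var ctx = document.getElementById('screen').getContext('2d'); // hypothetical ID
  ctx.drawImage(sprite, 0, 0);
};
img.src = 'img/button.png';      // hypothetical asset path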
Final note - don't use frameworks / libraries that were developed with a desktop environment in mind. To optimise performance (whether speed or memory) you need to understand every line of code. Look at the source code of libraries (many have some very clever optimised code) but assume that, in general, you are a special case for which they are not optimised.
I make heavy use of the excellent jTemplates plugin for a small web app.
Currently I load all of the templates into the DOM on the initial page load.
Over time, as the app has grown, I have gotten more and more templates -- currently about 100kb worth.
Because my app is all ajax-based, there is never a need to refresh the page after the initial page load. There is a couple second delay at the beginning while all of the templates load into the DOM, but after that the app behaves very responsively.
I am wondering: In this situation, is there any significant advantage to using jTemplates processTemplateURL method to lazy load templates as needed, as opposed to just bulk loading all of the templates on the initial page load?
(I don't mind the extra 2 or 3 seconds the initial page load takes - so I guess I am wondering -- besides the initial page load delay, is there any reason not to load a large amount of html template data into the DOM? Does having a larger amount of data in the DOM affect performance in any way?)
Thanks (in advance) for your help.
According to Yahoo's Best Practices for Speeding Up Your Web Site article, they recommend not having more than 500-700 elements in the DOM.
The number of DOM elements is easy to test, just type in Firebug's console:
document.getElementsByTagName('*').length
Read more http://developer.yahoo.com/performance/rules.html
It's like a jar that contains 100 marbles, 10 of which are red. It is easy to spot and pick the 10 red marbles out of a jar of 100, but if that jar contained 1000 marbles, it would take more time to find the red ones. Comparing this to DOM elements: the more you have, the slower your selections will be, and that will affect performance.
You really should optimize your DOM in order to save memory and enhance speed. However, the key is to avoid premature optimizations.
What is your target platform? What browser(s) are your users most likely to be using?
For example, if you are targeting primarily desktop PCs and your users are running modern browsers, then you should probably prefer clarity and simplicity in your code.
If you are targeting desktop PCs but must support IE6, say, then having too many DOM elements will impact your performance and you should think in terms of optimizations.
However, if you are targeting modern browsers but in areas with poor bandwidth (e.g. on a cruise ship), then your bandwidth considerations may outweigh your DOM considerations.
If you are targeting iPhones, iPads, etc., then memory is a scarce resource (as is CPU), so you definitely should optimize the DOM. In addition, on mobile devices you're going to give more weight to optimizing the AJAX payload, due to bandwidth issues, than to anything else. You'll give yet more weight to reducing the number of AJAX calls vs. saving on DOM elements. For example, you may want to just load more DOM elements in order to reduce the number of AJAX calls due to bandwidth considerations -- again, only you can decide on the right balance.
So the answer really is: it depends. In a fast-connection, modern-browser environment, there is no real need to prematurely optimize unless your DOM gets really huge. In a slow-connection or mobile environment, weight bandwidth optimizations over DOM optimizations, but do optimize DOM node count as well.
Having a whole bunch of extra DOM elements will not only affect your initial load time, it will also affect more or less everything on the page. JavaScript DOM queries will run slower, inserts will run slower, CSS will apply slower and so on, since the entire tag soup is parsed into a DOM tree by the browser, and any traversal of that tree is going to be affected. To measure how much it's going to slow your page down, you can use tools like DynaTrace AJAX and run it once with your current code and once with no templates. If you notice a large difference between those two runs in terms of JavaScript execution or rendering, you should probably lazy load (though 100kb is not that much, so you might not see a significant difference).
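If you do decide to lazy load, a generic sketch of the idea (this is not the jTemplates API itself; the template URL is made up) is to fetch each template file on first use and cache the markup:

var templateCache = {};

function withTemplate(url, callback) {
  if (templateCache[url]) {
    callback(templateCache[url]);
    return;
  }
  jQuery.get(url, function (markup) {
    templateCache[url] = markup;   // later calls skip the network round trip
    callback(markup);
  });
}

// Usage: only downloads 'templates/row.tpl' on first use.
withTemplate('templates/row.tpl', function (markup) {
  console.log('template is', markup.length, 'chars');
});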