My site crashes in the browser on iOS due to low memory. I'm repeating an action that consumes memory, and after several attempts the browser crashes. However, when I tested the same site on my desktop in Chrome using the Timeline panel in DevTools:
Perform the same action
Collect garbage
All additionally allocated memory is collected.
Why does the browser crash if there are no memory leaks? Is there a way to force garbage collection?
Know iOS Resource Limits
Your webpage performing well on the desktop is no guarantee that it will perform well on iOS.
1. Keep in mind that iOS uses
EDGE (lower bandwidth, higher latency)
3G (higher bandwidth, higher latency)
Wi-Fi (higher bandwidth, lower latency)
to connect to the Internet.
2. You need to minimize the size of your webpage. Including unused or unnecessary
images
CSS
JavaScript
adversely affects your site's performance on iOS.
3. Because of the memory available on iOS, there are limits on the resources it can process:
The maximum size for decoded GIF, PNG, and TIFF images is
3 megapixels for devices with less than 256 MB of RAM
5 megapixels for devices with 256 MB of RAM or more
That is, ensure width * height ≤ 3 * 1024 * 1024 for devices with less than 256 MB of RAM (see the first sketch after this list).
Note that the decoded size of an image is far larger than its encoded size.
The maximum decoded image size for JPEG is 32 megapixels, thanks to subsampling. JPEG images larger than 2 megapixels are subsampled, that is, decoded to a reduced size with as little as one sixteenth the number of pixels. Subsampling is what allows the user to view images from the latest digital cameras.
4. The maximum size for a canvas element is
3 megapixels for devices with less than 256 MB of RAM
5 megapixels for devices with 256 MB of RAM or more.
The height and width of a canvas object default to 150 x 300 pixels if not specified.
5. JavaScript execution time is limited to 10 seconds for each top-level entry point. If your script executes for more than 10 seconds, Safari on iOS stops executing the script at a random place in your code, so unintended consequences may result (see the second sketch after this list).
6. The maximum number of documents that can be open at once is
eight on iPhone
nine on iPad.
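For the GIF/PNG/TIFF limit in item 3, the width * height rule can be checked before an image is added to the page. A minimal sketch, assuming the 3-megapixel budget for devices with less than 256 MB of RAM; the helper name and the placeholder URL are illustrative only:

```javascript
// Minimal sketch: warn before using an image whose decoded size may exceed
// the iOS limit described in item 3. The 3 * 1024 * 1024 pixel budget assumes
// a device with less than 256 MB of RAM; use 5 * 1024 * 1024 otherwise.
function fitsIosDecodeLimit(img, maxPixels) {
  maxPixels = maxPixels || 3 * 1024 * 1024;
  // naturalWidth/naturalHeight are the intrinsic (decoded) dimensions.
  return img.naturalWidth * img.naturalHeight <= maxPixels;
}

var img = new Image();
img.onload = function () {
  if (!fitsIosDecodeLimit(img)) {
    console.warn('Image may exceed the iOS decoded-size limit:', img.src);
  }
  document.body.appendChild(img);
};
img.src = 'large-photo.png'; // placeholder URL
```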
Please refer to Developing Web Content for Safari (Apple documentation) for more info.
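For the 10-second limit in item 5, long-running work can be split into slices that each return control to the browser well before the cutoff. A minimal sketch, not taken from the Apple documentation; the slice size and function names are arbitrary:

```javascript
// Minimal sketch: process a large array in small slices so that no single
// top-level entry point runs anywhere near Safari's ~10-second budget.
// `items`, `processItem`, and the slice size are illustrative assumptions.
function processInChunks(items, processItem, chunkSize) {
  chunkSize = chunkSize || 500;
  var index = 0;
  function nextChunk() {
    var end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      processItem(items[index]);
    }
    if (index < items.length) {
      // Yield back to the browser before continuing with the next slice.
      setTimeout(nextChunk, 0);
    }
  }
  nextChunk();
}
```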
Garbage Collection
Mobile Safari's JavaScript implementation doesn't have any command for forcing garbage collection, like CollectGarbage() in Internet Explorer.
There are three events that will trigger garbage collection in Mobile Safari (reference):
A dedicated garbage collection timer expires
An allocation occurs when all of a heap's CollectorBlocks are full.
An object with sufficiently large associated storage is allocated.
It's really bad practice to trigger garbage collection. What we should be doing instead is writing code that doesn't leak memory.
Please refer to Memory management in JavaScript.
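To illustrate what "code that doesn't leak memory" looks like in practice, here is a minimal sketch (not from the referenced material) of a common pattern: release event listeners and references when a piece of UI is torn down, so the collector can actually reclaim it.

```javascript
// Minimal sketch: a widget that releases its references when destroyed.
// Leaving the listener attached and the element referenced would keep both
// alive for the lifetime of the page, a typical source of "leaks" on iOS.
function createCounterWidget(container) {
  var element = document.createElement('button');
  var clicks = 0;
  function onClick() {
    clicks++;
    element.textContent = 'Clicked ' + clicks + ' times';
  }
  element.textContent = 'Clicked 0 times';
  element.addEventListener('click', onClick);
  container.appendChild(element);

  return {
    destroy: function () {
      element.removeEventListener('click', onClick);
      container.removeChild(element);
      element = null; // drop the reference so it can be collected
    }
  };
}
```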
Below is the best resource (with benchmarks) I have ever come across that explains it:
http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/
I hit those performance hurdles a few weeks back, and please note that iOS does not have any default garbage collection (the article explains why). It is the responsibility of the app (in this case, the browser app). You cannot collect garbage via your web app. A small tip while optimizing your website for iOS (to prevent crashes): avoid CSS transitions.
I would recommend that you grab a cup of coffee and read the complete article, but I'll paste the summary below:
Javascript is too slow for mobile app use in 2013 (e.g., for photo editing etc.).
It’s slower than native code by about 5×
It’s comparable to IE8
It’s slower than x86 C/C++ by about 50×
It’s slower than server-side Java/Ruby/Python/C# by a factor of about 10 if your program fits in 35MB, and it degrades exponentially from there
The most viable path for it to get faster is by pushing the hardware to desktop-level performance. This might be viable long-term, but it’s looking like a pretty long wait.
The language itself doesn’t seem to be getting faster these days, and people who are working on it are saying that with the current language and APIs, it will never be as fast as native code
Garbage collection is exponentially bad in a memory-constrained environment. It is way, way worse than it is in desktop-class or server-class environments.
Every competent mobile developer, whether they use a GCed environment or not, spends a great deal of time thinking about the memory performance of the target device
JavaScript, as it currently exists, is fundamentally opposed to even allowing developers to think about the memory performance of the target device
If they did change their minds and allowed developers to think about memory, experience suggests this is a technically hard problem.
Related
I am writing a large application in Google Chrome that does many complex thermodynamic calculations and appears to be pushing the limits of my computer. When running, it uses ~600 MB of memory as reported by the Windows Task Manager. When I open the Chrome task manager, it reports that JavaScript is using ~400 MB of memory, with ~180 MB considered live.
I would like to profile it and see if I can reduce memory usage so I have tried to use the Chrome inspect tools to take heap snapshots of the ~20 web workers and the main thread. The largest of these has a heap size of 7 MB and the average heap size is < 3 MB. By my math, it seems like there should be a total heap size of around 60 MB.
My guess is that the remaining reported memory is being allocated on the stack. Does Chrome compile functions so they use stack memory? What is the best method to profile this and see where most of the memory is being used?
Additional information: the application in question is extremely light on graphics / DOM (so all the memory usage is either directly or indirectly from javascript). It communicates using a SharedWorker to another rendering process which handles the graphics. It also spawns around 20 WebWorkers to do various calculations. These are only spawned when the page loads but are passed new calculations every second (but only if they have finished their previous calculations).
I'm working on a 3D game that's built in JavaScript (on top of THREE.js, among other things). We've reached the point of profiling to get our framerate up, and seem to be memory bound. When I use the JavaScript console in Chrome (ctrl-shift-j), and use that to profile my memory usage, it shows that there are about 200 MB in heap allocations - which would be just fine. However, when I hit shift-ESC or go to chrome://memory-internals, it reports that I'm using about 1.34 GB of memory, and another 814 MB on the GPU.
To be clear, I don't seem to be leaking memory (my footprint is stable), I'm just using a lot of it, and I want to know where it's allocated so I can know where to optimize.
So... obviously, optimizing for what's on the heap isn't going to help me, since it's only about 10% of my memory usage. How can I get a handle on where all the rest of that memory is being allocated so that I can trim it down to size (besides taking shots in the dark, of course)?
I have a JavaScript application, which works fine, but certainly needs some memory / CPU performance (it is based on top of Google maps).
So basically it runs fine on a desktop / laptop PC, and an iPad works OK. But with all these different devices nowadays, smaller devices are definitely overloaded.
So basically I want to determine the available runtime and decide whether I start my JavaScript application, or display a message instead.
OK, I could check the browser, but this is a never-ending story
I could check the OS information, but again this is very tedious. Also, some OSes run on high-performance devices as well as low-end hardware.
I could check the screen size and rule out mobile phones, but then it gets difficult
So is there a way to determine the performance of the client? Just to make it clear, I do not care whether it is a tablet for instance, but if it is a low performance tablet with little CPU performance / memory (so Detect phone/tablet/web client using javascript does not help).
In JavaScript you could check a list of features and, based on that, conclude whether the device is low on memory (a sketch combining these checks follows the list below).
For example:
Check core count with window.navigator.hardwareConcurrency
(For better coverage, add the Core Estimator polyfill)
Check WebGL support (I like user2070775's suggestion): Proper way to detect WebGL support?
Check if it's desktop or mobile. Most likely a desktop will not have any problems with memory.
Check the screen resolution - most likely a small resolution will be on low-memory devices.
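A minimal sketch combining these checks; the thresholds (core count, resolution) and the two-signal cutoff are arbitrary assumptions, not established rules:

```javascript
// Minimal sketch: heuristically flag a device as "low-end" from features
// available to JavaScript. All thresholds here are illustrative guesses.
function looksLikeLowEndDevice() {
  var signals = 0;

  // Few CPU cores (hardwareConcurrency is undefined in some older browsers).
  var cores = window.navigator.hardwareConcurrency || 1;
  if (cores <= 2) signals++;

  // No WebGL support often correlates with weaker hardware.
  var canvas = document.createElement('canvas');
  var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
  if (!gl) signals++;

  // Crude mobile check via the user agent string.
  if (/Mobi|Android|iPhone|iPad/i.test(navigator.userAgent)) signals++;

  // Small physical screens tend to belong to low-memory devices.
  if (screen.width * screen.height <= 1024 * 768) signals++;

  return signals >= 2;
}

if (looksLikeLowEndDevice()) {
  console.log('Showing the lightweight version of the app.');
}
```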
I'm currently working on a web app, and have been inspired by a couple different apps out there (mainly Cloud9IDE) in how they hold a large majority of their interface in javascript objects. This makes it incredibly easy to add features in the future, and also allows extensibility options in the future.
The question is, at what point does storing data in memory (via JavaScript) become rude? I'm building a social network (think like Twitter), and essentially I would be storing an object for every "tweet", as well as some broader objects for interface items.
Are there hard limits forced by browsers on how much memory I can use? Will my website crash if I go over? Or will the entire browser crash? Will it slow down the user? If so, is there a general rule for how much memory will bother the average user?
Absolutely positively don't use anywhere close to 4 GB of memory. Most people use 32-bit browsers, so the browser couldn't support 4 GB anyway :)
On a more practical note, remember that the more memory you take up, the slower your app will usually run. Today's Intel/AMD (I don't know about ARM) processors access registers about 100 times faster than accessing memory that isn't in cache, so if you use a lot of memory you will cause thrashing, which will slow down your application dramatically.
So, assuming that you want users for your social network, you should try to design your website to work well on as many machines as possible. Millions and millions of people are still using Windows XP machines that are 5+ years old. These machines might have as little as 512 MB of RAM, so if you are using a few hundred megabytes, you can thrash all of memory rather than just processor cache, as the kernel keeps swapping out pages that you want to use. So as a rule of thumb I would recommend staying below 150-200 MB of memory. GMail takes up ~100MB of memory on Chrome for Linux, so I think that keeping up with GMail is a reasonable goal.
Another benefit of keeping memory usage relatively low is that your users can more easily view your site on a smartphone. An iPhone 3GS (there are still a lot of them in use) has only 256 MB of RAM, so staying below 200 MB in your website makes it easier for a smartphone user to load your site without having to kill processes indiscriminately.
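If you want to keep an eye on whether your app stays within a budget like the one suggested above, a minimal sketch follows. Note that performance.memory is a non-standard, Chrome-only API, so treat this strictly as a development aid; the 200 MB figure is just the rule of thumb from this answer:

```javascript
// Minimal sketch: log a warning when the JS heap approaches a self-imposed
// budget (the ~200 MB rule of thumb from the answer above).
// performance.memory is non-standard and Chrome-only, so guard for it.
var MEMORY_BUDGET_BYTES = 200 * 1024 * 1024;

function checkMemoryBudget() {
  if (!performance.memory) return; // not available outside Chrome
  var used = performance.memory.usedJSHeapSize;
  if (used > MEMORY_BUDGET_BYTES) {
    console.warn('JS heap is ' + (used / (1024 * 1024)).toFixed(1) +
                 ' MB, over the ' + (MEMORY_BUDGET_BYTES / (1024 * 1024)) +
                 ' MB budget.');
  }
}

setInterval(checkMemoryBudget, 10000); // check every 10 seconds
```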
We are currently trying to optimize page load times on a memory- and CPU-constrained Windows Mobile device (the MC9090: 624 MHz CPU, 64 MB RAM, running WinMo 6 and Opera Mobile 9.5). As one of the potential optimizations, I wanted to combine external JS/CSS files, but now we are observing that page load times actually INCREASE after concatenation.
Now, that flies in the face of everything I have seen and read about with regard to client-side performance. Has anyone ever seen this before? Also, are there different rules for embedded-device web optimization than for desktop browsers?
Browsers can download across multiple connections (4, sometimes 6 - not sure what this version of Opera allows). By putting everything in one file, you force the browser to download all the JavaScript in one stream instead of several. This could be slowing it down.
Images and other items also compete for these connections. You really need a tool similar to Firebug's "net" tab to view the traffic and see how the bandwidth resources are allocated (a small Resource Timing sketch follows the links below).
Also, see: http://loadimpact.com/pageanalyzer.php
out of date but relevant: http://www.stevesouders.com/blog/2008/03/20/roundup-on-parallel-connections/
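If no external network tool is available, the Resource Timing API can give a rough waterfall from within the page itself. A minimal sketch, not from the original answer; it assumes a browser that supports performance.getEntriesByType (Opera Mobile 9.5 from the question almost certainly does not, so treat this as a desktop-side debugging aid):

```javascript
// Minimal sketch: print a rough download waterfall for every resource the
// page has loaded, using the Resource Timing API where it is supported.
function logResourceWaterfall() {
  if (!performance.getEntriesByType) {
    console.log('Resource Timing API not supported in this browser.');
    return;
  }
  performance.getEntriesByType('resource').forEach(function (entry) {
    console.log(
      entry.startTime.toFixed(0) + 'ms  +' +
      entry.duration.toFixed(0) + 'ms  ' +
      entry.name
    );
  });
}

window.addEventListener('load', logResourceWaterfall);
```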