Possible to check 'available memory' within a browser? - javascript

I'm just making up a scenario, but let's say I have a 500MB file, and I want to provide an HTML table for the client to view the data. Let's say there are two scenarios:
They are viewing it on a desktop where they have 1.2GB of available memory. They can download the whole file.
Later, they try to view this same table on their phone. We detect that they only have 27MB of available memory, and so give them a warning that says "We have detected that your device does not have enough memory to view the entire table. Would you like to download a sample instead?"
Ignoring things like pagination or virtual tables, I'm just concerned with whether the full dataset can fit in the user's available memory. Is this possible to detect in a browser (even with the user's confirmation)? If so, how could this be done?
Update
This question was answered about 6 years ago, and that answer points to one from 10 years ago. I'm wondering what the current state is, as browsers have changed quite a bit since then and there's also WebAssembly and such.

Use performance.memory.usedJSHeapSize. Though it is non-standard and in development, it will be enough for testing the memory used. You can try it out in Edge/Chrome/Opera, but unfortunately not in Firefox or Safari (as of writing).
Attributes (performance.memory)
jsHeapSizeLimit: The maximum size of the heap, in bytes, that is available to the context.
totalJSHeapSize: The total allocated heap size, in bytes.
usedJSHeapSize: The currently active segment of JS heap, in bytes.
Read more about performance.memory: https://developer.mozilla.org/en-US/docs/Web/API/Performance/memory.
CanIUse.com: https://caniuse.com/mdn-api_performance_memory
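For example, a defensive read could look like this (a minimal sketch; the helper name is mine):
// Returns heap statistics in MiB, or null where the API is
// unsupported (Firefox and Safari, as of writing).
function getHeapStatsMiB() {
    var mem = window.performance && performance.memory;
    if (!mem) return null;
    var MiB = 1024 * 1024;
    return {
        limit: mem.jsHeapSizeLimit / MiB,
        total: mem.totalJSHeapSize / MiB,
        used: mem.usedJSHeapSize / MiB
    };
}
console.log(getHeapStatsMiB() || 'performance.memory not supported');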

I ran into exactly this problem some time ago (a non-paged render of a JSON table, because we couldn't use paging, because :-( ), but the problem was even worse than what you describe:
the client having 8 GB of memory does not mean that the memory is available to the browser.
any report of "free memory" on a generic device will be, ultimately, bogus (how much is used as cache and buffers?).
even knowing exactly "how much memory is available to Javascript" leads to a maintenance nightmare because the translation formula from available memory to displayable rows involves a "memory size for a single row" that is unknown and variable between platforms, browsers, and versions.
After some heated discussions, my colleagues and I agreed that this was an XY problem. We did not want to know how much memory the client had; we wanted to know how many table rows it could reasonably and safely display.
Some tests we ran (this was two or three months before the pandemic, so around September 2019, and things might have changed since) showed an interesting effect: if we rendered, off-screen and client-side, a table of the same row repeated with random data, and timed how long it took to add each row, that time correlated roughly with the device's performance and limits, and allowed a reasonable estimate of how many actual rows we could permissibly display.
I have tried to reconstruct a very crude version of that test from memory. It ran along these lines, and it transmitted the results through an AJAX call upon logon:
// Template row; the real test used several representative columns.
var tr = $('<tr><td>...several columns...</td></tr>');
$('body').html('<table id="foo"></table>');
var foo = $('#foo');
var s = Date.now();
var i, j, dt;
for (i = 0; i < 500; i++) {
    var t = Date.now();
    // Limit total runtime to, say, 3 seconds
    if ((t - s) > 3000) {
        break;
    }
    // Append rows in batches of 100
    for (j = 0; j < 100; j++) {
        foo.append(tr.clone());
    }
    dt = Date.now() - t;
    // We are interested in when dt exceeds a given guard time
    if (i % 50 === 0) { console.debug(i, dt); }
}
// We are also interested in the maximum attained row count
console.debug(i * 100);
The above is a re-creation of what became a more complex testing rig (it was assigned to a friend of mine, I don't know the details past the initial discussions). On my Firefox on Windows 10, I notice a linear growth of dt that markedly increases around i=450 (I had to increase the runtime to arrive at that value; the laptop I'm using is a fat Precision M6800). About a second after that, Firefox warns me that a script is slowing down the machine (that was, indeed, one of the problems we encountered when sending the JSON data to the client). I do remember that the "elbow" of the curve was the parameter we ended up using.
In practice, if the overall i*j was high enough (the test terminated with all the rows), we knew we need not worry; if it was lower (the test terminated by timeout), but there was no "elbow", we showed a warning with the option to continue; below a certain threshold or if "dt" exceeded a guard limit, the diagnostic stopped even before the timeout, and we just told the client that it couldn't be done, and to download the synthetic report in PDF format.
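One crude way to detect that elbow programmatically (my own formulation, not the original rig's):
// Given per-batch timings dt[0..n-1], flag the first index where a
// batch takes, say, 3x the rough median of the first ten batches.
function findElbow(timings) {
    var sorted = timings.slice(0, 10).sort(function (a, b) { return a - b; });
    var baseline = sorted[5] || 1;
    for (var i = 10; i < timings.length; i++) {
        if (timings[i] > 3 * baseline) { return i; }
    }
    return -1; // no elbow found
}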

You may want to use the IndexedDB API together with the Storage API:
Using navigator.storage.estimate().then((storage) => console.log(storage)), you can estimate the available storage the browser allows the site to use. Then you can decide whether to store the data in IndexedDB, or to tell the user there is not enough storage and offer to download a sample instead.
void async function() {
    try {
        // estimate() resolves with { quota, usage }, both in bytes
        let storage = await navigator.storage.estimate();
        print(`Available: ${storage.quota / (1024 * 1024)}MiB`);
    } catch (e) {
        print(`Error: ${e}`);
    }
}();

function print(t) {
    document.body.appendChild(document.createTextNode(t));
}
(This snippet might not run in an embedded sandbox; you may need to run it on a local test server.)
Wide browser support
IndexedDB is or will shortly be available in all browsers except Opera.
The Storage API is or will shortly be available in all browsers except Apple's and IE.

Sort of.
As of this writing, there is a Device Memory specification under development. It specifies the navigator.deviceMemory property to contain a rough order-of-magnitude estimate of total device memory in GiB; this API is only available to sites served over HTTPS. Both constraints are meant to mitigate the possibility of fingerprinting the client, especially by third parties. (The specification also defines a ‘client hint’ HTTP header, which allows checking available memory directly on the server side.)
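For illustration, a minimal sketch of consuming the API (the threshold and fallback are my own assumptions, not from the spec):
// navigator.deviceMemory is undefined outside Chromium derivatives and
// outside secure (HTTPS) contexts; values are quantized to
// 0.25, 0.5, 1, 2, 4, or 8 GiB.
var gib = navigator.deviceMemory;
if (gib !== undefined && gib <= 0.5) {
    // Hypothetical fallback: offer the sampled table instead.
    console.warn('Low-memory device (' + gib + ' GiB); offering a sample.');
}

// Server side, via the client hint: respond once with
// "Accept-CH: Device-Memory", and the browser will send a
// "Device-Memory: <GiB>" request header on subsequent requests.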
However, the W3C Working Draft is dated September 2018, and while the Editor’s Draft is dated November 2020, the changes in that time span are limited to housekeeping and editorial fixups. So development on that front seems lukewarm at best. Also, it is currently only implemented in Chromium derivatives.
And remember: just because a client does have a certain amount of memory, it doesn't mean that memory is available to you. Perhaps there are other purposes for which they want to use it. Knowing that a large amount of memory is present is not permission to use it all up to the exclusion of everything else. The best uses for this API are probably the ones in the question: detecting whether the data we want to send might be too large for the client to handle.

Related

Object memory storage in Node.js

I just tried this code in both the Chromium console and the Node.js console:
var map = {};
for (var i = 0; i < 10000000; i++) {
    var key = '' + Math.random();
    map[key] = true;
}
// Time a single lookup of a key that is not in the map
var time = (new Date()).getTime();
console.log(map['doesNotExists']);
console.log((new Date()).getTime() - time);
In a browser: several seconds.
In Node.js: a few milliseconds.
So, I suppose Node.js uses hash-map storage and WebKit does not. Is that correct?
I wonder if Node.js stores all objects (even small objects) this way. Do you know if there is a storage rule in Node.js that depends on object size?
Update 2017-05-02
This is no longer true; the difference is no longer significant. My guess is that hash-map-style storage was introduced sometime after 2015.
Even though it feels very console-like... the Chromium console is a program presenting a graphical user interface for you; it is in fact not the console. This program is running your commands behind the scenes. In particular, it does quite a bit of pre- and post-processing of the I/O to make it pretty, collapsible if it's an object, etc. -- all of that detection, comparison, and even color formatting takes cycles that add up.
All that to say, there is a lot more going on there than just hitting the underlying V8 engine.
On top of that, all of this is really limited by the browser itself -- think of it a lot more like you saying, "Hey, Chrome - while you're doing whatever else you've got in other tabs -- also fit this in." Finally, you're also going to hit memory and timeout limits governed by the browser. All of this makes it an unfair comparison!
While Node uses V8 for JS interpretation, it's not going to be doing any of that nonsense for all of your 10 million iterations -- which I would expect would absolutely speed up the Node.js run... and it doesn't really have anything to do with storing objects a different way.
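A fairer comparison is to run the same code outside the console in both environments. A minimal sketch (the file name is mine):
// fair-bench.js -- run with `node fair-bench.js`, or load it in the
// browser via a <script> tag instead of pasting into DevTools.
var map = {};
for (var i = 0; i < 10000000; i++) {
    map['' + Math.random()] = true;
}
var start = Date.now();
var found = map['doesNotExists']; // look up a key that is not there
console.log(found, (Date.now() - start) + ' ms');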

Chrome usedJSHeapSize property

First of all, I've looked around the internet and found it quite badly documented.
Somewhere in my code I have a big memory leak that I'm trying to track down, and after using:
window.performance.memory.usedJSHeapSize
it looks like the value stays at the same level of about 10 MB, which is not true, because when we compare it with the values visible here:
chrome://memory-internals/
or look at the Timeline in DevTools, we can see a big difference. Has anyone encountered a similar issue? Do I need to update these values manually (run some "update" or "measure" command, etc.)?
Following this topic:
Information heap size
it looks like this value increases in certain steps; can we somehow see what the step is, or modify it? In my case, from what I can see, the page starts at about 10 MB, 30 minutes later it is at about 400 MB, and half an hour after that the page crashes.
Any ideas guys?
(Why the code is leaking is a different issue; please treat this question as being about using this property to build some kind of test.)
There's a section of the WebPlatform.org docs that explains this:
The values are quantized as to not expose private information to attackers. If Chrome is run with the flag --enable-precise-memory-info the values are not quantized.
http://docs.webplatform.org/wiki/apis/timing/properties/memory
So, by default, the number is not precise, and it only updates every 20 minutes! This should explain why your number doesn't change. If you use the flag, the number will be precise and current.
The WebKit commit message explains:
This patch adds an option to expose quantized and rate-limited memory
information to web pages. Web pages can only learn new data every 20
minutes, which helps mitigate attacks where the attacker compares two
readings to extract side-channel information. The patch also only
reports 100 distinct memory values, which (combined with the rate
limits) makes it difficult for attackers to learn about small changes in
memory use.
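For example, a rough leak watch built on this property could look like the sketch below (assuming Chrome is started with --enable-precise-memory-info; otherwise you only see the quantized, rate-limited values described above):
// Log the used heap every 10 seconds; a steadily climbing number
// (say, 10 MB growing towards 400 MB) points at a leak.
setInterval(function () {
    if (window.performance && performance.memory) {
        var usedMiB = performance.memory.usedJSHeapSize / (1024 * 1024);
        console.log('used JS heap: ' + usedMiB.toFixed(1) + ' MiB');
    }
}, 10000);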

Can we free system memory by setting a JavaScript variable to undefined?

I am working on an internet speed test application. The site calculates internet speed using JavaScript. When downloading larger files (250 MB+), the browser crashes (Chrome, Opera, Firefox). It is saving all test files to the DOM. I saw that the memory usage of the browser goes up like crazy while testing 100 Mbps+ connections.
My question: is there any way to limit the memory usage of browsers? Or is there any limit on using the browser DOM?
After using a file, will setting it to undefined actually delete the item from system memory?
I saw this browser crash after downloading 250 MB+ of data (the next file is another 250 MB, so roughly 512 MB saved in the DOM).
You do not need specific data to test network speed. What matters is the size of the data (here, 250 MB). Also, to be sure you are testing the real speed, there is the additional requirement of not using trivial data (that is, not all zeros, nor a quickly repeating pattern).
You may generate 1 MB of random data (or whatever amount does not crash the application) and send it 250 times: the server still sees 250 MB of data, and you do not need to store anything (except that 1 MB).
This works because you are testing for speed: you do not need to check that the data sent is the same as the data received, because the integrity of your data is already ensured by the underlying TCP/IP protocol.
Concerning freeing JavaScript memory: what you can do is assign the variable another value (e.g. myVar = null;) and check that you do not hold any other reference to it. It is then eligible to be garbage collected: the interpreter may or may not free it, depending on its implementation -- you have no control over that beyond this point.
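A sketch of that idea (the /upload endpoint is hypothetical; any server that accepts POST bodies of this size would do):
// Generate ~1 MB of non-trivial data once...
var chunk = new Uint8Array(1024 * 1024);
for (var i = 0; i < chunk.length; i++) {
    chunk[i] = Math.floor(Math.random() * 256);
}

// ...and send the same buffer 250 times: the server still receives
// 250 MB, but the client never holds more than 1 MB of payload.
function send(n, done) {
    if (n === 0) { return done(); }
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/upload'); // hypothetical endpoint
    xhr.onload = function () { send(n - 1, done); };
    xhr.send(chunk);
}

var start = Date.now();
send(250, function () {
    var seconds = (Date.now() - start) / 1000;
    console.log('Upload speed: ' + (250 * 8 / seconds).toFixed(1) + ' Mbit/s');
});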

Unlimited size of local storage under IE 9?

I have just rewritten a test for HTML5 persistent storage (localStorage) capacity (the previous one built a single key in memory, so it was failing with a memory exception). I've also created a jsFiddle for it: http://jsfiddle.net/rxTkZ/4/
The testing code is a loop:
// Each key stores a 10,000-character string
var value = new Array(10001).join("a");
var i = 1;
var task = function() {
    localStorage['key_' + i] = value;
    $("#stored").text(i * 10);
    i++;
    setTimeout(task);
};
task();
The local storage capacity under IE9, as opposed to other browsers, seems to be practically unlimited: I've managed to store over 400 million characters, and the test was still running.
Is this a feature I can rely on? I'm writing an application for intranet usage, where the browser used will be IE9.
Simple answer to this: no :)
Don't paint yourself into a corner. Web Storage is not meant to store vast amounts of data. The standard only recommends 5 MB; some browsers implement that, others allow less (and considering that each char takes up 2 bytes, you effectively only get half of it).
Opera lets users adjust the size (in the 12 branch; dunno about the new WebKit-based version), but that is fully a user-initiated action.
It's not reliable when it comes to storage space. As to IE9, the unlimited capacity must be considered a temporary flaw.
If you need large space, consider the File API (where you can request a user-approved quota of tons of megabytes) or IndexedDB instead.
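If you do stay on Web Storage, treat quota overflow as an expected condition rather than relying on any particular limit. A minimal sketch:
// A defensive write: returns false instead of throwing when the
// storage quota is exceeded.
function trySetItem(key, value) {
    try {
        localStorage.setItem(key, value);
        return true;
    } catch (e) {
        // Most browsers throw a DOMException named 'QuotaExceededError'
        // here; older ones used different names and codes.
        return false;
    }
}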

Local storage read/write performance in Mozilla Firefox

I am testing how different browsers read/write large amounts of data to/from local storage. The sample data is 1500 customer records, each with some set of data (first name, last name, some IDs for their location, type, etc.). The test application is built on the GWT platform.
What I noticed is that IE8, IE9, and Chrome improved their performance by at least 30% after switching to loading data from local storage (rather than from the web server). Only Firefox (5.0) got worse (around 30% slower). A remote web server was used to bring some sort of reality into the experiment.
The difference between browsers is almost invisible on small data chunks (100-200 records), where the resulting times are about the same. But large amounts reveal the problem.
I found a mention of this issue on the Mozilla support site: https://support.mozilla.com/en-US/questions/750266
But there is still no solution or workaround there.
JavaScript profiling shows that calls to a function implemented in GWT's StorageImpl.java class
function $key(storage, index) {
    return index >= 0 && index < $wnd[storage].length ?
        $wnd[storage].key(index) : null;
}
take the lion's share of the execution time. This is effectively the storage.getItem(key) call in GWT.
To avoid these frequent calls, I would prefer a single call that translates the storage contents into a map, for example; that might save the time spent on Firefox's cache I/O operations (if any). But the Storage interface ( http://dev.w3.org/html5/webstorage/#storage-0 ) only offers the getItem() function for reading contents from the storage. Something like the one-pass snapshot below is what I have in mind.
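A rough sketch of that single pass (untested):
// Read everything once; afterwards all lookups hit the plain object
// instead of going through the Storage API on every access.
function snapshotLocalStorage() {
    var map = {};
    for (var i = 0; i < localStorage.length; i++) {
        var key = localStorage.key(i);
        map[key] = localStorage.getItem(key);
    }
    return map;
}
var cache = snapshotLocalStorage();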
Any thoughts on how to make Firefox work faster?
P.S. Maybe this will be useful for someone: I inspected Firefox's local storage contents using the SQLite Manager add-on, loading the webappstore.sqlite database from the drop-down list of default built-in databases.
Which version of Firefox are you testing? Your post on support.mozilla.org mentions Firefox 3.6.8, and you mention IE, so you are presumably on Windows, in which case you're probably hitting https://bugzilla.mozilla.org/show_bug.cgi?id=536544, which was fixed in Firefox 4. Or are you seeing the problem in a recent Firefox?
