I want to know how much RAM is consumed by the currently open tab
in the browser,
and how much RAM is still available.
I am looking for actual RAM usage, not the JS or heap memory.
I want to show an alert to the user if their available RAM is low.
If anyone has an idea how to achieve this in JavaScript, please let me know.
Thanks
You can't get that information from browser-hosted JavaScript. The closest you can get is the Device Memory API (see also MDN), which is experimental, intentionally imprecise, and may even lie to you.¹ But even that doesn't give you what you've said you want; it just tells you the total amount of memory the device has (maybe, if it's not lying).
¹ "...may even lie to you." - The API clamps the values to "...protect the privacy of owners of very low- or high-memory devices" (MDN). The exact bounds values used are up to the implementation in the browser; the spec currently recommends a lower bound of 0.25GiB and an upper bound of 8GiB.
Related
I'm just making up a scenario, but let's say I have a 500 MB file for which I want to provide an HTML table so the client can view the data. Let's say there are two scenarios:
They are viewing it on a desktop where they have 1.2 GB of available memory. They can download the whole file.
Later, they try to view this same table on their phone. We detect that they only have 27 MB of available memory, and so give them a warning that says "We have detected that your device does not have enough memory to view the entire table. Would you like to download a sample instead?"
Ignoring things like pagination or virtual tables, I'm just concerned with whether the full dataset can fit in the user's available memory. Is this possible to detect in a browser (even with the user's confirmation)? If so, how could this be done?
Update
This question was answered about 6 years ago, and it points to an answer from 10 years ago. I'm wondering what the current state is, as browsers have changed quite a bit since then and there's also WebAssembly and such.
Use performance.memory.usedJSHeapSize. Though it is non-standard and still in development, it will be enough for testing the memory used. You can try it out on Edge/Chrome/Opera, but unfortunately not on Firefox or Safari (as of writing).
Attributes (performance.memory)
jsHeapSizeLimit: The maximum size of the heap, in bytes, that is available to the context.
totalJSHeapSize: The total allocated heap size, in bytes.
usedJSHeapSize: The currently active segment of JS heap, in bytes.
Read more about performance.memory: https://developer.mozilla.org/en-US/docs/Web/API/Performance/memory.
CanIUse.com: https://caniuse.com/mdn-api_performance_memory
(CanIUse.com support table, as of 2020/01/22)
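As an illustration, here is a minimal sketch of the kind of check this enables (Chromium-only; note these numbers describe the JS heap, not the device's RAM, and the 90% threshold is made up for the example):

// Sketch, assuming a Chromium browser where performance.memory exists.
function heapUsageRatio() {
    const mem = performance.memory;
    if (!mem) return null; // Firefox/Safari: the property is absent
    return mem.usedJSHeapSize / mem.jsHeapSizeLimit;
}

const ratio = heapUsageRatio();
if (ratio !== null && ratio > 0.9) { // 0.9 is an arbitrary guard value
    console.warn('JS heap is above 90% of its limit.');
}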
I ran into exactly this problem some time ago (a non-paged render of a JSON table, because we couldn't use paging, because :-( ), but the problem was even worse than what you describe:
the client having 8 GB of memory does not mean that the memory is available to the browser.
any report of "free memory" on a generic device will be, ultimately, bogus (how much is used as cache and buffers?).
even knowing exactly "how much memory is available to Javascript" leads to a maintenance nightmare because the translation formula from available memory to displayable rows involves a "memory size for a single row" that is unknown and variable between platforms, browsers, and versions.
After some heated discussions, my colleagues and I agreed that this was an XY problem. We did not want to know how much memory the client had; we wanted to know how many table rows it could reasonably and safely display.
Some tests we ran - but this was two or three months before the pandemic, so September 2019, and things might have changed - showed the following interesting effect: if we rendered, off-screen and client-side, a table with the same row repeated with random data, and timed how long it took to add each row, that time correlated roughly with the device's performance and limits, and allowed a reasonable estimate of the number of rows we could actually display.
I have tried to reimplement a very crude such test from memory; it ran along these lines, and it transmitted the results through an AJAX call upon logon:
var tr = $('<tr><td>...several columns...</td></tr>');
$('body').empty();
$('body').html('<table id="foo"></table>');
var foo = $('#foo');
var s = Date.now();
var i, j, dt;
for (i = 0; i < 500; i++) {
    var t = Date.now();
    // Limit total runtime to, say, 3 seconds
    if ((t - s) > 3000) {
        break;
    }
    // Append rows in batches of 100
    for (j = 0; j < 100; j++) {
        foo.append(tr.clone());
    }
    dt = Date.now() - t;
    // We are interested in when dt exceeds a given guard time
    if (i % 50 === 0) { console.debug(i, dt); }
}
// We are also interested in the maximum attained value
console.debug(i * j);
The above is a re-creation of what became a more complex testing rig (it was assigned to a friend of mine; I don't know the details past the initial discussions). On my Firefox on Windows 10, I notice a linear growth of dt that markedly increases around i=450 (I had to increase the runtime to arrive at that value; the laptop I'm using is a fat Precision M6800). About a second after that, Firefox warns me that a script is slowing down the machine (that was, indeed, one of the problems we encountered when sending the JSON data to the client). I do remember that the "elbow" of the curve was the parameter we ended up using.
In practice, if the overall i*j was high enough (the test terminated with all the rows), we knew we need not worry; if it was lower (the test terminated by timeout), but there was no "elbow", we showed a warning with the option to continue; below a certain threshold or if "dt" exceeded a guard limit, the diagnostic stopped even before the timeout, and we just told the client that it couldn't be done, and to download the synthetic report in PDF format.
You may want to use the IndexedDB API together with the Storage API:
Using navigator.storage.estimate().then((storage) => console.log(storage)) you can estimate the available storage the browser allows the site to use. Then you can decide whether to store the data in IndexedDB or to prompt the user that there is not enough storage and offer a sample to download instead.
// Async IIFE: query the quota estimate and print it.
void async function() {
    try {
        let storage = await navigator.storage.estimate();
        // quota is in bytes; convert to MiB for display
        print(`Available: ${storage.quota / (1024 * 1024)} MiB`);
    } catch (e) {
        print(`Error: ${e}`);
    }
}();

function print(t) {
    // Append the text to the document body
    document.body.appendChild(document.createTextNode(t));
}
(This snippet might not work in an embedded snippet context; you may need to run it on a local test server.)
Wide browser support
IndexedDB is supported in all major browsers except Opera.
The Storage API is supported in all major browsers except Safari and IE.
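To make the first half of that concrete, here is a minimal IndexedDB sketch (the database and store names, 'sample-db' and 'rows', are made up for illustration):

// Open (or create) a database and store one record in it.
const req = indexedDB.open('sample-db', 1);
req.onupgradeneeded = function() {
    // Runs only on first open (or version bump): create the object store
    req.result.createObjectStore('rows', { keyPath: 'id' });
};
req.onsuccess = function() {
    const db = req.result;
    const tx = db.transaction('rows', 'readwrite');
    tx.objectStore('rows').put({ id: 1, payload: 'first chunk of the table' });
    tx.oncomplete = function() { db.close(); };
};
req.onerror = function() {
    console.error('IndexedDB open failed:', req.error);
};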
Sort of.
As of this writing, there is a Device Memory specification under development. It specifies the navigator.deviceMemory property to contain a rough order-of-magnitude estimate of total device memory in GiB; this API is only available to sites served over HTTPS. Both constraints are meant to mitigate the possibility of fingerprinting the client, especially by third parties. (The specification also defines a ‘client hint’ HTTP header, which allows checking available memory directly on the server side.)
However, the W3C Working Draft is dated September 2018, and while the Editor’s Draft is dated November 2020, the changes in that time span are limited to housekeeping and editorial fixups. So development on that front seems lukewarm at best. Also, it is currently only implemented in Chromium derivatives.
And remember: just because a client does have a certain amount of memory, that doesn't mean it is available to you. Perhaps there are other purposes for which they want to use it. Knowing that a large amount of memory is present is not permission to use it all up to the exclusion of everyone else. The best uses for this API are probably the ones described in the question: detecting whether the data we want to send might be too large for the client to handle.
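For the server-side route, a rough sketch of the client-hint exchange (the header names come from the specification; the Express server around them is my assumption, and /data is a made-up endpoint):

// Sketch assuming Node.js with Express; HTTPS is required for the hint.
const express = require('express');
const app = express();

app.get('/data', (req, res) => {
    // Ask the browser to include the hint on subsequent requests.
    res.set('Accept-CH', 'Device-Memory');
    const gib = parseFloat(req.get('Device-Memory')); // e.g. "8"; undefined on the first request
    if (!Number.isNaN(gib) && gib < 1) {
        res.send('sample dataset for low-memory devices');
    } else {
        res.send('full dataset');
    }
});

app.listen(3000);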
Is there any limit to the heap size in the Chrome memory profile?
Note: this is a Chrome-only answer; see below for why.
You should take a look at window.performance.memory in Chrome Dev Tools, there is a jsHeapSizeLimit attribute.
However, I'm not sure this will be the maximum value on the memory profiler's y-axis.
You can find more information on MDN: https://developer.mozilla.org/en-US/docs/Web/API/Window/performance
performance.memory:
A non-standard extension added in Chrome.
Previously linked from MDN (not anymore): https://webplatform.github.io/docs/apis/timing/properties/memory/
Note: This property is read-only.
console.log(performance.memory)
// Would show, for example
//{
// jsHeapSizeLimit: 767557632,
// totalJSHeapSize: 58054528,
// usedJSHeapSize: 42930044
//}
Notes
usedJSHeapSize is the total amount of memory being used by JS objects, including V8 internal objects; totalJSHeapSize is the current size of the JS heap, including free space not occupied by any JS objects. This means usedJSHeapSize cannot be greater than totalJSHeapSize. Note that this does not necessarily mean there have ever been totalJSHeapSize bytes of live JS objects at once.
See the WebKit Patch for how the quantized values are exposed. The tests in particular help explain how it works.
Be careful: the values are expressed without units, because there are none. This is because WebKit does not want to expose system information such as available memory size. It only provides a way to compare memory usage (for instance, between two different versions of a website).
In theory, the memory limit (not localStorage, but actual memory) is bounded only by the amount of RAM on the system. In practice, most web browsers place a limit on each window (for example, 200 MB). Sometimes the limit is customizable by the user. Additionally, the operating system can limit the amount of memory used by an application.
First of all, I've looked around the internet and found this quite badly documented.
Somewhere in my code I have a big memory leak that I'm trying to track and after using:
window.performance.memory.usedJSHeapSize
it looks like the value remains at the same level of about 10 MB, which cannot be right, because when we compare it with the values visible at
chrome://memory-internals/
or with the Timeline in DevTools, we can see a big difference. Has anyone encountered a similar issue? Do I need to update these values manually (run some "update" or "measure" command, etc.)?
Following this topic:
Information heap size
it looks like this value is increased in certain steps; can we somehow see what the step is, or modify it? In my case, from what I can see now, the page uses about 10 MB, 30 minutes later it will be about 400 MB, and half an hour after that the page will crash.
Any ideas guys?
(Why the code is leaking is a different issue; please treat this one as my attempt to use this value to build some kind of test.)
There's a section of the WebPlatform.org docs that explains this:
The values are quantized as to not expose private information to attackers. If Chrome is run with the flag --enable-precise-memory-info the values are not quantized.
http://docs.webplatform.org/wiki/apis/timing/properties/memory
So, by default, the number is not precise, and it only updates every 20 minutes! This should explain why your number doesn't change. If you use the flag, the number will be precise and current.
The WebKit commit message explains:
This patch adds an option to expose quantized and rate-limited memory
information to web pages. Web pages can only learn new data every 20
minutes, which helps mitigate attacks where the attacker compares two
readings to extract side-channel information. The patch also only
reports 100 distinct memory values, which (combined with the rate
limits) makes it difficult for attackers to learn about small changes in
memory use.
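With the flag enabled (and with the caveats above in mind), a crude leak watch along these lines becomes feasible; the interval and the MB formatting are arbitrary choices for the sketch:

// Sample the heap size periodically and log the delta between samples,
// to see whether usage trends steadily upward (a leak-shaped curve).
let last = 0;
setInterval(() => {
    if (!performance.memory) return; // non-Chromium: nothing to sample
    const used = performance.memory.usedJSHeapSize;
    console.log('heap: ' + (used / 1048576).toFixed(1) + ' MB, delta: ' + (used - last));
    last = used;
}, 10000); // every 10 seconds; without the flag, expect quantized, rarely-updated values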
I have just re-written a test for HTML5 persistent storage (localStorage) capacity (the previous one created a single key in memory, so it was failing with a memory exception). I've also created a jsFiddle for it: http://jsfiddle.net/rxTkZ/4/
The testing code is a loop:
// Each value is 10,000 characters long
var value = new Array(10001).join("a");
var i = 1;
var task = function() {
    localStorage['key_' + i] = value;
    // Show progress: i * 10 is the number of thousands of characters stored
    $("#stored").text(i * 10);
    i++;
    // Schedule the next write asynchronously so the page stays responsive
    setTimeout(task);
};
task();
The localStorage capacity under IE9, as opposed to other browsers, seems to be practically unlimited: I've managed to store over 400 million characters, and the test was still running.
Is this a feature I can rely on? I'm writing an application for intranet use, where the browser in use will be IE 9.
Simple answer to this: no :)
Don't paint yourself into a corner. Web Storage is not meant to store vast amounts of data. The standard only recommends 5 MB; some browsers implement that, others less (and considering that each character takes up 2 bytes, you effectively only get half of that).
Opera lets users adjust the size (in the 12 branch; I don't know about the new WebKit-based version), but that is fully a user-initiated action.
It's not a reliable storage when it comes to storage space. As for IE9, this must be considered a temporary flaw.
If you need large space, consider the File API (where you can request a user-approved quota of tons of megabytes) or IndexedDB instead.
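Relatedly, if you want a rough idea of how much of that quota a page is already using, a sketch like the following works under the usual assumption that each UTF-16 code unit costs 2 bytes:

// Estimate current localStorage usage by summing key and value lengths.
function localStorageBytes() {
    let chars = 0;
    for (let i = 0; i < localStorage.length; i++) {
        const key = localStorage.key(i);
        chars += key.length + localStorage.getItem(key).length;
    }
    return chars * 2; // assumes 2 bytes per character
}
console.log('~' + (localStorageBytes() / 1024).toFixed(1) + ' KiB used');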
I just ran into a very interesting issue when someone posted a jsperf benchmark that conflicted with a previous, nearly identical, benchmark I ran.
Chrome does something drastically different between these two lines:
new Array(99999); // jsperf ~50,000 ops/sec
new Array(100000); // jsperf ~1,700,000 ops/sec
benchmarks: http://jsperf.com/newarrayassign/2
I was wondering if anyone has any clue as to what's going on here!
(To clarify, I'm looking for low-level details on the V8 internals, such as whether it uses a different data structure for one versus the other, and what those structures are.)
Just because this sounded pretty interesting, I searched the V8 codebase for a constant defined as 100000, and found the kInitialMaxFastElementArray variable, which is subsequently used in the built-in ArrayConstructInitializeElements function. While I'm not a C++ programmer and don't know the nitty-gritty here, you can see that it uses an if statement to check whether the requested size is smaller than 100,000, and returns at different points based on that.
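If you want to see the cliff for yourself, here is a quick and decidedly unscientific timing sketch (the iteration count is arbitrary, and results will vary by machine and V8 version):

// Time repeated allocations just below and at the suspected threshold.
function timeArrayAlloc(n, iterations) {
    let sink; // keep a reference so the allocation isn't trivially optimized away
    const start = Date.now();
    for (let k = 0; k < iterations; k++) {
        sink = new Array(n);
    }
    return Date.now() - start;
}

console.log('99999:', timeArrayAlloc(99999, 10000), 'ms');
console.log('100000:', timeArrayAlloc(100000, 10000), 'ms');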
Well, there is always some threshold number when you design algorithms that adapt to the size of the data (for example, SharePoint changes the way it works when you add 1,000 items to a list). So the guess would be that you have found the actual number, and the performance differs because different data structures or algorithms are used.
I don't know what operating system you're using, but if this is Linux, I'd suspect that Chrome (i.e. malloc) is allocating memory from a program-managed heap (size determined using the sbrk system call, and the free lists are managed by the C standard library), but when you reach a certain size threshold, it switches to using mmap to ask the kernel to allocate large chunks of memory that don't interfere with the sbrk-managed heap.
Doug Lea describes how malloc works in the GNU C Library, better than I could. He wrote it.
Or maybe 100000 hits some kind of magic threshold for the amount of space needed that it triggers the garbage collector more frequently when trying to allocate memory.