I just tried this code in both the Chromium console and the Node.js console:
var map = {};
for (var i = 0; i < 10000000; i++) {
    var key = '' + Math.random();
    map[key] = true;
}
var time = (new Date()).getTime();
console.log(map['doesNotExists']);
console.log((new Date()).getTime() - time);
In a browser: several seconds.
In Node.js: a few milliseconds.
So I suppose Node.js uses hash-map storage and WebKit does not. Is that correct?
I also wonder whether Node.js stores all objects (even small ones) this way. Do you know if there is a storage rule that depends on object size in Node.js?
Update 2017-05-02
This is no longer true; the difference is now insignificant. My guess is that hash-map-style storage was introduced at some point since 2015.
Even though it feels very console-like... the Chromium console is a software program presenting a graphical user interface for you; it is in fact not a bare console. This program runs your commands behind the scenes. In particular, it does quite a bit of pre- and post-processing of the I/O to make the output pretty, collapsible if it's an object, etc. -- all of that detection, comparison, and even color formatting takes cycles that add up.
All that to say, there is a lot more going on there than just hitting the underlying V8 Engine.
On top of that, all of this is limited by the browser itself -- think of it more like you saying, "Hey, Chrome -- while you're doing whatever else you've got going in other tabs, also fit this in." Finally, you're also going to hit memory and timeout limits governed by the browser. All of this makes it an unfair comparison!
While Node uses V8 for JS interpretation, it's not going to be doing any of that overhead for all of your 10 million iterations -- which I would expect to speed up the Node.js run considerably... and it doesn't really have anything to do with either environment storing objects a different way.
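If you want to repeat the measurement without the DevTools machinery in the timed path, here is a minimal sketch of the same idea, simply keeping console.log out of the measured region (performance.now() is available in browsers and, in recent versions, in Node.js; older Node needs require('perf_hooks')):
// Build the map outside of the timed region.
var map = {};
for (var i = 0; i < 10000000; i++) {
    map['' + Math.random()] = true;
}

// Time only the lookup itself; log after the measurement so console
// formatting does not pollute the timing.
var start = performance.now();
var hit = map['doesNotExists'];
var elapsed = performance.now() - start;
console.log(hit, elapsed + ' ms');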
Related
I'm just making up a scenario, but let's say I have a 500 MB file that I want to present as an HTML table for the client to view. Let's say there are two scenarios:
They are viewing it via a Desktop where they have 1.2GB available memory. They can download the whole file.
Later, they try and view this same table on their phone. We detect that they only have 27MB available memory, and so give them a warning that says "We have detected that your device does not have enough memory to view the entire table. Would you like to download a sample instead?"
Ignoring things like pagination or virtual tables, I'm just concerned with whether the full dataset can fit in the user's available memory. Is this possible to detect in a browser (even with the user's confirmation)? If so, how could it be done?
Update
This question was answered about 6 years ago, and it points to an answer from 10 years ago. I'm wondering what the current state is, as browsers have changed quite a bit since then and there's also WebAssembly and such.
Use performance.memory.usedJSHeapSize. Though it is non-standard and still in development, it is enough for testing memory usage. You can try it out in Edge/Chrome/Opera, but unfortunately not in Firefox or Safari (as of writing).
Attributes (performance.memory)
jsHeapSizeLimit: The maximum size of the heap, in bytes, that is available to the context.
totalJSHeapSize: The total allocated heap size, in bytes.
usedJSHeapSize: The currently active segment of JS heap, in bytes.
Read more about performance.memory: https://developer.mozilla.org/en-US/docs/Web/API/Performance/memory.
CanIUse.com: https://caniuse.com/mdn-api_performance_memory
(CanIUse.com support table, dated 2020/01/22)
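As a rough sketch of how this could be used for the table scenario (the threefold safety factor and the 500 MB figure are assumptions for illustration, not measured values):
// Decide whether a dataset of `datasetBytes` is likely to fit in the JS heap.
// performance.memory is non-standard (Chromium only), so feature-detect it.
function canProbablyFit(datasetBytes) {
    if (!performance.memory) {
        return null; // unknown -- the API is not available in this browser
    }
    var m = performance.memory;
    var headroom = m.jsHeapSizeLimit - m.usedJSHeapSize;
    // Parsed objects usually take far more memory than the raw payload,
    // so keep a generous safety margin.
    return datasetBytes * 3 < headroom;
}

// Usage sketch:
if (canProbablyFit(500 * 1024 * 1024) === false) {
    console.warn('Dataset probably will not fit; offer a sample instead.');
}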
I ran into exactly this problem some time ago (a non-paged render of a JSON table, because we couldn't use paging, because :-( ), but the problem was even worse than what you describe:
the client having 8 GB of memory does not mean that the memory is available to the browser.
any report of "free memory" on a generic device will be, ultimately, bogus (how much is used as cache and buffers?).
even knowing exactly "how much memory is available to Javascript" leads to a maintenance nightmare because the translation formula from available memory to displayable rows involves a "memory size for a single row" that is unknown and variable between platforms, browsers, and versions.
After some heated discussions, my colleagues and I agreed that this was an XY problem. We did not want to know how much memory the client had; we wanted to know how many table rows it could reasonably and safely display.
Some tests we ran - but this was a couple of months or three before the pandemic, so September 2019, and things might have changed - showed the following interesting effect: if we rendered a table off-screen, client-side, with the same row repeated with random data, and timed how long it took to add each row, that time was roughly correlated with the device's performance and limits, and allowed a reasonable estimate of the permissible number of actual rows we could display.
I have tried to reimplement a very crude version of such a test from memory; it ran along these lines and transmitted the results through an AJAX call upon logon:
var tr = $('<tr><td>...several columns...</td></tr>');
$('body').empty();
$('body').html('<table id="foo"></table>');
var foo = $('#foo');
var s = Date.now();
var i, j, dt;
for (i = 0; i < 500; i++) {
    var t = Date.now();
    // Limit total runtime to, say, 3 seconds
    if ((t - s) > 3000) {
        break;
    }
    for (j = 0; j < 100; j++) {
        foo.append(tr.clone());
    }
    dt = Date.now() - t;
    // We are interested in when dt exceeds a given guard time
    if (0 === (i % 50)) { console.debug(i, dt); }
}
// We are also interested in the maximum attained value
console.debug(i * j);
The above is a re-creation of what became a more complex testing rig (it was assigned to a friend of mine, I don't know the details past the initial discussions). On my Firefox on Windows 10, I notice a linear growth of dt that markedly increases around i=450 (I had to increase the runtime to arrive at that value; the laptop I'm using is a fat Precision M6800). About a second after that, Firefox warns me that a script is slowing down the machine (that was, indeed, one of the problems we encountered when sending the JSON data to the client). I do remember that the "elbow" of the curve was the parameter we ended up using.
In practice, if the overall i*j was high enough (the test terminated with all the rows), we knew we need not worry; if it was lower (the test terminated by timeout), but there was no "elbow", we showed a warning with the option to continue; below a certain threshold or if "dt" exceeded a guard limit, the diagnostic stopped even before the timeout, and we just told the client that it couldn't be done, and to download the synthetic report in PDF format.
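To make that decision logic concrete, here is a rough sketch of it (the threshold values are invented for illustration; the real ones came out of testing on actual devices):
// Hypothetical thresholds -- the real values were tuned per deployment.
var GUARD_MS = 250;     // per-batch render time considered "too slow"
var WARN_ROWS = 20000;  // below this, warn before rendering everything
var MIN_ROWS = 5000;    // below this, refuse and offer the PDF report

function classifyDevice(rowsRendered, maxBatchTime, finishedAllRows) {
    if (finishedAllRows) {
        return 'ok';      // the test rendered every row within the time limit
    }
    if (rowsRendered < MIN_ROWS || maxBatchTime > GUARD_MS) {
        return 'refuse';  // tell the client to download the PDF report instead
    }
    if (rowsRendered < WARN_ROWS) {
        return 'warn';    // show a warning with the option to continue
    }
    return 'ok';
}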
You may want to use the IndexedDB API together with the Storage API:
Using navigator.storage.estimate().then((storage) => console.log(storage)) you can estimate the available storage the browser allows the site to use. Then you can decide whether to store the data in IndexedDB or to prompt the user that there is not enough storage and offer to download a sample instead.
void async function() {
    try {
        let storage = await navigator.storage.estimate();
        print(`Available: ${storage.quota / (1024 * 1024)} MiB`);
    } catch (e) {
        print(`Error: ${e}`);
    }
}();

function print(t) {
    document.body.appendChild(document.createTextNode(t));
}
(This snippet might not work in this snippet context. You may need to run this on a local test server)
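Building on that, a hedged sketch of the decision itself (the safety factor and the fullUrl/sampleUrl parameters are assumptions for illustration):
// Decide whether to cache the full dataset in IndexedDB or offer a sample.
async function chooseDownload(fullSizeBytes, fullUrl, sampleUrl) {
    const { quota = 0, usage = 0 } = await navigator.storage.estimate();
    const free = quota - usage;

    if (free > fullSizeBytes * 1.5) { // keep a safety margin
        return fullUrl;               // enough quota for the whole file
    }
    const ok = confirm('Not enough storage for the full table. Download a sample instead?');
    return ok ? sampleUrl : null;
}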
Wide Browser Support
IndexedDB will be available in the future: All browsers except Opera
Storage API will be available in the future with exceptions: All browsers except Apple and IE
Sort of.
As of this writing, there is a Device Memory specification under development. It specifies the navigator.deviceMemory property to contain a rough order-of-magnitude estimate of total device memory in GiB; this API is only available to sites served over HTTPS. Both constraints are meant to mitigate the possibility of fingerprinting the client, especially by third parties. (The specification also defines a ‘client hint’ HTTP header, which allows checking available memory directly on the server side.)
However, the W3C Working Draft is dated September 2018, and while the Editor’s Draft is dated November 2020, the changes in that time span are limited to housekeeping and editorial fixups. So development on that front seems lukewarm at best. Also, it is currently only implemented in Chromium derivatives.
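For completeness, a small sketch of what using it looks like (the 4 GiB cutoff is an arbitrary assumption):
// navigator.deviceMemory reports a coarse bucket (0.25, 0.5, 1, 2, 4 or 8 GiB),
// only on secure (HTTPS) origins and currently only in Chromium-based browsers.
const deviceGiB = navigator.deviceMemory; // undefined where unsupported

if (deviceGiB !== undefined && deviceGiB < 4) {
    console.warn('Low-memory device; consider offering the sampled dataset.');
}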
And remember: just because a client does have a certain amount of memory, it doesn't mean it is available to you. Perhaps there are other purposes for which they want to use it. Knowing that a large amount of memory is present is not permission to use it all up to the exclusion of everyone else. The best uses for this API are probably the ones described in the question: detecting whether the data we want to send might be too large for the client to handle.
This is more of a fundamental question, but the context is specifically in terms of JavaScript. Given that Math.random is not cryptographically secure, can the results still be considered secure when it has been called a certain number of times that cannot be predicted?
So if I were to generate a 32-bit number using window.crypto.getRandomValues, for example, and select one of its digits as an iteration count – calling Math.random that number of times and using the last result – is the result still predictable?
The purpose of this is to generate a set of secure random numbers between 0 and 1 (exclusive) without having the ability to manually seed Math.random.
My initial thoughts are that the result shouldn't be predictable – but I want to make sure I'm not overlooking something crucial.
Here is a simple Math.random()-style CSPRNG drop-in:
Math.randomer = function() {
    return crypto.getRandomValues(new Uint32Array(1))[0] / Math.pow(2, 32);
};

// usage demo:
alert(Math.randomer());
Unlike the unsafe Math.random(), this code is rate-limited by crypto.getRandomValues, but that's probably a good thing, and you can still get dozens of KB per second out of it.
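One caveat: dividing a single 32-bit word by 2^32 only yields 32 bits of randomness, while a JavaScript double can represent 53. If that matters to you, here is a sketch of a higher-precision variant of the same idea:
// Random double in [0, 1) with the full 53 bits of mantissa precision.
function secureRandom53() {
    var words = crypto.getRandomValues(new Uint32Array(2));
    var high = words[0] >>> 6; // top 26 bits
    var low = words[1] >>> 5;  // next 27 bits
    return (high * Math.pow(2, 27) + low) / Math.pow(2, 53);
}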
Let's start with a warning; just in case
Honestly, I'm not sure why you would want to use something beyond window.crypto.getRandomValues (or its Linux equivalent, /dev/random). If you're planning to "stretch" its output for some reason, chances are you're doing it wrong. Whatever your scenario is, don't hardcode such a seed into your script before serving it to clients. Not even if your .js file is created dynamically on the server side. That would be like shipping encrypted data together with the encryption key, voiding any security gains at the root.
That being said, let's look at your question in your line of thinking…
About your idea
The output of Math.random is insecure because it produces predictable outputs: given a sequence of outputs, an attacker can recover the internal state and predict the outputs that follow. Seeding it with a cryptographically secure seed from window.crypto.getRandomValues (or its Linux equivalent /dev/random) will not fix that problem.
As a more secure approach you might want to take a look at ChaCha20, which is a cryptographically secure stream cipher. It definitely produces more secure output than Math.random, and I've seen several pure vanilla JavaScript implementations of ChaCha20 on GitHub et al. So using something "safer" than Math.random shouldn't be too hard to implement in your script(s). Seed ChaCha20 with window.crypto.getRandomValues (or its Linux equivalent /dev/random) as you were planning to do, and you're set.
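For a sense of scale, here is a minimal sketch of the ChaCha20 block function (20 rounds, 32-bit words, as in RFC 8439), seeded directly with words from crypto.getRandomValues; treat it as an illustration, not a vetted implementation:
function rotl(x, n) {
    return ((x << n) | (x >>> (32 - n))) >>> 0;
}

function quarterRound(s, a, b, c, d) {
    s[a] = (s[a] + s[b]) >>> 0; s[d] = rotl(s[d] ^ s[a], 16);
    s[c] = (s[c] + s[d]) >>> 0; s[b] = rotl(s[b] ^ s[c], 12);
    s[a] = (s[a] + s[b]) >>> 0; s[d] = rotl(s[d] ^ s[a], 8);
    s[c] = (s[c] + s[d]) >>> 0; s[b] = rotl(s[b] ^ s[c], 7);
}

// key: Uint32Array(8), nonce: Uint32Array(3), counter: 32-bit integer.
// Returns 16 words (64 bytes) of keystream.
function chacha20Block(key, counter, nonce) {
    var state = new Uint32Array(16);
    state.set([0x61707865, 0x3320646e, 0x79622d32, 0x6b206574]); // "expand 32-byte k"
    state.set(key, 4);
    state[12] = counter;
    state.set(nonce, 13);

    var w = Uint32Array.from(state);
    for (var i = 0; i < 10; i++) { // 10 double rounds = 20 rounds
        quarterRound(w, 0, 4, 8, 12);
        quarterRound(w, 1, 5, 9, 13);
        quarterRound(w, 2, 6, 10, 14);
        quarterRound(w, 3, 7, 11, 15);
        quarterRound(w, 0, 5, 10, 15);
        quarterRound(w, 1, 6, 11, 12);
        quarterRound(w, 2, 7, 8, 13);
        quarterRound(w, 3, 4, 9, 14);
    }
    for (var j = 0; j < 16; j++) {
        w[j] = (w[j] + state[j]) >>> 0;
    }
    return w;
}

// Seed key and nonce once from the platform CSPRNG, then stream blocks,
// incrementing the counter for each new block.
var key = crypto.getRandomValues(new Uint32Array(8));
var nonce = crypto.getRandomValues(new Uint32Array(3));
var keystream = chacha20Block(key, 0, nonce);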
But…
Please note that I haven't dived into the use of JavaScript for crypto purposes itself. Doing so tends to introduce attack vectors, which is why you'd (at least) need HTTPS when your project is served online. I'll have to skip all the other related nitpicks… mainly because you didn't mention such details in your question, but also to prevent this answer from getting too broad/long. A quick search on Security.SE will enlighten you about the issues with using JavaScript for crypto.
Instead - use the Web Cryptographic API
Last but not least, I'd like to get back to what I said at the start and point out that you might as well simply use window.crypto.getRandomValues (or its Linux equivalent /dev/random) for all randomness purposes. The speed gains of not doing so are minimal in most scenarios.
Crypto is hard… don't break your neck trying to solve problems on your own. Even for JavaScript, an applicable solution already exists:
Web Cryptographic API - Example:
/* assuming that window.crypto.getRandomValues is available */
var array = new Uint32Array(10);
window.crypto.getRandomValues(array);

console.log("Your lucky numbers:");
for (var i = 0; i < array.length; i++) {
    console.log(array[i]);
}
See, most modern browsers support a minimum of the Crypto API, which allows your clients to call window.crypto.getRandomValues() from within JavaScript - which is practically a call to the system's CSPRNG (e.g. /dev/random).
The WebCrypto API was enabled by default starting in Chrome 37 (August 26, 2014)
Mozilla Firefox supports it
Internet Explorer 11 supports it
etc.
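One practical limit worth knowing: getRandomValues throws a QuotaExceededError if you request more than 65,536 bytes in a single call, so large buffers have to be filled in chunks. A small sketch:
// Fill an arbitrarily large typed array with cryptographically secure random
// bytes, respecting the 65,536-byte-per-call limit of crypto.getRandomValues.
function fillRandom(typedArray) {
    var bytes = new Uint8Array(typedArray.buffer, typedArray.byteOffset, typedArray.byteLength);
    for (var offset = 0; offset < bytes.length; offset += 65536) {
        crypto.getRandomValues(bytes.subarray(offset, offset + 65536));
    }
    return typedArray;
}

var big = fillRandom(new Uint32Array(1 << 20)); // 4 MiB of random data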
Some final words regarding polyfills
If you really must support outdated browsers, decent polyfills can close the gap. But when it comes to security, both "using old browsers" and "using polyfills" are nightmares waiting to go wrong. Instead, be professional and educate clients about the fact that it's easier to upgrade to a newer browser than to pick up polyfills and the problems that come with them.
Murphy's Law applies here: When using polyfills for security/cryptography, what can go wrong will go wrong!
In the end, it's always better to be safe and not use polyfills just to support some outdated browsers than to be sorry when stuff hits the fan. A browser update will cost your client a few minutes. A cryptographic polyfill that fails ruins your reputation forever. Remember that!
I am developing a web-based interface for a hardware sensor that produces approx. 250kB/s of raw data (125 kS/s, 16 bit per sample). The web application is designed to visualize (using Canvas) and store (using IndexedDB) this data in real-time. I am having performance issues with indexedDB storage.
This application is designed to run for days or even weeks and should reliably store large amounts of data (tens to the low hundreds of MB)
Because write commits seem to be the general performance issue, I have rewritten my application to only store a big chunk of data every 5 seconds as a non-sparse integer array object. This kind of works, but I still get very choppy visualization performance, high CPU and high memory usage. The exact storage code:
// dataDB = IndexedDB database opened in another function
// slice = data to be stored (a multidimensional array)
// sessionID = object store key
// This function is called about once every 5 seconds
// with 700 000 values in the slice array.
function storeFastData(slice, sessionID) {
    var s = dataDB.transaction(["fastData"], "readwrite").objectStore("fastData");
    var fdreq = s.get(sessionID);
    fdreq.onsuccess = function(e) {
        var d = fdreq.result;
        for (var i = 0; i < slice.length; i++) {
            d.data[i][1] = slice[i][1];
        }
        s.put(d);
    };
}
Concretely:
Is IndexedDB the right choice for this application?
Am I being an idiot in the way I implemented this? This is the first IndexedDB-based project I have done.
I have read that using Web Workers may at least fix the stuttering issues, as they can run on another thread. Would this solve my performance problems?
I am willing to use new (draft) functionality, but requiring user interaction for storage beyond 5MB (e.g. using Quota Management API) every time the application is opened, is quite bothersome and if at all possible I would like to avoid this.
I do not use jquery. This cannot be written as a native application (it has to run inside a browser).
IndexedDB is an excellent choice for your case. If you store data as soon as it is available, quickly and frequently, it should be OK. Don't wait 5 seconds; store right away at roughly 200 ms intervals. Generally an IndexedDB write operation takes about 20 ms.
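A hedged sketch of what that could look like (the store name, record shape and currentSessionID are assumptions; the point is appending small records frequently instead of get-then-put on one large record):
// Buffer incoming samples and flush them as small, append-only records
// roughly every 200 ms instead of rewriting one large record every 5 s.
var pending = [];

function onSamples(samples) {
    pending.push.apply(pending, samples);
}

setInterval(function flush() {
    if (pending.length === 0) return;
    var batch = pending;
    pending = [];
    var store = dataDB.transaction(["fastData"], "readwrite").objectStore("fastData");
    // Assumes the object store uses an auto-incrementing primary key.
    store.add({ sessionID: currentSessionID, t: Date.now(), samples: batch });
}, 200);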
I have just rewritten a test for HTML5 persistent storage (localStorage) capacity (the previous one built a single key in memory, so it was failing with a memory exception). I've also created a jsFiddle for it: http://jsfiddle.net/rxTkZ/4/
The testing code is a loop:
var value = new Array(10001).join("a"); // a 10,000-character string
var i = 1;
var task = function() {
    localStorage['key_' + i] = value;
    $("#stored").text(i * 10); // update the progress display
    i++;
    setTimeout(task);
};
task();
The localStorage capacity under IE9, as opposed to other browsers, seems to be practically unlimited - I've managed to store over 400 million characters, and the test was still running.
Is this a feature I can rely on? I'm writing an application for intranet usage, where the browser that will be used is IE9.
Simple answer to this: no :)
Don't paint yourself into a corner. Web Storage is not meant to store vast amounts of data. The standard only recommends 5 MB; some browsers implement that, others less (and considering that each character takes up 2 bytes, you effectively only get half of that).
Opera lets users adjust the size (the 12 branch; I don't know about the new WebKit-based version), but that is fully a user-initiated action.
It's not reliable when it comes to storage space. As for IE9, its behavior must be considered a temporary flaw.
If you need large space, consider the File API (where you can request a user-approved quota of tons of megabytes) or IndexedDB instead.
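If you do stay on Web Storage, don't assume any particular limit; here is a minimal sketch that simply catches the quota error when it occurs:
// Try to store a value and report whether the quota has been exhausted,
// instead of relying on a browser-specific limit.
function trySetItem(key, value) {
    try {
        localStorage.setItem(key, value);
        return true;
    } catch (e) {
        // Modern browsers throw QuotaExceededError; older ones used other names.
        return false;
    }
}

if (!trySetItem('key_test', new Array(10001).join('a'))) {
    console.warn('localStorage quota reached; fall back to IndexedDB or the File API.');
}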
I made a pretty CPU-intensive webpage with lots of CSS3 and JavaScript. I want to use JavaScript to test whether the user's computer is capable of handling the scripts. I think a possible method is to run some CPU-intensive scripts and see how long they take. However, I don't know how to actually implement this.
Here's the webpage: http://leojiang.me/ (3D cube only viewable in webkit browsers).
You can profile how long it takes to render a frame, or a couple of frames; that should give you an idea of what the FPS would be on the client.
var StartTime = new Date().getTime();
BenchMarkTestFunction(); // render a frame, for example
var EndTime = new Date().getTime();
var ElapsedMilliseconds = EndTime - StartTime;

var AcceptableTime = 1000; // one second; some number considered acceptable performance
var IsGoodPerformance = ElapsedMilliseconds < AcceptableTime;
if (!IsGoodPerformance) {
    alert("Sorry, your browser is not good enough to run this site - go somewhere else");
}
You can determine what the AcceptableTime should be by testing your site on different browsers/devices and seeing how it performs and what the value for ElapsedMilliseconds was.
Barring setting localStorage to run a script (essentially hacking a user's machine -- please don't do this), I don't believe you can do anything except find the OS and architecture. I feel as if I've seen this in Flash, but strictly JS will not find the speed. I agree with Scott. If your potential users could have issues, redesign. Otherwise, my i5 was entirely happy with the site. Good luck!
There are ways to assess the CPU or graphics capabilities of the host computer using JavaScript. For example, you could run a set of iterations of those operations and measure the time from beginning to end.
In general, it's not that useful to just try to measure a single CPU performance number as it's much more important to measure exactly what your critical operations are.
For example, if you're concerned with a certain type of graphics rendering, you can do a sample animation and see how many frames can be rendered in a particular time.
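A hedged sketch of that idea using requestAnimationFrame (the one-second window and the 30 FPS cutoff are arbitrary assumptions):
// Count how many animation frames the browser renders in roughly one second
// while the page's normal animations are running.
function measureFps(callback) {
    var frames = 0;
    var start = performance.now();

    function tick(now) {
        frames++;
        if (now - start < 1000) {
            requestAnimationFrame(tick);
        } else {
            callback(frames * 1000 / (now - start));
        }
    }
    requestAnimationFrame(tick);
}

measureFps(function(fps) {
    if (fps < 30) {
        console.warn('Low frame rate (' + fps.toFixed(1) + ' fps); consider a simplified version of the page.');
    }
});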