Performance issues with IndexedDB storing 250kB/s streaming data - javascript

I am developing a web-based interface for a hardware sensor that produces approx. 250 kB/s of raw data (125 kS/s, 16 bits per sample). The web application is designed to visualize (using Canvas) and store (using IndexedDB) this data in real time. I am having performance issues with the IndexedDB storage.
This application is designed to run for days or even weeks and should reliably store large amounts of data (tens to the low hundreds of MB).
Because write commits seem to be the main performance issue, I have rewritten my application to only store a big chunk of data every 5 seconds as a non-sparse integer array object. This kind of works, but I still get very choppy visualization performance, high CPU and high memory usage. The exact storage code:
// dataDB = IndexedDB database opened in another function
// slice = data to be stored (a multidimensional array of about 700 000 values)
// sessionID = key of the record in the object store
// this function is called about once every 5 seconds
function storeFastData(slice, sessionID) {
    var s = dataDB.transaction(["fastData"], "readwrite").objectStore("fastData");
    var fdreq = s.get(sessionID);
    fdreq.onsuccess = function(e) {
        var d = fdreq.result;
        for (var i = 0; i < slice.length; i++) {
            d.data[i][1] = slice[i][1];
        }
        s.put(d);
    };
}
Concretely:
Is IndexedDB the right choice for this application?
Am I being an idiot in the way I implemented this? This is my first IndexedDB-based project.
I have read that using Web Workers may at least fix the stuttering issues, since the storage can run on another thread. Would this solve my performance problems?
I am willing to use new (draft) functionality, but requiring user interaction for storage beyond 5 MB (e.g. using the Quota Management API) every time the application is opened is quite bothersome, and if at all possible I would like to avoid this.
I do not use jQuery. This cannot be written as a native application (it has to run inside a browser).

IndexedDB is an excellent choice for your use case. If you store the data as soon as it is available, in small and frequent writes, it should be OK. Don't wait 5 seconds; store right away, at around a 200 ms interval. An IndexedDB write operation generally takes about 20 ms.
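A minimal sketch of that approach, assuming the "fastData" object store was created with { autoIncrement: true } (the record shape and function name are illustrative, not from the original post):
// Append each small slice as its own record instead of reading back
// and rewriting one ever-growing record with get()/put().
function appendSlice(slice, sessionID) {
    var tx = dataDB.transaction(["fastData"], "readwrite");
    var store = tx.objectStore("fastData");
    // One small, self-contained record per write keeps commits cheap.
    store.add({ sessionID: sessionID, timestamp: Date.now(), samples: slice });
    tx.onerror = function(e) { console.error("write failed", e); };
}
Reading a session back is then a matter of iterating the store (for example with a cursor or an index on sessionID) and concatenating the slices.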

Related

Possible to check 'available memory' within a browser?

I'm just making up a scenario, but let's say I have a 500 MB file whose data I want to present to the client as an HTML table. Let's say there are two scenarios:
They are viewing it on a desktop where they have 1.2 GB of available memory. They can download the whole file.
Later, they try to view this same table on their phone. We detect that they only have 27 MB of available memory, and so give them a warning that says "We have detected that your device does not have enough memory to view the entire table. Would you like to download a sample instead?"
Ignoring things like pagination or virtual tables, I'm just concerned about whether the full dataset can fit in the user's available memory. Is this possible to detect in a browser (even with the user's confirmation)? If so, how could this be done?
Update
This question was answered about 6 years ago, and it points to an answer from 10 years ago. I'm wondering what the current state is, as browsers have changed quite a bit since then and there's also WebAssembly and such.
Use performance.memory.usedJSHeapSize. Though it is non-standard and in development, it will be enough for testing the memory used. You can try it out in Edge/Chrome/Opera, but unfortunately not in Firefox or Safari (as of writing).
Attributes (performance.memory)
jsHeapSizeLimit: The maximum size of the heap, in bytes, that is available to the context.
totalJSHeapSize: The total allocated heap size, in bytes.
usedJSHeapSize: The currently active segment of JS heap, in bytes.
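A minimal usage sketch (the property is non-standard, so it is guarded with a feature check; values are reported in bytes):
// Log the current JS heap usage in Chromium-based browsers.
// performance.memory is absent in Firefox and Safari.
if (performance.memory) {
    var mem = performance.memory;
    console.log("Used JS heap:  " + (mem.usedJSHeapSize / 1048576).toFixed(1) + " MiB");
    console.log("Total JS heap: " + (mem.totalJSHeapSize / 1048576).toFixed(1) + " MiB");
    console.log("Heap limit:    " + (mem.jsHeapSizeLimit / 1048576).toFixed(1) + " MiB");
} else {
    console.log("performance.memory is not supported in this browser.");
}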
Read more about performance.memory: https://developer.mozilla.org/en-US/docs/Web/API/Performance/memory.
CanIUse.com: https://caniuse.com/mdn-api_performance_memory
(CanIUse.com support table, as of 2020/01/22)
I ran into exactly this problem some time ago (a non-paged render of a JSON table, because we couldn't use paging, because :-( ), but the problem was even worse than what you describe:
- the client having 8 GB of memory does not mean that the memory is available to the browser.
- any report of "free memory" on a generic device will be, ultimately, bogus (how much is used as cache and buffers?).
- even knowing exactly "how much memory is available to JavaScript" leads to a maintenance nightmare, because the translation formula from available memory to displayable rows involves a "memory size for a single row" that is unknown and variable between platforms, browsers, and versions.
After some heated discussions, my colleagues and I agreed that this was an XY problem. We did not want to know how much memory the client had, we wanted to know how many table rows it could reasonably and safely display.
Some tests we ran - but this was a couple of months or three before the pandemic, so September 2019, and things might have changed - showed the following interesting effect: if we rendered off-screen, client-side, a table with the same row repeated with random data, and timed how long it took to add each row, this time was roughly correlated with the device's performance and limits, and allowed a reasonable estimate of the permissible number of actual rows we could display.
I have tried to reimplement a very crude version of that test from memory; it ran along these lines and transmitted the results through an AJAX call upon logon:
var tr = $('<tr><td>...several columns...</td></tr>');
$('body').empty();
$('body').html('<table id="foo"></table>');
var foo = $('#foo');
var s = Date.now();
for (var i = 0; i < 500; i++) {
    var t = Date.now();
    // Limit total runtime to, say, 3 seconds
    if ((t - s) > 3000) {
        break;
    }
    for (var j = 0; j < 100; j++) {
        foo.append(tr.clone());
    }
    var dt = Date.now() - t;
    // We are interested in when dt exceeds a given guard time
    if (0 === (i % 50)) { console.debug(i, dt); }
}
// We are also interested in the maximum attained value
console.debug(i * j);
The above is a re-creation of what became a more complex testing rig (it was assigned to a friend of mine, I don't know the details past the initial discussions). On my Firefox on Windows 10, I notice a linear growth of dt that markedly increases around i=450 (I had to increase the runtime to arrive at that value; the laptop I'm using is a fat Precision M6800). About a second after that, Firefox warns me that a script is slowing down the machine (that was, indeed, one of the problems we encountered when sending the JSON data to the client). I do remember that the "elbow" of the curve was the parameter we ended up using.
In practice, if the overall i*j was high enough (the test terminated with all the rows), we knew we need not worry; if it was lower (the test terminated by timeout), but there was no "elbow", we showed a warning with the option to continue; below a certain threshold or if "dt" exceeded a guard limit, the diagnostic stopped even before the timeout, and we just told the client that it couldn't be done, and to download the synthetic report in PDF format.
You may want to use the IndexedDB API together with the Storage API:
Using navigator.storage.estimate().then((storage) => console.log(storage)) you can estimate the available storage the browser allows the site to use. Then you can decide whether to store the data in IndexedDB or to prompt the user that there is not enough storage and offer to download a sample instead.
void async function() {
    try {
        let storage = await navigator.storage.estimate();
        print(`Available: ${storage.quota / (1024 * 1024)} MiB`);
    } catch (e) {
        print(`Error: ${e}`);
    }
}();

function print(t) {
    document.body.appendChild(document.createTextNode(t));
}
(This snippet might not work in an embedded snippet context; you may need to run it on a local test server.)
Wide browser support
IndexedDB is (or will be) available in all browsers except Opera.
The Storage API is (or will be) available in all browsers except Apple's Safari and IE.
Sort of.
As of this writing, there is a Device Memory specification under development. It specifies the navigator.deviceMemory property to contain a rough order-of-magnitude estimate of total device memory in GiB; this API is only available to sites served over HTTPS. Both constraints are meant to mitigate the possibility of fingerprinting the client, especially by third parties. (The specification also defines a ‘client hint’ HTTP header, which allows checking available memory directly on the server side.)
However, the W3C Working Draft is dated September 2018, and while the Editor’s Draft is dated November 2020, the changes in that time span are limited to housekeeping and editorial fixups. So development on that front seems lukewarm at best. Also, it is currently only implemented in Chromium derivatives.
And remember: just because a client does have a certain amount of memory, it doesn't mean it is available to you. Perhaps there are other purposes for which they want to use it. Knowing that a large amount of memory is present is not permission to use it all up to the exclusion of everyone else. The best uses for this API are probably like the ones specified in the question: detecting whether the data we want to send might be too large for the client to handle.
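A minimal feature-checked sketch of that detection (the 1 GiB threshold is an illustrative assumption):
// navigator.deviceMemory reports a coarse estimate of total device RAM
// in GiB (rounded to values such as 0.5, 1, 2, 4, 8).
// Chromium-only and HTTPS-only at the time of writing.
if ('deviceMemory' in navigator) {
    if (navigator.deviceMemory < 1) {
        console.log("Low-memory device: offer a sampled download instead.");
    } else {
        console.log("Device reports roughly " + navigator.deviceMemory + " GiB of RAM.");
    }
} else {
    console.log("Device Memory API not supported; assume the worst case.");
}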

How do I efficiently add items to an array in the Chrome Storage API?

From what I understand, if you want to have an array stored in the Chrome Storage API to which you want to continually add items, you need something like this:
function addToHistory(url) {
    chrome.storage.sync.get('history', function(obj) {
        var history = obj.hasOwnProperty('history') ? obj.history : [];
        history.push(url);
        chrome.storage.sync.set({'history': history}, function() {
            if (chrome.runtime.lastError)
                console.log(chrome.runtime.lastError);
            else
                console.log("History saved successfully");
        });
    });
}
This code bothers me; loading and then saving the same array every time you push a single item onto the end is horribly inefficient (especially if your history array starts getting several thousand entries).
Is there a more efficient way of doing this? I'm assuming I'm not the first to want to push to an array, so is there already a recommended way of achieving this?
I don't think that the chrome.storage.sync API is ideal for what you need. Basically, this API is great for remembering user preferences or other simple, short data.
The sync API has usage limitations such as:
- 102.4 KB for all data kept in the storage
- 8 KB of data per item
- 1,800 writes per hour
So if you're planning to use this API to store historical data, the app may exceed those limits very quickly.
I'm assuming you are developing an extension, not an app. In an app you have access to the chrome.syncFileSystem API, which can be used to save syncable data in a file.
Answering your question: there's no single way to optimize your function. You can try to store the data periodically - for example every 30 seconds or so. You just need to remember to save the data when the user closes the app.
You can also keep this value as a variable in memory and save it when the user leaves the screen or closes the app, but that is risky because the app may close before the asynchronous write completes (see the sketch below).
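A rough sketch of that batching idea (the function and key names are illustrative; chrome.storage.local is used here because it has far more relaxed write limits than sync):
// Buffer history entries in memory and flush them in one write,
// instead of doing a get/set round trip per item.
var historyBuffer = [];
var flushTimer = null;

function addToHistory(url) {
    historyBuffer.push(url);
    if (!flushTimer) {
        // Flush at most every 30 seconds.
        flushTimer = setTimeout(flushHistory, 30000);
    }
}

function flushHistory() {
    flushTimer = null;
    if (historyBuffer.length === 0) return;
    var pending = historyBuffer;
    historyBuffer = [];
    chrome.storage.local.get('history', function(obj) {
        var history = obj.history || [];
        chrome.storage.local.set({'history': history.concat(pending)}, function() {
            if (chrome.runtime.lastError) console.log(chrome.runtime.lastError);
        });
    });
}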
Anyway, I think this API is not the best solution for your app.

Can localStorage slow down my website when used frequently?

I'm developing an HTML5 game and I need to know if updating localStorage properties frequently can slow down the page.
I'm currently storing my hero's position in four localStorage properties (two for the current position and two for the previous position, used by a collision detection system) and updating them on a 1-second interval, but I want to update them at 60 fps so that every hero movement is saved.
Can using localStorage at that frequency cause performance issues?
Local storage stores the data on your user's hard drive. It takes a bit longer to read and write to the hard drive than it does to RAM.
The conclusion to take away from this is that you could optimize your performance by reading from local storage on start up and only write to it when the user logs out.
Now, whether or not that optimization will significantly affect your project is something you'll have to figure out, and, as R3tep said, http://jsperf.com/ is a good solution.
My advice, though, is to go with the optimization anyway, simply because it's unsatisfying to have a program run more slowly than it could for no good reason.
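A minimal sketch of that read-once, write-on-exit pattern (the key name and fields are illustrative):
// Read the saved state once at startup...
var heroState = JSON.parse(localStorage.getItem('heroState') || '{}');

// ...mutate it freely in memory inside the 60 fps game loop...
function updateHero(x, y) {
    heroState.prevX = heroState.x;
    heroState.prevY = heroState.y;
    heroState.x = x;
    heroState.y = y;
}

// ...and only touch localStorage when the page is going away.
window.addEventListener('beforeunload', function() {
    localStorage.setItem('heroState', JSON.stringify(heroState));
});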
Save your data to an object {} and write it to localStorage only when changes have stopped coming in (debounced) or when the user is going away (the onunload event):
var DATA = {},
    syncTimer;

function syncFunction() {
    localStorage.setItem('myData', JSON.stringify(DATA));
}

function someHandler() {
    // Some handler that changes your DATA;
    // it can be called many times per second.
    // While calls keep coming in, the pending sync is postponed.
    if (syncTimer) {
        clearTimeout(syncTimer);
    }
    // Change the data
    DATA.somefield = 'some data';
    // If no further change arrives within 100 ms, save it
    syncTimer = setTimeout(syncFunction, 100);
}
window.onunload = syncFunction;
P.S. Compare saving to a variable against saving to storage; syncing to storage is much more expensive.

JavaScript performance when handling large arrays

I'm currently writing an image editing program in JavaScript. I've chosen JS because I wanted to learn more about it. The average image I'm handling is about 3000 x 4000 pixels. When converted into imageData (for editing the pixels), that adds up to 48,000,000 values I have to deal with. That's why I decided to introduce web workers and let each of them edit only the n-th part of the array. Assuming I have ten web workers, each worker has to deal with 4,800,000 values.
To be able to use web workers I'm dividing the big array by the number of workers I've chosen. The piece of code I use looks like this:
while (pixelArray.length > 0) {
    cD.pixelsSliced.push(pixelArray.splice(0, chunks)); // Chop off a chunk from the picture array
}
Later, after the workers have done something with the array, they save it into another array. Each worker has an ID and saves its part in the mentioned array at the position of its ID (to make sure the parts stay in the correct order). I use $.map to flatten that array of arrays (looking like [[1231][123213123][213123123]]) into one big array [231231231413431], from which I will later create the imageData I need. It looks like this:
cD.newPixels = jQuery.map(pixelsnew, function(n) {
    return n;
});
After this array (cD.pixelsSliced) is created I create imageData and copy this image into the imageData-Object like so:
cD.imageData = cD.context.createImageData(cD.width, cD.height);
for (var i = 0; i < cD.imageData.data.length; i += 4) { // Build imageData
    cD.imageData.data[i + eD.offset["r"]] = cD.newPixels[i + eD.offset["r"]];
    cD.imageData.data[i + eD.offset["g"]] = cD.newPixels[i + eD.offset["g"]];
    cD.imageData.data[i + eD.offset["b"]] = cD.newPixels[i + eD.offset["b"]];
    cD.imageData.data[i + eD.offset["a"]] = cD.newPixels[i + eD.offset["a"]];
}
Now I do realize that I'm dealing with a huge amount of data here and that I probably shouldn't use the browser for image editing, but a different language (I'm using Java at uni). However, I was wondering if you have any tips regarding the performance, because frankly I was pretty surprised when I tried a big image for the first time. I didn't expect it would take that long to load the image (first piece of code). Firefox actually thinks my script is broken. The other two pieces of code are the ones I found to slow down the script (which is expected). So yeah, I would be thankful for any tips.
Thank you
I would recommend looking into Transferable Objects instead of Structured Cloning when using Web Workers. Web Workers normally use structured cloning to pass objects, in other words a copy is made. This can take loads of time for large objects such as large images.
When using Transferable Objects data is transferred from one context to another. In other words, zero-copy, which should improve the performance of sending data to a Worker.
For more info check:
http://www.w3.org/html/wg/drafts/html/master/infrastructure.html#transferable-objects
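A minimal sketch of transferring the pixel buffer to a worker with zero-copy (the worker file name and message shape are illustrative):
// main.js - transfer the pixel buffer instead of structured-cloning it.
var worker = new Worker('pixel-worker.js');   // illustrative file name
var buffer = cD.imageData.data.buffer;        // ArrayBuffer behind the pixels

// The second argument lists ArrayBuffers to transfer (zero-copy).
// After this call the buffer is detached in the main thread.
worker.postMessage({ pixels: buffer, width: cD.width, height: cD.height }, [buffer]);

worker.onmessage = function(e) {
    // The worker transfers the edited buffer back when it is done.
    var edited = new Uint8ClampedArray(e.data.pixels);
    var out = cD.context.createImageData(cD.width, cD.height);
    out.data.set(edited);
    cD.context.putImageData(out, 0, 0);
};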
Also, another idea would perhaps be to move the task of splitting and putting back together the large array into a web worker.
Just brainstorming here, but you could first spawn a web worker, let's call it the mother worker. This worker could split the array and then spawn 10 other child workers that perform the heavy-duty task and send their parts back to the mother (a rough sketch follows).
The mother finally puts it all back together and sends it back to the main application.
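A rough sketch of that mother/child layout (file names are illustrative, and note that nested workers are not available in every browser):
// mother-worker.js - splits the work, fans it out to child workers,
// reassembles the result and sends it back to the page.
self.onmessage = function(e) {
    var pixels = new Uint8ClampedArray(e.data.pixels);
    var workerCount = 4;                                   // illustrative
    var chunkSize = Math.ceil(pixels.length / workerCount);
    var results = new Array(workerCount);
    var finished = 0;

    for (var w = 0; w < workerCount; w++) {
        var child = new Worker('child-worker.js');         // illustrative file name
        var chunk = pixels.slice(w * chunkSize, (w + 1) * chunkSize);
        child.onmessage = (function(index, childRef) {
            return function(ev) {
                results[index] = new Uint8ClampedArray(ev.data.pixels);
                childRef.terminate();
                if (++finished === workerCount) {
                    // Stitch the chunks back together in order.
                    var out = new Uint8ClampedArray(pixels.length);
                    var offset = 0;
                    for (var k = 0; k < workerCount; k++) {
                        out.set(results[k], offset);
                        offset += results[k].length;
                    }
                    self.postMessage({ pixels: out.buffer }, [out.buffer]);
                }
            };
        })(w, child);
        // Hand each chunk to its child by transfer as well.
        child.postMessage({ pixels: chunk.buffer, id: w }, [chunk.buffer]);
    }
};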

Unlimited size of local storage under IE 9?

I have just rewritten a test for HTML5 persistent storage (localStorage) capacity (the previous one built a single key in memory, so it was failing with a memory exception). I've also created a jsFiddle for it: http://jsfiddle.net/rxTkZ/4/
The testing code is a loop:
var value = new Array(10001).join("a");
var i = 1;
var task = function() {
    localStorage['key_' + i] = value;
    $("#stored").text(i * 10);
    i++;
    setTimeout(task);
};
task();
The localStorage capacity under IE9, as opposed to other browsers, seems to be practically unlimited - I've managed to store over 400 million characters, and the test was still running.
Is this a feature I can rely on? I'm writing an application for intranet usage, where the browser that will be used is IE 9.
Simple answer to this: no :)
Don't paint yourself into a corner. Web Storage is not meant to store vast amounts of data. The standard only recommends 5 MB; some browsers implement that, others less (and considering that each character takes up 2 bytes, you effectively only get half of that).
Opera lets users adjust the size (in the 12 branch; I don't know about the new WebKit-based version), but that is fully a user-initiated action.
It's not reliable when it comes to storage space. As for IE9, this must be considered a temporary flaw.
If you need a lot of space, consider the File API (where you can request a user-approved quota of tons of megabytes) or IndexedDB instead.
