I am using a jQuery plugin called jQuery Phoenix. It is a plugin that makes using localStorage with forms very easy.
My question:
This script auto-saves every second. I have a form with around 275 fields and it saves all of the fields.
(I know saving that many fields that often is a bit overkill, but it's the default setting. I'm going to change it to save on an onblur event, but it will still be saving 275 fields every time the person changes fields.)
If it is saving that often, will I run into any type of performance issues in browsers?
I do not know much about localStorage or how it affects performance, especially when saving this many fields of data that often.
As has been mentioned, you can optimise your use of localStorage a fair bit from what is proposed, but that isn't the question.
LocalStorage is pretty fast, as confirmed in some tests written about here:
https://gomakethings.com/how-fast-is-vanilla-js-localstorage/
They were seeing speeds of around 12ms to write an object of 10,000 values and read it back again.
I had a play with storing each item on its own; for 10,000 values that tended to take between 70-100ms, but when you're 'only' dealing with around 300 values it's less than 2ms. This was the same whether the values were strings or integers.
Here's the code I used, which is largely based on the code in the linked article:
// Timestamp before the test
var start = performance.now();

// Set/get data to localStorage
var count = 10000;
for (var i = 0; i < count; i++) {
  localStorage.setItem(`perfTest_${i}`, i);
  var result = localStorage.getItem(`perfTest_${i}`);
  if (parseInt(result) !== i) {
    console.error(`${result} !== ${i}`);
  }
}

// Timestamp after the test
var end = performance.now();

// Duration of the test
console.log('It took ' + (end - start) + 'ms.');
The easiest way to run these tests is just to paste them into your browser's console!
Performance will surely differ, and it can become an issue, because the internal implementation of localStorage is browser-specific.
There is also a time difference between the first read and subsequent reads.
Besides, localStorage has a limited capacity (though it can be changed).
Also, you won't want to stringify the data before saving; that would be inefficient.
Note: if all 275 fields are not changing simultaneously, you can save only the fields that changed -- see the sketch below.
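As an illustration, a minimal sketch of saving just the one field that changed, on blur, independent of jQuery Phoenix (the form id and storage key prefix are made-up names for this sketch):

var form = document.getElementById('myForm');

form.addEventListener('blur', function (e) {
  // Save only the field that lost focus, rather than all 275 fields.
  var field = e.target;
  if (field.name) {
    localStorage.setItem('formCache_' + field.name, field.value);
  }
}, true); // useCapture: blur does not bubble, so catch it in the capture phase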
I'm using a nested loop to iterate through a list of window attributes and delete attributes based on a criterion. Since this is a nested loop, execution takes about 0.5 seconds, and we have to make it quicker and bring it down to milliseconds. The problem is that we have a lot of automation scripts for our regression tests, and because the document object takes time to reload during the iteration, the automation test pack fails to find elements in the HTML and throws errors in most cases.
We asked the automation team to put a Thread.sleep or wait between every action they perform, but since they have more than 200 test scenarios it's difficult for them to add time delays to every single action, so it has come back to the devs to improve the performance.
Please suggest the best solution.
I tried multiple iteration approaches and timed them, but the traditional for loop seems to beat the rest.
Snippet:
function resetWindow() {
  const ALL_WIN_KEYS = Object.keys(window);
  for (let i = 0; i < ALL_WIN_KEYS.length; i++) {
    let matchFound = false;
    for (let j = 0; j < DEFAULT_WIN_KEYS.length; j++) {
      if (ALL_WIN_KEYS[i] == DEFAULT_WIN_KEYS[j]) {
        matchFound = true;
        break;
      }
    }
    if (!matchFound) {
      delete window[ALL_WIN_KEYS[i]];
    }
  }
}
Note: DEFAULT_WIN_KEYS is a const declared globally.
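For comparison, the quadratic inner scan can be replaced with a Set so each membership check is O(1) instead of O(m); a sketch, assuming the same global DEFAULT_WIN_KEYS array:

function resetWindow() {
  // Build the lookup once, instead of scanning DEFAULT_WIN_KEYS for every window key.
  const defaultKeySet = new Set(DEFAULT_WIN_KEYS);
  for (const key of Object.keys(window)) {
    if (!defaultKeySet.has(key)) {
      delete window[key];
    }
  }
}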
I wonder if any of you can help me with this? I have developed a web application that uses an IndexedDB database with 95 object stores. The DB installs fine in Chrome, but usually produces an error in IE (where I would like to get it running, for reasons I won't discuss here). The problem is the creation of the 95 object stores in the onupgradeneeded handler (even if I put no data in them). Here is an example of the code:
encode(NoofCKlistItems, "NoofCKlistItems");
encode(pgtxt, "pgtxt");
// there are 95 statements like the ones above

function encode(j_ThisField, StoreName) {
  objectStore = dbInterview.createObjectStore(StoreName);
  for (i in j_ThisField) {
    itemnumber = parseInt(j_ThisField[i][0], 10);
    objectStore.put(j_ThisField[i][1], itemnumber);
  }
}
I cannot think of a way around this. I have tried chaining the 95 calls to the encode function with callbacks, but no change. I have tried putting a delay at the end of the encode function, but then it loses the transaction. The only things that reliably work are (a) to halve the number of object stores, or (b) to put an alert at the end of the encode function, so the program has to stop at every variable (so you have to click 95 times!).
I have not found a way of adding to the object stores once the DB is installed. An upgrade event removes everything and starts again, so the problem remains.
Any ideas?
Many thanks
Steve Moss
I have solved the problem. I simply put a check at the end of the encode routine to see if the last of the 95 variables is being written to the IndexedDB. If it is the last one, an alert is fired to say the download is complete. This seems to give IE the breathing space it needs to finish processing. So the encode routine becomes:
function encode(j_ThisField, StoreName) {
  objectStore = dbInterview.createObjectStore(StoreName);
  for (i in j_ThisField) {
    itemnumber = parseInt(j_ThisField[i][0], 10);
    objectStore.put(j_ThisField[i][1], itemnumber);
  }
  counter++;
  if (StoreName === <THE LAST VARIABLE NAME>) { alert('complete'); }
}
It works every time.
Sorry, I missed a crucial line. I also had to add an alert for when the counter reaches 45. This relates to having found that IE copes with half of the 90 variables, but not the whole 90. So the encode routine is:
function encode(j_ThisField, StoreName) {
  objectStore = dbInterview.createObjectStore(StoreName);
  for (i in j_ThisField) {
    itemnumber = parseInt(j_ThisField[i][0], 10);
    objectStore.put(j_ThisField[i][1], itemnumber);
  }
  counter++;
  if (counter === 45) { alert('Click to continue downloading'); }
  if (StoreName === <THE LAST VARIABLE NAME>) { alert('Click to complete the download'); }
}
Thanks for your suggestions
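For reference, the standard IndexedDB pattern is to create all object stores inside the single versionchange transaction that onupgradeneeded provides, and existing stores are preserved across version upgrades, so only missing stores need creating. A minimal sketch (the database name, version number, and structure here are illustrative, not the asker's actual code):

// 'interviewDB' is a made-up database name for this sketch.
var request = indexedDB.open('interviewDB', 2); // bumping the version fires onupgradeneeded

request.onupgradeneeded = function (event) {
  var db = event.target.result;
  // Existing stores survive an upgrade; only create the ones that are missing.
  ['NoofCKlistItems', 'pgtxt' /* ...and the rest of the 95 */].forEach(function (name) {
    if (!db.objectStoreNames.contains(name)) {
      db.createObjectStore(name);
    }
  });
};

request.onsuccess = function (event) {
  var db = event.target.result;
  // Populate the stores in ordinary readwrite transactions here.
};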
What I want to do: Group all the like elements on a page (of a certain kind) into an object which I can later iterate on -- or apply sweeping changes to every element within.
My code is successful at accomplishing the given task, but when the number of elements grows to 200-300+ the performance drops off drastically, and users have noticed. I have isolated the offending lines of code and want to know if there is another way of accomplishing the same task.
The add() function appears to be the problematic operation, based on timers I have placed around it. At first the operation takes 0.001s, but that grows until, at around 300 elements, it takes ~0.1s for each additional element AND continues slowing down.
I have researched jQuery performance-enhancing techniques and have implemented a few of them (namely 3), but they have not given me any meaningful performance increase. Amazingly, this code completes within 1 second (!) in Firefox (300+ calls to add()), while Chrome and IE take roughly 10-20x longer or more...
Here is my code:
rowsToChange = $([]);

// Grab all the ids greater than whichever one I'm currently looking at:
var arr = $.makeArray($("[id^=stackLocatorLinkFillUp]:gt(" + (uniqueID - 1) + ")"));

for (var i = 0; i < arr.length; i++) {
  $this = arr[i];

  // <<< VARIOUS CONDITIONALS that make this as selective as possible REMOVED >>>

  startTimer = new Date().getTime();
  // **************************
  // PROBLEMATIC LINE FOLLOWS when 200+ records:
  rowsToChange = rowsToChange.add($this);
  // Grows from .001 to .1xx after 300 iterations
  console.log("innertiming:" + (new Date().getTime() - startTimer) / 1000);
  // **************************
}
The end result looks like this (via Chrome Inspector):
[<div style="display:none" id="stackLocatorLinkFillUp1">itemType=BOUND&ccLocale=PERIODICAL</div>,
<div style="display:none" id="stackLocatorLinkFillUp2">itemType=BOUND&ccLocale=PERIODICAL</div>,
...
]
Eventually I process all these as follows (which I love the simplicity of!):
var superlink = "...new <a> goodness to display for all elements...";
rowsToChange.html(superlink).css("display","block");
This looked like it could be a valid solution (different add method?) but I would prefer to continue gathering a list of objects together so that the last line can work its magic.
(Edit: 'am not i am' pointed out that the following is not true regarding concatenation -- thanks, 'am not i am'.)
It seemed like the add() operation must be concatenating strings, since that appears to be one of the main problems others face. But transforming my add() statement into += doesn't work.
Thanks for checking this out;
Chrome: 18.0.1025.142 m
Firefox: 11.0
IE: 8.0.7600.16385
First observation: add() does not modify the existing set; it returns a new jQuery object containing the previous element set plus the new element, so every call copies everything collected so far. Try rowsToChange = jQuery.merge(rowsToChange, [$this]); instead, which appends to the existing array in place.
Second observation: it seems as though rowsToChange will end up being the exact same element set as the one you called $.makeArray on. Why not just save the original set?
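To illustrate the second observation, a sketch that builds the final set in a single pass with .filter(), assuming the removed conditionals can be expressed as a predicate (shouldChange is a hypothetical name standing in for them):

// Build the jQuery set once; no repeated copying via add().
var rowsToChange = $("[id^=stackLocatorLinkFillUp]:gt(" + (uniqueID - 1) + ")")
  .filter(function () {
    // shouldChange() stands in for the conditionals removed from the question.
    return shouldChange(this);
  });

rowsToChange.html(superlink).css("display", "block");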
DCoder shows how to appropriately merge the information together if you are using a for loop. However, if you come here and are using a .each() loop, use what follows.
The main difference is whether the brackets around the second argument to jQuery.merge() are necessary, which depends on the structure of 'this': inside .each(), $(this) is already array-like and can be passed directly, whereas the raw DOM element in the for loop version had to be wrapped in an array. It also seems to be generally accepted that .each() is at least slightly slower than the native JavaScript for loop (evidence from 2009; timing test copied from the question above).
var $this, rowsToChange = $([]);

// slower than a for loop
$("[id^=stackLocatorLinkFillUp]:gt(" + (uniqueID - 1) + ")").each(function () {
  // If statements <removed> that decide whether or not to include in the new container
  $this = $(this); // probably unnecessary under most situations
  rowsToChange = jQuery.merge(rowsToChange, $this);
});
Operate on every piece of the new sub-group decided upon by the removed if statements!
rowsToChange.html("...");
Thanks to everyone who viewed the question, took the time to answer, voted it up, etc.!
Let's make it immediately clear: this is not a question about a memory leak!
I have a page which allows the user to enter some data and a JavaScript to handle this data and produce a result.
The JavaScript produces incremental outputs on a DIV, something like this:
(function () {
  var newdiv = document.createElement("div");
  newdiv.innerHTML = produceAnswer();
  result.appendChild(newdiv);

  if (done) {
    return;
  } else {
    setTimeout(arguments.callee, 0);
  }
})();
Under certain circumstances the computation will produce so much data that IE8 fails with a "not enough storage" error.
The question is:
is there a way I can work out how much data is too much data?
As I said, there is no bug to solve. It's a genuine out-of-memory condition, because the computation requires creating too many HTML elements.
My idea would be to run a function before executing the computation, to work out ahead of time whether the browser will succeed. But to do that in a generic way, I think I need to find the memory available to my browser.
Any suggestion is welcome.
JavaScript in the browser runs in a sandbox, which means it is fenced off from accessing things that could cause security issues, such as local files, system resources, etc. - so no, you can't detect memory usage.
As the other answers state, you can make the task easier for the browser by pausing between iterations or using less resource-intensive code, but every browser has its limits.
Have a play with this (note that performance.memory is non-standard and only exposed by Chrome-based browsers)...
document.write(performance.memory.jsHeapSizeLimit+'<br><br>');
document.write(performance.memory.usedJSHeapSize+'<br><br>');
document.write(performance.memory.totalJSHeapSize);
A loop will use less memory than recursion.
do {
  var newdiv = document.createElement("div");
  newdiv.innerHTML = produceAnswer();
  result.appendChild(newdiv);
} while (!done);
You could also put some upper limit on the number of answers produced.
var answerCount = 0;

do {
  var newdiv = document.createElement("div");
  newdiv.innerHTML = produceAnswer();
  result.appendChild(newdiv);
} while (!done && answerCount++ < 1000);
I suspect having a 0 timeout delay is the problem - it's trying to re-run instantly. Try increasing this.
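A sketch combining that suggestion with batching, reusing produceAnswer(), done, and result from the question (the batch size of 50 and the 50ms delay are arbitrary choices, not tested values):

(function renderBatch() {
  // Build up to 50 answers per tick inside a fragment (one DOM insertion per batch).
  var fragment = document.createDocumentFragment();
  for (var i = 0; i < 50 && !done; i++) {
    var newdiv = document.createElement("div");
    newdiv.innerHTML = produceAnswer();
    fragment.appendChild(newdiv);
  }
  result.appendChild(fragment);

  if (!done) {
    setTimeout(renderBatch, 50); // a real delay gives the browser breathing room
  }
})();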
On one page of my website the user has the ability to choose and remove up to 2000 items through selecting multiple string representations of them in a dropdown list.
On page load, the objects are loaded onto the page from a previous session into 7 different drop-down lists.
In the window.onload event, the function that loops through the items in the drop-downs builds an internal collection of the objects by adding them to a global array. This makes the page ridiculously slow to load, so I'm fairly certain I'm doing it wrong!
How else am I supposed to store these variables?
This is their internal representation:
function Permission(PName, DCID, ID) {
  this.PName = PName;
  this.DCID = DCID;
  this.ID = ID;
}
where PName is a string, DCID is an int, and ID is an int.
EDIT:
Thanks for the quick replies! I appreciate the help, I'm not great with JS! Here is more information:
'selectChangeEvent' is attached to the change and click events of the drop-down list.
function selectChangeEvent(e) {
  //...
  addListItem(id);
  //...
}
'addListItem(id)' sets up the visual representation of the objects and then calls:
function addListObject(x, idOfCaller) {
  var arIDOfCaller = idOfCaller.toString().split('-');
  if (arIDOfCaller[0] == "selLocs") {
    var loc = new AccessLocation(x, arIDOfCaller[1]);
    arrayLocations[GlobalIndexLocations] = loc;
    GlobalIndexLocations++;
    totalLocations++;
  } else {
    var perm = new Permission(x, arIDOfCaller[1], arIDOfCaller[2]);
    arrayPermissions[GlobalIndexPermissions] = perm;
    GlobalIndexPermissions++;
    totalPermissions++;
  }
}
Still not enough to go on, but there are some small improvements I can see.
Instead of this pattern:
var loc = new AccessLocation(x, arIDOfCaller[1]);
arrayLocations[GlobalIndexLocations] = loc;
GlobalIndexLocations++;
totalLocations++;
which seems to involve redundant counters and has surplus assignment operations, try:
arrayLocations[arrayLocations.length] = new AccessLocation(x, arIDOfCaller[1]);
and just use arrayLocations.length where you would refer to GlobalIndexLocations or totalLocations (which, from the code above, would seem to always be the same value).
That should gain you a little boost, but this is not your main problem. I suggest you add some debugging Date objects to work out where the bottleneck is.
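A minimal sketch of that timing suggestion (where exactly you wrap the suspected code is up to you):

var t0 = new Date().getTime();

// ... the suspected bottleneck, e.g. the onload loop that calls addListObject ...

console.log('Suspected bottleneck took ' + (new Date().getTime() - t0) + 'ms');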
You may want to consider a design change to support the load. Some sort of paged result set or similar, to cut down on the number of concurrent records being modified.
As much as we desperately want them to be, browsers aren't quite there yet in terms of script execution speed that allow us to do certain types of heavy lifting on the client.
While I haven't tested this idea, I figured I'd throw it out there - might it be faster to return a JSON string from the server side, where your array is fully calculated on that side?
From that point, I'd wager that eval()'ing it (as evil as this may be) might be fast enough to where you could then write the contents onto the page, and your array setup would already be taken care of.
Then again, I suppose the amount of work it'd take the browser to construct the 2k new objects and inject them into the DOM wouldn't necessarily help the speed side of things in the end. At the end of the day, a design change is probably necessary, but sometimes we're stuck with what we've got, eh?
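For what it's worth, a sketch of that server-side idea using JSON.parse (via $.getJSON) rather than eval(); the endpoint is hypothetical, and the field names simply mirror the Permission constructor from the question:

// Fetch the precomputed array as JSON and hydrate it in one pass.
// '/permissions.json' is a made-up endpoint for this sketch.
$.getJSON('/permissions.json', function (data) {
  var arrayPermissions = $.map(data, function (p) {
    return new Permission(p.PName, p.DCID, p.ID);
  });
  // arrayPermissions is now ready without looping over drop-down DOM nodes.
});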