Let's make it immediately clear: this is not a question about a memory leak!
I have a page that lets the user enter some data, and JavaScript that processes this data and produces a result.
The JavaScript writes incremental output to a DIV, something like this:
(function()
{
    var newdiv = document.createElement("div");
    newdiv.innerHTML = produceAnswer();
    result.appendChild(newdiv);
    if (done) {
        return;
    } else {
        setTimeout(arguments.callee, 0);
    }
})();
Under certain circumstances the computation will produce so much data that IE8 will fail with a "not enough storage" message.
The question is: is there a way I can work out how much data is too much?
As I said, there is no bug to solve. It's a genuine out-of-memory condition, because the computation requires creating too many HTML elements.
My idea is to run a function before starting the computation to work out ahead of time whether the browser will succeed. But to do that in a generic way, I think I need to find out how much memory is available to the browser.
Any suggestion is welcome.
JavaScript in the browser runs in a sandbox, which means it is fenced off from anything that could cause security issues, such as local files and system resources. So no, you can't detect memory usage.
As the other answers state, you can make the task easier for the browser by pausing between iterations or using less resource-intensive code, but every browser has its limits.
Have a play with this (note that performance.memory is a non-standard API, currently exposed only by Chromium-based browsers):
document.write(performance.memory.jsHeapSizeLimit+'<br><br>');
document.write(performance.memory.usedJSHeapSize+'<br><br>');
document.write(performance.memory.totalJSHeapSize);
A loop will use less memory than recursion.
do
{
    var newdiv = document.createElement("div");
    newdiv.innerHTML = produceAnswer();
    result.appendChild(newdiv);
} while (!done);
You could also put some upper limit on the number of answers produced.
var answerCount = 0;
do
{
    var newdiv = document.createElement("div");
    newdiv.innerHTML = produceAnswer();
    result.appendChild(newdiv);
} while (!done && answerCount++ < 1000);
I suspect the 0ms timeout delay is the problem: it tries to re-run instantly. Try increasing it.
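Combining those suggestions, here is a minimal sketch that renders answers in batches and yields to the browser with a non-zero delay between batches. It assumes produceAnswer, done and result exist as in the question; the batch size and delay are illustrative values to tune.

(function renderBatch() {
    var batchSize = 50; // illustrative; tune for your data
    var fragment = document.createDocumentFragment();
    for (var i = 0; i < batchSize && !done; i++) {
        var newdiv = document.createElement("div");
        newdiv.innerHTML = produceAnswer();
        fragment.appendChild(newdiv);
    }
    result.appendChild(fragment); // one DOM insertion per batch
    if (!done) {
        setTimeout(renderBatch, 50); // a non-zero delay lets the browser breathe
    }
})();

Using a named function expression instead of arguments.callee also avoids a construct that is forbidden in strict mode.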
Related
I am using nested loops to iterate through a list of window attributes and delete each attribute that matches a criterion. Since this is a nested loop, the execution takes 0.5 seconds; we have to make it quicker and bring it down to milliseconds. The problem is that we have a lot of automation scripts in our regression tests, and since the document object takes time to update during the iteration, the automation test pack fails to find elements in the HTML and throws errors in most cases.
We asked the automation team to leave a sleep or wait between every action they perform, but since they have more than 200 test scenarios it's difficult for them to add time delays to every single action, so it has come back to the devs to improve the performance.
Please suggest the best solution.
I tried multiple variations to check the execution time, but a traditional for loop seems to beat the rest.
Snippet:
function resetWindow(){
    const ALL_WIN_KEYS = Object.keys(window);
    for(let i = 0; i < ALL_WIN_KEYS.length; i++){
        let matchFound = false;
        for(let j = 0; j < DEFAULT_WIN_KEYS.length; j++){
            if(ALL_WIN_KEYS[i] == DEFAULT_WIN_KEYS[j]){
                matchFound = true;
                break;
            }
        }
        if(!matchFound){
            delete window[ALL_WIN_KEYS[i]];
        }
    }
}
Note: DEFAULT_WIN_KEYS is a const declared globally.
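The quadratic part is the inner scan of DEFAULT_WIN_KEYS for every window key. Here is a minimal sketch of the same function using a Set for O(1) lookups, assuming an ES2015-capable browser and the global DEFAULT_WIN_KEYS from the note above:

// Build the lookup once; Set#has is O(1), replacing the O(m) inner loop.
const DEFAULT_WIN_KEY_SET = new Set(DEFAULT_WIN_KEYS);

function resetWindow() {
    for (const key of Object.keys(window)) {
        if (!DEFAULT_WIN_KEY_SET.has(key)) {
            delete window[key];
        }
    }
}

This turns the O(n*m) comparison into O(n+m), which for a few thousand keys should land comfortably in the millisecond range.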
Take the following html:
<div id="somediv" style="display:none;"></div>
<script>
document.getElementById("somediv").style.display = 'none';
</script>
somediv is already hidden, so the JavaScript runs but effectively does nothing. I need code that detects when style.display has been used in JavaScript, regardless of whether the style was actually changed.
I've tried MutationObserver:
var observer = new MutationObserver(function (mutations) {
    mutations.forEach(function (mutationRecord) {
        alert(mutationRecord.target.id);
    });
});
observer.observe(document.getElementById("somediv"), { attributes: true, attributeFilter: ['style'] });
The above only triggers when there is an actual style change. I need it to trigger whether or not the value changed.
So I did come up with an answer. The way it works is: you grab every script tag, replace .style.display with your own function, and finally replace the DOM node (which is the real trick):
//loop through <script> tags
$('script').each(function () {
    var scripthtml = $(this).html();
    if (scripthtml.indexOf('style.display') != -1) {
        // Escape the dots (an unescaped "." matches any character) and let one
        // regex cover all the quote/spacing variants of: .style.display = 'none'
        scripthtml = scripthtml.replace(/\.style\.display\s*=\s*(['"])none\1/g, '.customdisplay($1none$1)');
        $(this).replaceWith('<script>' + scripthtml + '</script>');
    }
});
Now here is my .style.display replacement function:
HTMLElement.prototype.customdisplay = function (showhide) {
    //insert whatever code you want to execute
    this.style.display = showhide;
    alert('Success! .style.display has been detected!');
};
.replaceWith is what actually changes the DOM. The only thing this script doesn't do is look through included JavaScript files. Thank you all for your comments <3.
UPDATE:
When using replaceWith to add the script tag, iPad/iPhone/iPod will execute the script tag a second time. To prevent this double execution, you need to do this:
$(this).replaceWith('<script>if (1==0){' + scripthtml + '}</script>');
Your functions will be valid, but anything outside of the function will not be executed.
I don't think you fully realise what you are asking for, but here is a short description of the closest options I am aware of:
https://blog.sessionstack.com/how-javascript-works-tracking-changes-in-the-dom-using-mutationobserver-86adc7446401
Basically, MutationObserver is the major improvement over past approaches here, the closest to what you want, and supported in modern browsers. But the whole point of it is that it listens for changes.
You are basically asking to detect even non-changes. I don't see you getting out of this problem other than by:
Writing a wrapper for the ways you change this in code, and then calling the wrapper instead of making the change directly. Simple, easy, requires refactoring all the calls in code (see the sketch after this list).
Overwriting the actual functions that make the change. This saves you the refactoring, but you are playing with fire here: rewriting a well-known function at the global level is a PERMANENT source of problems for you and all developers who work on the project.
Polling: calling a check over and over on some element. Not only does it detect changes with a lag of anywhere from zero to the polling interval, it also burns resources, and if you want to monitor everything you will have to write a recursive walk that descends through the whole DOM and checks each node. You are going to kill performance with this: either the polling interval gets long, increasing the detection lag, or your performance dives like a grey falcon.
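For the first option, here is a minimal sketch of such a wrapper; setDisplay and the onUse callback are names invented for illustration:

// A thin wrapper around the assignment. Call this instead of setting
// el.style.display directly; every use is then observable, changed or not.
function setDisplay(el, value, onUse) {
    if (typeof onUse === 'function') {
        onUse(el, value); // fires even when value equals the current display
    }
    el.style.display = value;
}

// Usage, replacing `somediv.style.display = 'none';`:
setDisplay(document.getElementById('somediv'), 'none', function (el, v) {
    console.log(el.id + ' display set to ' + v);
});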
I have 1 big question for you:
What led you to the state where you need this?
You basically want the ability for a program to detect when it is using a specific part of itself (in this case a core part, one implemented by the browser).
This sounds similar to the request: "hey, whenever I change or don't change a value in any variable in my code, I should be able to react to it."
I am not trying to sound like Naggin' Nancy here, but I encourage you to question the train of thought that led to this being something you need, and to figure out whether you want to sink further time into it, because I don't think you will get what you desire easily, and I suspect the need arose from poor design decisions in the past.
UPDATE: After having the code out in the wild for a while, I found two issues, one on Citrix and one on iOS Safari. Both seem to be around the use of eval. The Citrix issue could be resolved by updating the CSS content to:
emailerrormessage = 'please enable Javascript'; ahref.attr('data-edomain') + '\0040' + ahref.attr('data-ename');
The iOS Safari issue was not something I managed to resolve; I ended up cutting out the CSS element altogether.
EDIT: I'm opening up this question for recommendations on why my solution might be considered bad programming in general, and on whether anyone else has a better way of obfuscating emails through a combination of CSS, HTML and JS. I've added an answer with my own findings, but haven't marked it as the answer in case someone else has better insight into this technique.
I've been tasked with obfuscating email addresses on some webpages. After encountering an answer suggesting the use of CSS and data attributes, I tried implementing it myself and found that getting the produced email address back into a mailto link was impossible without some JavaScript. The next problem was that the JavaScript I used to grab the rendered email address was not cross-browser: some browsers retrieve the literal "attr(data-xx)" instead of the actual value. However, I still like the idea of producing a solution that spans HTML, JS and CSS for maximum complexity. The last resort was to store a line of JS in the CSS content property, and use eval to produce the final email address.
Obfuscation is not supposed to be pretty, but I want to know whether what I've done potentially compromises security or performance by introducing eval() and/or storing JS in CSS. I haven't found another example of someone doing something similar (maybe for good reason).
My HTML is
<a class="redlinktext ninjemail" data-ename="snoitagitsevni" data-edomain="ua.moc.em" data-elinktext="click me"></a>
My CSS is
.ninjemail:before {
    /* \0040 is the CSS escape for "@" */
    content: "'please enable Javascript'; ahref.attr('data-edomain') + '\0040' + ahref.attr('data-ename');"
}
My JavaScript is
$('.ninjemail').each(function () {
    var fullLink = "ma";
    var ahref = $(this);
    fullLink += "ilto" + ":";
    // Pull the JS expression out of the :before content, stripping quotes and backslashes
    var codeLine = window.getComputedStyle(this, ':before').content.replace(/\"|\\/g, '');
    var emailAddress = eval(codeLine);
    emailAddress = emailAddress.split('').reverse().join('');
    fullLink += emailAddress;
    ahref.attr('href', fullLink);
    var linkText = ahref.attr('data-elinktext');
    if (linkText && linkText.length > 0) {
        ahref.text(linkText);
    } else {
        ahref.text(emailAddress);
    }
    ahref.removeClass('ninjemail');
});
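For comparison, here is a minimal eval-free sketch that keeps the reversed data attributes from the HTML above but skips the CSS round-trip entirely (it is less obfuscated, since it drops the CSS layer):

$('.ninjemail').each(function () {
    var ahref = $(this);
    // Same reversal trick as the eval version: reversing the whole string
    // turns "ua.moc.em@snoitagitsevni" into "investigations@me.com.au".
    var emailAddress = (ahref.attr('data-edomain') + '@' + ahref.attr('data-ename'))
        .split('').reverse().join('');
    ahref.attr('href', 'mai' + 'lto:' + emailAddress);
    ahref.text(ahref.attr('data-elinktext') || emailAddress);
    ahref.removeClass('ninjemail');
});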
Based on the research I've done so far, the only real drawback of using this method for obfuscation is that in teams where there is a clear separation of roles, JavaScript in CSS files may confuse layout designers, and vice versa (programmers won't want to have to edit CSS files).
Performance-wise, there will always be more overhead in processing the email address across the three technologies. I can't say I'm an expert on performance testing, but I did a before-and-after page-refresh timing and got a DOMContentLoaded time of 0.956s with obfuscation vs 0.766s without, and a load time of 1.22s vs 1.17s. Not a concern in my own situation, but it may be an issue in more intensive applications. Note that this was a one-off test, not an average of multiple runs in a controlled environment.
I am using a jQuery plugin called jQuery Phoenix. It is a plugin that makes using localStorage with forms very easy.
My question:
This script auto-saves every second. I have a form with around 275 fields and it saves all of the fields.
(I know saving that many fields that often is a bit overkill, but it's the default setting. I'm going to change it to save on an onblur event but it will still be saving 275 fields every time the person changes fields.)
If it is saving that often, will I run into any type of performance issues in browsers?
I do not know much about localStorage or how it affects performance, especially when saving this many fields of data that often.
As has been mentioned, you can optimise your use of localStorage a fair bit from what is proposed, but that isn't the question.
LocalStorage is pretty fast, as confirmed in some tests written about here:
https://gomakethings.com/how-fast-is-vanilla-js-localstorage/
they were seeing speeds around 12ms to write an object of 10,000 values and read it back again.
I had a play with storing each item on its own: at 10,000 items it tended to take between 70 and 100ms, but when you're 'only' dealing with around 300 values it's less than 2ms. This was the same whether the values were strings or integers.
Here's the code I used, which is largely based on the code in the linked article:
// Timestamp before the test
var start = performance.now();
// Set/get data to localStorage
var count = 10000;
for (var i = 0; i < count; i++) {
localStorage.setItem(`perfTest_${i}`, i);
var result = localStorage.getItem(`perfTest_${i}`);
if (parseInt(result) !== i) {
console.error(`${result} !== ${i}`);
}
}
// Timestamp after the test
var end = performance.now();
// Duration of the test
console.log('It took ' + (end - start) + 'ms.');
The easiest way to run these tests is just to paste them into your browser's console!
Performance is surely going to differ, and it may be an issue, since the internal implementation of localStorage is browser specific.
There is also a time difference between the first read and subsequent reads.
Besides, localStorage has a limited capacity (though it can be changed).
Also, you won't want to stringify the whole form state before every save; that would be inefficient.
Note: if all 275 fields are not changing simultaneously, you can save only the field that changed, as in the sketch below.
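A minimal sketch of per-field saving, assuming each field has a unique name attribute and a modern browser; the #myForm selector and the 'form:' key prefix are invented for illustration:

// Persist only the field that changed, instead of all 275 on a timer.
document.querySelectorAll('#myForm input, #myForm select, #myForm textarea')
    .forEach(function (field) {
        // Restore any previously saved value on load...
        var saved = localStorage.getItem('form:' + field.name);
        if (saved !== null) {
            field.value = saved;
        }
        // ...and save just this one field whenever it changes.
        field.addEventListener('change', function () {
            localStorage.setItem('form:' + field.name, field.value);
        });
    });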
On one page of my website, the user can choose and remove up to 2000 items by selecting multiple string representations of them in a dropdown list.
On page load, the objects are loaded onto the page from a previous session into 7 different drop-down lists.
In the window.onload event, the function that loops through the items in the drop-downs builds an internal collection of the objects by adding them to a global array. This makes the page ridiculously slow to load, so I'm fairly certain I'm doing it wrong!
How else am I supposed to store these variables?
This is their internal representation:
function Permission(PName, DCID, ID) {
    this.PName = PName;
    this.DCID = DCID;
    this.ID = ID;
}
where PName is a string, and DCID and ID are ints.
EDIT:
Thanks for the quick replies! I appreciate the help, I'm not great with JS! Here is more information:
'selectChangeEvent' is attached to the Change and Click events of the drop-down list.
function selectChangeEvent(e) {
    //...
    addListItem(id);
    //...
}
'addListItem(id)' sets up the visual representation of the objects and then calls:
function addListObject(x, idOfCaller) {
    var arIDOfCaller = idOfCaller.toString().split('-');
    if (arIDOfCaller[0] == "selLocs") {
        var loc = new AccessLocation(x, arIDOfCaller[1]);
        arrayLocations[GlobalIndexLocations] = loc;
        GlobalIndexLocations++;
        totalLocations++;
    }
    else {
        var perm = new Permission(x, arIDOfCaller[1], arIDOfCaller[2]);
        arrayPermissions[GlobalIndexPermissions] = perm;
        GlobalIndexPermissions++;
        totalPermissions++;
    }
}
Still not enough to go on, but there are some small improvements I can see.
Instead of this pattern:
var loc = new AccessLocation(x, arIDOfCaller[1]);
arrayLocations[GlobalIndexLocations] = loc;
GlobalIndexLocations++;
totalLocations++;
which seems to involve redundant counters and has surplus assignment operations, try:
arrayLocations.push(new AccessLocation(x, arIDOfCaller[1]));
and just use arrayLocations.length wherever you would refer to GlobalIndexLocations or totalLocations (which, from the code above, would seem to always hold the same value).
That should gain you a little boost, but this is not your main problem. I suggest you add some debugging Date objects to work out where the bottleneck is.
You may want to consider a design change to support the load. Some sort of paged result set or similar, to cut down on the number of concurrent records being modified.
As much as we desperately want them to be, browsers aren't quite there yet in terms of script execution speed that allow us to do certain types of heavy lifting on the client.
While I haven't tested this idea, I figured I'd throw it out there - might it be faster to return a JSON string from the server side, where your array is fully calculated on that side?
From that point, I'd wager that eval()'ing it (as evil as that may be) might be fast enough that you could then write the contents onto the page, and your array setup would already be taken care of.
Then again, I suppose the amount of work it'd take the browser to construct the 2,000 new objects and inject them into the DOM wouldn't necessarily help the speed side of things in the end. At the end of the day, a design change is probably necessary, but sometimes we're stuck with what we've got, eh?
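If you do go the server-side route, JSON.parse is the safer (and in modern engines typically faster) choice than eval for turning the response into objects. A minimal sketch; the /permissions endpoint is hypothetical, and the field names mirror the Permission constructor above:

// Fetch the precomputed array from the server and parse it without eval.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/permissions'); // hypothetical endpoint returning [{PName, DCID, ID}, ...]
xhr.onload = function () {
    var arrayPermissions = JSON.parse(xhr.responseText).map(function (p) {
        return new Permission(p.PName, p.DCID, p.ID);
    });
    // ...build the drop-down options from arrayPermissions here...
};
xhr.send();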