Improve performance in js `for` loop - javascript

So I'm looking for some advice on the best method for toggling the class (set of three) of an element in a loop ending at 360 iterations. I'm trying to avoid nested loops, and ensure good performance.
What I have:
// jQuery flavour js
// vars
var framesCount = '360'; // total frames
var framesInterval = '5000'; // interval
var statesCount = 3; // number of states
var statesCountSplit = framesInterval/statesCount;
var $scene = $('#scene');
var $counter = $scene.find('.counter');

// An early brain dump
for (f = 1; f < framesCount; f += 1) {
    $counter.text(f);
    for (i = 1; i < statesCount; i += 1) {
        setTimeout(function() {
            $scene.removeClass().addClass('state-'+i);
        }, statesCountSplit);
    }
}
So you see, for each of 360 frames there are three class switchouts at intervals. Although I haven't tested it, I'm concerned about the performance hit here once that frames value goes into the thousands (which it might).
This snippet is obviously flawed (very), please let me know what I can do to make this a) work, b) work efficiently. Thanks :-)

Some general advice:
1) Don't declare functions in a loop
Does this really need to be done in a setTimeout?
for (i = 1; i < statesCount; i += 1) {
    setTimeout(function() {
        $scene.removeClass().addClass('state-'+i);
    }, statesCountSplit);
}
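As an aside, a classic pitfall with declaring callbacks in a loop is that every callback created with `var` closes over the same loop variable, so all the timeouts above would see the final value of i. A standalone sketch of the effect (synchronous here purely for illustration; the function names are made up):

```javascript
// With `var`, all callbacks share one binding of `i`: by the time they
// run, the loop has finished and `i` holds its final value (4).
function makeCallbacksWithVar() {
    var callbacks = [];
    for (var i = 1; i < 4; i += 1) {
        callbacks.push(function () { return i; });
    }
    return callbacks.map(function (cb) { return cb(); }); // [4, 4, 4]
}

// With `let` (or an IIFE in older code), each iteration gets its own
// binding, so each callback remembers the value it was created with.
function makeCallbacksWithLet() {
    var callbacks = [];
    for (let i = 1; i < 4; i += 1) {
        callbacks.push(function () { return i; });
    }
    return callbacks.map(function (cb) { return cb(); }); // [1, 2, 3]
}
```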
2) DOM operations are expensive
Is this really necessary? This will toggle so fast that you won't notice the counter going up. I don't understand the intent here, but it seems unnecessary.
$counter.text(f);
3) Don't optimize early
In your question, you stated that you haven't profiled the code in question. Currently, there's only about 1000 iterations, which shouldn't be that bad. DOM operations aren't too bad, as long as you aren't inserting/removing elements, and you're just modifying them.
I really wouldn't worry about performance at this point. There are other micro-optimizations you could apply (like changing the for loop into a decrementing while loop to save on a compare), but you gave no indication that the performance is a problem.
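For illustration only, the decrementing-while micro-optimization mentioned above looks like this (the function name is made up):

```javascript
// A decrementing while loop tests the counter itself for truthiness,
// so the explicit comparison against an upper bound disappears.
function countDown(n) {
    var visits = 0;
    while (n--) {
        visits += 1;
    }
    return visits; // the body runs exactly n times
}
```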
Closing thoughts
If I understand the logic correctly, your code doesn't match it. The code will currently increment .counter as fast as the processor can iterate over your loops (the whole thing should only take a few milliseconds) and each of your "class switchouts" will fire 360 times within a few milliseconds of each other.
Fix your logic errors first, then worry about optimization if it becomes a problem.

Don't use a for loop for this. It will queue up lots of setTimeout events at once, which is known to slow browsers down. Use a single chained setTimeout instead:
function animate(framesCount, statesCount) {
    $scene.removeClass().addClass('state-'+statesCount);
    if (framesCount) {
        setTimeout(function() {
            animate(framesCount-1, (statesCount%3)+1);
        }, statesCountSplit);
    }
}
animate(360*3, 1);

Related

Performance: Switch vs Polymorphism

I usually prefer polymorphism to switch when it's possible. I find it more readable and it requires fewer lines. I believe these facts are enough to continue using it. But what about performance? I've created a pretty simple (and bad) bench and it looks like switch is faster in my case. Could you please explain why?
https://jsfiddle.net/oqzpfqcg/1/
var class1 = { GetImportantValue: () => 1 };
var class2 = { GetImportantValue: () => 2 };
var class3 = { GetImportantValue: () => 3 };
var class4 = { GetImportantValue: () => 4 };
var class5 = { GetImportantValue: () => 5 };
getImportantValueSwitch = (myClassEnum) => {
    switch (myClassEnum.type) {
        case 'MyClass1': return 1;
        case 'MyClass2': return 2;
        case 'MyClass3': return 3;
        case 'MyClass4': return 4;
        case 'MyClass5': return 5;
    }
}
getImportantValuePolymorphism = (myClass) => myClass.GetImportantValue();
test = () => {
    var ITERATION_COUNT = 10000000;
    var t0 = performance.now();
    for (var i = 0; i < ITERATION_COUNT; i++) {
        getImportantValuePolymorphism(class1);
        getImportantValuePolymorphism(class2);
        getImportantValuePolymorphism(class3);
        getImportantValuePolymorphism(class4);
        getImportantValuePolymorphism(class5);
    }
    var t1 = performance.now();
    var t2 = performance.now();
    for (var i = 0; i < ITERATION_COUNT; i++) {
        getImportantValueSwitch({type: 'MyClass1'});
        getImportantValueSwitch({type: 'MyClass2'});
        getImportantValueSwitch({type: 'MyClass3'});
        getImportantValueSwitch({type: 'MyClass4'});
        getImportantValueSwitch({type: 'MyClass5'});
    }
    var t3 = performance.now();
    var first = t1 - t0;
    var second = t3 - t2;
    console.log("The first sample took " + first + " ms");
    console.log("The second sample took " + second + " ms");
    console.log("first / second = " + (first/second));
};
test();
So as far as I understand, the first sample has one dynamic/virtual runtime call, myClass.GetImportantValue(), and that's it. But the second has a dynamic property lookup as well, myClassEnum.type, and then checks the condition in the switch.
Most probably I have some mistake in the code, but I cannot find it. The only thing that I suppose could affect the result is performance.now(), but I don't think it affects it that much.
V8 developer here. Your intuition is right: this microbenchmark isn't very useful.
One issue is that all your "classes" have the same shape, so the "polymorphic" case is in fact monomorphic. (If you fix this, be aware that V8 has vastly different performance characteristics for <= 4 and >= 5 polymorphic cases!)
One issue is that you're relying on on-stack replacement (OSR) for optimization, so the performance impact of that pollutes your timings in a misleading way -- especially for functions that have this pattern of two subsequent long-running loops: they get OSR-optimized for the first loop, deoptimized in the middle, then OSR-optimized again for the second loop.
One issue is that the compiler inlines many things, so the actually executed machine code can have a very different structure from the JavaScript code you wrote. In particular in this case, getImportantValueSwitch gets inlined, the {type: 'MyClass*'} constant object creations get elided, and the resulting code is just a few comparisons, which are very fast.
One issue is that with small functions, call overhead pretty much dominates everything else. V8's optimizing compiler doesn't currently do polymorphic inlining (because that's not always a win), so significant time is spent calling the () => 1 etc functions. That's unrelated to the fact that they're dynamically dispatched -- retrieving the right function from the object is pretty fast, calling it is what has the overhead. For larger functions, you wouldn't notice it much, but for almost-empty functions, it's quite significant compared to the switch-based alternative that doesn't do any calls.
Long story short: in microbenchmarks, one tends to measure weird effects unrelated to what one intended to measure; and in larger apps, most implementation details like this one don't have measurable impact. Write the code that makes sense to you (is readable, maintainable, etc), let the JavaScript engine worry about the rest! (Exception: Sometimes profiling indicates that your app has a particular bottleneck -- in such cases, hand-optimizing things can have big impact, but that's usually achieved by taking context into account and making the overall algorithm / control flow more efficient, rather than following simple rules of thumb like "prefer polymorphism over switch statements" (or the other way round).)
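To illustrate the first point with a hypothetical sketch (the class names here are invented, not the original benchmark): giving each constructor a distinct own property gives each instance a distinct shape, which makes the call site genuinely polymorphic:

```javascript
// Each constructor installs a different own property, so the engine
// assigns each instance a different hidden class (shape).
class Shape1 { constructor() { this.a = 1; } GetImportantValue() { return 1; } }
class Shape2 { constructor() { this.b = 2; } GetImportantValue() { return 2; } }
class Shape3 { constructor() { this.c = 3; } GetImportantValue() { return 3; } }

// This call site now sees three different shapes instead of one.
function getImportantValue(obj) {
    return obj.GetImportantValue();
}
```

(And per the answer above, going past four distinct shapes at one call site puts V8 in a different, slower regime again.)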
I do not see a "mistake" in your script. Although I really do not encourage performance testing this way, I might still be able to say a couple of things based on my intuition. I do not have solid, well-tested results with control groups etc., so take everything I say with a pinch of salt.
Now, for me it is quite normal to assume that the first option is going to eat the dust of the second, because there are a couple of things more expensive than variable access in js:
object property access (presumably O(1) hash table, but still slower than variable access)
function call
If we count the function calls and object accesses:
first case: 5 calls [to getImportantValuePolymorphism] x (1 object access [to myClass] + 1 function call [to GetImportantValue]) ===> TOTAL: 10 function calls + 5 object accesses
second case: 5 calls [to getImportantValueSwitch] + 5 object accesses [to myClassEnum.type] ===> TOTAL: 5 function calls + 5 object accesses
One more thing to mention: in the first case, you have a function that calls another function, so you end up with a scope chain. The net effect of this is minute, but still detrimental in terms of performance.
If we account for all the above factors, the first will be slower. But how much? That is not easy to answer, as it will depend on vendor implementations, but in your case it is about 25 times slower in Chrome. Given double the function calls in the first case plus a scope chain, one would expect it to be 2 or 3 times slower, but not 25.
This disproportionate decrease in performance, I presume, is due to the fact that you are starving the event loop. Since js is single threaded, when you give it a cumbersome synchronous task, the event loop cannot proceed and gets stuck for a good second or so. This question comes around when people see strange behavior of setTimeout or other async calls firing way off from their target time frame; as I said, that happens because a previous synchronous task is taking way too long. In your case you have a synchronous for loop that iterates 10 million times.
To test my hypothesis, decrease the ITERATION_COUNT to 100000, that is 100 times less, and you will see that in Chrome the ratio decreases from ~20 to ~2. So, bottom line 1: part of the inefficiency you observe stems from the fact that you are starving the event loop, but it still does not change the fact that the first option is slower.
To test that function calls are indeed the bottle neck here, change the relevant parts of your script to this:
class1 = class1.GetImportantValue;
class2 = class2.GetImportantValue;
class3 = class3.GetImportantValue;
class4 = class4.GetImportantValue;
class5 = class5.GetImportantValue;
and for the test:
for (var i = 0; i < ITERATION_COUNT; i++) {
    class1();
    class2();
    class3();
    class4();
    class5();
}
Resulting fiddle: https://jsfiddle.net/ibowankenobi/oqzpfqcg/2/
This time you will see that the first one is faster, because it is (5 function calls) vs (5 function calls + 5 object accesses).

Javascript - What will be the most efficient between these two techniques?

I'm creating a little confettis generator in Javascript (and jQuery).
I'll use setInterval() to increase the top position of each confetti every 0.06s, and I wonder: which will be the more efficient / more performant technique between technique A and technique B, below?
Technique A: thousands of confettis, but 1 confetti = 1 individual setInterval() for setting its top position:
for (var i = 0; i < confettisQty; i++)
{
    var confetti = document.createElement("div");
    confetti.css("top", "-100px");
    $("body").appendChild(confetti);
    setInterval(function(){
        var newTopPosition = confetti.position().top + 10;
        confetti.css("top", newTopPosition);
    }, 60);
}
Technique B: thousands of confettis, but 1 global setInterval() which loops over every confetti and changes the top position of each one:
for (var i = 0; i < confettisQty; i++)
{
    var confetti = document.createElement("div");
    confetti.addClass("littleConfetti");
    confetti.css("top", "-100px");
    $("body").appendChild(confetti);
}
setInterval(function(){
    $(".littleConfetti").each(function(){
        var newTopPosition = $(this).position().top + 10;
        $(this).css("top", newTopPosition);
    });
}, 60);
I'm learning the Javascript language and I'm looking for the best practices and techniques in terms of optimization and performance. So if you could please give me some explanations with your answer, it would be very appreciated!
Note that you are mixing plain JS and jQuery. In your code you should change
confetti.css(), confetti.position()
to
$(confetti).css(), $(confetti).position()
since you need a jQuery object.
Edit:
Prompted by Dandavis's comment, I did a deeper search and I have to say that he is right; the several articles I read all confirm his theory. So I ran some tests on the code to find which version has better performance.
At the beginning I thought the second option would be better, since it only creates one setInterval, but I found that I was totally wrong.
The cost of the setInterval is insignificant compared to the cost of the jQuery .littleConfetti selector. Option B is very slow on the first execution of the interval, because that's the first time it searches for the .littleConfetti elements; on the following executions the time drops dramatically, although not enough to beat Option A.
I executed each test 20 times and all the times are in milliseconds. The test results are:
confettisQty = 1000
Option A: 144
Option B: 240 (first time) / 130 (following times)
confettisQty = 10000
Option A: 695
Option B: 21278 (first time) / 1134 (following times)
confettisQty = 100000
Option A: 12725
Option B: 946740 (first time) / 10568 (following times)
So, after the tests, we can say that Option A was better. In spite of that, a CSS animation would still perform better than either.

JavaScript anti-flood spam protection?

I was wondering if it were possible to implement some kind of crude JavaScript anti-flood protection.
My code receives events from a server through AJAX, but sometimes these events can be quite frequent (they're not governed by me).
I have attempted to come up with a method of combating this, and I've written a small script: http://jsfiddle.net/Ry5k9/
var puts = {};

function receiverFunction(id, text) {
    if ( !puts[id] ) {
        puts = {};
        puts[id] = {};
    }
    puts[id].start = puts[id].start || new Date();
    var count = puts[id].count = puts[id].count + 1 || 0;
    var time = (new Date() - puts[id].start) * 0.001;
    $("text").set("text", (count / time.toFixed()).toString() + " lines/second");
    doSomethingWithTextIfNotSpam(text);
}
which I think could prove effective against these kinds of attacks, but I'm wondering if it can be improved or perhaps rewritten?
So far, I think anything more than 3 or 2.5 lines per second seems like spam, but as time progresses (because the start mark was set... well... at the start), an offender could simply idle for a while and then commence the flood, with the long-run average effectively never passing 1 line per minute.
Also, I would like to add that I use Mootools and Lo-Dash libraries (maybe they provide some interesting methods), but it would be preferable if this can be done using native JS.
Any insight is greatly appreciated!
If you are concerned about the frequency a particular javascript function fires, you could debounce the function.
In your example, I guess it would be something like:
onSuccess: _.debounce(someOtherFunction, timeout)
where timeout is the number of milliseconds that must pass without a new event before someOtherFunction fires. (Note that _.debounce should be called once to create the wrapped function, not on every event.)
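If you'd rather stay native (as the question prefers), a leading-edge throttle is only a few lines. This is a sketch, not Lo-Dash's implementation; the now parameter is injectable purely so the logic can be exercised deterministically:

```javascript
// Lets fn through at most once per waitMs; extra calls are dropped.
function throttle(fn, waitMs) {
    var last = -Infinity;
    return function (now) {
        if (now === undefined) now = Date.now();
        if (now - last >= waitMs) {
            last = now;
            fn();
            return true;  // call went through
        }
        return false;     // call was dropped
    };
}
```

A debounce differs in that it resets its timer on every event and fires only once the events stop.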
I know you asked about native JavaScript, but maybe take a look at RxJS.
RxJS or Reactive Extensions for JavaScript is a library for
transforming, composing, and querying streams of data. We mean all
kinds of data too, from simple arrays of values, to series of events
(unfortunate or otherwise), to complex flows of data.
There is an example on that page which uses the throttle method to "Ignores values from an observable sequence which are followed by another value before dueTime" (see source).
keyup = Rx.Observable.fromEvent(input, 'keyup')
    .select(function(ev) {
        return ev.target.value;
    })
    .where(function(text) {
        return text.length > 2;
    })
    .throttle(500)
    .distinctUntilChanged()
There might be a similar way to get your 2.5-3 per second and ignore the rest of the events until the next second.
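One native-JS way to get that "at most N per second, ignore the rest" behaviour is a sliding-window counter. A sketch (the names are mine; now is injectable for testing):

```javascript
// Admits at most maxPerSecond events in any trailing one-second window;
// returns false for events that should be ignored.
function makeRateLimiter(maxPerSecond) {
    var timestamps = [];
    return function allow(now) {
        if (now === undefined) now = Date.now();
        // Drop entries older than one second.
        while (timestamps.length && now - timestamps[0] >= 1000) {
            timestamps.shift();
        }
        if (timestamps.length >= maxPerSecond) {
            return false;
        }
        timestamps.push(now);
        return true;
    };
}
```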
I've spent many days pondering on effective measures to forbid message-flooding, until I came across the solution implemented somewhere else.
First, we need three things: penalty and score variables, and the point in time when the last action occurred:
var score = 0;
var penalty = 200; // Penalty can be fine-tuned.
var lastact = new Date();
Next, we decrease the score in proportion to the time elapsed between the previous message and the current one:
/* The smaller the distance, the more time has to pass
 * in order to negate the score penalty caused.
 */
score -= (new Date() - lastact) * 0.05;
// Score shouldn't be less than zero.
score = (score < 0) ? 0 : score;
Then we add the message penalty and check if it crosses the threshold:
if ( (score += penalty) > 1000 ) {
// Do things.
}
Don't forget to update the last action afterwards:
lastact = new Date();
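The snippets above, put together as one closure (a sketch; the constants match the ones above, and the clock is passed in so the decay logic can be tested deterministically):

```javascript
// Combines the decay, clamp, penalty, and threshold steps above.
function makeFloodGuard(penalty, threshold) {
    var score = 0;
    var lastact = 0;
    return function isFlood(now) {
        if (now === undefined) now = Date.now();
        // Decay the score by the time elapsed since the last action.
        score -= (now - lastact) * 0.05;
        if (score < 0) score = 0;
        lastact = now;
        // Charge the penalty and check the threshold.
        score += penalty;
        return score > threshold;
    };
}
```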

CSS transitions blocked by JavaScript

I am trying to create a loading bar during a very intensive period of JavaScript where some pretty heavy 3d arrays are built and filled. This loading bar needs to remain empty until the user clicks a button.
The freezing occurs whether or not I'm using -webkit-transition (This app can be chrome exclusive, cross browser is not necessary in my case).
Seeking simplicity I've built my bar like this...
<div id="loader">
<div id="thumb">
</div>
</div>
... and then sought to increment that bar at various stages of my main for loop:
for (i = 0; i < 5; i++) {
    document.getElementById('thumb').style.width = i*25 + '%';
    // More Code
}
Problem is that everything freezes until the JavaScript finishes.
I found a similar question on Stack Overflow, Using CSS animation while javascript computes, and in the comments found and considered and/or tried the following:
Web Workers
Don't think it'll work since my script is filling an array with objects and constructors containing functions which according to this site isn't going to work
jQuery
Not an option, I can't use external libraries in my app - in any case, importing a whole library just for a loading bar seems kind of like overkill...
Keyframes
This was promising and I tried it, but in the end it freezes also, so no joy
setTimeout()s
Thought about this, but since the point of the loading bar is to reduce frustration, increasing the waiting time seems counter-productive
I'd be happy to have any incrementation of the bar at this stage, even if it's not smooth! I'm pretty sure this is a problem that has struck more than just me - maybe someone has an interesting solution?
P.S.: I'm posting this as a new question rather than adding to the referenced question since I'm specifically seeking help with JavaScript (not jQuery) and would prefer if I could get it using a transition (!=animation) on the width.
Some people already mentioned that you should use timeouts. That's the appropriate approach, because it'll give the browser time to "breathe" and render your progress bar mid-task.
You have to split your code up to work asynchronously. Say you currently have something like this:
function doAllTheWork() {
    for (var i = 0; i < reallyBigNumberOfIterations; i++) {
        processorIntensiveTask(i);
    }
}
Then you need to turn it into something like this:
var i = 0;

function doSomeWork() {
    var startTime = Date.now();
    while (i < reallyBigNumberOfIterations && (Date.now() - startTime) < 30) {
        processorIntensiveTask(i);
        i++;
    }
    if (i < reallyBigNumberOfIterations) {
        // Here you update the progress bar
        incrementBar(i / reallyBigNumberOfIterations);
        // Schedule a timeout to continue working on the heavy task
        setTimeout(doSomeWork, 50);
    } else {
        taskFinished();
    }
}

function incrementBar(fraction) {
    console.log(Math.round(fraction * 100) + ' percent done');
}

function taskFinished() { console.log('Done!'); }

doSomeWork();
Note the expression (Date.now() - startTime) < 30. That means the loop will get as much done as it can in the span of 30 milliseconds. You can make this number bigger, but anything over 100ms (essentially 10 frames-per-second) is going to start feeling sluggish from the user's point of view.
It may be true that the overall task is going to take somewhat longer using this approach as opposed to the synchronous version. However, from the user's experience, having an indication that something is happening is better than waiting indefinitely while nothing seems to be happening – even if the latter wait time is shorter.
Have you tried going even simpler and making a function, let's say:
Pseudo:
Function increment_bar(percent)
{
    document.getElementById('thumb').style.width = percent + '%';
}
Then from wherever you are doing your processing work just calling that function every x seconds or whenever you hit a certain point in processing (let's say 10-20% completion?)
Pseudo:
{
Doing_work_here;
increment_bar(25);
LOOP
}

JS hangs on a do while loop

I am wondering how I can solve a hanging page with JS.
I have a JS loop which I'm testing like this, but it seems to hang forever - is there a way to stop it hanging while still completing the script ?:
<div id="my_data"></div>
<script>
function test(value) {
    var output = [];
    do {
        value++;
        output.push(value);
        document.getElementById('my_data').innerHTML = (output.join(''));
    } while (value < 10000000);
    alert('end'); // never occurs
}
test(0);
</script>
You're updating the DOM with an increasingly long string ten million times.
If you're some sort of super-being from the future, then maybe this makes sense.
Javascript is single-threaded, so nothing else can happen while it's running your loop. It's running ten million times, and on each run it's setting the innerHTML of #my_data to something new. It's very inefficient and seems useless. What are you trying to accomplish?
You are concatenating 10,000,000 consecutive numbers together into a string, which really will not work well on any modern supercomputer.
If you'd like to poll the progress, setup a timer to do it somewhat slower. It isn't practical, but it looks nice: http://jsfiddle.net/V6jjT/2/
HTML:
<div id="my_data"></div>
Script:
var value = 0;
var interval = setInterval(function() {
    if (value > 100) {
        clearInterval(interval);
        return;
    }
    value++;
    document.getElementById('my_data').innerHTML += ' ' + value;
}, 10);
Running a tight loop like that will not update anything until it's done; besides, 10M array items?!
If you really want to do this and not let the browser hang until forever you have to use setInterval to allow the browser to refresh in between.
function updater(value, max, interval) {
    var output = [],
        node = document.getElementById('my_data'),
        tid = setInterval(function() {
            if (value++ < max) {
                output.push(value);
                node.innerHTML = (output.join(''));
            } else {
                alert('done');
                clearInterval(tid);
            }
        }, interval);
}
updater(0, 10, 200);
You should not update the HTML each iteration.
function test(value) {
    var output = [];
    do {
        value++;
        output.push(value);
    } while (value < 10000000);
    document.getElementById('my_data').innerHTML = (output.join(''));
    alert('end'); // now occurs
}
should work (although 10 million numbers in an HTML document will still take their time to render).
If you want to see the numbers running, you should have a look at window.setInterval - the browser needs some time between js executions to refresh its view. So, you will have to rewrite your code to run several chunks of data asynchronously. See also:
Running a long operation in javascript? (example code for your loop)
How can I force the browser to redraw while my script is doing some heavy processing?
Javascript async loop processing
Prevent long running javascript from locking up browser
DOM refresh on long running function
Best way to iterate over an array without blocking the UI
Javascript - how to avoid blocking the browser while doing heavy work?
A tight loop like that is likely to exhaust computing power. I would do it in chunks: divide your list into smaller lists and pause 5ms in between lists. I can provide an example if you need.
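A sketch of that chunking idea (the names are mine; the scheduler is a parameter, so in a browser you would pass something like function (fn) { setTimeout(fn, 5); } to yield between slices):

```javascript
// Processes items in fixed-size slices, yielding control between slices
// via the supplied scheduler; calls done() once everything is processed.
function processInChunks(items, chunkSize, processItem, schedule, done) {
    var index = 0;
    function step() {
        var end = Math.min(index + chunkSize, items.length);
        for (; index < end; index += 1) {
            processItem(items[index]);
        }
        if (index < items.length) {
            schedule(step); // let the browser repaint before the next slice
        } else if (done) {
            done();
        }
    }
    step();
}
```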
