Setting variable to existing value versus return? - javascript

In JavaScript, I have a function that sets a variable. If the function is asked to set the variable to its current value, is it more "efficient" to return early, or to let the function re-assign the value anyway?
Example
var a;
function setStuff(x) {
    if (a == x) { return; }
    a = x;
}
versus
var a;
function setStuff(x) {
    a = x;
}
This function will be called on page scroll, so it will be called at a high frequency.

I don't think the issue is "efficiency".
I do, however, think there's a practice at play here: generally, don't manipulate values outside the function's own scope. Having many functions like these in your application will drive you nuts wondering which function is changing what.
Instead return a new value.
var setStuff = function() {
    var newValue = 42; // compute the new value here
    return newValue;
};
var a = setStuff();
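Applied to the question's scroll scenario, that might look like this (a hypothetical handler; the threshold and state names are made up for illustration):
function computeStuff(scrollY) {
    // assumed logic: derive the new value from the scroll position
    return scrollY > 100 ? 'collapsed' : 'expanded';
}

window.addEventListener('scroll', function() {
    a = computeStuff(window.scrollY); // the only call site that changes `a`
});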

I wrote a simple test snippet:
var a;
function setStuffCheck(x) {
    if (a == x) { return; }
    a = x;
}
function setStuff(x) {
    a = x;
}
function benchmark(func) {
    var startTime = Date.now();
    var callCount = 1000000;
    for (var i = 0; i < callCount; i++) {
        func(10);
    }
    console.log((Date.now() - startTime) + "ms for " + callCount + " calls setting always the same value");
    startTime = Date.now();
    for (var i = 0; i < callCount; i++) {
        func(i);
    }
    console.log((Date.now() - startTime) + "ms for " + callCount + " calls setting always different values");
}
benchmark(setStuffCheck);
benchmark(setStuff);
By copying and pasting it into the console (Firefox 46.0.1), I get something like this:
138ms for 1000000 calls setting always the same value
216ms for 1000000 calls setting always different values
77ms for 1000000 calls setting always the same value
78ms for 1000000 calls setting always different values
So the second way seems to be consistently better, though the results may differ between computers. Note that the difference is only noticeable at a million calls (try changing callCount to 1000; there will be no difference).

The second option makes more sense to me and is more viable: there is less logic than in the first one, which has to check whether two values are equal, whereas the second option just re-assigns the variable.
An if condition adds a comparison on top of the plain assignment, so I think you should go with the second option.

There are potentially two factors that will likely drown out any practical performance difference between the two options. In practice, I would suggest using the version that is easiest to understand and explain to others. I think that would probably be the unconditional update, but that is up to you.
The two things that are likely to obscure any real differences are:
What else you are doing in the function
What effect branch prediction has on your conditional
Now, specifically to the question of which version is faster: I have set up the following test, with each option executed a million times and 10 runs of each test. The global is set to its current value roughly 1 time in 19 (whenever the random offset below is 10), but you can change that frequency by adjusting how aNew is computed.
var a = 10;
var ittr = 1000 * 1000;
function getRandomIntInclusive(min, max) {
    return Math.floor(Math.random() * (max - min + 1)) + min;
}
function set1(x) {
    if (a === x) { return; }
    a = x;
}
function set2(x) {
    a = x;
}
for (var j = 0; j < 10; j++) {
    var start = performance.now();
    for (var i = 0; i < ittr; i++) {
        var aNew = a - 10 + getRandomIntInclusive(1, 19);
        set1(aNew);
    }
    console.log("conditional : " + (performance.now() - start));
}
for (var j = 0; j < 10; j++) {
    var start = performance.now();
    for (var i = 0; i < ittr; i++) {
        var aNew = a - 10 + getRandomIntInclusive(1, 19);
        set2(aNew);
    }
    console.log("unconditional : " + (performance.now() - start));
}
Your results may vary, but I see the conditional set() average about 18ms after settling down, and the unconditional version about the same, maybe 17.5ms.
Note that the vast bulk of the time here is taken by the call to random(). If you consistently just set the global to itself, both functions time at around 1.8ms rather than 18ms, suggesting that whatever else you do in your set() is likely to obscure any performance difference.
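For reference, the "set the global to itself" case mentioned above amounts to dropping the random offset and passing the current value directly, e.g. (a sketch reusing set1 and ittr from the code above):
for (var j = 0; j < 10; j++) {
    var start = performance.now();
    for (var i = 0; i < ittr; i++) {
        set1(a); // always equals the current value, so the early return is taken
    }
    console.log("conditional, same value : " + (performance.now() - start));
}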

The two do not necessarily have identical results. Consider this series of calls:
var a;
function setStuffCheck(x) {
    if (a == x) { return; }
    a = x;
}
setStuffCheck(0);
setStuffCheck('0');
console.log(a);
Output is not '0', but 0.
For a good comparison, the setStuffCheck function should use the strict equality operator ===.
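A corrected version of the checking function would then be:
function setStuffCheck(x) {
    if (a === x) { return; } // strict equality: 0 and '0' now count as different
    a = x;
}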
In my tests in Firefox I see very little performance difference between the two functions. setStuffCheck seems to take slightly more time than setStuff when the argument differs from a, and slightly less when the values are the same. The difference either way is on the order of 2%, which is the kind of fluctuation you get anyway on a typical device/PC for reasons that have nothing to do with the code.
Anyway, this also means that this slight performance difference depends on how often you expect to call the function with an argument equal to a.
However, the difference only becomes noticeable at hundreds of millions of calls. If you don't have that many, don't even bother and choose setStuff.

Related

How does empty try catch affect the performance?

I know the empty try...catch is not good programming practice. However, I want to know why an empty try...catch affects performance in JavaScript.
Consider the following code snippets:
function test() {
    var start = new Date();
    for (var i = 0; i < 100000000; i++) {
        var r = i % 2;
    }
    console.log(new Date() - start);
    try {
    } catch (ex) {
    }
}
The run time result is 709 (milliseconds) under Chrome.
However, without the empty try...catch:
function test3() {
    var start = new Date();
    for (var i = 0; i < 100000000; i++) {
        var r = i % 2;
    }
    console.log(new Date() - start);
}
The run time result is 132.
Under normal circumstances,
function test1() {
    var start = new Date();
    try {
        for (var i = 0; i < 100000000; i++) {
            var r = i % 2;
        }
        console.log(new Date() - start);
    } catch (ex) {
    }
}
The result is 792
Edit
If I put the empty try...catch into another function:
function test4() {
    var start = new Date();
    for (var i = 0; i < 100000000; i++) {
        var r = i % 2;
    }
    console.log(new Date() - start);
    wrap();
}
function wrap() {
    try {
    } catch (ex) {
    }
}
The result is 130, so I think the try/catch is function-scoped. Am I right, or am I missing something?
That's extremely dependent on the JIT implementation. It can't be answered in a general context.
However, your benchmark is most likely giving you misleading results, specifically here:
for (var i = 0; i < 100000000; i++) {
    var r = i % 2;
}
Even toy compilers can optimize this into a NOOP and, without much extra effort, eliminate the entire loop. It's because this invokes no relevant side effects whatsoever ("relevant" as in, it has no effect on the output of the program, or from a lower-level perspective, it has no effect on memory that is going to be accessed elsewhere).
So with any decent optimizer, you're basically timing the cost of doing nothing at all (the optimizer would just skip the work you're actually trying to benchmark after quickly realizing it has no side effects that affect user output).
Micro-benchmarks are notoriously misleading because of what optimizers can do. They'll skip the very work you're trying to time if you aren't careful to make sure the work cannot be skipped without affecting user output.
If you want to construct meaningful benchmarks, you typically want to at least aggregate or sum the computation in some way that makes it to user output. For example, you might try summing the results of r in each iteration into some outer variable whose value you print out at the end of the computation. That would already make it exponentially more difficult for the optimizer to skip a bunch of computations, at which point you might quickly start to see more comparable times with or without the empty try/catch block, and whether or not you put a try/catch around the loop.
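For instance, here is a variant of the question's test() following that suggestion (the sum variable is new here, added to make the work observable):
function test() {
    var start = new Date();
    var sum = 0;
    for (var i = 0; i < 100000000; i++) {
        sum += i % 2; // the work now feeds into user-visible output
    }
    console.log(new Date() - start, sum); // printing sum keeps the loop alive
}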
Now, based on what you're seeing, and this is getting into a realm of conjecture, it appears like introducing the empty try/catch block is preventing your particular JIT from being able to skip the work done in that loop. It's possible that exception-handling is being treated coarsely by your compiler at a per-function level, boiling down to a simple kind of, "Does this function require exception-handling? Yes/no? If yes, avoid certain optimizations for the entire function."
That's purely an educated guess -- the only way to know for sure is to study the internals of your JIT or look at the resulting assembly.
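For what it's worth, some engines can report their optimization decisions. V8 (Chrome, Node.js), for example, has shipped tracing flags like the following, though their availability and output vary by version:
node --trace-opt --trace-deopt script.js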
I ran two tests in Chrome and Firefox:
let array = [];
function test1() {
    try {
        console.log('begin test 1..');
        let startTime = new Date();
        for (let i = 0; i < 10000000; i++) {
            array.push('');
        }
        console.log('result: ', new Date() - startTime);
    }
    catch (err) {
        console.error(err);
    }
}
function test2() {
    console.log('begin test 2..');
    let startTime = new Date();
    for (let i = 0; i < 10000000; i++) {
        array.push('');
    }
    console.log('result: ', new Date() - startTime);
}
array.length = 0;
test1();
array.length = 0;
test2();
The Chrome result is 378ms in test 1 vs 368ms in test 2 (test 1 takes about 103% of test 2's time).
The Firefox result is 1262ms in test 1 vs 1223ms in test 2 (about 103%).
I've tested some other operations (function calls, division, and others), but the result stays stable.
For now, try/catch doesn't affect performance much.
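That said, if you need exception handling near a hot loop, the question's own test4 pattern (isolating the try/catch in a separate function) is one way to keep the loop-containing function free of it; a minimal sketch:
// The hot loop lives in a function with no try/catch...
function hotLoop() {
    var sum = 0;
    for (var i = 0; i < 100000000; i++) { sum += i % 2; }
    return sum;
}
// ...and exception handling is confined to a thin wrapper.
function safeRun() {
    try {
        return hotLoop();
    } catch (ex) {
        console.error(ex);
    }
}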

`arguments` increases the running time (mysteriously)

I have implemented a simple, lightweight every function. I have noticed that if the arguments variable is used inside the function in any way, the running time increases from 800ms to 1300ms (in my case). What causes this?
I use Chrome 29.0.1547.66 m.
http://jsfiddle.net/4znzy/
function myEvery(list, fun, withArgument) {
    var i;
    fun = fun || function(val) { return val };
    arguments; // with this statement the time is 1300 ms;
               // if you comment it out -- 800 ms
    for (i = 0; i < list.length; i++) {
        if (!fun.call(list, list[i], i)) {
            return false;
        }
    }
    return true;
};
// Create a huge array
var list = [];
for (i = 1; i < 20000000; i++) {
    list.push(i);
}
// Measure the time
t1 = (new Date).getTime();
myEvery(list);
t2 = (new Date).getTime();
alert(t2 - t1);
(If you measure the time taken by the arguments statement itself, it is 0ms.)
Referencing arguments acts like a dynamic getter for the function's parameters, which have to be read from the stack and copied. Large parameters (whether individually big or simply numerous), like your list, must also be copied.
You can see this by replacing the arguments line with
var args = [list.slice(0)]; // copy parameter
which results in similar times: an additional 150ms with arguments and 200ms with slice() on my machine.
Depending on the implementation of the JS engine this will be slower or faster, but it will surely add time to the execution. There are probably (I haven't tested it) quite big differences between browsers and alternative JS engines.

Debugging loop and function - javascript

My browser is crashing from this loop, which doesn't appear to be infinite.
function checkLetters(word) {
    for (i = 0; i < 5; i++) {
        for (j = i + 1; j < 5; j++) {
            if (word.charAt(i) == word.charAt(j)) {
                return false;
                break;
            }
        }
    }
    return true;
}
var compLibrary = [];
for (var k = 0; k < library.length; k++) {
    if (checkLetters(library[k]) == true) {
        compLibrary.push(library[k]);
    }
}
I am trying to search the library for words with no repeating letters and push them into a new array.
The whole library is five-letter words.
It's not an infinite loop, but it does look like a pretty expensive operation. There's no really elegant way to detect an infinite loop (or recursion), so most engines just resort to one of two strategies:
1. Not trying to detect it, and running forever until some lower-level controller (like the kernel) kills it.
2. Automatically killing the script when it reaches a certain recursion depth, loop count, or run time.
Your algorithm loops at most 10 * library.length times (the 10 letter pairs per 5-letter word), so depending on how long your library actually is, your code could certainly trigger #2. But there are faster ways to find duplicate letters:
function checkLetters(word) {
    var i = word.length;
    var seenChars = {};
    var c;
    while (i-- > 0) {
        c = word.charAt(i); // the current character
        if (c in seenChars) return false;
        seenChars[c] = 1;
    }
    return true;
}
var compLibrary = [];
for (var k = 0; k < library.length; k++) {
    if (checkLetters(library[k]) == true) {
        compLibrary.push(library[k]);
    }
}
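For example, with the probe version above:
console.log(checkLetters("world")); // true  -- no letter repeats
console.log(checkLetters("hello")); // false -- 'l' repeats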
Shouldn't this loop
for (i = 0; i < 5; i++) {
    for (j = i + 1; j < 5; j++) {
be something along these lines:
for (var i = 0; i < word.length; i++) {
    for (var j = i + 1; j < word.length; j++) {
Can't see what your issue is but here's the solution I suggest for your problem:
var words = ['hello', 'bye', 'foo', 'baz'];
function hasUniqLetters(word) {
    var uniq = word.split('').filter(function(letter, idx) {
        return word.indexOf(letter) == idx;
    });
    return uniq.length == word.length;
}
var result = words.filter(hasUniqLetters);
console.log(result); //=> ["bye","baz"]
function checkLetters(word) {
    for (i = 0; i < 5; i++) {       // both i and j were never declared with var
        for (j = i + 1; j < 5; j++) {
            if (word.charAt(i) == word.charAt(j)) { // could misbehave if the word
                return false;                       // has fewer than 5 letters
                break; // a break after a return statement is redundant
            }
        }
    }
    return true;
}
You must use var when declaring a variable.
word.charAt(i) can also be written as word[i].
Try this:
function checkLetters(word) {
    for (var i = 0, j = 1, l = word.length - 1; i < l; i++, j++) {
        if (word.charAt(i) == word.charAt(j)) {
            return false;
        }
    }
    return true;
}
var compLibrary = [];
for (var i = 0, l = library.length; i < l; i++) {
    if (checkLetters(library[i]) == true) {
        compLibrary.push(library[i]);
    }
}
tldr; The code originally posted should not crash the browser.
The following explains why nested loops are not always bad for efficiency and shows a counter-example where the original code works successfully without crashing the browser when running over 100,000 simulated words.
The complexity of the posted code is low and it should run really fast. It executes here in a fraction of a second (under 20ms!), even at "20 * 8000", i.e. C * O(n). Note that the time complexity is linear: the nested loops in checkLetters run in constant time, bounded by a small fixed limit (at most the 10 letter pairs of a 5-letter word per call), so they do not represent a performance problem here.
As such, I maintain that this is not an efficiency problem. I assert that the original code will not "crash" a modern browser environment. For longer or unbounded words, a (presumably) lower-complexity probe approach may pay off, but the inner loop runs in small constant time here. (Actually, given the distribution of letters within words and of word lengths in a natural language like English, I would imagine the constant rarely exceeds "90 loops".)
See http://jsfiddle.net/FqdX7/6/
library = [];
for (w = 0; w < 100000; w++) {
    library.push((w + "12345").slice(0, 5));
}
function checkLetters(word) {
    for (i = 0; i < 5; i++) {
        for (j = i + 1; j < 5; j++) {
            if (word.charAt(i) == word.charAt(j)) {
                return false;
            }
        }
    }
    return true;
}
$('#time').text("running");
start = +(new Date);
var compLibrary = [];
for (var k = 0; k < library.length; k++) {
    if (checkLetters(library[k]) == true) {
        compLibrary.push(library[k]);
    }
}
time = +(new Date) - start;
$('#time').text(time + "ms");
On my machine (in Safari) the code runs in ~30 milliseconds (~40ms if the "return false" is removed) for an input of 100,000 words!
In comparison, the answer with a probe (seenChars lookup) actually runs worse in Safari/Chrome. See http://jsfiddle.net/Hw2wr/5/, where for 100k words it takes about 270ms - or about 9x slower. However, this is highly browser dependent and the jsperf in the comments shows that in Firefox the probing approach is faster (by about 2x) but is slower again in IE (say 4-5x).
YMMV. I think the original code is acceptable for the given situation and the "crashing" problem lies elsewhere.

let keyword in the for loop

ECMAScript 6's let is supposed to provide block scope without hoisting headaches. Can someone explain why, in the code below, i in the function resolves to the last value from the loop (just as with var) instead of the value from the current iteration?
"use strict";
var things = {};
for (let i = 0; i < 3; i++) {
things["fun" + i] = function() {
console.log(i);
};
}
things["fun0"](); // prints 3
things["fun1"](); // prints 3
things["fun2"](); // prints 3
According to MDN, using let in the for loop like that should bind the variable in the scope of the loop's body. Things work as I'd expect when I use a temporary variable inside the block. Why is that necessary?
"use strict";
var things = {};
for (let i = 0; i < 3; i++) {
let index = i;
things["fun" + i] = function() {
console.log(index);
};
}
things["fun0"](); // prints 0
things["fun1"](); // prints 1
things["fun2"](); // prints 2
I tested the script with Traceur and node --harmony.
squint's answer is no longer up to date. In the ECMAScript 6 specification, the specified behaviour is that in
for (let i;;) {}
i gets a new binding for every iteration of the loop.
This means that every closure captures a different i instance, so the result of 0, 1, 2 is the correct result as of now. When you run this in Chrome v47+, you get the correct result. IE11 and Edge currently seem to produce the incorrect result (3, 3, 3).
More information regarding this bug/feature can be found in the links on this page.
Because every iteration of a let loop creates a new lexical scope chained to the previous scope, using let here has performance implications, which are reported here.
I passed this code through Babel so we can understand the behaviour in terms of familiar ES5:
for (let i = 0; i < 3; i++) {
    i++;
    things["fun" + i] = function() {
        console.log(i);
    };
    i--;
}
Here is the code transpiled to ES5:
var _loop = function _loop(_i) {
    _i++;
    things["fun" + _i] = function () {
        console.log(_i);
    };
    _i--;
    i = _i;
};
for (var i = 0; i < 3; i++) {
    _loop(i);
}
We can see that two variables are used.
In the outer scope i is the variable that changes as we iterate.
In the inner scope _i is a unique variable for each iteration. There will eventually be three separate instances of _i.
Each callback function can see its corresponding _i, and could even manipulate it if it wanted to, independently of the _is in other scopes.
(You can confirm that there are three different _is by doing console.log(_i++) inside the callback. Changing _i in an earlier callback does not affect the output from later callbacks.)
At the end of each iteration, the value of _i is copied into i. Therefore changing the unique inner variable during the iteration will affect the outer iterated variable.
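A quick way to observe the three independent bindings from plain ES6 (sample code, not from the question):
var fns = [];
for (let i = 0; i < 3; i++) {
    fns.push(function() { return i++; }); // mutates only this iteration's binding
}
console.log(fns[0](), fns[0]()); // 0 1 -- advances only the first binding
console.log(fns[1]());           // 1   -- the second binding is untouched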
It is good to see that ES6 has continued the long-standing tradition of WTFJS.
IMHO, the programmers who first implemented this let (producing your initial version's results) did it correctly with respect to sanity; they may not have glanced at the spec during that implementation.
It makes more sense that a single variable is being used, but scoped to the for loop. Especially since one should feel free to change that variable depending on conditions within the loop.
But wait -- you can change the loop variable. WTFJS!! However, if you attempt to change it in your inner scope, it won't work now because it is a new variable.
I don't like what I have to do to get what I want (a single variable that is local to the for):
{
    let x = 0;
    for (; x < length; x++) {
        things["fun" + x] = function() {
            console.log(x);
        };
    }
}
Whereas, to modify the more intuitive (if imaginary) version to handle a new variable per iteration:
for (let x = 0; x < length; x++) {
    let y = x;
    things["fun" + y] = function() {
        console.log(y);
    };
}
It is crystal clear what my intention with the y variable is. Or it would have been, if SANITY ruled the universe.
So your first example now works in FF; it produces 0, 1, 2. You get to call the issue fixed. I call the issue WTFJS.
P.S. My reference to WTFJS is from JoeyTwiddle above; it sounds like a meme I should have known before today, but today was a great time to learn it.

Alternatives to javascript function-based iteration (e.g. jQuery.each())

I've been watching Google Tech Talks' Speed Up Your Javascript, and in talking about loops the speaker says to stay away from function-based iteration such as jQuery.each() (among others, at about 24:05 in the video). He briefly explains why to avoid it, which makes sense, but admittedly I don't quite understand what an alternative would be. Say I want to iterate through a column of table cells and use each value to manipulate the adjacent cell's value (just a quick example). Can anyone explain and give an example of an alternative to function-based iteration?
Just a simple for loop should be quicker if you need to loop.
var l = collection.length;
for (var i = 0; i < l; i++) {
    // do stuff
}
But, just because it's quicker doesn't mean it's always important that it is so.
This runs at the client, not the server, so you don't need to worry about scaling with the number of users; if it's quick with .each(), then leave it. But if it's slow, a for loop could speed it up.
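Applied to the question's table-cell scenario, a plain loop might look like this (the table id and the doubling rule are made up for illustration):
// Hypothetical markup: each row has a source cell and an adjacent target cell.
var rows = document.getElementById('myTable').rows; // assumed id
for (var i = 0, n = rows.length; i < n; i++) {
    var cells = rows[i].cells;
    cells[1].textContent = parseFloat(cells[0].textContent) * 2;
}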
Ye olde for-loop
It seems to me that function-based iteration would be slightly slower because of 1) the overhead of the function call itself, 2) the overhead of the callback being created and executed N times, and 3) the extra depth in the scope chain. However, I thought I'd do a quick benchmark just for kicks. Turns out, at least in my simple test case, that function-based iteration was faster. Here are the code and the findings.
Test benchmark code
// Function-based iteration method
var forEach = function(_a, callback) {
    for (var _i = 0; _i < _a.length; _i++) {
        callback(_a[_i], _i);
    }
}
// Generate a big ass array with numbers 0..N
var a = [], LENGTH = 1024 * 10;
for (var i = 0; i < LENGTH; i++) { a.push(i); }
console.log("Array length: %d", LENGTH);
// Test 1: function-based iteration
console.info("function-based iteration");
var end1 = 0, start1 = new Date().getTime();
var sum1 = 0;
forEach(a, function(value, index) { sum1 += value; });
end1 = new Date().getTime();
console.log("Time: %sms; Sum: %d", end1 - start1, sum1);
// Test 2: normal for-loop iteration
console.info("Normal for-loop");
var end2 = 0, start2 = new Date().getTime();
var sum2 = 0;
for (var j = 0; j < a.length; j++) { sum2 += a[j]; }
end2 = new Date().getTime();
console.log("Time: %sms; Sum: %d", end2 - start2, sum2);
Each test just sums the array, which is simplistic but something that could realistically appear in a real-life scenario.
Results for FF 3.5
Array length: 10240
function-based iteration
Time: 9ms; Sum: 52423680
Normal for-loop
Time: 22ms; Sum: 52423680
Turns out that function-based iteration was faster in this test case. I haven't watched the video yet, but I'll give it a look and see whether his setup differs somewhere in a way that would make function-based iteration slower.
Edit: This is by no means the end-all, be-all; it is only the result of one engine and one test case. I fully expected the results to be the other way around (function-based iteration being slower), but it is interesting to see how certain browsers have made optimizations (which may or may not be specifically aimed at this style of JavaScript) so that the opposite is true.
The fastest possible way to iterate is to cut down on the stuff you do within the loop. Hoist work out of the iteration and minimise lookups and increments within the loop, e.g.:
var i = arr.length;
while (i--) {
    console.log("Item no " + i + " is " + arr[i]);
}
NB! Testing on the latest Safari (with WebKit nightly), Chrome and Firefox shows that it really doesn't matter which kind of loop you choose, as long as it isn't for each or for in (or, even worse, any derived functions built upon them).
Also, it turns out that the following for loop is very slightly faster than the above option:
var l = arr.length;
for (var i = l; i--;) {
    console.log("Item no " + i + " is " + arr[i]);
}
If the order of looping doesn't matter, the following should be fastest, as you only need a single local variable and the decrement and the bounds check are done in a single statement:
var i = foo.length;
if (i) do { // check for i != 0
    // do stuff with `foo[i - 1]` (i runs from foo.length down to 1)
} while (--i);
What I normally use is the following:
for (var i = foo.length; i--; ) {
    // do stuff with `foo[i]`
}
It's potentially slower than the previous version (post- vs pre-decrement, for vs while), but more readable.
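To compare these variants on your own engine, a rough harness might be (the summing keeps the work observable so it isn't optimized away; absolute numbers and even the ordering will vary by browser):
var foo = [];
for (var n = 0; n < 1000000; n++) { foo.push(n); }

function time(label, fn) {
    var t0 = Date.now();
    var result = fn();
    console.log(label + ": " + (Date.now() - t0) + "ms (sum " + result + ")");
}

time("while (i--)", function() {
    var i = foo.length, sum = 0;
    while (i--) { sum += foo[i]; }
    return sum;
});

time("for (; i--;)", function() {
    var sum = 0;
    for (var i = foo.length; i--; ) { sum += foo[i]; }
    return sum;
});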
