Can this be optimized?
var groups = [];
for (var a in aList) {
    for (var bNumber in bList) {
        groups.push({ a: a, b: b });
    }
}
The code is actually fine, but I just realized that it looks like a cross product of two lists, so instead of looping aList.length*bList.length times, I wondered if there was some smart function to do this.
Julian pointed out a mistake in my original answer where I was storing elements as opposed to indices.
The existing code is erroneous; there is no variable b.
I will assume bNumber was meant to be b (or the other way around; it doesn't matter).
Since you know that the arrays are indexed 0...n (I am assuming you declared them so that all indices in that range exist), you certainly shouldn't use a for...in loop (as mentioned in this comment);
rather, you should write the bounds explicitly in a standard for loop (or some nearly identical variant):
var groups = [];
for (var a = 0; a < aList.length; a++) {
    for (var b = 0; b < bList.length; b++) {
        groups.push({a, b});
    }
}
You shouldn't generally be using var either;
you usually want to restrict the scope of your variables to the block in which you are working (which let gives you), and you want to be able to make some bindings unassignable (which const gives you).
const groups = [];
for (let a = 0; a < aList.length; a++) {
    for (let b = 0; b < bList.length; b++) {
        groups.push({a, b});
    }
}
With those changes, this has a runtime of around 11% of that of the code in my original answer below (with the error fixed).
Original answer, assuming storing elements
First, since you know that you are dealing with lists, you should not use a for...in loop (as mentioned in this comment).
Next, this operation is generally called the Cartesian product or direct product, not the cross product.
The existing code is erroneous; there is no variable b.
I will assume bNumber was meant to be b (or the other way around; it doesn't matter).
The code can then be rewritten as
var groups = aList.flatMap(a => bList.map(b => ({a, b})));
(Although you shouldn't generally be using var either!
You generally want to restrict the scope of your variables to directly within the block in which you are working.)
You will almost always find that the code in the question (with the error fixed) performs better, though, because the array methods carry some additional overhead, such as orchestrating the callbacks and concatenating the intermediate lists.
Switching your for...in loops to for...of loops actually halves the runtime in my tests.
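For reference, this is roughly the for...of version I timed (it iterates the elements rather than the indices, so it matches the element-storing variant of my original answer):

const groups = [];
for (const a of aList) {
    for (const b of bList) {
        groups.push({a, b});
    }
}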
Related
According to section 5.2.1 of this article: Optimization killers
Doing this turns optimizations off in V8:
function hashTableIteration() {
    var hashTable = {"-": 3};
    for (var key in hashTable);
}
and the author says:
An object will go into hash table mode for example when you add too many properties dynamically (outside constructor), delete properties, use properties that cannot be valid identifiers and so on. In other words, when you use an object as if it was a hash
table, it will be turned into a hash table. Passing such an object to for-in is a no no. You can tell if an object is in hash table mode by calling console.log(%HasFastProperties(obj)) when the flag --allow-natives-syntax is enabled in Node.JS.
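For reference, the check the author describes would look something like this (just a sketch; it only works when Node.JS is started with the flag he mentions, and the file name is made up):

// Run with: node --allow-natives-syntax check.js
var obj = { a: 1, b: 2 };
console.log(%HasFastProperties(obj)); // expected: true

delete obj.a; // deleting properties is one of the triggers the author lists
console.log(%HasFastProperties(obj)); // expected: false once the object is in hash table mode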
My question, then, is: what is the correct way of iterating through the keys of a hashtable-like object in JavaScript, so that optimizations do not get turned off?
Looks like the answer lies at the bottom of the very same article.
Workaround: Always use Object.keys and iterate over the array with
for loop. If you truly need all properties from entire prototype
chain, make an isolated helper function:
function inheritedKeys(obj) {
    var ret = [];
    for (var key in obj) {
        ret.push(key);
    }
    return ret;
}
If you pass a object to for-in that is not a simple enumerable it
will punish the entire containing function.
From what I understood, the isolated function helps by allowing the rest of the calling function to stay optimized; only the inheritedKeys function itself wouldn't be optimized in the example below.
function someFunction(someObj) {
    var keys = inheritedKeys(someObj),
        i = 0,
        len = keys.length;
    for (; i < len; i++) {
        // some computation
    }
}
I believe Object.keys is usually the better-performing alternative. However, I cannot say whether the optimizer still turns the object into a hash table when it is used.
Update: Adding code for others stumbling upon this question.
var keys = Object.keys(myObj);
for (var i = 0; i < keys.length; i++) {
    var value = myObj[keys[i]];
}
Before fully defining my question, I must say that >>this question/answer<< doesn't answer my problem, and I have proven to myself that the answer given there doesn't match the actual effect of property vs. variable (or cached property) access at all (see below).
I have been using HTML5 canvas, and I write raw pixel blocks many times in a second in a 640x480 area.
As advised by some tutorials, it is good to cache the .data property of an ImageData variable (in this case, it would be _SCimgData).
If I cache that property in SC_IMG_DATA, I can putImageData repeatedly into the canvas with no problem; but if I repeatedly access it directly through _SCimgData.data, the slow-down of the code is noticeable (taking nearly 1 second to fill a single 640x480 canvas):
var SomeCanvas = document.getElementById("SomeCanvas");
var SCContext = SomeCanvas.getContext("2d");
var _SCimgData = SCContext.getImageData(0, 0, 640, 480);
var SC_IMG_DATA = _SCimgData.data;
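The kind of inner loop I mean looks roughly like this (a simplified sketch; the real drawing code does more than fill a solid colour):

// Fast: writes through the cached SC_IMG_DATA reference
for (var i = 0; i < SC_IMG_DATA.length; i += 4) {
    SC_IMG_DATA[i]     = 255; // R
    SC_IMG_DATA[i + 1] = 0;   // G
    SC_IMG_DATA[i + 2] = 0;   // B
    SC_IMG_DATA[i + 3] = 255; // A
}
SCContext.putImageData(_SCimgData, 0, 0);

// Noticeably slower for me: looks up the .data property on every access
for (var i = 0; i < _SCimgData.data.length; i += 4) {
    _SCimgData.data[i]     = 255;
    _SCimgData.data[i + 1] = 0;
    _SCimgData.data[i + 2] = 0;
    _SCimgData.data[i + 3] = 255;
}
SCContext.putImageData(_SCimgData, 0, 0);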
Now I have the following doubt:
Would my code be as slow for other kinds of similar accesses?
I need an array of objects for a set of functions that can have several "instances" of an object (created by a regular utility function), and that need the index of each instance in that array of objects, either to create/initialize it or to update its properties.
My concrete example is this:
var objArray = new Array();
objArray[0] = new Object();
objArray[0].property1 = "some string property";

for (var x = 0; x < 65536; x++)
    doSomething(objArray[0].property1, objIDX = 0);
Would that code become as unacceptably slow as in the canvas case if the properties (and the functions stored in some of them) are called very intensively (several times in a single millisecond, of course using setInterval and several "timer threads" to avoid locking the browser)?
If so, what other alternative is there to speed up access for the different properties of several objects in the main object array?
EDIT 1 (2012-08-27)
Thanks for the suggestions. I have up-voted them since I suspect they will be useful for the project I'm working on.
I am thinking of a combination of methods, mainly using Arrays instead of Objects to build an actual array of "base objects", and addressing array elements by number (arr[0]) instead of by string key (arr["zero"]).
var OBJECTS_SIZE = 10;

var Obj_Instances = new Array();
Obj_Instances[0] = "property or array 1 of Object 0";
Obj_Instances[1] = new Array();
Obj_Instances[1][0] = new ArrayBuffer(128);
Obj_Instances[1][1] = new DataView(Obj_Instances[1][0]);
Obj_Instances[2] = "property or array 3 of Object 0";
Obj_Instances[3] = function () { alert("some function here"); };
Obj_Instances[4] = "property or array 5 of Object 0";
Obj_Instances[5] = "property or array 6 of Object 0";
Obj_Instances[6] = 3;
Obj_Instances[7] = "property or array 8 of Object 0";
Obj_Instances[8] = "property or array 9 of Object 0";
Obj_Instances[9] = "property or array 10 of Object 0";

Obj_Instances[10] = "property or array 1 of Object 1";
Obj_Instances[11] = new Array();
Obj_Instances[11][0] = new ArrayBuffer(128);
Obj_Instances[11][1] = new DataView(Obj_Instances[11][0]);
Obj_Instances[12] = "property or array 3 of Object 1";
Obj_Instances[13] = function () { alert("some function there"); };
Obj_Instances[14] = "property or array 5 of Object 1";
Obj_Instances[15] = "property or array 6 of Object 1";
Obj_Instances[16] = 3;
Obj_Instances[17] = "property or array 8 of Object 1";
Obj_Instances[18] = "property or array 9 of Object 1";
Obj_Instances[19] = "property or array 10 of Object 1";

function do_Something_To_Property_Number_6(objIdx)
{
    // Fix the index to locate the base address
    // of the object instance:
    objIdx = objIdx * OBJECTS_SIZE;
    Obj_Instances[objIdx + 6]++; // Point to "property" 6 of that object
}
I would have, say, an "instance" of an "object" that takes up the first 10 array elements; the next "instance" would take the next 10 elements, and so on (with a custom "constructor" function handling the initialization by appending a new block of array elements, as sketched below).
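A sketch of what that custom "constructor" could look like (all the names here are placeholders):

function addObjInstance(prop1, prop3, someFunction) {
    var base = Obj_Instances.length;           // start of the new instance's block
    Obj_Instances[base]     = prop1;           // "property 1"
    Obj_Instances[base + 1] = new Array();     // "property 2": buffer + view
    Obj_Instances[base + 1][0] = new ArrayBuffer(128);
    Obj_Instances[base + 1][1] = new DataView(Obj_Instances[base + 1][0]);
    Obj_Instances[base + 2] = prop3;           // "property 3"
    Obj_Instances[base + 3] = someFunction;    // "property 4"
    Obj_Instances[base + 4] = "";              // "properties" 5..10 start out empty
    Obj_Instances[base + 5] = "";
    Obj_Instances[base + 6] = 0;               // the numeric "property 7"
    Obj_Instances[base + 7] = "";
    Obj_Instances[base + 8] = "";
    Obj_Instances[base + 9] = "";
    return base / OBJECTS_SIZE;                // index of the new instance
}

var newIdx = addObjInstance("property or array 1 of Object 2",
                            "property or array 3 of Object 2",
                            function () { alert("some function elsewhere"); });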
I will also try jsPerf and JSHint to see which combination performs better.
To answer your "doubts", I suggest using jsPerf to benchmark your code. One can't really tell from the code alone whether one approach is faster than another unless it is tested.
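If you just want a rough first impression in the browser console before setting up a proper jsPerf test case, something along these lines works (console.time here is only a quick sanity check, not as rigorous as jsPerf; objArray and doSomething are the ones from your question):

console.time("direct property access");
for (var x = 0; x < 65536; x++) {
    doSomething(objArray[0].property1, 0);
}
console.timeEnd("direct property access");

console.time("cached reference");
var cached = objArray[0].property1;
for (var y = 0; y < 65536; y++) {
    doSomething(cached, 0);
}
console.timeEnd("cached reference");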
Also, I suggest you use the literal notation for arrays and objects instead of the new notation during construction:
var objArray = [
    {
        property : 'some string property'
    }, {
        ...
    },
];
Also, based on your code, it's better to have this since you are using the same object per iteration:
var obj = objArray[0].property1,
    objIDX = 0;
for (var x = 0; x < 65536; x++) {
    doSomething(obj, objIDX);
}
I realise this is not quite answering your question (as it has already been answered), but since you seem to be looking for speed improvements for function calls that happen thousands of times (as others who find this might also be), I thought I'd include this here because it goes against common assumptions:
An example function:
var go = function (a, b, c, d, e, f, g, h) {
    return a + b + c + d + e + f + g + h;
};
The following is how you would normally call a repetitive function:
var i = 500000;
while (i--) {
    go(1, 2, 3, 4, 5, 6, 7, 8);
}
However, if none (or only a few) of those arguments ever change for this particular usage of the function, then it's far better to do this (from a speed point of view - obviously not an asynchronous one):
var i = 500000;
go.args = [1, 2, 3, 4, 5, 6, 7, 8];
while (i--) {
    go();
}
In order for the above to work you only need a slight modification to the original function:
var go = function (a, b, c, d, e, f, g, h, i) {
    if (go.args) {
        i = go.args;
        a = i[0]; b = i[1];
        c = i[2]; d = i[3];
        e = i[4]; f = i[5];
        g = i[6]; h = i[7];
    }
    return a + b + c + d + e + f + g + h;
};
This second function runs significantly faster because you are not passing in any arguments (a function called with no arguments is very quick to invoke). Pulling the values from the .args array doesn't seem to be that costly either (unless strings are involved). Even if you update one or two of the args on each call it's still far faster, which makes it well suited to pixel or image-data manipulation, because there you are normally only shifting x & y:
var i = 500000;
go.args = [1, 2, 3, 4, 5, 6, 7, 8];
while (i--) {
    go.args[2] = i;
    go();
}
So in a way this is an example of where an object property can be faster than local vars - if a little convoluted and off topic ;)
Possible browser optimizations notwithstanding, accessing a property of an object is more expensive than accessing a local variable (but not necessarily a global variable or a variable of a parent function).
The deeper the property, the more of a performance hit you take. In other words,
for (var x = 0; x < 65536; x++)
    doSomething(objArray[0].property1, objIDX = 0);
would be improved by caching objArray[0].property1, and not repeatedly assigning to objIDX:
var prop = objArray[0].property1;
objIDX = 0;
for (var x = 0; x < 65536; x++)
    doSomething(prop, 0);
When iterating over a string or array (or anything else with a length property), I've always used a loop like this:
var foo = [...];
var i;
for (i = 0; i < foo.length; i++) {
    // do something
}
However, I just encountered someone who did this:
var foo = [...];
var fooLen = foo.length;
var i;
for (i = 0; i < fooLen; i++) {
    // do something
}
He said he thought that .length recalculated the length, so the loop would recalculate the length of the string/array over and over, and that saving the length to a variable would therefore be more optimized.
I always assumed length was just a value property because of the way it's used (it's not "asdf".length(), it's "asdf".length) but is this not the case?
There are some situations where putting the length into a local variable is faster than accessing the .length property and it varies by browser. There have been performance discussions about this here on SO and numerous jsperf tests. In a modern browser, the differences were not as much as I thought they would be, but they do exist in some cases (I can't seem to find those previous threads).
There are also different types of objects that may have different performance characteristics. For example, a javascript array may have different performance characteristics than the array-like object returned from some DOM functions like getElementsByClassName().
And there are some situations where you may be adding items to the end of the array and don't want to iterate over the items you add, so you grab the length before you start.
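A sketch of that last case:

var items = ["a", "b", "c"];
var originalLength = items.length; // snapshot taken before the loop starts
for (var i = 0; i < originalLength; i++) {
    items.push(items[i] + "!"); // the pushed items are not re-visited
}
// items is now ["a", "b", "c", "a!", "b!", "c!"]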
From MDC
for (var i = 0; i < a.length; i++) {
    // Do something with a[i]
}
This is slightly inefficient as you are looking up the length property
once every loop. An improvement is this:
for (var i = 0, len = a.length; i < len; i++) {
    // Do something with a[i]
}
Maybe not much of a difference with "regular" arrays, but for something like "node.children.length" I would err on the safe side and call it only once. CoffeeScript does that for you automatically.
Note that there is an actual difference in behaviour if the length can change during the loop.
It depends on whether you are changing foo's length.
var foo = [1, 2, 3];
while (foo.length) {
    foo.shift();
}
Obviously the code is keeping track of foo's length, not simply remembering a value.
You can assign the length as a number in the loop.
for (i = 0, L = foo.length; i < L; i++) {
    // do something to foo[i]
}
JSLint keeps complaining about things like this
var myArray = [1, 2, 3];
for (var value in myArray)
{
    // BLAH
}
It says that I should wrap it in an if statement. I realize you need to filter when you are looping over an object's properties, but here, what should I put in the if statement to do the correct filtering?
Additionally when I do something like
for (var i = 0; i < 10; i++)
{
    // foo
}
for (var i = 0; i < 20; i++)
{
    // bar
}
It complains that i has already been defined. How do I prevent this other than using different variable names?
JSLint whinges about a lot that's not really harmful. In this case it's right to complain about for...in, because that's the wrong construct to loop over an Array.
This is because you will get not only the numeric keys, but also any other arbitrary properties that have been added to the array or its Array.prototype. The latter typically comes from extension utility functions added by frameworks.
Whilst you can defeat that case with hasOwnProperty to check it's not a prototype member, it's uglier than just doing it the proper way with for (var i = 0; ...), so why bother.
Also, with for...in you won't necessarily get the items in numerical order as you might expect.
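A small sketch of the extra-properties problem (assuming some framework has extended Array.prototype):

Array.prototype.last = function () { return this[this.length - 1]; }; // hypothetical framework extension

var myArray = [1, 2, 3];
myArray.note = "extra"; // arbitrary property added directly to the array

for (var key in myArray) {
    console.log(key); // "0", "1", "2", "note", "last" - not just the indices
}
for (var i = 0; i < myArray.length; i++) {
    console.log(myArray[i]); // 1, 2, 3 - only the elements
}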
It complains that i has already been defined. How do I prevent this other than using different variable names?
Yeah, you can ignore that one.
It wants you to remove the var from the second for (i..., because declaring a variable twice in the same scope doesn't do anything. However I would recommend leaving the var there because it doesn't do any harm, and if you move the loop to another block you don't want it to be suddenly scribbling on globals.
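A quick sketch of that "scribbling on globals" hazard:

function doBar() {
    for (i = 0; i < 20; i++) { // `var` removed - `i` is now an implicit global
        // bar
    }
}
doBar();
console.log(typeof i); // "number" - `i` has leaked into the global scope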
Really, you don't have to listen to JSLint. But if you really want it to just pass (which is nice), you might do:
var myArray = [1, 2, 3];
for (var value in myArray)
{
    if (myArray.hasOwnProperty(value)) {
        // BLAH
    }
}
For the second part, you either have to put them in functions or use different variables. The other solution would be to just use i instead of var i the second time, because it's already defined...
If you look at the JSLint docs you'll find a link explaining the rationale behind filtering for-in loops: basically, it's to avoid tripping over any enumerable properties that have been added to the object's prototype. (Although you shouldn't use for-in to iterate over an array anyway.)
In the second case you are declaring the variable twice: variables have function scope (or global scope) in JavaScript. Douglas Crockford, and therefore JSLint, argues that it is better to declare the variable only once for the scope in which it resides:
var i;
for (i = 0; i < 10; i++)
{
    // foo
}
for (i = 0; i < 20; i++)
{
    // bar
}
I suggest following JSLint as a good reference point; you may want to configure a few options to make the checks looser.
Anyway, the best way to iterate through an array is a plain for loop rather than a for...in loop.
If you want a detailed explanation, read this post.
phew! That was a long title.
I'm reading the WROX book Professional JavaScript for Web Developers and I came across this sample code, and I was just wondering whether it is best practice:
function convertToArray(nodes) {
    var array = new Array();
    for (var i = 0, len = nodes.length; i < len; i++) {
        array.push(nodes[i]);
    }
    return array;
}
The thing that's got me scratching my head is the len = nodes.length part. Am I wrong in thinking that the first clause of a for loop is only run once? Is there a reason you'd want to store the length of the NodeList in a variable (len) before looping through it? Would you do that with a normal array as well?
Thanks
That is for performance reasons. A local variable is faster for several reasons:
The length will need to be accessed all the time in the loop, once per iteration;
A local variable lookup is faster than member lookup;
If nodes is an array, then .length is a magic property that may take a bit longer to retrieve than a member variable.
If nodes is an ActiveX object, then .length might result in a method call into the object, so that's the most expensive operation of all.
While we're discussing micro-optimizations, the following should be even faster:
function convertToArray(nodes) {
    var i = nodes.length,
        array = new Array(i); // potentially faster than `array = []`
                              // -- see comments
    while (i--)
        array[i] = nodes[i];
    return array;
}
It needs one less local variable, uses a while loop instead of a for loop, and uses array assignment instead of a call to push().
Also, because we're counting down and pre-allocating the array's slots, the array's length doesn't have to be changed on every iteration step, only on the first one.
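Typical usage would be converting a NodeList or HTMLCollection into a real array, for example:

// Hypothetical usage: turn a live HTMLCollection into a plain array
var divs = convertToArray(document.getElementsByTagName("div"));
console.log(divs instanceof Array); // true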