Which way is best to define a JavaScript array? [duplicate]

I want to create an array in JavaScript, and I remember two ways of doing it, so I just want to know what the fundamental differences are and whether there is a performance difference between these two "styles":
var array_1 = new Array("fee", "fie", "foo", "fum");
var array_2 = ['a', 'b', 'c'];
for (let i = 0; i < array_1.length; i++) {
    console.log(array_1[i]);
}
for (let i = 0; i < array_2.length; i++) {
    console.log(array_2[i]);
}

They do the same thing. Advantages to the [] notation are:
It's shorter.
If someone does something silly like redefine the Array symbol, it still works.
There's no ambiguity when you define only a single entry, whereas when you write new Array(3), if you're used to seeing entries listed in the constructor, you could easily misread that as [3], when in fact it creates a new array with a length of 3 and no entries (see the sketch after this list).
It may be a tiny little bit faster (depending on JavaScript implementation), because when you say new Array, the interpreter has to go look up the Array symbol, which means traversing all entries in the scope chain until it gets to the global object and finds it, whereas with [] it doesn't need to do that. The odds of that having any tangible real-world impact in normal use cases are low. Still, though...
So there are several good reasons to use [].
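To make the single-argument ambiguity and the shadowing point concrete, here's a small sketch (the redefined Array function is a hypothetical illustration, not something you should ever do):
var a = new Array(3);   // a single number sets the length; no entries are created
console.log(a.length);  // 3
console.log(a[0]);      // undefined
var b = [3];            // the literal actually stores the number 3
console.log(b.length);  // 1
console.log(b[0]);      // 3

function demo() {
    var Array = function() { return { hijacked: true }; }; // silly local redefinition
    console.log(new Array(1, 2)); // { hijacked: true }, because the constructor is fooled
    console.log([1, 2]);          // [1, 2], the literal still works
}
demo();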
Advantages to new Array:
You can set the initial length of the array, e.g., var a = new Array(3);
I haven't had any reason to do that in several years (not since learning that arrays aren't really arrays and there's no point trying to pre-allocate them). And if you really want to, you can always do this:
var a = [];
a.length = 3;
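For what it's worth, a quick sanity check (a sketch, assuming a standards-compliant engine) shows the two forms end up equivalent:
var viaConstructor = new Array(3);
var viaLiteral = [];
viaLiteral.length = 3;
console.log(viaConstructor.length, viaLiteral.length); // 3 3
console.log(0 in viaConstructor, 0 in viaLiteral);     // false false; both slots are holes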

There's no difference in your usage.
The only real usage difference is passing an integer parameter to new Array() which will set an initial array length (which you can't do with the [] array-literal notation). But they create identical objects either way in your use case.

This benchmark on JSPerf shows the array literal form to be generally faster than the constructor on some browsers (and not slower on any).
This behavior is, of course, totally implementation dependent, so you'll need to run your own test on your own target platforms.
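If you do want to measure it yourself, a quick-and-dirty local test could look like the sketch below (timings will vary wildly by engine):
console.time('literal');
for (var i = 0; i < 1e6; i++) { var litArr = ['a', 'b', 'c']; }
console.timeEnd('literal');
console.time('constructor');
for (var j = 0; j < 1e6; j++) { var ctorArr = new Array('a', 'b', 'c'); }
console.timeEnd('constructor');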

I believe the performance benefits are negligible.
See http://jsperf.com/new-array-vs-literal-array/4

I think both ways are the same in terms of performance, since they both create an "Array object" eventually, so once you start accessing the array the mechanism will be the same. I'm not too sure how different the mechanisms to construct the arrays are (in terms of performance), though there shouldn't be any noticeable gain using one over the other.

Related

How do objects basically work when we get a value by key?

I have a question about how object key-value lookup works when we get a value by key. Does it need to look up where the key is first and then return the corresponding value?
For example: If we have object like
var obj = { keyX: valueX, keyY: valueY, keyZ: valueZ }
Then we retrieve the value obj.keyY; does it look through those object keys one by one to find keyY?
So for a large object (e.g. an object with 1 million keys), is it slow to get a value by key?
I appreciate any help. Thank you.
Then we retrieve the value obj.keyY; does it look through those object keys one by one to find keyY?
It's up to the implementation, but even in old JavaScript engines, the answer is no, it's much, much more efficient than that. Object property access is a hugely common operation, so JavaScript engines aggressively optimize it, and are quite sophisticated at doing so.
In modern JavaScript engines, objects are often optimized down to just-in-time-generated machine code keyed to the object's class/shape, so property lookup is blindingly fast. If they aren't optimized for some reason (perhaps they're only used infrequently), typically a structure like a hash table is used, so lookup remains much better than linear access (looking at each property).
Objects with large numbers of properties, or properties that vary over time, may not be as well optimized as objects with more reasonable numbers of properties. But they'll at least be optimized (in anything vaguely modern) to a hash table level of access time. (FWIW: For objects that have varying properties over time, you may be better off with a Map in the first place.)
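As a minimal sketch of that Map alternative (assuming an ES2015+ engine), where keys are expected to come and go over time:
const lookup = new Map();
lookup.set('keyX', 'valueX');
lookup.set('keyY', 'valueY');
console.log(lookup.get('keyY')); // 'valueY', with roughly O(1) expected lookup
lookup.delete('keyX');           // removing keys is a first-class, cheap operation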
Although interesting academically, don't worry about this in terms of writing your code until/unless you run into a performance problem that you trace to slow property access. (Which I literally never have in ~20 years of JavaScript coding. :-) )
A little benchmark:
// setup: build an object with 1,000,000 keys
for (var obj = {}, i = 0; i < 1000000; obj['key' + i] = i++);
var a;
// find the first key
console.time(1);
a = obj.key1;
console.timeEnd(1);
// find the last key
console.time(2);
a = obj.key999999;
console.timeEnd(2);
As you can see, it does not look them up one by one.

Why are we allowed to create sparse arrays in JavaScript?

I was wondering what the use-cases for code like var foo = new Array(20), var foo = [1,2,3]; foo.length = 10 or var foo = [,,,] were (also, why would you want to use the delete operator instead of just removing the item from the array). As you may know already, all these will result in sparse arrays.
But why are we allowed to do the above things? Why would anyone want to create an array whose length is 20 by default (as in the first example)? Why would anyone want to modify and corrupt the length property of an array (as in the second example)? Why would anyone want to do something like [, , ,]? Why would you use delete instead of just removing the element from the array?
Could anyone provide some use-cases for these statements ?
I have been searching for some answers for ~3 hours. Nothing. The only thing most sources (2ality blog, JavaScript: The Definitive Guide 6th edition, and a whole bunch of other articles that pop up in the Google search results when you search for anything like "JavaScript sparse arrays") say is that sparse arrays are weird behavior and that you should stay away from them. No sources I read explained, or at least tried to explain, why we were allowed to create sparse arrays in the first place. Except for You Don't Know JS: Types & Grammar, here is what the book says about why JavaScript allows the creation of sparse arrays:
An array that has no explicit values in its slots, but has a length property that implies the slots exist, is a weird exotic type of data structure in JS with some very strange and confusing behavior. The capability to create such a value comes purely from old, deprecated, historical functionalities ("array-like objects" like the arguments object).
So, the book implies that the arguments object somehow, somewhere, uses one of the examples I listed above to create a sparse array. So, where and how does arguments use sparse arrays ?
Something else that is confusing me is this part in the book "JavaScript: The Definitive Guide 6th Edition":
Arrays that are sufficiently sparse are typically implemented in a slower, more memory-efficient way than dense arrays are.
"more memory-efficient" appears like a contradiction to "slower" to me, so what is the difference between the two, in the context of sparse arrays especially ? Here is a link to that specific part of the book.
I was wondering what the use-cases for code like var foo = new Array(20), var foo = [1,2,3]; foo.length = 10 or var foo = [,,,] were
in theory, for the same reasons people usually use sparse data structures (not necessarily in order of importance): memory usage (var x = []; x[0] = 123; x[100000] = 456; won't consume 100000 'slots'), performance (say, taking the average of the aforementioned x via for-in or reduce()), and convenience (no 'hard' out-of-bounds errors, no need to grow/shrink explicitly);
that said, semantically, a JS array is just a special associative collection with index keys and a special property 'length' satisfying the invariant of being greater than all its index properties. While that is a pretty elegant definition, it has the drawback of rendering sparsely defined arrays somewhat confusing and error prone, as you noticed.
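As a sketch of the memory and convenience points above, note how few entries the sparse x actually stores, and how for-in and reduce() visit only those:
var x = [];
x[0] = 123;
x[100000] = 456;
console.log(Object.keys(x).length); // 2; only two 'slots' actually exist
var sum = 0, count = 0;
for (var i in x) { sum += x[i]; count++; } // visits only the defined indices
console.log(sum / count);                  // 289.5, the average
console.log(x.reduce(function(a, b) { return a + b; }, 0)); // 579; reduce() skips holes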
But why are we allowed to do the above things?
even if we were not allowed to define sparse arrays, we could still put undefined elements into arrays, resulting in basically the same usability problems you see with sparse arrays.
So, say, having to write [0,undefined,...,undefined,1,undefined] instead of [0,...,1,] would buy you nothing but more memory-consuming arrays and slower iterations.
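A small sketch of the difference between explicit undefined elements and true holes:
var explicit = [0, undefined, undefined, 1]; // four real slots
var sparse = [0, , , 1];                     // two real slots, two holes
console.log(explicit.length, sparse.length); // 4 4
console.log(1 in explicit, 1 in sparse);     // true false
explicit.forEach(function(v, i) { console.log('explicit', i); }); // logs 0, 1, 2, 3
sparse.forEach(function(v, i) { console.log('sparse', i); });     // logs only 0 and 3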
Arrays that are sufficiently sparse are typically implemented in a slower, more memory-efficient way than dense arrays are. "More memory-efficient" and "slower" seem like a contradiction to me.
"dense arrays" used for general purpose data are typically implemented as a contiguous block of memory filled with elements of the same size; if you add more elements, you continue filling the memory block allocating a new block if exhausted. Given that reallocation implies moving all elements to the new memory block, said memory is typically allocated in abundance to minimize chances of reallocation ( something like the golden ratio times the last capacity ).
Hence, such data structures are typically the fastest for ordered/local traversal ( being more CPU/cache friendly ), the slowest for unpredicatble insertions/deletions ( for sufficiently big N ) and have high memory overhead ~ sizeof(elem) * N + extra space for future elems.
Conversely, "sparse arrays/matrices/..." are implemented by 'linking' together smaller memory blocks spreaded in memory or by using some 'logically compressed' form of a dense data structure or both; in either case, memory consumption is reduced for obvious reasons, but traversing them comparatively requires more work and less local memory access patterns.
So, if compared relative to the same effectively traversed elements sparse arrays consume much less memory but are much slower than dense arrays. However, given that you use sparse arrays with sparse data and algorithms acting trivially on 'zeros', sparse arrays can turn out much faster in some scenarios ( eg. multiply very big matrices with few non zero elements ... ).
Because a JS Array is a very curious kind of data type that does not obey the time-complexity rules you would probably expect when the right tool is used; I mean the for-in loop or the Object.keys() method. Despite being a very functional person, I would swing towards the for-in loop here, since it is breakable.
There are some very beneficial use cases for sparse arrays in JS, such as inserting and deleting items into a sorted array in O(1) without disturbing the sorted structure, if your values are numerals, like a limit order book. In other words, this works if you can establish a direct numerical correlation between your keys and values.
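A hedged sketch of that order-book idea (the price levels are made up for illustration): using the numeric price itself as the index makes insertion and deletion O(1), and iteration visits the defined indices in ascending numeric order:
var book = [];    // index = price level, value = quantity
book[99] = 12;
book[105] = 3;
book[101] = 5;    // O(1) insert; no shifting, order is preserved by the keys
delete book[101]; // O(1) delete; leaves a hole instead of shifting
for (var price in book) {
    console.log(price, book[price]); // 99 12, then 105 3
}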

Why is .reduce used instead of reduce()? In flattening an array of arrays with JavaScript

This example is from http://eloquentjavascript.net/code/#5.1.
My question is the first bullet-pointed detail; the others may be helpful, but are additional. Also see the first short program to see my question in context.
- Why is arrays.reduce() used instead of reduce(arrays, ...)? I know that theirs works with arrays.reduce, but why?
This is an answer to a comment that is useful, but additional to the original question.
My question is with this first program. Since it uses arrays.reduce, reduce would be a method of arrays; I am not sure why reduce is a method of arrays. The reason might be in the design decisions of JavaScript? Thanks, @cookie monster, for that comment!
This is the program, with the context of my question:
var arrays = [[1, 2, 3], [4, 5], [6]];
console.log(arrays.reduce(function(flat, current) {
    return flat.concat(current);
}, []));
// → [1, 2, 3, 4, 5, 6]
These next details are additional, but may(or may not) be of use:
I know that the [] at the end is used because it is the start parameter of the reduce function, so that the other arrays are added onto that empty array. I know that .concat puts two arrays together, like + does for strings but for arrays. Here is what the reduce function looks like, even though it is standard:
function reduce(array, combine, start) {
    var current = start;
    // fold each element into the accumulator, left to right
    for (var i = 0; i < array.length; i++)
        current = combine(current, array[i]);
    return current;
}
One of their other examples showed my way with a single array, if that helps. It looked like:
console.log(reduce([1, 2, 3, 4], function(a, b) {
    return a + b;
}, 0));
// → 10
Thanks! :)
In object oriented design, a guiding principle is that you create or declare objects and those objects have a series of methods on them that operate on that particular type of object. As Javascript is an object oriented language, the built-in functions follow many of the object oriented principles.
.reduce() is a function that operates only on Arrays. It is of no use without an Array to operate on. Thus, in an object oriented design paradigm, it makes complete sense to place the .reduce() function as a method on an Array object.
This object-oriented approach offers the following advantages vs. a global reduce() function:
It is consistent with the object oriented principles used elsewhere in the language.
It is convenient to invoke the .reduce() function on a particular array by using array.reduce() and it is obvious from the syntax which array it is operating on.
All array operations are grouped as methods on the Array object making the API definition more self documenting.
If you attempt to do obj.reduce() (invoke it on a non-array), you will immediately get a runtime error about an undefined .reduce() method.
No additional space is taken in the global scope (e.g. no additional global symbol is defined - lessening the chance for accidental overwrite or conflict with other code).
If you want to know why anyone ever used a global reduce() instead, you need to understand a little bit about the history of Javascript's evolution. Before ES5 (or for users running browsers that hadn't yet implemented ES5, like IE8), Javascript did not have a built-in .reduce() method on the Array object. Yet some developers who were familiar with this type of useful iteration capability from other languages wanted to use it in their own Javascript or in their own Javascript framework.
When you want to add some functionality to an existing built-in object like Array in Javascript, you generally have two choices. You can add a method to the prototype (in object-oriented fashion) or you can create a global function that takes the Array as its first argument.
For the Array object in particular, there are some potential issues with adding iterable methods to the Array prototype. This is because if any code does this type of iteration of an array (a bad way to iterate arrays, but nevertheless done by many):
for (var prop in array)
they will end up iterating not only the elements of the array, but also any iterable properties of the array. If a new method is assigned to the prototype with this type of syntax:
Array.prototype.myMethod = function() {}
Then, this new method will end up getting iterated with the for (var prop in array) syntax and it will often cause problems.
So, rather than adding a new method to the prototype, a safer alternative was to just use a global function and avoid this issue.
Starting in ES5, there is a way, using Object.defineProperty(), to add non-enumerable methods to an object or prototype, so for newer browsers it is now possible to add methods to a prototype that are not subject to the problem mentioned above. But if you want to support older browsers (like IE8) and use reduce() type functionality, you're still stuck with these ancient limitations.
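For illustration, a minimal sketch of that Object.defineProperty() approach (myReduce is a hypothetical name, not a standard method):
if (!Array.prototype.myReduce) {
    Object.defineProperty(Array.prototype, 'myReduce', {
        value: function(combine, start) {
            var current = start;
            for (var i = 0; i < this.length; i++)
                current = combine(current, this[i]);
            return current;
        },
        enumerable: false // keeps it out of for (var prop in array) loops
    });
}
console.log([1, 2, 3].myReduce(function(a, b) { return a + b; }, 0)); // 6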
So ... even though a global reduce() is less object oriented and is generally not as desirable in Javascript, some older code went that route for legitimate safety/interoperability reasons. Fortunately, we are putting that road behind us as old versions of IE drop off in usage (thank god for Microsoft dropping XP support to finally accelerate the demise of old versions of IE). And newer browsers already contain array.reduce() built in.
JavaScript was/is influenced by the language Scheme, a dialect of Lisp. In Scheme higher order functions are a key component/feature of the language. In fact the reduce function is pretty much equivalent to the fold function in Scheme. In the case of the reduce function in JavaScript, the creators of the language noticed that programmers often need to traverse arrays in a certain fashion, and gave programmers a higher order function where they can pass in a function to specify how they want to manipulate the data. Having higher order functions allows programmers to abstract redundant code therefore creating shorter, cleaner, more readable code.
You can use the reduce function in JavaScript to do many things other than flatten lists. Look here for an example.


How do arrays in JavaScript work? (i.e. without a starting dimension size)

I was reading this post the other night about the inner workings of the Array, and learned a lot from the answers posted, especially Jonathan Holland's.
So the reason you give a size to an array beforehand is that space can be reserved up front, so that elements in the array will be placed next to each other in memory, thus providing O(1) access time because of pointer + offset traversal.
But in JavaScript, you can initialize an array like such:
var anArray = []; //Initialize an empty array, without a dimension
So my question is: since in JavaScript you can initialize an array without specifying a dimension beforehand, how is memory allocated for an array so that it still provides O(1) access time, given that the 'amount' of memory locations is not specified beforehand?
Hmm. You should distinguish between arrays and associative arrays.
arrays:
A=[0,1,4,9,16];
associative arrays:
B={a:'ha',b:27,c:30};
The former has a length, the latter does not. When I run this in a javascript shell, I get:
js>A=[0,1,4,9,16];
0,1,4,9,16
js>A instanceof Array
true
js>A.length
5
js>B={a:'ha',b:27,c:30};
[object Object]
js>B instanceof Array
false
js>B.length
js>
How arrays "work" in Javascript is implementation-dependent. (Firefox and Microsoft and Opera and Google Chrome would all use different methods) My guess is they (arrays, not associative arrays) use something like STL's std::vector. Your question:
how is memory allocated for an array to still provide O(1) access time since the 'amount' of memory locations is not specified beforehand?
is more along the lines of how std::vector (or similar resizable arrays) works. It reallocates to a larger array as necessary. Insertions at the end take amortized O(1) time: if you insert N elements, where N is large, the total time is O(N). The individual inserts that do have to resize the array may take longer, but on average each insert takes O(1).
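A toy sketch of that grow-on-demand strategy, written in JavaScript purely for illustration (real engines do this in native code):
function GrowableArray() {
    this.capacity = 4;
    this.length = 0;
    this.data = new Array(this.capacity);
}
GrowableArray.prototype.push = function(value) {
    if (this.length === this.capacity) {
        this.capacity *= 2; // reallocate to a larger block
        var bigger = new Array(this.capacity);
        for (var i = 0; i < this.length; i++) bigger[i] = this.data[i];
        this.data = bigger; // the occasional O(n) copy...
    }
    this.data[this.length++] = value; // ...amortizes to O(1) per push
};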
Arrays in Javascript are "fake". They are implemented as hash maps. So in the worst case their access time is not O(1). They also need more memory and you can use any string as an array index. You think that's weird? It is.
As I understand, it's like this:
There are two different things in JavaScript: Arrays and Objects. They both act as hashtables, although the underlying implementation is specific to each runtime. The difference between the two is that an Array has an implicit length property, while an Object does not. Otherwise you can use the [] or . syntax for both of them. Yes, that means that objects can have numerical properties and arrays can have string indices. No problem. Although the length property might not be what you expect when using such tricks or sparse arrays; you should rely on it only if the array is not sparse and its indices start from 0.
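A small sketch of those length caveats:
var arr = [];
arr[5] = 'x';            // sparse: only one element is actually stored
console.log(arr.length); // 6, i.e. highest index + 1, not the element count
arr.name = 'demo';       // string-keyed property, perfectly legal on an array
console.log(arr.length); // still 6; non-index keys don't affect length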
As for the performance - sorry, it's not the O(1) you'd expect. As stated before - it's actually implementation specific. But in general case it's not possible to ensure that there will be O(1) performance for all operations in a hashtable. That said, I'd expect that decent implementations should have quite a few optimizations in place for standard cases, which would make the performance quite close to O(1) under most scenarios. But at any rate - storing huge volumes of data in JavaScript is not a wise idea.
It is the same in PHP. I came from a PHP/JavaScript background, and dimensionalizing arrays really got me when I moved on to other languages.
JavaScript has no real arrays.
Elements are allocated as you define them.
It is a flexible tool. You can use it for many purposes, but as a general-purpose tool it is not as efficient as special-purpose arrays.
As Jason said, unless it is explicitly specified by the ECMAScript standard (unlikely), it is implementation dependent. The article shown by Feet shows that IE's implementation was poor (until IE8?), which is confirmed by JavaScript loop performance.
Other JS engines probably take a more pragmatic approach. For example, they can do as Lua does, having a true array part and an associative array part: if the array is dense, it lives in the true array (which can still be extended at the cost of re-allocation), and you can still have sparse high indices living in the associative part.
Thus you get the best of both worlds: speed of access to dense parts and low memory use for sparse parts.
You can even do:
var asocArray = {key: 'val', 0: 'one', 1: 'two'}
and
var array = []; array['key'] = 'val'; array[0] = 'one'; array[1] = 'two';
When looping, you can use them in the same way, with a for i in object loop for both, instead of using that for asocArray and a for (var i = 0; i < array.length; i++) loop for array. The only difference here is the type of the keys: if you're expecting the indices of array to satisfy typeof i === 'number' or (i).constructor === Number, that will only hold in the for (var i = 0; i < array.length; i++) loop; the for i in object loop makes all the keys (even indices) a String.
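To make that key-type difference concrete, a small sketch:
var array = [];
array[0] = 'one';
array[1] = 'two';
for (var k in array) {
    console.log(k, typeof k); // '0' string, '1' string; for-in keys are strings
}
for (var i = 0; i < array.length; i++) {
    console.log(i, typeof i); // 0 number, 1 number; the counter is a real number
}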
