While looking over new changes to JavaScript I noticed that Set and Map use .size instead of .length like arrays would.
This seems like a pointless diversion from what's normal with arrays - just one more thing to remember.
Was there a good design reason for this?
There's a lot of discussion about this in the esdiscuss thread "Set length property". This was a hotly debated issue, so it's not surprising that you don't necessarily agree with the resolution. Ultimately, the argument that prevailed (as evidenced by the fact that ES2015's Sets have size and not length) was summarized in a post by David Bruant:
...for me 'length' refers to a measurement with something like a ruler. You start at 0 and see up to where it goes. This is very accurate for an array which is an indexed set (starting at 0 and growing) and for arrays as considered in C (continuous sequence of bytes) which ECMAScript arrays seem inspired of. This is probably less relevant for unordered collections such as sets which I'd tend to consider as a messy bag.
And further discussed in a post by Dean Landolt:
Just wanted to jump in and say non-writable length is consistent with String behavior as well, but David makes a good point about length implying metric topology. David's suggestion of count is nice. ISTM what we're talking about is cardinality, but no need to get too silly w/ precision. Though size is just fine with me, and has plenty of prior art.
While apsillers' Jan 27, 2016 answer adds great links, a code example is missing. The size of a Set is a read-only getter, whereas arrays allow assigning to length to truncate the array.
let arr = [1, 2, 3, 4]
arr.length = 2 // an array's length is writable, so this truncates
console.log("modified length array", arr) // [1, 2]

let mySet = new Set([1, 2, 3, 4])
mySet.length = 2 // just creates an own "length" property; the Set's contents are untouched
mySet.size = 2 // silently ignored: size has a getter but no setter
console.log("modified length set", [...mySet]) // [1, 2, 3, 4]

let str = "1234"
str.length = 2 // silently ignored: a string's length is non-writable
console.log("modified length string", str) // "1234"
I'm working on a React Native app where I want to play an audio file and visualize it. I didn't find a suitable package for this, so I decided to make it myself.
I've made everything except the audio visualization. To visualize a file I need some kind of library that will analyze the audio for me and return an array of numbers; I will use each number of the array as a point on my future graph.
Let's imagine I have this package; ideally I would like to use it like this:
const audioPath = somePackage.analyzeAudio(audio.url);
console.log(audioPath);
// Output: [0, 0, 1, 2, 5, 10, 8, 0]
From the array [0, 0, 1, 2, 5, 10, 8, 0] I can tell that at the beginning the audio has no sound at all, then it gets louder, and at the end it's silent again. Later I can use these numbers to plot a graph.
Is there a way to do this?
I couldn't find anything useful for analyzing audio on the client side, so I decided to do it on the server (Node.js) and then send the parsed data to the client.
I implemented it with the help of this package: https://github.com/audiojs/web-audio-api
This code helped me a lot: https://github.com/victordibia/beats/blob/master/beats.js
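To sketch the idea (my own illustration, not the exact code I used), assuming the web-audio-api package's AudioContext and decodeAudioData, plus a hypothetical analyzeAudio helper that averages the absolute sample values into a fixed number of loudness buckets:

const fs = require('fs')
const { AudioContext } = require('web-audio-api')

// Decode an audio file and reduce it to `buckets` loudness values.
function analyzeAudio(path, buckets = 8) {
  return new Promise((resolve, reject) => {
    const context = new AudioContext()
    context.decodeAudioData(fs.readFileSync(path), (audioBuffer) => {
      const samples = audioBuffer.getChannelData(0) // first channel only
      const bucketSize = Math.floor(samples.length / buckets)
      const result = []
      for (let i = 0; i < buckets; i++) {
        let sum = 0
        for (let j = 0; j < bucketSize; j++) {
          sum += Math.abs(samples[i * bucketSize + j])
        }
        result.push(sum / bucketSize) // mean amplitude of this segment
      }
      resolve(result)
    }, reject)
  })
}

// analyzeAudio('./song.mp3').then(points => console.log(points))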
I'm learning about the map() method right now and I understand very basic examples.
var numbers = [2, 4, 6];
var double = numbers.map(function(value) {
  return value * 2;
});
My question is, in what cases do developers use the map() method to help solve problems? Are there some good resources with real world examples?
Thanks for the help!
As @Tushar noted:
The map() method creates a new array with the results of calling a provided function on every element in this array.
So it is basically used when you need to apply a function to every single element of an array and get the results back as a new array.
For example, doubling the numbers:
var numbers = [1, 4, 9];
var doubles = numbers.map(function(num) {
  return num * 2;
});
// doubles is now [2, 8, 18]. numbers is still [1, 4, 9]
It basically helps shorten your code by eliminating the need for a for loop. But remember that it is meant for transforming every element of the array, because map() always returns an array of the same length as the one provided.
For example, in the code above, doubles will be [2, 8, 18], where 2 corresponds to 1, 8 corresponds to 4, and 18 corresponds to 9.
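A common real-world use is reshaping an array of objects, e.g. pulling a display string out of a list of records. A small sketch with made-up data:

var users = [
  { firstName: 'Ada', lastName: 'Lovelace' },
  { firstName: 'Alan', lastName: 'Turing' }
];

// Build an array of strings from an array of objects.
var displayNames = users.map(function(user) {
  return user.firstName + ' ' + user.lastName;
});

console.log(displayNames); // ["Ada Lovelace", "Alan Turing"]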
I recommend watching the whole video, but your answer is at the 14-minute mark:
Asynchronous JavaScript at Netflix by Matthew Podwysowski at JSConf Budapest 2015
Given an array of integers of unknown length, whose values are also unknown, how can I organize them into three columns so that the sum of the leftmost group is the largest, the middle the second largest, and the right the smallest, with the groups being as close as possible in size?
The actual goal here is to organize <ul> elements by their size (# of <li> elements they contain) into three columns. I'm looking for an answer in JavaScript, but if someone can explain the logic simply enough, that would be good enough :)
So in other words given an array such as...
var set = [1, 1, 4, 6, 7, 10, 3, 6]
Would be organized as...
var left = [10, 4]
var middle = [6, 7]
var right = [3, 6, 1, 1]
There are other possibilities: the first column sums to 14, but this could be the outcome of various combinations, such as [6, 4, 3, 1]. Organizing it that way would make it difficult to get the right values for the next column, so preferably use the largest numbers earlier on, as in my example above. *
I'm sure this has been asked and answered before, but I didn't know how to look this up. I did some research and found out that this is pretty much the Partitioning Problem, although I'm still at a loss on how to do it or whether there is one simple, feasible answer here. Anything that works for the simple example I gave should suffice.
* EDIT: On second thought, this may be an incorrect assumption.
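Since this is essentially the multiway partitioning problem, an exact optimum is expensive to compute, but a common greedy heuristic gets close and matches the "largest numbers first" intuition above: sort descending and always add the next number to the group with the smallest running sum. A minimal sketch of that idea (my own illustration, not a canonical solution):

var set = [1, 1, 4, 6, 7, 10, 3, 6];

function partitionIntoThree(values) {
  var groups = [[], [], []];
  var sums = [0, 0, 0];

  // Place the largest values first so the running sums stay balanced.
  values.slice().sort(function(a, b) { return b - a; }).forEach(function(v) {
    var smallest = sums.indexOf(Math.min.apply(null, sums));
    groups[smallest].push(v);
    sums[smallest] += v;
  });

  // Put the group with the largest sum in the left column.
  var total = function(g) { return g.reduce(function(s, v) { return s + v; }, 0); };
  return groups.sort(function(a, b) { return total(b) - total(a); });
}

console.log(partitionIntoThree(set));
// [[10, 3], [7, 4, 1, 1], [6, 6]] -- sums 13, 13, 12

This balances the sums rather than the group sizes; if the column lengths also need to stay close, a size cap could be added when picking the target group.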
In attempting to build a WebGL 3D library for myself (learning purposes mostly), I followed documentation from various sources stating that the TypedArray method set() (specifically for Float32Array) is supposed to be "as fast as" memcpy in C (obviously tongue in cheek), literally the fastest according to html5rocks. On appearances that seemed to be correct (no loop setup in JavaScript, disappearing into some uberfast typed-array pure-C nonsense, etc.).
I took a gander at glMatrix (good job on it btw!), and noticed that the author stated that he unrolled all of the loops for speed. This is obviously something a JavaScript guru would do normally for as much speed as possible, but, based on my previous reading, I thought I had a 1-up on this library: specifically, he created his lib to work with both arrays and typed arrays, so I thought that I would get more speed by using set() since I was only interested in staying in TypedArray types.
To test my theory I set up this jsperf. Not only does set() comparatively lack speed, every other technique I tried (in the jsperf) beats it. It is the slowest by far.
Finally, my question: why? I can theoretically understand loop unrolling becoming highly optimized in the SpiderMonkey or Chrome V8 JS engines, but losing out to a plain for loop seems ridiculous (copy2 in the jsperf), especially if set()'s intent is to speed up copies of raw, contiguous, in-memory data types (TypedArray). Either way it feels like the set() function is broken.
Is it my code? My browser (I am using Firefox 24)? Or am I missing some other theory of optimization? Any help in understanding this result, which is contrary to my thoughts and understandings, would be incredibly helpful.
This is an old question, but there is a reason to use TypedArrays if you have a specific need to optimize some poorly performing code. The important thing to understand about TypedArray objects in JavaScript is that they are views which represent a range of bytes inside of an ArrayBuffer. The underlying ArrayBuffer actually represents the contiguous block of binary data to operate on, but we need a view in order to access and manipulate a window of that binary data.
Separate (or even overlapping) ranges in the same ArrayBuffer can be viewed by multiple different TypedArray objects. When you have two TypedArray objects that share the same ArrayBuffer, the set operation is extremely fast. This is because the machine is working with a contiguous block of memory.
Here's an example. We'll create an ArrayBuffer of 32 bytes, one length-16 Uint8Array to represent the first 16 bytes of the buffer, and another length-16 Uint8Array to represent the last 16 bytes:
var buffer = new ArrayBuffer(32);
var array1 = new Uint8Array(buffer, 0, 16);
var array2 = new Uint8Array(buffer, 16, 16);
Now we can initialize some values in the first half of the buffer:
for (var i = 0; i < 16; i++) array1[i] = i;
console.log(array1); // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
console.log(array2); // [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
And then very efficiently copy those 16 bytes into the second half of the buffer:
array2.set(array1);
console.log(array1); // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
console.log(array2); // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
We can confirm that the two arrays actually share the same buffer by looking at the buffer with another view. For example, we could use a length-8 Uint32Array that spans the entire 32 bytes of the buffer:
var array3 = new Uint32Array(buffer);
console.log(array3); // [50462976, 117835012, 185207048, 252579084,
// 50462976, 117835012, 185207048, 252579084]
I modified a JSPerf test I found to demonstrate the huge performance boost of a copy on the same buffer:
http://jsperf.com/typedarray-set-vs-loop/3
We get an order of magnitude better performance on Chrome and Firefox, and it's even much faster than taking a normal array of double length and copying the first half to the second half. But we have to consider the cycles/memory tradeoff here. As long as we have a reference to any single view of an ArrayBuffer, the rest of the buffer's data cannot be garbage collected. An ArrayBuffer.transfer function is proposed for ES7 Harmony which would solve this problem by giving us the ability to explicitly release memory without waiting for the garbage collector, as well as the ability to dynamically grow ArrayBuffers without necessarily copying.
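For illustration, usage under that proposal would look roughly like this, assuming the draft signature ArrayBuffer.transfer(oldBuffer, newByteLength); nothing shipping implemented it at the time:

var oldBuffer = new ArrayBuffer(32);

// Proposed draft API: detach oldBuffer and return a new ArrayBuffer that
// takes over its memory, optionally resized to the given byte length.
var newBuffer = ArrayBuffer.transfer(oldBuffer, 64);

console.log(oldBuffer.byteLength); // 0, the old buffer is detached
console.log(newBuffer.byteLength); // 64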
Well, set() doesn't exactly have simple semantics like that. In V8, after doing some work to figure out what should be done, it essentially arrives at exactly the same loop that the other methods are running directly in the first place.
Note that JavaScript is compiled into highly optimized machine code if you play your cards right (all the tests do that), so there should be no "worshipping" of some methods just because they are "native".
I've also been exploring how set() performs and I have to say that for smaller blocks (such as the 16 indices used by the original poster), set() is still around 5x slower than the comparable unrolled loop, even when operating on a contiguous block of memory.
I've adapted the original jsperf test here. I think it's fair to say that for small block transfers such as this, set() simply can't compete with unrolled index assignment. For larger block transfers (as seen in sbking's test), set() does perform better, but then it is competing with literally a million individual index operations, so it would seem bonkers not to be able to overcome those with a single bulk copy.
The contiguous-buffer set() in my test does perform slightly better than the separate-buffer set(), but again, at this size of transfer the performance benefit is marginal.
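For reference, the two techniques being compared look roughly like this (a minimal sketch; the actual jsperf cases differ in size and setup):

var src = new Float32Array([0, 1, 2, 3, 4, 5, 6, 7]);
var dst = new Float32Array(8);

// Technique 1: the built-in bulk copy.
dst.set(src);

// Technique 2: unrolled index assignment (the glMatrix-style approach).
dst[0] = src[0]; dst[1] = src[1]; dst[2] = src[2]; dst[3] = src[3];
dst[4] = src[4]; dst[5] = src[5]; dst[6] = src[6]; dst[7] = src[7];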