Was playing around with some code to create an array of 0's and found that for only one value NaN was returned instead of 0. I get this in Chrome, Node, and Firefox.
What's causing the second value to be NaN?
var arr = new Array(32).join(0).split('').map(parseInt)
// prints [0, NaN, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
console.dir(arr)
That's because the function passed to map will be called with three arguments:
The current array item
The index of that item
The whole array
In this case, that function is parseInt, which only uses two arguments:
The string to be parsed
The radix used to parse the string
Additional arguments will be ignored.
Therefore, for the 2nd item in the array (i.e. index 1), the call will be
parseInt("0", 1, ignoredArray)
When the radix is 1, NaN is returned:
Let R = ToInt32(radix).
If R ≠ 0, then
If R < 2 or R > 36, then return NaN.
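Spelled out, the first few calls that map makes look like this (a sketch of the effective calls, with results as comments):
parseInt("0", 0);  // 0   (radix 0 falls back to the default, base 10)
parseInt("0", 1);  // NaN (radix 1 is outside the allowed 2..36 range)
parseInt("0", 2);  // 0   (and so on: "0" is a valid numeral in every base up to 36)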
Also note that if you used a bigger number like new Array(99), you would have seen NaNs starting at index 37.
The .map() function passes three arguments to the callback, of which parseInt() uses two: the value and the index (the third, the array itself, is ignored). The index ends up being used as the radix, and when the radix is 1, no string of digits is a valid value. parseInt() treats a radix of 0 as if no radix had been supplied, which is why the first element parses fine.
To elaborate: for the first element, parseInt() is called like this:
parseInt("0", 0)
because the array value is zero and the index is zero. On the second call, it's this:
parseInt("0", 1)
and that returns NaN.
Note that if you're not too picky about the results being all integers, you can do what you're trying to do with the Number constructor:
var arr = new Array(32).join(0).split('').map(Number);
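This works because Number, unlike parseInt, only looks at its first argument, so the extra index and array that map passes along are harmless:
Number("0", 1)  // 0; any arguments after the first are simply ignored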
In ES2015 (the newest ratified standard for JavaScript at the time of this answer) you can use the .fill() function:
var arr = new Array(31).fill(0);
The mapping passes three arguments to its function:
the element;
the index of that element within the array; and
the array itself.
The parseInt function will look at the first two of those, treating the first correctly as the string but treating the second as the base to use. When the base is neither zero nor in the inclusive range 2..36, it returns NaN (1). If you had an array with forty elements, you'd also see a bunch of NaN values at the end:
0, NaN, 0, 0, 0, ... 0, 0, NaN, NaN, NaN
You'd also get some pretty strange results if the number strings were anything other than zero, since the array index would dictate what base was used to interpret it.
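For instance, with strings of "10" instead of "0", each element would be read in a different base (a quick illustration):
['10', '10', '10', '10'].map(parseInt)
// => [10, NaN, 2, 3]: index 0 uses the default base, index 1 gives NaN,
//    and "10" is 2 in base 2 and 3 in base 3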
To actually fix this, you can just provide a function that will translate what map gives you to what parseInt expects (a map map, I guess you could call it):
function myParseInt(s, r, a) { return parseInt(s, 10); } // always base 10; the index map supplies is ignored
var arr = new Array(32).join(0).split('').map(myParseInt)
alert(arr)
You might also want to have a look at the way you're creating this array: it will actually end up with 31 elements rather than 32, because join only inserts the separator between slots (32 slots give 31 "0" characters). If you just want an array of '0' characters, you can just use:
var arr = new Array(32).fill('0')
assuming you have a browser that supports ECMAScript 2015, which is Safari, Firefox and Chrome-desktop as of the time of this answer.
(1) A base of zero is the default case for handling things like hex prefixes.
A base of one makes no sense in a purely positional system (where each digit is multiplied by a power of the base and accumulated) since the only allowed digit there would be 0, so each place value would be that zero multiplied by 1ⁿ. In other words, the only number possible in a base-one system would be zero.
Bases from 2 to 36 are therefore more sensible.
I have the array below.
[0, 0, 0, 0, 0, 0, 0, 4116.38, 4120.87, 0, 0, 0]
How can I filter out all the 0 values and get the maximum value from the array?
You don't filter for the maximum; you just use const maxInYourArray = Math.max(...yourArray), relying on modern spread syntax to turn your array into distinct arguments, because the Math.max function takes an arbitrary number of values as function arguments and returns the largest of them.
The same applies to Math.min, but that will obviously give you 0, so if you want to ignore all those zeroes, filter them out first: const minInYourArray = Math.min(...yourArray.filter(v => v !== 0))
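Putting both together with the array from the question:
const arr = [0, 0, 0, 0, 0, 0, 0, 4116.38, 4120.87, 0, 0, 0];
const max = Math.max(...arr);                       // 4120.87
const min = Math.min(...arr.filter(v => v !== 0));  // 4116.38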
This code snippet is from w3schools JavaScript section. I am trying to figure out what
points.sort( function(a, b) {
return 0.5 - Math.random()
});
from the code below does. I understand it is trying to perform a random sort on the numbers stored in the array called points, but I don't understand how it is achieved with return 0.5 - Math.random().
I know that random returns a number between 0 and 1 (not including 1).
I supposed that 0.5 is then subtracted from that number, but I am not sure what happens from here. Could you kindly give me a step by step explanation?
<!DOCTYPE html>
<html>
<body>
<p>Click the button (again and again) to sort the array in random order.</p>
<button onclick="myFunction()">Try it</button>
<p id="demo"></p>
<script>
var points = [40, 100, 1, 5, 25, 10];
document.getElementById("demo").innerHTML = points;
function myFunction() {
points.sort(function(a, b){return 0.5 - Math.random()});
document.getElementById("demo").innerHTML = points;
}
</script>
</body>
</html>
Passing a callback to Array#sort() that returns a random value is a bad idea; the ECMAScript spec provides no guarantees about what will happen in that case. Per MDN:
compareFunction(a, b) must always return the same value when given a specific pair of elements a and b as its two arguments. If inconsistent results are returned then the sort order is undefined.
And per the spec itself:
If comparefn is not undefined and is not a consistent comparison function for the elements of this array (see below), the sort order is implementation-defined.
The W3Schools code from the question is demonstrably broken; it doesn't shuffle the array fairly, at least in Chrome. Let's try running it a million times and counting how often each value shows up in the final position in the array after "shuffling":
function shuffleAndTakeLastElement() {
var points = [40, 100, 1, 5, 25, 10];
return points.sort(function(a, b){return 0.5 - Math.random()})[5];
}
results = {};
for (var point of [40, 100, 1, 5, 25, 10]) results[point] = 0;
for (var i = 0; i < 1000000; i++) results[shuffleAndTakeLastElement()]++;
console.log(results);
I get the following counts in Chrome:
{1: 62622, 5: 125160, 10: 500667, 25: 249340, 40: 31057, 100: 31154}
Notice how the number 10 is around 16 times more likely to end up in the end position of the array than the numbers 40 or 100 are. This ain't a fair shuffle!
A few morals to draw from this story:
You should run a large number of tests and look at the results to help confirm whether any randomness algorithm is fair.
It's easy to accidentally write a biased algorithm even if you're starting with a fair source of randomness.
For shuffling arrays, use Lodash's _.shuffle method or one of the approaches from How to randomize (shuffle) a JavaScript array?.
Never trust anything you read on W3Schools, because they suck and are riddled with errors.
The sort callback is supposed to return a value <0, 0 or >0 to indicate whether the first value is lower than, equal to or higher than the second; sort uses that to sort the values. 0.5 - Math.random() returns a value between -0.5 and 0.5 randomly, satisfying the expected return values and resulting in an essentially randomly shuffled array.
Note that this shouldn't be the preferred method to shuffle; since the return value is random and not internally consistent (e.g. it tells sort that foo < bar, bar < baz and foo > baz), it may make Javascript's sort algorithm very inefficient. A Fisher-Yates shuffle for instance is pretty trivially implemented and likely more efficient.
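For reference, here is a minimal Fisher-Yates sketch (an illustration; the function name is my own):
function shuffle(arr) {
  // walk the array backwards, swapping each element
  // with a randomly chosen position at or before it
  for (let i = arr.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1)); // random index in [0, i]
    [arr[i], arr[j]] = [arr[j], arr[i]];
  }
  return arr;
}
shuffle([40, 100, 1, 5, 25, 10]); // e.g. [5, 40, 10, 1, 100, 25]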
I want to generate an array of length n whose elements are random integers between 2 and 32. I use the function below, but I find that 17 is always the first element of the returned array. What's more, when I change the sort callback to sort(() => Math.random() - 0.5), it works well.
So I am confused: what's the difference between Math.random() >= 0.5 and Math.random() - 0.5? And how does the difference affect the sort() function?
const fn = (n) => {
let arr = [];
for (let i = 2; i < 33; i++) {
arr.push(i); // arr becomes [2, 3, ..., 32]
}
return arr.sort(() => Math.random() >= 0.5).slice(0, n)
}
You're not using sort for its intended purpose, and the results are therefore unpredictable, weird, and can vary between browser implementations. If you wish to shuffle an array, use a proper shuffle algorithm such as Fisher-Yates instead.
The function passed into Array.sort() should accept two arguments x and y and return a negative value if x < y, zero if x = y, or a positive value if x > y.
In your first try, you use sort(() => Math.random() >= 0.5), which returns a boolean; this is going to be cast to a 0 or a 1. This then means that your function is telling the sorter that whatever first argument you pass in is always going to be equal to or greater than whatever second argument you pass in. It just happens that 17 is passed in as the second argument every time your function is called; you tell the browser that it is therefore less than or equal to every other element in the array, and thus it will get put at the beginning of the array.
Your second attempt, with sort(() => Math.random() - 0.5), returns with equal probability that the first number is greater than the second, or vice versa, which makes the shuffle work much better. However, because of the unreliability of the whole thing there's zero assurance that the shuffle will work in all browsers or be particularly random. Please use the "real" shuffle algorithm linked above.
Source: http://www.ecma-international.org/ecma-262/6.0/#sec-array.prototype.sort
For JS sort, the parameter is the compare function, which needs to return one of three kinds of value: negative, zero, or positive, for less than, equal, and greater than respectively.
If you use >=, it only returns a boolean.
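To see the asymmetry concretely:
Math.random() >= 0.5  // true or false, coerced to 1 or 0, never negative
Math.random() - 0.5   // a number in (-0.5, 0.5), negative about half the time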
I am an absolute beginner to programming and JavaScript and was watching one of Douglas Crockford's videos where he says the following:
Arrays unlike objects have a special length member
It is always 1 larger than the highest integer subscript
In this array
var a = [1,2,3,4,5,6,7]
a.length equals 7.
So I am not quite sure what "1 larger than the highest integer subscript" means. Is it just an outdated piece of info from an older version of JavaScript, or am I missing something?
length equals the number of elements in the array, and indexing starts from 0. Since you start counting at 0 instead of 1, the highest index is 1 less than the total number of elements (the length). So:
length = total elements
and
last index = length-1;
It just means that the length of an array is always* its largest index + 1.
Consider:
var a = [];
a[6] = 'foo';
console.log(a.length) //7
a[20] = 'bar';
console.log(a.length) //21
*actually not always; for example, when you use the Array constructor with a number argument, var a = new Array(5), the array is empty but its length is explicitly set to 5
Arrays are indexed from 0 (the first element), but length counts from 1 (one element): an array with no elements has length 0, not -1, and if there are 2 elements, the second element has index 1 while the length is 2.
Indexes and lengths are different. While a JavaScript array's indexes start at 0, the length is the true count of elements. This is helpful since in a slice, the end index is NOT inclusive.
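For example (a small illustration of the exclusive end index):
const a = ['a', 'b', 'c', 'd'];
a.length;       // 4: the true count of elements
a.slice(1, 3);  // ['b', 'c']: the end index 3 is not included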
I typed the following into the chrome console:
> Array.prototype.slice.call([1,2], 0)
[1, 2]
> Array.prototype.slice.call([1,2], 1)
[2]
Why does the first call not return 1?
Array.prototype.slice.call([1,2], 0) returns all elements of the array starting from index 0, collected into a new array.
Array.prototype.slice.call([1,2], 1) returns all elements starting from index 1, again in a new array; this time it finds only one element, the 2.
Slice with one argument returns the elements from the index you give to the end of the array. In your case 0 marks index position 0, so it returns the whole array.
Array.prototype.slice.call([1,2,3,4,5], 0)
//=> [1,2,3,4,5] because index to start is 0
Array.prototype.slice.call([1,2,3,4,5], 2)
//=> [3,4,5] because index to start is 2
The second argument is the end index: slicing stops just before it, so slice(start, end) extracts end - start elements:
Array.prototype.slice.call([1,2,3,4,5], 0, 1)
//=> [1] because it starts at index 0 and stops before index 1
Array.prototype.slice.call([1,2,3,4,5], 2, 4)
//=> [3,4] because it starts at index 2 and stops before index 4
Find more in the documentation here
As indicated here, the slice method accepts a start and optionally an end offset, not the index of the element to slice out. If no end offset is provided, it will slice to the end of the array.
In your first example, slice starts from element 0, while in the second example it starts from element 1. On both occasions it returns every element it can find at that index and after, since you have not specified an end offset.
First parameter to slice is the start index in the array (counting from zero).
The second parameter (which is not given in either of your examples) is the end: precisely, it is one past the last included index. It defaults to the end of the array, which is the behaviour you see.
Array.prototype.slice.call([1,2], 0, 1)
should give you [1]
slice.call(arguments, fromIndex);
The fromIndex means: slice the arguments list from that index to the end.
In your case, you slice the arguments from index 1; that's why you get [2].
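As an aside, this pattern is the classic pre-ES6 idiom for turning the array-like arguments object into a real array (a small illustration, not code from the question):
function tail() {
  // arguments has no array methods of its own; slice.call copies
  // its elements, starting at index 1, into a genuine array
  return Array.prototype.slice.call(arguments, 1);
}
tail(1, 2, 3); // [2, 3]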