I just wanted to generate a random integer in the range [-1, 1], that is, -1, 0 or 1. I think this code nails it:
Math.round(Math.random()*2-1)
But I'm getting one more value besides -1, 0 and 1, and I'm not sure what it's supposed to be:
Now, I don't think this is an implementation error, but if it is: I'm using Firefox 41, and the same thing happened in Google Chrome 39. When I log -0 in the console it appears as -0, but -0 converted to a String appears as "0". What is it? Can I use it for something, like some cool programming hacks or workarounds? Can it cause errors if I don't just ignore it?
Two Zeros
Because JavaScript’s numbers keep magnitude and sign separate, every nonnegative number, including 0, has a negative counterpart.
The rationale for this is that whenever you represent a number digitally, it can become so small that it is indistinguishable from 0, because the encoding is not precise enough to represent the difference. Then a signed zero allows you to record “from which direction” you approached zero; that is, what sign the number had before it was considered zero.
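You can watch this rationale in action with underflow. A quick sketch (runnable in any modern JS engine):

```javascript
// 1e-400 is far below Number.MIN_VALUE (about 5e-324), so it underflows
// to zero -- but the sign of the original quantity survives.
console.log(Object.is(1e-400, +0));  // true
console.log(Object.is(-1e-400, -0)); // true
console.log(1 / -1e-400);            // -Infinity: the "direction" was recorded
```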
from here
More technical stuff if you are interested.
The implementation of -0 and +0 was introduced in 1985 by the IEEE as part of the IEEE 754 standard. The standard addressed many problems found in the diverse floating-point implementations of the time that made them difficult to use reliably and portably.
The standard defines
arithmetic formats: sets of binary and decimal floating-point data, which consist of finite numbers (including signed zeros and subnormal numbers), infinities, and special "not a number" values (NaNs)
interchange formats: encodings (bit strings) that may be used to exchange floating-point data in an efficient and compact form
rounding rules: properties to be satisfied when rounding numbers during arithmetic and conversions
operations: arithmetic and other operations on arithmetic formats
exception handling: indications of exceptional conditions (such as division by zero, overflow, etc.)
ECMAScript has two zero numbers: +0 and -0.
They are considered equal by most comparison algorithms:
+0 == -0 // true (Equality comparison)
+0 === -0 // true (StrictEquality comparison)
[+0].includes(-0) // true (SameValueZero comparison)
However, they are different number values:
Object.is(+0, -0) // false (SameValue comparison)
Usually, both zeros behave identically in mathematical calculations. However,
1 / +0 === +Infinity
1 / -0 === -Infinity
In this case, you get -0 because Math.round is defined as follows:
If x is less than 0 but greater than or equal to -0.5, the result is −0.
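For example (a quick check you can run in the console):

```javascript
const r = Math.round(-0.4);    // -0.4 is in [-0.5, 0), so the rule above applies
console.log(Object.is(r, -0)); // true
console.log(String(r));        // "0" -- ToString hides the sign
```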
In general, if you have an integer between −2^31 and 2^31−1 and you want to convert −0 to +0 while leaving every other value unchanged, you can use bitwise OR:
-1 | 0 // -1
-0 | 0 // +0
+0 | 0 // +0
+1 | 0 // +1
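Applied to the expression from the question (safe here because the result always fits in 32 bits):

```javascript
// | 0 normalizes a possible -0 from Math.round into +0.
const n = Math.round(Math.random() * 2 - 1) | 0;
console.log([-1, 0, 1].includes(n)); // true
console.log(Object.is(n, -0));       // false, even when Math.round produced -0
```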
However, in your case you could just take the -1 outside of Math.round:
Math.round(Math.random()*2) - 1
But note that using Math.round produces a non-uniform probability distribution. Consider using
Math.floor(Math.random()*3) - 1
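To see why, here is a sketch that sweeps an evenly spaced grid over [0, 1) as a stand-in for Math.random() (an illustration of the mapping, not real randomness) and tallies the results:

```javascript
// Count how each formula maps points of [0, 1) onto -1/0/1.
function tally(fn, steps = 1000) {
  const counts = { '-1': 0, '0': 0, '1': 0 };
  for (let i = 0; i < steps; i++) {
    counts[fn(i / steps)]++;
  }
  return counts;
}
console.log(tally(r => Math.round(r * 2) - 1)); // { '-1': 250, '0': 500, '1': 250 } -- 0 is twice as likely
console.log(tally(r => Math.floor(r * 3) - 1)); // { '-1': 334, '0': 333, '1': 333 } -- uniform
```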
The function Math.round works as specified in the standard:
If x is less than 0 but greater than or equal to -0.5, the result is −0.
Math.random returns a number greater than or equal to 0 and less than 1, so Math.random() * 2 - 1 lands in [−0.5, 0) about a quarter of the time, triggering that rule.
Reading through the ECMAScript 5.1 specification, +0 and -0 are distinguished.
Why then does +0 === -0 evaluate to true?
JavaScript uses the IEEE 754 standard to represent numbers. From Wikipedia:
Signed zero is zero with an associated sign. In ordinary arithmetic, −0 = +0 = 0. However, in computing, some number representations allow for the existence of two zeros, often denoted by −0 (negative zero) and +0 (positive zero). This occurs in some signed number representations for integers, and in most floating point number representations. The number 0 is usually encoded as +0, but can be represented by either +0 or −0.
The IEEE 754 standard for floating point arithmetic (presently used by most computers and programming languages that support floating point numbers) requires both +0 and −0. The zeroes can be considered as a variant of the extended real number line such that 1/−0 = −∞ and 1/+0 = +∞, division by zero is only undefined for ±0/±0 and ±∞/±∞.
The article contains further information about the different representations.
So this is the reason why, technically, both zeros have to be distinguished.
However, +0 === -0 evaluates to true. Why is that (...) ?
This behaviour is explicitly defined in section 11.9.6, the Strict Equality Comparison Algorithm (emphasis partly mine):
The comparison x === y, where x and y are values, produces true or false. Such a comparison is performed as follows:
(...)
If Type(x) is Number, then
If x is NaN, return false.
If y is NaN, return false.
If x is the same Number value as y, return true.
If x is +0 and y is −0, return true.
If x is −0 and y is +0, return true.
Return false.
(...)
(The same holds for +0 == -0 btw.)
It seems logical to treat +0 and -0 as equal. Otherwise we would have to take this into account in our code, and I, personally, don't want to do that ;)
Note:
ES2015 introduces a new comparison method, Object.is. Object.is explicitly distinguishes between -0 and +0:
Object.is(-0, +0); // false
I'll add this as an answer because I overlooked #user113716's comment.
You can test for -0 by doing this:
function isMinusZero(value) {
  return 1 / value === -Infinity;
}
isMinusZero(0); // false
isMinusZero(-0); // true
I just came across an example where +0 and -0 behave very differently indeed:
Math.atan2(0, 0); //returns 0
Math.atan2(0, -0); //returns Pi
Be careful: even when using Math.round on a negative number like -0.0001, the result is actually -0 and can screw up subsequent calculations, as shown above.
A quick and dirty way to fix this is to do something like:
if (x==0) x=0;
or just:
x+=0;
This converts the number to +0 in case it was -0.
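A quick demonstration of the x += 0 trick:

```javascript
let x = Math.round(-0.25);     // -0, per the Math.round rule
console.log(Object.is(x, -0)); // true
x += 0;                        // IEEE 754: -0 + +0 is +0 under round-to-nearest
console.log(Object.is(x, 0));  // true
```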
2021's answer
Are +0 and -0 the same?
Short answer: it depends on which comparison operation you use.
Long answer:
Basically, we've had four comparison types until now:
‘loose’ equality
console.log(+0 == -0); // true
‘strict’ equality
console.log(+0 === -0); // true
‘Same-value’ equality (ES2015's Object.is)
console.log(Object.is(+0, -0)); // false
‘Same-value-zero’ equality (ES2016)
console.log([+0].includes(-0)); // true
As a result, only Object.is(+0, -0) tells them apart.
const x = +0, y = -0;
console.log(x == y); // true -> using ‘loose’ equality
console.log(x === y); // true -> using ‘strict’ equality
console.log([x].indexOf(y)); // 0 (true) -> using ‘strict’ equality
console.log(Object.is(x, y)); // false -> using ‘Same-value’ equality
console.log([x].includes(y)); // true -> using ‘Same-value-zero’ equality
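One more data point: Map and Set membership also uses SameValueZero, and (per the spec) Set.prototype.add normalizes -0 to +0 before storing it:

```javascript
const s = new Set([-0]);
console.log(s.has(+0));                // true -- SameValueZero membership
console.log(Object.is([...s][0], +0)); // true -- the stored value is +0
```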
In the IEEE 754 standard used to represent the Number type in JavaScript, the sign is represented by a bit (a 1 indicates a negative number).
As a result, there exists both a negative and a positive value for each representable number, including 0.
This is why both -0 and +0 exist.
Answering the original title Are +0 and -0 the same?:
brainslugs83 (in the comments on Spudley's answer) pointed out an important case in which +0 and -0 in JS are not the same; implemented as a function:
var sign = function(x) {
  return 1 / x === 1 / Math.abs(x);
}
Unlike the standard Math.sign, this distinguishes the signs of +0 and -0: it returns true for positive values (including +0) and false for negative ones (including -0).
We can use Object.is to distinguish +0 and -0, and one more thing: under Object.is, NaN equals NaN.
Object.is(+0,-0) //false
Object.is(NaN,NaN) //true
I'd blame it on the Strict Equality Comparison method ( '===' ).
Look at section 4d
see 7.2.13 Strict Equality Comparison on the specification
If you need sign function that supports -0 and +0:
var sign = x => 1/x > 0 ? +1 : -1;
It acts like Math.sign, except that sign(0) returns 1 and sign(-0) returns -1.
There are two possible values (bit representations) for 0. This is not unique: it occurs especially in floating-point numbers, because they are stored as a kind of formula (sign, significand, and exponent).
Integers can be stored in different ways too. You can have a numeric value with an additional sign bit, so in a 16-bit space you can store a 15-bit integer value and a sign bit. In this representation, the values 8000 (hex) and 0000 are both 0, but one of them is +0 and the other is -0.
This could be avoided by offsetting the negative values by 1, so they ranged from -1 down to -2^15, but that would be inconvenient.
A more common approach is to store integers in two's complement, but apparently ECMAScript has chosen not to. In this method, positive numbers range from 0000 to 7FFF, and negative numbers run from FFFF (-1) down to 8000 (-32768).
Of course, the same rules apply to larger integers too, but I don't want my F to wear out. ;)
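You can observe both storage schemes from JavaScript via typed arrays (a sketch: Int8Array uses two's-complement integer storage, Float64Array uses IEEE 754):

```javascript
const i8 = new Int8Array(1);
i8[0] = -0;                        // integer storage has a single zero
console.log(Object.is(i8[0], -0)); // false -- it came back as plain +0

const f64 = new Float64Array(1);
f64[0] = -0;                        // float storage keeps the sign bit
console.log(Object.is(f64[0], -0)); // true
```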
Wikipedia has a good article to explain this phenomenon: http://en.wikipedia.org/wiki/Signed_zero
In brief, both +0 and -0 are defined in the IEEE floating-point specification. Each is technically distinct from an unsigned 0, which is an integer concept, but in practice they all evaluate to zero, so the distinction can be ignored for most practical purposes.
I am new to JavaScript programming and referring to Eloquent JavaScript, 3rd Edition by Marijn Haverbeke.
There is a statement in this book which reads like,
"JavaScript uses a fixed number of bits, 64 of them, to store a single number value. There are only so many patterns you can make with 64 bits, which means that the number of different numbers that can be represented is limited. With N decimal digits, you can represent 10^N numbers. Similarly, given 64 binary digits, you can represent 2^64 different numbers, which is about 18 Quintilian (an 18 with 18 zeros after it). That’s a lot."
Can someone help me with the actual meaning of this statement. I am confused as to how the values more than 2^64 are stored in the computer memory.
Can someone help me with the actual meaning of this statement. I am confused as to how the values more than 2^64 are stored in the computer memory.
Your question is related to more general concepts in Computer Science; for this question, JavaScript sits at a higher level.
Please understand the basic concepts of memory and storage first:
https://study.com/academy/lesson/how-do-computers-store-data-memory-function.html
https://www.britannica.com/technology/computer-memory
https://www.reddit.com/r/askscience/comments/2kuu9e/how_do_computers_handle_extremely_large_numbers/
How do computers evaluate huge numbers?
Also for Javascript please see this ECMAScript section
Ref: https://www.ecma-international.org/ecma-262/5.1/#sec-8.5
The Number type has exactly 18437736874454810627 (that is, 2^64−2^53+3) values, representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic, except that the 9007199254740990 (that is, 2^53−2) distinct “Not-a-Number” values of the IEEE Standard are represented in ECMAScript as a single special NaN value. (Note that the NaN value is produced by the program expression NaN.) In some implementations, external code might be able to detect a difference between various Not-a-Number values, but such behaviour is implementation-dependent; to ECMAScript code, all NaN values are indistinguishable from each other.
There are two other special values, called positive Infinity and negative Infinity. For brevity, these values are also referred to for expository purposes by the symbols +∞ and −∞, respectively. (Note that these two infinite Number values are produced by the program expressions +Infinity (or simply Infinity) and -Infinity.)
The other 18437736874454810624 (that is, 2^64−2^53) values are called the finite numbers. Half of these are positive numbers and half are negative numbers; for every finite positive Number value there is a corresponding negative value having the same magnitude.
Note that there is both a positive zero and a negative zero. For brevity, these values are also referred to for expository purposes by the symbols +0 and −0, respectively. (Note that these two different zero Number values are produced by the program expressions +0 (or simply 0) and -0.)
The 18437736874454810622 (that is, 2^64−2^53−2) finite nonzero values are of two kinds:
18428729675200069632 (that is, 2^64−2^54) of them are normalised, having the form
s × m × 2^e
where s is +1 or −1, m is a positive integer less than 2^53 but not less than 2^52, and e is an integer ranging from −1074 to 971, inclusive.
The remaining 9007199254740990 (that is, 2^53−2) values are denormalised, having the form
s × m × 2^e
where s is +1 or −1, m is a positive integer less than 2^52, and e is −1074.
Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type (indeed, the integer 0 has two representations, +0 and -0).
A finite number has an odd significand if it is nonzero and the integer m used to express it (in one of the two forms shown above) is odd. Otherwise, it has an even significand.
In this specification, the phrase “the Number value for x” where x represents an exact nonzero real mathematical quantity (which might even be an irrational number such as π) means a Number value chosen in the following manner. Consider the set of all finite values of the Number type, with −0 removed and with two additional values added to it that are not representable in the Number type, namely 2^1024 (which is +1 × 2^53 × 2^971) and −2^1024 (which is −1 × 2^53 × 2^971). Choose the member of this set that is closest in value to x. If two values of the set are equally close, then the one with an even significand is chosen; for this purpose, the two extra values 2^1024 and −2^1024 are considered to have even significands. Finally, if 2^1024 was chosen, replace it with +∞; if −2^1024 was chosen, replace it with −∞; if +0 was chosen, replace it with −0 if and only if x is less than zero; any other chosen value is used unchanged. The result is the Number value for x. (This procedure corresponds exactly to the behaviour of the IEEE 754 “round to nearest” mode.)
Some ECMAScript operators deal only with integers in the range −2^31 through 2^31−1, inclusive, or in the range 0 through 2^32−1, inclusive. These operators accept any value of the Number type but first convert each such value to one of 2^32 integer values. See the descriptions of the ToInt32 and ToUint32 operators in 9.5 and 9.6, respectively.
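The ToInt32 conversion mentioned at the end can be observed with the bitwise operators, which apply it to their operands:

```javascript
console.log(2 ** 31 | 0);           // -2147483648 -- wraps modulo 2^32 into the signed range
console.log((2 ** 32 + 5) | 0);     // 5
console.log(Object.is(-0 | 0, +0)); // true -- ToInt32 also maps -0 to +0
```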
You have probably learned about big numbers in mathematics.
For example, the Avogadro constant is 6.022 × 10^23.
Computers can also store numbers in this format.
Except for two things:
They store it as a binary number
They would store Avogadro's number as 0.6022 × 10^24; more precisely, as two parts:
the precision (the significand): a value between 0 and 1 (0.6022); this usually takes 2-8 bytes
the magnitude of the number (the exponent, 24); this usually takes only 1 byte, because 2^256 is already a very big number.
As you can see this method can be used to store an inexact value of a very big/small number.
An example of inaccuracy: 0.1 + 0.2 == 0.30000000000000004
For performance reasons, most engines use a plain integer format internally when it makes no difference to the results.
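If you're curious, you can peek at the raw 64 bits of a Number (1 sign bit, 11 exponent bits, 52 significand bits) with a DataView; a small sketch:

```javascript
const view = new DataView(new ArrayBuffer(8));
view.setFloat64(0, -0);                      // big-endian by default
console.log(view.getUint32(0).toString(16)); // "80000000" -- only the sign bit is set
view.setFloat64(0, 1);
console.log(view.getUint32(0).toString(16)); // "3ff00000" -- biased exponent 1023, empty significand
```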
Division by zero is not an error in JavaScript: it simply returns infinity or negative infinity. There is one exception, however: zero divided by zero does not have a well-defined value, and the result of this operation is the special not-a-number value, printed as NaN. In JavaScript, NaN also arises if you attempt to divide infinity by infinity, take the square root of a negative number, or use arithmetic operators with non-numeric operands that cannot be converted to numbers.
For example:
1. 0 === 0 returns true & 1 === 1 returns true
2. 0/0 returns NaN & 1/1 returns 1
Zero is a number and one is also a number, so why the difference?
I want an explanation: why exactly does this happen in JavaScript?
Because Javascript follows the IEEE 754 standard, which defines floating-point arithmetic and specifies this behavior.
Why does the standard specify NaN as the result of these operations? Because there is no sensible value to give them, so instead they have a well-defined insensible value.
Dividing any nonzero number by 0 is Infinity. That's a correct answer (from a computational point of view, not necessarily a mathematical one). Imagine doing the division on paper: you'll have an infinite number of operations, because you always keep subtracting 0.
The reason most things don't allow dividing by 0 is that they have no way to handle an infinite operation - you wouldn't want your machine crashing every time you tried dividing by 0 on your calculator.
There is a good video showing the above: an old mechanical calculator that only knows the rules of addition/subtraction (which is all multiplication/division really is). It never stops running, because it can always keep subtracting 0.
JavaScript tries to be nice to programmers who aren't mathematics experts.
Read more about the IEEE 754 design rational.
Ironically, they are themselves of type number:
typeof NaN;//'number'
typeof Infinity;//'number'
To answer your key question, this is how javascript works.
See the specification here
Applying the / Operator
The / operator performs division, producing the quotient of its operands. The left operand is the dividend and the right operand is the divisor. ECMAScript does not perform integer division. The operands and result of all division operations are double-precision floating-point numbers. The result of division is determined by the specification of IEEE 754 arithmetic:
If either operand is NaN, the result is NaN.
The sign of the result is positive if both operands have the same sign, negative if the operands have different signs.
Division of an infinity by an infinity results in NaN.
Division of an infinity by a zero results in an infinity. The sign is determined by the rule already stated above.
Division of an infinity by a nonzero finite value results in a signed infinity. The sign is determined by the rule already stated above.
Division of a finite value by an infinity results in zero. The sign is determined by the rule already stated above.
Division of a zero by a zero results in NaN; division of zero by any other finite value results in zero, with the sign determined by the rule already stated above.
Division of a nonzero finite value by a zero results in a signed infinity. The sign is determined by the rule already stated above.
In the remaining cases, where neither an infinity, nor a zero, nor NaN is involved, the quotient is computed and rounded to the nearest representable value using IEEE 754 round-to-nearest mode. If the magnitude is too large to represent, the operation overflows; the result is then an infinity of appropriate sign. If the magnitude is too small to represent, the operation underflows and the result is a zero of the appropriate sign. The ECMAScript language requires support of gradual underflow as defined by IEEE 754.
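Each of those rules can be checked directly in the console:

```javascript
console.log(1 / 0);               // Infinity
console.log(-1 / 0);              // -Infinity
console.log(0 / 0);               // NaN
console.log(Infinity / Infinity); // NaN
console.log(5 / Infinity);        // 0
```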
In mathematics, zero, symbolized by the numeric character 0, is both:
In a positional number system, a place indicator meaning "no units of this multiple." For example, in the decimal number 1,041, there is one unit in the thousands position, no units in the hundreds position, four units in the tens position, and one unit in the ones position.
An independent value midway between +1 and -1.
In writing outside of mathematics, depending on the context, various denotative or connotative meanings for zero include "total failure," "absence," "nil," and "absolutely nothing." ("Nothing" is an even more abstract concept than "zero" and their meanings sometimes intersect.)
Brahmagupta developed the concept of the zero as an actual independent number, not just a place-holder, and wrote rules for adding and subtracting zero from other numbers. The Indian writings were passed on to al-Khwarizmi (from whose name we derive the term algorithm ) and thence to Leonardo Fibonacci and others who continued to develop the concept and the number.
Follow the link here
While testing, I noticed something strange with Math.round().
When I was rounding negative numbers close to 0 (-0.1, -0.01, etc), the return value in my console would be -0 rather than 0. Even stranger, if I were to set that same value to an element's text, the element would display 0, instead of -0.
DEMO:
http://jsfiddle.net/dirtyd77/qcug9/
Can anyone explain why this occurs? Any help would be greatly appreciated!
Also, I am using Chrome Version 33.0.1750.117.
The kind of numbers JavaScript uses (IEEE-754 double-precision floating point) have the concept of both "positive" and "negative" zero.
The next highest integer to -0.1 is -0. "Highest" in this case means "toward positive infinity."
From the specification, §15.8.2.15 "Math.round":
If x is less than 0 but greater than or equal to -0.5, the result is −0.
-0 and +0 are both rendered as just 0 when converted to string. For instance:
console.log(-0) // 0 or -0 depending on what console you use
console.log(String(-0)) // 0 (always)
The IEEE double precision format allows for negative zero (distinct from positive 0). The values are considered almost equal (e.g. they're equal even if you compare with === and -0 is not considered "less than" 0) but for example 1/0 is infinity while 1/-0 is -infinity.
IMO, you shouldn't try to read too much into this distinction.
JavaScript uses IEEE-754 to represent numbers and that spec considers 0 and -0 to be technically different. Presumably, negative zero is the integer nearest to those values, as defined by Math.round(x).
Note that zero is both loosely and strictly equal to negative zero (0==-0 and 0===-0).
You can work around this by doing Math.abs(Math.round(x)) if you don't want to see -0.
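For example:

```javascript
const rounded = Math.round(-0.1);
console.log(Object.is(rounded, -0));          // true
console.log(Object.is(Math.abs(rounded), 0)); // true -- Math.abs(-0) is +0
```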
From the examples I've seen, the functionality of ~~ and Math.floor looks the same: both round a number downward (am I correct?)
Also I should mention that according to this test ~~ is faster than Math.floor: jsperf.com/math-round-vs
So I want to know, is there any difference between ~~ and Math.floor?
Yes, bitwise operators generally don’t play well with negative numbers. For example:
~~-6.8 == -6 // doesn’t round down, simply removes the decimals
Math.floor(-6.8) == -7
And you also get 0 instead of NaN, for example:
~~'a' // 0
Math.floor('a') // NaN
In addition to David's answer:
One of the things I have noticed about bitwise operations in JavaScript is that they can be convenient for smaller values but don't always work for larger ones. The reason is that bitwise operators only work fully for operands that can be expressed in a 32-bit signed format. In other words, bitwise operations will only produce numbers in the range of -2147483648 (−2^31) to 2147483647 (2^31 − 1). In addition, if one of the operands is outside that range, only the last 32 bits of the number will be used.
http://cwestblog.com/2011/07/27/limits-on-bitwise-operators-in-javascript/
This limitation can easily be found when working with Date, assume you are rounding a milliseconds value:
Math.floor(1559125440000.6) // 1559125440000
~~1559125440000.6 // 52311552
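Worth noting: Math.trunc (added in ES2015) gives ~~'s "drop the decimals" behavior without the 32-bit limit:

```javascript
console.log(Math.trunc(-6.8));            // -6
console.log(Math.trunc(1559125440000.6)); // 1559125440000
console.log(Math.trunc('a'));             // NaN -- like Math.floor, unlike ~~
```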