Trying to get my head around some JavaScript code I am looking at. I am seeing things like this:
var myVariable = "X";
var result = myVariable * 6;
Coming from a C# background, this is new to me. Could somebody give me a quick primer on what is going on here? I am guessing that result would be equal to that letter's position in the alphabet multiplied by 6; would I be correct?
I am guessing that result would be equal to that letter's position in the alphabet multiplied by 6; would I be correct?
No. JS is weakly typed, and values are implicitly typecast to match the operator (a multiplicative one here). In this case, the string "X" would be converted to a number, leading to NaN as it's not a valid numeric literal. The result would then be NaN as well.
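A minimal sketch of that coercion, using the values from the question:
var myVariable = "X";
console.log(myVariable * 6); // NaN – Number("X") is NaN
console.log("5" * 6);        // 30 – a numeric string does convert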
To get the position in the alphabet, you'd use the parseInt function with a non-decimal base (parseInt("X", 36) - 10) or the charCodeAt string method ("X".charCodeAt(0) - 65).
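For example (both give the 0-based position of the letter, for uppercase A-Z):
console.log(parseInt("X", 36) - 10);  // 23
console.log("X".charCodeAt(0) - 65);  // 23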
I think the easiest solution to have a string s repeated n times is:
Array(n+1).join(s)
Reading your question again: To multiply the charcode of the first letter of s by n:
s.charCodeAt(0) * n
To get the character that corresponds to that multiplied charcode:
String.fromCharCode(s.charCodeAt(0) * n)
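Putting those together, a small sketch with s = "X" and n = 6 (values assumed from the question):
var s = "X", n = 6;
console.log(Array(n + 1).join(s));                     // "XXXXXX"
console.log(s.charCodeAt(0) * n);                      // 528
console.log(String.fromCharCode(s.charCodeAt(0) * n)); // the character with code 528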
The result would be NaN because "X" is not a number; multiplying a non-numeric string by a number will always return NaN.
More information: http://es5.github.com/
The * operator performs multiplication, producing the product of its operands. Multiplication is commutative. Multiplication is not always associative in ECMAScript, because of finite precision.
The result of a floating-point multiplication is governed by the rules of IEEE 754 binary double-precision arithmetic:
If either operand is NaN, the result is NaN.
If you want to get the letter's position in the English alphabet, try this:
myVariable.toUpperCase().charCodeAt(0) - 65;
I want to define a BigInt number in JavaScript. But when I assign it, the wrong number is stored. In fact 1 is added to the number when storing.
let num = BigInt(0b0000111111111111111111111111111111111111111111111111111111111111)
console.log(num) // Output: 1152921504606846976n
console.log(num.toString(2)) // Output: 1000000000000000000000000000000000000000000000000000000000000
So the number stored is 1152921504606846976, but it should be 1152921504606846975. Why is that?
Converting a Number to a BigInt can't create bits that weren't there before.
0b1 (just like 1) is a Number literal, so it creates a Number.
0b1n (just like 1n) is a BigInt literal, so it creates a BigInt.
By writing BigInt(0b1), you're first creating a Number and then converting that to a BigInt. As long as the value is 1, that works just fine; once the value exceeds what you can losslessly store in a Number [1], you'll see that the value of the final BigInt won't match the literal you wrote down. Whether you use binary (0b...), decimal, or hex (0x...) literals doesn't change any of that.
(And just to be extra clear: there's no reason to write BigInt(123n), just like you wouldn't write Number(123). 123n already is a BigInt, so there's nothing to convert.)
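So the fix for the snippet in the question is to write the literal as a BigInt directly, with the trailing n:
let num = 0b0000111111111111111111111111111111111111111111111111111111111111n;
console.log(num);             // 1152921504606846975n – every bit preserved
console.log(num.toString(2)); // sixty 1 bits (leading zeros are dropped)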
A simple non-BigInt way to illustrate what's happening is to enter 12345678901234567890 into your favorite browser's DevTools console: you can specify Number literals of any length you want, but they'll be parsed into an IEEE754 64-bit "double", which has limited precision. Any extra digits in the literal simply can't be stored, though of course each digit's presence affects the magnitude of the number.
[1] Side note: this condition is more subtle than just saying that Number.MAX_SAFE_INTEGER is the threshold, though that constant is related to the situation: any integral number below MAX_SAFE_INTEGER can be stored losslessly, but there are plenty of numbers above MAX_SAFE_INTEGER that can also be represented exactly. Random example: 1e20.
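A short illustration of that footnote:
console.log(Number.MAX_SAFE_INTEGER);        // 9007199254740991
console.log(Number.isSafeInteger(1e20));     // false – above the "safe" range
console.log(1e20 === 100000000000000000000); // true – yet exactly representable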
How to get a big power of 2 in decimal, or
how to convert a big exponential value into a decimal value?
I want 2 to the power of 128 in decimal, not exponential.
What I did till now:
toFixed(+exponent)
which again gave me the same value.
var num = Math.pow(2, 128);
Actual result = 3.402823669209385e+38
Expected some decimal value, not an exponential value.
You could use BigInt, if implemented.
var num = BigInt(2) ** BigInt(128);
console.log(num.toString());
console.log(BigInt(2 ** 128).toString());
3.402823669209385e+38 is a decimal number (in string form, because it's been output as a string). It's in scientific notation, specifically E-notation. It's the number 3.402823669209385 times 10^38 (a 1 followed by 38 zeros).
If you want a string that isn't in scientific notation, you can use Intl.NumberFormat for that:
console.log(new Intl.NumberFormat().format(Math.pow(2, 128)));
Note: Although that number is well outside the range that JavaScript's number type can represent with precision in general (any integer above Number.MAX_SAFE_INTEGER [9,007,199,254,740,991] may be the result of rounding), it's one of the values that is held precisely, even at that magnitude, because it's a power of 2. But operations on it that would have a true mathematical result that wasn't a power of 2 would almost certainly get rounded.
I think the default power function won't be able to give the results you want.
You can refer to the article below to understand how to create a power function for big numbers yourself.
The demo code is not JS but still quite understandable:
Writing power function for large numbers
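For reference, a minimal JS sketch of the same idea (bigPow is a hypothetical name, not the article's code): keep the number as an array of decimal digits and multiply it in place, carrying as you go.
function bigPow(base, exp) {
  var digits = [1]; // least-significant digit first
  for (var e = 0; e < exp; e++) {
    var carry = 0;
    for (var i = 0; i < digits.length; i++) {
      var product = digits[i] * base + carry;
      digits[i] = product % 10;
      carry = Math.floor(product / 10);
    }
    while (carry > 0) {        // append any remaining carry digits
      digits.push(carry % 10);
      carry = Math.floor(carry / 10);
    }
  }
  return digits.reverse().join("");
}
console.log(bigPow(2, 128)); // "340282366920938463463374607431768211456"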
The problem: JavaScript returns 20160107190308484 for parseInt("20160107190308485")
Not the question: I do not need a workaround; I got one.
Not the question II: I do not need an explanation why this can happen, and how numbers are stored and so on.
How are numbers as input for parseInt defined? Why do I get a wrong result, not NaN?
I only find in the documentation how parseInt handles literals and so on, that only the first number is interpreted and ...
As W3C or MDN put it:
Note: Only the first number in the string is returned!
Note: Leading and trailing spaces are allowed.
Note: If the first character cannot be converted to a number, parseInt() returns NaN.
I can't find anything that says "123456789202232334234212312311" is not a "number". And nothing like "a number must be in the range of -BIGINT .. +BIGINT" or so. And I could not find a hint that errors are thrown.
I just get 20160107190308484 for "20160107190308485" (two browsers tested) and other pairs like:
20160107155044520, 20160107155044522
20160107002720970, 20160107002720967
20160107000953860, 20160107000953859
For me this feels like a hole in ECMA.
Is there a number defined that is allowed as input for parseInt?
For those who are interested in "how come?":
I stumbled over that problem with a small piece of logic that converts classes and ids into an object holding the information, like
class="book240 lang-en" id="book240page2212"
converting to an object like:
{
  book: 240,
  page: 2212,
  lang: "en"
}
Notice that the ints are numbers, not strings.
Therefore I have a loop that makes that generic:
for n in some_regexp_check_names
  # m holds a RegExp result; m[1] is the part of interest:
  nr = parseInt(m[1])    # Make it a number
  # --------- Here the funny fault
  if nr > 0              # Check that it's a number > 0 (all my ids are > 0)
    out_obj[n] = nr      # Take the number
  else
    out_obj[n] = m[1]    # Take the string as is
All this is nice and works with "real strings" and "real (int) ids". Then I had a situation where semi-UUIDs were created dynamically out of datetimes: "temp-2016-01-07-19-03-08.485". Still all fine. But then I had the idea to remove all constant chars from this (i.e. temp- and .) and BANG, because the string "20160107190308485" gives 20160107190308484 as parseInt.
And it took me only ~3 hours to find that error.
How are numbers as input for parseInt defined?
I think it's pretty clearly defined in the spec (§18.2.5):
Let mathInt be the mathematical integer value that is represented by Z
in radix-R notation, using the letters A-Z and a-z for digits with
values 10 through 35. (However, if R is 10 and Z contains more than 20
significant digits, every significant digit after the 20th may be
replaced by a 0 digit, at the option of the implementation; and if R
is not 2, 4, 8, 10, 16, or 32, then mathInt may be an
implementation-dependent approximation to the mathematical integer
value that is represented by Z in radix-R notation.)
Let number be the Number value for mathInt.
For that last step, the section for the Number type (§6.1.6) specifies:
In this specification, the phrase “the Number value for x” where x represents an exact nonzero real mathematical quantity (which might even be an irrational number such as π) means a Number value chosen in the following manner. Consider the set of all finite values of the Number type, with −0 removed and with two additional values added to it that are not representable in the Number type, namely 2^1024 (which is +1 × 2^53 × 2^971) and −2^1024 (which is −1 × 2^53 × 2^971). Choose the member of this set that is closest in value to x. If two values of the set are equally close, then the one with an even significand is chosen; for this purpose, the two extra values 2^1024 and −2^1024 are considered to have even significands. Finally, if 2^1024 was chosen, replace it with +∞; if −2^1024 was chosen, replace it with −∞; if +0 was chosen, replace it with −0 if and only if x is less than zero; any other chosen value is used unchanged. The result is the Number value for x. (This procedure corresponds exactly to the behaviour of the IEEE 754-2008 “round to nearest, ties to even” mode.)
Why do I get a wrong result, not NaN?
You get a rounded result (with the maximum precision available in that range), and there's nothing wrong with that: your input is a valid number, not NaN.
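The same rounding happens long before parseInt is involved; a plain literal or a Number() call rounds identically (a quick check):
console.log(20160107190308485);                 // 20160107190308484
console.log(Number("20160107190308485"));       // 20160107190308484
console.log(parseInt("20160107190308485", 10)); // 20160107190308484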
Although Bergi's answer is correct, I want to tell what my real mistake was:
Don't think of parseInt as a function that parses into an integer like the 16-, 32- or 64-bit values known from C. It parses the number as long as there are valid digits, so if the number is too big (does not fit exactly into, e.g., a 64-bit float), you don't get an error; you just get a rounded value, as when working with floats. If you want to be on the safe side: check the result against Number.MAX_SAFE_INTEGER.
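A sketch of that safe-side check (string value taken from the question above; the round-trip comparison is an extra assumption that rejects leading zeros as well):
var s = "20160107190308485";
var nr = parseInt(s, 10);
if (Number.isSafeInteger(nr) && String(nr) === s) {
  // safe to treat as a number
} else {
  // keep the original string; the parse may have rounded
}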
The maximum exactly-representable contiguous integer value in JavaScript is 2^53 == 9 007 199 254 740 992. This is because Numbers are stored as floating point with a 52-bit mantissa.
Read more: http://web.archive.org/web/20161117214618/http://bjola.ca/coding/largest-integer-in-javascript/
I have just observed that the parseInt function doesn't take care of the decimals in the case of numbers in exponential notation (numbers containing the e character).
Let's take an example: -3.67394039744206e-15
> parseInt(-3.67394039744206e-15)
-3
> -3.67394039744206e-15.toFixed(19)
-3.6739e-15
> -3.67394039744206e-15.toFixed(2)
-0
> Math.round(-3.67394039744206e-15)
0
I expected that parseInt would also return 0. What's going on at a lower level? Why does parseInt return -3 in this case (some snippets from the source code would be appreciated)?
In this example I'm using node v0.12.1, but I expect same to happen in browser and other JavaScript engines.
I think the reason is that parseInt converts the passed value to a string by calling ToString, which will return "-3.67394039744206e-15", then parses that, so it will consider -3 and return it.
The MDN documentation:
The parseInt function converts its first argument to a string, parses
it, and returns an integer or NaN
parseInt(-3.67394039744206e-15) === -3
The parseInt function expects a string as the first argument. JavaScript will call the toString method behind the scenes if the argument is not a string. So the expression is evaluated as follows:
(-3.67394039744206e-15).toString()
// "-3.67394039744206e-15"
parseInt("-3.67394039744206e-15")
// -3
-3.67394039744206e-15.toFixed(19) === -3.6739e-15
This expression is parsed as:
Unary - operator
The number literal 3.67394039744206e-15
.toFixed() -- property accessor, property name and function invocation
The way number literals are parsed is described here. Interestingly, +/- are not part of the number literal. So we have:
// property accessor has higher precedence than unary - operator
3.67394039744206e-15.toFixed(19)
// "0.0000000000000036739"
-"0.0000000000000036739"
// -3.6739e-15
Likewise for -3.67394039744206e-15.toFixed(2):
3.67394039744206e-15.toFixed(2)
// "0.00"
-"0.00"
// -0
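Note that wrapping the literal in parentheses applies toFixed to the negative value directly, which avoids the precedence surprise (a quick sketch):
console.log((-3.67394039744206e-15).toFixed(2)); // "-0.00"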
If the parsed string (stripped of its +/- sign) contains any character that is not a digit in the given radix (10 in your case), then a substring is created containing all the characters before the first such character; the unrecognized characters are discarded.
In the case of -3.67394039744206e-15, the conversion starts and the radix is determined as base 10. The conversion proceeds until it encounters '.', which is not a valid character in base 10. Thus, effectively, the conversion happens for 3, which gives the value 3, and then the sign is applied: -3.
For implementation logic - http://www.ecma-international.org/ecma-262/5.1/#sec-15.1.2.2
More examples:
alert(parseInt("2711e2", 16)); // 2560482 – every character is a valid hex digit
alert(parseInt("2711e2", 10)); // 2711 – parsing stops at the 'e'
To note:
The radix starts out at base 10 when none is given.
If the string starts with "0x" or "0X", it switches to base 16.
In older (pre-ES5) implementations, a leading '0' alone could switch it to base 8; ES5 removed that octal auto-detection.
It tries to parse strings to integers. My suspicion is that your floats are first getting cast to strings. Then, rather than parsing the whole value and rounding, it parses character by character and stops when it gets to the first decimal point, ignoring any decimal places or exponent.
Some examples here http://www.w3schools.com/jsref/jsref_parseint.asp
parseInt has the purpose of parsing a string and not a number:
The parseInt() function parses a string argument and returns an
integer of the specified radix (the base in mathematical numeral
systems).
And parseInt first calls ToString on its argument; during parsing, everything from the first non-numeric character onward is ignored.
You can use Math.round, which also parses strings, and rounds a number to the nearest integer:
Math.round("12.2e-2") === 0 //true
Math.round("12.2e-2") may round up or down based on the value. Hence may cause issues.
new Number("3.2343e-10").toFixed(0) may solve the issue.
It looks like you are trying to calculate; using parseFloat will give you the correct answer.
parseInt, as it says, returns an integer, whereas parseFloat returns a floating-point number (possibly in exponential notation):
parseInt(-3.67394039744206e-15) = -3
parseFloat(-3.67394039744206e-15) = -3.67394039744206e-15
console.log('parseInt(-3.67394039744206e-15) = ' , parseInt(-3.67394039744206e-15));
console.log('parseFloat(-3.67394039744206e-15) = ',parseFloat(-3.67394039744206e-15));
Does anyone know why JavaScript's Number toString function does not represent negative numbers correctly?
//If you try
(-3).toString(2); //shows "-11"
// but if you fake a bit shift operation it works as expected
(-3 >>> 0).toString(2); // print "11111111111111111111111111111101"
I am really curious why it doesn't work properly, or what the reason is that it works this way.
I've searched it but didn't find anything that helps.
Short answer:
The toString() function takes the decimal, converts it
to binary and adds a "-" sign.
A zero-fill right shift converts its operand to an unsigned 32-bit
integer, whose bit pattern is the value's two's-complement representation.
A more detailed answer:
Question 1:
//If you try
(-3).toString(2); //shows "-11"
It's in the function .toString(). When you output a number via .toString():
Syntax
numObj.toString([radix])
If the numObj is negative, the sign is preserved. This is the case
even if the radix is 2; the string returned is the positive binary
representation of the numObj preceded by a - sign, not the two's
complement of the numObj.
It takes the decimal, converts it to binary, and adds a "-" sign.
Base 10 "3" converted to base 2 is "11".
Adding the sign gives us "-11".
Question 2:
// but if you fake a bit shift operation it works as expected
(-3 >>> 0).toString(2); // print "11111111111111111111111111111101"
A zero-fill right shift converts its left operand to an unsigned 32-bit integer. The result of that operation is always an unsigned 32-bit integer.
The operands of all bitwise operators are converted to signed 32-bit
integers in two's complement format.
-3 >>> 0 (right logical shift) coerces its arguments to unsigned integers, which is why you get the 32-bit two's complement representation of -3.
http://en.wikipedia.org/wiki/Two%27s_complement
http://en.wikipedia.org/wiki/Logical_shift
var binary = (-3 >>> 0).toString(2); // coerced to uint32
console.log(binary);
console.log(parseInt(binary, 2) >> 0); // to int32
The output is:
11111111111111111111111111111101
-3
.toString() is designed to return the sign of the number in the string representation. See EcmaScript 2015, section 7.1.12.1:
If m is less than zero, return the String concatenation of the String "-" and ToString(−m).
This rule is no different for when a radix is passed as argument, as can be concluded from section 20.1.3.6:
Return the String representation of this Number value using the radix specified by radixNumber. [...] the algorithm should be a generalization of that specified in 7.1.12.1.
Once that is understood, the surprising thing is rather why it does not do the same with -3 >>> 0.
But that behaviour has actually nothing to do with .toString(2), as the value is already different before calling it:
console.log (-3 >>> 0); // 4294967293
It is the consequence of how the >>> operator behaves.
It does not help either that (at the time of writing) the information on MDN is not entirely correct. It says:
The operands of all bitwise operators are converted to signed 32-bit integers in two's complement format.
But this is not true for all bitwise operators. The >>> operator is an exception to the rule. This is clear from the evaluation process specified in EcmaScript 2015, section 12.5.8.1:
Let lnum be ToUint32(lval).
The ToUint32 operation has a step where the operand is mapped into the unsigned 32 bit range:
Let int32bit be int modulo 2^32.
When you apply the above-mentioned modulo operation (not to be confused with JavaScript's % operator) to the example value of -3, you indeed get 4294967293.
As -3 and 4294967293 are evidently not the same number, it is no surprise that (-3).toString(2) is not the same as (4294967293).toString(2).
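For illustration, the ToUint32 mapping can be sketched with plain arithmetic (a hedged sketch for integral n; toUint32 is a hypothetical helper name):
function toUint32(n) {
  var TWO_32 = Math.pow(2, 32);
  return ((n % TWO_32) + TWO_32) % TWO_32; // non-negative remainder, i.e. "int modulo 2^32"
}
console.log(toUint32(-3)); // 4294967293
console.log(-3 >>> 0);     // 4294967293 – same result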
Just to summarize a few points here, if the other answers are a little confusing:
what we want to obtain is the string representation of a negative number in binary representation; this means the string should show a signed binary number (using 2's complement)
the expression (-3 >>> 0).toString(2), let's call it A, does the job; but we want to know why and how it works
had we used var num = -3; num.toString(2) we would have gotten -11, which is simply the unsigned binary representation of the number 3 with a negative sign in front, which is not what we want
expression A works like this:
1) (-3 >>> 0)
The >>> operation takes the left operand (-3), which is a signed integer, and simply shifts the bits 0 positions to the left (so the bits are unchanged), and the unsigned number corresponding to these unchanged bits.
The bit sequence of the signed number -3 is the same bit sequence as the unsigned number 4294967293, which is what node gives us if we simply type -3 >>> 0 into the REPL.
2) (-3 >>> 0).toString
Now, if we call toString on this unsigned number, we will just get the string representation of the bits of the number, which is the same sequence of bits as -3.
What we effectively did was say "hey toString, you have normal behavior when I tell you to print out the bits of an unsigned integer, so since I want to print out a signed integer, I'll just convert it to an unsigned integer, and you print the bits out for me."
Daan's answer explains it well.
toString(2) does not really convert the number to two's complement; instead it just does a simple translation of the number to its positive binary form, while preserving its sign.
Example
Assume the given input is -15:
1. The negative sign will be preserved.
2. `15` in binary is 1111, therefore (-15).toString(2) gives the output
-1111 (this is not 2's complement!)
We know that the 2's complement of -15 in 32 bits is
11111111 11111111 11111111 11110001
Therefore, in order to get the binary form of -15, we can convert it to an unsigned 32-bit integer using the unsigned right shift >>> before passing it to toString(2) to print the binary form. This is the reason we do (-15 >>> 0).toString(2), which gives us 11111111111111111111111111110001, the correct binary representation of -15 in 2's complement.
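For completeness, a small helper (toBinary32 is a hypothetical name) that pads the result to the full 32-bit width:
function toBinary32(n) {
  return (n >>> 0).toString(2).padStart(32, "0");
}
console.log(toBinary32(-15)); // "11111111111111111111111111110001"
console.log(toBinary32(3));   // "00000000000000000000000000000011"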