How to convert negative decimal to hexadecimal? - javascript

hex = Number(-59).toString(16)
hex is -3b
hex should be ffffffffffffffC5
Thanks for any help!

If the number is negative, the sign is preserved. This is the case even if the radix is 2: the string returned is the positive binary representation (zeros and ones) of the number preceded by a - sign, not the two's complement.
This is how the toString() method of the Number type works; it doesn't output the two's complement.
In other words, the toString() method converts the number as if it were positive, displays its hexadecimal representation, and, if the number is negative, just puts a minus sign - in front of it.
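To get the 64-bit two's-complement hex the asker expects, here is a minimal sketch using BigInt (BigInt.asUintN is standard since ES2020):
// Reinterpret -59 as an unsigned 64-bit integer, then print it in hex:
const hex = BigInt.asUintN(64, BigInt(-59)).toString(16);
console.log(hex); // "ffffffffffffffc5"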

Related

Advice on converting 8 byte (u64) unsigned integer into javascript

I have a u64 (unsigned integer) stored in 8 bytes of memory. Clearly the range is the integers from 0 to 2^64−1.
I am converting it to a javascript number by turning each byte into hex and making a hex string:
let s = '0x'
s += buffer.slice(0,1).toString("hex")
s += buffer.slice(1,2).toString("hex")
...
n = parseInt(s)
Works great for everything I have done so far.
But when I look at how javascript stores numbers, I become unsure. Javascript uses 8 bytes for numbers, but treats all numbers the same. This internal javascript "number" representation can also hold floating point numbers.
Can a javascript number store all integers from 0 to 2^64? seems not.
At what point do I get into trouble?
What do people do to get round this?
An unsigned 64-bit integer has the range 0 to 18,446,744,073,709,551,615.
The limit that matters here is Number.MAX_SAFE_INTEGER (2^53−1), the largest integer JavaScript can represent exactly; Number.MAX_VALUE is the maximum numeric value representable at all, but integer precision is already lost far below it.
The JavaScript Number type is a double-precision 64-bit binary format IEEE 754 value, like double in Java or C#.
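If the exact u64 value matters, BigInt sidesteps the precision problem entirely. A sketch assuming a Node.js Buffer (readBigUInt64BE is part of Node's Buffer API):
const buffer = Buffer.from("ffffffffffffffff", "hex"); // 2^64 - 1, the worst case
const n = buffer.readBigUInt64BE(0);                   // exact BigInt: 18446744073709551615n
const m = BigInt("0x" + buffer.toString("hex"));       // same value via the question's hex route
console.log(n === m, n);                               // true 18446744073709551615n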
General Info:
Integers in JS:
JavaScript has only floating-point numbers. Integers appear internally in two ways. First, most JavaScript engines store a small enough number without a decimal fraction as an integer (with, for example, 31 bits) and maintain that representation as long as possible. They have to switch back to a floating point representation if a number’s magnitude grows too large or if a decimal fraction appears.
Second, the ECMAScript specification has integer operators: namely, all of the bitwise operators. Those operators convert their operands to 32-bit integers and return 32-bit integers. For the specification, integer only means that the numbers don’t have a decimal fraction, and 32-bit means that they are within a certain range. For engines, 32-bit integer means that an actual integer (non-floating-point) representation can usually be introduced or maintained.
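A quick illustration of that operand conversion (fractions are truncated and values wrap modulo 2^32):
console.log(1.9 | 0);     // 1 – ToInt32 drops the fraction
console.log(2 ** 32 | 0); // 0 – wraps modulo 2^32
console.log(2 ** 31 | 0); // -2147483648 – the top bit becomes the sign bit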
Ranges of integers
Internally, the following ranges of integers are important in JavaScript (each is probed in the snippet after this list):
Safe integers [1], the largest practically usable range of integers that JavaScript supports:
53 bits plus a sign, range (−2^53, 2^53), which corresponds to ±9,007,199,254,740,992 (exclusive)
Array indices [2]:
32 bits, unsigned
Maximum length: 2^32−1
Range of indices: [0, 2^32−1) (excluding the maximum length!)
Bitwise operands [3]:
Unsigned right shift operator (>>>): 32 bits, unsigned, range [0, 2^32)
All other bitwise operators: 32 bits, including a sign, range [−2^31, 2^31)
“Char codes”, UTF-16 code units as numbers:
Accepted by String.fromCharCode()
Returned by String.prototype.charCodeAt()
16 bits, unsigned
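A few probes at the boundaries listed above:
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991 (2^53 - 1)
console.log(2 ** 53 === 2 ** 53 + 1); // true – integer precision ends here
new Array(2 ** 32 - 1);               // ok: the maximum array length
// new Array(2 ** 32);                // RangeError: invalid array length
console.log(-3 >> 0, -3 >>> 0);       // -3 4294967293 (int32 vs. uint32 operands)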
References:
[1] Safe integers in JavaScript
[2] Arrays in JavaScript
[3] Bitwise operators in JavaScript
Source: https://2ality.com/2014/02/javascript-integers.html

How does JS convert large numbers (above 2^53) to string

Since integers above 2^53 can't be accurately represented in doubles, how does JS decide on their decimal representation when they are printed as strings?
For example, 2^55 is 36028797018963968, and printf("%lf",(double)(1LL<<55)) in C will print that number correctly, since it has trailing zeroes in its binary representation that do not cause precision loss when truncated.
However, in Javascript, we get 36028797018963970 instead. It seems to try to round numbers to get a 0 at the end, but not always - for instance, 2^55-4 is represented correctly with 4 at the end.
Is there some place in the spec that defines this weird behavior?
Question: Since integers above 2^53 can't be accurately represented in doubles, how does JS decide on their decimal representation when they are printed as strings?
1. How JavaScript prints decimal numbers
JavaScript numbers are internally stored in binary floating point and usually displayed in the decimal system.
There are two decimal notations used by JavaScript:
Fixed notation
[ "+" | "-" ] digit+ [ "." digit+ ]
Exponential notation
[ "+" | "-" ] digit [ "." digit+ ] "e" [ "+" | "-" ] digit+
An example of exponential notation is 1.2345678901234568e+21.
Rules for displaying decimal numbers (both rules are illustrated after this list):
A. Use exponential notation if there are more than 21 digits before the decimal point.
B. Use exponential notation if the number starts with “0.” followed by more than five zeros.
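Both rules in action:
console.log(123456789012345678901234); // 1.2345678901234568e+23 (rule A: more than 21 digits)
console.log(0.0000001);                // 1e-7 (rule B: "0." followed by six zeros)
console.log(0.000001);                 // 0.000001 (only five zeros – still fixed notation)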
2. The ECMAScript 5.1 display algorithm
Section 9.8.1 of the ECMAScript 5.1 specification describes the algorithm for displaying a decimal number.
Given a number
mantissa × 10^(pointPos − digitCount)
The mantissa of a floating point number is an integer – the significant digits plus a sign. Leading and trailing zeros are discarded. Examples:
The mantissa of 12.34 is 1234.
Case-1. No decimal point: digitCount ≤ pointPos ≤ 21
Print the digits (without leading zeros), followed by pointPos−digitCount zeros.
Case-2. Decimal point inside the mantissa: 0 < pointPos ≤ 21, pointPos < digitCount
Display the first pointPos digits of the mantissa, then a point, then the remaining digitCount−pointPos digits.
Case-3. Decimal point comes before the mantissa: −6 < pointPos ≤ 0
Display a 0 followed by a point, −pointPos zeros and the mantissa.
Case-4. Exponential notation: pointPos ≤ -6 or pointPos > 21
Display the first digit of the mantissa. If there are more digits then display a point and the remaining digits. Next, display the character e and a plus or minus sign (depending on the sign of pointPos−1), followed by the absolute value of pointPos−1. Therefore, the result looks as follows.
mantissa0 [ "." mantissa1..digitCount ]
"e" signChar(pointPos−1) abs(pointPos−1)
Question: However, in JavaScript we get 36028797018963970 instead. It seems to try to round numbers to get a 0 at the end, but not always; for instance, 2^55−4 is represented correctly with 4 at the end. Is there some place in the spec that defines this weird behavior?
Yes, the same algorithm: Sect. 9.8.1 requires the digit string to be as short as possible while still converting back to exactly the same double ("k is as small as possible" in step 5). 2^55 is exactly 36028797018963968, but adjacent doubles at that magnitude are 8 apart, so the 16-digit 36028797018963970 still round-trips to the same double and is preferred over the 17-digit exact value. 2^55−4 has no shorter equivalent, so it is printed exactly.
Check: How numbers are encoded in JavaScript, especially section 5, "The maximum integer".
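A small demonstration of that shortest-round-trip rule:
// 2^55 = 36028797018963968 exactly; doubles at that magnitude are 8 apart,
// so the 16-digit 36028797018963970 converts back to the same double and wins:
console.log(2 ** 55);                       // 36028797018963970
console.log(36028797018963970 === 2 ** 55); // true – the same double
console.log(2 ** 55 - 4);                   // 36028797018963964 (no shorter round-trip exists)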
Additional Reference: https://medium.com/dailyjs/javascripts-number-type-8d59199db1b6

Why does `btoa` encoding in JavaScript work for a 20 digit string and not a 20 digit int?

Consider the following btoa output
btoa(99999999999999999999)
"MTAwMDAwMDAwMDAwMDAwMDAwMDAw"
btoa(99999999999999999998)
"MTAwMDAwMDAwMDAwMDAwMDAwMDAw"
btoa("99999999999999999998")
"OTk5OTk5OTk5OTk5OTk5OTk5OTg="
btoa("99999999999999999999")
"OTk5OTk5OTk5OTk5OTk5OTk5OTk="
We see that btoa cannot produce a unique encoding for the 20 digit int but can for the 20 digit string. Why is this?
My original guess is that since btoa is base 64 it can only encode something that is less than base 64, but why is it capable of encoding a 20 digit string instead?
Moreover, btoa seems unable to encode ints uniquely even below 9223372036854775807, which is 2^63−1:
btoa(9223372036854775302)
"OTIyMzM3MjAzNjg1NDc3NjAwMA=="
btoa(9223372036854775303)
"OTIyMzM3MjAzNjg1NDc3NjAwMA=="
Because of floating point imprecision. Remember, JavaScript doesn't have integers except temporarily during certain calculations; all JavaScript numbers are IEEE-754 double-precision floating point. 99999999999999999999 is not a safe integer value in IEEE-754 numbers, and in fact if you do this:
console.log(99999999999999999999);
...you'll see
100000000000000000000
The max safe integer (i.e., the largest integer that won't be affected by floating point imprecision) is 9007199254740991.
Since btoa accepts a string (when you pass it a number, the number just gets converted to string), just put quotes around your value:
btoa("99999999999999999999")
=> OTk5OTk5OTk5OTk5OTk5OTk5OTk=
Of course, if the value is the result of a math calculation, you can't do that. You'll have to change whatever it is that's calculating such large numbers, as they exceed the precise range of the number type.
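If the large values come from arithmetic, one way out is to do that arithmetic with BigInt and stringify before encoding; a sketch (btoa/atob are available in browsers and in Node 16+):
const big = 2n ** 66n - 1n;              // exact: 73786976294838206463n
console.log(btoa(big.toString()));       // encodes the exact digits
console.log(atob(btoa(big.toString()))); // "73786976294838206463" – round-trips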

How to convert floating point decimal separator from dot to comma in Javascript

I have already tried the following:
discval = 2.833423
discval = discval.toFixed(2).toString().replace("." , ",");
discval = parseFloat(discval);
The output is 2 and not 2,83
Any idea?
parseFloat("2,83") will return 2 because , is not recognized as decimal separator, while . is.
If you want to round the number to 2 decimal places just use parseFloat(discval.toFixed(2)) or Math.round(discval * 100) / 100;
If you need this just for display purposes, then leave it as a string with a comma. You can also use Number.toLocaleString() to format numbers for display purposes (see the sketch below), but you won't be able to use the result in further calculations.
BTW .toFixed() returns a string, so no need to use .toString() after that.
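A sketch of the display-only route via toLocaleString (the German locale here is just one example of a comma-decimal locale):
const discval = 2.833423;
const display = discval.toLocaleString("de-DE", { maximumFractionDigits: 2 });
console.log(display);        // "2,83"
console.log(typeof display); // "string" – fine for display, useless for arithmetic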
From https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/parseFloat
parseFloat parses its argument, a string, and returns a floating point number. If it encounters a character other than a sign (+ or -), a numeral (0-9), a decimal point, or an exponent, it returns the value up to that point and ignores that character and all succeeding characters. Leading and trailing spaces are allowed.
If the first character cannot be converted to a number, parseFloat returns NaN.
A , is not an expected character, so the number is truncated at that point.
It's not possible to change the decimal separator of floating point numbers in JavaScript; you will need to treat your number as a string if you want to separate decimals with a comma instead of a dot.

Negative numbers to binary string in JavaScript

Does anyone know why JavaScript's Number.toString function does not represent negative numbers correctly?
//If you try
(-3).toString(2); //shows "-11"
// but if you fake a bit shift operation it works as expected
(-3 >>> 0).toString(2); // print "11111111111111111111111111111101"
I am really curious why it doesn't work properly, or what the reason is that it works this way.
I've searched but didn't find anything that helps.
Short answer:
The toString() function takes the decimal, converts it
to binary and adds a "-" sign.
A zero-fill right shift converts its operand to an unsigned 32-bit
integer, whose bit pattern is the two's complement of the original number.
A more detailed answer:
Question 1:
//If you try
(-3).toString(2); //show "-11"
It's in the function .toString(). When you output a number via .toString():
Syntax
numObj.toString([radix])
If the numObj is negative, the sign is preserved. This is the case
even if the radix is 2; the string returned is the positive binary
representation of the numObj preceded by a - sign, not the two's
complement of the numObj.
It takes the decimal, converts it to binary and adds a "-" sign.
Base 10 "3" converted to base 2 is "11"
Add a sign gives us "-11"
Question 2:
// but if you fake a bit shift operation it works as expected
(-3 >>> 0).toString(2); // print "11111111111111111111111111111101"
A zero-fill right shift converts its operand to a 32-bit integer. The result of that operation is always an unsigned 32-bit integer.
The operands of all bitwise operators are converted to signed 32-bit
integers in two's complement format.
-3 >>> 0 (zero-fill right shift) coerces its operand to an unsigned 32-bit integer, which is why you get the 32-bit two's complement representation of -3.
http://en.wikipedia.org/wiki/Two%27s_complement
http://en.wikipedia.org/wiki/Logical_shift
var binary = (-3 >>> 0).toString(2); // coerced to uint32
console.log(binary);
console.log(parseInt(binary, 2) >> 0); // to int32
output is
11111111111111111111111111111101
-3
.toString() is designed to return the sign of the number in the string representation. See EcmaScript 2015, section 7.1.12.1:
If m is less than zero, return the String concatenation of the String "-" and ToString(−m).
This rule is no different for when a radix is passed as argument, as can be concluded from section 20.1.3.6:
Return the String representation of this Number value using the radix specified by radixNumber. [...] the algorithm should be a generalization of that specified in 7.1.12.1.
Once that is understood, the surprising thing is more as to why it does not do the same with -3 >>> 0.
But that behaviour has actually nothing to do with .toString(2), as the value is already different before calling it:
console.log (-3 >>> 0); // 4294967293
It is the consequence of how the >>> operator behaves.
It does not help either that (at the time of writing) the information on MDN is not entirely correct. It says:
The operands of all bitwise operators are converted to signed 32-bit integers in two's complement format.
But this is not true for all bitwise operators. The >>> operator is an exception to the rule. This is clear from the evaluation process specified in EcmaScript 2015, section 12.5.8.1:
Let lnum be ToUint32(lval).
The ToUint32 operation has a step where the operand is mapped into the unsigned 32 bit range:
Let int32bit be int modulo 2^32.
When you apply the above mentioned modulo operation (not to be confused with JavaScript's % operator) to the example value of -3, you get indeed 4294967293.
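That modulo step can be sketched in plain JavaScript (ignoring the ToNumber, NaN/Infinity, and truncation steps the full ToUint32 also performs):
// Map a number into [0, 2^32) the way ToUint32's modulo step does:
const toUint32 = (x) => ((x % 2 ** 32) + 2 ** 32) % 2 ** 32;
console.log(toUint32(-3));                // 4294967293
console.log(toUint32(-3) === (-3 >>> 0)); // true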
As -3 and 4294967293 are evidently not the same number, it is no surprise that (-3).toString(2) is not the same as (4294967293).toString(2).
Just to summarize a few points here, if the other answers are a little confusing:
what we want to obtain is the string representation of a negative number in binary representation; this means the string should show a signed binary number (using 2's complement)
the expression (-3 >>> 0).toString(2), let's call it A, does the job; but we want to know why and how it works
had we used var num = -3; num.toString(2) we would have gotten -11, which is simply the unsigned binary representation of 3 with a negative sign in front, which is not what we want
expression A works like this:
1) (-3 >>> 0)
The >>> operation takes the left operand (-3), which is a signed integer, shifts its bits 0 positions to the right (so the bits are unchanged), and returns the unsigned number corresponding to those unchanged bits.
The bit sequence of the signed number -3 is the same bit sequence as the unsigned number 4294967293, which is what node gives us if we simply type -3 >>> 0 into the REPL.
2) (-3 >>> 0).toString
Now, if we call toString on this unsigned number, we will just get the string representation of the bits of the number, which is the same sequence of bits as -3.
What we effectively did was say "hey toString, you have normal behavior when I tell you to print out the bits of an unsigned integer, so since I want to print out a signed integer, I'll just convert it to an unsigned integer, and you print the bits out for me."
Daan's answer explains it well.
toString(2) does not really convert the number to two's complement; instead it just does a simple translation of the number to its positive binary form, while preserving its sign.
Example
Assume the given input is -15,
1. negative sign will be preserved
2. `15` in binary is 1111, therefore (-15).toString(2) gives the output
-1111 (this is not 2's complement!)
We know that the 2's complement of -15 in 32 bits is
11111111 11111111 11111111 11110001
Therefore, in order to get the binary form of -15, we can first convert it to an unsigned 32-bit integer using the unsigned right shift >>>, and then pass it to toString(2) to print the binary form. This is why (-15 >>> 0).toString(2) gives us 11111111111111111111111111110001, the correct binary representation of -15 in 2's complement.
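For widths other than 32 bits, where >>> can't help, a BigInt-based sketch (BigInt.asUintN is standard) generalizes the same idea:
// Two's-complement string of n for any bit width and radix:
function toTwosComplement(n, bits, radix = 2) {
  return BigInt.asUintN(bits, BigInt(n)).toString(radix);
}
console.log(toTwosComplement(-15, 32));     // "11111111111111111111111111110001"
console.log(toTwosComplement(-15, 8));      // "11110001"
console.log(toTwosComplement(-59, 64, 16)); // "ffffffffffffffc5" – the first question's case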
