Distribution of Number of Digits of Random Numbers - javascript

I encountered this curious phenomenon while trying to implement a UUID generator in JavaScript.
Basically, in JavaScript, if I generate a large list of random numbers with the built-in Math.random() on Node 4.2.2:
var records = {};
var l;
for (var i = 0; i < 1e6; i += 1) {
    l = String(Math.random()).length;
    if (records[l]) {
        records[l] += 1;
    } else {
        records[l] = 1;
    }
}
console.log(records);
The numbers of digits follow a strange pattern:
{ '12': 1,
  '13': 11,
  '14': 65,
  '15': 663,
  '16': 6619,
  '17': 66378,
  '18': 611441,
  '19': 281175,
  '20': 30379,
  '21': 2939,
  '22': 282,
  '23': 44,
  '24': 3 }
I thought this was a quirk of V8's random number generator, but a similar pattern appears in Python 3.4.3:
12 : 2
13 : 5
14 : 64
15 : 672
16 : 6736
17 : 66861
18 : 610907
19 : 280945
20 : 30455
21 : 3129
22 : 224
And the Python code is as follows:
import random

random.seed()
records = {}
for i in range(0, 1000000):
    n = random.random()
    l = len(str(n))
    try:
        records[l] += 1
    except KeyError:
        records[l] = 1
for i in sorted(records):
    print(i, ':', records[i])
The pattern from 18 downward is expected: say a random number should have 20 digits; if its last digit is 0, the printed string effectively has only 19 digits. If the random number generator is good, that happens with probability roughly 1/10.
But why is the pattern reversed for 19 and beyond?
I guess this is related to the binary representation of floating-point numbers, but I can't figure out exactly why.

The reason is indeed related to the floating-point representation. A floating-point number can hold only a limited number of (binary) digits, and its exponent has a limited range. When you print such a number without scientific notation, you may in some cases need a few zeroes after the decimal point before the significant digits start.
You can visualize this effect by printing those random numbers which have the longest length when converted to a string:
var records = {};
var l, r;
for (var i = 0; i < 1e6; i += 1) {
    r = Math.random();
    l = String(r).length;
    if (l === 23) {
        console.log(r);
    }
    if (records[l]) {
        records[l] += 1;
    } else {
        records[l] = 1;
    }
}
This prints only the 23-character strings, and you will get numbers like these:
0.000007411070483631654
0.000053944830052166104
0.000018188989763578967
0.000029525788901141325
0.000009613635131744402
0.000005937417234758158
0.000021099748521158368
Notice the zeroes before the first non-zero digit. These are not actually stored in the number part (significand) of the floating-point representation; they are implied by its exponent part.
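You can make the exponent explicit with toExponential, and then the significand shows no leading zeroes; a quick check with the first value above:

var x = 0.000007411070483631654;
console.log(x.toExponential()); // 7.411070483631654e-6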
If you were to take out the leading zeroes, and then make a count:
var records = {};
var l, r, s;
for (var i = 0; i < 1e6; i += 1) {
    r = Math.random();
    s = String(r).replace(/^[0\.]+/, '');
    l = s.length;
    if (records[l]) {
        records[l] += 1;
    } else {
        records[l] = 1;
    }
}
... you'll get results which are less strange.
However, you will still see some irregularity, due to how JavaScript converts tiny numbers to string: when they get too small, scientific notation is used in the string representation. You can see this with the following script (I am not sure whether every browser has the same breaking point, so you may need to play a bit with the number):
var i = 0.00000123456789012345678;
console.log(String(i), String(i/10));
This gives me the following output:
0.0000012345678901234567 1.2345678901234568e-7
So very small numbers get a more uniform string length as a result, quite often 22 characters, while in non-scientific notation a length of 23 is common. This also influences the second script I provided: length 22 will get more hits than 23.
It should be noted that JavaScript does not switch to scientific notation when converting a number to its binary string representation:
var i = 0.1234567890123456789e-120;
console.log(i.toString(2));
The above will print a string of over 450 binary digits!
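If you only want to confirm the length rather than read through the digits, you can count them instead (a small variation on the snippet above):

var i = 0.1234567890123456789e-120;
console.log(i.toString(2).length); // somewhere above 450, counting the leading '0.'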

It's because some of the values come out like this:
0.00012345...
and are thus longer when converted to a string.

Related

Is the "Reverse Bits" solution using left shift correct?

Trying to solve the "Reverse Bits" problem, which says:
Reverse bits of a given 32-bit unsigned integer.
Input: n = 00000010100101000001111010011100
Output: 964176192 (00111001011110000010100101000000)
Explanation: The input binary string 00000010100101000001111010011100 represents the unsigned integer 43261596, so return 964176192, whose binary representation is 00111001011110000010100101000000.
In the solution code below, the loop runs 32 times; on each pass the result is shifted left, then if nums & 1 is 1 the result is incremented, and finally nums is shifted right by 1. After the loop, the result is returned.
Why does the output come out as 0, and what would a corrected version of this code look like?
let reverseBits = function(nums) {
    let result = 0
    for (let i = 1; i <= 32; i++) {
        result <<= 1
        if (nums & 1 > 0)
            result++
        nums >>= 1
    }
    return result
}
console.log(reverseBits(11111111111111111111111111111101))
The output is 0:
PS C:\VSB-PRO> node Fibo.js
0
Some issues:
The example value you pass as argument to your function is not given in binary notation but in decimal notation, so it is a different number than intended. Use the 0b prefix for literals in binary notation.
When using the << operator (and <<=), JavaScript interprets the 32nd bit as a sign bit. I suppose it is not intended to produce negative values, so avoid this by multiplying by 2 instead of shifting left.
Not a problem, but:
The >> operator has a specific effect on numbers that have the 32nd bit set: that bit is retained after the shift. As your script never inspects that bit, it does no harm here, but it would be more natural if 0 bits were shifted in. For that, use the >>> operator.
Finally, it may be useful to output the return value in binary notation so you can more easily verify the result.
let reverseBits = function(nums) {
    let result = 0;
    for (let i = 1; i <= 32; i++) {
        // use multiplication to avoid sign-bit interpretation
        result *= 2;
        if (nums & 1 > 0)
            result++;
        nums >>>= 1;
    }
    return result;
}
// Express number in binary notation:
let n = 0b11111111111111111111111111111101;
let result = reverseBits(n);
// Display result in binary notation
console.log(result.toString(2));
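One more convenience: toString(2) drops leading zero bits, so if you want the full 32-bit string for easier comparison you can pad it (assuming an environment with String.prototype.padStart):

console.log(result.toString(2).padStart(32, '0'));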

How to most optimally read subset of bits from a bitarray in JavaScript

Say I have a bitarray like this
101110101010101010101111111011001100100100001011001111101000101001
The two operations I want to do are:
Read a contiguous subset of the bits into a single bitarray (or integer, in the case of JavaScript)
Read a noncontiguous subset of bits into a single integer in JavaScript.
So for (1), let's say I want this:
1011[101010101010101011111]11011001100100100001011001111101000101001
== 101010101010101011111
= 1398111 in decimal
Bits 4-25 or so.
For (2), I would like to select a non-contiguous subset of bits and combine them, as efficiently as possible, into a final value.
1011[101]0101010101010[11]1111[1]011001100100100001011001111101000101001
== 101 ++ 11 ++ 1
= 101111
= 47 in decimal
Bits 4-6, 21-22, and 27 or so.
What is the right/optimal way of doing this?
The format is still a bit vague, but here's a way to do something like this. I'm making some assumptions that make the problem easier, namely:
At most 32 bits are extracted at once (so it fits in a Number without weird hacks)
Bits are in an Uint32Array (or compatible storage, as long as it has 32 bits per element)
The least significant bit of the 0th entry of the array is number 0 overall
The bit string represented this way is ... + tobits(array[1]) + tobits(array[0]); for example, [0, 256] represents 00000000000000000000000100000000_00000000000000000000000000000000 (the underscore marks the boundary between the two words). Maybe that's the wrong way around; it can be changed, but this way is simple.
The ith bit is in the (i >> 5)-th word (i.e. i / 32 with integer division), at offset i & 31 (i.e. i % 32) within that word. That's what makes this order simple to work with.
By the first assumption, at most 2 entries/words in the array are spanned by the range, so there are only two cases:
The bottom of the range is in one word and the top is in the next.
The range is wholly contained in a single word. Touching a second word should be avoided, as it might be beyond the end of the array. And even if the second word could be touched, the excess bits would not be easy to discard, because shift counts are taken modulo 32, so high << 32 wouldn't do the trick.
In code (not tested):
function extractRange(bits, begin, end) {
    // extract bits [begin .. end];
    // begin == end extracts a single bit
    var beginIdx = begin >> 5;
    var endIdx = end >> 5;
    var beginOfs = begin & 31;
    var endOfs = end & 31;
    var len = end - begin + 1;
    var msk = -1 >>> (32 - len);
    console.assert(len > 0 && len <= 32);
    if (beginIdx == endIdx) {
        // begin and end are in the same word:
        // discard the bits before the begin of the range and mask off the high bits
        return ((bits[endIdx] >>> beginOfs) & msk) >>> 0;
    }
    else {
        var low = bits[beginIdx];
        var high = bits[endIdx];
        // the high word contains the high bits of the result, in its lowest bits;
        // the low word contains the low bits of the result, in its highest bits:
        //   xxxxhhhh_llllxxxx
        // so the high word must be shifted left by the number of bits taken from the low word
        var bitsInLow = 32 - beginOfs;
        return (((low >>> beginOfs) | (high << bitsInLow)) & msk) >>> 0;
    }
}
Examples:
[0xdeadbeef, 0xcafebabe] means that the bit string is really 0xcafebabedeadbeef (in bits)

extractRange([0xdeadbeef, 0xcafebabe], 0, 31).toString(16)   // 'deadbeef'
extractRange([0xdeadbeef, 0xcafebabe], 4, 35).toString(16)   // 'edeadbee'
extractRange([0xdeadbeef, 0xcafebabe], 8, 39).toString(16)   // 'bedeadbe'
extractRange([0xdeadbeef, 0xcafebabe], 60, 63).toString(16)  // 'c'
extractRange([0xdeadbeef, 0xcafebabe], 30, 33).toString(16)  // 'b'
// for the last one: the bits around the word boundary are ...ed...,
// in binary 11101101; taking the middle 4 bits gives 1011 = b
For non-contiguous ranges, you could extract every individual range and then concatenate the results. I don't think there is a nicer way in general.
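For instance, here is a minimal, untested sketch of that approach, assuming the ranges are listed most-significant piece first and the combined width stays at or below 53 bits so plain numbers remain exact:

function extractRanges(bits, ranges) {
    // ranges is an array of [begin, end] pairs, most significant piece first;
    // assumption: the total width must not exceed 53 bits (Number's exact-integer limit)
    var result = 0;
    for (var i = 0; i < ranges.length; i++) {
        var len = ranges[i][1] - ranges[i][0] + 1;
        // shift what we have so far left by len bits and append the next piece;
        // multiplication instead of << avoids 32-bit overflow
        result = result * Math.pow(2, len) + extractRange(bits, ranges[i][0], ranges[i][1]);
    }
    return result;
}

For the example in the question, the pieces 101, 11 and 1 would then combine to 101111 = 47.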
A streamlined version that uses generators up to the final step, thus avoiding loading the whole input into memory.
// for a single number
function* gUintToBits(input, dim) {
    let i = 0;
    while (i < dim) {
        yield (input >> (dim - 1 - i++)) & 1;
        // or like this, if bits should go from right to left (lsb first):
        // yield (input >> i++) & 1;
    }
}

// for an array of numbers
function* gUintArrayToBits(input, dim) {
    for (let item of input) {
        yield* gUintToBits(item, dim);
    }
}

// apply the intervals mask directly to the generator
function* genWithIntervalsApplied(iterOfBits, intervals) {
    // fast, if the number of intervals is not too big
    const isInsideIntervals = (n, itrvs) => {
        for (let intv of itrvs) {
            if (n >= intv[0] && n < intv[1]) return true;
        }
        return false;
    };
    let index = 0;
    for (let item of iterOfBits) {
        if (isInsideIntervals(index++, intervals)) {
            yield item;
        }
    }
}

// finally, consume the generator
function extractIntervalsFromUint8Array(input, intervals) {
    let result = '';
    for (let x of genWithIntervalsApplied(gUintArrayToBits(input, 8), intervals)) {
        result += `${x}`;
    }
    return result;
}

const result = extractIntervalsFromUint8Array(
    [1, 3, 9, 127],
    [[8, 16], [24, 32]],
);
const dec = parseInt(result, 2);
console.log(result);
console.log(dec);
output:
// 0000001101111111
// 895
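One caveat worth adding: parseInt loses precision once an extracted slice exceeds 53 bits. In that case the result string can be handed to BigInt instead, which is exact at any length:

const big = BigInt('0b' + result);
console.log(big); // 895n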

Function to convert Number to Binary string

I am writing a function to convert a passed-in number to a binary string. The function creates a proper binary sequence, but my compare function skips the first index when comparing a number equal to binSeq[0] (e.g. n = 32, 16, 8, 4). Any ideas why?
This step creates a binary ordered array, which is what I will use to check the passed-in parameter against:
var Bin = function(n) {
    var x = 1;
    var binSeq = [];
    var converted = [];
    for (var i = 0; x <= n; i++) {
        binSeq.unshift(x);
        x = x + x;
    }
    console.log(binSeq);
This next step should compare and spit out a binary sequence of 1's and 0's, but it skips the if (n === binSeq[0]) case:
    for (var i = 0; i < binSeq.length; i++) {
        if ((n - binSeq[i]) >= 0) {
            converted.unshift(1);
            n = n - binSeq[i];
        } else {
            converted.unshift(0);
        }
    }
    console.log(converted);
}
Link to the CodePen: https://codepen.io/fdeppe/pen/GEozKY?editors=1111
Actually, this would do the trick:
function dec2bin(dec) {
    return (dec >>> 0).toString(2);
}
Explanation here: Negative numbers to binary string in JavaScript
-3 >>> 0 (logical right shift) coerces its argument to an unsigned 32-bit integer, which is why you get the 32-bit two's complement representation of -3.
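A quick check with a positive and a negative input:

console.log(dec2bin(3));  // 11
console.log(dec2bin(-3)); // 11111111111111111111111111111101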

Converting large numbers from binary to decimal and back in JavaScript

I have a very large number represented as binary in JavaScript:
var largeNumber = '11010011010110100001010011111010010111011111000010010111000111110011111011111000001100000110000011000001100111010100111010101110100010001011010101110011110000011000001100000110000011001001100000110000011000001100000110000111000011100000110000011000001100000110000011000010101100011001110101101001100110100100000110000011000001100000110001001101011110110010001011010001101011010100011001001110001110010100111011011111010000110001110010101010001111010010000101100001000001100001011000011011111000011110001110111110011111111000100011110110101000101100000110000011000001100000110000011010011101010110101101001111101001010010111101011000011101100110010011001001111101'
When I convert it to decimal with parseInt(largeNumber, 2), it gives me 1.5798770299367407e+199, but when I try to convert it back to binary:
parseInt(`1.5798770299367407e+199`, 2)
it returns 1 (which I think is related to how parseInt handles the value), when I was expecting to see my original binary representation of largeNumber. Can you explain this behavior? And how can I convert it back to the original state in JavaScript?
EDIT: This question is the result of an experiment where I was playing around with storing and transferring large amounts of boolean data. The largeNumber is a representation of a collection [true, true, false, true, ...] of boolean values which has to be shared between client, client worker, and server.
As noted in Andrew L.'s answer, and by several commenters, your largeNumber exceeds what JavaScript can represent as an integer in an ordinary number without loss of precision, which is 9.007199254740991e+15.
If you want to work with larger integers, you will need a BigInt library or other special-purpose code.
Below is some code demonstrating how to convert arbitrarily large positive integers between different base representations, showing that the exact decimal representation of your largeNumber is
15 798 770 299 367 407 029 725 345 423 297 491 683 306 908 462 684 165 669 735 033 278 996 876 231 474 309 788 453 071 122 111 686 268 816 862 247 538 905 966 252 886 886 438 931 450 432 740 640 141 331 094 589 505 960 171 298 398 097 197 475 262 433 234 991 526 525
function parseBigInt(bigint, base) {
    // convert bigint string to array of digit values
    for (var values = [], i = 0; i < bigint.length; i++) {
        values[i] = parseInt(bigint.charAt(i), base);
    }
    return values;
}

function formatBigInt(values, base) {
    // convert array of digit values to bigint string
    for (var bigint = '', i = 0; i < values.length; i++) {
        bigint += values[i].toString(base);
    }
    return bigint;
}

function convertBase(bigint, inputBase, outputBase) {
    // takes a bigint string and converts it to a different base
    var inputValues = parseBigInt(bigint, inputBase),
        outputValues = [], // output array, little-endian/lsd order
        remainder,
        len = inputValues.length,
        pos = 0,
        i;
    while (pos < len) { // while digits left in input array
        remainder = 0; // set remainder to 0
        for (i = pos; i < len; i++) {
            // long integer division of input values by the output base;
            // the remainder is added to the output array
            remainder = inputValues[i] + remainder * inputBase;
            inputValues[i] = Math.floor(remainder / outputBase);
            remainder -= inputValues[i] * outputBase;
            if (inputValues[i] == 0 && i == pos) {
                pos++;
            }
        }
        outputValues.push(remainder);
    }
    outputValues.reverse(); // transform to big-endian/msd order
    return formatBigInt(outputValues, outputBase);
}
var largeNumber =
'1101001101011010000101001111101001011101' +
'1111000010010111000111110011111011111000' +
'0011000001100000110000011001110101001110' +
'1010111010001000101101010111001111000001' +
'1000001100000110000011001001100000110000' +
'0110000011000001100001110000111000001100' +
'0001100000110000011000001100001010110001' +
'1001110101101001100110100100000110000011' +
'0000011000001100010011010111101100100010' +
'1101000110101101010001100100111000111001' +
'0100111011011111010000110001110010101010' +
'0011110100100001011000010000011000010110' +
'0001101111100001111000111011111001111111' +
'1000100011110110101000101100000110000011' +
'0000011000001100000110100111010101101011' +
'0100111110100101001011110101100001110110' +
'0110010011001001111101';
//convert largeNumber from base 2 to base 10
var largeIntDecimal = convertBase(largeNumber, 2, 10);
function groupDigits(bigint) { // 3-digit grouping
    return bigint.replace(/(\d)(?=(\d{3})+$)/g, "$1 ");
}
//show decimal result in console:
console.log(groupDigits(largeIntDecimal));
//converting back to base 2:
var restoredOriginal = convertBase(largeIntDecimal, 10, 2);
//check that it matches the original:
console.log(restoredOriginal === largeNumber);
If you're looking to transfer a large amount of binary data, you should use BigInt. BigInt allows you to represent an arbitrary number of bits.
// parse large number from string
let numString = '1101001101011010000101001111101001011101111100001001'
// as number
let num = BigInt('0b' + numString)
// now num holds large number equivalent to numString
console.log(num) // 3718141639515913n
// print as base 2
console.log(num.toString(2)) // 1101001101011010000101001111101001011101111100001001
Helper functions
// some helper functions
// get the kth bit from the right
function getKthBit(x, k) {
    return (x & (1n << k)) >> k;
}
// set the kth bit from the right to 1
function setKthBit(x, k) {
    return (1n << k) | x;
}
// set the kth bit from the right to 0
function unsetKthBit(x, k) {
    return x & ~(1n << k);
}
getKthBit(num, 0n);
// 1n
getKthBit(num, 5n);
// 0n
setKthBit(num, 1n).toString(2);
// 1101001101011010000101001111101001011101111100001011
setKthBit(num, 4n).toString(2);
// 1101001101011010000101001111101001011101111100011001
unsetKthBit(num, 0n).toString(2);
// 1101001101011010000101001111101001011101111100001000
unsetKthBit(num, 3n).toString(2);
// 1101001101011010000101001111101001011101111100000001
For convenience you may want to add this to BigInt if you're going to be serializing back to the client; then you can read it back as a string. Otherwise you will get "Uncaught TypeError: Do not know how to serialize a BigInt", because JSON.stringify does not know how to serialize a BigInt, even though it is a built-in JavaScript type.
Object.defineProperty(BigInt.prototype, "toJSON", {
    get() {
        "use strict";
        return () => this.toString() + 'n';
    }
});
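With that property in place, JSON.stringify can round-trip a BigInt as a string; a quick check:

console.log(JSON.stringify({ n: 895n })); // {"n":"895n"}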
BigInt is built into JS:
function parseBigInt(str, base = 10) {
    base = BigInt(base)
    var bigint = BigInt(0)
    for (var i = 0; i < str.length; i++) {
        // digit value of the ith character from the right:
        // '0'..'9' map to 0..9, 'a'..'z' map to 10..35
        var code = str[str.length - 1 - i].charCodeAt(0) - 48
        if (code >= 10) code -= 39
        bigint += base ** BigInt(i) * BigInt(code)
    }
    return bigint
}
parseBigInt('11010011010110100001010011111010010111011111000010010111000111110011111011111000001100000110000011000001100111010100111010101110100010001011010101110011110000011000001100000110000011001001100000110000011000001100000110000111000011100000110000011000001100000110000011000010101100011001110101101001100110100100000110000011000001100000110001001101011110110010001011010001101011010100011001001110001110010100111011011111010000110001110010101010001111010010000101100001000001100001011000011011111000011110001110111110011111111000100011110110101000101100000110000011000001100000110000011010011101010110101101001111101001010010111101011000011101100110010011001001111101', 2)
// 15798770299367407029725345423297491683306908462684165669735033278996876231474309788453071122111686268816862247538905966252886886438931450432740640141331094589505960171298398097197475262433234991526525n
When you convert it back to binary, you don't parse it as base 2; that's wrong. You're also trying to parse an integer as a float, which can cause imprecision. With this line:
parseInt(`1.5798770299367407e+199`, 2)
you're telling JS to parse a base-10 string as base 2! What you need to do is convert it to binary like so (note the use of parseFloat):
var largeNumber = '11010011010110100001010011111010010111011111000010010111000111110011111011111000001100000110000011000001100111010100111010101110100010001011010101110011110000011000001100000110000011001001100000110000011000001100000110000111000011100000110000011000001100000110000011000010101100011001110101101001100110100100000110000011000001100000110001001101011110110010001011010001101011010100011001001110001110010100111011011111010000110001110010101010001111010010000101100001000001100001011000011011111000011110001110111110011111111000100011110110101000101100000110000011000001100000110000011010011101010110101101001111101001010010111101011000011101100110010011001001111101';
// intLN is the integer form of the large number
var intLN = parseFloat(largeNumber); // parseFloat reads the string as base 10; it takes no radix argument
console.log(intLN);
var largeNumberConvert = intLN.toString(2); // convert back to binary with toString(radix)
console.log(largeNumberConvert);
Before, you converted a decimal to binary. What you need to do is call toString(radix) to convert it back into binary, so:
var binaryRepresentation = integerFormOfLargeNumber.toString(2);
If you look at the output, you see:
Infinity
Infinity
Since your binary number is quite large, it affects the results: the value is far beyond what a 64-bit JavaScript number can hold, so it overflows to Infinity and precision is lost. If you try re-converting largeNumberConvert from binary to decimal like this:
parseInt(largeNumberConvert, 10);
you can see that the original value cannot be recovered.

Is there a reliable way in JavaScript to obtain the number of decimal places of an arbitrary number?

It's important to note that I'm not looking for a rounding function. I am looking for a function that returns the number of decimal places in an arbitrary number's simplified decimal representation. That is, we have the following:
decimalPlaces(5555.0); //=> 0
decimalPlaces(5555); //=> 0
decimalPlaces(555.5); //=> 1
decimalPlaces(555.50); //=> 1
decimalPlaces(0.0000005); //=> 7
decimalPlaces(5e-7); //=> 7
decimalPlaces(0.00000055); //=> 8
decimalPlaces(5.5e-7); //=> 8
My first instinct was to use the string representation: split on '.', then on 'e-', and do the math, like so (the example is verbose):
function decimalPlaces(number) {
    var parts = number.toString().split('.', 2),
        integerPart = parts[0],
        decimalPart = parts[1],
        exponentPart;
    if (integerPart.charAt(0) === '-') {
        integerPart = integerPart.substring(1);
    }
    if (decimalPart !== undefined) {
        parts = decimalPart.split('e-', 2);
        decimalPart = parts[0];
    }
    else {
        parts = integerPart.split('e-', 2);
        integerPart = parts[0];
    }
    exponentPart = parts[1];
    if (exponentPart !== undefined) {
        return integerPart.length +
            (decimalPart !== undefined ? decimalPart.length : 0) - 1 +
            parseInt(exponentPart);
    }
    else {
        return decimalPart !== undefined ? decimalPart.length : 0;
    }
}
For my examples above, this function works. However, I'm not satisfied until I've tested every possible value, so I busted out Number.MIN_VALUE.
Number.MIN_VALUE; //=> 5e-324
decimalPlaces(Number.MIN_VALUE); //=> 324
Number.MIN_VALUE * 100; //=> 4.94e-322
decimalPlaces(Number.MIN_VALUE * 100); //=> 324
This looked reasonable at first, but then on a double take I realized that 5e-324 * 100 should be 5e-322! And then it hit me: I'm dealing with the effects of quantization of very small numbers. Not only are numbers being quantized before storage; additionally, some numbers stored in binary have unreasonably long decimal representations, so their decimal representations are being truncated. This is unfortunate for me, because it means that I can't get at their true decimal precision using their string representations.
So I come to you, StackOverflow community. Does anyone among you know a reliable way to get at a number's true post-decimal-point precision?
The purpose of this function, should anyone ask, is for use in another function that converts a float into a simplified fraction (that is, it returns the relatively coprime integer numerator and nonzero natural denominator). The only missing piece in this outer function is a reliable way to determine the number of decimal places in the float so I can multiply it by the appropriate power of 10. Hopefully I'm overthinking it.
Historical note: the comment thread below may refer to first and second implementations. I swapped the order in September 2017 since leading with a buggy implementation caused confusion.
If you want something that maps "0.1e-100" to 101, then you can try something like
function decimalPlaces(n) {
    // Make sure it is a number and use the built-in number -> string conversion.
    var s = "" + (+n);
    // Pull out the fraction and the exponent.
    var match = /(?:\.(\d+))?(?:[eE]([+\-]?\d+))?$/.exec(s);
    // NaN or Infinity or integer.
    // We arbitrarily decide that Infinity is integral.
    if (!match) { return 0; }
    // Count the number of digits in the fraction and subtract the
    // exponent to simulate moving the decimal point left by exponent places.
    // 1.234e+2 has 1 fraction digit and '234'.length - 2 == 1
    // 1.234e-2 has 5 fraction digits and '234'.length - -2 == 5
    return Math.max(
        0, // lower limit
        (match[1] == '0' ? 0 : (match[1] || '').length) // fraction length
        - (match[2] || 0)); // exponent
}
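A few quick checks against the cases discussed above:

console.log(decimalPlaces('0.1e-100')); // 101
console.log(decimalPlaces(1.234e+2));   // 1
console.log(decimalPlaces(0.123));      // 3
console.log(decimalPlaces(Infinity));   // 0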
According to the spec, any solution based on the builtin number->string conversion can only be accurate to 21 places beyond the exponent.
9.8.1 ToString Applied to the Number Type
Otherwise, let n, k, and s be integers such that k ≥ 1, 10^(k−1) ≤ s < 10^k, the Number value for s × 10^(n−k) is m, and k is as small as possible. Note that k is the number of digits in the decimal representation of s, that s is not divisible by 10, and that the least significant digit of s is not necessarily uniquely determined by these criteria.
If k ≤ n ≤ 21, return the String consisting of the k digits of the decimal representation of s (in order, with no leading zeroes), followed by n−k occurrences of the character '0'.
If 0 < n ≤ 21, return the String consisting of the most significant n digits of the decimal representation of s, followed by a decimal point '.', followed by the remaining k−n digits of the decimal representation of s.
If −6 < n ≤ 0, return the String consisting of the character '0', followed by a decimal point '.', followed by −n occurrences of the character '0', followed by the k digits of the decimal representation of s.
Historical note: The implementation below is problematic. I leave it here as context for the comment thread.
Based on the definition of Number.prototype.toFixed, it seems like the following should work, but due to the IEEE-754 representation of double values, certain numbers will produce false results. For example, decimalPlaces(0.123) will return 20.
function decimalPlaces(number) {
    // toFixed produces a fixed representation accurate to 20 decimal places
    // without an exponent.
    // The ^-?\d*\.? strips off any sign, integer portion, and decimal point,
    // leaving only the decimal fraction.
    // The 0+$ strips off any trailing zeroes.
    return ((+number).toFixed(20)).replace(/^-?\d*\.?|0+$/g, '').length;
}
}
// The OP's examples:
console.log(decimalPlaces(5555.0)); // 0
console.log(decimalPlaces(5555)); // 0
console.log(decimalPlaces(555.5)); // 1
console.log(decimalPlaces(555.50)); // 1
console.log(decimalPlaces(0.0000005)); // 7
console.log(decimalPlaces(5e-7)); // 7
console.log(decimalPlaces(0.00000055)); // 8
console.log(decimalPlaces(5e-8)); // 8
console.log(decimalPlaces(0.123)); // 20 (!)
Well, I use a solution based on the fact that if you multiply a floating-point number by the right power of 10, you get an integer.
For instance, if you multiply 3.14 by 10^2, you get 314 (an integer). The exponent then represents the number of decimals the floating-point number has.
So I thought that if I gradually multiply the floating-point number by increasing powers of 10, I would eventually arrive at the solution.
let decimalPlaces = function () {
    function isInt(n) {
        return typeof n === 'number' &&
            parseFloat(n) == parseInt(n, 10) && !isNaN(n);
    }
    return function (n) {
        const a = Math.abs(n);
        let c = a, count = 1;
        while (!isInt(c) && isFinite(c)) {
            c = a * Math.pow(10, count++);
        }
        return count - 1;
    };
}();
for (const x of [
0.0028, 0.0029, 0.0408,
0, 1.0, 1.00, 0.123, 1e-3,
3.14, 2.e-3, 2.e-14, -3.14e-21,
5555.0, 5555, 555.5, 555.50, 0.0000005, 5e-7, 0.00000055, 5e-8,
0.000006, 0.0000007,
0.123, 0.121, 0.1215
]) console.log(x, '->', decimalPlaces(x));
2017 Update
Here's a simplified version based on Edwin's answer. It has a test suite and returns the correct number of decimals for corner cases including NaN, Infinity, exponent notation, and numbers with problematic representations of their successive fractions, such as 0.0029 or 0.0408. This covers the vast majority of financial applications, where it matters more that 0.0408 has 4 decimals (not 6) than that 3.14e-21 has 23.
function decimalPlaces(n) {
    function hasFraction(n) {
        return Math.abs(Math.round(n) - n) > 1e-10;
    }
    let count = 0;
    // multiply by increasing powers of 10 until the fractional part is ~ 0
    while (hasFraction(n * (10 ** count)) && isFinite(10 ** count))
        count++;
    return count;
}
for (const x of [
0.0028, 0.0029, 0.0408, 0.1584, 4.3573, // corner cases against Edwin's answer
11.6894,
0, 1.0, 1.00, 0.123, 1e-3, -1e2, -1e-2, -0.1,
NaN, 1E500, Infinity, Math.PI, 1/3,
3.14, 2.e-3, 2.e-14,
1e-9, // 9
1e-10, // should be 10, but is below the precision limit
-3.14e-13, // 15
3.e-13, // 13
3.e-14, // should be 14, but is below the precision limit
123.12345678901234567890, // 14, the precision limit
5555.0, 5555, 555.5, 555.50, 0.0000005, 5e-7, 0.00000055, 5e-8,
0.000006, 0.0000007,
0.123, 0.121, 0.1215
]) console.log(x, '->', decimalPlaces(x));
The tradeoff is that the method is limited to a maximum of 10 guaranteed decimals. It may return more decimals correctly, but don't rely on that. Numbers smaller than 1e-10 may be considered zero, and the function will return 0 for them. That particular value was chosen to solve correctly the 11.6894 corner case, for which the simple method of multiplying by powers of 10 fails (it returns 5 instead of 4).
However, this is the 5th corner case I've discovered, after 0.0029, 0.0408, 0.1584 and 4.3573. After each, I had to reduce the precision by one decimal. I don't know if there are other numbers with fewer than 10 decimals for which this function may return an incorrect number of decimals. To be on the safe side, look for an arbitrary-precision library.
Note that converting to string and splitting by '.' is only a solution for up to 7 decimals: String(0.0000007) === "7e-7". Or maybe even fewer? Floating-point representation isn't intuitive.
Simple "One-Liner":
If what you're doing requires more than 16 digit precision, then this is not for you.
This 'one-liner' will work fine for the other 99.99999999999999% of the time. (Yes, even that number.)😜
function numDec(n){return n%1==0?0:(""+n).length-(""+n).lastIndexOf(".")-1}
Demo in the snippet:
function numDec(n){return n%1==0?0:(""+n).length-(""+n).lastIndexOf(".")-1}

setInterval(function(){
    n = Math.random() * 10000000000;
    document.body.innerHTML = n + ' ← ' + numDec(n) + ' decimal places';
}, 777);
body{font-size:123.4567890%; font-family:'fira code';}
More info:
mozilla.com : .lastIndexOf()
mozilla.com : .length
This works for numbers smaller than e-17:
function decimalPlaces(n) {
    var a;
    return (a = (n.toString().charAt(0) == '-' ? n - 1 : n + 1).toString()
        .replace(/^-?[0-9]+\.?([0-9]+)$/, '$1').length) >= 1 ? a : 0;
}
This works for me:
const decimalPlaces = value.substring(value.indexOf('.') + 1).length;
This expects value to already be a string containing a decimal point; for a plain number, convert it with String(value) first.
Not only are numbers being quantized before storage; additionally, some numbers stored in binary have unreasonably long decimal representations, so their decimal representations are being truncated.
JavaScript represents numbers using the IEEE-754 double-precision (64-bit) format. As I understand it, this gives you 53 bits of precision, or fifteen to sixteen decimal digits.
So for any number with more digits you just get an approximation. There are some libraries around to handle large numbers with more precision, including those mentioned in this thread.
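A quick illustration of that limit: above 2^53, adjacent integers can no longer be distinguished.

console.log(Number.MAX_SAFE_INTEGER);               // 9007199254740991
console.log(9007199254740992 === 9007199254740993); // true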
2021 Update
An optimized version of Mike Samuel's answer, handling both scientific and non-scientific representations.
// Helper function to extract the number of decimal assuming the
// input is a number (either as a number of a stringified number)
// Note: if a stringified number has an exponent, it will always be
// '<x>e+123' or '<x>e-123' or '<x.dd...d>e+123' or '<x.dd...d>e-123'.
// No need to check for '<x>e123', '<x>E+123', '<x>E-123' etc.
const _numDecimals = v => {
    const [i, p, d, e, n] = v.toString().split(/(\.|e[\-+])/g);
    const f = e === 'e-';
    return ((p === '.' && (!e || f) && d.length) + (f && parseInt(n)))
        || (p === 'e-' && parseInt(d))
        || 0;
}
// But if you want to be extra safe...you can replace _numDecimals
// with this:
const _numSafeDecimals = v => {
    let [i, p, d, e, n] = v.toString().split(/(\.|[eE][\-+])/g);
    e = (e || '').toLowerCase(); // e may be undefined for plain decimals
    const f = e === 'e-';
    return ((p === '.' && (!e || f) && d.length) + (f && parseInt(n)))
        || (p.toLowerCase() === 'e-' && parseInt(d))
        || 0;
}
// Augmenting the Number prototype.
Number.prototype.numDecimals = function () {
    return (this % 1 !== 0 && _numDecimals(this)) || 0;
}

// Independent function.
const numDecimals = num => (
    (!isNaN(num) && num % 1 !== 0 && _numDecimals(num)) || 0
);

// Tests:
const test = n => (
    console.log('Number of decimals of', n, '=', n.numDecimals())
);
test(1.234e+2); // --> 1
test(0.123); // ---> 3
test(123.123); // ---> 3
test(0.000123); // ---> 6
test(1e-20); // --> 20
test(1.2e-20); // --> 21
test(1.23E-20); // --> 22
test(1.23456789E-20); // --> 28
test(10); // --> 0
test(1.2e20); // --> 0
test(1.2e+20); // --> 0
test(1.2E100); // --> 0
test(Infinity); // --> 0
test(-1.234e+2); // --> 1
test(-0.123); // ---> 3
test(-123.123); // ---> 3
test(-0.000123); // ---> 6
test(-1e-20); // --> 20
test(-1.2e-20); // --> 21
test(-1.23E-20); // --> 22
test(-1.23456789E-20); // --> 28
test(-10); // --> 0
test(-1.2e20); // --> 0
test(-1.2e+20); // --> 0
test(-1.2E100); // --> 0
test(-Infinity); // --> 0
I use this...
45555.54545456?.toString().split(".")[1]?.length
Null-check the value before converting it to a string, then convert it to a string, split it, take the decimal part, null-check again, and get the length. If the number has decimals, you get their count; otherwise you get undefined.
//console.log("should give error:", null.toString().split(".")[1]?.length);
console.log("should give undefined:", null?.toString().split(".")[1]?.length);
//console.log("should give error:", 45555.toString().split(".")[1]?.length);
console.log("should give undefined:", 45555?.toString().split(".")[1]?.length);
//console.log("should give error:", 45555?.toString().split(".")[1].length);
console.log("should give amount of decimals:", 45555.54545456?.toString().split(".")[1]?.length);
console.log("should return without decimals when undefined:", 45555.54545456.toFixed(undefined));
An optimized version of nick's answer.
The function requires that n is a string. It counts the decimals even if they are all 0, e.g. 1.00 -> 2 decimals.

const $DecimalSeparator = "." // assuming "." as separator; the original leaves this constant undefined

function getDecimalPlaces(n) {
    var i = n.indexOf($DecimalSeparator)
    return i > 0 ? n.length - i - 1 : 0
}
console.log(getDecimalPlaces("5555.0")); // 1
console.log(getDecimalPlaces("5555")); // 0
console.log(getDecimalPlaces("555.5")); // 1
console.log(getDecimalPlaces("555.50")); // 2
console.log(getDecimalPlaces("0.0000005")); // 7
console.log(getDecimalPlaces("0.00000055")); // 8
console.log(getDecimalPlaces("0.00005500")); // 8
If you have very small values, use the below code:
Number.prototype.countDecimals = function () {
    if (Math.floor(this.valueOf()) === this.valueOf()) return 0;
    var str = this.toString();
    if (str.indexOf(".") !== -1 && str.indexOf("e-") !== -1) {
        // scientific notation with a fractional significand, e.g. "1.23e-7"
        return parseInt(str.split("e-")[1]) + str.split("e-")[0].split(".")[1].length;
    } else if (str.indexOf(".") !== -1) {
        // plain decimal notation
        return str.split(".")[1].length;
    }
    // scientific notation without a fraction, e.g. "1e-7"
    return parseInt(str.split("e-")[1]) || 0;
}
var num = 10
console.log(num.countDecimals()) //0
num = 1.23
console.log(num.countDecimals()) //2
num = 1.454689451
console.log(num.countDecimals()) //9
num = 1.234212344244323e-7
console.log(num.countDecimals()) //22
Based on gion_13's answer I came up with this:

function decimalPlaces(n) {
    let result = /^-?[0-9]+\.([0-9]+)$/.exec(n);
    return result === null ? 0 : result[1].length;
}
for (const x of [
0, 1.0, 1.00, 0.123, 1e-3, 3.14, 2.e-3, -3.14e-21,
5555.0, 5555, 555.5, 555.50, 0.0000005, 5e-7, 0.00000055, 5e-8,
0.000006, 0.0000007,
0.123, 0.121, 0.1215
]) console.log(x, '->', decimalPlaces(x));
It fixes that version returning 1 when there are no decimal places. As far as I can tell, it works without errors.
