JavaScript flooring number to order of magnitude

I want to floor any integer >= 10 down to its order of magnitude. For instance,
15 -> 10
600 -> 100
8,547 -> 1,000
32,123 -> 10,000
3,218,748 -> 1,000,000
544,221,323,211 -> 100,000,000,000
....
I was thinking of parsing the int to a string, counting how many digits there are, then setting the new string to a 1 followed by a bunch of zeros and converting back to a number.
function convert(n) {
  var nStr = n.toString();
  var nLen = nStr.length;
  var newStr = "1" + Array(nLen).join("0"); // nLen - 1 zeros
  return parseInt(newStr, 10);
}
Is there a more mathematical way to do this? I want to avoid converting between int and string because it could waste a lot of memory and time when n is huge and I want to run this function a million times.

So you're looking for the order of magnitude.
function convert(n) {
  var order = Math.floor(Math.log(n) / Math.LN10
      + 0.000000001); // because float math sucks like that
  return Math.pow(10, order);
}
Simple ^_^ Math is awesome! Floating point imprecisions, however, are not. Note that this won't be completely accurate in certain edge cases, but it will do its best.
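If you are worried about those edge cases, here is a hedged variant (my own sketch, not part of the original answer) that computes the candidate power of ten and then nudges it if floating-point error pushed it one order off:

function convert(n) {
  var power = Math.pow(10, Math.floor(Math.log10(n)));
  if (power * 10 <= n) power *= 10; // log10 rounded down too far
  if (power > n) power /= 10;       // log10 rounded up too far
  return power;
}

console.log(convert(1000)); // 1000, even if Math.log10(1000) comes out just below 3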

Related

Create a random BigInt for Miller-Rabin test

I'm implementing a Miller-Rabin primality test using JavaScript BigInts.
The basic algorithm is no problem - I have that done, but it requires a random number in the range 0 to (number being tested - 3). I can't use Math.random() and scale since I'm using BigInt.
This doesn't need to be cryptographically secure, just random enough, so I've opted for generating a string of randomly selected hex digits and converting that to a BigInt.
Here's the code:
function getRandomBigint(lower, upper) {
  // Convert to a hex string so that we know how many digits to generate
  let hexDigits = new Array(upper.toString(16).length).fill('');
  let rand;
  let newDigits;
  do {
    // Fill the array with random hex digits and convert to a BigInt
    newDigits = hexDigits.map(() => Math.floor(Math.random() * 16).toString(16));
    rand = BigInt('0x' + newDigits.join(''));
  } while (rand < lower || rand > upper);
  return rand;
}
The problem here is that the generated number could be out of range. This function handles that (badly) by iterating until it gets a number in range. Obviously, that could potentially take a very long time. In practice it has never iterated more than a couple of dozen times before delivering a number, but the nature of randomness means that the problem is 'out there' waiting to bite me!
I could scale or truncate the result to get it in range, rather than iterating, but I am concerned that this would affect the randomness. I already have some evidence that this is not as random as it might be, but that might not matter in this application.
So, two questions:
Is this random enough for Miller-Rabin?
How should I deal with results out of range?
This is a JavaScript-only project - no libraries, please.
The Miller-Rabin test is a probabilistic[1] test. That is, it is deterministic for establishing that a number is not prime, but a prime indication is only that - an indication that a number might be prime.
The reason is that certain bases used in the test can result in a prime indication even when the target number is not prime. Using a random number as the base allows the test to be run repeatedly with a different base each time, thus increasing the probability that the result correctly indicates prime or not prime[2].
Thus, the random numbers selected must just be less than the number being tested. There is no requirement for them to be selected from the complete range.
With this in mind I now have this:
function getRandomBigInt(upper) {
  let maxInt = BigInt(Number.MAX_SAFE_INTEGER);
  if (upper <= maxInt) {
    return BigInt(Math.floor(Math.random() * Number(upper)));
  } else {
    return BigInt(Math.floor(Math.random() * Number.MAX_SAFE_INTEGER));
  }
}
9007199254740991 (Number.MAX_SAFE_INTEGER) should provide a sufficiently large range of numbers for this purpose. It works with my Miller-Rabin implementation as far as I have tested it.
[1] Testing numbers up to 3,317,044,064,679,887,385,961,981 against a short, specific list of bases will yield a definitive prime/not-prime result.
[2] For non-deterministic prime results it is still necessary to perform a deterministic test (such as trial division) to confirm.
A way to get a random number in a range is
lower + rand() % (upper - lower)
where rand() is any function that returns a random number from a range larger than (upper - lower). The math can be done with ordinary numbers or with BigInt.
function rand16() {
  // random BigInt in 0 .. 2^16-1
  return BigInt(Math.floor(Math.random() * 65536));
}

function rand() {
  // random BigInt in 0 .. 2^64-1, built from four 16-bit chunks
  return ((rand16() * 65536n + rand16()) * 65536n + rand16()) * 65536n + rand16();
}
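As a usage sketch (my own, not part of the original answer), the formula above can be applied directly with BigInt values, assuming the range (upper - lower) is smaller than the 2^64 span of rand(); note that taking the result modulo the range introduces a small modulo bias, which is usually acceptable for picking Miller-Rabin bases:

function randomInRange(lower, upper) {
  // random BigInt in lower .. upper-1 (slight modulo bias)
  return lower + rand() % (upper - lower);
}

console.log(randomInRange(2n, 10n ** 18n)); // some BigInt in [2, 10^18)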

Math.sin() Different Precision between Node.js and C#

I have a problem with precision in the last digit after the decimal point. The JavaScript code generates one fewer digit compared with the C# code.
Here is the simple Node.js code
var seed = 45;
var x = Math.sin(seed) * 0.5;
console.log(x);//0.4254517622670592
Here is the simple C# code
public String pseudorandom()
{
    int seed = 45;
    double num = Math.Sin(seed) * 0.5;
    return num.ToString("G15"); // 0.42545176226705922
}
How to achieve the same precision?
The JavaScript Number type is quite complex. It looks like the floating-point format follows IEEE 754-2008, but some aspects are left to the implementation. See http://www.ecma-international.org/ecma-262/6.0/#sec-number-objects sec 12.7.
There is a note:
The output of toFixed may be more precise than toString for some
values because toString only prints enough significant digits to
distinguish the number from adjacent number values. For example,
(1000000000000000128).toString() returns "1000000000000000100", while
(1000000000000000128).toFixed(0) returns "1000000000000000128".
Hence to get full digit accuracy you need something like
var seed = 45;
var x = Math.sin(seed) * 0.5;
x.toFixed(17);
// on my platform it's "0.42545176226705922"
Also, note that the specification allows the implementations of sin and cos some variety in the actual algorithm; it's only guaranteed to within +/- 1 ULP.
Using Java, the printing algorithm is different. Even forcing 17 digits gives the result as 0.42545176226705920.
You can check you are getting the same bit patterns using x.toString(2) and Double.doubleToLongBits(x) in Java.
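If you want to compare the raw bits from JavaScript more directly, here is a minimal sketch (my own, assuming an environment with typed arrays and BigInt, such as Node.js or a modern browser) that extracts the exact IEEE 754 bit pattern of a number, analogous to Java's Double.doubleToLongBits:

function doubleToBits(x) {
  var buf = new ArrayBuffer(8);
  new DataView(buf).setFloat64(0, x);      // write the double into the buffer
  return new DataView(buf).getBigUint64(0) // read back the raw 64 bits
    .toString(16).padStart(16, '0');       // return them as 16 hex digits
}

console.log(doubleToBits(Math.sin(45) * 0.5)); // 64-bit pattern as a hex string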
return num.ToString("G15");//0.42545176226705922
actually returns "0.425451762267059" (the leading 0 does not count as a significant digit, so G15 gives 15 decimal places in this example), and not the precision shown in the comment after it.
So you would use:
return num.ToString("G16");
to get "0.4254517622670592"
(for your example, where the digit before the decimal point is always 0, G16 amounts to 16 decimal places).

Can anyone explain this process with converting decimal numbers to Binary

I have looked around the internet for a way to convert decimal numbers into binary numbers, and I found this piece of code on some forum.
var number = prompt("Type a number!"); // Asks the user to input a number
var converted = []; // creates an array with nothing in it
while (number >= 1) { // While the number the user typed is greater than or equal to 1 it should loop
  converted.unshift(number % 2); // takes the number, divides it by 2, and if there's any rest it puts a "1", otherwise a "0"
  number = Math.floor(number / 2); // Divides the number by 2, then starts over again
}
console.log(converted);
I'm not understanding everything completely, so I made some comments about what I think the pieces of code do. Can anyone explain it in more detail, or is my understanding of what the code does correct?
This code is based on a technique for converting decimal numbers to binary.
If I take a decimal number, divide it by two, and take the remainder, the remainder will be either 0 or 1. Once you have divided 57 all the way down to 0, you get the binary number. For example:
57 / 2 = 28 r 1
28 / 2 = 14 r 0
14 / 2 = 7 r 0
7 / 2 = 3 r 1
3 / 2 = 1 r 1
1 / 2 = 0 r 1
The remainders are the binary number. I definitely recommend writing it out on paper. Read from the last remainder to the first, and the remainders look like this: 111001
Reverse it to make it correct. array.unshift() can do this, or you could use array.push() and then array.reverse() after the while loop. unshift() is probably the better approach.
57 in decimal is equal to 111001, which you can check.
BTW, this algorithm works for other bases, as long as you are converting from decimal - or at least as far as I know.
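As an illustration of that last point, here is a small sketch (my own, not from the original post) generalising the same divide-and-take-remainder loop to any base from 2 to 36:

function toBase(number, base) {
  var digits = [];
  while (number >= 1) {
    digits.unshift((number % base).toString(base)); // the remainder becomes the next digit
    number = Math.floor(number / base);
  }
  return digits.join('') || '0';
}

console.log(toBase(57, 2)); // "111001"
console.log(toBase(57, 8)); // "71"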
I hope this helped.
It seems like you've got the gist of it down.
Let's start with a random number:
6 === 110b
Now let's see what the above method does:
The number is greater than or equal to 1, so we add the last bit of the number to the output:
6 % 2 === 0 // output is now [0]
After dividing the number by two (which is essentially just bit-shifting the whole thing to the right), the number we're working with is now 11b (from the original 110b). 11b === 3, as you'd expect.
You can alternatively think of number % 2 as a bit-wise AND operation (number & 1):
  110
& 001
-----
  000
The rest of the loop simply carries the same operation out as long as needed: find the last bit of the current state, add it to the output, shift the current state.
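Here is the same loop expressed with those bitwise operators, as a minimal sketch of my own (assuming a non-negative integer that fits in 32 bits, since JavaScript's bitwise operators work on 32-bit values):

function toBinary(n) {
  var bits = [];
  while (n >= 1) {
    bits.unshift(n & 1); // take the last bit of the current state
    n = n >>> 1;         // shift the current state to the right
  }
  return bits.join('') || '0';
}

console.log(toBinary(6)); // "110"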

From character to binary (adding the left 0)

You can convert from char to binary in JS using this code:
var txt = "H";
var bits = txt.charCodeAt(0).toString(2); // bits = "1001000"
The result is mathematically correct, but not literally what I want: there is a missing 0 to the left. That, again, is correct, but I wonder if there is a way to make it keep the leading zeros.
You need a byte? Try this:
var txt = "H",
    bits = txt.charCodeAt(0).toString(2),
    aByte = new Array(9 - bits.length).join('0') + bits;
The snippet creates a new array whose length is the number of missing bits + 1, then joins it into a string containing the needed number of zeroes and prepends that to the bits. The 9 is the wanted "byte length" + 1.
However, this is a relatively slow method; if you have a time-critical task, I'd suggest you use a while or for loop instead.
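For example, a loop-based version might look like this (a sketch of my own, not from the answer; modern engines also offer bits.padStart(8, '0') for the same job):

var txt = "H";
var bits = txt.charCodeAt(0).toString(2);
while (bits.length < 8) {
  bits = "0" + bits; // prepend zeroes until we have a full byte
}
console.log(bits); // "01001000"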
charCodeAt() returns the code of a character, which is just a number. A number itself does not have any kind of preferred alignment, and by convention numbers are printed without any leading zeros. In fact, charCodeAt() returns a Unicode character code, which in general can take more than 8 bits to store. Therefore such behaviour is correct.
Try this
var bits = txt.charCodeAt(0).toString(2);
var padding = 8 - bits.length;
var res = [];
res.push(new Array(padding + 1).join('0') + bits);

Fastest way to create this number?

I'm writing a function to sign-extend a number to a wider bit length. This is a very frequently used operation in the PowerPC instruction set. This is what I have so far:
function exts(value, from, to) {
  return (value | something_goes_here);
}
value is the integer input, from is the number of bits that the value is using, and to is the target bit length.
What is the most efficient way to create a number that has to - from bits set to 1, followed by from bits set to 0?
Ignoring the fact that JavaScript has no 0b number syntax, for example, if I called
exts(0b1010101010, 10, 14)
I would want the function to OR the value with 0b11110000000000, returning a sign-extended result of 0b11111010101010.
A number containing p one bits followed by q zero bits can be generated via
((1<<p)-1)<<q
thus in your case
((1<<(to-from))-1)<<from
or much shorter
(1<<to)-(1<<from)
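For completeness, here is one way (my own hedged sketch, not from the answer) to plug that mask into exts(), assuming 0 < from < to <= 30 so all shifts stay within JavaScript's 32-bit signed bitwise range:

function exts(value, from, to) {
  var mask = (1 << to) - (1 << from); // (to - from) ones followed by (from) zeros
  var signBit = 1 << (from - 1);      // highest bit of the narrow value
  return (value & signBit) ? (value | mask) : value; // extend only if the sign bit is set
}

console.log(exts(parseInt("1010101010", 2), 10, 14).toString(2)); // "11111010101010"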
If you have the number 2^q (= 1 shifted left by q) represented as an integer of width p + q bits, it has the representation (p - 1 zeros, a single 1, then q zeros):
0...0 1 0...0
Then 2^q - 1 has the representation (p zeros followed by q ones):
0...0 1...1
which is exactly the opposite of what you want. So just flip the bits; hence what you want is
NOT((1 LEFT SHIFT by q) - 1)
= ~((1 << q) - 1) in C notation
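A quick illustration of the flipped mask in JavaScript's 32-bit bitwise world (my own example, using an unsigned shift so all 32 bits are displayed):

var q = 10;
var mask = ~((1 << q) - 1);            // ones everywhere above the lowest q bits
console.log((mask >>> 0).toString(2)); // "11111111111111111111110000000000"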
I am not overly familiar with binary mathematics in JavaScript... But if you need to OR a number with 0b11110000000000, then I assume you would just convert that to decimal (which would get you 15360), and do value | 15360.
Relevant info that you may find useful: parseInt("11110000000000", 2) converts a binary number (specified as a string) to a decimal number, and (15360).toString(2) converts a decimal number (15360 in this case) to a binary number (the result is a string).
Revised solution
There's probably a more elegant and mathematical method, but here's a quick-and-dirty solution:
var S = "";
for (var i = 0; i < p; i++)
  S += "1";
for (i = 0; i < q; i++)
  S += "0";
S = parseInt(S, 2); // convert the binary string to a number
