In my ES6 compiler, I was working on a function that converts any binary number into a base-10 number. Basically, the 1s and 0s are all defaulting to 1.
This project is just for a 12-year-old's pleasure (me), so there's no rush if someone else needs help with something for their job. Anyway, I found that the problem is that when you enter the binary number, the digits stored in convertedNumQuery somehow default to 1, even when the digit is 0. I've tried debugging everywhere, but maybe it's just because I'm a newb >.<
var numQuery = prompt("Enter a binary number to convert to base-10! Don't leave the \"0b\" in there.").split("");
var binaryChart = [1];
var convertedNumQuery = numQuery.map(Number);
var base10 = 0;
console.log(convertedNumQuery + "\n");
for (var i = 0; i < (numQuery.length - 1); i++) {
  binaryChart.unshift(binaryChart[0] * 2);
  console.log(binaryChart);
}
console.log(convertedNumQuery);
console.log("\n convertedNumQuery is array: " + Array.isArray(convertedNumQuery));
for (var i = 0; i < convertedNumQuery.length; i++) {
  if (convertedNumQuery[i] = 1) {
    base10 += binaryChart[i];
    console.log(convertedNumQuery[i]);
  }
}
console.log(base10);
Say the binary number I wanted to convert is 101010, or 42. The expected result, 42, is supposed to be stored in base10, but what ends up happening is that every number in binaryChart gets added, giving 63. Even stranger, when I looked at convertedNumQuery at line 15, console.log(convertedNumQuery[i]);, the array was logged in the console as all 1s.
Why not just use parseInt to convert? Its parameters are (string, radix):
console.log(parseInt("110101", 2)); // 53
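As a side note, the all-1s symptom in the posted loop comes from if (convertedNumQuery[i] = 1), which assigns 1 rather than comparing with ===. Below is a minimal sketch of the converter built on parseInt; the prompt text is reused from the question, the rest is illustrative:
// Minimal sketch: let parseInt do the base-2 conversion directly.
var input = prompt("Enter a binary number to convert to base-10! Don't leave the \"0b\" in there.");
var base10 = parseInt(input, 2); // radix 2 = binary
console.log(base10); // "101010" -> 42
// If you keep the manual loop, compare instead of assigning:
// if (convertedNumQuery[i] === 1) { ... }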
Related
I am doing a test and the task is to write a function that takes an integer as input, and returns the number of bits that are equal to one in the binary representation of that number. You can guarantee that input is non-negative.
So I have the following problem: my code runs OK for the most part and passes all the tests, but when the inputs are 9843520790 and 8989787640 I get wrong results. So I hopped into the calculator, converted these numbers to binary, tried a couple of console.log(bNum) calls, and noticed that I lose the high bits during the conversion. Meaning that after bNum = (n>>>0).toString(2);, console.log(bNum) returns 11001001100011110010110101110011 for 7676570995 when it should return 111001001100011110010110101110011.
var countBits = function(n) {
  let bNum = (n >>> 0).toString(2); // converts int to base 2
  console.log(bNum);
  let sum = 0;
  for (let i = 0; i <= bNum.length - 1; i++) {
    if (bNum[i] == 1)
      sum += 1;
  }
  return sum;
};
I am not asking for a coding solution, just for someone to explain why this is happening so I can find a way to fix it. Thanks in advance :)
I think your issue is n>>>0. I'm not too familiar with >>>0, but as far as I know it converts the input to a 32-bit unsigned int. The max value for a uint32 is 4294967295, and those numbers exceed it, so I think some information is lost in the conversion.
var countBitsOld = function(n) {
  let bNum = (n >>> 0).toString(2); // converts int to base 2
  let sum = 0;
  for (let i = 0; i <= bNum.length - 1; i++) {
    if (bNum[i] == 1)
      sum += 1;
  }
  return sum;
};

var countBitsNew = function(n) {
  let bNum = (n).toString(2); // converts int to base 2
  let sum = 0;
  for (let i = 0; i <= bNum.length - 1; i++) {
    if (bNum[i] == 1)
      sum += 1;
  }
  return sum;
};
const test = (n) => console.log({old: countBitsOld(n), new: countBitsNew(n)});
test(9843520790);
test(8989787640);
test(76765709950);
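To see the truncation in isolation, here is a quick check of my own (>>> 0 applies ToUint32, i.e. it keeps only the value modulo 2^32):
// Bits above bit 31 are simply dropped by the unsigned right shift.
console.log(9843520790 >>> 0);                      // 1253586198 (= 9843520790 - 2 * 2^32)
console.log((9843520790).toString(2).length);       // 34 bits before truncation
console.log((9843520790 >>> 0).toString(2).length); // 31 bits after truncation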
This is a follow-up question to the one I asked previously, Is this JS Unique ID Generator Unreliable? (Getting collisions).
In the snippet below I'm generating 10000 random numbers using two methods. Method [1] is a straight random number up to 10^6, while Method [2] concatenates that same kind of random number up to 10^6 with the current JS Date().getTime() timestamp and rounds the result. There's also Method [3], which only applies Math.round to the RNG rather than to the whole concatenated result.
My question is: if you keep clicking the test button, you see that [1] always produces 10000 unique numbers, but [2] produces ~9500 no matter what, and [3] produces ~9900 but never the maximum either. Why is that? The chance of repeating a previous random number in [0..10^6] and pairing it with exactly the same timestamp should be essentially nil; we are generating pretty much within the same millisecond in a loop, and 10^6 is a huge range, much bigger than in my original question, which we know is enough because Method [1] works perfectly.
Is there truncation of some kind going on, which trims the string and makes it more likely to get duplicates? Paradoxically, a smaller string works better than a larger string using the same RNG inside it. But if there's no truncation, I would expect the results to be 100% unique, as in [1].
function run() {
  var nums1 = new Set(), nums2 = new Set(), nums3 = new Set();
  for (var i = 0; i < 10000; i++) {
    nums1.add(random10to6th());
  }
  for (var i = 0; i < 10000; i++) {
    nums2.add(random10to6th_concatToTimestamp());
  }
  for (var i = 0; i < 10000; i++) {
    nums3.add(random10to6th_concatToTimestamp_roundRNGOnly());
  }
  console.clear();
  console.log('Random 10^6 Unique set: ' + nums1.size);
  console.log('Random 10^6 and Concat to Date().time() Unique set: ' + nums2.size);
  console.log('Random 10^6 and Concat to Date().time(), Round RNG Only Unique set: ' + nums3.size);

  function random10to6th() {
    return Math.random() * Math.pow(10, 6);
  }

  function random10to6th_concatToTimestamp() {
    return Math.round(new Date().getTime() + '' + (Math.random() * Math.pow(10, 6)));
  }
}

function random10to6th_concatToTimestamp_roundRNGOnly() {
  return new Date().getTime() + '' + Math.round(Math.random() * Math.pow(10, 6));
}
<button onclick="run()">Run Algorithms</button>
<p>(Keep clicking this button)</p>
Is there truncation of some kind going on, which trims the string and makes it more likely to get duplicates?
Yes, simply by rounding a random number, you cut off the fractional digits. This reduces the number of possible outcomes compared to the non-rounded random number.
In addition to that, you concatenate a timestamp (13 digits) with a value between 0 and 1000000 (1 to 7 digits). So your concatenated result will have a total number of 14 to 20 digits, but JavaScript's number datatype is of limited precision and represents integers faithfully up to about 16 digits only (see Number.MAX_SAFE_INTEGER).
Example: Let's assume the timestamp is 1516388144210 and you append random numbers from 500000 to 500400:
+'1516388144210500000' == 1516388144210500000
+'1516388144210500100' == 1516388144210500000
+'1516388144210500200' == 1516388144210500000
+'1516388144210500300' == 1516388144210500400
+'1516388144210500400' == 1516388144210500400
You can see that, when converting those strings to numbers, they get rounded to the nearest available IEEE-754 double-precision (64 bit) number. This is because 1516388144210500000 > Number.MAX_SAFE_INTEGER.
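A quick way to confirm this yourself (my own check, reusing the example values from above):
// 19-digit integers are far above Number.MAX_SAFE_INTEGER (2^53 - 1),
// so neighbouring values collapse onto the same IEEE-754 double.
console.log(Number.MAX_SAFE_INTEGER);                             // 9007199254740991 (16 digits)
console.log(+'1516388144210500000' === +'1516388144210500100');   // true - two "different" IDs collide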
I think there are a number of issues in play here. I don't know which of the items below contributes to the observed difference, or to what degree, just that they are things that might explain the results.
First, you're concatenating a number with a string with a number, and then coercing that value back to a number as part of rounding the result. It would be very easy to feed unexpected values into Math.round (which in itself might cause collisions due to the floating-point precision issues outlined below).
Second, I think you actually reduce the randomness of the resulting number when you concatenate the timestamp. The function is likely to be called many, many times every second; if it's invoked more often than Date.getTime()'s millisecond precision can distinguish, the timestamp part will be identical to the one used in a previous loop iteration.
Third, unless I missed something, have you considered that the random number generator is only pseudo-random? Precision and digit limits also play a role when dealing with values as large as those in the code you posted. Since the most random part of the number is tacked onto the least significant end, it is the part most likely to be truncated, chopped, or modified.
Try inverting your concatenation and see the results (there are only about 4 or so collisions). The remaining collisions are accounted for by the reasons outlined above and in le_m's answer.
function run() {
  var nums1 = new Set(), nums2 = new Set();
  for (var i = 0; i < 10000; i++) {
    nums1.add(random10to6th());
  }
  for (var i = 0; i < 10000; i++) {
    nums2.add(random10to6th_concatToTimestamp());
  }
  console.clear();
  console.log('Random 10^6 Unique set: ' + nums1.size);
  console.log('Random 10^6 and Concat to Date().time() Unique set: ' + nums2.size);

  function random10to6th() {
    return Math.random() * Math.pow(10, 6);
  }

  function random10to6th_concatToTimestamp() {
    return Math.round((Math.random() * Math.pow(10, 6)) + '' + new Date().getTime());
  }
}
<button onclick="run()">Run Algorithms</button>
<p>(Keep clicking this button)</p>
I have a value fetched from the database, it's like:
4.5 which should be 4.500
0.01 which should be 0.010
11 which should be 11.000
So I used this piece of code:
sprintf("%.3f",(double)$html['camp_cpc'])
But here another problem arose. If $html['camp_cpc'] = '4.5234', it displays 4.523 instead of the original value 4.5234.
Likewise, for values with a longer decimal part like 0.346513, it only shows up to 0.346.
How can I solve this problem in JavaScript as well?
The floats 4.5 and 4.500 correspond to the same number, so they cannot (and should not) be stored in a way that preserves the different representations. If you need to preserve the original representation given by a user, you need to store this field as a string and convert it to a float whenever you need the numeric value.
In Javascript at least, this is an implementation of what I think you want:
function getValue(x, points) {
  // Convert to string
  var str = x.toString();
  // Find the decimal point
  var idx = str.indexOf(".");
  // If the number is an integer
  if (!~idx) return str + "." + "0".repeat(points);
  // Get the tail of the number
  var end = str.substr(idx + 1);
  // If the tail exceeds the number of decimal places, return the full string
  if (end.length > points) return str;
  // Otherwise return the int + the tail + required number of zeroes
  return str.substr(0, idx) + "." + end.substr(0, points) + "0".repeat(points - end.length);
}
console.log(getValue(4.5, 3)); //4.500
console.log(getValue(0.01, 3)); //0.010
console.log(getValue(11, 3)); //11.000
Working demo (Makes use of ES6 String.repeat for demonstration purposes)
The important thing to note here is that this is string manipulation. Once you start to say "I want the number to look like..." it's no longer a number, it's what you want to show the user.
This takes your number, converts it to a string, and pads the end of the string with the appropriate number of zeroes. If the decimal part exceeds the number of places required, the full number is returned.
In PHP, use %0.3f; you don't need to cast to (double):
<?php
echo sprintf("%0.3f", 4.5); // "4.500"
echo sprintf("%0.3f", 4.5234); // "4.523"
If you want to display 4 decimal places, use %0.4f
echo sprintf("%0.4f", 4.5); // "4.5000"
echo sprintf("%0.4f", 4.5234); // "4.5234"
To do this in JavaScript
(4.5).toFixed(3); // "4.500"
It could look something like this:
var n = [4.5234, 0.5, 0.11, 456.45];
var temp_n;
for (var i = 0; i < n.length; i++) {
  temp_n = String(n[i]).split(".");
  if (temp_n[1] == null || temp_n[1].length < 3) {
    n[i] = n[i].toFixed(3);
  }
}
var number = 342345820139586830203845861938475676;
var output = [];
var sum = 0;

while (number) {
  output.push(number % 10);
  number = Math.floor(number / 10);
}
output = output.reverse();

function addTerms() {
  for (var i = 0; i < output.length; i = i + 2) {
    var term = Math.pow(output[i], output[i + 1]);
    sum += term;
  }
  return sum;
}

document.write(output);
document.write("<br>");
document.write(addTerms());
I am trying to take that large number and split it into its digits, then find the sum of the first digit raised to the power of the 2nd, the 3rd digit raised to the 4th, the 5th raised to the 6th, and so on. For some reason, my array contains weird digits, causing my sum to be off. The correct answer is 2517052. Thanks
You're running into precision issues within JavaScript. Just evaluate the current value of number before you start doing anything, and the results may surprise you:
>>> var number = 342345820139586830203845861938475676; number;
3.423458201395868e+35
See also: What is JavaScript's highest integer value that a Number can go to without losing precision?
To resolve your issue, I'd store your input number as an array (or maybe even a string), then pull the digits off of that.
This will solve your calculation with the expected result of 2517052:
var number = "342345820139586830203845861938475676";
var sum = 0;
for(var i=0; i<number.length; i=i+2){
sum += Math.pow(number.charAt(i), number.charAt(i+1));
}
sum;
JavaScript stores numbers in floating-point format (double precision). A double can store only about 15-16 significant decimal digits precisely.
You can use string to store this large number.
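A quick illustration of that precision loss, reusing the number from the question (my own example):
// Above 2^53 the spacing between adjacent doubles is larger than 1,
// so these two different 36-digit integers end up as the exact same number.
console.log(342345820139586830203845861938475676 === 342345820139586830203845861938475677); // true
console.log(342345820139586830203845861938475676); // 3.423458201395868e+35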
As mentioned, this is a problem with numeric precision. It applies to all programming languages that use native numeric formats. Your problem works fine if you use a string instead:
var number = '342345820139586830203845861938475676';
var digits = number.split('');
var total = 0;
while (digits.length > 1) {
  var [n, power] = digits.splice(0, 2);
  total += Math.pow(n, power);
}
(The result is 2517052, by the way!)
Cast the number as a string and then iterate through it doing your math.
var number = "342345820139586830203845861938475676";//number definition
var X = 0;//some iterator
var numberAtX = 0 + number.charAt(X);//number access
JavaScript can represent every integer exactly only up to 9007199254740992 (2^53); your number is far larger, which is why your output is weird.
For Reference go through the link http://ecma262-5.com/ELS5_HTML.htm#Section_8.5
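A one-line check of that limit (my own illustration):
// 2^53 + 1 cannot be represented as a double, so it rounds back down to 2^53.
console.log(9007199254740992 === 9007199254740993); // true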
[edit] Adjusted the answer based on Borodin's comment.
Mmm, I think the result should be 2517052. I'd say this does the same:
var numbers = '342345820139586830203845861938475676'.split('')
  , num = numbers.splice(0, 2)
  , result = Math.pow(num[0], num[1]);

while ((num = numbers.splice(0, 2)) && num.length) {
  result += Math.pow(num[0], num[1]);
}

console.log(result); //=> 2517052
The array methods map and reduce are supported in modern browsers, and could be worth defining in older browsers. This is a good opportunity, if you haven't used them before.
If you are going to make an array of a string anyway, match pairs of digits instead of splitting to single digits. This example takes numbers or strings.
function sumPower(s) {
  return String(s).match(/\d{2}/g).map(function(itm) {
    return Math.pow(itm.charAt(0), itm.charAt(1));
  }).reduce(function(a, b) {
    return a + b;
  });
}
var s = '342345820139586830203845861938475676';
alert(sumPower(s));
/*
returned value:(Number)
2517052
*/
While playing around with random numbers in JavaScript I discovered a surprising bug, presumably in the V8 JavaScript engine in Google Chrome. Consider:
// Generate a random number [1,5].
var rand5 = function() {
  return parseInt(Math.random() * 5) + 1;
};

// Return a sample distribution over MAX times.
var testRand5 = function(dist, max) {
  if (!dist) { dist = {}; }
  if (!max) { max = 5000000; }
  for (var i = 0; i < max; i++) {
    var r = rand5();
    dist[r] = (dist[r] || 0) + 1;
  }
  return dist;
};
Now when I run testRand5() I get the following results (they differ slightly with each run, of course; you might need to set "max" to a higher value to reveal the bug):
var d = testRand5();
d = {
  1: 1002797,
  2: 998803,
  3: 999541,
  4: 1000851,
  5: 998007,
  10: 1 // XXX: Math.random() returned 4.5?!
};
Interestingly, I see comparable results in node.js, leading me to believe it's not specific to Chrome. Sometimes there are different or multiple mystery values (7, 9, etc).
Can anyone explain why I might be getting the results I see? I'm guessing it has something to do with using parseInt (instead of Math.floor()) but I'm still not sure why it could happen.
The edge case occurs when you happen to generate a very small number that gets expressed with an exponent, for example 9.546056389808655e-8.
Combined with parseInt, which interprets its argument as a string, all hell breaks loose. As suggested before me, it can be solved by using Math.floor.
Try it yourself with this piece of code:
var test = 9.546056389808655e-8;
console.log(test); // prints 9.546056389808655e-8
console.log(parseInt(test)); // prints 9 - oh noes!
console.log(Math.floor(test)) // prints 0 - this is better
Of course, it's a parseInt() gotcha. It converts its argument to a string first, and that can force scientific notation which will cause parseInt to do something like this:
var x = 0.000000004;
(x).toString(); // => "4e-9"
parseInt(x); // => 4
Silly me...
I would suggest changing your random number function to this:
var rand5 = function() {
  return Math.floor(Math.random() * 5) + 1;
};
This will reliably generate an integer value between 1 and 5 inclusive.
You can see your test function in action here: http://jsfiddle.net/jfriend00/FCzjF/.
In this case, parseInt isn't the best choice because it converts your float to a string, which can take a number of different formats (including scientific notation), and then tries to parse an integer out of it. It's much better to operate on the float directly with Math.floor().