While playing around with random numbers in JavaScript I discovered a surprising bug, presumably in the V8 JavaScript engine in Google Chrome. Consider:
// Generate a random number [1,5].
var rand5 = function() {
    return parseInt(Math.random() * 5) + 1;
};
// Return a sample distribution over MAX times.
var testRand5 = function(dist, max) {
    if (!dist) { dist = {}; }
    if (!max) { max = 5000000; }
    for (var i = 0; i < max; i++) {
        var r = rand5();
        dist[r] = (dist[r] || 0) + 1;
    }
    return dist;
};
Now when I run testRand5() I get the following results (of course, differing slightly with each run, you might need to set "max" to a higher value to reveal the bug):
var d = testRand5();
d = {
    1: 1002797,
    2: 998803,
    3: 999541,
    4: 1000851,
    5: 998007,
    10: 1 // XXX: Math.random() returned 4.5?!
};
Interestingly, I see comparable results in node.js, leading me to believe it's not specific to Chrome. Sometimes there are different or multiple mystery values (7, 9, etc).
Can anyone explain why I might be getting the results I see? I'm guessing it has something to do with using parseInt (instead of Math.floor()) but I'm still not sure why it could happen.
The edge case occurs when you happen to generate a very small number expressed with an exponent, for example 9.546056389808655e-8.
Combined with parseInt, which interprets its argument as a string, all hell breaks loose. And as suggested before me, it can be solved using Math.floor.
Try it yourself with this piece of code:
var test = 9.546056389808655e-8;
console.log(test);             // prints 9.546056389808655e-8
console.log(parseInt(test));   // prints 9 - oh noes!
console.log(Math.floor(test)); // prints 0 - this is better
Of course, it's a parseInt() gotcha. It converts its argument to a string first, and that can force scientific notation which will cause parseInt to do something like this:
var x = 0.000000004;
(x).toString(); // => "4e-9"
parseInt(x); // => 4
Silly me...
I would suggest changing your random number function to this:
var rand5 = function() {
    return Math.floor(Math.random() * 5) + 1;
};
This will reliably generate an integer value between 1 and 5 inclusive.
You can see your test function in action here: http://jsfiddle.net/jfriend00/FCzjF/.
In this case, parseInt isn't the best choice because it's going to convert your float to a string, which can be in a number of different formats (including scientific notation), and then try to parse an integer out of it. It's much better to just operate on the float directly with Math.floor().
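If you use this pattern in more than one place, a small helper keeps the Math.floor approach in one spot. A minimal sketch (the randInt name and signature are my own, not from the original post):

var randInt = function(min, max) {
    // Uniform integer in [min, max] inclusive; Math.floor never
    // round-trips the float through a string the way parseInt does.
    return Math.floor(Math.random() * (max - min + 1)) + min;
};

var rand5 = function() { return randInt(1, 5); };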
Problem statement: I'm trying to convert a string to binary without using the built-in method in JavaScript.
This is a piece of a program where a string input (like "ABC") is accepted and then translated to an array of the equivalent character code values ([65, 66, 67]).
The function binary() will change a number to binary, but I'm unable to join them together to loop through all the contents. Please help. (I'm a noob, please forgive my bad code and bad explanation.)
var temp3 = [65, 66, 67];
var temp2 = [];
var r;
for (i = 0; i < temp3.length; i++) {
    var r = temp3[i];
    temp2.push(binary(r));
}

function binary(r) {
    if (r === 0) return;
    temp2.unshift(r % 2);
    binary(Math.floor(r / 2));
    return temp2;
}

console.log(temp2);
I think this is a cleaner version of this function. It should work for any non-negative integer, and would be easy enough to extend to the negatives. If we have a single binary digit (0 or 1), and hence are less than 2, we just return the number converted to a string. Otherwise we recurse on the floor of half the number (as yours does) and append the final digit.
const binary = (n) =>
    n < 2
        ? String(n)
        : binary(Math.floor(n / 2)) + (n % 2)

console.log(binary(22))               //=> '10110'
console.log([65, 66, 67].map(binary)) //=> ['1000001', '1000010', '1000011']
In your function you have this code
var r = temp3[i];
I don't see any temp3 variable anywhere in your code above so I'd imagine that could be causing some issues.
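For what it's worth, here is one way to keep the asker's loop structure while making binary() self-contained (a sketch of my own, not taken from the answers above): build the digits in a local array instead of mutating the shared temp2.

function binary(r) {
    var digits = [];
    while (r > 0) {
        digits.unshift(r % 2); // prepend the lowest-order bit
        r = Math.floor(r / 2);
    }
    return digits.join('') || '0'; // 0 produces no bits in the loop
}

var temp3 = [65, 66, 67];
var temp2 = [];
for (var i = 0; i < temp3.length; i++) {
    temp2.push(binary(temp3[i]));
}
console.log(temp2); //=> ['1000001', '1000010', '1000011']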
I have two functions here...
function getCostOne() {
    var cost = 1.5;
    return 1 * cost.toFixed(2);
}
and...
function getCostTwo() {
    var cost = 1.5;
    return 1 + cost.toFixed(2);
}
What is the difference between multiplying cost.toFixed(2) and adding cost.toFixed(2)?
Why does multiplying it return .5 and adding return .50?
Those functions return 1.5 and "11.50" respectively. Working JSBin Demo...
console.log(1 * '1.50');
console.log(1 + '1.50');
It looks like the string is cast to a number in the first case (as though you had called parseFloat('1.50')) and concatenated in the second. However, these are only the results in my own browser. Take a look at the official MDN Web Docs...
console.log('foo' * 2);
// expected output: NaN
So, Chrome is probably handling it well, but I wouldn't expect that kind of behavior across all browsers!
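A quick way to confirm what is happening in your own environment is to check the type that toFixed() actually returns:

var cost = 1.5;
console.log(typeof cost.toFixed(2)); // "string" - toFixed() returns a string
console.log(1 * cost.toFixed(2));    // 1.5     (* coerces "1.50" to a number)
console.log(1 + cost.toFixed(2));    // "11.50" (+ with a string concatenates)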
If you want them to both definitely return the right numbers, do all the mathematical logic first, and then format it with toFixed(). That code would look like...
function getCostTwo() {
    var cost = 1.5;
    cost += 1;              // do the math logic FIRST!
    return cost.toFixed(2); // once the number is just right, we format it!
}
Greetings Stack Overflow!
First off, this is my first question!
I am trying to solve the selfDividingNumbers algorithm and I ran into this interesting problem. This function is supposed to take a range of numbers to check if they are self dividing.
Self Dividing example:
128 is a self-dividing number because
128 % 1 == 0, 128 % 2 == 0, and 128 % 8 == 0.
My attempt with Javascript.
/*
  selfDividingNumbers( 1, 22 );
*/
var selfDividingNumbers = function(left, right) {
    var output = [];
    while (left <= right) {
        // convert number into an array of strings, size 1
        var leftString = left.toString().split();
        // initialize digit iterator
        var currentDigit = leftString[0];
        for (var i = 0; i < leftString.length; i++) {
            currentDigit = parseInt(leftString[i]);
            console.log(left % currentDigit);
        }
        // increment lower bound
        left++;
    }
    return output;
};
When comparing the current lower bound to the current digit of the lower bound, left % currentDigit always produces zero! I figure this is probably a type error, but I'm unsure why, and would love for someone to point out the reason!
Would also like to see any other ideas to avoid this problem!
I figured this was a good chance to get a better handle on Javascript considering I am clueless as to why my program is producing this output. Any help would be appreciated! :)
Thanks Stack Overflow!
Calling split() with no separator isn't buying you anything: it returns a one-element array containing the whole string, so currentDigit ends up being the entire number, and left % currentDigit is just left % left, which is always zero. Remove it (or pass '' as the separator) and you'll get the results you expect. You still have to write the code to populate output, though.
The answer by #Joseph may fix your current code, but I think there is a potentially easier way to go about doing this. Consider the following script:
var start = 128;
var num = start;
var sd = true;

while (num > 0) {
    var last = num % 10;
    if (start % last != 0) {
        sd = false;
        break;
    }
    num = Math.floor(num / 10);
}

if (sd) {
    console.log("Is self dividing");
} else {
    console.log("Is NOT self dividing");
}
To test each digit in the number for its ability to cleanly divide the original number, you can simply use a loop. In each iteration, check num % 10 to get the current digit, and then divide the number by ten. If we ever see a digit which cannot divide the number evenly, the number is not self-dividing; otherwise it is.
So the string split method takes the string and returns an array of string parts. The method expects a parameter, however: the separator. If no separator is provided, the method returns only one part, the string itself. In your case, what you probably intended was to split the string into individual characters, which means the separator should be the empty string:
var leftString = left.toString().split('');
Since you are already familiar with console.log, note that you could also use it to debug your program. If you are confused about the output of left % currentDigit, one thing you could try is logging the variables just before the call,
console.log(typeof left, left, typeof currentDigit, currentDigit)
which might give you ideas about where to look next.
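Putting the two fixes together (split('') to get individual digits, and actually populating output), here is a minimal sketch of the corrected function; note that a zero digit has to disqualify a number, since n % 0 is NaN:

var selfDividingNumbers = function(left, right) {
    var output = [];
    while (left <= right) {
        var digits = left.toString().split('');
        var isSelfDividing = true;
        for (var i = 0; i < digits.length; i++) {
            var digit = parseInt(digits[i], 10);
            // A zero digit, or any digit that does not divide evenly,
            // disqualifies the number.
            if (digit === 0 || left % digit !== 0) {
                isSelfDividing = false;
                break;
            }
        }
        if (isSelfDividing) { output.push(left); }
        left++;
    }
    return output;
};

console.log(selfDividingNumbers(1, 22));
//=> [1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 15, 22]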
var number = 342345820139586830203845861938475676;
var output = [];
var sum = 0;

while (number) {
    output.push(number % 10);
    number = Math.floor(number / 10);
}
output = output.reverse();

function addTerms() {
    for (i = 0; i < output.length; i = i + 2) {
        var term = Math.pow(output[i], output[i + 1]);
        sum += term;
    }
    return sum;
}

document.write(output);
document.write("<br>");
document.write(addTerms());
I am trying to take that large number and split it into its digits. Then, find the sum of the first digit raised to the power of the 2nd, the 3rd digit raised to the power of the 4th, the 5th raised to the 6th, and so on. For some reason, my array contains weird digits, causing my sum to be off. The correct answer is 2517052. Thanks!
You're running into precision issues within JavaScript. Just evaluate the current value of number before you start doing anything, and the results may surprise you:
>>> var number = 342345820139586830203845861938475676; number;
3.423458201395868e+35
See also: What is JavaScript's highest integer value that a Number can go to without losing precision?
To resolve your issue, I'd store your input number as an array (or maybe even a string), then pull the digits off of that.
This will solve your calculation with the expected result of 2517052:
var number = "342345820139586830203845861938475676";
var sum = 0;
for(var i=0; i<number.length; i=i+2){
sum += Math.pow(number.charAt(i), number.charAt(i+1));
}
sum;
JavaScript stores numbers in floating-point format (IEEE 754 double precision). A double can store only about 15-16 significant decimal digits precisely.
You can use a string to store this large number.
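A quick way to see that limit in practice (Number.MAX_SAFE_INTEGER requires an ES2015+ engine):

console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991 (2^53 - 1)
// Above that, distinct integers collapse onto the same double:
console.log(9007199254740992 === 9007199254740993); // true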
As mentioned, this is a problem with numeric precision. It applies to all programming languages that use native floating-point formats. Your calculation works fine if you use a string instead:
var number = '342345820139586830203845861938475676';
var digits = number.split('');
var total = 0;
while (digits.length > 1) {
    var [n, power] = digits.splice(0, 2);
    total += Math.pow(n, power);
}
(The result is 2517052, by the way!)
Cast the number as a string and then iterate through it doing your math.
var number = "342345820139586830203845861938475676";//number definition
var X = 0;//some iterator
var numberAtX = 0 + number.charAt(X);//number access
The greatest integer JavaScript can represent exactly is 9007199254740992, which is why your output is weird.
For reference, see http://ecma262-5.com/ELS5_HTML.htm#Section_8.5
[edit] Adjusted the answer based on Borodin's comment.
Mmm, I think the result should be 2517052. I'd say this does the same:
var numbers = '342345820139586830203845861938475676'.split('')
  , num = numbers.splice(0, 2)
  , result = Math.pow(num[0], num[1]);

while ((num = numbers.splice(0, 2)) && num.length) {
    result += Math.pow(num[0], num[1]);
}
console.log(result); //=> 2517052
The array methods map and reduce are supported in modern browsers, and could be worth defining in older browsers; this is a good opportunity if you haven't used them before. If you are going to make an array of a string anyway, match pairs of digits instead of splitting to single digits. This example takes numbers or strings.
function sumPower(s) {
    return String(s).match(/\d{2}/g).map(function(itm) {
        return Math.pow(itm.charAt(0), itm.charAt(1));
    }).reduce(function(a, b) {
        return a + b;
    });
}
var s = '342345820139586830203845861938475676';
alert(sumPower(s))
/*
returned value:(Number)
2517052
*/
UPDATED:
Using JavaScript or jQuery, how can I convert a number into its different variations:
eg:
1000000
to...
1,000,000 or 1000K
OR
1000
to...
1,000 or 1K
OR
1934 and 1234
to...
1,934 or ~2K (under 2000 but over 1500)
or
1,234 or 1K+ (over 1000 but under 1500)
Can this be done in a function?
Hope this makes sense.
C
You can add methods to Number.prototype, so for example:
Number.prototype.addCommas = function () {
    var intPart = Math.floor(this).toString();
    var decimalPart = (this - Math.floor(this)).toString();
    // Keep the ".xyz" part (drop only the leading "0") if it exists
    if (decimalPart.length > 2) {
        decimalPart = decimalPart.substring(1);
    } else {
        // Otherwise remove it altogether
        decimalPart = '';
    }
    // Work through the digits three at a time
    var i = intPart.length - 3;
    while (i > 0) {
        intPart = intPart.substring(0, i) + ',' + intPart.substring(i);
        i = i - 3;
    }
    return intPart + decimalPart;
};
Now you can call this as var num = 1000; num.addCommas() and it will return "1,000". That's just an example, but you'll find that all the functions you create will involve converting the numbers to strings early in the process, then processing and returning the strings. (Separating the integer and decimal parts will probably be particularly useful, so you might want to refactor that out into its own method.) Hopefully this is enough to get you started.
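For example, with the method above:

var num = 1234567.25;
console.log(num.addCommas());    // "1,234,567.25"
console.log((1000).addCommas()); // "1,000"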
Edit: Here's how to do the K thing... this one's a bit simpler:
Number.prototype.k = function () {
    // We don't want any thousands processing if the number is less than 1000.
    if (this < 1000) {
        // edit 2 May 2013: make sure it's a string for consistency
        return this.toString();
    }
    // Round to 100s first so that we can get the decimal point in there,
    // then divide by 10 for thousands
    var thousands = Math.round(this / 100) / 10;
    // Now convert it to a string and add the K
    return thousands.toString() + 'K';
};
Call this in the same way: var num = 2000; num.k()
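Worth noting: modern engines also ship Number.prototype.toLocaleString, which covers the comma variant directly (the K-style abbreviation still needs custom code like the above):

console.log((1000000).toLocaleString('en-US')); // "1,000,000"
console.log((1934).toLocaleString('en-US'));    // "1,934"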
Theoretically, yes.
As TimWolla points out, it requires a lot of logic.
Ruby on Rails has a helper for presenting time in words. Have a look at the documentation. The implementation for that helper can be found on GitHub and could give you some hints as to how to go about implementing this.
I agree with the comment to reduce the complexity by choosing one format.
Hope you find some help in my answer.