So. I'm trying to subtract large integers, e.g. 76561198060995608 - 76561197960265728 = 100729880 (I'm converting a 64-bit value to a 32-bit one). VBScript and JS both give 100729888.
I would love to be able to do it in VBScript, but either I'm doing something wrong with CDbl (returns 100729888) or CCur (an "Overflow: 'CCur'" error occurs), or it can't be done the way I'm trying.
I've tried implementing JS libraries (bignum, BigNumber) and they also haven't returned the correct number; again, maybe because of my error. BigNumber returns 100729890.
Big number code as follows:
$(document).ready(function() {
    var x = new BigNumber(76561198060995608).subtract(new BigNumber(76561197960265728));
    alert(x);
});
So...what am I doing wrong? Am I making a stupid mistake? I don't feel like this should take the 6+ hours it's taken me so far.
Any suggestions or help would be greatly appreciated. Thanks!
The problem is that when you try
new BigNumber(76561198060995608)
you're still relying on the JavaScript runtime to parse and represent that number as a double-precision float before it ever reaches the "BigNumber" constructor, so the precision is already lost. I'm pretty sure you can pass a string to that constructor:
new BigNumber("76561198060995608")
and that should give you a fighting chance.
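To make the precision loss concrete (this sketch uses the built-in BigInt available in modern engines, rather than any particular BigNumber library):

```javascript
// Integers above Number.MAX_SAFE_INTEGER (2^53 - 1) lose precision
// as soon as they are written as plain numeric literals:
console.log(76561198060995608 - 76561197960265728);   // 100729888 (wrong)

// Built-in BigInt keeps them exact. Note the trailing "n" on the
// literals; they are never represented as doubles at all:
const diff = 76561198060995608n - 76561197960265728n;
console.log(diff.toString());                         // "100729880" (correct)
```

The string-based BigNumber approach works for the same reason: the exact digits reach the library without passing through a 64-bit float first.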
Related
I am practicing JavaScript operators, but addition is not working. What should I do to fix the problem?
This is for a new website. I entered 10+20, but in the output I get 1020 instead of 30.
var x="10";
var y="20";
var z=x+y;
I expect the output of 10+20 to be 30, but the actual output is 1020.
It's because you are concatenating strings instead of adding numbers.
Using quotes explicitly tells the interpreter that your variables are strings.
Remove them and you will get your addition.
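A minimal sketch of both fixes, either dropping the quotes or converting explicitly:

```javascript
// With quotes, + concatenates strings:
var a = "10", b = "20";
console.log(a + b);                  // "1020"

// Without quotes, + adds numbers:
var x = 10, y = 20;
console.log(x + y);                  // 30

// If the values must stay strings (e.g. they come from an input field),
// convert them before adding:
console.log(Number(a) + Number(b));  // 30
```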
I'd like to convert really big decimal values, e.g. 4.951760157141521e+27, to the matching binary string using pure JavaScript.
I'm aware that 4.951760157141521e+27 isn't really a normal integer anymore, which also leads to the problem that just using toString(2) doesn't work anymore.
print = function(i) { console.log(i) }
let myDecimalNumber = 42;
print(+myDecimalNumber.toString(2));
let myDecimalNumberBIG = 4.951760157141521e+27;
print(+myDecimalNumberBIG.toString(2));
How can I fix this? My idea was to use something like a BigInt library, but so far I couldn't find any working solution, so I would really appreciate a working example :)
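One possible sketch, assuming the value is integer-valued (every double at this magnitude is): route it through the built-in BigInt, whose toString(2) handles arbitrarily large integers:

```javascript
const big = 4.951760157141521e+27;

// BigInt(x) captures the exact integer the double represents (it would
// throw a RangeError only for a fractional double, which cannot occur
// at this magnitude), and BigInt's toString(2) then yields the full
// binary string:
const binaryString = BigInt(big).toString(2);
console.log(binaryString);

// Round-trip check: parsing the binary string back gives the same value.
console.log(BigInt('0b' + binaryString) === BigInt(big)); // true
```

Note that the binary string reflects the double's exact value, which may already differ from the decimal literal as written, since the literal itself was rounded to the nearest representable double.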
Given the value: 0x9e9090ab (10011110100100001001000010101011)
I want to parse the value ignoring the 32nd bit such that I end up with:
0x1e9090ab (00011110100100001001000010101011)
My attempts at doing this via a bitmask (0x9e9090ab & ~0x10000000) don't seem to be working and result in a signed negative number.
Not sure what I'm doing wrong here, so any help would be appreciated.
You need to use the bitmask ~0x80000000 instead of ~0x10000000, since 0x10000000 refers to the 29th bit. Example:
var result = 0x9e9090ab & ~0x80000000;
For me it's easier to think about this way:
0x9e9090ab & 0x7FFFFFFF
You can compare using node:
> (0xFFFFFFFF).toString(2)
'11111111111111111111111111111111'
> (0x7FFFFFFF).toString(2)
'1111111111111111111111111111111'
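To tie the two answers together, a short sketch: JavaScript's bitwise operators work on signed 32-bit integers, which is why intermediate results can print as negative; `>>> 0` reinterprets a result as unsigned when needed:

```javascript
// Clearing the 32nd (sign) bit with the 0x7FFFFFFF mask:
var result = 0x9e9090ab & 0x7FFFFFFF;
console.log(result.toString(16));                      // "1e9090ab"
console.log(result === 0x1e9090ab);                    // true

// Bitwise ops first coerce to signed 32-bit, so the raw value goes
// negative; ">>> 0" converts it back to its unsigned equivalent:
console.log((0x9e9090ab | 0) < 0);                     // true (sign bit set)
console.log(((0x9e9090ab | 0) >>> 0) === 0x9e9090ab);  // true
```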
This is by far the strangest error I've ever seen.
In my program I have one variable called avgVolMix. It holds a decimal value and is not NaN (console.log(avgVolMix) prints something like 0.3526246 to the console). However, using the variable in any assignment statement causes whatever uses it to become NaN. Example:
console.log(avgVolMix); <- prints a working decimal
var moveRatio = 10 + avgVolMix * 10;
console.log(moveRatio); <- prints NaN
I seriously have no idea why this is happening. I've tried everything to fix it: converting it to a string and back, rounding it to 2 decimal places, adding 0.0001 to it; nothing works! This is the only way I can get it "working" right now:
var temp = 0.0;
for(i = 0; i <= avgVolMix; i+=0.1)
temp = i;
This assigns a number that is close to avgVolMix to temp. However, as you can see, it's extremely bad programming. I should also note that this isn't just broken with this one variable, every variable that's associated with a library I'm using does this (I'm working on a music visualizer). Does anyone know why this might be happening?
Edit: I'm not actually able to access the code right now to test any of this stuff, and since this is a company project I'm not comfortable opening up a jsfiddle anyway. I was just wondering if anyone's ever experienced something like this. I can tell you that I got the library in question from here: http://gskinner.com/blog/archives/2011/03/music-visualizer-in-html5-js-with-source-code.html
If it's showing the variable value as NaN, try converting the variable with the parseInt() method. Hope it works; I also faced such a problem and solved it that way.
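A sketch of that idea, under one assumption: the library actually returns the value as a string containing a stray non-numeric character, which would explain why console.log shows a plausible number while arithmetic coerces it to NaN. For a decimal value, parseFloat fits better than parseInt, which would truncate the fraction:

```javascript
// Hypothetical repro (the trailing ";" is an assumed stray character):
var avgVolMix = "0.3526246;";
console.log(10 + avgVolMix * 10);  // NaN: the string fails numeric coercion

// parseFloat reads the leading numeric portion of the string
// (parseInt would stop at the decimal point and give 0 here):
var parsed = parseFloat(avgVolMix);
console.log(parsed);               // 0.3526246
console.log(10 + parsed * 10);     // roughly 13.526
```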
This is the jQuery code:
function Totalprice()
{
var unitprice=$('#unitpay').val();
var quota=$('#readers').val();
var totalprice=unitprice*quota;
$('#totalprice').text('$'+totalprice);
}
When the value of readers is 67 and the unitpay is 0.3, it calculates the total price and displays $20.099999999999998, not $20.1. What's wrong? If I want it to display $20.1 instead of $20.099999999999998, how can I rewrite the code?
How about this:
$('#totalprice').text('$'+totalprice.toFixed(1));
or:
$('#totalprice').text('$'+totalprice.toFixed(2));
to show it as an actual dollar amount.
As your enthusiastic commenters pointed out, it's a floating-point error. The quick and easy solution is to use a rounding method like toFixed().
Just use .toFixed(2). (link)
The problem is that computers can't represent some numbers exactly (they're finite, and operate in binary), so stuff like this happens.
JavaScript has some pretty severe floating-point issues. Try typing 0.1+0.2 in your Firebug console sometime for some fun.
This isn't an issue with jQuery. As has been mentioned above, use toFixed().
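Putting the suggestions above together in one sketch: toFixed() returns a string suitable for display, while Math.round() keeps a number if you still need to calculate with it:

```javascript
var totalprice = 0.3 * 67;
console.log(totalprice);                   // 20.099999999999998

// For display, round to a string:
console.log('$' + totalprice.toFixed(1));  // "$20.1"
console.log('$' + totalprice.toFixed(2));  // "$20.10"

// If you still need a number, scale, round, and scale back:
var rounded = Math.round(totalprice * 10) / 10;
console.log(rounded);                      // 20.1

// The classic demonstration of the underlying representation issue:
console.log(0.1 + 0.2);                    // 0.30000000000000004
```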