In C# the following code returns 2:
double d = 2.9;
int i = (int)d;
Debug.WriteLine(i);
In JavaScript, however, the only ways of converting a "double" to an "int" that I'm aware of involve Math.round/floor/toFixed etc. Is there a way to convert to an int in JavaScript without rounding? I'm aware of the performance implications of Number(), so I'd rather avoid converting the value to a string if at all possible.
Use parseInt().
var num = 2.9
console.log(parseInt(num, 10)); // 2
You can also use |.
var num = 2.9
console.log(num | 0); // 2
I find the "parseInt" suggestions to be pretty curious, because "parseInt" operates on strings by design. That's why its name has the word "parse" in it.
A trick that avoids a function call entirely is
var truncated = ~~number;
The double application of the "~" unary operator will leave you with a truncated version of a double-precision value. However, the value is limited to 32 bit precision, as with all the other JavaScript operations that implicitly involve considering numbers to be integers (like array indexing and the bitwise operators).
edit — In an update quite a while later, another alternative to the ~~ trick is to bitwise-OR the value with zero:
var truncated = number|0;
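For example, here is a quick sketch showing both tricks and the shared 32-bit limit (illustrative values, run in any modern engine):
console.log(~~2.9);          // 2
console.log(2.9 | 0);        // 2
console.log(~~-2.9);         // -2
console.log(~~4294967297.5); // 1 — the value wraps, since 2^32 + 1 exceeds the 32-bit range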
Similar to casting to (int) in C#, using just the standard library:
Math.trunc(1.6) // 1
Math.trunc(-1.6) // -1
Just use parseInt() and be sure to include the radix so you get predictable results:
parseInt(d, 10);
There is no such thing as an int in JavaScript. All numbers are actually doubles behind the scenes,* so you can't rely on the type system to issue a rounding order for you as you can in C or C#.
You don't need to worry about precision issues (since doubles correctly represent any integer up to 2^53), but you really are stuck with using Math.floor (or other equivalent tricks) if you want to turn a fractional value into an integer.
*Most JS engines use native ints when they can but all in all JS numbers must still have double semantics.
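A short illustration of both points (nothing engine-specific, just IEEE-754 doubles):
console.log(Number.isInteger(2));  // true, but it is still stored as a double
console.log(Math.pow(2, 53));      // 9007199254740992
console.log(Math.pow(2, 53) + 1);  // 9007199254740992 — precision is lost past 2^53
console.log(Math.floor(2.9));      // 2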
A trick to truncate that avoids a function call entirely is
var number = 2.9
var truncated = number - number % 1;
console.log(truncated); // 2
To round a floating-point number to the nearest integer, use the addition/subtraction trick. This works for numbers with absolute value < 2^51.
var number = 2.9
var rounded = number + 6755399441055744.0 - 6755399441055744.0; // (2^52 + 2^51)
console.log(rounded); // 3
Note:
Halfway values are rounded to the nearest even using "round half to even" as the tie-breaking rule. Thus, for example, +23.5 becomes +24, as does +24.5. This variant of the round-to-nearest mode is also called bankers' rounding.
The magic number 6755399441055744.0 is explained in the Stack Overflow post "A fast method to round a double to a 32-bit int explained".
// Round to whole integers using arithmetic operators
let trunc = (v) => v - v % 1;
let ceil = (v) => trunc(v % 1 > 0 ? v + 1 : v);
let floor = (v) => trunc(v % 1 < 0 ? v - 1 : v);
let round = (v) => trunc(v < 0 ? v - 0.5 : v + 0.5);
let roundHalfEven = (v) => v + 6755399441055744.0 - 6755399441055744.0; // (2^52 + 2^51)
console.log("number floor ceil round trunc");
var array = [1.5, 1.4, 1.0, -1.0, -1.4, -1.5];
array.forEach(x => {
let f = x => (x).toString().padStart(6," ");
console.log(`${f(x)} ${f(floor(x))} ${f(ceil(x))} ${f(round(x))} ${f(trunc(x))}`);
});
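As a quick check of the tie-breaking rule described in the note above (assuming an IEEE-754-compliant engine, which is all of them in practice):
console.log(23.5 + 6755399441055744.0 - 6755399441055744.0); // 24
console.log(24.5 + 6755399441055744.0 - 6755399441055744.0); // 24 — both halves round to the even neighbour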
As @Quentin and @Pointy pointed out in their comments, it's not a good idea to use parseInt() because it is designed to convert a string to an integer. When you pass a decimal number to it, it first converts the number to a string, then parses it back into an integer. I suggest you use Math.trunc(), Math.floor(), ~~num, num | 0, num << 0, or num >> 0 depending on your needs.
This performance test demonstrates the difference in parseInt() and Math.floor() performance.
Also, this post explains the difference between the proposed methods.
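One concrete illustration of the string round-trip biting (a well-known gotcha, not specific to the linked posts):
console.log(parseInt(0.0000005));   // 5, because String(0.0000005) is "5e-7"
console.log(Math.trunc(0.0000005)); // 0
console.log(parseInt(1e21));        // 1, because String(1e21) is "1e+21"
console.log(Math.trunc(1e21));      // 1e21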
I think that the easiest solution is using the bitwise not operator twice:
const myDouble = -66.7;
console.log(myDouble); //-66.7
const myInt = ~~myDouble;
console.log(myInt); //-66
const myNegatedInt = ~~-myDouble;
console.log(myNegatedInt); //66
Related
I have a number generated as a finite decimal:
var x = k * Math.pow(10,p)
with k and p integers. Is there a simple way to convert it to an exact string representation?
If I use implicit string conversion I get ugly results:
""+ 7*Math.pow(10,-1)
gives
"0.7000000000000001"
I tried using .toFixed and .toPrecision but it is very difficult to find the correct precision to use depending on k and p. Is there a way to get the good old "%g" formatting of C language? Maybe I should resort to an external library?
One can use math.format from math.js library:
math.format(7 * Math.pow(10, -1), {precision: 14});
gives
"0.7"
You can create your own floor function like this, with an optional accuracy:
function fixRound(number, accuracy) {
  accuracy = accuracy || 1; // default to whole numbers if no accuracy is given
  return "" + (Math.floor(number * accuracy) / accuracy);
}
let num = 7 * Math.pow(10, -1);
console.log(fixRound(num, 1000))
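Since the question guarantees the value is k * Math.pow(10, p) with integer k and p, here is another library-free sketch (the exactDecimal name is just for illustration): when p is negative, toFixed(-p) prints exactly -p fractional digits.
function exactDecimal(k, p) {
  // assumes k and p are integers and k * 10^p stays within double range
  return p < 0 ? (k * Math.pow(10, p)).toFixed(-p) : String(k * Math.pow(10, p));
}
console.log(exactDecimal(7, -1)); // "0.7"
console.log(exactDecimal(7, 2));  // "700"
Note that this keeps trailing zeros (e.g. k = 70, p = -2 gives "0.70"), so it is not a full replacement for C's %g.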
I'm translating a library from JavaScript to C# and ran into this case:
// Javascript
var number = 3144134277.518717 | 0;
console.log(number); // -> -1150833019
From what I read in other posts, it might have been used to round values, but in this case the value isn't what I expect it to be (if it was rounded), and I can't reproduce the same behavior in C# with:
// C#
3144134277.5187168 | 0 // -> Operator '|' cannot be applied to operands
// of type 'double' and 'int'
// or
Convert.ToInt64(3144134277.5187168) | 0 // -> 3144134278
Thanks for any help!
The way | works in JavaScript is detailed in the spec, but primarily what you're seeing is that | implicitly converts its operands to 32-bit ints before doing the bitwise OR (which is a no-op here because the second operand is 0). So really what you're seeing is the result of the ToInt32 operation:
Let number be ? ToNumber(argument). (You can largely ignore this bit.)
If number is NaN, +0, -0, +∞, or -∞, return +0.
Let int be the mathematical value that is the same sign as number and whose magnitude is floor(abs(number)).
Let int32bit be int modulo 2^32.
If int32bit ≥ 2^31, return int32bit - 2^32; otherwise return int32bit.
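Those steps can be sketched in JavaScript itself like this (a toy re-implementation for illustration, not how engines actually do it):
function toInt32(x) {
  if (!Number.isFinite(x) || x === 0) return 0;          // NaN, ±0, ±Infinity -> 0
  var int = Math.sign(x) * Math.floor(Math.abs(x));      // truncate toward zero
  var int32bit = ((int % 4294967296) + 4294967296) % 4294967296; // non-negative modulo 2^32
  return int32bit >= 2147483648 ? int32bit - 4294967296 : int32bit;
}
console.log(toInt32(3144134277.518717)); // -1150833019
console.log(3144134277.518717 | 0);      // -1150833019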
So in C#, I think that's roughly:
double value = 3144134277.5187168;
bool negative = value < 0;
long n = Convert.ToInt64(Math.Floor(Math.Abs(value)));
n = n % 4294967296;
n = n >= 2147483648 ? n - 4294967296 : n;
int i = (int)n;
i = negative ? -i : i;
Console.WriteLine(i); // -1150833019
...written verbosely for clarity. Note that the sign isn't added back to the result in quite the same place as the spec; when I did that, it didn't work correctly, which probably has to do with differing definitions between the spec's "modulo" and C#'s % operator.
And just double-checking, those steps with -3144134277.5187168 give you 1150833019, which is as it's supposed to be as well.
From my understanding, the binary number system uses a set of two digits, 0s and 1s, to perform calculations.
Why does:
console.log(parseInt("11", 2)); return 3 and not 00001011?
http://www.binaryhexconverter.com/decimal-to-binary-converter
Use toString() instead of parseInt:
11..toString(2)
var str = "11";
var bin = (+str).toString(2);
console.log(bin)
According to JavaScript's documentation:
The following examples all return NaN:
parseInt("546", 2); // Digits are not valid for binary representations
parseInt(number, base) returns the decimal value of the number given in the number parameter, interpreted in base base.
And 11 in binary is the equivalent of 3 in the decimal number system.
var a = {};
window.addEventListener('input', function(e){
a[e.target.name] = e.target.value;
console.clear();
console.log( parseInt(a.number, a.base) );
}, false);
<input name='number' placeholder='number' value='1010'>
<input name='base' placeholder='base' size=3 value='2'>
As stated in the documentation for parseInt: The parseInt() function parses a string argument and returns an integer of the specified radix (the base in mathematical numeral systems).
So, it is doing exactly what it should do: converting a binary value of 11 to an integer value of 3.
If you are trying to convert an integer value of 11 to a binary value then you need to use the Number.toString method:
console.log(11..toString(2)); // 1011
.toString(2) works when applied to a Number type.
255.toString(2) // syntax error
"255".toString(2); // 255
var n=255;
n.toString(2); // 11111111
// or in short
Number(255).toString(2) // 11111111
// or use two dots so that the parser does not
// mistake the dot for a decimal point, as in 255.x
255..toString(2) // 11111111
The parseInt() function parses a string argument and returns an integer of the specified radix (the base in mathematical numeral systems).
So you are telling the system you want to convert 11, interpreted as binary, to decimal.
As for the website you are referring to, if you look closer it is actually using JS to issue an HTTP GET request and do the conversion on the server side. Something like the following:
http://www.binaryhexconverter.com/hesapla.php?fonksiyon=dec2bin&deger=11&pad=false
The shortest method I've found for converting a decimal string into a binary string is:
const input = "54654";
const output = (input*1).toString(2);
console.log(output);
I think you should understand the math behind decimal to binary conversion. Here is a simple implementation in JavaScript.
main();
function main() {
let input = 12;
let result = decimalToBinary(input);
console.log(result);
}
function decimalToBinary(input) {
let base = 2;
let inputNumber = input;
let quotient = 0;
let remainderArray = [];
let resultArray = [];
if (inputNumber) {
while (inputNumber) {
quotient = parseInt(inputNumber / base);
remainderArray.push(inputNumber % base);
inputNumber = quotient;
}
for (let i = remainderArray.length - 1; i >= 0; i--) {
resultArray.push(remainderArray[i]);
}
return parseInt(resultArray.join(''));
} else {
return `${input} is not a valid input`;
}
}
This is an old question, however I have another solution that might contribute a little bit. I usually use this function to convert a decimal number into a binary:
function dec2bin(dec) {
return (dec >>> 0).toString(2);
}
The dec >>> 0 converts the number into an unsigned 32-bit integer, and then the toString(radix) function is called to return a binary string. It is simple and clean.
Note: the radix is used for representing a numeric value. It must be an integer between 2 and 36. For example:
2 - The number will show as a binary value
8 - The number will show as an octal value
16 - The number will show as a hexadecimal value
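Putting those three radixes together in one quick check:
var n = 255;
console.log(n.toString(2));  // "11111111"
console.log(n.toString(8));  // "377"
console.log(n.toString(16)); // "ff"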
function num(n){
return Number(n.toString(2));
}
console.log(num(5));
This worked for me: parseInt(Number, original_base).toString(final_base)
Eg: parseInt(32, 10).toString(2) for decimal to binary conversion.
Source: https://www.w3resource.com/javascript-exercises/javascript-math-exercise-3.php
Here is a concise recursive version of a manual decimal to binary algorithm:
Divide the decimal number by 2 and collect the remainder at each step until the value reaches 0, then print the concatenated binary string.
Example using 25: 25/2 = 12(r1)/2 = 6(r0)/2 = 3(r0)/2 = 1(r1)/2 = 0(r1) => 10011 => reverse => 11001
function convertDecToBin(input){
return Array.from(recursiveImpl(input)).reverse().join(""); //convert string to array to use prototype reverse method as bits read right to left
function recursiveImpl(quotient){
const nextQuotient = Math.floor(quotient / 2); //divide subsequent quotient by 2 and take lower limit integer (if fractional)
const remainder = ""+quotient % 2; //use modulus for remainder and convert to string
return nextQuotient===0?remainder:remainder + recursiveImpl(nextQuotient); //if next quotient is evaluated to 0 then return the base case remainder else the remainder concatenated to value of next recursive call
}
}
To get a better understanding, I think you should try to do the math of that conversion by yourself.
(1) 11 / 2 = 5
(1) 5 / 2 = 2
(0) 2 / 2 = 1
(1) 1 / 2 = 0
I made a function based on that logic
function decimalToBinary(inputNum) {
let binary = [];
while (inputNum > 0) {
if (inputNum % 2 === 1) {
binary.splice(0,0,1);
inputNum = (inputNum - 1) / 2;
} else {
binary.splice(0,0,0);
inputNum /= 2;
}
}
binary = binary.join('');
console.log(binary);
}
This is what I did to get the solution:
function addBinary(a,b) {
// function that converts decimal to binary
function dec2bin(dec) {
return (dec >>> 0).toString(2);
}
var sum = a+b; // add the two numbers together
return sum.toString(2); //converts sum to binary
}
addBinary(2, 3);
I first converted the decimal number to binary like it said, and I got the function from w3schools under the JavaScript Bitwise lesson. Then to make it easier on myself, I created the variable "sum" which does the addition and finally, I made the addBinary function return the sum as a binary code, then called it. It passed in CodeWars. I hope this makes sense and it helps you.
Just use Number(x).toString(base), where base needs to equal 2.
var num1=13;
Number(num1).toString(2)
result: "1101"
Number(11).toString(2)
result: "1011"
Note that the conversion with the string radix, (dec >>> 0).toString(2), returns the binary digits in the conventional order, most significant bit first. The function below reverses that string so the bits read least-significant-bit first, which means that from left to right the positions correspond to [1][2][4][8][16][32][64][128].... I have validated this solution in Chrome. In case anyone wants to manually calculate binary for validation, in this reversed representation:
For example:
10 becomes 0101, i.e. 0 + 2 + 0 + 8.
13 becomes 1011, i.e. 1 + 0 + 4 + 8.
255 becomes 11111111, i.e. 1 + 2 + 4 + 8 + 16 + 32 + 64 + 128.
function dec2bin(dec){
return (dec >>> 0).toString(2).split('').reverse().join('');
}
This will convert decimal to binary:
let num = 1234;
console.log(num.toString(2)); // "10011010010"
And this will convert binary to decimal:
let num = "10011010010";
console.log(parseInt(num, 2));
I was checking out an online game physics library today and came across the ~~ operator. I know a single ~ is a bitwise NOT, would that make ~~ a NOT of a NOT, which would give back the same value, wouldn't it?
It removes everything after the decimal point because the bitwise operators implicitly convert their operands to signed 32-bit integers. This works whether the operands are (floating-point) numbers or strings, and the result is a number.
In other words, it yields:
function(x) {
if(x < 0) return Math.ceil(x);
else return Math.floor(x);
}
only if x is between -(2^31) and 2^31 - 1. Otherwise, overflow will occur and the number will "wrap around".
This may be considered useful to convert a function's string argument to a number, but both because of the possibility of overflow and that it is incorrect for use with non-integers, I would not use it that way except for "code golf" (i.e. pointlessly trimming bytes off the source code of your program at the expense of readability and robustness). I would use +x or Number(x) instead.
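A couple of quick checks of the wrap-around and the string case (illustrative values):
console.log(~~2147483647.9); //  2147483647 — still within 32-bit range
console.log(~~2147483648.5); // -2147483648 — wrapped around
console.log(~~"12.9");       // 12 — the string is converted to a number, then the fraction is dropped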
How this is the NOT of the NOT
The number -43.2, for example is:
-43.2₁₀ = 11111111111111111111111111010101₂
as a signed (two's complement) 32-bit binary number. (JavaScript ignores what is after the decimal point.) Inverting the bits gives:
NOT -43₁₀ = 00000000000000000000000000101010₂ = 42₁₀
Inverting again gives:
NOT 42₁₀ = 11111111111111111111111111010101₂ = -43₁₀
This differs from Math.floor(-43.2) in that negative numbers are rounded toward zero, not away from it. (The floor function, which would equal -44, always rounds down to the next lower integer, regardless of whether the number is positive or negative.)
The first ~ operator forces the operand to a 32-bit integer (coercing a string or boolean to a number first if necessary), then inverts all 32 of its bits. Officially ECMAScript numbers are all floating-point, but some numbers are implemented internally as 31-bit integers in the SpiderMonkey engine.
You can use it to turn a 1-element array into an integer. Floating-point values are converted according to the C rule, i.e. by truncating the fractional part.
The second ~ operator then inverts the bits back, so you know that you will have an integer. This is not the same as coercing a value to boolean in a condition statement, because an empty object {} evaluates to true, whereas ~~{} evaluates to false.
js>~~"yes"
0
js>~~3
3
js>~~"yes"
0
js>~~false
0
js>~~""
0
js>~~true
1
js>~~"3"
3
js>~~{}
0
js>~~{a:2}
0
js>~~[2]
2
js>~~[2,3]
0
js>~~{toString: function() {return 4}}
4
js>~~NaN
0
js>~~[4.5]
4
js>~~5.6
5
js>~~-5.6
-5
In ECMAScript 6, the closest equivalent of ~~ is Math.trunc (without the 32-bit limitation):
Returns the integral part of a number by removing any fractional digits. It does not round any numbers.
Math.trunc(13.37) // 13
Math.trunc(42.84) // 42
Math.trunc(0.123) // 0
Math.trunc(-0.123) // -0
Math.trunc("-1.123")// -1
Math.trunc(NaN) // NaN
Math.trunc("foo") // NaN
Math.trunc() // NaN
The polyfill:
function trunc(x) {
return x < 0 ? Math.ceil(x) : Math.floor(x);
}
The ~ operator seems to do -(N+1). So ~2 == -(2 + 1) == -3. If you do it again on -3, it turns it back: ~-3 == -(-3 + 1) == 2. It probably just converts a string to a number in a round-about way.
See this thread: http://www.sitepoint.com/forums/showthread.php?t=663275
Also, more detailed info is available here: http://dreaminginjavascript.wordpress.com/2008/07/04/28/
Given ~N is -(N+1), ~~N is then -(-(N+1) + 1). Which, evidently, leads to a neat trick.
Just a bit of a warning. The other answers here got me into some trouble.
The intent is to remove anything after the decimal point of a floating point number, but it has some corner cases that make it a bug hazard. I'd recommend avoiding ~~.
First, ~~ doesn't work on very large numbers.
~~1000000000000 == -727379968
As an alternative, use Math.trunc() (as Gajus mentioned, Math.trunc() returns the integer part of a floating point number but is only available in ECMAScript 6 compliant JavaScript). You can always make your own Math.trunc() for non-ECMAScript-6 environments by doing this:
if (!Math.trunc) {
  Math.trunc = function (value) {
    // Math.sign() is itself ES6, so use a plain ternary instead
    return value < 0 ? Math.ceil(value) : Math.floor(value);
  };
}
I wrote a blog post on this for reference: http://bitlords.blogspot.com/2016/08/the-double-tilde-x-technique-in.html
Converting Strings to Numbers
console.log(~~-1); // -1
console.log(~~0); // 0
console.log(~~1); // 1
console.log(~~"-1"); // -1
console.log(~~"0"); // 0
console.log(~~"1"); // 1
console.log(~~true); // 1
console.log(~~false); // 0
Because indexOf() returns -1 when the substring is not found, and ~-1 is 0 (falsy), ~ can be used as a compact existence check:
if (~someStr.indexOf("a")) {
// Found it
} else {
// Not Found
}
~~ can be used as a shorthand for Math.trunc()
~~8.29 // output 8
Math.trunc(8.29) // output 8
Here is an example of how this operator can be used efficiently, where it makes sense to use it:
leftOffset = -(~~$('html').css('padding-left').replace('px', '') + ~~$('body').css('margin-left').replace('px', '')),
Tilde (~) follows the algorithm -(N+1).
For example:
~0 = -(0+1) = -1
~5 = -(5+1) = -6
~-7 = -(-7+1) = 6
Double tilde is -(-(N+1)+1)
For example:
~~5 = -(-(5+1)+1) = 5
~~-3 = -(-(-3+1)+1) = -3
Triple tilde is -(-(-(N+1)+1)+1)
For example:
~~~2 = -(-(-(2+1)+1)+1) = -3
~~~3 = -(-(-(3+1)+1)+1) = -4
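Quick verification of those identities in the console:
console.log(~5);   // -6
console.log(~~5);  //  5
console.log(~~-3); // -3
console.log(~~~2); // -3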
Same as Math.abs(Math.trunc(-0.123)) if you want to make sure the - is also removed.
In addition to truncating real numbers, ~~ can also be used as an operator for updating counters in an object. The ~~ applied to an undefined object property will resolve to zero, and will resolve to the same integer if that counter property already exists, which you then increment.
let words=["abc", "a", "b", "b", "bc", "a", "b"];
let wordCounts={};
words.forEach( word => wordCounts[word] = ~~wordCounts[word] + 1 );
console.log("b count == " + wordCounts["b"]); // 3
The following two assignments are equivalent.
wordCounts[word] = (wordCounts[word] ? wordCounts[word] : 0) + 1;
wordCounts[word] = ~~wordCounts[word] + 1;
While working on a project, I came across a JS-script created by a former employee that basically creates a report in the form of
Name : Value
Name2 : Value2
etc.
The problem is that the values can sometimes be floats (with different precision), integers, or even in the form 2.20011E+17. What I want to output are pure integers. I don't know a lot of JavaScript, though. How would I go about writing a method that takes these sometimes-float values and makes them integers?
If you need to round to a certain number of digits use the following function
function roundNumber(number, digits) {
var multiple = Math.pow(10, digits);
var rndedNum = Math.round(number * multiple) / multiple;
return rndedNum;
}
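For example (using the roundNumber function above):
roundNumber(123.4567, 2);   // 123.46
roundNumber(2.20011e17, 0); // 220011000000000000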
You have to convert your input into a number and then round it:
function toInteger(number){
return Math.round( // round to nearest integer
Number(number) // type cast your input
);
};
Or as a one liner:
function toInt(n){ return Math.round(Number(n)); };
Testing with different values:
toInteger(2.5); // 3
toInteger(1000); // 1000
toInteger("12345.12345"); // 12345
toInteger("2.20011E+17"); // 220011000000000000
According to the ECMAScript specification, numbers in JavaScript are represented only by the double-precision 64-bit format IEEE 754. Hence there is not really an integer type in JavaScript.
Regarding the rounding of these numbers, there are a number of ways you can achieve this. The Math object gives us three rounding methods which we can use:
Math.round() is the most commonly used; it returns the value rounded to the nearest integer. Then there is Math.floor(), which returns the largest integer less than or equal to a number. Lastly, we have Math.ceil(), which returns the smallest integer greater than or equal to a number.
There is also toFixed(), which returns a string representing the number using fixed-point notation.
P.S.: There is no 2nd argument in the Math.round() method. toFixed() is not IE specific; it's within the ECMAScript specification as well.
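A quick illustration of the four (note that toFixed() returns a string):
console.log(Math.round(3.7));        // 4
console.log(Math.floor(3.7));        // 3
console.log(Math.ceil(3.2));         // 4
console.log(Math.round(2.20011e17)); // 220011000000000000
console.log((3.14159).toFixed(2));   // "3.14"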
Here is a way to be able to use Math.round() with a second argument (number of decimals for rounding):
// 'improve' Math.round() to support a second argument
var _round = Math.round;
Math.round = function(number, decimals /* optional, default 0 */)
{
if (arguments.length == 1)
return _round(number);
var multiplier = Math.pow(10, decimals);
return _round(number * multiplier) / multiplier;
}
// examples
Math.round('123.4567', 2); // => 123.46
Math.round('123.4567'); // => 123
You can also use toFixed(x) or toPrecision(x) where x is the number of digits.
Both these methods are supported in all major browsers
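For example (both return strings, so convert back with Number() if you need a number):
console.log((123.4567).toFixed(2));     // "123.46"
console.log((123.4567).toPrecision(4)); // "123.5"
console.log((2.20011e17).toFixed(0));   // "220011000000000000"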
You can use Math.round() for rounding numbers to the nearest integer.
Math.round(532.24) => 532
Also, you can use parseInt() and parseFloat() to cast a variable to a certain type, in this case integer and floating point.
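For instance, when the report values arrive as strings (the values here are made up for illustration):
console.log(parseInt("42.9px", 10));                // 42
console.log(parseFloat("42.9px"));                  // 42.9
console.log(Math.round(parseFloat("2.20011E+17"))); // 220011000000000000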
A very good approximation for rounding:
function Rounding (number, precision){
var newNumber;
var sNumber = number.toString();
var increase = precision + sNumber.length - sNumber.indexOf('.') + 1;
if (number < 0)
newNumber = (number - 5 * Math.pow(10,-increase));
else
newNumber = (number + 5 * Math.pow(10,-increase));
var multiple = Math.pow(10,precision);
return Math.round(newNumber * multiple)/multiple;
}
Only in some cases when the length of the decimal part of the number is very long will it be incorrect.
Math.floor(19.5) = 19 should also work.