How do I get a float number from 2 integers? [closed] - javascript

var ratings = 3193;
var reviews = 9;
var average = parseFloat(ratings) / reviews; //I want a floating point number at the end.
Is this the right way to do it?

All numbers in JavaScript are double-precision 64-bit binary format IEEE 754 values. There is no need to typecast "integers" into "floats" as you would expect coming from C/C++ and other languages. You need the parse* functions only when you are handling strings.
See also:
Number value: primitive value corresponding to a double-precision 64-bit binary format IEEE 754 value
parseInt, parseFloat (both take a string as a parameter)
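As a quick illustration (a minimal sketch, values made up), parse* only matters when the input starts out as a string:
console.log(parseFloat("3193.5")); // 3193.5
console.log(parseInt("42px", 10)); // 42 -- parses the leading digits in base 10
console.log(3193 / 9);             // 354.77777777777777 -- division already yields a float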

The conversion isn't necessary. JavaScript automatically converts between types. And numbers are not actually represented as integers internally. They're all floating point anyway.
So, the simplest solution should have the desired effect:
var ratings = 3193;
var reviews = 9;
var average = ratings / reviews;
What you have in your example causes the engine to convert ratings to a String and parse that string as a double (theoretically resulting in the value it had to begin with) before treating it as the numerator in your calculation.
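To make that round trip visible (a small sketch of what the engine effectively does, not code from the question):
var ratings = 3193;
var viaParse = parseFloat(String(ratings)) / 9; // number -> string -> number, then divide
var direct = ratings / 9;                       // same value, no round trip
console.log(viaParse === direct); // true (both are 354.7777...)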

Related

How to verify huge number in JS [closed]

I have a task to filter out a number which is bigger than 9e+65 (65 zeros).
As input I have a number, and as output I need to return a boolean value. The function can accept regularly formatted numbers (42342) and any scientific notation (1e5).
My approach is:
const check65Zeros = (num: number): boolean =>
  num.toString().includes("e+")
    ? Number(num.toString().split("e+")[1]) > 65
    : false;
It looks dirty and the reviewer didn't accept it.
To quote MDN:
In JavaScript, numbers are implemented in double-precision 64-bit binary format IEEE 754 (i.e., a number between ±2^−1022 and ±2^+1023, or about ±10^−308 to ±10^+308, with a numeric precision of 53 bits). Integer values up to ±2^53 − 1 can be represented exactly.
You do not have to worry about such huge numbers. I have added a link to the MDN quote above at the end of this snippet, where it is discussed in detail how JavaScript handles Numbers.
const HUGE_NUMBER_THRESHOLD = 9e+65;
const checkHugeNumbers = (num) => num > HUGE_NUMBER_THRESHOLD;
let myTestNum = 9000000000000000000000000000000000000000000000000000000000000000000;
console.log(checkHugeNumbers(myTestNum));
// OUTPUT:
// true
For further study, here is the reference link.
There doesn't seem to be anything logically wrong with your approach, but if your reviewer is asking for a cleaner approach, this is my suggestion.
It does the same thing, but it is more readable and easy to add on to in the future. Splitting the logic and results into descriptive variables makes it easier to read and to catch any errors or oversights.
You can also save a step by getting the index directly, without using split and creating three types (array, string, and number), which can make the code confusing to follow. This approach keeps everything between strings and numbers.
const checkOver65Zeros = (num: number) => {
  const numString = num.toString()
  const idxOfZeros = numString.indexOf("e+")
  if (idxOfZeros !== -1)
    return Number(numString.substring(idxOfZeros + 2)) > 65
  return false
}
console.log(checkOver65Zeros(900000000000000000000000000000000000000))
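For what it's worth, an alternative sketch (not from either answer) avoids inspecting the default string format by normalizing with toExponential(); like the exponent tests above, it classifies a value such as 9.5e65 by its exponent rather than by a direct comparison against 9e+65:
// toExponential() puts every finite number in "d.ddde±x" form,
// so the exponent can be read without checking for scientific notation.
const checkOver65ZerosAlt = (num) => {
  if (!Number.isFinite(num)) return false; // assumption: NaN/Infinity are rejected
  const exponent = Number(num.toExponential().split("e")[1]);
  return exponent > 65;
};
console.log(checkOver65ZerosAlt(9e66));  // true
console.log(checkOver65ZerosAlt(42342)); // false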

Convert a 17-digit number string to an integer [closed]

I want to convert a 17-digit number string into a number.
This is the number: "76561197962169398". I tried using parseInt().
The result of using parseInt is:
76561197962169390
I am losing the last digit. I also tried BigInt(), but an 'n' gets appended to the number.
I'm thinking of using replace() with a regex for digits only.
Is there any other way I can achieve this without losing precision?
Any help regarding this is really appreciated. Thank you.
In Chrome 83 devtools:
x=76561197962169398n
76561197962169398n
++x
76561197962169399n
typeof x
"bigint"
y=BigInt("76561197962169398")
76561197962169398n
++y
76561197962169399n
x+y
153122395924338798n
x + 1
VM342:1 Uncaught TypeError: Cannot mix BigInt and other types, use explicit conversions
at <anonymous>:1:2
(anonymous) # VM342:1
x + 1n
76561197962169400n
[5n, 3n, 9n, 7n].sort()
[3n, 5n, 7n, 9n]
The n suffix is for display, and in code it's needed to say that a literal value should be treated as a BigInt instead of a Number. Think of it like quotes for strings: without quotes, a sequence of characters is not a string. Similarly, a number without the n suffix is not a BigInt; it's a Number, which has limited precision and simply cannot be used for large values.
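If the trailing n is the only concern: it exists only in source code and console display, and converting the BigInt back to a string drops it (a minimal sketch):
const id = BigInt("76561197962169398");
console.log(id.toString());        // "76561197962169398" -- no n, all 17 digits intact
console.log((id + 1n).toString()); // "76561197962169399" -- exact arithmetic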

Convert number value to string [duplicate]

This question already has answers here: What is JavaScript's highest integer value that a number can go to without losing precision? (21 answers)
Can anybody solve the following problem with JavaScript?
var i = 10152233307863175;
alert(i.toString());
The alert shows the value 10152233307863176. Any solution? The problem is that when I get a JSON object on the client and the string is converted to JSON, it contains wrong values.
This is a limitation in the precision of the numeric data format that javascript uses (double precision floating point).
The best way of storing that value, assuming you don't need to do any mathematical operations, is storing it as a string in the first place.
MDN has this to say about numbers in JavaScript.
Numbers in JavaScript are "double-precision 64-bit format IEEE 754 values", according to the spec.
There are no real integers in JavaScript. According to this source:
ECMAScript numbers are represented in binary as IEEE-754 (IEC 559) Doubles, with a resolution of 53 bits, giving an accuracy of 15-16 decimal digits; integers up to just over 9e15 are precise, ...
Your number 10152233307863175 contains 17 digits. Since the number is represented as a floating point number, JavaScript tries to do its best and sets the bits in a way that the resulting number is closest to the supplied number.
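A small sketch of both the limit and the keep-it-as-a-string workaround (the payload shape here is illustrative):
const i = 10152233307863175;          // 17 digits, above Number.MAX_SAFE_INTEGER
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991 (2^53 - 1)
console.log(i.toString());            // "10152233307863176" -- already off by one
// Keeping the value as a string in the JSON payload preserves it exactly:
const payload = '{"id":"10152233307863175"}';
console.log(JSON.parse(payload).id);  // "10152233307863175"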

Why doesn't JavaScript have a random function to generate integers? [closed]

I'm using this function to generate random int values :
var r = function(min, max){
return Math.floor(Math.random() * (max - min + 1)) + min;
};
It works perfectly, but it makes me wonder... why are there no randomInt and randomFloat in JavaScript?
JavaScript has a Number type, which is a 64-bit float; there is no Integer type per se. Math.random by itself gives you a random Number, which is already a 64-bit float. I don't see why there couldn't be a Math.randomInt (internally it could truncate, floor, or ceil the value). There is no good answer as to why the language doesn't have it; you would have to ask Brendan Eich. However, you can emulate what you want using Math.ceil or Math.floor, as in the sketch below. This will give you back a whole number, which isn't really an Integer typewise, but is still a Number type.
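A minimal sketch of that emulation (essentially the helper from the question), showing that the result is a whole number whose type is still "number":
function randomInt(min, max) {
  // Scale Math.random()'s [0, 1) range into [min, max + 1), then floor it.
  return Math.floor(Math.random() * (max - min + 1)) + min;
}
const roll = randomInt(1, 6);
console.log(roll);                   // e.g. 4
console.log(Number.isInteger(roll)); // true
console.log(typeof roll);            // "number" -- there is no separate int type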
Because JavaScript doesn't have those types. Pure JavaScript only has a generic number type.
More info on JavaScript types may be found here and here.
You may also want to look into this question: Integers in JavaScript
The marked answer says, and I quote:
There are really only a few data types in Javascript: Objects, numbers, and strings. As you read, JS numbers are all 64-bit floats. There are no ints.

JavaScript floating point addition workaround [closed]

Ok, so we all know of the floating point number problem, such as:
0.23 - 1 + 1 = 0.22999999999999998
And since in JavaScript, unlike in other languages, all numbers are actually floating point and there's no int/decimal, there are all kinds of libraries and workarounds, such as BigDecimal, to handle this problem. This is best discussed here.
I was creating a "numeric spinner" control that supports floating point numbers, and obviously I wouldn't want the user to click "up" and then "down" and get a different number from the one he started with, so I tried writing a workaround: an "addPrecise" method of sorts.
My idea was the following:
Take the two numbers I'm about to add and figure out their "precision": how many digits they have after the decimal point.
Add the two numbers as floats.
Apply toFixed() to the result with the max precision of the two.
For example:
float #1: 1.23
float #2: -1
Adding them normally would result in 0.22999999999999998, but if I take the maximal number of decimal places, which is #1's 2 digits, and apply toFixed(2), I get 0.23 as I wanted.
I've done this with the following piece of code but I'm not happy with it.
function addPrecise(f1, f2) {
  var floatRegex = /[^\d\-]/;
  var f1p = floatRegex.exec(f1.toString()) ? f1.toString().split(floatRegex.exec(f1.toString()))[1].length : 0;
  var f2p = floatRegex.exec(f2.toString()) ? f2.toString().split(floatRegex.exec(f2.toString()))[1].length : 0;
  var precision = Math.max(f1p, f2p);
  return parseFloat((parseFloat(f1) + parseFloat(f2)).toFixed(precision));
}
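A quick usage check of the function above (values are illustrative):
console.log(addPrecise(1.23, -1)); // 0.23 rather than 0.22999999999999998
console.log(addPrecise(1.005, 2)); // 3.005
console.log(addPrecise(5, 3));     // 8 -- two ints, precision 0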
(It's worth noting that I'm using the regex to find the 'floating point' because in other locales it might be a comma instead of a period. I'm also taking into account the possibility that I got an int (with no point), in which case its precision is 0.)
Is there a cleaner/simpler/better way to do this?
Edit: I'd also like to point out that finding a reliable way to extract the number of decimal digits is also part of the question. I'm unsure about my method there.
An appropriate solution for this problem is:
Determine how many digits, say d, are needed after the decimal point.
Read inputs from the user and convert them to integers scaled by 10^d.
Do all arithmetic using the integers.
When displaying values for the user, divide them by 10^d for display.
More details:
If all work is in whole units of user-entered data, then the number of digits needed after the decimal point, d, is the same as the number of digits to be accepted from the user. (If one were going to do some arithmetic with fractions of the step size, for example, then more digits might be needed.)
When the user enters input, it should be converted to the scaled integers. Note that the user enters a string (not a floating-point number; it is just a string when the user types it), and the string should be a numeral (characters that represent a number). One way to process this input is to call library routines to convert this numeral to a floating-point number, then multiply by 10^d, then round the floating-point result to the nearest integer (to compensate for floating-point rounding errors). For numbers of reasonable size, this will always produce exactly the desired result; floating-point rounding errors will not be a problem. (Excessively large user input should be rejected.) Another way to process this input is to write your own code to convert the numeral directly to a scaled integer, avoiding floating-point entirely.
As long as only addition, multiplication, and subtraction are used (e.g., multiplying the step size by a number of steps and adding to a prior value), and the integer range is not exceeded, then all arithmetic is exact; there are no rounding errors. If division is used, this approach must be reconsidered.
As with reading input, displaying output can be performed either with an assist from floating point and library routines or by writing your own code to convert directly. To use floating-point, convert the integer to floating-point, divide by 10^d, and use a library routine to format it with no more than d digits after the decimal place. (Using default format conversions in library routines might result in more than d digits displayed. If only d digits are displayed, and d is a reasonably small number, then floating-point rounding errors that occur during the formatting will not be visible.)
It is possible to implement all of the above using only floating-point, as long as care is taken to ensure that the arithmetic uses entirely integers (or other values that are exactly representable in the floating-point format) within reasonable bounds.
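As a minimal sketch of the scaled-integer approach described above (d, SCALE, and the helper names are illustrative, assuming d = 2 decimal digits):
const d = 2;
const SCALE = 10 ** d; // 10^d
// Parse user input (a string) into a scaled integer; the final round
// compensates for rounding error in the intermediate float conversion.
function toScaled(text) {
  return Math.round(parseFloat(text) * SCALE);
}
// Format a scaled integer back for display with exactly d digits.
function fromScaled(scaled) {
  return (scaled / SCALE).toFixed(d);
}
let value = toScaled("0.23"); // 23
value -= toScaled("1");       // -77
value += toScaled("1");       // 23 -- all arithmetic stays in integers
console.log(fromScaled(value)); // "0.23", not "0.22999999999999998"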
