Surprising results from simple operations - JavaScript

In JavaScript, I tried executing the below statements
var x = 1+"2"-3; // Answer is 9.
var y = 1-"2"+4; // Answer is 3.
For such operations, what is converted to what?
I guess 1+"2" = 12(number) and then 12-3?

The - operator converts both operands to numbers. But if either operand of + is a string, the other is converted to a string and the result is a concatenation, like "Hi, " + "how are you?" = "Hi, how are you?". So your answers are correct.
var x = 1+"2"-3;
// 1+"2" concatenates to the string "12", then "12"-3 converts it back to a number and subtracts:
12 - 3 = 9
var y = 1-"2"+4;
// 1-"2" converts "2" to a number and subtracts, making -1, then adds 4, giving 3:
-1 + 4 = 3
This was the process.

Scenario I
Step 1:
1 + "2" => "12" // concatenation happens
Step 2:
"12" - 3 => 9 // the string widens to a number, since the - operator only works on numbers
Scenario II
Step 1:
1 - "2" => -1 // the string widens to a number, since the - operator only works on numbers
Step 2:
-1 + 4 => 3 // normal addition happens
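Both scenarios can be checked directly in a console; a quick sketch:

```javascript
// '+' with a string operand concatenates; '-' always coerces to numbers
var x = 1 + "2" - 3;
console.log(typeof (1 + "2")); // "string" -> 1+"2" produced "12"
console.log(x);                // 9

var y = 1 - "2" + 4;
console.log(typeof (1 - "2")); // "number" -> 1-"2" produced -1
console.log(y);                // 3
```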


In JavaScript, is there a function that returns only the fractional part of a number, like Math.something? How can I get it? [duplicate]

I have a floating point number:
var f = 0.1457;
Or:
var f = 4.7005
How do I get just the fractional remainder as an integer?
I.e. in the first example I want to get:
var remainder = 1457;
In the second example:
var remainder = 7005;
function frac(f) {
  return f % 1; // returns the fractional part as a decimal, e.g. frac(4.7005) ≈ 0.7005, not 7005
}
While this is not what most people will want, the OP asked for the fraction as an integer, so here it is:
function fract(n){ return Number(String(n).split('.')[1] || 0); }
fract(1.23) // = 23
fract(123) // = 0
fract(0.0008) // = 8
This will do it (up to the 4 digits that you want; change the multiplier (10000) to larger or smaller if you want a smaller or larger number):
Math.ceil(((f < 1.0) ? f : (f % Math.floor(f))) * 10000)
parseInt(parseFloat(amount).toString().split('.')[1], 10)
You can subtract the floor of the number, giving you just the fractional part, and then multiply by 10000, i.e.:
var remainder = (f-Math.floor(f))*10000;
I would argue that, assuming we want to display these values to the user, treating these numbers as strings is the best approach. This gets around the issue of fractional values such as 0.002.
I came across this issue when trying to display prices with the cents in superscript.
let price = 23.43; // 23.43
let strPrice = price.toFixed(2) + ''; // "23.43"
let integer = strPrice.split(".")[0] // "23"
let fractional = strPrice.split(".")[1] // "43"
This also depends on what you want to do with the remainder (as commenters already asked). For instance, if the base number is 1.03, do you want the returned remainder as 3 or as 03? That is, do you want it as a number or as a string (for purposes of displaying it to the user)? One example would be article price display, where you don't want to convert 03 to 3 (for instance $1.03, where you want to superscript the 03).
Next, the problem is with float precision. Consider this:
var price = 1.03;
var frac = (price - Math.floor(price))*100;
// frac = 3.0000000000000027
So you can "solve" this by slicing the string representation without multiplication (and optional zero-padding) in such cases. At the same time, you avoid the floating-point precision issue.
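A minimal sketch of that string-slicing idea (the function name fracDigits is my own; the choice to return an empty string for whole numbers is an assumption):

```javascript
// Extract the fractional digits as a string, avoiding float multiplication
// and preserving leading zeros (e.g. the "03" in 1.03)
function fracDigits(n) {
  var s = n.toString();
  var i = s.indexOf('.');
  return i === -1 ? '' : s.slice(i + 1); // '' for whole numbers
}

console.log(fracDigits(1.03));  // "03" (leading zero preserved)
console.log(fracDigits(23.43)); // "43"
```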
var strNumber = f.toString();
var remainder = strNumber.substr(strNumber.indexOf('.') + 1, 4);
remainder = Number(remainder);
Similar method to Martina's answer with a basic modulo operation but solves some of the issues in the comments by returning the same number of decimal places as passed in.
Modifies a method from an answer to a different question on SO which handles the scientific notation for small floats.
Additionally allows the fractional part to be returned as an integer (ie OP's request).
function sfract(n, toInt) {
  toInt = toInt || false;
  let dec = n.toString().split('e-');
  let places = dec.length > 1
    ? parseInt(dec[1], 10)
    : Math.floor(n) !== n ? dec[0].split('.')[1].length : 0;
  let fract = parseFloat((n % 1).toFixed(places));
  return toInt ? fract * Math.pow(10, places) : fract;
}
Tests
console.log(sfract(0.0000005)); // 5e-7
console.log(sfract(0.0000005, true)); // 5
console.log(sfract(4444)); // 0
console.log(sfract(4444, true)); // 0
console.log(sfract(44444.0000005)); // 5e-7
console.log(sfract(44444.00052121, true)); // 52121
console.log(sfract(34.5697)); // 0.5697
console.log(sfract(730.4583333333321, true)); // 4583333333321
@Udara Seneviratne
const findFraction = (num) => {
  return parseInt( // 5. Finally, parseInt parses the "string" type and returns an integer
    // 1. Convert the parameter "num" to the "string" type (to work with it as an array in the next step)
    // result: "1.012312"
    num.toString()
      // 2. Split the string into an array using the separator "."
      // result: ["1", "012312"]
      // https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/split
      .split('.')
      // 3. Array.prototype.splice removes everything from index 1 on and returns it, cutting off the whole-number part
      // result: ["012312"]
      // https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice
      .splice(1)
      // 4. Array.prototype.shift removes the first element from that array and returns it
      // result: "012312" (still the "string" type)
      // https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/shift
      .shift()
  )
}
// Try it (note: parseInt drops the leading zero, so "012312" becomes 12312)
console.log("Result is = " + findFraction(1.012312))
// Type of result
console.log("Type of result = " + typeof findFraction(1.012312))
// Some later operation (note: + after a string concatenates, so 555 is appended as text)
console.log("Result + some number is = " + findFraction(1.012312) + 555)

Deciding key-value pairs for Roman Numeral Converter

Every solution I've found uses the following object.
function converter(num) {
var romanKeys =
{M:1000,CM:900,D:500,CD:400,C:100,XC:90,L:50,XL:40,X:10,IX:9,V:5,IV:4,I:1}
When attempting the problem myself, I wasn't too sure which roman numerals were redundant when constructing the object. Procedurally, how do we arrive to this object? e.g How do I know that
"VI: 6" is unnecessary but "IV: 4" is?
When a symbol appears after a larger (or equal) symbol it is added
Example: VI = V + I = 5 + 1 = 6
Example: LXX = L + X + X = 50 + 10 + 10 = 70
But if the symbol appears before a larger symbol it is subtracted
Example: IV = V − I = 5 − 1 = 4
Example: IX = X − I = 10 − 1 = 9
I can be placed before V (5) and X (10) to make 4 and 9.
X can be placed before L (50) and C (100) to make 40 and 90.
C can be placed before D (500) and M (1000) to make 400 and 900.
When you are scanning a Roman number, you look from left to right at each symbol. If a symbol appears before a larger symbol, you take the pair together, do the subtraction, and add the result to the total, then move to the symbol after the pair. Otherwise you take the single symbol, add its value to the total, and move to the next symbol.
For example, for XIV:
1) result = 0
2) X is not before a larger symbol (X > I) => result += 10 (result = 10)
3) I < V => result += (5 - 1) (result = 14)
Note that if you are using that mapping, you only need the combinations where the second symbol is greater than the first one, for which the subtraction rule applies, as noted above (CM, CD, XC, XL, IX, IV).
Having something like XI in that mapping would give you a wrong result. For XIV you will have XI (11) + V (5) = 16, not X (10) + IV (4) = 14.
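Putting that mapping to work, here is a minimal sketch of the usual number-to-Roman converter (greedy subtraction over the table, largest values first; it relies on the keys being iterated in insertion order, which holds for these string keys in modern engines):

```javascript
var romanKeys = {M:1000, CM:900, D:500, CD:400, C:100, XC:90, L:50, XL:40, X:10, IX:9, V:5, IV:4, I:1};

function converter(num) {
  var result = '';
  for (var key in romanKeys) {          // keys visited largest-value-first (insertion order)
    while (num >= romanKeys[key]) {     // greedily take each value as many times as it fits
      result += key;
      num -= romanKeys[key];
    }
  }
  return result;
}

console.log(converter(14));   // "XIV"
console.log(converter(1987)); // "MCMLXXXVII"
```

Because the table already contains the six subtractive pairs (CM, CD, XC, XL, IX, IV), the greedy loop never needs entries like VI or XI: those fall out naturally as V + I and X + I.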

How to verify if a number has at most N decimal places?

How to verify if an input type number contains a maximum of 3 decimals, without using regex?
let x = 1.5555
let y = 1.55
x is false
y is true
You can use a formula like:
(x * 10**N) % 1 === 0
Here x is your number that potentially contains decimals (eg: 1.555) and N is the maximum amount of decimal places you want to allow for.
Eg, for numbers with 3 (N = 3) or fewer decimal places, you will get x*1000, which will evaluate to an integer. Eg:
1.55 -> 1550
1.555 -> 1555
For numbers with more than 3 decimal places, doing x*1000 won't convert it to an int, it will only shift parts of the number over:
1.5555 -> 1555.5 // still a decimal
The % 1 check then gets the remainder of the above number if it was to be divided by 1. If the remainder is 0, then the number was converted to an integer, if it is more than 0, then x*1000 failed to convert the number to an int, meaning that it has more than 3 decimals:
const validate = (x, N) => (x * 10**N) % 1 === 0;
console.log(validate(1.5555, 3)); // false
console.log(validate(1.55, 3)); // true
console.log(validate(1.555, 3)); // true
console.log(validate(0.00000001, 3)); // false
You can convert the number to a string using the toString() method, then split it at the point "." with the split() method; this results in an array.
The first element in the array is a string containing the whole-number part, which is not of interest to us here.
The second element, at index 1 of the resulting array, is the decimal part as a string.
Now you can check the length property of this string: if it is less than or equal to three, the number has three or fewer decimals and the validation function returns true; otherwise it returns false.
const x = 1.5555;
const y = 1.555;
const z = 1.55
function validate(num){
  const decimals = num.toString().split(".")[1];
  return !decimals || decimals.length <= 3; // whole numbers have no decimal part
}
console.log(validate(x));
console.log(validate(y));
console.log(validate(z));
This may solve your problem:
let x = 1.5555;
let y = 1.55;
let length = String(x).substring(String(x).indexOf(".") + 1).length;
let result = length <= 3; // false: x has 4 decimal places

Can someone help explain why it returns a different sum? [duplicate]

This question already has answers here:
JavaScript adding a string to a number
(8 answers)
Closed 3 years ago.
var x = 10 + Number("1"+"6");
console.log(x);
returns: 26
var y = 10 + 1 + 6;
console.log(y);
returns: 17
You're adding two strings together inside Number(...):
"1" + "6" = "16"
So the line basically comes down to:
var x = 10 + Number( "16" )
> 26
In your first example Number("1"+"6"), "1" and "6" evaluate as strings (because of the quotes). When JS adds strings it concatenates them, so "1" + "6" becomes "16" the same way that "Hello " + "world" becomes "Hello world".
In your second example all of the numbers are treated as numbers, so they are added as you expect.
"1"+"6" = "16" : concatenation fo 2 strings
Number("1"+"6") = Number("16") = 16
10 + 16 = 26
let x = 10 + Number("1") + Number("6"); //for x to equal 17
Here's a function I use to sum numbers regardless of whether the arguments are numbers or strings (it returns null if any argument is not a number):
function sumNumbers() {
  let result = 0;
  for (const arg of arguments) {
    let nArg = Number(arg);
    if (isNaN(nArg)) {
      return null;
    }
    result += nArg;
  }
  return result;
}

Why is 23 equal to 10111 in binary?

I've tried to convert 23 to binary and came up with the number 100111 by using the following process:
1) 23 = 22 + 1 // find out the least significant bit 1
2) 22/2 = 10 + 1 // next bit is 1
3) 10/2 = 4 + 1 // next bit is 1
4) 4/2 = 2 + 0 // next bit is 0
So I'm left with the 2 in decimal, which is 10 in binary. Now I'm writing down the number:
10 plus the bits from operations 4, 3, 2, 1 gives me
100111; however, the answer is 10111. Where is my mistake?
Using this method, the binary representation of the decimal number 23 is found as follows:
ex: Convert 23 to a binary representation
23 / 2 = 11 R 1
11 / 2 = 5 R 1
5 / 2 = 2 R 1
2 / 2 = 1 R 0
1 / 2 = 0 R 1
answer = 10111
As you currently have it,
1) 23 = 22 + 1 // find out the least significant bit 1
This step is unnecessary. You don't need to shave off the odd number first. Simply follow the procedure outlined in the link to generate the output. What this means is that the only operation you perform on your decimal number output is repeated divisions by 2, with the remainders spelling out your binary representation of your number.
If you do want a JavaScript solution as well, since you have marked this question with the JavaScript tag, then the easiest way is to simply do (N).toString(2), where (N) is your decimal number and .toString(2) converts your number to a binary representation of your number in string form. The 2 represents the radix/base.
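A quick sketch of both directions, conversion to binary and back:

```javascript
// Number -> binary string, using a radix of 2
console.log((23).toString(2));      // "10111"

// Binary string -> number, using the same radix
console.log(parseInt("10111", 2));  // 23
```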
In the end, what you want to get is
23 = 16 + 4 + 2 + 1
= 1*16 + 0*8 + 1*4 + 1*2 + 1*1
= 1*2^4 + 0*2^3 + 1*2^2 + 1*2^1 + 1*2^0
The calculations should look like this:
23 = 2*11 + 1 (1st least significant digit is 1)
11 = 2*5 + 1 (2nd least significant digit is 1)
5 = 2*2 + 1 (3rd least significant digit is 1)
2 = 2*1 + 0 (4th least significant digit is 0)
1 = 2*0 + 1 (5th least significant digit is 1)
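The repeated-division procedure above translates directly into code; a sketch (the function name toBinary is my own):

```javascript
// Build the binary string from the remainders of repeated division by 2.
// Each remainder is the next least significant digit, so it is prepended.
function toBinary(n) {
  if (n === 0) return "0";
  var bits = "";
  while (n > 0) {
    bits = (n % 2) + bits;   // remainder spells out the digits right-to-left
    n = Math.floor(n / 2);   // integer division by 2
  }
  return bits;
}

console.log(toBinary(23)); // "10111"
```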
