I am trying to see how some JavaScript functions work under the hood. For example, I want to learn how Chrome's V8 engine implements the unary minus (-) operator or the String.prototype.toString() method.
How can I see the native C/C++ implementation? I have seen many answers here linking to the Chromium and V8 repositories, but those are giant and not very beginner-friendly, and as far as I could find there aren't really any guides for navigating them.
I'm looking for something like this:
// Pseudo code
function -(arg) {
  return arg * -1
}
Obviously, I understand that I wouldn't find this written in Javascript. I'm just looking for a similar level of detail.
I have yet to find an answer that concisely shows how to locate the native implementation of JavaScript functions. Could someone point me in the right direction?
The ECMAScript spec here gives the following definition for the unary - operator:
Unary - Operator
The unary - operator converts its operand to Number type and then
negates it. Note that negating +0 produces −0, and negating −0
produces +0.
The production UnaryExpression : - UnaryExpression is evaluated as follows:
1. Let expr be the result of evaluating UnaryExpression.
2. Let oldValue be ToNumber(GetValue(expr)).
3. If oldValue is NaN, return NaN.
4. Return the result of negating oldValue; that is, compute a Number with the same magnitude but opposite sign.
This is quite useful, but what I'm trying to understand is how
compute a Number with the same magnitude but opposite sign
is actually carried out. Is it the number * -1, or something else? Or is it done in multiple different ways?
There is no piece of code or single implementation of individual operators in V8. V8 is a just-in-time compiler that executes all JavaScript by compiling it to native code on the fly. V8 supports about 10 different CPU architectures and for each has 4 tiers of compilers. That already makes 40 different implementations of every operator. In many of those, compilation goes through a long pipeline of stages that transform the input to the actual machine code. And in each case the exact transformation depends on the type information that is available at compile time (collected on previous runs).
To understand what's going on you would need to understand a significant part of V8's complicated architecture, so it is pretty much impossible to answer your question in an SO reply. If you are just interested in the semantics, I would rather suggest looking at the EcmaScript language definition.
(The snippet you cite is just a helper function for converting compiler-internal type information for unary operators in one of the many stages.)
Edit: the excerpt from the EcmaScript definition you cite in the updated question is the right place to look. Keep in mind that all JavaScript numbers are IEEE floating point numbers. The sentence is basically saying that - just inverts the sign bit of such a number. You'd have to refer to the IEEE 754 standard for more details. Multiplication by -1.0 is a much more complicated operation, but will have the same result in most cases (probably with the exception of NaN operands).
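You can observe the sign-bit view from JavaScript itself. This is only an illustration of the semantics, not of V8's implementation; the DataView trick below just exposes the raw IEEE 754 bits of a Number:
// Inspect the IEEE 754 sign bit of a Number (big-endian view: byte 0 holds the sign).
function signBit(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);           // write the 64-bit double (big-endian by default)
  return view.getUint8(0) >>> 7;   // the top bit of the first byte is the sign bit
}
console.log(signBit(0.1), signBit(-0.1)); // 0 1 -- unary minus flipped only the sign bit
console.log(signBit(0), signBit(-0));     // 0 1 -- negating +0 produces -0, as the spec says
console.log(Object.is(0 * -1, -0));       // true -- multiplying by -1 happens to agree here
console.log(-0 === 0);                    // true -- but === cannot see the sign of zero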
I'm wondering if someone might be able to explain a specific aspect of the JavaScript BigInt implementation to me.
The overall implementation I understand: rather than operating in base 10, it builds an array of "digits", effectively operating in base 2^32 or 2^64 depending on the build architecture.
What I'm curious about is the display/console.log implementation for this type - it's incredibly fast for most common cases, to the point where if you didn't know anything about the implementation you'd probably assume it was native. But, knowing what I do about the implementation, it's incredible to me that it's able to do the decimal cast/string concatenation math as quickly as it can, and I'm deeply curious how it works.
A moderate look into bigint.cc and bigint.h in the Chromium source has only confused me further, as there are a number of methods whose signatures are defined, but whose implementations I can't seem to find.
I'd appreciate even being pointed to another spot in the Chromium source which contains the decimal cast implementation.
(V8 developer here.)
@Bergi basically provided the relevant links already, so just to sum it up:
Formatting a binary number as a decimal string is a "base conversion", and its basic building block is:
while (number > 0) {
  next_char = "0123456789"[number % 10];
  number = number / 10; // Truncating integer division.
}
(Assuming that next_char is also written into some string backing store; this string is being built up from the right.)
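For reference, the same building block written as runnable JavaScript over BigInt values; this is only an illustration of the loop above, not V8's C++:
// Simple decimal conversion: peel off one digit at a time, building the string
// from the right. This is the quadratic "schoolbook" approach.
function toDecimalSimple(number) {
  if (number === 0n) return "0";
  let digits = "";
  while (number > 0n) {
    const next_char = "0123456789"[Number(number % 10n)];
    digits = next_char + digits;   // prepend: the string grows from the right
    number = number / 10n;         // truncating integer division
  }
  return digits;
}
console.log(toDecimalSimple(1234567890123456789n)); // "1234567890123456789"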
You can find this algorithm in code here, special-cased for the common situation that the BigInt only had one 64-bit "digit" to begin with.
The generalization for more digits and non-decimal radixes is here; it's the same algorithm.
This algorithm runs sufficiently fast for sufficiently small BigInts; its problem is that it scales quadratically with the length of the BigInt. So for large BigInts (where some initial overhead easily pays for itself due to enabling better scaling), we have a divide-and-conquer implementation that's built on better-scaling division and multiplication algorithms.
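The splitting structure of that divide-and-conquer approach looks roughly like the sketch below. This is not V8's code, and the real win comes from the better-scaling multiplication and division it sits on; the threshold and function names are made up for the example:
// Divide and conquer: split around a power of ten near the middle, convert
// both halves recursively, and zero-pad the low half to its full width.
function toDecimalFast(n) {
  if (n < 10n ** 32n) return n.toString();  // small enough: fall back to the simple algorithm
  // Find pow = 10^k (k a power of two) such that pow <= n < pow * pow.
  let k = 32n, pow = 10n ** 32n;
  while (pow * pow <= n) { pow *= pow; k *= 2n; }
  const high = n / pow;                     // leading digits
  const low  = n % pow;                     // trailing k digits, possibly with leading zeros
  return toDecimalFast(high) + toDecimalFast(low).padStart(Number(k), "0");
}
console.log(toDecimalFast(10n ** 40n + 7n)); // "1", then 39 zeros, then "7"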
When the requested radix is a power of two, then no such heavy machinery is necessary, because a linear-time implementation is easy. That's why some_bigint.toString(16) is and always will be much faster than some_bigint.toString() (at least for large BigInts), so when you need de/serialization rather than human readability, hex strings are preferable for performance.
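As a JavaScript-level illustration of why a power-of-two radix is so much cheaper (V8 reads its internal digit array directly, and the naive string building here is beside the point):
// Hex conversion needs no division at all: every 4 bits map to one character.
function toHexSimple(n) {
  if (n === 0n) return "0";
  let out = "";
  while (n > 0n) {
    out = "0123456789abcdef"[Number(n & 0xfn)] + out; // low 4 bits -> one hex digit
    n >>= 4n;                                         // shift instead of dividing
  }
  return out;
}
console.log(toHexSimple(255n));   // "ff"
console.log((255n).toString(16)); // "ff" -- matches the built-in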
if you didn't know anything about the implementation you'd probably assume it was native
What does that even mean?
I know that floating point values in JavaScript are stored with a binary base-2 format specified in IEEE 754. To me, this means that when I assign the literal value .1 to a variable, the value actually stored will be 0.100000001490116119384765625 (or some high-precision number like that--my math may be wrong).
But counter to that assumption, the console.log of a stored value does not reflect this. The following code:
var a = 0.1;
console.log(a);
...when executed in Chrome, and probably other browsers, will output:
0.1
I would have expected it to be:
0.100000001490116119384765625
Does the value of a at this point hold 0.1 or 0.1000000...? If the latter, then by what means does console.log() show 0.1? I'm interested in what's going on under the hood here. (E.g. Is JS storing a text representation of the number in the variable?)
For you diligent admins that might be a little quick to "mark as duplicate", please note that I am asking the opposite of the more common question and variations "Why do I suddenly see these wacky high-precision numbers?"
JavaScript’s default formatting of floating-point values uses just enough decimal digits to uniquely distinguish the value from neighboring floating-point values.
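You can see both halves of that statement at once: the stored double is not exactly 0.1, but "0.1" is the shortest decimal string that parses back to the same double, so that is what the default formatting produces.
const a = 0.1;
console.log(a.toPrecision(21));   // "0.100000000000000005551" -- digits of the stored double
console.log(String(a));           // "0.1" -- the shortest digits that round-trip
console.log(Number("0.1") === a); // true: parsing "0.1" reproduces exactly the same bits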
This question is a duplicate except it uses console.log, which is outside of JavaScript. The standard for JavaScript, the ECMAScript 2017 Language Specification, does not mention console. This feature is provided by vendors as an extension. So each vendor may implement their own behavior. However, it is somewhat likely that using console.log with a Number will use the usual ToString behavior, resulting in the conversion explained in the answer linked above.
You are right! Nevertheless, I never noticed that and can't explain it; maybe someone else can. I assume the extra digits are just not shown because they would irritate many people, and for most purposes they are not an issue.
var a = 0.1;
console.log(a.toPrecision(21))
I'm writing a shunting yard algorithm in Javascript for boolean logic, and I'm running into a hitch with order of operations. The operations that I allow are:
and, or, implies, equals (biconditional), not, xor, nor, nand
However, I don't know what the precedence is for these. As of now, I have:
not>equals>implies>xor>nor>nand>or>and
Is this correct? Is there any standard I can use, similar to the PEMDAS/BODMAS system for numbers?
The reason you are having such a hard time finding a definition of precedence of those operators for JavaScript is that:
Precedence only comes into play when using infix notation. Since you mention the shunting yard algorithm, I take it you intend to use infix notation.
Each language can define its own precedence, and since you are creating a DSL, you define the precedence; it just has to be consistent.
Those names are really prefix function names, and with infix notation symbols are more common than names for operators. You should be using operator symbols rather than function names:
and &
or |
implies →
equals (biconditional) ↔
not !
xor ⊕
nor ⊽
nand ⊼
When parsing, you convert infix to prefix or postfix, so the operator symbols should change back to function names if you are building an intermediate form such as an AST.
You didn't mention associativity, which you will need for NOT.
There appears to be no standard, as noted by the differences between these two respectable sources (a table-driven sketch follows the two lists below).
From "Foundations of Computer Science" by Jeffrey D. Ullman
Associativity and Precedence of Logical Operators
The order of precedence we shall use is
1. NOT (highest)
2. NAND
3. NOR
4. AND
5. OR
6. IMPLIES
7. BICONDITIONAL (lowest)
From Mathematica
NOT
AND
NAND
XOR
OR
NOR
EQUIVALENT
IMPLIES
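Whichever ordering you adopt, the shunting yard itself only needs it as a small data table. A minimal sketch follows; the ranks copy Ullman's list above, while the placement of xor and the associativities are my own assumptions, since the sources disagree:
// One possible precedence/associativity table (a convention, not a standard).
const OPS = {
  not:     { prec: 8, args: 1, assoc: "right" }, // highest in all the sources above
  nand:    { prec: 7, args: 2, assoc: "left"  },
  nor:     { prec: 6, args: 2, assoc: "left"  },
  and:     { prec: 5, args: 2, assoc: "left"  },
  xor:     { prec: 4, args: 2, assoc: "left"  }, // xor's rank varies between sources
  or:      { prec: 3, args: 2, assoc: "left"  },
  implies: { prec: 2, args: 2, assoc: "right" }, // implication usually chains to the right
  equals:  { prec: 1, args: 2, assoc: "right" }, // biconditional, lowest in Ullman's table
};
// In the shunting yard loop: pop while the operator on the stack binds at least as tightly.
function shouldPop(stackTop, incoming) {
  return OPS[stackTop].prec > OPS[incoming].prec ||
        (OPS[stackTop].prec === OPS[incoming].prec && OPS[incoming].assoc === "left");
}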
It seems there is no standard.
I have a book (Digital Design by Morris Mano) that says NOT > AND > OR; this part is widely accepted.
About the rest, I've found a few different opinions.
This guy thinks EQUIV is the lowest (with a Wikipedia assist), but this guy thinks EQUIV sits in the middle: XOR > EQUIV > OR (with a few references).
Another disagreement is about XOR's place; here this third guy agrees with the second guy :)
In short, two opinions:
1) NOT>AND>NAND>XOR>EQUIV>OR>NOR (IMPLIES not ranked)
2) NOT>AND>NAND>NOR>OR>IMPLIES>XOR>EQUIV
Note: only the NOT>AND>OR part is academically well established.
Has anyone faced this Math.js auto-approximation issue and found a workaround for it?
If I enter any number with more than 18 digits, the library returns an approximate value, not the exact value. For example, if the user enters "03030130000309293689" it returns "3030130000309293600", and when the user enters "3030130000309293799" it also returns "3030130000309293600". Can we stop this approximation? Is this a bug, and if not, how can I avoid the approximation?
Due to this approximation, if a user enters "03030130000309293695 == 03030130000309293799" it will always return true, which is totally wrong.
github -- https://github.com/josdejong/mathjs
We can try this at http://mathjs.org/ (in the Demo notepad).
This is already released to production!
I think that whenever the user enters something like "03030130000309293695 == 03030130000309293799", with plain numbers on both sides, we could just do a string comparison; all other cases would be handled by the approximation. I say this because if I use the same library for the computation "73712347274723714284 * 73712347274723713000", it gives the result in scientific notation.
03030130000309293695 and 03030130000309293799 are pretty much the same number.
HOW?
According to this answer, the limit of a JS number is 9007199254740992 (2^53). Both of your numbers are greater than that, so precision is lost. You probably need to use a library like Big.js.
It's not a library issue, it's a language architecture issue. You can even open your browser console and type in your comparison to see that it is truthy.
This is not really a problem with Math.js but a result of how numbers work in JavaScript. JavaScript uses 64-bit binary floating point numbers (also known as a 64-bit double in C). As such, it has only 53 bits available to store the significant digits of your number.
I've written an explanation here: Javascript number gets another value
You can read the wikipedia page for 64 bit doubles for more detail: http://en.wikipedia.org/wiki/Double_precision_floating-point_format
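You can see the collapse directly in a console (the leading zeros from the question are dropped, since they are not valid in a numeric literal; native BigInt appears purely as an example of an arbitrary-precision type):
// Both literals round to the same 64-bit double, so as Numbers they are equal.
console.log(3030130000309293695 === 3030130000309293799); // true -- precision was lost
console.log(Number.MAX_SAFE_INTEGER);                     // 9007199254740991, i.e. 2^53 - 1
// An arbitrary-precision type keeps them distinct (math.js BigNumber,
// bignumber.js or native BigInt all behave this way):
console.log(3030130000309293695n === 3030130000309293799n); // false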
Now for the second part of your question:
If not then how can I avoid approximation?
There are several libraries in JavaScript that implement big numbers:
For the browser there's this: https://github.com/MikeMcl/bignumber.js
It is written in pure JavaScript and should also be usable in Node.js.
For node.js there's this: https://github.com/justmoon/node-bignum
It is a wrapper around the big number library used by OpenSSL. It's written in C, so it can't be loaded in the browser, but it should be faster and perhaps more memory-efficient on Node.js.
The latest version of math.js has support for bignumbers, see docs:
https://github.com/josdejong/mathjs/blob/master/docs/datatypes/bignumbers.md
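A rough sketch of what using that support looks like; the API names follow the linked docs, but check your version, since older releases expose math.config and math.eval directly on the module:
// Hedged sketch: configure math.js to parse numerals as BigNumber so no precision is lost.
const { create, all } = require('mathjs');
const math = create(all);
math.config({ number: 'BigNumber', precision: 64 });
const result = math.evaluate('3030130000309293695 == 3030130000309293799');
console.log(result); // false -- the two values are no longer collapsed into one double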
Python version | Javascript version | Whitepaper
So, I'm working on a website to calculate Glicko ratings for two-player games. It involves a lot of floating point arithmetic (square roots, exponents, division, all the nasty stuff), and for some reason I am getting a completely different answer from the Python implementation of the algorithm, which I translated line-for-line. The Python version gives basically the expected answer for the example found in the original whitepaper describing the algorithm, but the JavaScript version is quite a bit off.
Have I made an error in translation or is Javascript's floating point math just less accurate?
Expected answer: [1464, 151.4]
Python answer: [1462, 155.5]
Javascript answer: [1470.8, 89.7]
So the rating calculation isn't THAT bad, being 99.6% accurate, but the variance is off by 2/3!
Edit: People have pointed out that the default value of RD in the Pyglicko version is 200. This is a case of the original implementer leaving in test code I believe, as the test case is done on a person with an RD of 200, but clearly default is supposed to be 350. I did, however, specify 200 in my test case in Javascript, so that is not the issue here.
Edit: Changed the algorithm to use map/reduce. Rating is less accurate, variance is more accurate, both for no discernible reason. Wallbanging commence.
Typically you get errors like this where you are subtracting two similar numbers; the normally insignificant differences between the values then become amplified. For example, if you have two values that are 1.2345 and 1.2346 in Python, but 1.2344 and 1.2347 in JavaScript, then the differences are 1e-4 and 3e-4 respectively (i.e. one is 3x the other).
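To make that concrete (these are the made-up values from the sentence above, not numbers taken from the Glicko code):
// Intermediate values that agree to four decimal places in the two ports...
const pyA = 1.2345, pyB = 1.2346; // hypothetical Python intermediates
const jsA = 1.2344, jsB = 1.2347; // hypothetical JavaScript intermediates
// ...give differences that disagree by a factor of about three.
console.log(pyB - pyA); // ~1e-4
console.log(jsB - jsA); // ~3e-4 -- the tiny discrepancy has been amplified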
So I would look at where you have subtractions in your code and check those values. You may find that you can either (1) rewrite the maths to avoid the subtraction (often it turns out that you can find an expression that calculates the difference in some other way) or (2) focus on why the values at that particular point differ between the two languages (perhaps the difference in pi that the other answer identified is being amplified in this way).
It's also possible, although less likely here, that you have a difference because something is treated as an integer in Python but as a float in JavaScript. In Python there is a difference between integers and floats, and if you are not careful you can do things like divide two integers and get another integer (e.g. 3/2 = 1 in Python 2). In JavaScript, all numbers are "really" floats, so this does not occur.
Finally, it's possible there are small differences in how the calculations are performed. But those are "normal"; to get such drastic differences you need something like what I described above to occur as well.
PS: Also note what Daniel Baulig said about the initial value of the parameter rd in the comments above.
My guess is that it involves the approximations you're using for some of the constants in the JavaScript version. Your pi2 in particular seems a little... brief. I believe Python is using doubles for those values.
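For instance, if the JavaScript port defines something like the following (pi2_short is a hypothetical stand-in for whatever constant is actually in the code), the error is already around 4e-6 before any of the later arithmetic gets a chance to amplify it:
// A "brief" hand-typed constant vs. the full double-precision value.
const pi2_short = 9.8696;            // hypothetical truncated pi^2
const pi2_full  = Math.PI * Math.PI; // 9.869604401089358
console.log(pi2_full - pi2_short);   // ~4.4e-6 of error before the algorithm even starts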