But why? Chrome javascript function strange result [duplicate] - javascript

Executing this snippet in the Chrome console:
function foo() {
  return typeof null === 'undefined';
}
for (var i = 0; i < 1000; i++) console.log(foo());
should print false 1000 times, but on some machines it will print false for a number of iterations and then true for the rest.
Why is this happening? Is it just a bug?

There is a chromium bug open for this:
Issue 604033 - JIT compiler not preserving method behavior
So yes, it's just a bug!

It's actually a bug in the V8 JavaScript engine (Wiki).
This engine is used in Chromium, Maxthon, Android, Node.js, etc.
A relatively simple description of the bug can be found in this Reddit topic:
Modern JavaScript engines compile JS code into optimized machine code
when it is executed (Just In Time compilation) to make it run faster.
However, the optimization step has some initial performance cost in
exchange for a long term speedup, so the engine dynamically decides
whether a method is worth it depending on how commonly it is used.
In this case there appears to be a bug only in the optimized path,
while the unoptimized path works fine. So at first the method works as
intended, but if it's called in a loop often enough at some point the
engine will decide to optimize it and replaces it with the buggy
version.
This bug seems to have been fixed in V8 itself (commit), as well as in Chromium (bug report) and Node.js (commit).

To answer the direct question of why it changes, the bug is in the "JIT" optimisation routine of the V8 JS engine used by Chrome. At first, the code is run exactly as written, but the more you run it, the more potential there is for the benefits of optimisation to outweigh the costs of analysis.
In this case, after repeated execution in the loop, the JIT compiler analyses the function, and replaces it with an optimised version. Unfortunately, the analysis makes an incorrect assumption, and the optimised version doesn't actually produce the correct result.
Specifically, Reddit user RainHappens suggests that it is an error in type propagation:
It also does some type propagation (as in, what types a variable etc. can be). There's a special "undetectable" type for when a variable is undefined or null. In this case the optimizer goes "null is undetectable, so it can be replaced with the 'undefined' string for the comparison."
This is one of the hard problems with optimising code: how to guarantee that code which has been rearranged for performance will still have the same effect as the original.
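For reference, here is what typeof is actually specified to return for these values, runnable in any engine; the bug above made the optimized code deviate from exactly this:

```javascript
// Correct, spec-mandated behavior - what foo() relies on:
console.log(typeof null);        // "object" (a long-standing quirk, never "undefined")
console.log(typeof undefined);   // "undefined"

// null and undefined are distinct values, despite the shared "undetectable"
// treatment inside the optimizer:
console.log(null == undefined);  // true  (loose equality treats them as equal)
console.log(null === undefined); // false (strict equality distinguishes them)
```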

This was fixed two months ago and will land in Chrome soon (it is already in Canary).
V8 Issue 1912553002 - Fix 'typeof null' canonicalization in crankshaft
Chromium Issue 604033 - JIT compiler not preserving method behavior

Related

TypeScript, why are subsequent function calls much faster than the original call? [duplicate]

I've got a problem I've been working on and found some interesting behavior. Basically, if I benchmark the same code multiple times in a row, the code execution gets significantly faster.
Here's the code:
http://codepen.io/kirkouimet/pen/xOXLPv?editors=0010
Here's a screenshot from Chrome:
Anybody know what's going on?
I'm checking performance with:
var benchmarkStartTimeInMilliseconds = performance.now();
...
var benchmarkEndTimeInMilliseconds = performance.now() - benchmarkStartTimeInMilliseconds;
Chrome's V8 optimizing compiler initially compiles your code without optimizations. If a certain part of your code is executed very often (e.g. a function or a loop body), V8 will replace it with an optimized version (so-called "on-stack replacement").
According to https://wingolog.org/archives/2011/06/08/what-does-v8-do-with-that-loop:
V8 always compiles JavaScript to native code. The first time V8 sees a
piece of code, it compiles it quickly but without optimizing it. The
initial unoptimized code is fully general, handling all of the various
cases that one might see, and also includes some type-feedback code,
recording what types are being seen at various points in the
procedure.
At startup, V8 spawns off a profiling thread. If it notices that a
particular unoptimized procedure is hot, it collects the recorded type
feedback data for that procedure and uses it to compile an optimized
version of the procedure. The old unoptimized code is then replaced
with the new optimized code, and the process continues.
Other modern JS engines identify such hotspots and optimize them as well, in a similar fashion.
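The warm-up effect described in the quote is easy to observe directly. The sketch below (my own illustration, not taken from the question) times a simple function once cold and once after a warm-up loop; on most engines the second timing is far smaller, though exact numbers vary by machine:

```javascript
// Sketch: timing the same function cold vs. after a JIT warm-up loop.
function sum(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) total += arr[i];
  return total;
}

const data = Array.from({ length: 100000 }, (_, i) => i);

const coldStart = performance.now();
sum(data);                                // first call: unoptimized code
const coldTime = performance.now() - coldStart;

for (let i = 0; i < 1000; i++) sum(data); // warm-up: let the JIT optimize

const warmStart = performance.now();
sum(data);                                // now likely running optimized code
const warmTime = performance.now() - warmStart;

console.log({ coldTime, warmTime });      // warmTime is usually much smaller
```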

Which JS benchmark site is correct?

I created a benchmark on both jsperf.com and jsben.ch; however, they're giving substantially different results.
JSPerf: https://jsperf.com/join-vs-template-venryx
JSBench: http://jsben.ch/9DaxR
Note that the code blocks are exactly the same.
On jsperf, block 1 is "61% slower" than the fastest:
On jsbench, block 1 is only 32% slower than the fastest: ((99 - 75) / 75)
What gives? I would expect benchmark sites to give the same results, at least within a few percent.
As it stands, I'm unable to make a conclusion on which option is fastest because of the inconsistency.
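Part of the discrepancy is simply how each site computes "X% slower". As a sketch (these formulas are my reading of the two figures above, not anything documented by the sites):

```javascript
// jsperf reports ops/sec and derives "% slower" relative to the fastest case:
function percentSlowerFromOps(opsSlow, opsFast) {
  return ((opsFast - opsSlow) / opsFast) * 100;
}
console.log(percentSlowerFromOps(39, 100)); // 61

// The question's jsben.ch figure instead compares raw times against the
// faster time, i.e. ((99 - 75) / 75):
function percentSlowerFromTimes(timeSlow, timeFast) {
  return ((timeSlow - timeFast) / timeFast) * 100;
}
console.log(percentSlowerFromTimes(99, 75)); // 32
```

With different baselines (fastest score vs. fastest time), the two percentages are not directly comparable even on identical measurements.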
EDIT
Extended list of benchmarks:
https://jsperf.com/join-vs-template-venryx
https://jsbench.me/f3k3g71sg9
http://jsbench.github.io/#7f03c3d3fdc9ae3a399d0f2d6de3d69f
https://run.perf.zone/view/Join-vs-Template-Venryx-1512492228976
http://jsben.ch/9DaxR
Not sure which is the best, but I'd skip jsben.ch (the last one) for the reasons Job mentions: it doesn't display the number of runs, the error margin, or the number of operations per second -- which is important for estimating absolute performance impact, and enabling stable comparison between benchmark sites and/or browsers and browser versions.
(At the moment http://jsbench.me is my favorite.)
March 2019 update: results are inconsistent between Firefox and Chrome - perf.zone behaves anomalously on Chrome, and jsben.ch behaves anomalously on Firefox. Until we know exactly why, the best you can do is benchmark on multiple websites (but I'd still skip jsben.ch; the others at least give you some error margin and stats on how many runs were taken, and so on).
TL;DR: running your code on perf.zone and on jsbench.github.io (see here and here), the results closely match jsperf. Personally, and for other reasons than just these results, I trust these three websites more than jsben.ch.
Recently, I tried benchmarking the performance of string concatenation too, but in my case it's building one string out of 1000000+ single-character strings (join('') wins for numbers this large and up, by the way). On my machine, jsben.ch timed out instead of giving a result. Perhaps it works better on yours, but for me that's a big warning sign:
http://jsben.ch/mYaJk
http://jsbench.github.io/#26d1f3705b3340ace36cbad7b24055fb
https://run.perf.zone/view/join-vs-concat-when-dealing-with-very-long-lists-of-single-character-strings-1512490506658
(I can't be bothered to ever have to deal with jsperf's "not all tests inserted" error again, sorry)
At the moment I suspect but can't prove that perf.zone has slightly more reliable benchmark numbers:
when optimising lz-string I used jsbench.github.io for a very long time, but at some point I noticed there were impossibly large error margins for certain types of code, over 100%.
running benchmarks on mobile is fine with jsperf.com and perf.zone, but jsbench.github.io is kinda janky and the CSS breaks while running tests.
Perhaps these two things are related: perhaps the method that jsbench.github.io uses to update the DOM introduces some kind of overhead that affects the benchmarks (they should meta-benchmark that...).
Note: perf.zone is not without its flaws. It sometimes times out when trying to save a benchmark (the worst possible moment for it...) and you can only fork your own code, not edit it. But the output still seems to be more in line with jsperf, and it has a really nice "quick" mode for throwaway benchmarking.
Sorry for the bump, but this might be interesting for others who run into this in search results.
I can't speak for others, but jsbench.me just uses benchmark.js for testing. It's a single-page React app, meaning it runs completely in your browser on your engine of choice, so results should be consistent within a single browser. You can run it in Firefox or on mobile, and the results will of course differ. But absolutely nothing related to testing happens on the server, other than AWS DynamoDB storing the results.
P.S. I'm the author, so this is just an individual's passion project. It currently doesn't cost me anything, as it's optimized for serverless and fits the AWS free tier. The amount of work on it is proportional to the number of users :)
AFAIK one issue is that various JavaScript engines optimize vastly differently based on the environment.
I have a test of the exact same function that produces different results based on where the function is created. In other words, for example, in one test it's
const lib = {}
lib.testFn = function() {
  ....
}
And in other it's
const lib = {
  testFn: function() {
    ....
  },
};
and in another it's
function testFn() {
  ....
}
const lib = {}
lib.testFn = testFn
and there's a >10% difference in results for a non-trivial function in the same browser and different results across browsers.
What this means is that no JavaScript benchmark is correct, because how that benchmark runs its tests, as in the test harness itself, affects the results. The harness might, for example, XHR the test script. It might call eval. It might run the test in a worker, or in an iframe. And the JS engine might optimize all of those differently.
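The three definition styles above can be written out as one runnable sketch. The trivial function body here is a made-up stand-in for the original (elided) test function:

```javascript
// Three ways of attaching the "same" function, as described above.
const libA = {};
libA.testFn = function() { return 1 + 1; };

const libB = {
  testFn: function() { return 1 + 1; },
};

function standalone() { return 1 + 1; }
const libC = {};
libC.testFn = standalone;

// All three behave identically...
console.log(libA.testFn(), libB.testFn(), libC.testFn()); // 2 2 2
// ...but a JIT may compile each creation pattern differently, which is
// why the surrounding harness can shift benchmark results by >10%.
```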



Steps to migrate a project to using "strict mode"?

What are some things I must verify before taking an existing code base and converting it to Strict Mode?
The project is a website that is designed to run on all browsers from IE 8 and above, and doesn't have a lot of unit tests or a js linter continuous integration setup at this point.
What are some language features to look out for, that our own code, or libraries that we use might be using, that will silently break if I turn on strict mode?
Is there a code analysis process that can look at my whole project, and point out the specific pitfalls from this migration?
Any particular browsers I should be worried about? (Well, I guess IE 8 is the usual suspect ... but what exactly should I look for in IE 8's support, or lack thereof, of strict mode?)
If I'm not looking for raw performance, but first at avoiding bugs ... is strict mode a cost effective way to help early bug detection?
The one bit of strict code that I violate most is the strict delete operator.
In strict mode, delete someobject.someproperty throws a TypeError if the property is non-configurable (and delete on a bare variable name is a SyntaxError),
while in 'normal' (sloppy) code delete simply returns false in those cases and carries on without an error.
It's easy to fix-
if('someproperty' in someobject) delete someobject.someproperty;
I liked the old way, but things change. Some people hate giving up arguments.callee...
You can avoid prototype chaining by changing the code as follows:
if (someObject.hasOwnProperty("someProperty")) {
delete someObject.someProperty;
}
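A runnable sketch of the strict-mode delete rules discussed above (works in Node or any modern browser; the locked property is a made-up example of a non-configurable property):

```javascript
'use strict';

const obj = { a: 1 };
console.log(delete obj.a);  // true - deleting an existing configurable property
console.log(delete obj.b);  // true - deleting a missing property is NOT an error

// What does throw in strict mode: deleting a non-configurable property.
Object.defineProperty(obj, 'locked', { value: 42, configurable: false });
try {
  delete obj.locked;
} catch (e) {
  console.log(e instanceof TypeError); // true - sloppy mode would just return false
}
```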
