javascript performance boost using pointer?

Let's say I have a javascript object.
var hash = { a : [ ] };
Now I want to edit the array hash.a in two ways: first by accessing hash.a every time, and second by making a pointer var arr = hash.a to store hash.a's memory address. Is the second way faster, or are they the same?
Example:
// first way
hash.a.push(1);
hash.a.push(2);
hash.a.push(3);
hash.a.push(4);
hash.a.push(5);
//second way
var arr = hash.a;
arr.push(1);
arr.push(2);
arr.push(3);
arr.push(4);
arr.push(5);
Thanks a lot!

I don't think there would be any real performance gain, and even if there were a slight one it wouldn't be worth it: you would be hampering the legibility and maintainability of the code and using more memory (creation and garbage collection of another variable arr, plus more chances for memory leaks if you don't handle it properly). I wouldn't recommend it.
In a typical software project, only 20% of the time is spent on development; the remaining 80% goes to testing and maintenance.

You're doing the engine's job in this situation: any modern JavaScript engine will optimize this code when it compiles or interprets it, so both cases should perform the same.
Even if these two cases weren't optimized, the performance gain would be negligible. Focus on making your code as readable as possible, and let the engine handle these referencing optimizations.
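If you want to convince yourself, a rough micro-benchmark along these lines should show both variants performing about the same (the iteration count and console.time labels are arbitrary choices for illustration, not a rigorous test):
// Rough sketch of a micro-benchmark; numbers are illustrative, not authoritative.
var hash = { a: [] };

console.time('direct property access');
for (var i = 0; i < 1e6; i++) {
  hash.a.push(i);
}
console.timeEnd('direct property access');

hash.a = []; // reset between runs

console.time('cached reference');
var arr = hash.a;
for (var j = 0; j < 1e6; j++) {
  arr.push(j);
}
console.timeEnd('cached reference');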

Related

Why does Math.random() (in Chrome) allocate memory that needs cleanup by the Garbage Collector (gc)?

Story
During some tests for performance-critical code, I observed a side-effect of Math.random() that I do not understand. I am looking for:
some deep technical explanation
a falsification for my test (or expectation)
link to a V8 problem/bug ticket
Problem
It looks like calling Math.random() allocates some memory that needs to be cleaned up by the Garbage Collector (GC).
Test: With Math.random()
const numberOfWrites = 100;
const obj = {
  value: 0
};
let i = 0;

function test() {
  for (i = 0; i < numberOfWrites; i++) {
    obj.value = Math.random();
  }
}

window.addEventListener('DOMContentLoaded', () => {
  setInterval(() => {
    test();
  }, 10);
});
Observation 1: Chrome profile
Chrome: 95.0.463869, Windows 10, Edge: 95.0.1020.40
Running this code in the browser and recording a performance profile will result in a classic memory zig-zag.
[Image: memory profile of the Math.random() test]
Observation 2: Firefox
Firefox Developer: 95, Windows 10
No garbage collection (CC/GCMinor) detected; memory usage stays quite linear.
Workarounds
crypto.getRandomValues()
Replace Math.random() with a large enough array of pre-calculated random numbers using self.crypto.getRandomValues().
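A minimal sketch of that workaround might look like the following; the pool size and the nextRandom helper are made-up names for illustration, and getRandomValues fills a typed array whose values are then scaled to the [0, 1) range like Math.random():
// Hypothetical sketch: pre-compute a pool of random values and hand them out one by one.
const POOL_SIZE = 4096;                  // pool size chosen arbitrarily for illustration
const pool = new Uint32Array(POOL_SIZE);
let poolIndex = POOL_SIZE;               // force a refill on first use

function nextRandom() {
  if (poolIndex >= POOL_SIZE) {
    self.crypto.getRandomValues(pool);   // refill the whole pool in one call
    poolIndex = 0;
  }
  // Scale the 32-bit unsigned integer into the [0, 1) range, like Math.random().
  return pool[poolIndex++] / 4294967296;
}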
(V8 developer here.)
Yes, this is expected. It's a (pretty fundamental) design decision, not a bug, and not strictly related to Math.random(). V8 "boxes" floating-point numbers as objects on the heap. That's because it uses 32 bits per field in an object, which obviously isn't enough for a 64-bit double, and a layer of indirection solves that.
There are a number of special cases where this boxing can be avoided:
in optimized code, for values that never leave the current function.
for numbers whose values are sufficiently small integers ("Smis", signed 31-bit integer range).
for elements in arrays that have only ever seen numbers as elements (e.g. [1, 2.5, NaN], but not [1, true, "hello"]).
possibly other cases that I'm not thinking of right now. Also, all these internal details can (and do!) change over time.
Firefox uses a fundamentally different technique for storing internal references. The benefit is that it avoids having to box numbers, the drawback is that it uses more memory for things that aren't numbers. Neither approach is strictly better than the other, it's just a different tradeoff.
Generally speaking you're not supposed to have to worry about this, it's just your JavaScript engine doing its thing :-)
Problem: Running this code in the browser and recording a performance profile will result in a classic memory zig-zag
Why is that a problem? That's how garbage-collected memory works. (Also, just to put things in perspective: the GC only spends ~0.3ms every ~8s in your profile.)
Workaround: Replace Math.random() with a large enough array of pre-calculated random numbers using self.crypto.getRandomValues().
Replacing tiny short-lived HeapNumbers with a big and long-lived array doesn't sound like a great way to save memory.
If it really matters, one way to avoid boxing of numbers is to store them in arrays instead of as object properties. But before going through hard-to-maintain contortions in your code, be sure to measure whether it really matters for your app. It's easy to demonstrate huge effects in a microbenchmark; it's rare to see much impact in real apps.
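As a hedged illustration of that last suggestion (names are made up for the example), the benchmark from the question could keep its values in a numbers-only array, which falls under the unboxed "elements" case listed above:
// Sketch under the assumptions above: values stays a numbers-only array,
// so the doubles written into it do not need a HeapNumber per write.
const values = [0];

function testWithArray() {
  for (let i = 0; i < 100; i++) {
    values[0] = Math.random();   // stored as a raw double in the array's elements
  }
}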

Why are two calls to string.charCodeAt() faster than having one with another one in a never reached if?

I discovered a weird behavior in nodejs/chrome/v8. It seems this code:
var x = str.charCodeAt(5);
x = str.charCodeAt(5);
is faster than this
var x = str.charCodeAt(5); // x is not greater than 170
if (x > 170) {
  x = str.charCodeAt(5);
}
At first I thought maybe the comparison is more expensive than the actual second call, but when the content inside the if block is not calling str.charCodeAt(5), the performance is the same as with a single call.
Why is this? My best guess is v8 is optimizing/deoptimizing something, but I have no idea how to exactly figure this out or how to prevent this from happening.
Here is the link to jsperf that demonstrates this behavior pretty well at least on my machine:
https://jsperf.com/charcodeat-single-vs-ifstatment/1
Background: The reason I discovered this is that I tried to optimize the token reading inside of babel-parser.
I tested and found that str.charCodeAt() is twice as fast as str.codePointAt(), so I thought I could replace this code:
var x = str.codePointAt(index);
with
var x = str.charCodeAt(index);
if (x >= 0xaa) {
  x = str.codePointAt(index);
}
But the second snippet does not give me any performance advantage, because of the behavior described above.
V8 developer here. As Bergi points out: don't use microbenchmarks to inform such decisions, because they will mislead you.
Seeing a result of hundreds of millions of operations per second usually means that the optimizing compiler was able to eliminate all your code, and you're measuring empty loops. You'll have to look at generated machine code to see if that's what's happening.
When I copy the four snippets into a small stand-alone file for local investigation, I see vastly different performance results. Which of the two are closer to your real-world use case? No idea. And that kind of makes any further analysis of what's happening here meaningless.
As a general rule of thumb, branches are slower than straight-line code (on all CPUs, and with all programming languages). So (dead code elimination and other microbenchmarking pitfalls aside) I wouldn't be surprised if the "twice" case actually were faster than either of the two "if" cases. That said, calling String.charCodeAt could well be heavyweight enough to offset this effect.
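One common way to make such a microbenchmark less likely to be optimized away (a general technique, not something prescribed by the answer above) is to keep the computed results observable, for example by accumulating them into a value that is used afterwards:
// Sketch: keep the benchmarked work observable so the engine cannot eliminate it.
const str = "abcdefghij";        // sample string; index 5 is 'f' (char code 102)
let sink = 0;                    // accumulator that outlives the loop

for (let i = 0; i < 1e7; i++) {
  let x = str.charCodeAt(5);
  if (x > 170) {
    x = str.charCodeAt(5);
  }
  sink += x;                     // using the result keeps the calls live
}

console.log(sink);               // printing the sink prevents dead-code elimination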

Javascript perf, weird results

I want to know which is the better way to code this in JavaScript for my Node.js project, so I did this:
function clas() {
}

clas.prototype.index = function () {
  var i = 0;
  while (i < 1000) {
    i++;
  }
};

var t1 = new clas();
var f = 0;
var d1 = new Date();
while (f < 1000) {
  t1.index();
  f++;
}
console.log("t1: " + (new Date() - d1) + "ms");

f = 0;
var d2 = new Date();
while (f < 1000) {
  var t2 = new clas();
  t2.index();
  f++;
}
console.log("t2: " + (new Date() - d2) + "ms");
In my browser, the first and the second take the same time (about 1ms), but with Node.js I get t1 = 15ms and t2 = 1ms. Why? Why does the first take more time than the second, even though it doesn't initialise my class?
There are several issues here. Your example shows that you have very little experience in benchmarking or system performance, which is why I recommend brushing up on the basics; until you have more of a feel for it, don't try to optimize at all. Optimizing prematurely is generally a bad thing, and when it is done by someone who does not know anything about performance optimization in the first place, the "optimizations" end up being pure noise: some work and some don't, pretty much at random.
For completeness, here are some things that are wrong with your test case:
First of all, 1000 iterations are not enough for a performance test. You want iterations on the order of millions for your CPU to actually spend a measurable amount of time on them.
Secondly, for benchmarking you want to use a high-resolution timer. The reason Node gives you 15ms is that it uses a coarse-grained system timer whose smallest unit is about 15ms, which most probably corresponds to your system's scheduling granularity. (A sketch addressing both points follows after this list.)
Thirdly, regarding your actual question: Allocating a new object inside your loop, if not necessary, is almost always a bad choice for performance. There is a lot going on under the hood, including the possibility of heap allocations. However, in your simple case, most run-times will probably optimize away most of the overhead, for two reasons:
Your test case is too simple, and the optimizer can easily optimize simple code segments, but has a much harder time in real situations.
Your test case is transient. If the optimizer is smart enough, it will detect that, and it will skip the entire loop.
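A minimal sketch of the first two points, assuming a Node environment (process.hrtime.bigint() is Node's high-resolution timer; the iteration count and the clasIndex name are arbitrary illustrations):
// Sketch: many iterations plus a high-resolution timer instead of new Date().
function clasIndex() {                   // stand-in for the index() method from the question
  var i = 0;
  while (i < 1000) {
    i++;
  }
}

var iterations = 1000000;                // millions of iterations, not a thousand
var start = process.hrtime.bigint();     // nanosecond-resolution timer in Node
for (var f = 0; f < iterations; f++) {
  clasIndex();
}
var end = process.hrtime.bigint();
console.log("elapsed: " + Number(end - start) / 1e6 + " ms");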
It is because Node does just-in-time (JIT) compilation optimizations to the code.
By JIT optimization, we mean that Node tries to optimize the code while it is being executed.
So the first call to the function takes more time; Node then realizes that it can optimize this loop, since it does nothing at all, and for all other calls the optimized loop is executed.
So subsequent calls will take less time.
You can try changing the order; the first call will still take more time.
Whereas in some browsers the code is optimized ahead of time (i.e. before running the code).
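To observe the warm-up effect these answers describe, one rough approach (iteration counts are arbitrary, and exact numbers will vary by machine and engine) is to time several consecutive batches of calls and compare the first batch with the later ones:
// Sketch: time consecutive batches of calls to watch the JIT warm up.
function work() {
  var i = 0;
  while (i < 1000) {
    i++;
  }
}

for (var run = 1; run <= 5; run++) {
  var start = Date.now();
  for (var f = 0; f < 100000; f++) {
    work();
  }
  console.log("run " + run + ": " + (Date.now() - start) + "ms");
}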

Why is a frozen "enum" slower?

In order to access the data in an array, I created an enum-like variable to have human-readable identifiers for the fields.
var columns = { first: 0, second: 1 };
var array = ['first', 'second'];
var data = array[columns.first];
When I found out about Object.freeze I wanted to use it for the enum so that it cannot be changed, and I expected the VM to use this information to its advantage.
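The frozen variant would look something like this (a minimal sketch mirroring the snippet above, reusing its array variable):
// The same enum, frozen so its fields can no longer be reassigned or added to.
var frozenColumns = Object.freeze({ first: 0, second: 1 });
var dataFromFrozen = array[frozenColumns.first];   // reads work exactly as before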
As it turns out, the tests get slower on Chrome and Node, but slightly faster on Firefox (compared to direct access by number).
The code is available here: http://jsperf.com/array-access-via-enum
Here are the benchmarks from Node (corresponding to the JSPerf tests):
fixed Number: 12ms
enum: 12ms
frozenEnum: 85ms
Does V8 just not yet have a great implementation, or is there something suboptimal with this approach for my use-case?
I tried your test in Firefox 20, which is massively faster across the board, and IE 10, which is slightly faster and more consistent.
So my answer is: no, V8 does not yet have a great implementation.
According to this bug report, freezing an object currently puts it in "dictionary mode", which is slow.
So instead of improving the performance, it becomes a definite slowdown for "enums"/small arrays.

Javascript: What's the algorithmic performance of 'splice'?

That is, would I be better suited to use some kind of tree or skip list data structure if I need to be calling this function a lot for individual array insertions?
You might consider whether you want to use an object instead; all JavaScript objects (including Array instances) are (highly optimized) sets of key/value pairs with an optional prototype. An implementation should (note I don't say "does") have a reasonably performant hashing algorithm. (Update: That was in 2010. Here in 2018, objects are highly optimized on all significant JavaScript engines.)
Aside from that, the performance of splice is going to vary a lot between implementations (e.g., vendors). This is one reason why "don't optimize prematurely" is even more appropriate advice for JavaScript applications that will run in multiple vendor implementations (web apps, for instance) than it is even for normal programming. Keep your code well modularized and address performance issues if and when they occur.
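As a hedged sketch of that idea (a Map is used here as my own choice; the answer itself talks about plain objects, and this is not a drop-in replacement for every splice use case): if you don't need contiguous numeric indices, a keyed collection avoids shifting elements on insertion entirely:
// Sketch: a Map keyed by ids avoids the element shifting that splice does on an array.
const items = new Map();

items.set("alpha", { value: 1 });
items.set("beta", { value: 2 });

// "Inserting" another entry is just another set(); nothing has to move.
items.set("gamma", { value: 3 });

// Iteration follows insertion order.
for (const [key, item] of items) {
  console.log(key, item.value);
}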
Here's a good rule of thumb, based on tests done in Chrome, Safari and Firefox: Splicing a single value into the middle of an array is roughly half as fast as pushing/shifting a value to one end of the array. (Note: Only tested on an array of size 10,000.)
http://jsperf.com/splicing-a-single-value
That's pretty fast. So, it's unlikely that you need to go so far as to implement another data structure in order to squeeze more performance out.
Update: As eBusiness points out in the comments below, the test performs an expensive copy operation along with each splice, push, and shift, which means that it understates the difference in performance. Here's a revised test that avoids the array copying, so it should be much more accurate: http://jsperf.com/splicing-a-single-value/19
Move single value
// Previous approach, commented out because splice is slow in Firefox:
// tmp = arr[1][i];
// arr[1].splice(i, 1);
// arr[1].splice(end0_1, 0, tmp);

// Manual shift instead of splice: remember the value at i, slide the elements
// between i+1 and end0_1 one slot toward the front, then put the remembered
// value into the freed slot at end0_1.
tmp = arr[1][i];
ii = i;
while (ii < end0_1) {
  arr[1][ii] = arr[1][++ii];
  cycles++;
}
arr[1][end0_1] = tmp;
