Why do JavaScript loops not overflow the stack? - javascript

I've got a couple of questions about memory allocation in JavaScript.
As far as I know, JavaScript primitives are immutable and stored on the stack. If we change the value of a primitive, or if we assign one variable to another, a new memory location is created in each case.
let x = 2
let y = x
x = 3
console.log(y) // 2
When we run a large loop like the example below, wouldn't 99999999 × 8 bytes be needed on the stack to allocate space for i? So why doesn't the stack overflow?
for(let i = 0; i<99999999; i++){
let x = i
}
If millions of objects are created simultaneously (in a real world app), is the stack enough to hold references to all the objects in the heap?

(V8 developer here.)
When we run a large loop like the example below, wouldn't 99999999 × 8 bytes be needed on the stack to allocate space for i? So why doesn't the stack overflow?
The premise is incorrect. There's only one stack slot for i. Each iteration of the loop reuses it. Whether this loop runs once, or twice, or 100 times, or 9999... times therefore doesn't change how much stack space it needs.
If millions of objects are created simultaneously (in a real world app), is the stack enough to hold references to all the objects in the heap?
The usable stack size is a little less than a megabyte (this is determined by the operating system); on a 64-bit platform that's enough for around 100K pointers, which (very roughly) translates to several tens of thousands of local variables distributed across all functions that currently have an activation (there's no specific number because "it depends"). So it's certainly possible to have more objects on the heap than can be referred to directly from the stack, but this is not typically a limitation that code runs into: for example, you could have a local variable pointing at an array, and that array can in turn refer to thousands of other objects. That way, a single stack slot can keep many objects easily accessible.
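To make that concrete, here's a rough sketch: a single local variable (one stack slot) keeps a million heap objects reachable through one array reference, and the GC can reclaim them all once that variable goes out of scope.
function buildObjects() {
    const bucket = [];                  // one reference lives on the stack
    for (let i = 0; i < 1000000; i++) {
        bucket.push({ id: i });         // the objects themselves live on the heap
    }
    return bucket.length;
}
console.log(buildObjects());            // 1000000; everything becomes garbage afterwards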
(Nitpick: millions of objects are never created "simultaneously", they're always allocated after each other, maybe in quick succession.)

The key point here is that the JavaScript runtime includes a garbage collector.
Once a memory allocation has been marked as no longer required, the GC is allowed to reclaim it for a different use.
This is a little oversimplified, but let's look at what's happening at each stage:
let x = 3;
let y = x;
x = 2;
**pseudocode**
allocate x;
set x = 3;
allocate y;
set y = read x
set x = 2
So the stack never exceeds a single state scoped to 2 allocated variables.
for your loop example
for(let i = 0; i<99999999; i++){
let x = i
}
**pseudocode**
allocate i;
set i = 0;
label forloop;
allocate x;
set x = read i;
if i < 99999999
    set i = read i + 1
    goto forloop
else
    deallocate x;
    deallocate i;
Here there may be some growth in memory usage from the allocation of the x variable in the loop; however, the GC can easily clean this up.
Now for the issue you haven't considered: functions and recursion.
function doSomething(i){
    if(i > 99999999){
        return i;
    } else {
        return doSomething(i + 1);
    }
}
doSomething(0)
**pseudocode**
goto main;
label doSomething;
if read current_state_i < 99999999
    allocate new_state;
    allocate i in new_state;
    set new_state_i = read current_state_i + 1
    stack_push doSomething using new_state
    //stack_pop returns here
    set current_state_return = read new_state_return
    deallocate new_state
else
    set current_state_return = current_state_i
goto stack_pop
label main;
allocate dosomething_state;
allocate i in dosomething_state;
set dosomething_state_i = 0;
stack_push doSomething using dosomething_state
//stack_pop returns here
For this one, every time you call the function doSomething, a state is added to the stack. This state is a memory slot that stores the passed-in value, any variables created in scope, and the return value.
When doSomething completes, this state is marked as no longer required.
However, until the code can return, the system has to keep allocating more and more room on the stack for each call's state. This is why recursion is much less safe than normal loops.
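For comparison, here is the same computation written as a plain loop; the single set of state variables is reused, so the stack stays flat no matter how many iterations run:
function doSomethingIterative(i) {
    // same result as the recursive version, but only one "state" (i) is ever live
    while (i <= 99999999) {
        i = i + 1;
    }
    return i;
}
doSomethingIterative(0); // 100000000, with no risk of a stack overflow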

As commenters have already said, there is no stack involved in your example, or at least not how you imagine there is.
If you want to break your stack, try this instead:
function Overflow(count){
console.log('count:', count)
return Overflow(count + 1)
}
Overflow(0)
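A variation on this, if you want to see roughly how deep the stack goes before it breaks (the exact limit and error message vary by engine, operating system and configured stack size):
function probeDepth(count) {
    try {
        return probeDepth(count + 1);   // keeps recursing...
    } catch (e) {                       // ...until the engine throws a RangeError
        return count;                   // depth reached before the stack ran out
    }
}
console.log('max depth reached:', probeDepth(0));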

Related

Memory handling vs. performance

I'm building a WebGL game and I've come far enough that I've started to investigate performance bottlenecks. I can see there are a lot of small dips in FPS when GC is going on. Hence, I created a small memory pool handler. I still see a lot of GC after I've started to use it, so I suspect that I've got something wrong.
My memory pool code looks like this:
function Memory(Class) {
    this.Class = Class;
    this.pool = [];

    Memory.prototype.size = function() {
        return this.pool.length;
    };

    Memory.prototype.allocate = function() {
        if (this.pool.length === 0) {
            var x = new this.Class();
            if (typeof(x) == "object") {
                x.size = 0;
                x.push = function(v) { this[this.size++] = v; };
                x.pop = function() { return this[--this.size]; };
            }
            return x;
        } else {
            return this.pool.pop();
        }
    };

    Memory.prototype.free = function(object) {
        if (typeof(object) == "object") {
            object.size = 0;
        }
        this.pool.push(object);
    };

    Memory.prototype.gc = function() {
        this.pool = [];
    };
}
I then use this class like this:
game.mInt = new Memory(Number);
game.mArray = new Memory(Array); // this will have a new push() and size property.
// Allocate a number
var x = game.mInt.allocate();
<do something with it, for loop etc>
// Free variable and push into mInt pool to be reused.
game.mInt.free(x);
My memory handling for an array is based on using myArray.size instead of length, which keeps track of the actual current array size in an overdimensioned array (that has been reused).
So to my actual question:
Using this approach to avoid GC and keep memory during play-time: will the variables I declare with "var" inside functions still be GC'd even though they are returned as new Class() from my Memory function?
Example:
var x = game.mInt.allocate();
for(x = 0; x < 100; x++) {
...
}
x = game.mInt.free(x);
Will this still cause memory garbage collection of the "var" due to some memcopy behind the scenes? (which would make my memory handler useless)
Is my approach good/meaningful in my case with a game that I'm trying to get high FPS in?
So you let JS instantiate a new Object
var x = new this.Class();
then add anonymous methods to this object and therefore make it one of a kind
x.push = function...
x.pop = function...
so that now every place where you use this object is harder for the JS engine to optimize, because these objects now have distinct interfaces/hidden classes (equal ain't identical)
Additionally, every place you use these objects will have to implement additional typecasts to convert the Number object back into a primitive, and typecasts ain't free either. Like, in every iteration of a loop? Maybe even multiple times?
And all this overhead just to store a 64bit float?
game.mInt = new Memory(Number);
And since you cannot change the internal state, and therefore the value, of a Number object, these values are basically static, like their primitive counterparts.
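A quick illustration of that point (the variable here is just for demonstration):
var n = new Number(0);   // what game.mInt.allocate() effectively hands back
typeof n;                // "object", so allocate() also patched size/push/pop onto it
n.valueOf();             // 0, and it will always be 0: a Number object's value is fixed
n = 42;                  // this merely rebinds the variable to a fresh primitive;
                         // the pooled Number object is no longer involved at all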
TL;DR:
Don't pool native types, especially not primitives. These days, JS is pretty good at optimizing the code if it doesn't have to deal with surprises. Surprises like distinct objects with distinct interfaces that first have to be cast to a primitive value before they can be used.
Array resizing ain't for free either. Although JS optimizes this and usually pre-allocates more memory than the Array may need, you may still hit that limit, and therefore enforce the engine to allocate new memory, move all the values to that new memory and free the old one.
I usually use Linked lists for pools.
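For example, a minimal sketch of such a pool (NodePool and _next are just illustrative names, not a library API): freed objects are chained through a next pointer instead of being pushed into an ever-growing array.
function NodePool(create) {
    this.create = create;          // factory that builds a fresh object with a stable shape
    this.head = null;              // top of the free list
}
NodePool.prototype.allocate = function() {
    if (this.head === null) return this.create();
    var obj = this.head;           // pop from the free list
    this.head = obj._next;
    obj._next = null;
    return obj;
};
NodePool.prototype.free = function(obj) {
    obj._next = this.head;         // the object itself is the list node, so no
    this.head = obj;               // backing array ever has to grow or shrink
};
// usage: pool your own objects, constructed with every property up front
var pool = new NodePool(function() { return { x: 0, y: 0, _next: null }; });
var p = pool.allocate();
p.x = 1; p.y = 2;
pool.free(p);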
Don't try to pool everything. Think about which objects can really be reused, and which you are bending to fit into this narrative of "reusability".
I'd say: If you have to do as little as adding a single new property to an object (after it has been constructed), and therefore you'd need to delete this property for clean up, this object should not be pooled.
Hidden Classes: When talking about optimizations in JS you should know this topic at least at a very basic level
summary:
don't add new properties after an object has been constructed.
and to extend this first point, no deletes!
the order in which you add properties matters
changing the value of a property (even its type) doesn't matter! Except when we talk about properties that contain functions (aka. methods). The optimizer may be a bit picky here, when we're talking about functions attached to objects, so avoid it.
And last but not least: Distinguish between optimized and "dictionary" objects. First in your concepts, then in your code.
There's no benefit in trying to fit everything into a pattern with static interfaces (this is JS, not Java). But static types make life easier for the optimizer. So compose the two.
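A small sketch of these rules in practice (using V8's "hidden class" terminology; other engines have similar "shape" concepts):
function GoodPoint(x, y) {
    this.x = x;                    // always the same properties,
    this.y = y;                    // always in the same order: one shared hidden class
}
var a = new GoodPoint(1, 2);
var b = new GoodPoint(3, 4);       // a and b share a hidden class, calls stay optimizable

function badPoint(x, y) {
    var p = {};
    p.x = x;
    if (y !== undefined) p.y = y;  // sometimes adds y, sometimes not:
    return p;                      // callers now see objects with different shapes
}
var c = badPoint(1);
delete c.x;                        // deletes typically push an object into slow
                                   // "dictionary" mode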

Performance of passing object as argument in javascript

Theoretical question: if, for example, I have a big object called Order and it has tons of props: strings, numbers, arrays, nested objects.
I have a function:
function removeShipment(order) {
order.shipment.forEach(
// remove shipment action
);
}
Which means I access only one prop (shipment) but pass a big object.
From the perspective of garbage collection and performance, is there a difference between passing Order and passing Order.shipment?
Because objects are passed by reference, and Order isn't actually copied into the variable.
As ibrahim mahrir stated in a comment (though I don't know why they didn't post an answer, since OPs are incentivised to pick a "best answer" and the sole, bewildering response was therefore chosen), there is no practical performance difference between passing order to your removeShipment method and passing order.shipment.
This is because JavaScript functions are "pass-by-value" for primitive types, like number and boolean, and it uses something known as "call-by-sharing" for passing copies of references for Objects (like your order and assumedly your Array of shipments). The entire object is not copied when passed as a parameter, just a copy of a reference to it in memory. Either approach, passing order or order.shipments, is effectively identical.
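A quick sketch of what call-by-sharing means in practice (with a hypothetical order object, not the OP's):
function mutate(order) {
    order.shipments = [];                // mutates the object the caller also sees
}
function reassign(order) {
    order = { shipments: ['new'] };      // only rebinds the local parameter copy
}

var order = { shipments: [{ id: 1 }, { id: 2 }] };
mutate(order);
console.log(order.shipments.length);     // 0 - the mutation is visible outside
reassign(order);
console.log(order.shipments.length);     // still 0 - the caller's reference is untouched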
I did write a couple timing tests for this, but the actual difference is so small that it's exceptionally difficult to write a test that even properly measures it. I'll include my code at the end for completeness' sake, but from my limited testing in Firefox & Chrome, they were practically identical, as expected.
For another question / answer in the same vein as yours (as well as a great video on why "Micro-benchmarking" often doesn't produce correct results) that corroborates what I wrote, see: does size of argument in a javascript function affects its performance?
See this answer regarding the implications of "call-by-sharing" Is JavaScript a pass-by-reference or pass-by-value language?
You didn't specify what "remove shipment action" actually means in practice. You could just do testOrder.shipments = [] if you just wanted to "remove all shipments" from the order object. They'd be garbage collected at some point after this if nothing else can reach them. I'm just going to iterate through each & perform an addition operation as a stub, as I'm afraid otherwise everything would just be optimised out.
// "num" between 0 inclusive & 26 exclusive
function letter(num)
{
return String.fromCharCode(num + 65)
}
// Shipments have a name & a random value between 0 & 1
function getShipment() {
return { "name": "Ship", "val": Math.random() }
}
// "order" has 100 "Shipments"
// As well as 676 other named object properties with random "result" values
// e.g. order.AE => Object { result: 14.9815045239037 }
function getOrder() {
var order = {}
for (var i = 0; i < 26; i++)
for (var j = 0; j < 26; j++) {
order[letter(i) + letter(j)] = { "result": (i+j) * Math.random() }
}
order.shipments = Array.from({length: 100}).map(getShipment)
return order
}
function removeShipmentOrder(order) {
order.shipments.forEach(s => s.val++);
}
function removeShipmentList(shipmentList) {
shipmentList.forEach(s => s.val++);
}
// Timing tests
var testOrder = getOrder();
console.time()
for(var i = 0; i < 1000000; i++)
removeShipmentOrder(testOrder)
console.timeEnd()
// Break in-between tests;
// Running them back-to-back, the second test always took longer.
// I assume it's actually due to some kind of compiler optimisation
var testOrder = getOrder();
console.time()
for(var i = 0; i < 1000000; i++)
removeShipmentList(testOrder.shipments)
console.timeEnd()
I was wondering this myself. I decided to test it. Here is my test code:
var a = "Here's a string value";
var b = 5; // and a number
var c = false;
var object = {
a, b, c
}
var array = [
a, b, c
];
var passObject = (obj) => {
return obj.a.length + obj.b * obj.c ? 2 : 1;
}
var passRawValues = (val_a, val_b, val_c) => {
return val_a.length + val_b * val_c ? 2 : 1;
}
var passArray = (arr) => {
return arr[0].length + arr[1] * arr[2] ? 2 : 1;
}
var x = 0;
Then I called the three functions like this:
x <<= 1;
x ^= passObject(object);
x <<= 1;
x ^= passRawValues(a, b, c);
x <<= 1;
x ^= passArray(array);
The reason it does the bit shifting and XORing is that without it, the function call was optimized away entirely by some JS runtimes. By storing the result of the function, I forced the runtime to actually do the function call.
Results
In Webkit and Chromium, passing an object and passing an array were about the same speed, and passing raw values was a little bit slower. Firefox showed about the same performance ratio but I'm not sure that I trust the results since it was literally ten times faster than Chromium.
Here is a link to my test case on MeasureThat. In case the link doesn't work: it's the same code as above.
Here's a screenshot of the run results (in Chromium on an M1 Macbook Air):
About 5 million ops/s in Chromium for passing an object, versus about 3.7 million for passing a trio of primitive values.
Explanation
So why is that? Well, JavaScript strictly uses pass-by-value semantics. But when you pass an object to a function, the value that you're passing isn't actually the object itself, but rather a pointer to the object. So the variable storing the pointer gets duplicated, but the contents of what it points to does not. This is also why you can have a function that takes an object and alters its properties and that change will happen outside the function as well, but if you reassign the object, the outside scope will still reference the old object.
For this reason, the size of the passed object is largely irrelevant for performance. If the var object = {...} above is changed to contain a bunch of other data, the operations per second achieved when passing it to the function remains exactly the same, because the only thing changing is the amount of data in the block of memory storing the object. The value being passed to the function isn't bigger just because the object is bigger.
Created a simple test here https://jsperf.com/passing-object-vs-passing-raw-value
Test results:
in Chrome passing an object is ~7% slower than passing raw values
in Firefox passing an object is ~15% slower than passing raw values
in IE11 passing an object is ~10% slower than passing raw values
This is a synthetic test for passing only one variable, so in other cases results may differ.

Incredibly fast JS loop?

Today I got the idea to check the performance of a loop which I have called "scoped for". The idea is simple. This loop has two variables, "i" and "l", which are defined "one scope higher" than the loop itself.
There's nothing else in those two scopes.
I've created jsPerf and got amazing results.
http://jsperf.com/variable-scoped-loop/6
I decided to create my own local test, and the results are even better (1000×1000 loops: an average time of 5s for "standard for" and under 0.01s for "scoped for").
So now I am wondering why this loop is so damn fast. I'm assuming that it's all about V8, but you never know.
So anyone willing to explain?
TL;DR:
Why is this loop so damn fast?
var loop = ( function() {
    var i, l;
    return function( length, action ) {
        for( i = 0, l = length ; i < l ; ++i ) {
            action();
        }
    };
}() );
Unfortunately, there's no magic here: your test is faulty.
For varInFor, the empty function is correctly called 9999^2 times, whereas with varInScope, it's only called 9999 times. That's why it finishes a lot quicker. You can test this easily by making the empty function print something.
The reason why is the fact that the variables i and l are shared between the outer and inner calls of varInScope. So after the inner loop finishes, i is already equal to l and the outer loop immediately exits.
See another JSPerf for a fixed version that initializes the functions every time (to create a new set of variables in the closure) and it is, indeed, up to 20% slower than the "normal" for loop.
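You can reproduce the effect in isolation with a small sketch like this (the counts are illustrative):
var loop = (function() {
    var i, l;                          // shared by every call of the returned function
    return function(length, action) {
        for (i = 0, l = length; i < l; ++i) {
            action();
        }
    };
})();

var calls = 0;
loop(3, function() {
    calls++;
    loop(3, function() { calls++; });  // the inner call leaves the shared i === l
});
console.log(calls);                    // 4, not the 12 you would get with independent loops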

Why is recursion faster than a flat for loop for a summation function on JavaScript?

I'm working in a language that translates to JavaScript. In order to avoid some stack overflows, I'm applying tail call optimization by converting certain functions to for loops. What is surprising is that the conversion is not faster than the recursive version.
http://jsperf.com/sldjf-lajf-lkajf-lkfadsj-f/5
Recursive version:
(function recur(a0,s0){
return a0==0 ? s0 : recur(a0-1, a0+s0)
})(10000,0)
After tail call optimization:
ret3 = void 0;
a1 = 10000;
s2 = 0;
(function(){
while (!ret3) {
a1 == 0
? ret3 = s2
: (a1_tmp$ = a1 - 1 ,
s2_tmp$ = a1 + s2,
a1 = a1_tmp$,
s2 = s2_tmp$);
}
})();
ret3;
After some cleanup using Google Closure Compiler:
ret3 = 0;
a1 = 1E4;
for(s2 = 0; ret3 == 0;)
0 == a1
? ret3 = s2
: (a1_tmp$ = a1 - 1 ,
s2_tmp$ = a1 + s2,
a1 = a1_tmp$,
s2 = s2_tmp$);
c=ret3;
The recursive version is faster than the "optimized" ones! How can this be possible, if the recursive version has to handle thousands of context changes?
There's more to optimising than tail-call optimisation.
For instance, I notice you're using two temporary variables, when all you need is:
s2 += a1;
a1--;
This alone practically reduces the number of operations by a third, resulting in a performance increase of 50%
In the long run, it's important to optimise what operations are being performed before trying to optimise the operations themselves.
EDIT: Here's an updated jsperf
As Kolink says, what your piece of code does is simply add n to the total, reduce n by 1, and loop until n reaches 0.
So just do that:
n = 10000, o = 0; while(n) o += n--;
It's faster and more readable than the recursive version, and of course outputs the same result.
There are not as many context changes inside the recursive version as you expect, since the named function recur is contained in the scope of recur itself; they share the same scope. The reason for that has to do with the way JavaScript engines evaluate scope, and there are plenty of websites out there which explain this topic, so I will not do it here. At a second look you will notice that recur is also a so-called "pure" function, which basically means it never has to leave its own scope as long as the internal execution runs (simply put: until it returns a value). These two facts make it basically fast. I just want to mention here that the first example is the only tail-call-optimized one of all three: a TC optimization can only be done in recursive functions, and this is the only recursive one.
However, a second look at the second example (no pun intended) reveals that the "optimizer" made things worse for you, since it introduced scopes into the formerly pure function by splitting the operation into
variables instead of arguments
a while loop
an IIFE (immediately invoked function expression) that separates the introduced inner and outer variables
Which leads to poorer performance since now the engine has to handle 10000 context changes.
To tell you the truth I do not know why the third example is poorer in performance than the recursive one, so maybe it has to do with:
the browser you use (ever tried another one and compared the results?)
the number of variables
stack frames created by for-loops (never heard of that, though), which would have to do with the first example: the JS engine interprets a pure recursive function until it finds a return statement. If the last thing following the statement is a function call, it evaluates any expressions and variables to pass as arguments, calls the function and throws away the frame
something, only the browser-vendors can truly tell you :)

Memory allocation for JavaScript types

I'm trying to optimize the hell out of a mobile app I'm working on, and I'd like to know what takes up the smallest memory footprint (I realize this may vary across browsers):
object pointers
boolean literals
number literals
string literals
Which should theoretically take the least amount of memory space?
On V8:
Boolean, number, string, null and void 0 literals take a constant 4/8 bytes of memory for the pointer or for the immediate integer value embedded in the pointer. But there is no heap allocation for these at all, as a string literal will just be internalized. Exceptions can be big integers or doubles, which are boxed with 4/8 bytes for the box pointer and 12-16 bytes for the box. In optimized code, local doubles can stay unboxed in registers or on the stack, and an array that always contains exclusively doubles will store them unboxed.
Consider the meat of the generated code for:
function weird(d) {
    var a = "foo";
    var b = "bar";
    var c = "quz";
    if( d ) {
        sideEffects(a, b, c);
    }
}
In the machine code V8 generates for this function, the pointers to the strings are hard-coded and no allocation happens.
Object identities at minimum take 12/24 bytes for plain object, 16/32 bytes for array and 32/72 for function (+ ~30/60 bytes if context object needs to be allocated). You can only get away without heap allocation here if you run bleeding edge v8 and the identity doesn't escape into a function that cannot be inlined.
So for example:
function arr() {
return [1,2,3]
}
The backing array for the values 1,2,3 will be shared as a copy-on-write array by all arrays returned by the function, but a unique identity object still needs to be allocated for each array, and the generated code for this is considerably more complicated. So even with this optimization, if you don't need unique identities for the arrays, just returning an array from an upper scope will avoid allocating the identity every time the function is called:
var a = [1,2,3];
function arr() {
return a;
}
Much simpler.
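The trade-off to keep in mind with that pattern: every caller now gets the same identity, so mutations are shared. A quick sketch:
var a = [1, 2, 3];
function arr() {
    return a;                 // no per-call allocation...
}
var first = arr();
var second = arr();
first.push(4);
console.log(second.length);   // 4 - ...but both callers share one mutable array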
If you have memory problems with js without doing anything seemingly crazy, you are surely creating functions dynamically. Hoist all functions to a level where they don't need to be recreated. As you can see from above, merely the identity for a function is very fat already considering most code can get away with static functions by taking advantage of this.
So if you want to take anything from this, avoid non-IIFE closures if your goal is performance. Any benchmark that shows they are not a problem is a broken benchmark.
You might have the intuition: what does additional memory usage matter when you have 8GB? Well, it wouldn't matter in C. But in JavaScript the memory doesn't just sit there; it is being traced by the garbage collector. The more memory and objects that sit there, the worse the performance.
Just consider running something like:
var l = 1024 * 1024 * 2
var a = new Array(l);
for( var i = 0, len = a.length; i < len; ++i ) {
a[i] = function(){};
}
With --trace_gc --trace_gc_verbose --print_cumulative_gc_stat. Just look how much work was done for nothing.
Compare with static function:
var l = 1024 * 1024 * 2
var a = new Array(l);
var fn = function(){};
for( var i = 0, len = a.length; i < len; ++i ) {
a[i] = fn;
}
"Literal" means code (even if not in string serialisation), which is a more complex type and will therefore cost more space than values.
Theoretically, boolean values could take the least amount of space since they fit in a single bit. It's unlikely though that any engine does optimize this. If you want to force this, you can do it manually and juggle around with typed arrays.
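A sketch of doing that by hand, assuming you genuinely need the density: pack one flag per bit into a Uint8Array.
function BitSet(size) {
    this.bits = new Uint8Array(Math.ceil(size / 8));   // 1 bit per flag
}
BitSet.prototype.set = function(i, value) {
    if (value) this.bits[i >> 3] |= (1 << (i & 7));
    else this.bits[i >> 3] &= ~(1 << (i & 7));
};
BitSet.prototype.get = function(i) {
    return (this.bits[i >> 3] & (1 << (i & 7))) !== 0;
};

var flags = new BitSet(1000);   // about 125 bytes of storage for 1000 booleans
flags.set(42, true);
console.log(flags.get(42));     // true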
However, performance is a practical thing, and you can only test, test, test it. As you already know, there is no definitive cross-browser cross-version answer.
