size of a function in memory - javascript

I have been exploring the size of various objects in javascript using sizeof.js. When I examine the size of a function I get zero bytes no matter how many lines of code the function contains. For example:
alert(sizeof(
  function(){
    return 1;
  }
));
This returns a size of zero. Even if I give the function many lines of code I get a size of zero bytes. The amount of memory needed to store a string is dependent on the length of the string. But the complexity or size of a function seems to be irrelevant. Why is this so?

If you look at the source code of the sizeof library, you can see that the function type is not handled anywhere, so by default a size of 0 is returned.
/*
sizeof.js
A function to calculate the approximate memory usage of objects
Created by Stephen Morley - http://code.stephenmorley.org/ - and released under
the terms of the CC0 1.0 Universal legal code:
http://creativecommons.org/publicdomain/zero/1.0/legalcode
*/
/* Returns the approximate memory usage, in bytes, of the specified object. The
* parameter is:
*
* object - the object whose size should be determined
*/
function sizeof(object){

  // initialise the list of objects and size
  var objects = [object];
  var size    = 0;

  // loop over the objects
  for (var index = 0; index < objects.length; index ++){

    // determine the type of the object
    switch (typeof objects[index]){

      // the object is a boolean
      case 'boolean': size += 4; break;

      // the object is a number
      case 'number': size += 8; break;

      // the object is a string
      case 'string': size += 2 * objects[index].length; break;

      // the object is a generic object
      case 'object':

        // if the object is not an array, add the sizes of the keys
        if (Object.prototype.toString.call(objects[index]) != '[object Array]'){
          for (var key in objects[index]) size += 2 * key.length;
        }

        // loop over the keys
        for (var key in objects[index]){

          // determine whether the value has already been processed
          var processed = false;
          for (var search = 0; search < objects.length; search ++){
            if (objects[search] === objects[index][key]){
              processed = true;
              break;
            }
          }

          // queue the value to be processed if appropriate
          if (!processed) objects.push(objects[index][key]);

        }

    }

  }

  // return the calculated size
  return size;

}
You can see that size is initialized to 0 and the function type is not handled in the switch statement, so it will always return 0 for a function.

What sizeof.js does is simply sum the sizes of an object's enumerable properties, or take the fixed, known sizes of scalar values. So the result is obviously fairly inaccurate.
It cannot calculate the size of a function because:
It is implementation-dependent
It may even vary at runtime
ECMAScript does not specify how functions must be implemented
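If a rough proxy is enough, one option is to measure the length of the function's source text via Function.prototype.toString. To be clear, this is a hypothetical extension, not part of sizeof.js, and it measures source length only; the engine's actual bytecode, machine code and closure state remain invisible:
// Hypothetical helper: approximate a function's *source* size in bytes.
// JS strings are UTF-16, so we count 2 bytes per character, matching
// how sizeof.js accounts for strings. This says nothing about the real
// memory used by compiled code or captured closure variables.
function approxFunctionSourceSize(fn) {
  return 2 * fn.toString().length;
}

console.log(approxFunctionSourceSize(function () { return 1; })); // > 0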

The website for sizeof.js says: "While JavaScript does not include a mechanism to find the exact memory usage of an object, sizeof.js provides a function that determines the approximate amount of memory used." (Emphasis mine.)
The result is "approximate" because it's limited by the information that the browser makes available, and the browser apparently doesn't provide size information for functions. This is unsurprising, since the code of a function — the actual bytecode or machine code that the browser executes — is an implementation detail that's not visible from within the program. A function is basically a black box.

It is too hard to determine how much memory a function is holding:
all the local variables in that function need to be taken into consideration, and even closures need to be taken into account
as @Arun points out, functions are not taken into account in sizeof.js


Why JavaScript loops do not overflow stack?

I've got a couple of questions about memory allocation in JavaScript
As far as I know, JavaScript primitives are immutable and stored in the stack. If we change the value of a primitive or if we assign a new variable to the old variable, it creates a new memory location for each case.
let x = 2
let y = x
x = 3
console.log(y) // 2
When we run a large loop just as in the example below, 99999999 times 8 bytes are needed on the stack to allocate space for i. So why doesn't the stack overflow?
for (let i = 0; i < 99999999; i++) {
  let x = i
}
If millions of objects are created simultaneously (in a real world app), is the stack enough to hold references to all the objects in the heap?
(V8 developer here.)
When we run a large loop just as in the example below, 99999999 times 8 bytes are needed on the stack to allocate space for i. So why doesn't the stack overflow?
The premise is incorrect. There's only one stack slot for i. Each iteration of the loop reuses it. Whether this loop runs once, or twice, or 100 times, or 9999... times therefore doesn't change how much stack space it needs.
If millions of objects are created simultaneously (in a real world app), is the stack enough to hold references to all the objects in the heap?
The usable stack size is a little less than a megabyte (this is determined by the operating system); on a 64-bit platform that's enough for around 100K pointers, which (very roughly) translates to several tens of thousands of local variables distributed across all functions that currently have an activation (there's no specific number because "it depends"). So it's certainly possible to have more objects on the heap than can be referred to directly from the stack, but this is not typically a limitation that code runs into: for example, you could have a local variable pointing at an array, and that array can in turn refer to thousands of other objects. That way, a single stack slot can keep many objects easily accessible.
(Nitpick: millions of objects are never created "simultaneously", they're always allocated after each other, maybe in quick succession.)
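As a minimal sketch of that point, here one local variable (a single stack slot or register) keeps a million heap objects reachable:
// "arr" is one local reference; the array and the million objects
// it points to all live on the heap, not on the stack.
const arr = [];
for (let i = 0; i < 1000000; i++) {
  arr.push({ value: i });
}
console.log(arr.length); // 1000000 objects reachable through one slot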
the key point here is that the runtime includes a garbage collector
once a memory allocation has been marked as no longer required, the GC is allowed to reallocate it for a different use
this is a little oversimplified, but let's look at what's happening at each stage
let x = 3;
let y = x;
x = 2;
**pseudocode**
allocate x;
set x = 3;
allocate y;
set y = read x;
set x = 2;
so the stack never exceeds a single state that is scoped to 2 allocated variables
for your loop example
for (let i = 0; i < 99999999; i++) {
  let x = i
}
**pseudocode**
label forloop;
  allocate i;
  set i = 0;
  allocate x;
  set x = read i;
  if i < 99999999
    set i = read i + 1
    goto forloop
  else
    deallocate x;
    deallocate i;
here there may be some growth in memory usage from the allocation of the x variable in the loop, but the GC can easily clean this up
now for the issue you haven't considered: functions and recursion
function doSomething(i) {
  if (i > 99999999) {
    return i;
  } else {
    return doSomething(i + 1);
  }
}
doSomething(0)
**pseudocode**
goto main;

label doSomething;
  if read current_state_i < 99999999
    allocate new_state;
    allocate i in new_state;
    set new_state_i = read current_state_i + 1
    stack_push doSomething using new_state
    // stack_pop returns here
    set current_state_return = read new_state_return
  else
    set current_state_return = current_state_i
  deallocate new_state
  goto stack_pop

label main;
  allocate dosomething_state;
  allocate i in dosomething_state;
  set dosomething_state_i = 0;
  stack_push doSomething using dosomething_state
  // stack_pop returns here
for this one, every time you call the function doSomething, a state is added to the stack; this state is a memory slot that stores the passed-in value, any variables created in scope, and the return value
when doSomething completes, this state is marked as no longer required
however, until the code can return, the system has to keep allocating more and more room on the stack for each call's state; this is why recursion is much less safe than normal loops
As commenters have already said, there is no stack involved in your example, or at least not how you imagine there is.
If you want to break your stack, try this instead:
function Overflow(count) {
  console.log('count:', count)
  return Overflow(count + 1)
}
Overflow(0)
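If you're curious how deep the stack actually goes, a rough probe (my sketch; the figure varies by engine, build, and frame size) is to recurse until the engine throws and catch the RangeError:
// Probe the approximate call-stack limit of the current engine.
// The result is only indicative: bigger stack frames overflow sooner.
function probeDepth(n) {
  try {
    return probeDepth(n + 1);
  } catch (e) {
    return n; // RangeError: Maximum call stack size exceeded
  }
}
console.log('approximate max depth:', probeDepth(0));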

Performance of passing object as argument in javascript

Theoretical question: if, for example, I have a big object called Order and it has tons of props: strings, numbers, arrays, nested objects.
I have a function:
function removeShipment(order) {
  order.shipment.forEach(
    // remove shipment action
  );
}
Which means I access only one prop (shipment) but send a big object.
From the perspective of garbage collection and performance, is there a difference between passing Order and passing Order.shipment?
Because objects are passed by reference, Order isn't actually copied into the variable.
As ibrahim mahrir stated in a comment (though I don't know why they didn't post an answer, because OPs are incentivised to pick a "best answer" and the sole, bewildering response was therefore chosen), there is no practical performance difference between passing order to your removeShipment method and passing order.shipment.
This is because JavaScript functions are "pass-by-value" for primitive types, like number and boolean, and use something known as "call-by-sharing" for passing copies of references for objects (like your order and presumably your array of shipments). The entire object is not copied when passed as a parameter, just a copy of a reference to it in memory. Either approach, passing order or order.shipments, is effectively identical.
I did write a couple timing tests for this, but the actual difference is so small that it's exceptionally difficult to write a test that even properly measures it. I'll include my code at the end for completeness' sake, but from my limited testing in Firefox & Chrome, they were practically identical, as expected.
For another question / answer in the same vein as yours (as well as a great video on why "Micro-benchmarking" often doesn't produce correct results) that corroborates what I wrote, see: does size of argument in a javascript function affects its performance?
See this answer regarding the implications of "call-by-sharing" Is JavaScript a pass-by-reference or pass-by-value language?
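To make "call-by-sharing" concrete, here is a small sketch (the function names are mine): mutating a passed object's properties is visible to the caller, while reassigning the parameter is not, because only the reference was copied.
// The parameter receives a copy of the *reference*, not of the object.
function mutate(o) { o.prop = 'changed'; }         // affects the caller's object
function reassign(o) { o = { prop: 'replaced' }; } // only rebinds the local copy

const obj = { prop: 'original' };
mutate(obj);
console.log(obj.prop); // "changed"
reassign(obj);
console.log(obj.prop); // still "changed", not "replaced"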
You didn't specify what "remove shipment action" actually means in practice. You could just do testOrder.shipments = [] if you wanted to remove all shipments from the order object; they'd be garbage collected at some point afterwards if nothing else can reach them. I'm just going to iterate through each one and perform an addition operation as a stub, as I'm afraid everything would otherwise be optimised out.
// "num" between 0 inclusive & 26 exclusive
function letter(num)
{
return String.fromCharCode(num + 65)
}
// Ships have a 3-letter name & a random value between 0 & 1
function getShipment() {
return { "name": "Ship", "val": Math.random() }
}
// "order" has 100 "Shipments"
// As well as 676 other named object properties with random "result" values
// e.g. order.AE => Object { result: 14.9815045239037 }
function getOrder() {
var order = {}
for (var i = 0; i < 26; i++)
for (var j = 0; j < 26; j++) {
order[letter(i) + letter(j)] = { "result": (i+j) * Math.random() }
}
order.shipments = Array.from({length: 100}).map(getShipment)
return order
}
function removeShipmentOrder(order) {
order.shipments.forEach(s => s.val++);
}
function removeShipmentList(shipmentList) {
shipmentList.forEach(s => s.val++);
}
// Timing tests
var testOrder = getOrder();
console.time()
for(var i = 0; i < 1000000; i++)
removeShipmentOrder(testOrder)
console.timeEnd()
// Break in-between tests;
// Running them back-to-back, the second test always took longer.
// I assume it's actually due to some kind of compiler optimisation
var testOrder = getOrder();
console.time()
for(var i = 0; i < 1000000; i++)
removeShipmentList(testOrder.shipments)
console.timeEnd()
I was wondering this myself. I decided to test it. Here is my test code:
var a = "Here's a string value";
var b = 5; // and a number
var c = false;
var object = {
a, b, c
}
var array = [
a, b, c
];
var passObject = (obj) => {
return obj.a.length + obj.b * obj.c ? 2 : 1;
}
var passRawValues = (val_a, val_b, val_c) => {
return val_a.length + val_b * val_c ? 2 : 1;
}
var passArray = (arr) => {
return arr[0].length + arr[1] * arr[2] ? 2 : 1;
}
var x = 0;
Then I called the three functions like this:
x <<= 1;
x ^= passObject(object);
x <<= 1;
x ^= passRawValues(a, b, c);
x <<= 1;
x ^= passArray(array);
The reason it does the bit shifting and XORing is that without it, the function call was optimized away entirely by some JS runtimes. By storing the result of the function, I forced the runtime to actually do the function call.
Results
In Webkit and Chromium, passing an object and passing an array were about the same speed, and passing raw values was a little bit slower. Firefox showed about the same performance ratio but I'm not sure that I trust the results since it was literally ten times faster than Chromium.
Here is a link to my test case on MeasureThat. In case the link doesn't work: it's the same code as above.
Run results (in Chromium on an M1 MacBook Air): about 5 million ops/s for passing an object, versus about 3.7 million ops/s for passing a trio of primitive values.
Explanation
So why is that? Well, JavaScript strictly uses pass-by-value semantics. But when you pass an object to a function, the value that you're passing isn't actually the object itself, but rather a pointer to the object. So the variable storing the pointer gets duplicated, but the contents of what it points to does not. This is also why you can have a function that takes an object and alters its properties and that change will happen outside the function as well, but if you reassign the object, the outside scope will still reference the old object.
For this reason, the size of the passed object is largely irrelevant for performance. If the var object = {...} above is changed to contain a bunch of other data, the operations per second achieved when passing it to the function remains exactly the same, because the only thing changing is the amount of data in the block of memory storing the object. The value being passed to the function isn't bigger just because the object is bigger.
Created a simple test here: https://jsperf.com/passing-object-vs-passing-raw-value
Test results:
in Chrome, passing an object is ~7% slower than passing a raw value
in Firefox, passing an object is ~15% slower than passing a raw value
in IE11, passing an object is ~10% slower than passing a raw value
This is a synthetic test for passing only one variable, so results may differ in other cases

Garbage-collected cache via Javascript WeakMaps

I want to cache large objects in JavaScript. These objects are retrieved by key, and it makes sense to cache them. But they won't fit in memory all at once, so I want them to be garbage collected if needed - the GC obviously knows better.
It is pretty trivial to make such a cache using WeakReference or WeakValueDictionary found in other languages, but in ES6 we have WeakMap instead, where keys are weak.
So, is it possible to make something like a WeakReference or make garbage-collected caches from WeakMap?
There are two scenarios where it's useful for a hash map to be weak (yours seems to fit the second):
One wishes to attach information to an object with a known identity; if the object ceases to exist, the attached information will become meaningless and should likewise cease to exist. JavaScript supports this scenario.
One wishes to merge references to semantically-identical objects, for the purposes of reducing storage requirements and expediting comparisons. Replacing many references to identical large subtrees, for example, with references to the same subtree can allow order-of-magnitude reductions in memory usage and execution time. Unfortunately JavaScript doesn't support this scenario.
In both cases, references in the table will be kept alive as long as they are useful, and will "naturally" become eligible for collection when they become useless. Unfortunately, rather than implementing separate classes for the two usages defined above, the designers of WeakReference made it so it can kinda-sorta be usable for either, though not terribly well.
In cases where the keys define equality to mean reference identity, WeakHashMap will satisfy the first usage pattern, but the second would be meaningless (code which held a reference to an object that was semantically identical to a stored key would hold a reference to the stored key, and wouldn't need the WeakHashMap to give it one). In cases where keys define some other form of equality, it generally doesn't make sense for a table query to return anything other than a reference to the stored object, but the only way to avoid having the stored reference keep the key alive is to use a WeakHashMap<TKey,WeakReference<TKey>> and have the client retrieve the weak reference, retrieve the key reference stored therein, and check whether it's still valid (it could get collected between the time the WeakHashMap returns the WeakReference and the time the WeakReference itself gets examined).
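Here's a small sketch of the first scenario in JavaScript, the one WeakMap was designed for: the attached info lives exactly as long as the object it describes.
// Attach metadata to an object without keeping the object alive.
const metadata = new WeakMap();

let session = { user: 'ada' };
metadata.set(session, { createdAt: Date.now() });

console.log(metadata.get(session)); // { createdAt: ... }

// Once the object becomes unreachable, its WeakMap entry
// becomes eligible for collection along with it.
session = null;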
is it possible to make WeakReference from WeakMap or make a garbage-collected cache from WeakMap?
AFAIK the answer is "no" to both questions.
It's now possible thanks to FinalizationRegistry and WeakRef
Example:
// R stands for the cached value type; it must be an object type,
// since primitives cannot be weakly referenced.
type R = object

const caches: Record<string, WeakRef<R>> = {}

const finalizer = new FinalizationRegistry((key: string) => {
  console.log(`Finalizing cache: ${key}`)
  delete caches[key]
})

function setCache(key: string, value: R) {
  const cache = getCache(key)
  if (cache) {
    if (cache === value) return
    // Unregister the old value so its finalizer won't clear the new entry
    finalizer.unregister(cache)
  }
  caches[key] = new WeakRef(value)
  // The value itself serves as the unregister token
  finalizer.register(value, key, value)
}

function getCache(key: string) {
  return caches[key]?.deref()
}
As the other answers mentioned, unfortunately there's no such thing as a weak reference like there is in Java / C#.
As a workaround, I created this CacheMap that keeps a maximum number of objects around, and tracks their usage over a set period of time so that you:
Always remove the least-accessed object when necessary
Don't create a memory leak.
Here's the code.
"use strict";
/**
* This class keeps a maximum number of items, along with a count of items requested over the past X seconds.
*
* Unfortunately, in JavaScript, there's no way to create a weak map like in Java/C#.
* See https://stackoverflow.com/questions/25567578/garbage-collected-cache-via-javascript-weakmaps
*/
module.exports = class CacheMap {
constructor(maxItems, secondsToKeepACountFor) {
if (maxItems < 1) {
throw new Error("Max items must be a positive integer");
}
if (secondsToKeepACountFor < 1) {
throw new Error("Seconds to keep a count for must be a positive integer");
}
this.itemsToCounts = new WeakMap();
this.internalMap = new Map();
this.maxItems = maxItems;
this.secondsToKeepACountFor = secondsToKeepACountFor;
}
get(key) {
const value = this.internalMap.get(key);
if (value) {
this.itemsToCounts.get(value).push(CacheMap.getCurrentTimeInSeconds());
}
return value;
}
has(key) {
return this.internalMap.has(key);
}
static getCurrentTimeInSeconds() {
return Math.floor(Date.now() / 1000);
}
set(key, value) {
if (this.internalMap.has(key)) {
this.internalMap.set(key, value);
} else {
if (this.internalMap.size === this.maxItems) {
// Figure out who to kick out.
let keys = this.internalMap.keys();
let lowestKey;
let lowestNum = null;
let currentTime = CacheMap.getCurrentTimeInSeconds();
for (let key of keys) {
const value = this.internalMap.get(key);
let totalCounts = this.itemsToCounts.get(value);
let countsSince = totalCounts.filter(count => count > (currentTime - this.secondsToKeepACountFor));
this.itemsToCounts.set(value, totalCounts);
if (lowestNum === null || countsSince.length < lowestNum) {
lowestNum = countsSince.length;
lowestKey = key;
}
}
this.internalMap.delete(lowestKey);
}
this.internalMap.set(key, value);
}
this.itemsToCounts.set(value, []);
}
size() {
return this.internalMap.size;
}
};
And you call it like so:
// Keeps at most 10 client databases in memory and keeps track of their usage over a 10 min period.
let dbCache = new CacheMap(10, 600);

Memory allocation for JavaScript types

I'm trying to optimize the hell out of a mobile app I'm working on, and I'd like to know what takes up the smallest memory footprint (I realize this may vary across browser):
object pointers
boolean literals
number literals
string literals
Which should theoretically take the least amount of memory space?
On V8:
Boolean, number, string, null and void 0 literals take a constant 4/8 bytes of memory for the pointer or the immediate integer value embedded in the pointer. There is no heap allocation for these at all, as a string literal will just be internalized. Exceptions are big integers and doubles, which are boxed with 4/8 bytes for the box pointer and 12-16 bytes for the box. In optimized code local doubles can stay unboxed in registers or on the stack, and an array that always contains exclusively doubles will store them unboxed.
Consider the meat of the generated code for:
function weird(d) {
  var a = "foo";
  var b = "bar";
  var c = "quz";
  if (d) {
    sideEffects(a, b, c);
  }
}
In the generated code, the pointers to the strings are hard-coded and no allocation happens.
Object identities at minimum take 12/24 bytes for plain object, 16/32 bytes for array and 32/72 for function (+ ~30/60 bytes if context object needs to be allocated). You can only get away without heap allocation here if you run bleeding edge v8 and the identity doesn't escape into a function that cannot be inlined.
So for example:
function arr() {
  return [1, 2, 3]
}
The backing array for the values 1,2,3 will be shared as a copy-on-write array by all arrays returned by the function, but a unique identity object still needs to be allocated for each array, and the generated code for that is considerably more complicated. So even with this optimization, if you don't need unique identities for the arrays, just returning an array from an upper scope will avoid allocating the identity every time the function is called:
var a = [1, 2, 3];
function arr() {
  return a;
}
Much simpler.
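One caveat worth spelling out (my note): with the shared version, every call returns the very same array object, so a mutation by one caller is visible to all of them.
// Using the arr() from the snippet above: both calls return the same identity.
var x = arr();
var y = arr();
x.push(4);
console.log(y); // [1, 2, 3, 4]; the mutation is visible to every caller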
If you have memory problems with JS without doing anything seemingly crazy, you are surely creating functions dynamically. Hoist all functions to a level where they don't need to be recreated. As you can see from the above, merely the identity of a function is already very fat, considering that most code can get away with static functions.
So if you want to take anything from this, avoid non-IIFE closures if your goal is performance. Any benchmark that shows they are not a problem is a broken benchmark.
You might have the intuition that additional memory usage doesn't matter when you have 8GB. Well, it wouldn't matter in C. But in JavaScript the memory doesn't just sit there; it is being traced by the garbage collector. The more memory and objects that sit there, the worse the performance.
Just consider running something like:
var l = 1024 * 1024 * 2;
var a = new Array(l);
for (var i = 0, len = a.length; i < len; ++i) {
  a[i] = function(){};
}
With --trace_gc --trace_gc_verbose --print_cumulative_gc_stat. Just look how much work was done for nothing.
Compare with static function:
var l = 1024 * 1024 * 2;
var a = new Array(l);
var fn = function(){};
for (var i = 0, len = a.length; i < len; ++i) {
  a[i] = fn;
}
"Literal" means code (even if not in string serialisation), which is a more complex type and will therefore cost more space than values.
Theoretically, boolean values could take the least amount of space since they fit in a single bit. It's unlikely though that any engine does optimize this. If you want to force this, you can do it manually and juggle around with typed arrays.
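As a sketch of that manual juggling (the names are mine, and the item count is fixed up front), a Uint8Array can pack eight booleans per byte:
// Pack 8 booleans per byte in a Uint8Array.
const N = 1000;
const bits = new Uint8Array(Math.ceil(N / 8));

function setBit(i, on) {
  if (on) bits[i >> 3] |= 1 << (i & 7);
  else bits[i >> 3] &= ~(1 << (i & 7));
}

function getBit(i) {
  return (bits[i >> 3] & (1 << (i & 7))) !== 0;
}

setBit(42, true);
console.log(getBit(42)); // true
console.log(getBit(43)); // false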
However, performance is a practical thing, and you can only test, test, test it. As you already know, there is no definitive cross-browser cross-version answer.

Prepare array for sorting in closure

According to my research and googling, JavaScript seems to lack support for locale-aware sorting and string comparison. There is localeCompare(), but browser-specific differences have been reported, and it is impossible to explicitly set which locale is used (the OS locale is not always the one wanted). There are intentions to add collation support to ECMAScript, but until then we are on our own. And depending on how consistent the results are across browsers, maybe we are on our own forever :(.
I have the following code, which sorts an array alphabetically. It's made with speed in mind, and the ideas come from https://stackoverflow.com/a/11598969/1691517, to which I made some speed improvements.
In this example, the words array has 13 members and the sort function is called 34 times. I want to replace some of the letters in the words array (you don't have to know which replacements are made, because that's not the point of this question). If I make these replacements in the sort function (the one that starts with return function(a, b)), the code is inefficient, because the replacements are made more than once per array member. Of course I can make these replacements outside this closure, I mean before the line words.sort(sortbyalphabet_timo);, but that's not what I want.
Question 1: Is it possible to modify the words-array in between the lines "PREPARATION STARTS" and "PREPARATION ENDS" so that the sort function uses modified words-array?
Question 2: Is it possible to input arguments to the closure so that code between PREPARATION STARTS and PREPARATION ENDS can use them? I have tried this without success:
var caseinsensitive = true;
words.sort( sortbyalphabet_timo(caseinsensitive) );
And here is finally the code example, and the ready to run example is in http://jsfiddle.net/3E7wb/:
var sortbyalphabet_timo = (function() {
  // PREPARATION STARTS
  var i, alphabet = "-0123456789AaÀàÁáÂâÃãÄäBbCcÇçDdEeÈèÉéÊêËëFfGgHhIiÌìÍíÎîÏïJjKkLlMmNnÑñOoÒòÓóÔôÕõÖöPpQqRrSsTtUuÙùÚúÛûÜüVvWwXxYyÝýŸÿZz",
      index = {};
  i = alphabet.length;
  while (i--) index[alphabet.charCodeAt(i)] = i;
  // PREPARATION ENDS
  return function(a, b) {
    var i, len, diff;
    if (typeof a === "string" && typeof b === "string") {
      // compare up to the length of the shorter string; reading past the
      // end of a string would yield NaN index lookups
      len = (a.length < b.length) ? a.length : b.length;
      for (i = 0; i < len; i++) {
        diff = index[a.charCodeAt(i)] - index[b.charCodeAt(i)];
        if (diff !== 0) {
          return diff;
        }
      }
      // sort the shorter first
      return a.length - b.length;
    } else {
      return 0;
    }
  };
})();
var words = ['tauschen', '66', '55', '33', 'täuschen', 'andern', 'ändern', 'Ast', 'Äste', 'dosen', 'dösen', 'Donaudam-0', 'Donaudam-1'];
$('#orig').html(words.toString());
words.sort(sortbyalphabet_timo);
$('#sorted').html(words.toString());
Is it possible to modify the words-array in between the lines "PREPARATION STARTS" and "PREPARATION ENDS" so that the sort function uses modified words-array?
No, not really. You don't have access to the array itself; your function only builds the compare function that is later used when .sort is invoked on the array. If you need to alter the array, you'll need to write a function that takes it as an argument; for example, you could add a method to Array.prototype. It would look like:
function mysort(arr) {
  // Preparation
  // declaration of compare function
  // OR execution of closure to get the compare function
  arr.sort(comparefn);
  return arr;
}
Is it possible to input arguments to the closure so that code between PREPARATION STARTS and PREPARATION ENDS can use them?
Yes, of course; that is the reason to use closures :-) However, you can't use sortbyalphabet_timo(caseinsensitive) with your current code. The closure you have is immediately invoked (it's an IIFE) and returns the compare function, which you pass into sort as in your demo.
If you want sortbyalphabet_timo to be the closure instead of its result, you have to remove the parentheses after it. You can also use arguments there, which are accessible in the whole closure scope (including the compare function):
var sortbyalphabet_timo_closure = function(caseinsensitive) {
  // Preparation, potentially using the arguments
  // Declaration of compare function, potentially using the arguments
  return comparefn;
};
// then use
words.sort(sortbyalphabet_timo_closure(true));
Currently, you are doing this:
var sortbyalphabet_timo_closure = function(/* having no arguments */) {
  // Preparation, potentially using the arguments
  // Declaration of compare function, potentially using the arguments
  return comparefn;
};
var sortbyalphabet_timo = sortbyalphabet_timo_closure();
// then use
words.sort(sortbyalphabet_timo);
…which just caches the result of executing the closure, if you'd need to sort multiple times.
