Memory handling vs. performance - JavaScript

I'm building a WebGL game and I've gotten far enough that I've started to investigate performance bottlenecks. I can see a lot of small dips in FPS when garbage collection (GC) kicks in. Hence, I created a small memory pool handler. I still see a lot of GC after I started using it, so I suspect I've gotten something wrong.
My memory pool code looks like this:
function Memory(Class) {
    this.Class = Class;
    this.pool = [];

    Memory.prototype.size = function() {
        return this.pool.length;
    };

    Memory.prototype.allocate = function() {
        if (this.pool.length === 0) {
            var x = new this.Class();
            if (typeof(x) == "object") {
                x.size = 0;
                x.push = function(v) { this[this.size++] = v; };
                x.pop = function() { return this[--this.size]; };
            }
            return x;
        } else {
            return this.pool.pop();
        }
    };

    Memory.prototype.free = function(object) {
        if (typeof(object) == "object") {
            object.size = 0;
        }
        this.pool.push(object);
    };

    Memory.prototype.gc = function() {
        this.pool = [];
    };
}
I then use this class like this:
game.mInt = new Memory(Number);
game.mArray = new Memory(Array); // this will have a new push() and size property.
// Allocate a number
var x = game.mInt.allocate();
// <do something with it, for loop etc.>
// Free the variable and push it into the mInt pool to be reused.
game.mInt.free(x);
My memory handling for arrays is based on using myArray.size instead of length; size keeps track of the actual current element count inside an overdimensioned array (one that has been reused).
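To illustrate the idea (a small sketch using the pooled array from the code above):
var arr = game.mArray.allocate();
arr.push(1);
arr.push(2);
// arr.length may still be larger from a previous use,
// but arr.size === 2 reflects the current contents.
game.mArray.free(arr); // resets size to 0; the backing slots stay allocated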
So to my actual question:
I'm using this approach to avoid GC and keep memory alive during play-time. Will the variables I declare with "var" inside functions still be garbage collected, even though they are returned as new Class() from my Memory function?
Example:
var x = game.mInt.allocate();
for (x = 0; x < 100; x++) {
    ...
}
x = game.mInt.free(x);
Will this still cause garbage collection of the "var" due to some memcopy behind the scenes (which would make my memory handler useless)?
Is my approach good/meaningful in my case, a game in which I'm trying to get high FPS?

So you let JS instantiate a new Object
var x = new this.Class();
then add anonymous methods to this object, making each one of a kind
x.push = function...
x.pop = function...
so that now every place you use this object is harder for the JS engine to optimize, because these objects now have distinct interfaces/hidden classes (equal ain't identical)
Additionally, every place you use these objects will have to perform additional typecasts to convert the Number object back into a primitive, and typecasts ain't for free either. In every iteration of a loop? Maybe even multiple times?
And all this overhead just to store a 64bit float?
game.mInt = new Memory(Number);
And since you cannot change the internal state, and therefore the value, of a Number object, these values are basically static, like their primitive counterparts.
TL;DR:
Don't pool native types, especially not primitives. These days, JS is pretty good at optimizing code if it doesn't have to deal with surprises. Surprises like distinct objects with distinct interfaces that first have to be cast to a primitive value before they can be used.
Array resizing ain't for free either. Although JS optimizes this and usually pre-allocates more memory than the Array may need, you may still hit that limit, forcing the engine to allocate new memory, move all the values there, and free the old block.
I usually use linked lists for pools.
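For illustration, here's a minimal sketch of such a linked-list pool (the LinkedPool name and the factory/reset callbacks are mine, not from the original answer):
// A minimal linked-list object pool: freed objects are chained
// through a `next` property, so the pool never grows or shrinks an array.
function LinkedPool(factory, reset) {
    this.factory = factory; // creates a fresh object when the pool is empty
    this.reset = reset;     // restores an object to its initial state
    this.head = null;       // head of the free list
}
LinkedPool.prototype.allocate = function() {
    if (this.head === null) {
        return this.factory();
    }
    var obj = this.head;
    this.head = obj.next;
    obj.next = null;
    return obj;
};
LinkedPool.prototype.free = function(obj) {
    this.reset(obj);
    obj.next = this.head; // chain the freed object onto the free list
    this.head = obj;
};

// Usage: pool objects whose shape (including `next`) is fixed at construction.
var particles = new LinkedPool(
    function() { return { x: 0, y: 0, next: null }; },
    function(p) { p.x = 0; p.y = 0; }
);
var p = particles.allocate();
particles.free(p);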
Don't try to pool everything. Think about which objects can really be reused, and which ones you are bending to fit into this narrative of "reusability".
I'd say: if you have to do as little as add a single new property to an object (after it has been constructed), and therefore would need to delete this property for cleanup, that object should not be pooled.
Hidden classes: when talking about optimizations in JS, you should know this topic at least at a very basic level.
summary:
don't add new properties after an object has been constructed.
and to extend this first point, no deletes!
the order in which you add properties matters
changing the value of a property (even its type) doesn't matter! Except for properties that contain functions (a.k.a. methods); the optimizer can be a bit picky about functions attached to objects, so avoid changing those.
And last but not least: distinguish between optimized and "dictionary" objects. First in your concepts, then in your code.
There's no benefit in trying to fit everything into a pattern with static interfaces (this is JS, not Java). But static types make the life easier for the optimizer. So compose the two.
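A quick sketch of the hidden-class point (the names here are illustrative, not from the original answer): objects that gain properties after construction end up with different hidden classes, so call sites that see both shapes become harder to optimize.
// Same properties, same order: `a` and `b` share one hidden class.
function Vec2(x, y) {
    this.x = x;
    this.y = y;
}
var a = new Vec2(1, 2);
var b = new Vec2(3, 4);

// Adding a property after construction gives `c` a different hidden
// class, even though it "looks" like a Vec2 with an extra field.
var c = new Vec2(5, 6);
c.z = 7; // avoid this: initialize every property in the constructor

function lengthSq(v) {
    return v.x * v.x + v.y * v.y;
}
lengthSq(a); // call site sees one shape: easy to optimize
lengthSq(c); // call site now sees two shapes: polymorphic, slower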

Related

Are there any downsides to changing a value of a JavaScript WeakMap key/value pair without using the WeakMap.set method?

I'm just beginning to learn the use cases for the ES6 WeakMap feature. I've read a lot about it, but I haven't been able to find an answer for this particular question.
I'm implementing a Node.js Minesweeper game for the terminal - just for fun and practice. I've created a class called MineBoard that will store all the necessary data and methods for the game to operate. I want some members, such as _uncoveredCount (number of squares uncovered) and _winningCount (number of squares that need to be uncovered to win), to remain inaccessible to the user. Although this game isn't going into production, I'd still like it to be uncheatable ;) - and the naming convention of an _ prefix to signal private members is not enough.
To do this - I've implemented a WeakMap to store the two above examples in, and other private members.
METHOD 1:
let _mineBoardPrivateData = new WeakMap();

class MineBoard {
    constructor(size, difficulty) {
        this.size = size || 10;
        this.difficulty = difficulty || 1;
        _mineBoardPrivateData.set(this, {});
        _mineBoardPrivateData.get(this).winningCount = someMethodForDeterminingCount();
        _mineBoardPrivateData.get(this).isGameOver = false;
        _mineBoardPrivateData.get(this).uncoveredCount = 0;
        // more code
    }
    generateNewBoard() {
        // code
    }
    uncoverSquare() {
        // more code
        _mineBoardPrivateData.get(this).uncoveredCount++;
    }
    // more code
}
It is much easier for me to do it this way above - and also much easier on the eyes. However, most of the examples of WeakMap implementations I've seen follow the style below.
METHOD 2:
let _winningCount = new WeakMap();
let _isGameOver = new WeakMap();
let _uncoveredCount = new WeakMap();
// more instantiations of WeakMap here

class MineBoard {
    constructor(size, difficulty) {
        this.size = size || 10;
        this.difficulty = difficulty || 1;
        _winningCount.set(this, someMethodForDeterminingWinningCount());
        _isGameOver.set(this, false);
        _uncoveredCount.set(this, 0);
        // more private data assignment here
    }
    generateNewBoard() {
        // code
    }
    uncoverSquare() {
        // more code
        _uncoveredCount.set(this, _uncoveredCount.get(this) + 1);
    }
    // more code
}
So my question is - are there any drawbacks to using Method 1 that I am not seeing? It makes for the simpler solution and, IMO, easier-to-read and easier-to-follow code.
Thank you!
You can (and should) use the first method. There are no drawbacks to using it, and it is probably more efficient anyway, since you're only creating a single object per MineBoard instance. It also means that adding/removing private properties is much easier. Additionally, from inside your MineBoard instance you can easily iterate over all the private properties using just Object.keys(_mineBoardPrivateData.get(this)) (try doing that with the second method).
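For example, a minimal self-contained sketch of that single-object pattern (the Counter class is illustrative, not from the question):
const _priv = new WeakMap();

class Counter {
    constructor() {
        // One plain object holds all private state for this instance.
        _priv.set(this, { count: 0, limit: 10 });
    }
    increment() {
        const priv = _priv.get(this);
        priv.count++;
        return priv.count <= priv.limit;
    }
    debugKeys() {
        // A single lookup enumerates every private property at once.
        return Object.keys(_priv.get(this)); // ["count", "limit"]
    }
}

const c = new Counter();
c.increment();
console.log(c.debugKeys());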

Garbage-collected cache via Javascript WeakMaps

I want to cache large objects in JavaScript. These objects are retrieved by key, and it makes sense to cache them. But they won't fit in memory all at once, so I want them to be garbage collected if needed - the GC obviously knows better.
It is pretty trivial to make such a cache using WeakReference or WeakValueDictionary found in other languages, but in ES6 we have WeakMap instead, where keys are weak.
So, is it possible to make something like a WeakReference or make garbage-collected caches from WeakMap?
There are two scenarios where it's useful for a hash map to be weak (yours seems to fit the second):
One wishes to attach information to an object with a known identity; if the object ceases to exist, the attached information will become meaningless and should likewise cease to exist. JavaScript supports this scenario.
One wishes to merge references to semantically-identical objects, for the purposes of reducing storage requirements and expediting comparisons. Replacing many references to identical large subtrees, for example, with references to the same subtree can allow order-of-magnitude reductions in memory usage and execution time. Unfortunately JavaScript doesn't support this scenario.
In both cases, references in the table will be kept alive as long as they are useful, and will "naturally" become eligible for collection when they become useless. Unfortunately, rather than implementing separate classes for the two usages defined above, the designers of WeakReference made it so it can kinda-sorta be usable for either, though not terribly well.
In cases where the keys define equality to mean reference identity, WeakHashMap will satisfy the first usage pattern, but the second would be meaningless (code which held a reference to an object that was semantically identical to a stored key would hold a reference to the stored key, and wouldn't need the WeakHashMap to give it one). In cases where keys define some other form of equality, it generally doesn't make sense for a table query to return anything other than a reference to the stored object, but the only way to avoid having the stored reference keep the key alive is to use a WeakHashMap<TKey,WeakReference<TKey>> and have the client retrieve the weak reference, retrieve the key reference stored therein, and check whether it's still valid (it could get collected between the time the WeakHashMap returns the WeakReference and the time the WeakReference itself gets examined).
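The first scenario is what WeakMap covers in JS; a small illustrative sketch (names are mine):
// Attach metadata to an object without keeping that object alive.
const metadata = new WeakMap();

let node = { id: 42 };
metadata.set(node, { lastSeen: Date.now() });

console.log(metadata.get(node)); // { lastSeen: ... }
node = null; // once the object is unreachable, the entry can be collected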
Is it possible to make WeakReference from WeakMap or make a garbage-collected cache from WeakMap?
AFAIK the answer is "no" to both questions.
It's now possible thanks to FinalizationRegistry and WeakRef
Example:
// `R` stands for whatever object type you cache; declared here so the
// snippet compiles on its own.
type R = object

const caches: Record<string, WeakRef<R>> = {}

const finalizer = new FinalizationRegistry((key: string) =>
{
    console.log(`Finalizing cache: ${key}`)
    delete caches[key]
})

function setCache(key: string, value: R)
{
    const cache = getCache(key)
    if (cache)
    {
        if (cache === value) return
        // The previous value was registered with itself as unregister token.
        finalizer.unregister(cache)
    }
    caches[key] = new WeakRef(value)
    // The third argument is the unregister token (the value itself).
    finalizer.register(value, key, value)
}

function getCache(key: string)
{
    return caches[key]?.deref()
}
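Hypothetical usage (the key name and object are mine):
const big = { data: new Array(1000000).fill(0) }
setCache("blob:1", big)
getCache("blob:1") // => big, as long as it hasn't been collected
// Once `big` becomes unreachable, the GC may reclaim it, and the
// FinalizationRegistry callback removes the stale entry from `caches`.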
As the other answers mentioned, unfortunately there's no such thing as a map with weakly held values, like there is in Java / C#.
As a workaround, I created this CacheMap that keeps a maximum number of objects around and tracks their usage over a set period of time, so that you:
Always remove the least-accessed object, when necessary
Don't create a memory leak.
Here's the code.
"use strict";
/**
* This class keeps a maximum number of items, along with a count of items requested over the past X seconds.
*
* Unfortunately, in JavaScript, there's no way to create a weak map like in Java/C#.
* See https://stackoverflow.com/questions/25567578/garbage-collected-cache-via-javascript-weakmaps
*/
module.exports = class CacheMap {
constructor(maxItems, secondsToKeepACountFor) {
if (maxItems < 1) {
throw new Error("Max items must be a positive integer");
}
if (secondsToKeepACountFor < 1) {
throw new Error("Seconds to keep a count for must be a positive integer");
}
this.itemsToCounts = new WeakMap();
this.internalMap = new Map();
this.maxItems = maxItems;
this.secondsToKeepACountFor = secondsToKeepACountFor;
}
get(key) {
const value = this.internalMap.get(key);
if (value) {
this.itemsToCounts.get(value).push(CacheMap.getCurrentTimeInSeconds());
}
return value;
}
has(key) {
return this.internalMap.has(key);
}
static getCurrentTimeInSeconds() {
return Math.floor(Date.now() / 1000);
}
set(key, value) {
if (this.internalMap.has(key)) {
this.internalMap.set(key, value);
} else {
if (this.internalMap.size === this.maxItems) {
// Figure out who to kick out.
let keys = this.internalMap.keys();
let lowestKey;
let lowestNum = null;
let currentTime = CacheMap.getCurrentTimeInSeconds();
for (let key of keys) {
const value = this.internalMap.get(key);
let totalCounts = this.itemsToCounts.get(value);
let countsSince = totalCounts.filter(count => count > (currentTime - this.secondsToKeepACountFor));
this.itemsToCounts.set(value, totalCounts);
if (lowestNum === null || countsSince.length < lowestNum) {
lowestNum = countsSince.length;
lowestKey = key;
}
}
this.internalMap.delete(lowestKey);
}
this.internalMap.set(key, value);
}
this.itemsToCounts.set(value, []);
}
size() {
return this.internalMap.size;
}
};
And you call it like so:
// Keeps at most 10 client databases in memory and keeps track of their usage over a 10 min period.
let dbCache = new CacheMap(10, 600);

JavaScript memory: return new object vs. store in result

I've been wondering about memory usage and garbage collection in JavaScript when returning new (temporary) objects. Say I have a function that returns multiple values:
function foo() {
    return { a: 0, b: 1, c: 2 };
}
This creates a new Object every time the function is called. Now, if I'm only interested in the return values, I don't really need that object, so I thought of a different pattern:
function bar(result) {
    result.a = 0;
    result.b = 1;
    result.c = 2;
}
In theory, bar should save some memory over foo. In fact, I've seen this pattern a few times in real-world applications; e.g., in three.js there are "optionalTarget" parameters in the Ray class, so the functions don't have to create new vectors internally and instead set the result on the target (Ray.js source).
However, after fiddling around with custom benchmarks, I cannot find any differences in practice. I'm a total novice at benchmarking JavaScript, so the question is: can you create a convincing benchmark to show the difference, if any? If there is none, does this mean the garbage collector is smart enough to throw away the object immediately?
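For reference, a minimal benchmark sketch along these lines (entirely illustrative; modern engines may eliminate the allocation in foo via escape analysis, so results vary by engine and version):
// Compare allocating a fresh result object per call vs. reusing one.
function foo() {
    return { a: 0, b: 1, c: 2 };
}
function bar(result) {
    result.a = 0;
    result.b = 1;
    result.c = 2;
}

function bench(label, fn) {
    var t0 = Date.now();
    fn();
    console.log(label, Date.now() - t0, "ms");
}

var N = 10000000;
var sink = 0; // consume results so the loops aren't dead-code eliminated

bench("foo (new object per call)", function() {
    for (var i = 0; i < N; i++) {
        sink += foo().a;
    }
});

bench("bar (reused result object)", function() {
    var result = { a: 0, b: 0, c: 0 };
    for (var i = 0; i < N; i++) {
        bar(result);
        sink += result.a;
    }
});

console.log(sink);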

Memory allocation for JavaScript types

I'm trying to optimize the hell out of a mobile app I'm working on, and I'd like to know what takes up the smallest memory footprint (I realize this may vary across browsers):
object pointers
boolean literals
number literals
string literals
Which should theoretically take the least amount of memory space?
On V8:
Boolean, number, string, null and void 0 literals take a constant 4/8 bytes of memory for the pointer, or for the immediate integer value embedded in the pointer. There is no heap allocation for these at all, as a string literal will just be internalized. Exceptions can be big integers or doubles, which are boxed with 4/8 bytes for the box pointer and 12-16 bytes for the box. In optimized code, local doubles can stay unboxed in registers or on the stack, and an array that always contains exclusively doubles will store them unboxed.
Consider the meat of the generated code for:
function weird(d) {
    var a = "foo";
    var b = "bar";
    var c = "quz";
    if (d) {
        sideEffects(a, b, c);
    }
}
If you inspect the generated machine code, you can see that the pointers to the strings are hard-coded and no allocation happens.
Object identities at minimum take 12/24 bytes for a plain object, 16/32 bytes for an array, and 32/72 for a function (plus ~30/60 bytes if a context object needs to be allocated). You can only get away without heap allocation here if you run bleeding-edge V8 and the identity doesn't escape into a function that cannot be inlined.
So for example:
function arr() {
    return [1, 2, 3];
}
The backing array for the values 1,2,3 will be shared as a copy-on-write array by all arrays returned by the function, but a unique identity object for each array still needs to be allocated; the generated code for that is surprisingly complicated. So even with this optimization, if you don't need unique identities for the arrays, just returning an array from an upper scope will avoid allocating the identity every time the function is called:
var a = [1, 2, 3];
function arr() {
    return a;
}
Much simpler.
If you have memory problems in JS without doing anything seemingly crazy, you are surely creating functions dynamically. Hoist all functions to a level where they don't need to be recreated. As you can see from the numbers above, merely the identity of a function is already very fat, considering that most code can get away with static functions by taking advantage of this.
So if you want to take anything from this, avoid non-IIFE closures if your goal is performance. Any benchmark that shows they are not a problem is a broken benchmark.
You might have the intuition: what does additional memory usage matter when you have 8 GB? Well, it wouldn't matter in C. But in JavaScript the memory doesn't just sit there; it is being traced by the garbage collector. The more memory and objects that sit there, the worse the performance.
Just consider running something like:
var l = 1024 * 1024 * 2;
var a = new Array(l);
for (var i = 0, len = a.length; i < len; ++i) {
    a[i] = function() {};
}
Run it with --trace_gc --trace_gc_verbose --print_cumulative_gc_stat and just look how much work is done for nothing.
Compare with a static function:
var l = 1024 * 1024 * 2;
var a = new Array(l);
var fn = function() {};
for (var i = 0, len = a.length; i < len; ++i) {
    a[i] = fn;
}
"Literal" means code (even if not in string serialisation), which is a more complex type and will therefore cost more space than values.
Theoretically, boolean values could take the least amount of space, since they fit in a single bit. It's unlikely, though, that any engine optimizes for this. If you want to force it, you can do it manually and juggle bits around with typed arrays.
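For instance, a minimal sketch of packing booleans by hand with a typed array (the BitSet name is mine):
// Pack one boolean per bit into a Uint8Array: 8 flags per byte
// instead of one tagged value per flag.
function BitSet(size) {
    this.bits = new Uint8Array(Math.ceil(size / 8));
}
BitSet.prototype.set = function(i, on) {
    if (on) this.bits[i >> 3] |= 1 << (i & 7);
    else this.bits[i >> 3] &= ~(1 << (i & 7));
};
BitSet.prototype.get = function(i) {
    return (this.bits[i >> 3] & (1 << (i & 7))) !== 0;
};

var flags = new BitSet(1000); // 1000 flags in 125 bytes
flags.set(42, true);
console.log(flags.get(42)); // true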
However, performance is a practical thing, and you can only test, test, test it. As you already know, there is no definitive cross-browser cross-version answer.

How to detect when a property is added to a JavaScript object?

var obj = {};
obj.a = 1; // fire event, property "a" added
This question is different from this one, where ways to detect when an already-declared property is changed are discussed.
This is possible, technically, but since all current JS implementations that I know of are single-threaded, it won't be very elegant. The only thing I can think of is a brute-force interval:
var checkObj = (function(watchObj)
{
    var initialMap = {}, allProps = [], prop;
    for (prop in watchObj)
    {
        if (watchObj.hasOwnProperty(prop))
        {//make tracer object: basically clone it
            initialMap[prop] = watchObj[prop];
            allProps.push(prop);//keep an array mapper
        }
    }
    return function()
    {
        var currentProps = [];
        for (prop in watchObj)
        {
            if (watchObj.hasOwnProperty(prop))
            {//iterate the object again, compare
                if (watchObj[prop] !== initialMap[prop])
                {//type and value check!
                    console.log(initialMap[prop] + ' => ' + watchObj[prop]);
                    //diff found, deal with it whichever way you see fit
                }
                currentProps.push(prop);
            }
        }
        //we're not done yet!
        if (currentProps.length < allProps.length)
        {
            console.log('some prop was deleted');
            //loop through arrays to find out which one
        }
    };
})(someObjectToTrack);
var watchInterval = setInterval(checkObj, 100);//check every .1 seconds?
That allows you to track an object to some extent, but again, it's quite a lot of work to do this 10 times per second. Who knows, maybe the object changes several times in between the intervals, too. All in all, I feel as though this is a less-than-ideal approach... perhaps it would be easier to compare the strings of the JSON.stringify'ed object, but that does mean missing out on functions and (though I filtered them out in this example) prototype properties.
I have considered doing something similar at one point, but ended up just using my event handlers that changed the object in question to check for any changes.
Alternatively, you could also try creating a DOMElement and attaching an onchange listener to it... sadly, again, functions/methods might prove tricky to track, but at least it won't slow your script down as much as the code above will.
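A minimal sketch of the JSON.stringify comparison idea mentioned above (illustrative only; it misses functions and non-JSON values):
// Detect any structural change by comparing serialized snapshots.
function watchBySnapshot(obj, onChange, intervalMs) {
    var last = JSON.stringify(obj);
    return setInterval(function() {
        var current = JSON.stringify(obj);
        if (current !== last) {
            onChange(last, current);
            last = current;
        }
    }, intervalMs);
}

var tracked = { a: 1 };
watchBySnapshot(tracked, function(before, after) {
    console.log('changed:', before, '=>', after);
}, 100);
tracked.b = 2; // reported on the next tick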
You could count the properties on the object and see if that count has changed from when you last checked:
How to efficiently count the number of keys/properties of an object in JavaScript?
This is a crude workaround, to use in case you can't find proper support for the feature in the language.
If performance matters and you are in control of the code that changes the objects, create a control class that modifies your objects for you, e.g.
var myObj = new ObjectController({});
myObj.set('field', {});
myObj.set('field.arr', [{hello: true}]);
myObj.set('field.arr.0.hello', false);
var obj = myObj.get('field'); // obj === {arr: [{hello: false}]}
In your set() method, you now have the ability to see where every change occurs in a pretty high-performance fashion, compared with setting an interval and doing regular scans to check for changes.
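A minimal sketch of what such a controller class could look like (the implementation below is mine, not from the original answer):
// Wraps a target object; all mutations go through set(), which can
// fire change events for the exact path that was touched.
function ObjectController(target) {
    this.target = target;
    this.listeners = [];
}

ObjectController.prototype.onChange = function(fn) {
    this.listeners.push(fn);
};

ObjectController.prototype.set = function(path, value) {
    var keys = path.split('.');
    var obj = this.target;
    for (var i = 0; i < keys.length - 1; i++) {
        obj = obj[keys[i]];
    }
    var last = keys[keys.length - 1];
    var isNew = !(last in obj); // property addition detected here
    obj[last] = value;
    this.listeners.forEach(function(fn) { fn(path, value, isNew); });
};

ObjectController.prototype.get = function(path) {
    return path.split('.').reduce(function(o, k) { return o[k]; }, this.target);
};

// Usage:
var myObj = new ObjectController({});
myObj.onChange(function(path, value, isNew) {
    console.log((isNew ? 'added ' : 'changed ') + path, value);
});
myObj.set('field', {});
myObj.set('field.arr', [{ hello: true }]);
myObj.set('field.arr.0.hello', false);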
I do something similar but highly optimised in ForerunnerDB. When you do CRUD operations on the database, change events are fired for specific field paths, allowing data-bound views to be updated when their underlying data changes.
