Garbage-collected cache via JavaScript WeakMaps

I want to cache large objects in JavaScript. These objects are retrieved by key, and it makes sense to cache them. But they won't fit in memory all at once, so I want them to be garbage collected if needed - the GC obviously knows better.
It is pretty trivial to make such a cache using the WeakReference or WeakValueDictionary found in other languages, but in ES6 we only have WeakMap, where the keys are weakly held.
So, is it possible to make something like a WeakReference, or to build a garbage-collected cache out of WeakMap?

There are two scenarios where it's useful for a hash map to be weak (yours seems to fit the second):
One wishes to attach information to an object with a known identity; if the object ceases to exist, the attached information will become meaningless and should likewise cease to exist. JavaScript supports this scenario.
One wishes to merge references to semantically-identical objects, for the purposes of reducing storage requirements and expediting comparisons. Replacing many references to identical large subtrees, for example, with references to the same subtree can allow order-of-magnitude reductions in memory usage and execution time. Unfortunately JavaScript doesn't support this scenario.
In both cases, references in the table will be kept alive as long as they are useful, and will "naturally" become eligible for collection when they become useless. Unfortunately, rather than implementing separate classes for the two usages defined above, the designers of WeakReference made it so it can kinda-sorta be usable for either, though not terribly well.
In cases where the keys define equality to mean reference identity, WeakHashMap will satisfy the first usage pattern, but the second would be meaningless (code which held a reference to an object that was semantically identical to a stored key would hold a reference to the stored key, and wouldn't need the WeakHashMap to give it one).
In cases where keys define some other form of equality, it generally doesn't make sense for a table query to return anything other than a reference to the stored object. But the only way to avoid having the stored reference keep the key alive is to use a WeakHashMap<TKey,WeakReference<TKey>> and have the client retrieve the weak reference, retrieve the key reference stored therein, and check whether it's still valid (it could get collected between the time the WeakHashMap returns the WeakReference and the time the WeakReference itself gets examined).
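To make the two scenarios concrete, here is a small sketch (all names invented for illustration). The first scenario works with today's WeakMap; the second, a weak-value cache, is exactly what WeakMap does not give you:

// Scenario 1: attach data keyed by object identity.
const attached = new WeakMap();
const user = { name: "Ada" };
attached.set(user, { lastSeen: Date.now() });
// Once `user` is unreachable, the WeakMap entry is eligible for collection too.

// Scenario 2 would need weakly-held VALUES: WeakMap keys must be objects
// (a string key throws a TypeError), and a regular Map keeps the large
// value alive for as long as the cache itself exists.
const cache = new Map();
cache.set("big", { payload: new Array(1e6).fill(0) }); // never collected while cached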

is it possible to make WeakReference from WeakMap or make garbage-collected cache from WeakMap?
AFAIK the answer is "no" to both questions.

It's now possible thanks to FinalizationRegistry and WeakRef
Example:
type R = object // the cached value type; must be an object so it can be weakly referenced

const caches: Record<string, WeakRef<R>> = {}

const finalizer = new FinalizationRegistry((key: string) =>
{
    // The value for `key` has been collected; drop its dead WeakRef.
    console.log(`Finalizing cache: ${key}`)
    delete caches[key]
})

function setCache(key: string, value: R)
{
    const cache = getCache(key)
    if (cache)
    {
        if (cache === value) return
        // Detach the old value's cleanup callback so it can't later
        // delete the entry we are about to overwrite.
        finalizer.unregister(cache)
    }
    caches[key] = new WeakRef(value)
    // `value` doubles as the unregister token used above.
    finalizer.register(value, key, value)
}

function getCache(key: string)
{
    return caches[key]?.deref()
}
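A hypothetical usage sketch (loadLargeObject is a made-up stand-in for whatever produces your large objects):

// Cache a large object under a key.
const big = loadLargeObject("user:42")
setCache("user:42", big)
// Later: the object may already have been collected, so re-create it on a miss.
const value = getCache("user:42") ?? loadLargeObject("user:42")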

As the other answers mentioned, unfortunately there's no such thing as a map with weakly-held values in JavaScript, like the weak-reference caches you can build in Java / C#.
As a work around, I created this CacheMap that keeps a maximum number of objects around, and tracks their usage over a set period of time so that you:
Always remove the least accessed object, when necessary
Don't create a memory leak.
Here's the code.
"use strict";
/**
* This class keeps a maximum number of items, along with a count of items requested over the past X seconds.
*
* Unfortunately, in JavaScript, there's no way to create a weak map like in Java/C#.
* See https://stackoverflow.com/questions/25567578/garbage-collected-cache-via-javascript-weakmaps
*/
module.exports = class CacheMap {
constructor(maxItems, secondsToKeepACountFor) {
if (maxItems < 1) {
throw new Error("Max items must be a positive integer");
}
if (secondsToKeepACountFor < 1) {
throw new Error("Seconds to keep a count for must be a positive integer");
}
this.itemsToCounts = new WeakMap();
this.internalMap = new Map();
this.maxItems = maxItems;
this.secondsToKeepACountFor = secondsToKeepACountFor;
}
get(key) {
const value = this.internalMap.get(key);
if (value) {
this.itemsToCounts.get(value).push(CacheMap.getCurrentTimeInSeconds());
}
return value;
}
has(key) {
return this.internalMap.has(key);
}
static getCurrentTimeInSeconds() {
return Math.floor(Date.now() / 1000);
}
set(key, value) {
if (this.internalMap.has(key)) {
this.internalMap.set(key, value);
} else {
if (this.internalMap.size === this.maxItems) {
// Figure out who to kick out.
let keys = this.internalMap.keys();
let lowestKey;
let lowestNum = null;
let currentTime = CacheMap.getCurrentTimeInSeconds();
for (let key of keys) {
const value = this.internalMap.get(key);
let totalCounts = this.itemsToCounts.get(value);
let countsSince = totalCounts.filter(count => count > (currentTime - this.secondsToKeepACountFor));
this.itemsToCounts.set(value, totalCounts);
if (lowestNum === null || countsSince.length < lowestNum) {
lowestNum = countsSince.length;
lowestKey = key;
}
}
this.internalMap.delete(lowestKey);
}
this.internalMap.set(key, value);
}
this.itemsToCounts.set(value, []);
}
size() {
return this.internalMap.size;
}
};
And you call it like so:
// Keeps at most 10 client databases in memory and keeps track of their usage over a 10 min period.
let dbCache = new CacheMap(10, 600);
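A quick sketch of using it from there (the database objects are hypothetical stand-ins):

dbCache.set("client-a", clientADatabase);
dbCache.set("client-b", clientBDatabase);

let db = dbCache.get("client-a"); // returns clientADatabase and records the access
dbCache.has("client-b");          // true; has() does not count as an access
dbCache.size();                   // 2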

Related

Memory handling vs. performance

I'm building a WebGL game and I've come far enough that I've started to investigate performance bottlenecks. I can see there are a lot of small dips in FPS when GC is going on. Hence, I created a small memory pool handler. I still see a lot of GC after I've started to use it, so I suspect that I've got something wrong.
My memory pool code looks like this:
function Memory(Class) {
    this.Class = Class;
    this.pool = [];

    Memory.prototype.size = function() {
        return this.pool.length;
    };

    Memory.prototype.allocate = function() {
        if (this.pool.length === 0) {
            var x = new this.Class();
            if (typeof(x) == "object") {
                x.size = 0;
                x.push = function(v) { this[this.size++] = v; };
                x.pop = function() { return this[--this.size]; };
            }
            return x;
        } else {
            return this.pool.pop();
        }
    };

    Memory.prototype.free = function(object) {
        if (typeof(object) == "object") {
            object.size = 0;
        }
        this.pool.push(object);
    };

    Memory.prototype.gc = function() {
        this.pool = [];
    };
}
I then use this class like this:
game.mInt = new Memory(Number);
game.mArray = new Memory(Array); // this will have a new push() and size property.
// Allocate a number
var x = game.mInt.allocate();
<do something with it, for loop etc>
// Free variable and push into mInt pool to be reused.
game.mInt.free(x);
My memory handling for an array is based on using myArray.size instead of length, which keeps track of the actual current array size in an overdimensioned array (that has been reused).
So to my actual question:
Using this approach to avoid GC and hold on to memory during play-time: will variables I declare with "var" inside functions still be garbage collected, even though they are returned as new Class() from my Memory function?
Example:
var x = game.mInt.allocate();
for(x = 0; x < 100; x++) {
...
}
x = game.mInt.free(x);
Will this still cause memory garbage collection of the "var" due to some memcopy behind the scenes? (which would make my memory handler useless)
Is my approach good/meaningful in my case with a game that I'm trying to get high FPS in?
So you let JS instantiate a new Object
var x = new this.Class();
then add anonymous methods to this object, making it one of a kind
x.push = function...
x.pop = function...
so that now every place you're using this object is harder to optimize by the JS engine, because they have now distinct interfaces/hidden classes (equal ain't identical)
Additionally, every place you use these objects will have to implement additional typecasts to convert the Number object back into a primitive, and typecasts ain't free either. Like, in every iteration of a loop? Maybe even multiple times?
And all this overhead just to store a 64bit float?
game.mInt = new Memory(Number);
And since you cannot change the internal state, and therefore the value, of a Number object, these values are basically static, like their primitive counterparts.
TL;DR:
Don't pool native types, especially not primitives. These days, JS is pretty good at optimizing the code if it doesn't have to deal with surprises. Surprises like distinct objects with distinct interfaces that first have to be cast to a primitive value before they can be used.
Array resizing ain't free either. Although JS optimizes this and usually pre-allocates more memory than the Array may need, you may still hit that limit, forcing the engine to allocate new memory, move all the values to that new memory and free the old one.
I usually use linked lists for pools (see the sketch at the end of this answer).
Don't try to pool everything. Think about which objects can really be reused, and which you are bending to fit into this narrative of "reusability".
I'd say: if you have to do as little as add a single new property to an object (after it has been constructed), and therefore need to delete this property for cleanup, this object should not be pooled.
Hidden Classes: When talking about optimizations in JS you should know this topic at least at a very basic level
summary:
don't add new properties after an object has been constructed.
and to extend this first point, no deletes!
the order in which you add properties matters
changing the value of a property (even its type) doesn't matter! Except when we talk about properties that contain functions (aka. methods). The optimizer may be a bit picky here, when we're talking about functions attached to objects, so avoid it.
And last but not least: distinguish between optimized and "dictionary" objects. First in your concepts, then in your code.
There's no benefit in trying to fit everything into a pattern with static interfaces (this is JS, not Java). But static types make the life easier for the optimizer. So compose the two.
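For illustration, a minimal sketch of the linked-list pool idea mentioned above (Particle and its fields are invented; the one requirement is a next field reserved for the pool's own bookkeeping):

function Particle() {
    this.x = 0;
    this.y = 0;
    this.next = null; // used only while the object sits in the pool
}

function Pool(create) {
    this.create = create; // factory used when the pool is empty
    this.head = null;     // head of the free list
}
Pool.prototype.allocate = function() {
    if (this.head === null) return this.create();
    var obj = this.head;
    this.head = obj.next;
    obj.next = null;
    return obj;
};
Pool.prototype.free = function(obj) {
    obj.next = this.head;
    this.head = obj;
};

var particles = new Pool(function() { return new Particle(); });
var p = particles.allocate();
// ... use p ...
particles.free(p);

No array ever resizes, no properties are added after construction, and every pooled object keeps a single hidden class.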

Performance of passing object as argument in javascript

Theoretical question, if for e.g. I have a big object called Order and it has a tons on props: strings, numbers, arrays, nested objects.
I have a function:
function removeShipment(order) {
order.shipment.forEach(
// remove shipment action
);
}
Which mean I access only one prop (shipment), but send a big object.
From perspective of garbage collection and performance is there a difference, between pass Order and pass Order.shipment?
Because object passed by reference, and don't actually copy Order into variable.
As ibrahim mahrir stated in a comment (though I don't know why they didn't post an answer, because OPs are incentivised to pick a "best answer", and the sole, bewildering response was therefore chosen), there is no practical performance difference between passing order to your removeShipment method and passing order.shipment.
This is because JavaScript functions are "pass-by-value" for primitive types, like number and boolean, and it uses something known as "call-by-sharing" for passing copies of references for Objects (like your order and assumedly your Array of shipments). The entire object is not copied when passed as a parameter, just a copy of a reference to it in memory. Either approach, passing order or order.shipments, is effectively identical.
I did write a couple timing tests for this, but the actual difference is so small that it's exceptionally difficult to write a test that even properly measures it. I'll include my code at the end for completeness' sake, but from my limited testing in Firefox & Chrome, they were practically identical, as expected.
For another question / answer in the same vein as yours (as well as a great video on why "Micro-benchmarking" often doesn't produce correct results) that corroborates what I wrote, see: does size of argument in a javascript function affects its performance?
See this answer regarding the implications of "call-by-sharing" Is JavaScript a pass-by-reference or pass-by-value language?
You didn't specify what "remove shipment action" actually means in practice. You could just do testOrder.shipments = [] if you wanted to "remove all shipments" from the order object; they'd be garbage collected at some point after this if nothing else can reach them. I'm just going to iterate through each & perform an addition operation as a stub, as I'm afraid otherwise everything would just be optimised out.
// "num" between 0 inclusive & 26 exclusive
function letter(num)
{
return String.fromCharCode(num + 65)
}
// Ships have a 3-letter name & a random value between 0 & 1
function getShipment() {
return { "name": "Ship", "val": Math.random() }
}
// "order" has 100 "Shipments"
// As well as 676 other named object properties with random "result" values
// e.g. order.AE => Object { result: 14.9815045239037 }
function getOrder() {
var order = {}
for (var i = 0; i < 26; i++)
for (var j = 0; j < 26; j++) {
order[letter(i) + letter(j)] = { "result": (i+j) * Math.random() }
}
order.shipments = Array.from({length: 100}).map(getShipment)
return order
}
function removeShipmentOrder(order) {
order.shipments.forEach(s => s.val++);
}
function removeShipmentList(shipmentList) {
shipmentList.forEach(s => s.val++);
}
// Timing tests
var testOrder = getOrder();
console.time()
for (var i = 0; i < 1000000; i++)
    removeShipmentOrder(testOrder)
console.timeEnd()

// Break in-between tests;
// Running them back-to-back, the second test always took longer.
// I assume it's actually due to some kind of compiler optimisation
var testOrder = getOrder();
console.time()
for (var i = 0; i < 1000000; i++)
    removeShipmentList(testOrder.shipments)
console.timeEnd()
I was wondering this myself. I decided to test it. Here is my test code:
var a = "Here's a string value";
var b = 5; // and a number
var c = false;
var object = {
a, b, c
}
var array = [
a, b, c
];
var passObject = (obj) => {
return obj.a.length + obj.b * obj.c ? 2 : 1;
}
var passRawValues = (val_a, val_b, val_c) => {
return val_a.length + val_b * val_c ? 2 : 1;
}
var passArray = (arr) => {
return arr[0].length + arr[1] * arr[2] ? 2 : 1;
}
var x = 0;
Then I called the three functions like this:
x <<= 1;
x ^= passObject(object);
x <<= 1;
x ^= passRawValues(a, b, c);
x <<= 1;
x ^= passArray(array);
The reason it does the bit shifting and XORing is that without it, the function call was optimized away entirely by some JS runtimes. By storing the result of the function, I forced the runtime to actually do the function call.
Results
In Webkit and Chromium, passing an object and passing an array were about the same speed, and passing raw values was a little bit slower. Firefox showed about the same performance ratio but I'm not sure that I trust the results since it was literally ten times faster than Chromium.
Here is a link to my test case on MeasureThat. In case the link doesn't work: it's the same code as above.
Here's a summary of the run results (screenshotted in Chromium on an M1 MacBook Air): about 5 million ops/s for passing an object, versus about 3.7 million ops/s for passing a trio of primitive values.
Explanation
So why is that? Well, JavaScript strictly uses pass-by-value semantics. But when you pass an object to a function, the value that you're passing isn't actually the object itself, but rather a pointer to the object. So the variable storing the pointer gets duplicated, but the contents of what it points to do not. This is also why you can have a function that takes an object and alters its properties and that change will happen outside the function as well, but if you reassign the object, the outside scope will still reference the old object.
For this reason, the size of the passed object is largely irrelevant for performance. If the var object = {...} above is changed to contain a bunch of other data, the operations per second achieved when passing it to the function remains exactly the same, because the only thing changing is the amount of data in the block of memory storing the object. The value being passed to the function isn't bigger just because the object is bigger.
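A tiny sketch of the mutate-versus-reassign distinction described above:

function mutate(o) { o.x = 1; }         // changes the shared object
function reassign(o) { o = { x: 2 }; }  // only rebinds the local copy of the reference

var obj = { x: 0 };
mutate(obj);    // obj.x === 1: both references pointed at the same object
reassign(obj);  // obj.x is still 1: the outer reference is untouched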
Created a simple test here https://jsperf.com/passing-object-vs-passing-raw-value
Test results:
in Chrome passing an object is ~7% slower than passing a raw value
in Firefox passing an object is ~15% slower than passing a raw value
in IE11 passing an object is ~10% slower than passing a raw value
This is a synthetic test for passing only one variable, so in other cases results may differ

Hashing JavaScript objects

I have a function that receives a list of JS objects as an argument. I need to store information about those objects in a private variable for future reference. I do not want to stuff a property into the objects themselves, I just want to keep it out of band in a dictionary. I need to be able to lookup metadata for an object in sub-linear time.
For this I need a hash function such that, for any two objects o1 and o2,
hash(o1) !== hash(o2) whenever o1 !== o2.
A perfect example of such a hash function would be the memory address of the object, but I don't think JS exposes that. Is there a way?
Each object reference is different. Why not push the object onto an array? Traversing the array looking for an object reference might still perform better than inspecting each object in a recursive manner to generate a hash key.
function Dictionary() {
    var values = [];

    function contains(x) {
        var i = values.length;
        while (i--) {
            if (values[i] === x) {
                return true;
            }
        }
        return false;
    }

    function count() {
        return values.length;
    }

    function get(i) {
        return (i >= 0 && i < values.length) ? values[i] : null;
    }

    function set(o) {
        if (contains(o)) {
            throw new Error("Object already exists in the Dictionary");
        }
        else {
            return values.push(o) - 1;
        }
    }

    function forEach(callback, context) {
        for (var i = 0, length = values.length; i < length; i++) {
            if (callback.call(context, values[i], i, values) === false) {
                break;
            }
        }
    }

    return {
        get: get,
        set: set,
        contains: contains,
        forEach: forEach,
        count: count
    };
}
And to use it:
var objects = Dictionary();
var key = objects.set({});
var o = objects.get(key);
objects.contains(o); // returns true (contains takes the object, not the key)
objects.forEach(function(obj, key, values) {
    // do stuff
}, this);
objects.count(); // returns 1
objects.set(o); // throws an error
To store metadata about objects, you can use a WeakMap:
WeakMaps are key/value maps in which keys are objects.
Note that this API is still experimental and thus not widely supported yet (see support table). There is a polyfill implementation which makes use of defineProperty to set GUIDs (see details here).
Javascript does not provide direct access to memory (or to the file system for that matter).
You'd probably just want to create your properties/variables within the analysis (hash) function, and then return them to where the function was called from to be stored/persisted for later reference.
Thanks everyone who chipped in to reply. You all have convinced me that what I want to do is currently not possible in JavaScript.
There seem to be two basic compromises that someone with this use case can choose between:
Linear search using ===
=== appears to be the only built-in way to distinguish between two identically-valued objects that have different references. (If you had two objects, o1 and o2, and did a deep comparison and discovered that they were value-identical, you might still want to know if they're reference-identical. Besides === you could do something weird like add a property to o1 and see if it showed up in o2.)
Add a property to the object.
I didn't like this approach because there's no good reason why I should have to expose this information to the outside world. However, a colleague tipped me off to a feature that I didn't know about: Object.defineProperty. With this, I can alleviate my main concerns: first, that my id would show up, unwanted, during object enumeration, and second, that someone could inadvertently alter my id if there were to be a namespace collision.
So, in case anyone comes here wanting the same thing I wanted, I'm putting it up there for the record that I'm going to add a unique id using Object.defineProperty.
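For the record, a sketch of that approach (the __uid name and the counter are invented for illustration):

var nextId = 1;
function getId(obj) {
    if (!Object.prototype.hasOwnProperty.call(obj, "__uid")) {
        Object.defineProperty(obj, "__uid", {
            value: nextId++,
            enumerable: false,   // invisible to for..in and Object.keys
            writable: false,     // can't be clobbered by a namespace collision
            configurable: false
        });
    }
    return obj.__uid;
}

var o1 = {}, o2 = {};
getId(o1) !== getId(o2); // true: usable as a dictionary key for sub-linear lookup

The id is hidden from enumeration and protected against accidental overwrites, which addresses both of the concerns above.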

How to detect when a property is added to a JavaScript object?

var obj = {};
obj.a = 1; // fire event, property "a" added
This question is different from this one, which discusses ways to detect when an already declared property is changed.
This is possible, technically, but since all current JS implementations that I know of are single threaded it won't be very elegant. The only thing I can think of is a brute force interval:
var checkObj = (function(watchObj) {
    var initialMap = {}, allProps = [], prop;
    for (prop in watchObj) {
        if (watchObj.hasOwnProperty(prop)) {
            // make tracer object: basically clone it
            initialMap[prop] = watchObj[prop];
            allProps.push(prop); // keep an array mapper
        }
    }
    return function() {
        var currentProps = [];
        for (prop in watchObj) {
            if (watchObj.hasOwnProperty(prop)) {
                // iterate the object again, compare
                if (watchObj[prop] !== initialMap[prop]) {
                    // type and value check!
                    console.log(initialMap[prop] + ' => ' + watchObj[prop]);
                    // diff found, deal with it whichever way you see fit
                }
                currentProps.push(prop);
            }
        }
        // we're not done yet!
        if (currentProps.length < allProps.length) {
            console.log('some prop was deleted');
            // loop through arrays to find out which one
        }
    };
})(someObjectToTrack);
var watchInterval = setInterval(checkObj, 100); // check every .1 seconds?
That allows you to track an object to some extent, but again, it's quite a lot of work to do this 10 times per second. Who knows, maybe the object changes several times in between the intervals, too. All in all, I feel as though this is a less-than-ideal approach... perhaps it would be easier to compare the string constants of the JSON.stringify'ed object, but that does mean missing out on functions, and (though I filtered them out in this example) prototype properties.
I considered doing something similar at one point, but ended up just using the event handlers that changed the object in question to check for any changes.
Alternatively, you could try creating a DOMElement and attaching an onchange listener to that... sadly, again, functions/methods might prove tricky to track, but at least it won't slow your script down as much as the code above will.
You could count the properties on the object and see if has changed from when you last checked:
How to efficiently count the number of keys/properties of an object in JavaScript?
This is a crude workaround, to use in case you can't find proper support for the feature in the language.
If performance matters and you are in control of the code that changes the objects, create a control class that modifies your objects for you, e.g.
var myObj = new ObjectController({});
myObj.set('field', {});
myObj.set('field.arr', [{hello: true}]);
myObj.set('field.arr.0.hello', false);
var obj = myObj.get('field'); // obj === {arr: [{hello: false}]}
In your set() method, you now have the ability to see where every change occurs in a pretty high-performance fashion, compared with setting an interval and doing regular scans to check for changes.
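A minimal sketch of what such a controller could look like (the implementation below is invented, just enough to support the calls shown above):

function ObjectController(target) {
    this.target = target;
    this.listeners = [];
}
ObjectController.prototype.onChange = function(fn) {
    this.listeners.push(fn);
};
ObjectController.prototype.get = function(path) {
    var keys = path.split(".");
    var obj = this.target;
    for (var i = 0; i < keys.length; i++) {
        obj = obj[keys[i]];
    }
    return obj;
};
ObjectController.prototype.set = function(path, value) {
    var keys = path.split(".");
    var parent = keys.length > 1 ? this.get(keys.slice(0, -1).join(".")) : this.target;
    var last = keys[keys.length - 1];
    var added = !(last in parent);
    parent[last] = value;
    // Every mutation funnels through here, so additions are detected
    // exactly when they happen; no polling required.
    this.listeners.forEach(function(fn) { fn(path, value, added); });
};

For example:

var myObj = new ObjectController({});
myObj.onChange(function(path, value, added) {
    if (added) console.log('property added at ' + path);
});
myObj.set('field', {}); // logs: property added at field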
I do something similar but highly optimised in ForerunnerDB. When you do CRUD operations on the database, change events are fired for specific field paths, allowing data-bound views to be updated when their underlying data changes.

Confused about JavaScript prototypal inheritance with constructors

I've read pages and pages about JavaScript prototypal inheritance, but I haven't found anything that addresses using constructors that involve validation. I've managed to get this constructor to work but I know it's not ideal, i.e. it's not taking advantage of prototypal inheritance:
function Card(value) {
    if (!isNumber(value)) {
        value = Math.floor(Math.random() * 14) + 2;
    }
    this.value = value;
}
var card1 = new Card();
var card2 = new Card();
var card3 = new Card();
This results in three Card objects with random values. However, the way I understand it is that each time I create a new Card object this way, it is copying the constructor code. I should instead use prototypal inheritance, but this doesn't work:
function Card(value) {
    this.value = value;
}

Object.defineProperty(Card, "value", {
    set: function (value) {
        if (!isNumber(value)) {
            value = Math.floor(Math.random() * 14) + 2;
        }
        this.value = value;
    }
});
This doesn't work either:
Card.prototype.setValue = function (value) {
    if (!isNumber(value)) {
        value = Math.floor(Math.random() * 14) + 2;
    }
    this.value = value;
};
For one thing, I can no longer call new Card(). Instead, I have to call var card1 = new Card(); card1.setValue(); This seems very inefficient and ugly to me. But the real problem is it sets the value property of each Card object to the same value. Help!
Edit
Per Bergi's suggestion, I've modified the code as follows:
function Card(value) {
    this.setValue(value);
}

Card.prototype.setValue = function (value) {
    if (!isNumber(value)) {
        value = Math.floor(Math.random() * 14) + 2;
    }
    this.value = value;
};
var card1 = new Card();
var card2 = new Card();
var card3 = new Card();
This results in three Card objects with random values, which is great, and I can call the setValue method later on. It doesn't seem to transfer when I try to extend the class though:
function SpecialCard(suit, value) {
    Card.call(this, value);
    this.suit = suit;
}
var specialCard1 = new SpecialCard("Club");
var specialCard2 = new SpecialCard("Diamond");
var specialCard3 = new SpecialCard("Spade");
I get the error this.setValue is not a function now.
Edit 2
This seems to work:
function SpecialCard(suit, value) {
    Card.call(this, value);
    this.suit = suit;
}

SpecialCard.prototype = Object.create(Card.prototype);
SpecialCard.prototype.constructor = SpecialCard;
Is this a good way to do it?
Final Edit!
Thanks to Bergi and Norguard, I finally landed on this implementation:
function Card(value) {
    this.setValue = function (val) {
        if (!isNumber(val)) {
            val = Math.floor(Math.random() * 14) + 2;
        }
        this.value = val;
    };
    this.setValue(value);
}

function SpecialCard(suit, value) {
    Card.call(this, value);
    this.suit = suit;
}
Bergi helped me identify why I wasn't able to inherit the prototype chain, and Norguard explained why it's better not to muck with the prototype chain at all. I like this approach because the code is cleaner and easier to understand.
the way I understand it is that each time I create a new Card object this way, it is copying the constructor code
No, it is executing it. No problem; your constructor works perfectly, and this is how it should look.
Problems will only arise when you create values. Each invocation of a function creates its own set of values, e.g. private variables (you don't have any). They usually get garbage collected, unless you create another special value, a privileged method, which is an exposed function that holds a reference to the scope it lives in. And yes, every object has its own "copy" of such functions, which is why you should push everything that does not access private variables to the prototype.
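For instance, a made-up illustration of the difference:

function Card(value) {
    var secret = "the " + value;      // a private variable, one per invocation
    this.value = value;
    this.getSecret = function () {    // privileged: every card carries its own copy
        return secret;
    };
}
Card.prototype.describe = function () { // shared: one copy for all cards, no access to `secret`
    return "A card worth " + this.value;
};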
Object.defineProperty( Card, "value", ...
Wait, no. Here you define a property on the constructor, the function Card. This is not what you want. You could define such a property on instances, yes, but note that when evaluating this.value = value; the setter would recursively call itself.
Card.prototype.setValue = function(){ ... }
This looks good. You might need this method on Card objects if you are going to use the validation code later on, for example when changing the value of a Card instance (I don't think so, but I don't know?).
but then I can no longer call new Card()
Oh, surely you can. The method is inherited by all Card instances, and that includes the one on which the constructor is applied (this). You can easily call it from there, so declare your constructor like this:
function Card(val) {
    this.setValue(val);
}
Card.prototype...
It doesn't seem to transfer when I try to extend the class though.
Yes, it does not. Calling the constructor function does not set up the prototype chain. With the new keyword the object with its inheritance is instantiated, then the constructor is applied. With your code, SpecialCards inherit from the SpecialCard.prototype object (which itself inherits from the default Object prototype). Now, we could either just set it to the same object as normal cards, or let it inherit from that one.
SpecialCard.prototype = Card.prototype;
So now every instance inherits from the same object. That means, SpecialCards will have no special methods (from the prototype) that normal Cards don't have... Also, the instanceof operator won't work correctly any more.
So, there is a better solution. Let the SpecialCards prototype object inherit from Card.prototype! This can be done by using Object.create (not supported by all browsers, you might need a workaround), which is designed to do exactly this job:
SpecialCard.prototype = Object.create(Card.prototype, {
    constructor: { value: SpecialCard }
});
SpecialCard.prototype.specialMethod = ... // now possible
In terms of the constructor, each card IS getting its own, unique copy of any methods defined inside of the constructor:
this.doStuffToMyPrivateVars = function () { };
or
var doStuffAsAPrivateFunction = function () {};
The reason they get their own unique copies is because only unique copies of functions, instantiated at the same time as the object itself, are going to have access to the enclosed values.
By putting them in the prototype chain, you:
Limit them to one copy (unless manually-overridden per-instance, after creation)
Remove the method's ability to access ANY private variables
Make it really easy to frustrate friends and family by changing prototype methods/properties on EVERY instance, mid-program.
The reality of the matter is that unless you're planning on making a game that runs on old Blackberries or an ancient iPod Touch, you don't have to worry too much about the extra overhead of the enclosed functions.
Also, in day-to-day JS programming, the extra security from properly-encapsulated objects, plus the extra benefit of the module/revealing-module patterns and sandboxing with closures VASTLY OUTWEIGHS the cost of having redundant copies of methods attached to functions.
Also, if you're really, truly that concerned, you might do well to look at Entity/System patterns, where entities are pretty much just data-objects (with their own unique get/set methods, if privacy is needed)... ...and each of those entities of a particular kind is registered to a system which is custom made for that entity/component-type.
IE: You'd have a Card-Entity to define each card in a deck.
Each card has a CardValueComponent, a CardWorldPositionComponent, a CardRenderableComponent, a CardClickableComponent, et cetera.
CardWorldPositionComponent = { x : 238, y : 600 };
Each of those components is then registered to a system:
CardWorldPositionSystem.register(this.c_worldPos);
Each system holds ALL of the methods which would normally be run on the values stored in the component.
The systems (and not the components) will chat back and forth, as needed, to send data between components shared by the same entity (ie: the Ace of Spades' position/value/image might be queried from different systems so that everybody's kept up to date).
Then instead of updating each object -- traditionally it would be something like:
Game.Update = function (timestamp) { forEach(cards, function (card) { card.update(timestamp); }); };
Game.Draw = function (timestamp, renderer) { forEach(cards, function (card) { card.draw(renderer); }); };
Now it's more like:
CardValuesUpdate();
CardImagesUpdate();
CardPositionsUpdate();
RenderCardsToScreen();
Whereas inside the traditional Update each item takes care of its own input-handling/movement/model-updating/spritesheet-animation/AI/et cetera, here you update each subsystem one after another, and each subsystem goes through every entity that has a registered component in that subsystem, one after another.
So there's a smaller memory-footprint on the number of unique functions.
But it's a very different universe in terms of thinking about how to do it.
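To make the shape of it concrete, a minimal sketch (all names invented):

var CardValueSystem = {
    components: [],
    register: function (c) { this.components.push(c); },
    update: function () {
        // All value logic lives here once, instead of on every card.
        for (var i = 0; i < this.components.length; i++) {
            var c = this.components[i];
            if (typeof c.value !== "number") {
                c.value = Math.floor(Math.random() * 14) + 2;
            }
        }
    }
};

function makeCardEntity() {
    var valueComponent = { value: null };
    CardValueSystem.register(valueComponent);
    return { value: valueComponent };
}

// The game loop updates systems, not entities:
function update() {
    CardValueSystem.update();
    // CardPositionSystem.update(); CardImageSystem.update(); etc.
}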
