I had a question. The following code has a memory leak:
let inMemoryCache = {};
app.get("/hello", (req, resp) => {
  const unixTimeStamp = Date.now(); // a new key on every request
  inMemoryCache[unixTimeStamp] = { "foo": "bar" };
  resp.json({});
});
Isn't that right? The inMemoryCache object will keep growing with each request until it hits the ceiling and the heap size explodes.
What, then, is the best way to implement in-memory caches?
Manage the size of your cache somehow. For example, you could keep a fixed number of entries and, once the cache has reached its maximum size and a new entry comes in, delete the oldest (or least frequently used, or least recently used, or whatever selection criterion fits your use case); one way to do that is sketched below.
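For instance, here is a minimal sketch of the LRU variant using a Map (LruCache is a hypothetical helper, not something from the question's code):
// Minimal LRU cache sketch: a Map preserves insertion order,
// so the first key is always the least recently used one.
class LruCache {
    constructor(maxSize = 1000) {
        this.maxSize = maxSize;
        this.map = new Map();
    }
    get(key) {
        if (!this.map.has(key)) return undefined;
        const value = this.map.get(key);
        this.map.delete(key);      // move the key to the "most recently used" end
        this.map.set(key, value);
        return value;
    }
    set(key, value) {
        if (this.map.has(key)) this.map.delete(key);
        this.map.set(key, value);
        if (this.map.size > this.maxSize) {
            // evict the least recently used entry (the first key in insertion order)
            this.map.delete(this.map.keys().next().value);
        }
    }
}

// Usage in the route above, capped at 500 entries:
// const inMemoryCache = new LruCache(500);
// inMemoryCache.set(Date.now(), { foo: "bar" });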
Caching is harder than it seems :-)
I am trying to understand how to prevent memory leaks when using timers. I believe the memory graph in the Chrome DevTools Performance tab (I am still learning about this feature) is showing me a bad pattern in terms of memory management, namely a "sawtooth" pattern each time a timer fires.
I have a simple test case (1. right-click 'timer-adder.html', 2. click 'Preview' to get a preview of the loaded HTML) that involves using a constructor function to create objects that work as timers; in other words, for each update the DOM is changed inside a setInterval callback.
//...
start: function (initialTime, prefixedId) {
    let $display = document.getElementById(prefixedId + '-display');
    if ($display.style.display === 'none') { $display.style.display = 'block'; }
    $display.textContent = TimerHandler.cycle(initialTime);
    $display = null;
    return setInterval(function () {
        this.initialTime--;
        (this.initialTime === 0 && this.pause());
        document.getElementById(prefixedId + '-display').textContent = TimerHandler.cycle(this.initialTime);
    }.bind(this), 1000);
},
//...
Attempts to minimize leakage, although not directly related to what I believe to be the problem at hand:
assigning $display to null to make it collectible (garbage collection).
clearing the interval.
What I identify as a bad pattern:
[Screenshot on imgur.com: DevTools memory graph showing the sawtooth pattern (zoomable)]
Would fair memory usage translate to DevTools showing a horizontal line? What would be a safer approach to avoid this side effect? Because, as the test stands right now, I think that in the long run multiple active timers would overload memory and thus result in a noticeable decrease in performance.
PS: As a newcomer, I hope this is a fair presentation of the problem. Thank you.
As you can see from the last part of the graph, the garbage collector was able to free that memory, so there is no memory leak.
Garbage collection is a complicated task; that's why V8 tries to minimize the number of stop-the-world collections. In other words: it only cleans up memory if it needs to.
In your case it doesn't need to. There is still enough space available.
If (a noticeable amount of) memory persists across GC calls, then you do have a memory leak.
assigning $display to null to make it collectible (garbage collection).
That's just unnecessary. $display is not accessed in the inner function, therefore it is eligible for garbage collection once start has executed. Reassigning it doesn't change that.
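To illustrate the point with a small sketch (hypothetical code, not the poster's): the interval callback only keeps alive the variables it actually references, so a setup-only variable dies with the outer call whether or not you null it.
function startTimer(prefixedId) {
    // Used only during setup; the callback below never references it,
    // so it becomes unreachable as soon as startTimer returns.
    let $display = document.getElementById(prefixedId + '-display');
    $display.textContent = '00:00';

    return setInterval(function () {
        // Looks the element up again instead of closing over $display.
        document.getElementById(prefixedId + '-display').textContent = new Date().toLocaleTimeString();
    }, 1000);
}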
I'm creating a bot for an online dynamic game. In this case, "dynamic" means the in-game hero can move around and the background changes while moving. Global variables such as the monsters change dynamically while moving as well.
My bot uses Puppeteer. Since I need this monsters object, I have a function that gets the monsters from the page context every 2-3 seconds (randomized for anti-detection).
This solution is far from perfect. The two main downsides are:
When my bot kills a monster and wants to move on to the next one, it still sees the monster that was just killed, because the next refresh is, for example, 1500 ms away at that point in time.
The performance of fetching the monsters is bad.
To solve the first one, I could simply call the function that downloads the monsters every time after killing one. On the other hand, that would make the second downside even worse, because I would be fetching the monsters far more often, and that is already slow.
It all comes down to the second issue, which is performance. You may ask how I know the performance is bad.
When the hero is moving it's relatively smooth, but when the monsters are being downloaded I see a micro lag: for a fraction of a second the hero stops. It's really maybe 100 ms of lag, but I can see it with the naked eye, and if I fetch the monsters more frequently the lag gets stronger (to be clear, the lag does not get longer, just more frequent).
Downloading the object from the global window takes a long time. The reason is that the game maintainers built it so that the monsters live inside a big npc object, which contains everything on the dashboard, including empty elements, so the npc object has between 100k and 200k elements in total. I do a lot of filtering to get the final monster data I care about.
I'll show how I'm actually getting these monsters. I execute 3 async functions:
const allNpc = await getAllNpc(page);
let monsters = await filterMonsters(page, allNpc, monstersToHunt);
monsters.hunt = await deleteAvoidedMonsters(page, monsters.hunt, monstersToOmit);
The first one, getAllNpc, just gets the entire npc object (the big one I mentioned above):
return await page.evaluate(() => {
    if (window.g) return g.npc;
});
The second function filters the actual monsters, and the monsters I want to kill, out of npc:
return new Promise((resolve, reject) => {
    const validNpc = allNpc.filter(el => !!el);
    const allMonsters = validNpc.filter(e => e.lvl !== 0);
    const names = new Set(monstersNames);
    const huntMonsters = validNpc
        .filter(it => names.has(it.nick))
        .map(({ nick, x, y, grp, id }) => ({ nick, x, y, grp, id }));
    resolve({ all: allMonsters, hunt: huntMonsters });
});
I'm using a Set here so each name lookup is O(1) instead of O(n), and I think this is the fastest I can achieve in JavaScript. The third function is the same as this one, but it additionally filters out special monsters that I want to avoid.
Now my questions are:
Is there a way to be notified when this object, and only this object, changes? I know Puppeteer has functions that can watch for DOM changes, but is there something to watch a global window object?
Can I do anything more to speed it up? I read about worker_threads in Node.js; could they help get rid of this micro lag, or is there something else, like clustering?
What I realized after a while is that the bot was "lagging" because I was passing this huge g.npc array as an argument to a function. By that I mean I was resolving it from getAllNpc and then passing it to filterMonsters.
When I changed it so that the .filter runs in getAllNpc, inside the script evaluated in the page context, and then resolved and passed an array with hundreds rather than millions of elements, it became much faster and no longer caused any lag or freeze.
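A minimal sketch of that change, assuming the g.npc structure and property names shown in the question (this is not the author's exact code):
// Hypothetical helper: filter inside the page context so Puppeteer only
// serializes the small, already-filtered result instead of the whole npc object.
async function getHuntMonsters(page, monsterNames) {
    return page.evaluate((names) => {
        if (!window.g || !window.g.npc) return [];
        const wanted = new Set(names);
        return window.g.npc
            .filter(el => el && el.lvl !== 0 && wanted.has(el.nick))
            .map(({ nick, x, y, grp, id }) => ({ nick, x, y, grp, id }));
    }, monsterNames);
}
Only the few matching monsters cross the serialization boundary, which is what removed the lag in this case.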
I am using offline storage and want to clear it when the storage is full. All I want to know is how to find out how much storage has been used. I am using the following code to clear offline storage:
localStorage.clear();
You could use localStorage.length to get the number of items in the storage. But it is unlikely to directly give away the size (in bytes) of the storage used, unless you are storing uniformly sized data that lets you predict the size from the number of keys.
But you do get a QUOTA_EXCEEDED_ERR exception when setting values once you exceed the available limit. So just wrap your code in a try..catch, and if the error you get is the quota-exceeded one, you can clear up space.
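A minimal sketch of that pattern (the exact error name varies across browsers; checking for QuotaExceededError or the legacy code 22 covers the common cases):
// Try to write a value; on a full localStorage, evict and report failure.
function trySetItem(key, value) {
    try {
        localStorage.setItem(key, value);
        return true;
    } catch (e) {
        if (e && (e.name === 'QuotaExceededError' || e.code === 22)) {
            localStorage.clear(); // or remove selected keys instead of clearing everything
            return false;
        }
        throw e; // not a quota problem, rethrow
    }
}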
You could also iterate over all the items in the storage to compute its full size. But I would not do that, given that it can take a while as the storage grows. In case you are interested, something like this would work:
var size = 0;
for (var i = 0, len = localStorage.length; i < len; ++i) {
    size += localStorage.getItem(localStorage.key(i)).length;
}
console.log("Size: " + size);
Ref: What happens when localStorage is full?
PS: I tried to get the size by iterating over the localStorage keys and adding up their sizes, but the browser tab crashed. So I'd say relying on the QUOTA_EXCEEDED_ERR is better.
UPDATE
You could also get a rough estimate of the size using JSON.stringify(localStorage).length, which seems faster than iterating over the keys and doesn't crash the browser. But keep in mind that the result includes the extra JSON characters (such as {, }, , and ") that wrap the keys and values, so the figure will be slightly higher than the actual size.
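For example (a rough estimate only; it counts characters, not bytes):
// Roughly how many characters are stored, overstated by the JSON punctuation.
var approxChars = JSON.stringify(localStorage).length;
console.log('~' + approxChars + ' characters in localStorage');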
Let's say I have a javascript object.
var hash = { a : [ ] };
Now I want to edit the array hash.a in two ways: first by accessing hash.a every time, and second by making a reference, var arr = hash.a, that stores hash.a's memory address. Is the second way faster, or are they the same?
Example:
// first way
hash.a.push(1);
hash.a.push(2);
hash.a.push(3);
hash.a.push(4);
hash.a.push(5);
//second way
var arr = hash.a;
arr.push(1);
arr.push(2);
arr.push(3);
arr.push(4);
arr.push(5);
Thanks a lot!
I don't think there would be any real performance gain, and even if there were a slight gain it wouldn't be worth it: you are hampering the legibility and maintainability of the code and using more memory (creation and garbage collection of another variable, arr, plus more chances for memory leaks if you don't handle it properly). I wouldn't recommend it.
In a typical software project, only 20% of the time is development; the remaining 80% is testing and maintenance.
You're doing the compiler's job in this situation: any modern JavaScript engine will optimize this code when it runs it, so both cases should perform the same.
Even if these two cases weren't optimized, the performance difference would be negligible. Focus on making your code as readable as possible, and let the engine handle these referencing optimizations.
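If you want to check this on your own engine, here is a rough micro-benchmark sketch (results vary with JIT warm-up and hardware, so treat the numbers with suspicion):
// Compare repeated property access against a cached reference.
const hash = { a: [] };

console.time('hash.a.push');
for (let i = 0; i < 1e6; i++) hash.a.push(i);
console.timeEnd('hash.a.push');

hash.a.length = 0; // reset between runs

const arr = hash.a;
console.time('arr.push');
for (let i = 0; i < 1e6; i++) arr.push(i);
console.timeEnd('arr.push');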
Context: I'm building a little site that reads an RSS feed and updates/checks the feed in the background. I have one array to store data to display, and another that stores the IDs of records that have already been shown.
Question: How many items can an array hold in JavaScript before things start getting slow or sluggish? I'm not sorting the array, but I am using jQuery's inArray function to do comparisons.
The website will be left running and updating, and it's unlikely that the browser will be restarted/refreshed very often.
If I should think about clearing some records from the array, what is the best way to remove some records after a limit, like 100 items?
The maximum length until "it gets sluggish" is totally dependent on your target machine and your actual code, so you'll need to test on that (those) platform(s) to see what is acceptable.
However, the maximum length of an array according to the ECMA-262 5th Edition specification is bound by an unsigned 32-bit integer due to the ToUint32 abstract operation, so the longest possible array could have 2^32 − 1 = 4,294,967,295 ≈ 4.29 billion elements.
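To see that limit in practice (a quick sketch; constructing the array only sets its length, it does not allocate 4 billion elements):
// The 2^32 - 1 length limit in practice.
const ok = new Array(2 ** 32 - 1);   // works: a sparse array with length 4294967295
console.log(ok.length);

try {
    const tooBig = new Array(2 ** 32); // one past the limit
} catch (e) {
    console.log(e instanceof RangeError, e.message); // true, "Invalid array length" (wording varies by engine)
}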
No need to trim the array; simply address it as a circular buffer (index % maxlen). That ensures it never goes over the limit: a circular buffer wraps around to the beginning once you reach the end, so it is not possible to overrun the end of the array.
For example:
var container = new Array();
var maxlen = 100;
var index = 0;

// 'store' 1538 items (only the last 'maxlen' items are kept)
for (var i = 0; i < 1538; i++) {
    container[index++ % maxlen] = "storing" + i;
}

// get element at index 11 (you want the 11th item in the array)
var eleventh = container[(index + 11) % maxlen];

// get element at index 35 (you want the 35th item in the array)
var thirtyfifth = container[(index + 35) % maxlen];

// print out all 100 elements that we have left in the array, note
// that it doesn't matter if we address past 100 - circular buffer
// so we'll simply get back to the beginning if we do that.
for (i = 0; i < 200; i++) {
    document.write(container[(index + i) % maxlen] + "<br>\n");
}
Like #maerics said, your target machine and browser will determine performance.
But for some real world numbers, on my 2017 enterprise Chromebook, running the operation:
console.time();
Array(x).fill(0).filter(x => x < 6).length
console.timeEnd();
x=5e4 takes 16ms, good enough for 60fps
x=4e6 takes 250ms, which is noticeable but not a big deal
x=3e7 takes 1300ms, which is pretty bad
x=4e7 takes 11000ms and allocates an extra 2.5GB of memory
So around 30 million elements is a hard upper limit, because the javascript VM falls off a cliff at 40 million elements and will probably crash the process.
EDIT: In the code above, I'm actually filling the array with elements and looping over them, simulating the minimum of what an app might want to do with an array. If you just run Array(2**32-1), you're creating a sparse array that's closer to an empty JavaScript object with a length, like {length: 4294967295}. If you actually tried to use all 4 billion elements, you would definitely crash the JavaScript process.
You could try something like this to test and trim the length:
http://jsfiddle.net/orolo/wJDXL/
var longArray = [1, 2, 3, 4, 5, 6, 7, 8];
if (longArray.length >= 6) {
    longArray.length = 3;
}
alert(longArray); //1, 2, 3
I have built a performance framework that manipulates and graphs millions of datasets, and even then the JavaScript calculation latency was on the order of tens of milliseconds. Unless you're worried about going over the array size limit, I don't think you have much to worry about.
It will be very browser dependent. 100 items doesn't sound like a large number; I expect you could go a lot higher than that. Thousands shouldn't be a problem. What may be a problem is the total memory consumption.
I have shamelessly pulled some pretty big datasets into memory, and although it did get sluggish, it took upwards of maybe 15 MB of data with pretty intense calculations on the dataset. I doubt you will run into memory problems unless you have intense calculations on the data and many, many rows. Profiling and benchmarking with different mock result sets will be your best bet for evaluating performance.