Node.js CPU issue/Optimization - javascript

My friend and I are making a Node.js game and have been testing CPU usage. After profiling, a process called zlib is consuming most of the CPU/RAM.
With 3 clients connected to a game it is fine, but when 12~13 players are connected the server uses 58% CPU, of which zlib accounts for about 30%.
 inclusive        self             name
 ticks   total    ticks   total
 64775   58.5%    64775   58.5%    /lib/x86_64-linux-gnu/libc-2.19.so
 25001   22.6%      224    0.2%    LazyCompile: *callback zlib.js:409
 // this one is a different zlib
  7435    6.7%       82    0.1%    LazyCompile: ~callback zlib.js:409
Is there any way to decrease the CPU usage from this, or is there a reason why it increases so much?
I have done some reading and I am told it comes from socket.io, so here is the section of our socket code that sends most of the data.
for (var i = 0; i < users.length; i++) {
    if (u.room == users[i].room &&
        users[i].x + users[i].radius >= u.x - u.screenWidth / 2 - 20 &&
        users[i].x - users[i].radius <= u.x + u.screenWidth / 2 + 20 &&
        users[i].y + users[i].radius >= u.y - u.screenHeight / 2 - 20 &&
        users[i].y - users[i].radius <= u.y + u.screenHeight / 2 + 20) {
        if (users[i].id == u.id) {
            visiblePlayers.push({
                x: users[i].x,
                y: users[i].y,
                angle: users[i].angle,
                hue: users[i].hue,
                radius: users[i].radius,
                squeeze: users[i].squeeze,
                name: users[i].name,
                dead: users[i].dead,
                isPlayer: true,
                kills: users[i].kills
            });
        } else {
            visiblePlayers.push({
                x: users[i].x,
                y: users[i].y,
                angle: users[i].angle,
                hue: users[i].hue,
                radius: users[i].radius,
                squeeze: users[i].squeeze,
                name: users[i].name,
                dead: users[i].dead
            });
        }
        // SEND DYING INFO (FOR OFFLINE ANIMATION):
        if (users[i].dying) {
            visiblePlayers[visiblePlayers.length - 1].dying = true;
        }
    }
}
var visibleEnergy = [];
for (var i = 0; i < energies.length; i++) {
    if (u.firstSend || (energies[i].updated && energies[i].room == u.room)) {
        var anim = energies[i].animate;
        if (u.firstSend)
            anim = true;
        visibleEnergy.push({
            x: energies[i].x,
            y: energies[i].y,
            radius: energies[i].radius,
            index: i,
            animate: anim,
            hue: energies[i].hue,
            room: energies[i].room
        });
    }
}
// SEND PLAYER UPDATES TO CLIENTS:
sockets[u.id].emit('serverTellPlayerMove', visiblePlayers, visibleEnergy);
Zlib is one of our problems, but we would also appreciate any other optimisation methods to decrease server CPU.
Extra: the formatting of the user-checking loop has already been answered on Stack Exchange at https://codereview.stackexchange.com/questions/107922/node-js-cpu-issue. We just need help with methods to decrease the zlib CPU usage.
Thanks.

zlib is a widely used compression library. The most likely reason you're seeing lots of CPU usage there is a lot of compressed network traffic. That's desirable, because CPU is generally 'cheaper' than network traffic. Other likely candidates for heavy zlib usage would be compressed files on disk being loaded through Node.js's zlib module, or server-side decompression of PNG images, which also commonly uses zlib.
If you find the zlib 'tax' is too much for your particular application, perhaps you could offload the compression work to other processes. You could run a few Node.js services in front of your main server whose sole purpose is to connect to clients, and have them talk to your server in an uncompressed, 'raw' format that your server can handle more cheaply.
You don't mention whether this is actually causing you problems yet, so I wouldn't necessarily worry about it. This kind of single-process scalability limit is a downside of using Node.js, and the only solution is to split your service into smaller parts that can be spread across multiple cores or even multiple servers.
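If the traffic really is Socket.IO's, one knob worth checking is per-message deflate, the WebSocket compression that feeds zlib on every emit. A minimal sketch, assuming a Socket.IO version that accepts the engine.io compression options (check your version's documentation):

// Sketch: disable WebSocket per-message deflate and polling compression.
// (server here is assumed to be your existing http server instance.)
// This trades CPU for bandwidth, so measure both before and after.
var io = require('socket.io')(server, {
    perMessageDeflate: false,
    httpCompression: false
});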

Related

How to get responses to a ut_metadata piece request? (Node.js BitTorrent BEP 0009)

I'm building a BitTorrent client using Node.js and am failing to get answers from peers for the PWP metadata extension (BEP 0009).
I get peers from the DHT (BEP 0005), where I announce, then send a Handshake and an Extended Handshake over PWP using a net Socket.
buildHandshake = (torrent, ext) => { // torrent contains mainly infoHash
    const buf = Buffer.alloc(68)
    buf.writeUInt8(19, 0)
    buf.write('BitTorrent protocol', 1)
    if (ext) {
        const big = new Uint64BE(1048576)
        big.toBuffer().copy(buf, 20)
    } else {
        buf.writeUInt32BE(0, 20)
        buf.writeUInt32BE(0, 24)
    }
    torrent.infoHashBuffer.copy(buf, 28)
    anon.nodeId().copy(buf, 48) // tool that generates a nodeId once.
    return buf
}
buildExtRequest = (id, msg) => {
    const size = msg.length + 1
    const buf = Buffer.alloc(size + 5)
    buf.writeUInt32BE(size, 0)
    buf.writeUInt8(20, 4)
    buf.writeUInt8(id, 5)
    msg.copy(buf, 6)
    return buf
}
const client = new net.Socket()
client.connect(peer.port, peer.ip, () => {
    client.write(buildHandshake(torrent, true))
    const extHandshake = bencode.encode({
        m: {
            ut_metadata: 2,
        },
        metadata_size: self.metaDataSize, // 0 by default
        p: client.localPort,
        v: Buffer.from('Hypertube 0.1')
    })
    client.write(buildExtRequest(0, extHandshake))
})
From here, I get Handshakes and Extended Handshakes back (and sometimes Bitfields), then I request metadata pieces:
const req = bencode.encode({ msg_type: 0, piece: 0 })
// utMetadata is from extended Handshake dictionary m.ut_metadata
client.write(message.buildExtRequest(utMetadata, req))
After that, I don't hear from the peer any more.
After two minutes without keep-alives, the connection times out.
Does anybody have an idea why I don't get answered?
BitTorrent protocol message formatting can be unclear if you're a first-timer, like me.
The message structure is always as follows (except for the handshake):
<len><message>
where len is a UInt32 big-endian whose value is the length of message, and message is whatever you're sending.
For example:
An extended-protocol piece request (ut_metadata piece message) looks like:
<len><id><extId><ut_metadata dict>
where:
len is a UInt32 big-endian whose value is the size of (<id> + <extId> + <ut_metadata dict>)
id is a UInt8 of value 20 (the protocol extension indicator)
extId is a UInt8; its value depends on the extended handshake received from the peer (in which the extId for the ut_metadata exchange is given)
ut_metadata dict is a bencoded dictionary:
{ 'msg_type': 0, 'piece': 0 }
d8:msg_typei0e5:piecei0ee
(the first line is the object - the dictionary - and the second line is the same object once bencoded)
msg_type is 0 (the request message indicator for a BEP 0009 piece request)
piece is the index of the piece you request (0 would be the first piece)
In general:
Not giving the right value to <len> will result in messages being badly interpreted by the peer, and therefore in getting the wrong answers, getting no answer at all, and eventually the connection being closed (by the peer or through your own messages).
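As an illustration (a sketch of my own, not the original code; buildUtMetadataRequest is a made-up helper name and the bencode npm package is assumed), here is a request builder whose <len> prefix counts the <id> and <extId> bytes as well as the dictionary:

const bencode = require('bencode')

const buildUtMetadataRequest = (extId, pieceIndex) => {
    const dict = bencode.encode({ msg_type: 0, piece: pieceIndex })
    const len = 1 + 1 + dict.length // <id> + <extId> + bencoded dict
    const buf = Buffer.alloc(4 + len)
    buf.writeUInt32BE(len, 0) // <len> covers everything after it
    buf.writeUInt8(20, 4) // <id>: extended protocol indicator
    buf.writeUInt8(extId, 5) // <extId>: the peer's ut_metadata id from its extended handshake
    dict.copy(buf, 6)
    return buf
}

// usage: client.write(buildUtMetadataRequest(utMetadata, 0))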

requestAnimationFrame() performance issue

I experienced performance issues with requestAnimationFrame().
Consider the following code. It's a simple loop which prints the time since the last frame every time this time delta is larger than 20ms.
const glob_time_info = {delta_time: 0.0, last_frame: performance.now()};
var render = function (timestamp) {
    glob_time_info.delta_time = timestamp - glob_time_info.last_frame;
    glob_time_info.last_frame = timestamp;
    if (glob_time_info.delta_time > 20)
        console.log(glob_time_info.delta_time);
    requestAnimationFrame(render);
};
render(performance.now());
As I understand requestAnimationFrame, this snippet should never print anything, because it tries to run 60 times a second (60 Hz, matching my monitor).
Therefore the time delta should always be around 16-17 ms (1000 ms / 60 ≈ 16.7 ms).
But it prints times around 33 ms - roughly two frame intervals, i.e. a dropped frame - every few seconds.
Why?
I experienced this on Windows 10 with Chrome 54 and Firefox 49. I own an i5-6600.
UPDATE
Here is the output of Nit's script for Windows and Ubuntu. Windows, what are you doing?
Windows 10 (PC):
Windows 8 (same netbook as below):
Ubuntu (same netbook as above):
It's easy to test the hypothesis that the issue is related to the platform you're running on - measure the performance.
In short, run requestAnimationFrame a number of times, similar to how you did, note down a timestamp on each run, and then simply visualize the results.
var times = [];
var measure = function() {
    times.push(new Date().getTime());
    if (times.length > 100) return draw();
    requestAnimationFrame(measure);
};
var draw = function() {
    var dataset = {
        x: [],
        y: [],
        type: 'bar'
    };
    var layout = {
        xaxis: {
            title: 'measurement #'
        },
        yaxis: {
            title: 'iteration duration (ms)'
        },
        height: 250
    };
    var options = {
        displayModeBar: false
    };
    times.reduce(function(previous, current, i) {
        dataset.x.push(i);
        dataset.y.push(current - previous);
        return current;
    }, times.shift());
    Plotly.newPlot('target', [dataset], layout, options);
}
measure();

#target {
    margin-top: -50px;
}

<script src="https://cdn.plot.ly/plotly-1.2.0.min.js"></script>
<div id="target"></div>
You can run the same simulation on different operating systems and different browsers to see if you can narrow down the issue further.
As Travis J stated in the comments, it's related to the operating system. This performance issue doesn't appear on Linux, so there is nothing I (we) can do about it.
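Not from the original answers, but a common way to make an animation tolerate the occasional 33 ms frame (a sketch of my own, reusing the delta-time idea from the question's snippet) is to scale movement by the measured delta instead of assuming a fixed 16.7 ms step:

// Sketch: frame-rate-independent movement; speed is in pixels per second.
var last = performance.now();
var x = 0, speed = 120;
function render(timestamp) {
    var dt = (timestamp - last) / 1000; // seconds since the previous frame
    last = timestamp;
    x += speed * dt; // a 33 ms frame simply advances twice as far as a 16.7 ms one
    requestAnimationFrame(render);
}
requestAnimationFrame(render);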

Node.js cluster - optimal number of workers

I have 4 cores and ran this code, based on this example:
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;
var id = 0;
if (cluster.isWorker) {
    id = cluster.worker.id;
}
var iterations = 1000000000;
console.time('Function #' + id);
for (var i = 0; i < iterations; i++) {
    var test = 0;
}
console.timeEnd('Function #' + id);
if (cluster.isMaster) {
    // Fork workers.
    for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
    }
}
With 4 forks (the code above), I got:
Function #0: 1698.801ms
Function #1: 3282.679ms
Function #4: 3290.384ms
Function #3: 3425.090ms
Function #2: 3424.922ms
With 3 forks, I got:
Function #0: 1695.155ms
Function #2: 1822.867ms
Function #3: 2444.156ms
Function #1: 2606.680ms
With 2 forks, I got:
Function #0: 1684.929ms
Function #1: 1682.897ms
Function #2: 1686.123ms
I don't understand these results. Isn't one fork per core the optimal number? Here I see that 4 forks are no better than 2.
My guess is that your hardware actually has only 2 physical cores. However, because of hyper-threading (HT), the OS reports 4 (logical) cores.
The workers in your code keep a (physical) core entirely occupied, which is something HT can't deal with very well, so performance when keeping all 4 logical cores busy will be worse than when you keep only the 2 physical cores busy.
My hardware (quad core, so 4 physical and 8 logical cores) shows the same pattern:
8 workers:
Function #5: 926ms
Function #3: 916ms
Function #1: 928ms
Function #4: 895ms
Function #7: 934ms
Function #6: 905ms
Function #8: 928ms
Function #2: 928ms
4 workers:
Function #3: 467ms
Function #2: 467ms
Function #1: 473ms
Function #4: 472ms
That said, the rule of thumb of making the number of workers equal to the number of logical cores in your hardware still makes sense if your workers are I/O-bound (which most Node apps are).
If you really want to perform heavy, blocking calculations, count one physical core per worker, as in the sketch below.
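A sketch of my own for sizing the pool (the halving is only a rough guess at the physical core count, since os.cpus() reports logical cores):

var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
    var logicalCores = os.cpus().length; // logical (hyper-threaded) cores
    var cpuBound = true; // set to false for I/O-bound workers
    var workers = cpuBound ? Math.max(1, Math.floor(logicalCores / 2)) : logicalCores;
    for (var i = 0; i < workers; i++) {
        cluster.fork();
    }
}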

NodeJS with Socket.IO 1.0 - memory leak outside of heap

We've been trying to deploy a small NodeJS app using Socket.IO and have been running into a problem where, while the heap size of the app remains fairly acceptable, the total memory used (rss) creeps up to over 2 GB after around 2 hours and continues to rise.
In an effort to make sure the problem wasn't in our code, we deployed a bare bones app with no custom logic apart from initializing Socket IO. We ran that against the same production traffic, and experienced the same issue.
Every 10 seconds we output the following data: rss memory usage, heap total, heap used, and connection count. Here's a sample of the output:
523898880 199490816 123040352 2001
537059328 209774080 163828336 2011
538578944 206714368 150879848 2031
535252992 199514880 156743280 2041
542162944 200522752 145077944 2039
539652096 195387136 129486792 2055
551006208 206726400 170918304 2070
553254912 205706496 156447496 2071
550584320 198482944 154005496 2076
564363264 209810176 140442920 2095
561176576 201578752 123214232 2118
562487296 200546816 110638376 2112
572096512 206714368 162713240 2133
569552896 200546816 147439016 2121
577777664 205682432 136653448 2115
582496256 207770368 121204056 2133
582909952 205706496 115449888 2153
597364736 215989760 164582600 2158
590491648 204686592 148962008 2158
598315008 209810176 137608840 2164
598249472 205718528 123472944 2188
607158272 211898112 160187496 2168
609869824 210866176 154986472 2161
618110976 214969856 142425488 2180
615014400 207782400 119745816 2188
623575040 214981888 163602944 2180
624717824 210842112 147051160 2189
627556352 210866176 142542800 2191
636477440 216013824 129968776 2203
643809280 221149440 162858408 2219
644407296 217057792 154994536 2224
642068480 211922176 141626008 2240
649084928 214969856 123126792 2267
662454272 224233216 166539024 2272
659439616 217045760 162742688 2258
662867968 217057792 137425392 2266
667013120 218065664 119616592 2261
673230848 220129536 172101080 2272
677904384 220129536 149771776 2267
676691968 217045760 129936448 2267
674639872 211898112 125941816 2277
689025024 223225344 163745856 2274
689991680 219109632 151478936 2282
698601472 225301248 137102712 2298
706170880 229428992 171321288 2306
705675264 224257280 160088496 2303
701198336 217033728 149326384 2313
701833216 216013824 129806072 2314
718053376 227365120 184078288 2335
718950400 223225344 157977312 2333
717037568 218065664 146137456 2354
714428416 210890240 136566344 2381
As you can see, in a fairly short amount of time the total memory usage increased by 200 MB, even though the connection count only increased by around 400. The heap usage remained roughly the same, just a bit higher to account for the higher connection count.
We're running on Debian Wheezy, 64-bit. The NodeJS version is 0.10.29, and the Socket.IO version is 1.0.6. The code we're using is:
var http = require('http'),
    io = require('socket.io');

var app = http.createServer();
var server = io(app);
app.listen(80);

var connections = 0;
server.on('connection', function(socket) {
    connections++;
    socket.on('disconnect', function() {
        connections--;
    });
});

setInterval(function() {
    var mem = process.memoryUsage();
    console.log(mem.rss + ' ' + mem.heapTotal + ' ' + mem.heapUsed + ' ' + connections);
}, 10000);
Is there any way we can find out why Node is using so much memory in total, or any way to see what's happening outside of the heap to try and find the memory leak? We've already tried all of the usual tricks for checking heap usage and found nothing, but did not expect to since the problem doesn't seem to be with memory on the heap.
If you think a Node.js module has a memory leak, taking snapshots of the system memory from within the process will give inaccurate results.
Your best bet is to use tools such as valgrind, gdb, prstat and dtrace. Joyent even provides a nice module to help you visualize the process in question.
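One extra data point you can get from inside the process, as a sketch: on Node.js versions much newer than the 0.10.29 used here, process.memoryUsage() also reports external, the memory held by C++ objects tied to JavaScript objects (Buffers, socket and zlib buffers), which is exactly the kind of off-heap usage described above.

// Sketch: log off-heap ("external") memory alongside the rss and heap figures.
// (The external field requires a newer Node.js than 0.10.29.)
setInterval(function() {
    var mem = process.memoryUsage();
    console.log(mem.rss + ' ' + mem.heapTotal + ' ' + mem.heapUsed + ' ' + mem.external + ' ' + connections);
}, 10000);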

jStorage limit in Chrome? [duplicate]

Since localStorage currently only supports strings as values, objects need to be stringified (stored as a JSON string) before they can be stored. Is there a defined limitation regarding the length of the values?
Does anyone know if there is a definition which applies to all browsers?
Quoting from the Wikipedia article on Web Storage:
Web storage can be viewed simplistically as an improvement on cookies, providing much greater storage capacity (10 MB per origin in Google Chrome (https://plus.google.com/u/0/+FrancoisBeaufort/posts/S5Q9HqDB8bh), Mozilla Firefox, and Opera; 10 MB per storage area in Internet Explorer) and better programmatic interfaces.
And also quoting from a John Resig article [posted January 2007]:
Storage Space
It is implied that, with DOM Storage, you have considerably more storage space than the typical user agent limitations imposed upon Cookies. However, the amount that is provided is not defined in the specification, nor is it meaningfully broadcast by the user agent.
If you look at the Mozilla source code we can see that 5120KB is the default storage size for an entire domain. This gives you considerably more space to work with than a typical 2KB cookie.
However, the size of this storage area can be customized by the user (so a 5MB storage area is not guaranteed, nor is it implied) and the user agent (Opera, for example, may only provide 3MB - but only time will tell.)
Actually, Opera doesn't have a 5 MB limit. It offers to increase the limit as applications require more storage, and the user can even choose "Unlimited storage" for a domain.
You can easily test localStorage limits/quota yourself.
Here's a straightforward script for finding out the limit:
if (localStorage && !localStorage.getItem('size')) {
    var i = 0;
    try {
        // Test up to 10 MB
        for (i = 250; i <= 10000; i += 250) {
            localStorage.setItem('test', new Array((i * 1024) + 1).join('a'));
        }
    } catch (e) {
        localStorage.removeItem('test');
        localStorage.setItem('size', i - 250);
    }
}
Here's the gist, JSFiddle and blog post.
The script will test setting increasingly larger strings of text until the browser throws an exception. At that point it'll clear out the test data and set a size key in localStorage storing the size in kilobytes.
Find the maximum length of a single string that can be stored in localStorage
This snippet will find the maximum length of a String that can be stored in localStorage per domain.
// Clear localStorage
for (var item in localStorage) delete localStorage[item];

window.result = window.result || document.getElementById('result');
result.textContent = 'Test running…';

// Start test
// Defer running so DOM can be updated with "test running" message
setTimeout(function () {
    // Variables
    var low = 0,
        high = 2e9,
        half;
    // Two billion may be a little low as a starting point, so increase if necessary
    while (canStore(high)) high *= 2;
    // Keep refining until low and high are equal
    while (low !== high) {
        half = Math.floor((high - low) / 2 + low);
        // Check if we can't scale down any further
        if (low === half || high === half) {
            console.info(low, high, half);
            // Set low to the maximum possible amount that can be stored
            low = canStore(high) ? high : low;
            high = low;
            break;
        }
        // Check if the maximum storage is no higher than half
        if (storageMaxBetween(low, half)) {
            high = half;
        // The only other possibility is that it's higher than half but not higher than "high"
        } else {
            low = half + 1;
        }
    }
    // Show the result we found!
    result.innerHTML = 'The maximum length of a string that can be stored in localStorage is <strong>' + low + '</strong> characters.';

    // Functions
    function canStore(strLen) {
        try {
            delete localStorage.foo;
            localStorage.foo = Array(strLen + 1).join('A');
            return true;
        } catch (ex) {
            return false;
        }
    }
    function storageMaxBetween(low, high) {
        return canStore(low) && !canStore(high);
    }
}, 0);
<h1>LocalStorage single value max length test</h1>
<div id='result'>Please enable JavaScript</div>
Note that the length of a string is limited in JavaScript; if you want to view the maximum amount of data that can be stored in localStorage when not limited to a single string, you can use the code in this answer.
Edit: Stack Snippets don't support localStorage, so here is a link to JSFiddle.
Results
Chrome (45.0.2454.101): 5242878 characters
Firefox (40.0.1): 5242883 characters
Internet Explorer (11.0.9600.18036): 16386 122066 122070 characters
I get different results on each run in Internet Explorer.
Don't assume 5MB is available - localStorage capacity varies by browser, with 2.5MB, 5MB and unlimited being the most common values.
Source: http://dev-test.nemikor.com/web-storage/support-test/
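Because of that variation, a defensive write is often the practical approach. A minimal sketch (trySetItem is a made-up helper name): never assume a quota, just attempt the write and handle the failure.

function trySetItem(key, value) {
    try {
        localStorage.setItem(key, value);
        return true;
    } catch (e) {
        // Most browsers throw a DOMException named 'QuotaExceededError' here.
        return false;
    }
}

// usage: if (!trySetItem('state', JSON.stringify(state))) { /* trim the data or skip the save */ }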
I wrote this simple code that tests the localStorage size in bytes:
https://github.com/gkucmierz/Test-of-localStorage-limits-quota
const check = bytes => {
    try {
        localStorage.clear();
        localStorage.setItem('a', '0'.repeat(bytes));
        localStorage.clear();
        return true;
    } catch (e) {
        localStorage.clear();
        return false;
    }
};
Github pages:
https://gkucmierz.github.io/Test-of-localStorage-limits-quota/
I get the same result on desktop Google Chrome, Opera, Firefox and Brave, and on mobile Chrome: ~10 MB.
Safari gives about half that: ~4 MB.
You don't want to stringify large objects into a single localStorage entry. That would be very inefficient - the whole thing would have to be parsed and re-encoded every time some slight detail changes. Also, JSON can't handle multiple cross references within an object structure and wipes out a lot of details, e.g. the constructor, non-numerical properties of arrays, what's in a sparse entry, etc.
Instead, you can use Rhaboo. It stores large objects using lots of localStorage entries so you can make small changes quickly. The restored objects are much more accurate copies of the saved ones and the API is incredibly simple. E.g.:
var store = Rhaboo.persistent('Some name');
store.write('count', store.count ? store.count + 1 : 1);
store.write('somethingfancy', {
    one: ['man', 'went'],
    2: 'mow',
    went: [2, { mow: ['a', 'meadow'] }, {}]
});
store.somethingfancy.went[1].mow.write(1, 'lawn');
BTW, I wrote it.
I've condensed a binary test into this function that I use:
function getStorageTotalSize(upperLimit/*in bytes*/) {
    var store = localStorage, testkey = "$_test"; // (NOTE: Test key is part of the storage!!! It should also be an even number of characters)
    var test = function (_size) { try { store.removeItem(testkey); store.setItem(testkey, new Array(_size + 1).join('0')); } catch (_ex) { return false; } return true; }
    var backup = {};
    for (var i = 0, n = store.length; i < n; ++i) backup[store.key(i)] = store.getItem(store.key(i));
    store.clear(); // (you could iterate over the items and backup first then restore later)
    var low = 0, high = 1, _upperLimit = (upperLimit || 1024 * 1024 * 1024) / 2, upperTest = true;
    while ((upperTest = test(high)) && high < _upperLimit) { low = high; high *= 2; }
    if (!upperTest) {
        var half = ~~((high - low + 1) / 2); // (~~ is a faster Math.floor())
        high -= half;
        while (half > 0) high += (half = ~~(half / 2)) * (test(high) ? 1 : -1);
        high = testkey.length + high;
    }
    if (high > _upperLimit) high = _upperLimit;
    store.removeItem(testkey);
    for (var p in backup) store.setItem(p, backup[p]);
    return high * 2; // (*2 because of Unicode storage)
}
It also backs up the contents before testing, then restores them.
How it works: it doubles the size until the limit is reached or the test fails. It then stores half the distance between low and high and subtracts/adds half of the half each time (subtracting on failure and adding on success), honing in on the proper value.
upperLimit is 1GB by default, and just limits how far upwards to scan exponentially before starting the binary search. I doubt this will even need to be changed, but I'm always thinking ahead. ;)
On Chrome:
> getStorageTotalSize();
> 10485762
> 10485762/2
> 5242881
> localStorage.setItem("a", new Array(5242880).join("0")) // works
> localStorage.setItem("a", new Array(5242881).join("0")) // fails ('a' takes one spot [2 bytes])
IE11, Edge, and FireFox also report the same max size (10485762 bytes).
You can use the following code in modern browsers to efficiently check the storage quota (total & used) in real-time:
if ('storage' in navigator && 'estimate' in navigator.storage) {
    navigator.storage.estimate().then(estimate => {
        console.log("Usage (in Bytes): ", estimate.usage,
                    ", Total Quota (in Bytes): ", estimate.quota);
    });
}
I'm doing the following:
getLocalStorageSizeLimit = function () {
    var maxLength = Math.pow(2, 24);
    var preLength = 0;
    var hugeString = "0";
    var testString;
    var keyName = "testingLengthKey";
    // 2^24 = 16777216 should be enough for all browsers
    testString = (new Array(Math.pow(2, 24))).join("X");
    while (maxLength !== preLength) {
        try {
            localStorage.setItem(keyName, testString);
            preLength = testString.length;
            maxLength = Math.ceil(preLength + ((hugeString.length - preLength) / 2));
            testString = hugeString.substr(0, maxLength);
        } catch (e) {
            hugeString = testString;
            maxLength = Math.floor(testString.length - (testString.length - preLength) / 2);
            testString = hugeString.substr(0, maxLength);
        }
    }
    localStorage.removeItem(keyName);
    // Original used this.storageObject in place of localStorage. I can only guess the goal is to check
    // the size of the localStorage with everything but the testString, given that maxLength is then added.
    maxLength = JSON.stringify(localStorage).length + maxLength + keyName.length - 2;
    return maxLength;
};
I really like cdmckay's answer, but it is too slow for checking the size in real time: it takes about 2 seconds for me. This is an improved version, which is way faster and more exact, with an option to choose how big the error can be (default 250,000; the smaller the error, the longer the calculation takes):
function getLocalStorageMaxSize(error) {
    if (localStorage) {
        var max = 10 * 1024 * 1024,
            i = 64,
            string1024 = '',
            string = '',
            // generate a random key
            testKey = 'size-test-' + Math.random().toString(),
            minimalFound = 0,
            error = error || 25e4;
        // fill a string with 1024 symbols / bytes
        while (i--) string1024 += 1e16;
        i = max / 1024;
        // fill a string with 'max' amount of symbols / bytes
        while (i--) string += string1024;
        i = max;
        // binary search implementation
        while (i > 1) {
            try {
                localStorage.setItem(testKey, string.substr(0, i));
                localStorage.removeItem(testKey);
                if (minimalFound < i - error) {
                    minimalFound = i;
                    i = i * 1.5;
                } else break;
            } catch (e) {
                localStorage.removeItem(testKey);
                i = minimalFound + (i - minimalFound) / 2;
            }
        }
        return minimalFound;
    }
}
To test:
console.log(getLocalStorageMaxSize()); // takes .3s
console.log(getLocalStorageMaxSize(.1)); // takes 2s, but way more exact
This works dramatically faster for the standard error; also it can be much more exact when necessary.
I once developed a Chrome (desktop browser) extension and tested the real maximum size of Local Storage for this reason.
My results:
Ubuntu 18.04.1 LTS (64-bit)
Chrome 71.0.3578.98 (Official Build) (64-bit)
Local Storage content size 10240 KB (10 MB)
Using more than 10240 KB returned this error:
Uncaught DOMException: Failed to execute 'setItem' on 'Storage': Setting the value of 'notes' exceeded the quota.
Edit on Oct 23, 2020
For Chrome extensions, the chrome.storage API is available. If you declare the "storage" permission in manifest.json:
{
"name": "My extension",
...
"permissions": ["storage"],
...
}
You can access it like this:
chrome.storage.local.QUOTA_BYTES // 5242880 (in bytes)
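For completeness, a small usage sketch of that API (callback style; the 'notes' key simply mirrors the error message quoted above):

// Sketch: write and read a value with chrome.storage.local
// (requires the "storage" permission declared in the manifest).
chrome.storage.local.set({ notes: 'hello' }, function () {
    chrome.storage.local.get('notes', function (items) {
        console.log(items.notes); // 'hello'
    });
});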
