Since localStorage (currently) only supports strings as values, objects need to be stringified (stored as a JSON string) before they can be stored. Is there a defined limitation regarding the length of the values?
Does anyone know if there is a definition which applies to all browsers?
Quoting from the Wikipedia article on Web Storage:
Web storage can be viewed simplistically as an improvement on cookies, providing much greater storage capacity (10 MB per origin in Google Chrome (https://plus.google.com/u/0/+FrancoisBeaufort/posts/S5Q9HqDB8bh), Mozilla Firefox, and Opera; 10 MB per storage area in Internet Explorer) and better programmatic interfaces.
And also quoting from a John Resig article [posted January 2007]:
Storage Space
It is implied that, with DOM Storage, you have considerably more storage space than the typical user agent limitations imposed upon Cookies. However, the amount that is provided is not defined in the specification, nor is it meaningfully broadcast by the user agent.
If you look at the Mozilla source code we can see that 5120KB is the default storage size for an entire domain. This gives you considerably more space to work with than a typical 2KB cookie.
However, the size of this storage area can be customized by the user (so a 5MB storage area is not guaranteed, nor is it implied) and the user agent (Opera, for example, may only provide 3MB - but only time will tell.)
Actually, Opera doesn't have a 5MB limit. It offers to increase the limit as applications require more storage. The user can even choose "Unlimited storage" for a domain.
You can easily test localStorage limits/quota yourself.
Here's a straightforward script for finding out the limit:
if (localStorage && !localStorage.getItem('size')) {
  var i = 0;
  try {
    // Test up to 10 MB
    for (i = 250; i <= 10000; i += 250) {
      localStorage.setItem('test', new Array((i * 1024) + 1).join('a'));
    }
  } catch (e) {
    localStorage.removeItem('test');
    localStorage.setItem('size', i - 250);
  }
}
Here's the gist, JSFiddle and blog post.
The script will test setting increasingly larger strings of text until the browser throws an exception. At that point it'll clear out the test data and set a size key in localStorage storing the size in kilobytes.
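Once the script has run, reading the measurement back is straightforward (the size key is the one the script above sets; the units are kilobytes):

// Read the quota measured by the script above (stored in KB)
var sizeKb = parseInt(localStorage.getItem('size'), 10);
console.log('localStorage limit is roughly ' + sizeKb + ' KB');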
Find the maximum length of a single string that can be stored in localStorage
This snippet will find the maximum length of a String that can be stored in localStorage per domain.
// Clear localStorage
for (var item in localStorage) delete localStorage[item];

window.result = window.result || document.getElementById('result');
result.textContent = 'Test running…';

// Start test
// Defer running so DOM can be updated with "test running" message
setTimeout(function () {
  // Variables
  var low = 0,
      high = 2e9,
      half;

  // Two billion may be a little low as a starting point, so increase if necessary
  while (canStore(high)) high *= 2;

  // Keep refining until low and high are equal
  while (low !== high) {
    half = Math.floor((high - low) / 2 + low);

    // Check if we can't scale down any further
    if (low === half || high === half) {
      console.info(low, high, half);
      // Set low to the maximum possible amount that can be stored
      low = canStore(high) ? high : low;
      high = low;
      break;
    }

    // Check if the maximum storage is no higher than half
    if (storageMaxBetween(low, half)) {
      high = half;
    // The only other possibility is that it's higher than half but not higher than "high"
    } else {
      low = half + 1;
    }
  }

  // Show the result we found!
  result.innerHTML = 'The maximum length of a string that can be stored in localStorage is <strong>' + low + '</strong> characters.';

  // Functions
  function canStore(strLen) {
    try {
      delete localStorage.foo;
      localStorage.foo = Array(strLen + 1).join('A');
      return true;
    } catch (ex) {
      return false;
    }
  }

  function storageMaxBetween(low, high) {
    return canStore(low) && !canStore(high);
  }
}, 0);
<h1>LocalStorage single value max length test</h1>
<div id='result'>Please enable JavaScript</div>
Note that the length of a string is limited in JavaScript; if you want to view the maximum amount of data that can be stored in localStorage when not limited to a single string, you can use the code in this answer.
Edit: Stack Snippets don't support localStorage, so here is a link to JSFiddle.
Results
Chrome (45.0.2454.101): 5242878 characters
Firefox (40.0.1): 5242883 characters
Internet Explorer (11.0.9600.18036): 16386, 122066, 122070 characters
I get different results on each run in Internet Explorer.
Don't assume 5MB is available - localStorage capacity varies by browser, with 2.5MB, 5MB and unlimited being the most common values.
Source: http://dev-test.nemikor.com/web-storage/support-test/
I wrote this simple code that tests the localStorage size in bytes.
https://github.com/gkucmierz/Test-of-localStorage-limits-quota
const check = bytes => {
  try {
    localStorage.clear();
    localStorage.setItem('a', '0'.repeat(bytes));
    localStorage.clear();
    return true;
  } catch (e) {
    localStorage.clear();
    return false;
  }
};
Github pages:
https://gkucmierz.github.io/Test-of-localStorage-limits-quota/
I get the same result on desktop Google Chrome, Opera, Firefox and Brave, and on mobile Chrome, which is ~10 MB.
In Safari the result is about half that, ~4 MB.
You don't want to stringify large objects into a single localStorage entry. That would be very inefficient - the whole thing would have to be parsed and re-encoded every time some slight detail changes. Also, JSON can't handle multiple cross-references within an object structure, and it wipes out a lot of details, e.g. the constructor, non-numerical properties of arrays, what's in a sparse entry, etc.
Instead, you can use Rhaboo. It stores large objects using lots of localStorage entries so you can make small changes quickly. The restored objects are much more accurate copies of the saved ones and the API is incredibly simple. E.g.:
var store = Rhaboo.persistent('Some name');
store.write('count', store.count ? store.count+1 : 1);
store.write('somethingfancy', {
one: ['man', 'went'],
2: 'mow',
went: [ 2, { mow: ['a', 'meadow' ] }, {} ]
});
store.somethingfancy.went[1].mow.write(1, 'lawn');
BTW, I wrote it.
I've condensed a binary test into this function that I use:
function getStorageTotalSize(upperLimit /* in bytes */) {
  var store = localStorage,
      testkey = "$_test"; // (NOTE: the test key is part of the storage! It should also be an even number of characters)
  var test = function (_size) {
    try {
      store.removeItem(testkey);
      store.setItem(testkey, new Array(_size + 1).join('0'));
    } catch (_ex) {
      return false;
    }
    return true;
  };
  // Back up the current contents, since the test clears the storage
  var backup = {};
  for (var i = 0, n = store.length; i < n; ++i) backup[store.key(i)] = store.getItem(store.key(i));
  store.clear();
  var low = 0, high = 1,
      _upperLimit = (upperLimit || 1024 * 1024 * 1024) / 2,
      upperTest = true;
  // Grow exponentially until the test fails or the upper limit is hit
  while ((upperTest = test(high)) && high < _upperLimit) { low = high; high *= 2; }
  if (!upperTest) {
    // Binary search between 'low' and 'high'
    var half = ~~((high - low + 1) / 2); // (~~ is a faster Math.floor())
    high -= half;
    while (half > 0) high += (half = ~~(half / 2)) * (test(high) ? 1 : -1);
    high = testkey.length + high;
  }
  if (high > _upperLimit) high = _upperLimit;
  store.removeItem(testkey);
  // Restore the backed-up contents
  for (var p in backup) store.setItem(p, backup[p]);
  return high * 2; // (*2 because of UTF-16 storage: two bytes per character)
}
It also backs up the contents before testing, then restores them.
How it works: it doubles the size until the limit is reached or the test fails. It then stores half the distance between low and high, and subtracts/adds half of the half each time (subtract on failure, add on success), honing in on the proper value.
upperLimit is 1GB by default, and just limits how far upwards to scan exponentially before starting the binary search. I doubt this will even need to be changed, but I'm always thinking ahead. ;)
On Chrome:
> getStorageTotalSize();
> 10485762
> 10485762/2
> 5242881
> localStorage.setItem("a", new Array(5242880).join("0")) // works
> localStorage.setItem("a", new Array(5242881).join("0")) // fails ('a' takes one spot [2 bytes])
IE11, Edge, and Firefox also report the same max size (10485762 bytes).
You can use the following code in modern browsers to efficiently check the storage quota (total & used) in real-time:
if ('storage' in navigator && 'estimate' in navigator.storage) {
  navigator.storage.estimate().then(estimate => {
    console.log("Usage (in Bytes): ", estimate.usage,
                ", Total Quota (in Bytes): ", estimate.quota);
  });
}
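The same check reads a little cleaner with async/await; this is just a reshuffling of the snippet above, not a different API (logStorageQuota is my own name for it):

async function logStorageQuota() {
  if ('storage' in navigator && 'estimate' in navigator.storage) {
    const { usage, quota } = await navigator.storage.estimate();
    console.log(`Used ${usage} of ${quota} bytes (${((usage / quota) * 100).toFixed(2)}%)`);
  }
}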
I'm doing the following:
getLocalStorageSizeLimit = function () {
  var maxLength = Math.pow(2, 24);
  var preLength = 0;
  var hugeString = "0";
  var testString;
  var keyName = "testingLengthKey";

  // 2^24 = 16777216 should be enough for all browsers
  testString = (new Array(Math.pow(2, 24))).join("X");

  while (maxLength !== preLength) {
    try {
      localStorage.setItem(keyName, testString);
      preLength = testString.length;
      maxLength = Math.ceil(preLength + ((hugeString.length - preLength) / 2));
      testString = hugeString.substr(0, maxLength);
    } catch (e) {
      hugeString = testString;
      maxLength = Math.floor(testString.length - (testString.length - preLength) / 2);
      testString = hugeString.substr(0, maxLength);
    }
  }

  localStorage.removeItem(keyName);
  // Original used this.storageObject in place of localStorage. I can only guess the goal is
  // to check the size of the localStorage with everything but the testString, given that
  // maxLength is then added.
  maxLength = JSON.stringify(localStorage).length + maxLength + keyName.length - 2;

  return maxLength;
};
I really like cdmckay's answer, but checking the size in real time doesn't work well there: it is just too slow (2 seconds for me). This is an improved version, which is way faster and more exact, with an option to choose how big the error can be (default 250,000; the smaller the error, the longer the calculation takes):
function getLocalStorageMaxSize(error) {
  if (localStorage) {
    var max = 10 * 1024 * 1024,
        i = 64,
        string1024 = '',
        string = '',
        // generate a random key
        testKey = 'size-test-' + Math.random().toString(),
        minimalFound = 0,
        error = error || 25e4;

    // build a ~1 KB chunk (1e16 stringifies to 17 characters, so each chunk is actually 1088 chars)
    while (i--) string1024 += 1e16;

    i = max / 1024;
    // fill a string with at least 'max' characters
    while (i--) string += string1024;

    i = max;
    // binary-search-like probing: widen by 1.5x on success, halve the gap on failure
    while (i > 1) {
      try {
        localStorage.setItem(testKey, string.substr(0, i));
        localStorage.removeItem(testKey);
        if (minimalFound < i - error) {
          minimalFound = i;
          i = i * 1.5;
        }
        else break;
      } catch (e) {
        localStorage.removeItem(testKey);
        i = minimalFound + (i - minimalFound) / 2;
      }
    }

    return minimalFound;
  }
}
To test:
console.log(getLocalStorageMaxSize()); // takes .3s
console.log(getLocalStorageMaxSize(.1)); // takes 2s, but way more exact
This runs dramatically faster with the default error, and it can be made much more exact when necessary.
I once developed a Chrome (desktop browser) extension and tested the real maximum size of Local Storage for this reason.
My results:
Ubuntu 18.04.1 LTS (64-bit)
Chrome 71.0.3578.98 (Official Build) (64-bit)
Local Storage content size 10240 KB (10 MB)
Using more than 10240 KB returned this error:
Uncaught DOMException: Failed to execute 'setItem' on 'Storage': Setting the value of 'notes' exceeded the quota.
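If you want to handle that limit gracefully rather than crash, a minimal sketch (the helper name safeSetItem is my own) is to wrap setItem in a try/catch:

// Returns false instead of throwing when the quota is exceeded
function safeSetItem(key, value) {
  try {
    localStorage.setItem(key, value);
    return true;
  } catch (e) {
    // Most browsers report e.name === 'QuotaExceededError' here
    console.warn('localStorage write failed:', e.name);
    return false;
  }
}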
Edit on Oct 23, 2020
Chrome extensions can use the chrome.storage API. If you declare the "storage" permission in manifest.json:
{
  "name": "My extension",
  ...
  "permissions": ["storage"],
  ...
}
You can access it like this:
chrome.storage.local.QUOTA_BYTES // 5242880 (in bytes)
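For completeness, a hedged usage sketch (set and getBytesInUse are real chrome.storage.local methods; the key name notes is just an example):

chrome.storage.local.set({ notes: 'some text' }, () => {
  chrome.storage.local.getBytesInUse(null, bytes => {
    console.log('Using ' + bytes + ' of ' + chrome.storage.local.QUOTA_BYTES + ' bytes');
  });
});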
I am trying to write a program that gives me a warning if I use 80% of my RAM, but I am not getting any response when I put the condition on the used memory percentage. Here is my program.
const os = require('os');

const CheckMemory = () => {
  const total_memory = os.totalmem();
  const free_memory = os.freemem();
  const used_memory = total_memory - free_memory;
  const usedmemper = used_memory * 100 / total_memory;
  return usedmemper.toFixed(2) + " %";
}

const Warning = (mem) => {
  console.log(mem)
  if (mem <= 80) {
    console.log("You are currently using 80% of your available memory");
  }
}

const Usedram = CheckMemory();
Warning(Usedram);
The bug is that you are comparing strings with integers and JavaScript does not warn about that.
Let's say the memory is 99%. CheckMemory will return the string 99.00 % in your example, which you will then compare against the integer 80:
console.log("99.00 %" <= 80);
JavaScript will not complain, but return false. That is why you never see the warning printed.
To fix it, CheckMemory should return a number (between 0 and 100), not a human readable string (like 80.23 %).
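A minimal sketch of that fix (names changed slightly; the >= 80 comparison is my reading of the intent - warn when usage is high):

const os = require('os');

// Return a number between 0 and 100, not a formatted string
const checkMemoryPercent = () => {
  const total = os.totalmem();
  const used = total - os.freemem();
  return (used * 100) / total;
};

const warnIfHigh = (percent) => {
  console.log(percent.toFixed(2) + ' %'); // format only for display
  if (percent >= 80) {
    console.log('You are currently using 80% of your available memory');
  }
};

warnIfHigh(checkMemoryPercent());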
I'm having trouble explaining why my performance test returns significantly different results on two different types of runs.
Steps to reproduce issue:
Get the code from gist:
https://gist.github.com/AVAVT/83685bfe5280efc7278465f90657b9ea
Run node practice1.generator
Run node practice1.performance-test
practice1.generator should generate a test-data.json file, and log some searching algorithm execution time into the console.
After that, practice1.performance-test reads from test-data.json, and performs the exact same evaluation function on that same data.
The output on my machine is consistently similar to this:
> node practice1.generator
Generate time: 9,307,061,368 nanoseconds
Total time using indexOf : 7,005,750 nanoseconds
Total time using for loop : 7,463,967 nanoseconds
Total time using binary search : 1,741,822 nanoseconds
Total time using interpolation search: 915,532 nanoseconds
> node practice1.performance-test
Total time using indexOf : 11,574,993 nanoseconds
Total time using for loop : 8,765,902 nanoseconds
Total time using binary search : 2,365,598 nanoseconds
Total time using interpolation search: 771,005 nanoseconds
Note the difference in execution time in the case of indexOf and binary search comparing to the other algorithms.
If I repeatedly run node practice1.generator or node practice1.performance-test, the result is quite consistent though.
This is troubling because I can't figure out which result is credible, or why such differences occur. Is it caused by a difference between the generated test array and the JSON.parse-d test array; is it caused by process.hrtime(); or is it some unknown reason I couldn't even fathom?
Update: I have traced the cause of the indexOf case to JSON.parse. Inside practice1.generator, the tests array is the original generated array, while in practice1.performance-test the array is read from the JSON file and is probably different from the original array somehow.
If within practice1.generator I instead JSON.parse() a new array from the string:
var tests2 = JSON.parse(JSON.stringify(tests));
performanceUtil.performanceTest(tests2);
The execution time of indexOf is now consistent on both files.
> node practice1.generator
Generate time: 9,026,080,466 nanoseconds
Total time using indexOf : 11,016,420 nanoseconds
Total time using for loop : 8,534,540 nanoseconds
Total time using binary search : 1,586,780 nanoseconds
Total time using interpolation search: 742,460 nanoseconds
> node practice1.performance-test
Total time using indexOf : 11,423,556 nanoseconds
Total time using for loop : 8,509,602 nanoseconds
Total time using binary search : 2,303,099 nanoseconds
Total time using interpolation search: 718,723 nanoseconds
So at least I know indexOf runs better on the original array and worse on a JSON.parse-d array. Still, I only know the cause, not the reason why.
The binary search execution time remains different between the two files, consistently taking ~1.7ms in practice1.generator (even when using a JSON.parse-d object) and ~2.3ms in practice1.performance-test.
Below is the same code as in the gist, provided for future reference purpose.
performance-utils.js:
'use strict';

const performanceTest = function (tests) {
  var tindexOf = process.hrtime();
  tests.forEach(testcase => {
    var result = testcase.input.indexOf(testcase.target);
    if (result !== testcase.output) console.log("Errr", result, testcase.output);
  });
  tindexOf = process.hrtime(tindexOf);

  var tmanual = process.hrtime();
  tests.forEach(testcase => {
    const arrLen = testcase.input.length;
    var result = -1;
    for (var i = 0; i < arrLen; i++) {
      if (testcase.input[i] === testcase.target) {
        result = i;
        break;
      }
    }
    if (result !== testcase.output) console.log("Errr", result, testcase.output);
  });
  tmanual = process.hrtime(tmanual);

  var tbinary = process.hrtime();
  tests.forEach(testcase => {
    var max = testcase.input.length - 1;
    var min = 0;
    var check, num;
    var result = -1;
    while (max >= min) {
      check = Math.floor((max + min) / 2);
      num = testcase.input[check];
      if (num === testcase.target) {
        result = check;
        break;
      }
      else if (num > testcase.target) max = check - 1;
      else min = check + 1;
    }
    if (result !== testcase.output) console.log("Errr", result, testcase.output);
  });
  tbinary = process.hrtime(tbinary);

  var tinterpolation = process.hrtime();
  tests.forEach(testcase => {
    var max = testcase.input.length - 1;
    var min = 0;
    var result = -1;
    var check, num;
    while (max > min && testcase.target >= testcase.input[min] && testcase.target <= testcase.input[max]) {
      check = min + Math.round((max - min) * (testcase.target - testcase.input[min]) / (testcase.input[max] - testcase.input[min]));
      num = testcase.input[check];
      if (num === testcase.target) {
        result = check;
        break;
      }
      else if (testcase.target > num) min = check + 1;
      else max = check - 1;
    }
    if (result === -1 && testcase.input[max] == testcase.target) result = max;
    if (result !== testcase.output) console.log("Errr", result, testcase.output);
  });
  tinterpolation = process.hrtime(tinterpolation);

  console.log(`Total time using indexOf : ${(tindexOf[0] * 1e9 + tindexOf[1]).toString().replace(/\B(?=(\d{3})+(?!\d))/g, ",")} nanoseconds`);
  console.log(`Total time using for loop : ${(tmanual[0] * 1e9 + tmanual[1]).toString().replace(/\B(?=(\d{3})+(?!\d))/g, ",")} nanoseconds`);
  console.log(`Total time using binary search : ${(tbinary[0] * 1e9 + tbinary[1]).toString().replace(/\B(?=(\d{3})+(?!\d))/g, ",")} nanoseconds`);
  console.log(`Total time using interpolation search: ${(tinterpolation[0] * 1e9 + tinterpolation[1]).toString().replace(/\B(?=(\d{3})+(?!\d))/g, ",")} nanoseconds`);
}

module.exports = { performanceTest }
practice1.generator.js:
'use strict';
require('util');
const performanceUtil = require('./performance-utils');
const fs = require('fs');
const path = require('path');

const outputFilePath = path.join(__dirname, process.argv[3] || 'test-data.json');
const AMOUNT_TO_GENERATE = parseInt(process.argv[2] || 1000);

// Make sure ARRAY_LENGTH_MAX < (MAX_NUMBER - MIN_NUMBER)
const ARRAY_LENGTH_MIN = 10000;
const ARRAY_LENGTH_MAX = 18000;
const MIN_NUMBER = -10000;
const MAX_NUMBER = 10000;

const candidates = Array.from(Array(MAX_NUMBER - MIN_NUMBER + 1), (item, index) => MIN_NUMBER + index);

function createNewTestcase() {
  var input = candidates.slice();
  const lengthToGenerate = Math.floor(Math.random() * (ARRAY_LENGTH_MAX - ARRAY_LENGTH_MIN + 1)) + ARRAY_LENGTH_MIN;

  while (input.length > lengthToGenerate) {
    input.splice(Math.floor(Math.random() * input.length), 1);
  }

  const notfound = input.length === lengthToGenerate ?
    input.splice(Math.floor(Math.random() * input.length), 1)[0] : MIN_NUMBER - 1;

  const output = Math.floor(Math.random() * (input.length + 1)) - 1;
  const target = output === -1 ? notfound : input[output];

  return {
    input,
    target,
    output
  };
}

var tgen = process.hrtime();

var tests = [];
while (tests.length < AMOUNT_TO_GENERATE) {
  tests.push(createNewTestcase());
}
fs.writeFileSync(outputFilePath, JSON.stringify(tests));

tgen = process.hrtime(tgen);
console.log(`Generate time: ${(tgen[0] * 1e9 + tgen[1]).toString().replace(/\B(?=(\d{3})+(?!\d))/g, ",")} nanoseconds`);

performanceUtil.performanceTest(tests);
practice1.performance-test.js:
'use strict';
require('util');
const performanceUtil = require('./performance-utils');
const fs = require('fs');
const path = require('path');
const outputFilePath = path.join(__dirname, process.argv[2] || 'test-data.json');
var tests = JSON.parse(fs.readFileSync(outputFilePath));
performanceUtil.performanceTest(tests);
As you have already noticed, the performance difference comes down to the comparison: generated array vs JSON.parse-d array. What do we have in both cases: the same arrays with the same numbers? So the lookup performance must be the same? No.
Every JavaScript engine has various data structures for representing the same values (numbers, objects, arrays, etc.). In most cases the optimizer tries to find the best data type to use, and it often generates additional metadata, like hidden classes or tags for arrays.
There are several very nice articles about the data types:
v8-perf/data-types
v8 performance tips
So why are arrays created by JSON.parse slow? The parser, when creating values, doesn't fully optimize the data structures, and as a result we get untagged arrays with boxed doubles. But we can optimize the arrays afterwards with Array.from; in your case, just like the generated arrays, you then get SMI arrays with SMI (small integer) numbers. Here is an example based on your sample.
const fs = require('fs');
const path = require('path');
const outputFilePath = path.join(__dirname, process.argv[2] || 'test-data.json');
let tests = JSON.parse(fs.readFileSync(outputFilePath));
// for this demo we take only the first items array
var arrSlow = tests[0].input;
// `slice` copies array as-is
var arrSlow2 = tests[0].input.slice();
// array is copied and optimized
var arrFast = Array.from(tests[0].input);
console.log(%HasFastSmiElements(arrFast), %HasFastSmiElements(arrSlow), %HasFastSmiElements(arrSlow2));
//> true, false, false
console.log(%HasFastObjectElements(arrFast), %HasFastObjectElements(arrSlow), %HasFastObjectElements(arrSlow2));
//> false, true, true
console.log(%HasFastDoubleElements(arrFast), %HasFastDoubleElements(arrSlow), %HasFastDoubleElements(arrSlow2));
//> false, false, false
// small numbers and unboxed doubles in action
console.log(%HasFastDoubleElements([Math.pow(2, 31)]));
console.log(%HasFastSmiElements([Math.pow(2, 30)]));
Run it with node --allow-natives-syntax test.js
OK... first of all let's talk about testing strategy...
Running these tests several times gives wildly different results, fluctuating a lot for each point... see the results here
https://docs.google.com/spreadsheets/d/1Z95GtT85BljpNda4l-usPjNTA5lJtUmmcY7BVB8fFGQ/edit?usp=sharing
After updating the test (running 100 tests in a row and calculating the average), I found that the main differences in execution times are:
indexOf and for loops work better in the GENERATOR scenario
binary search and interpolation search work better in the JSON-parse scenario
Please look at the Google doc before continuing...
OK... Great... This is much easier to explain... basically we are in a situation where RANDOM memory access (binary, interpolation search) and CONSECUTIVE memory access (indexOf, for) give different results.
Hmm. Let's go deeper inside the memory management model of Node.js.
First of all, Node.js has several array representations. I actually know only two - numberArray and objectArray (meaning an array that can include a value of any type).
Let's look at the GENERATOR scenario:
During initial array creation, Node.js is ABLE to detect that your array contains only numbers, since it starts with only numbers and nothing of another type is added to it. This results in a simple memory allocation strategy: just a raw row of integers going one by one in memory...
The array is represented as an array of raw numbers in memory; most likely only the memory paging table has an effect here.
This fact clearly explains why CONSECUTIVE memory access works better in this case.
Let's look at the JSON-parse scenario:
During JSON parsing the structure of the JSON is unpredictable (Node.js uses a streaming JSON parser (99.99% confidence)), so every value is treated in whatever way is most convenient for JSON parsing, so...
The array is represented as an array of references to the numbers in memory, just because while parsing JSON this solution is more productive in most cases (and nobody cares)...
As we are allocating memory in the heap in small chunks, the memory gets filled in a more fluid way.
Also, in this model RANDOM memory access gives better results, because the Node.js engine has no other option - to optimize access time it creates either a good prefix tree or a hash map, which gives constant access time in RANDOM memory access scenarios.
And this is quite a good explanation of why the JSON-parse scenario wins with binary and interpolation search.
What is the correct way to implement a Peak Meter like those in Logic Pro with the Web Audio API AnalyserNode?
I know AnalyserNode.getFloatFrequencyData() returns decibel values, but how do you combine those values to get the one to be displayed in the meter? Do you just take the maximum value, like in the following code sample (where analyserData comes from getFloatFrequencyData()):
let peak = -Infinity;
for (let i = 0; i < analyserData.length; i++) {
  const x = analyserData[i];
  if (x > peak) {
    peak = x;
  }
}
Inspecting some output from just taking the max makes it look like this is not the correct approach. Am I wrong?
Alternatively, would it be a better idea to use a ScriptProcessorNode instead? How would that approach differ?
If you take the maximum of getFloatFrequencyData()'s results in one frame, then what you are measuring is the audio power at a single frequency (whichever one has the most power). What you actually want to measure is the peak at any frequency — in other words, you want to not use the frequency data, but the unprocessed samples not separated into frequency bins.
The catch is that you'll have to compute the decibel power yourself. This is fairly simple arithmetic: you take some number of samples (one or more), square them, average them, and convert to decibels with 10 · log10. Note that even a “peak” meter may be doing averaging - just on a much shorter time scale.
Here's a complete example. (Warning: produces sound.)
document.getElementById('start').addEventListener('click', () => {
  const context = new (window.AudioContext || window.webkitAudioContext)();
  const oscillator = context.createOscillator();
  oscillator.type = 'square';
  oscillator.frequency.value = 440;
  oscillator.start();

  const gain1 = context.createGain();
  const analyser = context.createAnalyser();

  // Reduce output level to not hurt your ears.
  const gain2 = context.createGain();
  gain2.gain.value = 0.01;

  oscillator.connect(gain1);
  gain1.connect(analyser);
  analyser.connect(gain2);
  gain2.connect(context.destination);

  function displayNumber(id, value) {
    const meter = document.getElementById(id + '-level');
    const text = document.getElementById(id + '-level-text');
    text.textContent = value.toFixed(2);
    meter.value = isFinite(value) ? value : meter.min;
  }

  // Time domain samples are always provided with the count of
  // fftSize even though there is no FFT involved.
  // (Note that fftSize can only have particular values, not an
  // arbitrary integer.)
  analyser.fftSize = 2048;
  const sampleBuffer = new Float32Array(analyser.fftSize);

  function loop() {
    // Vary power of input to analyser. Linear in amplitude, so
    // nonlinear in dB power.
    gain1.gain.value = 0.5 * (1 + Math.sin(Date.now() / 4e2));

    analyser.getFloatTimeDomainData(sampleBuffer);

    // Compute average power over the interval.
    let sumOfSquares = 0;
    for (let i = 0; i < sampleBuffer.length; i++) {
      sumOfSquares += sampleBuffer[i] ** 2;
    }
    const avgPowerDecibels = 10 * Math.log10(sumOfSquares / sampleBuffer.length);

    // Compute peak instantaneous power over the interval.
    let peakInstantaneousPower = 0;
    for (let i = 0; i < sampleBuffer.length; i++) {
      const power = sampleBuffer[i] ** 2;
      peakInstantaneousPower = Math.max(power, peakInstantaneousPower);
    }
    const peakInstantaneousPowerDecibels = 10 * Math.log10(peakInstantaneousPower);

    // Note that you should then add or subtract as appropriate to
    // get the _reference level_ suitable for your application.

    // Display value.
    displayNumber('avg', avgPowerDecibels);
    displayNumber('inst', peakInstantaneousPowerDecibels);

    requestAnimationFrame(loop);
  }
  loop();
});
<button id="start">Start</button>
<p>
Short average
<meter id="avg-level" min="-100" max="10" value="-100"></meter>
<span id="avg-level-text">—</span> dB
</p>
<p>
Instantaneous
<meter id="inst-level" min="-100" max="10" value="-100"></meter>
<span id="inst-level-text">—</span> dB
</p>
Do you just take the maximum value
For a peak meter, yes. For a VU meter, there's all sorts of considerations in measuring the power, as well as the ballistics of an analog meter. There's also RMS power metering.
In digital land, you'll find a peak meter to be most useful for many tasks, and by far the easiest to compute.
A peak for any given set of samples is the highest absolute value in the set. First though, you need that set of samples. If you call getFloatFrequencyData(), you're not getting sample values, you're getting the spectrum. What you want instead is getFloatTimeDomainData(). This data is a low resolution representation of the samples. That is, you might have 4096 samples in your window, but your analyser might be configured with 256 buckets... so those 4096 samples will be resampled down to 256 samples. This is generally acceptable for a metering task.
From there, it's just Math.max(-Math.min(...samples), Math.max(...samples)) to get the max of the absolute value (note the spread syntax: Math.max and Math.min take individual numbers, not an array).
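Put together, a minimal sketch of a peak reading in dBFS might look like this (assuming analyser is an existing AnalyserNode; 20 · log10 of the peak amplitude is the standard amplitude-to-decibel conversion):

const samples = new Float32Array(analyser.fftSize);
analyser.getFloatTimeDomainData(samples);

let peak = 0;
for (let i = 0; i < samples.length; i++) {
  peak = Math.max(peak, Math.abs(samples[i]));
}
// 0 dBFS corresponds to a full-scale sample of +/-1.0
const peakDb = 20 * Math.log10(peak);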
Suppose you wanted a higher resolution peak meter. For that, you need all the raw samples you can get. That's where a ScriptProcessorNode comes in handy. You get access to the actual sample data.
Basically, for this task, AnalyserNode is much faster, but slightly lower resolution. ScriptProcessorNode is much slower, but slightly higher resolution.
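For reference, a sketch of the ScriptProcessorNode approach might look like this (context and source are assumed to already exist; note that ScriptProcessorNode is deprecated in favor of AudioWorklet):

const processor = context.createScriptProcessor(4096, 1, 1);
processor.onaudioprocess = (e) => {
  const samples = e.inputBuffer.getChannelData(0); // raw, full-resolution samples
  let peak = 0;
  for (let i = 0; i < samples.length; i++) {
    peak = Math.max(peak, Math.abs(samples[i]));
  }
  // peak is 0..1; use 20 * Math.log10(peak) for dBFS
};
source.connect(processor);
processor.connect(context.destination);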
How can I know how much data is transferred over the wire, in terms of size in kilobytes or megabytes?
Take for example
{
  'a': 1,
  'b': 2
}
How do I know what the size of this payload is, not just the length or the number of items in the object?
UPDATE
content-encoding:gzip
content-type:application/json
Transfer-Encoding:chunked
vary:Accept-Encoding
An answer to the actual question should include the bytes spent on the headers and should include taking gzip compression into account, but I will ignore those things.
You have a few options. They all output the same answer when run:
If Using a Browser or Node (Not IE)
const size = new TextEncoder().encode(JSON.stringify(obj)).length
const kiloBytes = size / 1024;
const megaBytes = kiloBytes / 1024;
If you need it to work on IE, you can use a polyfill
If Using Node
const size = Buffer.byteLength(JSON.stringify(obj))
(which is the same as Buffer.byteLength(JSON.stringify(obj), "utf8")).
Shortcut That Works in IE, Modern Browsers, and Node
const size = encodeURI(JSON.stringify(obj)).split(/%..|./).length - 1;
That last solution will work in almost every case, but it will throw a URIError: URI malformed exception if you feed it input containing a lone surrogate (a string that should not exist on its own), like let obj = { partOfAnEmoji: "👍🏽"[1] }. The other two solutions do not have that weakness.
(Credits: Credit for the first solution goes here.
Credit for the second solution goes to the utf8-byte-length package (which is good, you could use that instead).
Most of the credit for that last solution goes to here, but I simplified it a bit.
I found the test suite of the utf8-byte-length package super helpful, when researching this.)
For ASCII, you can count the characters if you do...
JSON.stringify({
  'a': 1,
  'b': 2
}).length
If you have special characters too, you can pass it through a function that calculates the length in UTF-8 bytes...
function lengthInUtf8Bytes(str) {
  // Matches only the 10.. bytes that are non-initial characters in a multi-byte sequence.
  var m = encodeURIComponent(str).match(/%[89ABab]/g);
  return str.length + (m ? m.length : 0);
}
Should be accurate...
var myJson = JSON.stringify({
  'a': 1,
  'b': 2,
  'c': 'Máybë itß nºt that sîmple, though.'
})
// simply measuring character length of string is not enough...
console.log("Inaccurate for non ascii chars: "+myJson.length)
// pass it through UTF-8 length function...
console.log("Accurate for non ascii chars: "+ lengthInUtf8Bytes(myJson))
/* Should echo...
Inaccurate for non ascii chars: 54
Accurate for non ascii chars: 59
*/
Working demo
Here is a function which does the job.
function memorySizeOf(obj) {
  var bytes = 0;

  function sizeOf(obj) {
    if (obj !== null && obj !== undefined) {
      switch (typeof obj) {
        case 'number':
          bytes += 8;
          break;
        case 'string':
          bytes += obj.length * 2;
          break;
        case 'boolean':
          bytes += 4;
          break;
        case 'object':
          var objClass = Object.prototype.toString.call(obj).slice(8, -1);
          if (objClass === 'Object' || objClass === 'Array') {
            for (var key in obj) {
              if (!obj.hasOwnProperty(key)) continue;
              sizeOf(obj[key]);
            }
          } else bytes += obj.toString().length * 2;
          break;
      }
    }
    return bytes;
  }

  function formatByteSize(bytes) {
    if (bytes < 1024) return bytes + " bytes";
    else if (bytes < 1048576) return (bytes / 1024).toFixed(3) + " KiB";
    else if (bytes < 1073741824) return (bytes / 1048576).toFixed(3) + " MiB";
    else return (bytes / 1073741824).toFixed(3) + " GiB";
  }

  return formatByteSize(sizeOf(obj));
}
console.log(memorySizeOf({"name": "john"}));
I got the snippet from the following URL:
https://gist.github.com/zensh/4975495
Buffer.from(JSON.stringify(obj)).length will give you the number of bytes.
JSON objects are JavaScript objects, and this SO question already shows a way to calculate the rough size. <- See Comments/Edits
EDIT: Your actual question is quite difficult, as even if you could access the headers, the content-length doesn't show the gzipped value as far as I know.
If you're lucky enough to have a good html5 browser, this contains a nice piece of code:
var xhr = new window.XMLHttpRequest();

// Upload progress
xhr.upload.addEventListener("progress", function (evt) {
  if (evt.lengthComputable) {
    var percentComplete = evt.loaded / evt.total;
    // Do something with upload progress
    console.log(percentComplete);
  }
});
Therefore the following should work (can't test right now):
var xhr = new window.XMLHttpRequest();
if (xhr.lengthComputable) {
  alert(xhr.total);
}
Note that XMLHttpRequest is native and doesn't require jQuery.
Next EDIT:
I found a kind-of duplicate (slightly different question, pretty much the same answer). The important part for this question is
content_length = jqXHR.getResponseHeader("X-Content-Length");
Where jqXHR is your XmlHttpRequest.
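With the newer fetch API, the equivalent check is a header read. Whether content-length reflects the compressed (gzipped) size depends on how your server sends the response, and X-Content-Length is a custom header your backend would have to set; this is only a sketch:

fetch('/api/data').then(res => {
  console.log('content-length:', res.headers.get('content-length'));
  console.log('x-content-length:', res.headers.get('x-content-length'));
  return res.json();
});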
I need to generate unique ids in the browser. Currently, I'm using this:
Math.floor(Math.random() * 10000000000000001)
I'd like to use the current UNIX time ((new Date).getTime()), but I'm worried that if two clients generate ids at the exact same time, they wouldn't be unique.
Can I use the current UNIX time (I'd like to because that way ids would store more information)? If not, what's the best way to do this (maybe UNIX time + 2 random digits)?
You can create a GUID using the following links:
http://softwareas.com/guid0-a-javascript-guid-generator
Create GUID / UUID in JavaScript?
This will maximise your chance of "uniqueness."
Alternatively, if it is a secure page, you can concatenate the date/time with the username to prevent multiple simultaneous generated values.
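A minimal sketch of that idea (the username argument is an assumption - use whatever per-client value you actually have):

// Date/time plus a per-client value plus a little randomness
function makeId(username) {
  return username + '-' + Date.now() + '-' + Math.floor(Math.random() * 1e6);
}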
https://github.com/uuidjs/uuid provides RFC compliant UUIDs based on either timestamp or random #'s. Single-file with no dependencies, supports timestamp or random #-based UUIDs, uses native APIs for crypto-quality random numbers if available, plus other goodies.
In modern browsers you can use crypto:
var array = new Uint32Array(1);
window.crypto.getRandomValues(array);
console.log(array);
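Recent browsers (in secure contexts) also expose crypto.randomUUID(), which returns a version 4 UUID string directly:

// Requires a secure context (HTTPS or localhost)
const id = crypto.randomUUID(); // e.g. '36b8f84d-df4e-4d49-b662-bcde71a8764f'
console.log(id);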
var c = 1;

function cuniq() {
  var d = new Date(),
      m = d.getMilliseconds() + "",
      u = ++d + m + (++c === 10000 ? (c = 1) : c);
  return u;
}
Here is my JavaScript code to generate a GUID. It does a quick hex mapping and is very efficient:
AuthenticationContext.prototype._guid = function () {
  // RFC4122: The version 4 UUID is meant for generating UUIDs from truly-random or
  // pseudo-random numbers.
  // The algorithm is as follows:
  //   Set the two most significant bits (bits 6 and 7) of the
  //   clock_seq_hi_and_reserved to zero and one, respectively.
  //   Set the four most significant bits (bits 12 through 15) of the
  //   time_hi_and_version field to the 4-bit version number from
  //   Section 4.1.3. Version4
  //   Set all the other bits to randomly (or pseudo-randomly) chosen
  //   values.
  // UUID = time-low "-" time-mid "-" time-high-and-version "-" clock-seq-reserved-and-low (2 hexOctet) "-" node
  // time-low = 4 hexOctet
  // time-mid = 2 hexOctet
  // time-high-and-version = 2 hexOctet
  // clock-seq-and-reserved = hexOctet
  // clock-seq-low = hexOctet
  // node = 6 hexOctet
  // Format: xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx
  // y could be 1000, 1001, 1010, 1011 since the most significant two bits need to be 10
  // y values are 8, 9, A, B
  var guidHolder = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx';
  var hex = '0123456789abcdef';
  var r = 0;
  var guidResponse = "";
  for (var i = 0; i < 36; i++) {
    if (guidHolder[i] !== '-' && guidHolder[i] !== '4') {
      // each x and y needs to be random
      r = Math.random() * 16 | 0;
    }
    if (guidHolder[i] === 'x') {
      guidResponse += hex[r];
    } else if (guidHolder[i] === 'y') {
      // clock-seq-and-reserved first hex is filtered and remaining hex values are random
      r &= 0x3; // bit and with 0011 to set pos 2 to zero ?0??
      r |= 0x8; // set pos 3 to 1 as 1???
      guidResponse += hex[r];
    } else {
      guidResponse += guidHolder[i];
    }
  }
  return guidResponse;
};
You can always run a test against existing IDs in the set to accept or reject the generated random number recursively.
for example:
const randomID = function () {
  // Note: number + Date coerces to string concatenation here
  let id = Math.floor(Math.random() * 10000000000000001) + new Date();
  if (idObjectArray.includes(id)) { // Array.prototype.includes, not contains
    return randomID();              // collision: generate a new one recursively
  } else {
    idObjectArray.push(id);
    return id;
  }
};
This example assumes you would just be pushing the id into a 1D array, but you get the idea. There shouldn't be many collisions given the uniqueness of the random number with the date, so it should be efficient.
There are two ways to achieve this:
const id = Date.now().toString()
While this does not guarantee uniqueness (When you are creating multiple objects within 1ms), this will work on a practical level, since it is usually not long before the objects on the client are sent to a real server.
If you want to create multiple records within 1ms, I suggest using the code below
const { randomBytes } = require("crypto");
// 32 Characters
const id = randomBytes(16).toString("hex");
It works similarly to a UUIDv4 without needing an external library (assuming you have access to Node.js at some point)