Just to train myself a bit in TypeScript, I wrote a simple ES6 Map+Set-like implementation based on a plain JS Object. It works only for primitive keys, so no buckets, no hash codes, etc. The problem I encountered is implementing the delete method. Using plain delete is unacceptably slow: for large maps it's about 300-400x slower than ES6 Map's delete. I noticed a huge performance degradation when the object is large. On Node.js 7.9.0 (and Chrome 57, for example), if an object has 50855 properties, delete performance is the same as ES6 Map's. But at 50856 properties, ES6 Map is faster by two orders of magnitude. Here is simple code to reproduce:
// for node 6: 76300
// for node 7: 50855
const N0 = 50855;

function fast() {
  const N = N0
  const o = {}
  for (let i = 0; i < N; i++) {
    o[i] = i
  }
  const t1 = Date.now()
  for (let i = 0; i < N; i++) {
    delete o[i]
  }
  const t2 = Date.now()
  console.log(N / (t2 - t1) + ' KOP/S')
}

function slow() {
  const N = N0 + 1 // adding just 1
  const o = {}
  for (let i = 0; i < N; i++) {
    o[i] = i
  }
  const t1 = Date.now()
  for (let i = 0; i < N; i++) {
    delete o[i]
  }
  const t2 = Date.now()
  console.log(N / (t2 - t1) + ' KOP/S')
}

fast()
slow()
I guess that instead of deleting properties I could just set them to undefined or to some guard object, but this would mess up the code: hasOwnProperty would not work correctly, for...in loops would need an additional check, and so on. Are there nicer solutions?
P.S. I'm using node 7.9.0 on OSX Sierra
Edited
Thanks for the comments, guys, I fixed OP/S => KOP/S. I think I asked a rather badly specified question, so I changed the title. After some investigation I found out that in Firefox, for example, there is no such problem -- the deletion cost grows linearly. So it's a problem of the super-smart V8. And I think it's just a bug :(
(V8 developer here.) Yes, this is a known issue. The underlying problem is that objects should switch their elements backing store from a flat array to a dictionary when they become too sparse, and the way this has historically been implemented was for every delete operation to check if enough elements were still present for that transition not to happen yet. The bigger the array, the more time this check took. Under certain conditions (recently created objects below a certain size), the check was skipped -- the resulting impressive speedup is what you're observing in the fast() case.
I've taken this opportunity to fix the (frankly quite silly) behavior of the regular/slow path. It should be sufficient to check every now and then, not on every single delete. The fix will be in V8 6.0, which should be picked up by Node in a few months (I believe Node 8 is supposed to get it at some point).
That said, using delete causes various forms and magnitudes of slowdown in many situations, because it tends to make things more complicated, forcing the engine (any engine) to perform more checks and/or fall off various fast paths. It is generally recommended to avoid using delete whenever possible. Since you have ES6 maps/sets, use them! :-)
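To illustrate that recommendation (my sketch, not part of the answer): the same fill-then-remove pattern from the question, written against a Map, where delete is a first-class, optimized operation:

```javascript
// Same shape as the benchmark above, but on a Map instead of a plain object.
const N = 50856; // the "slow" size from the question
const m = new Map();
for (let i = 0; i < N; i++) {
  m.set(i, i);
}
for (let i = 0; i < N; i++) {
  m.delete(i);
}
console.log(m.size); // 0
```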
To answer the question "why does adding 1 to N slow down the delete operation":
My guess: the slowness comes from the way the memory is allocated for your Object.
Try changing your code to this:
(() => {
  const N = 50855
  const o = {}
  for (let i = 0; i < N; i++) {
    o[i] = i
  }
  // Show the heap memory allocated
  console.log(process.memoryUsage().heapTotal);
  const t1 = Date.now()
  for (let i = 0; i < N; i++) {
    delete o[i]
  }
  const t2 = Date.now()
  console.log(N / (t2 - t1) + ' OP/S')
})();
Now, when you run with N = 50855 the memory allocated is: "8306688 bytes" (8.3MB)
When you run with N = 50856 the memory allocated is: "8929280 bytes" (8.9MB).
So you got a 600kb increase in the size of the memory allocated, only by adding one more key to your Object.
Now, I said I "guess" that this is where the slowness comes from, but I think it makes sense for the delete function to be slower as the size of your Object increases.
If you try with N = 70855 you would still have the same 8.9MB used. This is because memory allocators usually allocate memory in fixed "batches" while increasing the size of an Array/Object, in order to reduce the number of memory allocations they do.
Now, the same thing might happen with delete and the GC. The memory you delete has to be picked up by the GC, and if the Object is larger the GC will be slower. The memory might also be released when the number of keys drops below a specific number.
(You should read about memory allocation for dynamic arrays if you want to know more; there was a cool article on what increase rate you should use for memory allocation, I can't find it atm :( )
PS: delete is not "extremely slow"; you just computed the op/s wrong. The elapsed time is in milliseconds, not seconds, so you have to multiply by 1000.
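To make that correction concrete, here is a small helper (the name opsPerSec is mine, not from the post) applying the missing factor of 1000:

```javascript
// Date.now() deltas are in milliseconds, so ops/second needs a *1000 factor.
function opsPerSec(opCount, elapsedMs) {
  return opCount * 1000 / elapsedMs;
}

// 50855 deletes in 100 ms is ~508550 ops/s, i.e. ~508.55 KOP/S
console.log(opsPerSec(50855, 100)); // 508550
```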
Related
I need some help to optimize the code below. I can't understand how to rewrite the code to avoid the deoptimizations.
Below is the code that works very slow on the Node platform. I took it from benchmarksgame-team binary-trees benchmark and added minor changes.
When it is run with --trace-deopt it shows that the functions in the hot path are deoptimized, e.g.
[bailout (kind: deopt-lazy, reason: (unknown)): begin. deoptimizing 0x02117de4aa11 <JSFunction bottomUpTree2 (sfi = 000002F24C7D7539)>, opt id 3, bytecode offset 9, deopt exit 17, FP to SP delta 80, caller SP 0x00807d7fe6f0, pc 0x7ff6afaca78d]
The benchmark, run it using node --trace-deopt a 20
function mainThread() {
  const maxDepth = Math.max(6, parseInt(process.argv[2]));
  const stretchDepth = maxDepth + 1;
  const check = itemCheck(bottomUpTree(stretchDepth));
  console.log(`stretch tree of depth ${stretchDepth}\t check: ${check}`);
  const longLivedTree = bottomUpTree(maxDepth);
  for (let depth = 4; depth <= maxDepth; depth += 2) {
    const iterations = 1 << maxDepth - depth + 4;
    work(iterations, depth);
  }
  console.log(`long lived tree of depth ${maxDepth}\t check: ${itemCheck(longLivedTree)}`);
}

function work(iterations, depth) {
  let check = 0;
  for (let i = 0; i < iterations; i++) {
    check += itemCheck(bottomUpTree(depth));
  }
  console.log(`${iterations}\t trees of depth ${depth}\t check: ${check}`);
}

function TreeNode(left, right) {
  return {left, right};
}

function itemCheck(node) {
  if (node.left === null) {
    return 1;
  }
  return 1 + itemCheck2(node);
}

function itemCheck2(node) {
  return itemCheck(node.left) + itemCheck(node.right);
}

function bottomUpTree(depth) {
  return depth > 0
    ? bottomUpTree2(depth)
    : new TreeNode(null, null);
}

function bottomUpTree2(depth) {
  return new TreeNode(bottomUpTree(depth - 1), bottomUpTree(depth - 1))
}

console.time();
mainThread();
console.timeEnd();
(V8 developer here.)
The premise of this question is incorrect: a few deopts don't matter, and don't move the needle regarding performance. Trying to avoid them is an exercise in futility.
The first step when trying to improve performance of something is to profile it. In this case, a profile reveals that the benchmark is spending:
about 46.3% of the time in optimized code (about 4/5 of that for tree creation and 1/5 for tree iteration)
about 0.1% of the time in unoptimized code
about 52.8% of the time in the garbage collector, tracing and freeing all those short-lived objects.
This is as artificial a microbenchmark as they come. 50% GC time never happens in real-world code that does useful things aside from allocating multiple gigabytes of short-lived objects as fast as possible.
In fact, calling them "short-lived objects" is a bit inaccurate in this case. While the vast majority of the individual trees being constructed are indeed short-lived, the code allocates one super-large long-lived tree early on. That fools V8's adaptive mechanisms into assuming that all future TreeNodes will be long-lived too, so it allocates them in "old space" right away -- which would save time if the guess was correct, but ends up wasting time because the TreeNodes that follow are actually short-lived and would be better placed in "new space" (which is optimized for quickly freeing short-lived objects). So just by reshuffling the order of operations, I can get a 3x speedup.
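A self-contained sketch of that reshuffling (the run function and its exact loop bounds are my own simplification, not the author's code; the tree helpers mirror the question's):

```javascript
function TreeNode(left, right) { return { left, right }; }

function bottomUpTree(depth) {
  return depth > 0
    ? TreeNode(bottomUpTree(depth - 1), bottomUpTree(depth - 1))
    : TreeNode(null, null);
}

function itemCheck(node) {
  return node.left === null ? 1 : 1 + itemCheck(node.left) + itemCheck(node.right);
}

function run(maxDepth) {
  let check = 0;
  // churn first: many short-lived trees, allocated and dropped immediately
  for (let depth = 4; depth <= maxDepth; depth += 2) {
    const iterations = 1 << (maxDepth - depth + 4);
    for (let i = 0; i < iterations; i++) check += itemCheck(bottomUpTree(depth));
  }
  // only now build the single long-lived tree; by this point the engine's
  // allocation-site feedback has seen TreeNodes die young, so it keeps
  // allocating them in new space instead of pretenuring them
  const longLivedTree = bottomUpTree(maxDepth);
  return check + itemCheck(longLivedTree);
}

console.log(run(10)); // 131759
```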
This is a typical example of one of the general problems with microbenchmarks: by doing something extreme and unrealistic, they often create situations that are not at all representative of typical real-world scenarios. If engine developers optimized for such microbenchmarks, engines would perform worse for real-world code. If JavaScript developers try to derive insights from microbenchmarks, they'll write code that performs worse under realistic conditions.
Anyway, if you want to optimize this code, avoid as many of those object allocations as you can.
Concretely:
An artificial microbenchmark like this, by its nature, intentionally does useless work (such as: computing the same value a million times). You said you wanted to optimize it, which means avoiding useless work, but you didn't specify which parts of the useless work you'd like to preserve, if any. So in the absence of a preference, I'll assume that all useless work is useless. So let's optimize!
Looking at the code, it creates perfect binary trees of a given depth and counts their nodes. In other words, it sums up the 1s in these examples:
depth=0:

    1

depth=1:

      1
     / \
    1   1

depth=2:

        1
       / \
      1   1
     / \ / \
    1  1 1  1
and so on. If you think about it for a bit, you'll realize that such a tree of depth N has (2 ** (N+1)) - 1 nodes. So we can replace:
itemCheck(bottomUpTree(depth));
with
(2 ** (depth+1)) - 1
(and analogously for the "stretchDepth" line).
Next, we can take care of the useless repetitions. Since x + x + x + x + ... N times is the same as x*N, we can replace:
let check = 0;
for (let i = 0; i < iterations; i++) {
check += (2 ** (depth + 1)) - 1;
}
with just:
let check = ((2 ** (depth + 1)) - 1) * iterations;
With that, we're down from 12 seconds to about 0.1 seconds. Not bad for five minutes of work, eh?
And that remaining time is almost entirely due to the longLivedTree. To apply the same optimizations to the operations creating and iterating that tree, we'd have to move them together, getting rid of its "long-livedness". Would you find that acceptable? You could get the overall time down to less than a millisecond! Would that make the benchmark useless? Not actually any more useless than it was to begin with, just more obviously so.
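Putting both replacements together, the collapsed benchmark core might look like this sketch (the function names are mine):

```javascript
// A perfect binary tree of depth d has 2**(d+1) - 1 nodes, so the whole
// allocate-and-count loop collapses to a single multiplication.
function nodeCount(depth) {
  return 2 ** (depth + 1) - 1;
}

function workOptimized(iterations, depth) {
  return nodeCount(depth) * iterations; // replaces the check += ... loop
}

console.log(nodeCount(2));           // 7, matching the depth=2 diagram
console.log(workOptimized(1024, 4)); // 31744
```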
In the case of a JS array, it's possible to create one with a predefined length. If we pass the length to the constructor, like new Array(itemCount), a JS engine can pre-allocate memory for the array, so it won't need to reallocate while new items are added.
Is it possible to pre-allocate memory for a Map or Set? It doesn't accept the length in the constructor like array does. If you know that a map will contain 10 000 items, it should be much more efficient to allocate the memory once than reallocate it multiple times.
Here is the same question about arrays.
There is discussion in the comments about whether a predefined array size has an impact on performance. I've created a simple test to check it; the results:
Chrome - filling an array with a predefined size is about 3 times faster
Firefox - there is not much difference, but the array with predefined length is a little bit faster
Edge - filling an array of predefined size is about 1.5 times faster
There is a suggestion to create the entries array and pass it to the Map, so the map can take the length from the array. I've created such a test too; the results:
Chrome - construction of a map from an entries array is about 1.5 times slower
Firefox - there is no difference
Edge - passing an entries array to the Map constructor is about 1.5 times faster
There is no way to influence how the engine allocates memory, so whatever you do, you cannot guarantee that the trick you are using works on every engine, or even on another version of the same engine.
There are, however, typed arrays in JS; they are the only data structures with a fixed size. With them, you can implement your own hash map of fixed size.
A very naive implementation is slightly faster² on inserts; I'm not sure how it behaves on reads and other operations (also, it can only store 8-bit integers > 0):
function FixedMap(size) {
  const keys = new Uint8Array(size),
        values = new Uint8Array(size);
  return {
    get(key) {
      let pos = key % size;
      while (keys[pos] !== key) {
        if (keys[pos] % size !== key % size)
          return undefined;
        pos = (pos + 1) % size;
      }
      return values[pos];
    },
    set(key, value) {
      let pos = key % size;
      // linear probing: skip occupied slots that hold a different key
      while (keys[pos] && keys[pos] !== key) pos = (pos + 1) % size;
      keys[pos] = key;
      values[pos] = value;
    }
  };
}
console.time("native");
const map = new Map();
for (let i = 0; i < 10000; i++)
  map.set(i, Math.floor(Math.random() * 1000));
console.timeEnd("native");

console.time("self");
const fixedMap = FixedMap(10000);
for (let i = 0; i < 10000; i++)
  fixedMap.set(i, Math.floor(Math.random() * 1000));
console.timeEnd("self");
² Marketing would say "It is up to 20% faster!" and I would say "It is up to 2ms faster", and I spent more than 10 minutes on this ...
No, this is not possible. It's doubtful even that the Array(length) trick still works; engines have become much better at optimising array assignments and might have chosen a different strategy to (pre)determine the size of memory to allocate.
For Maps and Sets, no such trick exists. Your best bet will be creating them by passing an iterable to their constructor, like new Map(array), instead of using the set/add methods. This does at least offer the chance for the engine to take the iterable size as a hint - although filtering out duplicates can still lead to differences.
Disclaimer: I don't know whether any engines implement, or plan to implement, such an optimisation. It might not be worth the effort, especially if the current memory allocation strategy is already good enough to diminish any improvements.
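A sketch of that constructor-based initialization (the sizes and values here are arbitrary):

```javascript
// Build all entries up front, then hand the iterable to the Map constructor
// in one call instead of 10 000 individual set() calls.
const entries = Array.from({ length: 10000 }, (_, i) => [i, i * 2]);
const m = new Map(entries);

console.log(m.size);    // 10000
console.log(m.get(42)); // 84
```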
I'm working on some kind of 1:1 chat system, the environment is Node.JS
For each country there is a country room (lobby); for each socket client a JS class/object is created, and each object is kept in a list under its unique user id.
This unique id is preserved even when users log in from different browser tabs, etc.
Each object is stored in collections like "connections" (all of them), "operators" (only operators), and "{countryISO}_clients" (users), and the reference key is the unique id.
In some circumstances, I need to access these connections by their socket ids.
At this point, I can think of 2 resolutions.
Using a for-each loop to find the desired object
Creating another collection, this time keyed by socket id (or something else) instead of the unique id
Which one makes sense? Since in JS the second collection would hold references instead of copies, it feels like it makes sense (and looks cleaner), but I can't be sure. Which one is more expensive in memory/performance terms?
I can't make thorough tests since I don't know how to create dummy (simultaneous) socket connections.
Expected connected socket client count: 300 - 1000 (depends on the time of the day)
e.g. user:
"qk32d2k": {
  "uid": "qk32d2k",
  "name": "Josh",
  "socket": "{socket.io's socket reference}",
  "role": "user",
  "rooms": ["room1"],
  "socketids": ["sid1"],
  "country": "us",
  ...
  info: () => { return gatherSomeData(); },
  update: (obj) => { return updateSomeData(obj); },
  send: (data) => { /* send data to this user */ }
}
e.g. Countries collection:
{
  us: {
    "qk32d2k": {"object above."},
    "l33t71": {"another user object."}
  },
  ca: {
    "asd231": {"other user object."}
  }
}
Pick Simple Design First that Optimizes for Most Common Access
There is no ideal answer here in the absolute. CPUs are wicked fast these days, so if I were you I'd start out with one simple mechanism of storing the sockets that you can access both ways you want, even if one way is kind of a brute force search. Pick the data structure that optimizes the access mechanism that you expect to be either most common or most sensitive to performance.
So, if you are going to be looking up by userID the most, then I'd probably store the sockets in a Map object with the userID as the key. That will give you fast, optimized access to get the socket for a given userID.
For finding a socket by some other property of the socket, you will just iterate the Map item by item until you find the desired match on some other socket property. I'd probably use a for/of loop because it's both fast and easy to bail out of the loop when you've found your match (something you can't do on a Map or Array object with .forEach()). You can obviously make yourself a little utility function or method that will do the brute force lookup for you and that will allow you to modify the implementation later without changing much calling code.
Measure and Add Further Optimization Later (if data shows you need to)
Then, once you get up to scale (or simulated scale in pre-production test), you take a look at the performance of your system. If you have loads of room to spare, you're done - no need to look further. If you have some operations that are slower than desired or higher than desired CPU usage, then you profile your system and find out where the time is going. It's most likely that your performance bottlenecks will be elsewhere in your system and you can then concentrate on those aspects of the system. If, in your profiling, you find that the linear lookup to find the desired socket is causing some of your slow-down, then you can make a second parallel lookup Map with the socketID as the key in order to optimize that type of lookup.
But, I would not recommend doing this until you've actually shown that it is an issue. Premature optimization, before you have actual metrics proving something is worth optimizing, just adds complexity to a program without any proof that it is required or even anywhere close to a meaningful bottleneck in your system. Our intuition about where the bottlenecks are is often way, way off. For that reason, I tend to pick an intelligent first design that is relatively simple to implement, maintain and use, and then, only when we have real usage data by which we can measure actual performance metrics, spend more time optimizing it, tweaking it, or making it more complicated in order to make it faster.
Encapsulate the Implementation in Class
If you encapsulate all operations here in a class:
Adding a socket to the data structure.
Removing a socket from the data structure.
Looking up by userID
Looking up by socketID
Any other access to the data structure
Then, all calling code will access this data structure via the class and you can tweak the implementation some time in the future (to optimize based on data) without having to modify any of the calling code. This type of encapsulation can be very useful if you suspect future modifications or change of modifications to the way the data is stored or accessed.
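A hedged sketch of such a wrapper (the class and method names here are mine, not from the answer):

```javascript
// Encapsulates storage so callers never touch the underlying Map directly;
// the brute-force socket-id lookup can later be swapped for a second index
// without changing any calling code.
class ConnectionStore {
  constructor() {
    this.byUserId = new Map(); // primary index: userID -> connection object
  }
  add(conn) {
    this.byUserId.set(conn.uid, conn);
  }
  remove(uid) {
    this.byUserId.delete(uid);
  }
  getByUserId(uid) {
    return this.byUserId.get(uid);
  }
  getBySocketId(sid) {
    // brute force for now; only optimize if profiling shows it matters
    for (const conn of this.byUserId.values()) {
      if (conn.socketids.includes(sid)) return conn;
    }
    return undefined;
  }
}

const store = new ConnectionStore();
store.add({ uid: "qk32d2k", socketids: ["sid1"] });
console.log(store.getBySocketId("sid1").uid); // "qk32d2k"
```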
If You're Still Worried, Design a Quick Bench Measurement
I created a quick snippet that tests how long a brute force lookup is in a 1000 element Map object (when you want to find it by something other than what the key is) and compared it to an indexed lookup.
On my computer, a brute force (non-indexed) lookup takes about 0.002549 ms per lookup (that's an average over 1,000,000 lookups). For comparison, an indexed lookup on the same Map takes about 0.000017 ms. So you save about 0.002532 ms per lookup; this is fractions of a millisecond.
function addCommas(str) {
  var parts = (str + "").split("."),
      main = parts[0],
      len = main.length,
      output = "",
      i = len - 1;
  while (i >= 0) {
    output = main.charAt(i) + output;
    if ((len - i) % 3 === 0 && i > 0) {
      output = "," + output;
    }
    --i;
  }
  // put decimal part back
  if (parts.length > 1) {
    output += "." + parts[1];
  }
  return output;
}

let m = new Map();

// populate the Map with objects that have a property that
// you have to do a brute force lookup on
function rand(min, max) {
  return Math.floor((Math.random() * (max - min)) + min)
}

// keep all randoms here just so we can randomly get one
// to try to find (wouldn't normally do this)
// just for testing purposes
let allRandoms = [];
for (let i = 0; i < 1000; i++) {
  let r = rand(1, 1000000);
  m.set(i, {id: r});
  allRandoms.push(r);
}

// create a set of test lookups
// we do this ahead of time so it's not part of the timed
// section so we're only timing the actual brute force lookup
let numRuns = 1000000;
let lookupTests = [];
for (let i = 0; i < numRuns; i++) {
  lookupTests.push(allRandoms[rand(0, allRandoms.length)]);
}
let indexTests = [];
for (let i = 0; i < numRuns; i++) {
  indexTests.push(rand(0, allRandoms.length));
}

// function to brute force search the map to find one of the random items
function findObj(targetVal) {
  for (let [key, val] of m) {
    if (val.id === targetVal) {
      return val;
    }
  }
  return null;
}

let startTime = Date.now();
for (let i = 0; i < lookupTests.length; i++) {
  // get an id from the allRandoms to search for
  let found = findObj(lookupTests[i]);
  if (!found) {
    console.log("!!didn't find brute force target")
  }
}
let delta = Date.now() - startTime;
//console.log(`Total run time for ${addCommas(numRuns)} lookups: ${delta} ms`);
//console.log(`Avg run time per lookup: ${delta/numRuns} ms`);

// Now, see how fast the same number of indexed lookups are
let startTime2 = Date.now();
for (let i = 0; i < indexTests.length; i++) {
  let found = m.get(indexTests[i]);
  if (!found) {
    console.log("!!didn't find indexed target")
  }
}
let delta2 = Date.now() - startTime2;
//console.log(`Total run time for ${addCommas(numRuns)} lookups: ${delta2} ms`);
//console.log(`Avg run time per lookup: ${delta2/numRuns} ms`);

let results = `
Total run time for ${addCommas(numRuns)} brute force lookups: ${delta} ms<br>
Avg run time per brute force lookup: ${delta/numRuns} ms<br>
<hr>
Total run time for ${addCommas(numRuns)} indexed lookups: ${delta2} ms<br>
Avg run time per indexed lookup: ${delta2/numRuns} ms<br>
<hr>
Net savings of an indexed lookup is ${(delta - delta2)/numRuns} ms per lookup
`;
document.body.innerHTML = results;
I've been looking for an efficient way to process large lists of vectors in javascript. I created a suite of performance tests that perform in-place scalar-vector multiplication using different data structures:
AoS implementation:
var vectors = [];
//
var vector;
for (var i = 0, li = vectors.length; i < li; ++i) {
  vector = vectors[i];
  vector.x = 2 * vector.x;
  vector.y = 2 * vector.y;
  vector.z = 2 * vector.z;
}
SoA implementation:
var x = new Float32Array(N);
var y = new Float32Array(N);
var z = new Float32Array(N);
for (var i = 0, li = x.length; i < li; ++i) {
  x[i] = 2 * x[i];
  y[i] = 2 * y[i];
  z[i] = 2 * z[i];
}
The AoS implementation is at least 5 times faster. This caught me by surprise. The AoS implementation uses one more index lookup per iteration than the SoA implementation, and the engine has to work without a guaranteed data type.
Why does this occur? Is this due to browser optimization? Cache misses?
On a side note, SoA is still slightly more efficient when performing addition over a list of vectors:
AoS:
var AoS1 = [];
var AoS2 = [];
var AoS3 = [];
// code for populating arrays
for (var i = 0; i < N; ++i) {
  AoS3[i].x = AoS1[i].x + AoS2[i].x;
}
SoA:
var x1 = new Float32Array(N);
var x2 = new Float32Array(N);
var x3 = new Float32Array(N);
for (var i = 0; i < N; ++i) {
  x3[i] = x1[i] + x2[i];
}
Is there a way I can tell when an operation is going to be more/less efficient for a given data structure?
EDIT: I failed to emphasize the SoA implementation made use of typed arrays, which is why this performance behavior struck me as odd. Despite having the guarantee of data type provided by typed arrays, the plain-old array of associative arrays is faster. I have yet to see a duplicate of this question.
EDIT2: I've discovered the behavior no longer occurs when the declaration of vector is moved to the preparation code. AoS is ostensibly faster only when vector is declared right next to the for loop. This makes little sense to me, particularly since the engine should just hoist it to the top of the scope anyway. I'm not going to question this further since I suspect an issue with the testing framework.
EDIT3: I got a response from the developers of the testing platform, and they've confirmed the performance difference is due to outer scope lookup. SoA is still the most efficient, as expected.
The structure of the tests used for benchmarking seems to have overlapped, causing either undefined or undesired behavior. A cleaner test (https://www.measurethat.net/Benchmarks/Show/474/0/soa-vs-aos) shows little difference between the two, and has SoA executing slightly (30%) faster.
However, none of this matters to the bottom line when it comes to performance. This is an effort in micro-optimization. What you are essentially comparing is O(n) to O(n) with nuance involved. The small percent difference will not have an effect overall as O(n) is considered to be an acceptable time complexity.
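For anyone who wants to re-run the comparison locally, here is a sketch of a same-scope version (my code, not the linked benchmark); absolute timings will of course vary by engine:

```javascript
// Both loops read N from the same scope, so neither pays an extra
// outer-scope lookup; only the data layout (AoS vs SoA) differs.
const N = 100000;
const aos = Array.from({ length: N }, () => ({ x: 1, y: 2, z: 3 }));
const x = new Float32Array(N).fill(1);
const y = new Float32Array(N).fill(2);
const z = new Float32Array(N).fill(3);

console.time("AoS");
for (let i = 0; i < N; i++) {
  const v = aos[i];
  v.x *= 2; v.y *= 2; v.z *= 2;
}
console.timeEnd("AoS");

console.time("SoA");
for (let i = 0; i < N; i++) {
  x[i] *= 2; y[i] *= 2; z[i] *= 2;
}
console.timeEnd("SoA");
```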
Is it a fair assumption that in v8 implementation retrieval / lookup is O(1)?
(I know that the standard doesn't guarantee that)
Is it a fair assumption that in v8 implementation retrieval / lookup is O(1)?
Yes. V8 uses a variant of hash tables that generally have O(1) complexity for these operations.
For details, you might want to have a look at https://codereview.chromium.org/220293002/ where OrderedHashTable is implemented based on https://wiki.mozilla.org/User:Jorend/Deterministic_hash_tables.
For people who don't want to dig too deep into the rabbit hole:
1: We can assume that good hash table implementations have practically O(1) time complexity.
2: Here is a blog post by the V8 team explaining how some memory optimization was done on its hash table implementation for Map, Set, WeakSet, and WeakMap: Optimizing hash tables: hiding the hash code
Based on 1 and 2: the time complexity of V8's Set and Map operations (get, set, add, has) is practically O(1).
The original question is already answered, but O(1) doesn't tell a lot about how efficient the implementation is.
First of all, we need to understand what variation of hash table is used for Maps. "Classical" hash tables won't do, as they do not provide any iteration order guarantees, while the ES6 spec requires iteration to happen in insertion order. So, Maps in V8 are built on top of so-called deterministic hash tables. The idea is similar to the classical algorithm, but there is another layer of indirection for buckets, and all entries are inserted and stored in a contiguous array of a fixed size. The deterministic hash tables algorithm indeed guarantees O(1) time complexity for basic operations, like set or get.
Next, we need to know what's the initial size of the hash table, the load factor, and how (and when) it grows/shrinks. The short answer is: initial size is 4, load factor is 2, the table (i.e. the Map) grows x2 as soon as its capacity is reached, and shrinks as soon as there is more than 1/2 of deleted entries.
Let's consider the worst case, when the table has N out of N entries (it's full), all entries belong to a single bucket, and the required entry is located at the tail. In such a scenario, a lookup requires N moves through the chain elements.
On the other hand, in the best possible scenario, where the table is full but each bucket has 2 entries, a lookup will require at most 2 moves.
It is a well-known fact that while individual operations in hash tables are “cheap”, rehashing is not. Rehashing has O(N) time complexity and requires allocation of the new hash table on the heap. Moreover, rehashing is performed as a part of insertion or deletion operations, when necessary. So, for instance, a map.set() call could be more expensive than you would expect. Luckily, rehashing is a relatively infrequent operation.
Apart from that, such details as memory layout or hash function also matter, but I'm not going to go into these details here. If you're curious how V8 Maps work under the hood, you may find more details here. I was interested in the topic some time ago and tried to share my findings in a readable manner.
let map = new Map();
let obj = {};

const benchMarkMapSet = size => {
  console.time("benchMarkMapSet");
  for (let i = 0; i < size; i++) {
    map.set(i, i);
  }
  console.timeEnd("benchMarkMapSet");
};

const benchMarkMapGet = size => {
  console.time("benchMarkMapGet");
  for (let i = 0; i < size; i++) {
    let x = map.get(i);
  }
  console.timeEnd("benchMarkMapGet");
};

const benchMarkObjSet = size => {
  console.time("benchMarkObjSet");
  for (let i = 0; i < size; i++) {
    obj[i] = i;
  }
  console.timeEnd("benchMarkObjSet");
};

const benchMarkObjGet = size => {
  console.time("benchMarkObjGet");
  for (let i = 0; i < size; i++) {
    let x = obj[i];
  }
  console.timeEnd("benchMarkObjGet");
};

let size = 2e6;
benchMarkMapSet(size);
benchMarkObjSet(size);
benchMarkMapGet(size);
benchMarkObjGet(size);
benchMarkMapSet: 382.935ms
benchMarkObjSet: 76.077ms
benchMarkMapGet: 125.478ms
benchMarkObjGet: 2.764ms
benchMarkMapSet: 373.172ms
benchMarkObjSet: 77.192ms
benchMarkMapGet: 123.035ms
benchMarkObjGet: 2.638ms
Why don't we just test?
var size_1 = 1000,
    size_2 = 1000000,
    map_sm = new Map(Array.from({length: size_1}, (_, i) => [++i, i])),
    map_lg = new Map(Array.from({length: size_2}, (_, i) => [++i, i])),
    i = size_1,
    j = size_2,
    s;

s = performance.now();
while (i) map_sm.get(i--);
console.log(`Size ${size_1} map returns an item in average ${(performance.now() - s) / size_1}ms`);

s = performance.now();
while (j) map_lg.get(j--);
console.log(`Size ${size_2} map returns an item in average ${(performance.now() - s) / size_2}ms`);
So to me it seems like it converges to O(1) as the size grows. Well, let's call it O(1) then.