How efficient is the "with" statement?

It is hard to Google for keywords like "with", so I'm asking here.
Is the with statement in JavaScript inefficient?
For instance, say I have:
with (obj3) {
  with (obj2) {
    with (obj1) {
      with (obj0) {
        eval("(function() { console.log(aproperty) })();");
      }
    }
  }
}
Would the above be more or less efficient if, for instance, I walked over obj0, obj1, obj2, and obj3, merged them together, and then used either:
1. One with statement alone, or
2. A parameters string built from the keys of obj0, obj1, obj2, and obj3, plus an args array for the values, used like this:
eval("function fn(aproperty, bproperty) { console.log(aproperty); }")
fn.apply(undefined, args);
Which of these three approaches would be quicker? My guess is the with statements, but that many nested with's makes me think there is room to optimize further.
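For concreteness, here is a minimal sketch of those two merged-object alternatives (the merge helper and its ordering are my assumptions for illustration; later objects win conflicts, mirroring the innermost with):
// Sketch of the two alternatives above. Assumes obj0..obj3 exist
// and that one of them defines aproperty.
var merged = {};
[obj3, obj2, obj1, obj0].forEach(function (o) {
  for (var k in o) merged[k] = o[k];
});

// Alternative 1: a single with statement over the merged object
with (merged) {
  console.log(aproperty);
}

// Alternative 2: keys become parameters, values become arguments
var keys = Object.keys(merged);
var values = keys.map(function (k) { return merged[k]; });
eval("function fn(" + keys.join(",") + ") { console.log(aproperty); }");
fn.apply(undefined, values);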

If you're looking for options, then you may want to consider a third approach, which would be to create (on the fly if needed) a prototype chain of objects.
EDIT: My solution was broken. It requires the non-standard __proto__ property. I'm updating to fix it, but be aware that this isn't supported in all environments.
var objs = [null, obj3, obj2, obj1, obj0];
for (var i = 1; i < objs.length; i++) {
  objs[i].__proto__ = Object.create(objs[i - 1]);
}
var result = objs.pop();
This avoids with and should be quicker than merging, though only testing will tell.
And then if all you needed was a product of certain properties, this will be very quick.
var props = ["x2", "b1", "a3"];
var product = result.y3;
for (var i = 0; i < props.length; i++)
  product *= result[props[i]];
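For what it's worth, a variant that avoids __proto__ entirely is to build the chain with Object.create from the start. This is only a sketch, with an obvious trade-off: it copies own properties, so later changes to the original objects won't be reflected.
// Sketch: build a lookup chain without mutating __proto__.
// Each link is a fresh object whose prototype is the previous link.
function buildChain(objects) {
  var proto = null;
  for (var i = 0; i < objects.length; i++) {
    var link = Object.create(proto);
    for (var k in objects[i]) {
      if (objects[i].hasOwnProperty(k)) link[k] = objects[i][k];
    }
    proto = link;
  }
  return proto; // the last link; lookups walk back toward the first
}
var result = buildChain([obj3, obj2, obj1, obj0]);
// result.aproperty now finds the nearest definition, obj0 first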

Newer browsers have internal tokenizing and JIT machinery that makes JavaScript interpretation cheaper, much like the JIT in newer JVMs. I don't think your deeply nested with-s are much of a problem; in practice the lookup will behave something like:
__get_aproperty() {
  if (obj0.has("aproperty")) return obj0.aproperty;
  if (obj1.has("aproperty")) return obj1.aproperty;
  if (obj2.has("aproperty")) return obj2.aproperty;
  if (obj3.has("aproperty")) return obj3.aproperty;
}
So while the structure of your JS is deeply nested, the structure of the real execution in the browser's JS engine will be simple and linear.
The tokenization of the JS, however, is costly, and whenever the engine encounters an eval it has to tokenize again.
I voted for the first version.

The with statement will make your code run like it's 1980: literally every optimization implemented in a JIT has to be disabled while it's in effect.
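To see why, consider that inside a with block the engine cannot statically tell which binding a name refers to. A minimal illustration:
function f(obj, x) {
  with (obj) {
    // Does x mean the parameter or obj.x? It depends on obj's
    // runtime shape, so the JIT cannot resolve the name statically.
    return x;
  }
}
console.log(f({}, 1));       // 1 (the parameter)
console.log(f({ x: 2 }, 1)); // 2 (the property shadows the parameter)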

Related

TypeScript transpile - for loop vs Array slice

In ES6 we can use a rest parameter, effectively creating an Array of arguments. TypeScript transpiles this to ES5 using a for loop. I was wondering: are there any scenarios where the for loop approach is a better option than using Array.prototype.slice? Maybe there are edge cases that the slice option does not cover?
// Written in TypeScript
/*
const namesJoinTS = function (firstName, ...args) {
  return [firstName, ...args].join(' ');
}
const res = namesJoinTS('Dave', 'B', 'Smith');
console.log(res)
*/
// TypeScript above transpiles to this:
var namesJoinTS = function (firstName) {
  var args = [];
  for (var _i = 1; _i < arguments.length; _i++) {
    args[_i - 1] = arguments[_i];
  }
  return [firstName].concat(args).join(' ');
};
var res = namesJoinTS('Dave', 'B', 'Smith');
console.log(res); // Dave B Smith
// Vanilla JS
var namesJoinJS = function (firstName) {
  var args = [].slice.call(arguments, 1);
  return [firstName].concat(args).join(' ');
};
var res = namesJoinJS('Dave', 'B', 'Smith');
console.log(res); // Dave B Smith
This weird transpilation is a side effect of the biased optimization older versions of V8 had (and might still have). They optimize(d) certain patterns greatly but did not care about overall performance, so some strange patterns (like a for loop to copy arguments into an array*) ran much faster. The maintainers of libraries and transpilers therefore started optimizing their code according to that, as their code runs on millions of devices and every millisecond counts. Now that the optimizations in V8 have matured and focus on average performance, most of these tricks don't work anymore; it is a matter of time until they get refactored out of the codebase.
Additionally, JavaScript is moving towards a language that can be optimized more easily; older features like arguments are being replaced with newer ones (rest parameters) that are stricter and therefore more performant. Use them to achieve good performance with good-looking code. arguments is a mistake of the past.
I was wondering: are there any scenarios where the for loop approach is a better option than using Array.prototype.slice?
Well, it was faster on older V8 versions; whether that is still the case would have to be tested. If you're writing code for your own project, I would always choose the more elegant solution: the millisecond you might theoretically lose doesn't matter in 99% of cases.
Maybe there are edge cases that the slice option does not cover?
No (AFAIK).
* You might ask "why is it faster, though?" Well, that's because arguments itself is hard to optimize:
1) it can be reassigned (arguments = 3)
2) it has to be "live": changes to a named parameter are reflected in arguments, and vice versa
Therefore it can only be optimized if you access it directly, as the compiler can then replace the array-like accessor with a variable reference:
function slow(a) {
  console.log(arguments[0]);
}
// can be turned into this by the engine:
function fast(a) {
  console.log(a);
}
This also works for loops if you inline them and fall back to another (maybe slower) version if the number of arguments changes:
function slow() {
  for (let i = 0; i < arguments.length; i++) {
    console.log(arguments[i]);
  }
}
slow(1, 2, 3);
slow(4, 5, 6);
slow("what?");
// can be optimized to:
function fast(a, b, c) {
  console.log(a);
  console.log(b);
  console.log(c);
}
function fast2(a) {
  console.log(a);
}
fast(1, 2, 3);
fast(4, 5, 6);
fast2("what?");
Now if you however call another function and pass in arguments things get really complicated:
var leaking;
function cantBeOptimized(a) {
  leak(arguments); // uurgh
  a = 1; // this has to be reflected to "leaking" ....
}
function leak(stuff) { leaking = stuff; }
cantBeOptimized(0);
console.log(leaking[0]); // has to be 1
This can't really be optimized; it is a performance nightmare.
Therefore, calling a function and passing it arguments is a bad idea performance-wise.
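If you do need to forward a variable number of arguments, a rest parameter sidesteps the leak, since it gives you a plain array with no live binding. A small sketch:
// Forwarding via a rest parameter: args is an ordinary array,
// so passing it around does not deoptimize the caller.
function receiver(stuff) {
  console.log(stuff);
}
function forward(...args) {
  receiver(args);
}
forward(1, 2, 3); // [1, 2, 3]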

Memory handling vs. performance

I'm building a WebGL game and I've come far enough that I've started to investigate performance bottlenecks. I can see a lot of small dips in FPS when GC is going on, so I created a small memory pool handler. I still see a lot of GC after I've started to use it, and I suspect that I've got something wrong.
My memory pool code looks like this:
function Memory(Class) {
  this.Class = Class;
  this.pool = [];
  Memory.prototype.size = function() {
    return this.pool.length;
  };
  Memory.prototype.allocate = function() {
    if (this.pool.length === 0) {
      var x = new this.Class();
      if (typeof(x) == "object") {
        x.size = 0;
        x.push = function(v) { this[this.size++] = v; };
        x.pop = function() { return this[--this.size]; };
      }
      return x;
    } else {
      return this.pool.pop();
    }
  };
  Memory.prototype.free = function(object) {
    if (typeof(object) == "object") {
      object.size = 0;
    }
    this.pool.push(object);
  };
  Memory.prototype.gc = function() {
    this.pool = [];
  };
}
I then use this class like this:
game.mInt = new Memory(Number);
game.mArray = new Memory(Array); // this will have a new push() and size property.

// Allocate a number
var x = game.mInt.allocate();
// <do something with it, for loop etc>

// Free the variable and push it into the mInt pool to be reused.
game.mInt.free(x);
My memory handling for an array is based on using myArray.size instead of length, which keeps track of the actual current array size in an overdimensioned array (that has been reused).
So to my actual question:
Using this approach to avoid GC and keep memory alive during play-time: will variables I declare with "var" inside functions still be garbage collected, even though they are returned as new Class() from my Memory function?
Example:
var x = game.mInt.allocate();
for (x = 0; x < 100; x++) {
  // ...
}
x = game.mInt.free(x);
Will this still cause garbage collection of the "var" due to some memcopy behind the scenes (which would make my memory handler useless)?
Is my approach good/meaningful for a game in which I'm trying to get high FPS?
So you let JS instantiate a new Object
var x = new this.Class();
then add anonymous methods to this object, making it one of a kind:
x.push = function...
x.pop = function...
so that now every place where you use this object is harder for the JS engine to optimize, because the objects have distinct interfaces/hidden classes (equal ain't identical).
Additionally, every place where you use these objects has to perform extra typecasts to convert the Number object back into a primitive, and typecasts aren't free either. In every iteration of a loop? Maybe even multiple times?
And all this overhead just to store a 64-bit float?
game.mInt = new Memory(Number);
And since you cannot change the internal state, and therefore the value, of a Number object, these values are basically static, like their primitive counterparts.
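To make the boxing/typecast point concrete, a minimal illustration:
var boxed = new Number(42); // an object wrapper around a primitive
var prim = 42;

console.log(typeof boxed);           // "object"
console.log(typeof prim);            // "number"
console.log(boxed === 42);           // false: compared by identity
console.log(boxed + 1);              // 43, but only via an implicit valueOf()
console.log(boxed.valueOf() === 42); // true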
TL;DR:
Don't pool native types, especially not primitives. These days, JS is pretty good at optimizing code as long as it doesn't have to deal with surprises. Surprises like distinct objects with distinct interfaces that first have to be cast to a primitive value before they can be used.
Array resizing isn't free either. Although JS optimizes this and usually pre-allocates more memory than the Array may need, you may still hit that limit and thereby force the engine to allocate new memory, move all the values there, and free the old block.
I usually use linked lists for pools (see the sketch below).
Don't try to pool everything. Think about which objects can really be reused, and which you are bending to fit into this narrative of "reusability".
I'd say: if you have to do as little as add a single new property to an object (after it has been constructed), so that you'd need to delete this property for cleanup, that object should not be pooled.
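Here is the linked-list pool sketch mentioned above; the names and the reset callback are illustrative, not taken from any particular library:
// Sketch: a free-list pool. Pooled objects carry a `next` link,
// so acquire/release are O(1) and no backing array ever resizes.
function Pool(create, reset) {
  this.create = create; // factory for brand-new objects
  this.reset = reset;   // clears an object before reuse
  this.head = null;     // top of the free list
}
Pool.prototype.acquire = function () {
  if (this.head === null) return this.create();
  var obj = this.head;
  this.head = obj.next;
  obj.next = null;
  return obj;
};
Pool.prototype.release = function (obj) {
  this.reset(obj);
  obj.next = this.head;
  this.head = obj;
};

// Usage (illustrative): note `next` exists from construction,
// so no property is ever added after the fact.
var vecPool = new Pool(
  function () { return { x: 0, y: 0, next: null }; },
  function (v) { v.x = 0; v.y = 0; }
);
var v = vecPool.acquire();
v.x = 10; v.y = 20;
vecPool.release(v);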
Hidden classes: when talking about optimizations in JS, you should know this topic at least at a very basic level.
Summary:
- don't add new properties after an object has been constructed
- and, to extend the first point: no deletes!
- the order in which you add properties matters
- changing the value of a property (even its type) doesn't matter, except for properties that contain functions (aka methods); the optimizer can be picky about functions attached to objects, so avoid reassigning those
And last but not least: distinguish between optimized and "dictionary" objects, first in your concepts, then in your code.
There's no benefit in trying to fit everything into a pattern of static interfaces (this is JS, not Java), but static types make life easier for the optimizer, so combine the two.
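A tiny illustration of those rules (the behavior described is the usual V8-style heuristic, not a spec guarantee):
// Same constructor, same property order: a and b share a hidden class.
function Point(x, y) {
  this.x = x;
  this.y = y;
}
var a = new Point(1, 2);
var b = new Point(3, 4);

b.z = 5;    // adding a property later forks b off to a new hidden class
delete a.x; // worse: delete can demote a to slow "dictionary" mode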

Javascript for loop syntax

As JavaScript developers we all have to write a lot of for loops. A couple of months ago I saw an alternative syntax which I really liked. However, I'm now curious whether there are other nice approaches.
Let's say that I have an array of data representing users in a system. What I did before is:
var users = [
  { name: "A" },
  { name: "B" },
  { name: "C" },
  { name: "D" },
  { name: "E" }
];
var numOfUsers = users.length;
for (var i = 0; i < numOfUsers; i++) {
  var user = users[i];
  // ...
}
There is one additional line, var user = users[i];. Normally I feel more comfortable having user instead of users[i]. So, the new way:
for (var i = 0; user = users[i]; i++) {
  // ...
}
I'm also wondering if the second approach produces problems in some of the browsers. One of my colleagues reported that this syntax is a little bit buggy under IE.
Edit:
Thankfully, the answers below pointed me in the right direction. If any element of the array is falsy, the loop will stop. There is a workaround:
for (var i = 0; typeof (user = users[i]) !== "undefined"; i++) {
  // ...
}
But that's too much for me. So I guess I'll use this syntax only when I'm 100% sure that all the elements are truthy (which means never :)).
In your “new” approach, you don’t need numOfUsers any more.
As for the potential problems: this approach relies on every users[i] evaluating to true for the loop to continue (user becomes undefined after the last user is processed, which is falsy and ends the loop). But sometimes not every record in your data evaluates to true; "false-y" values might legitimately occur, and in that case this approach fails.
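A quick demonstration of that pitfall, with illustrative data:
var nums = [1, 0, 2]; // 0 is falsy but a perfectly valid element
var n;
for (var i = 0; n = nums[i]; i++) {
  console.log(n);
}
// logs only 1: the loop stops at 0, long before the end of the array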
The problem with this approach:
for (var i = 0; user = users[i]; i++) {
  // ...
}
...is that it assumes user won't be "falsey" (0, "", null, undefined, NaN, or of course false) until you've gone past the end of the array. So it'll work well with an array of non-null object references, but if you then get in the habit of using it, it will bite you when you have an array of numbers, or strings, or such.
The other reason not to declare variables within the for construct is that it's misleading: Those variables are not scoped to the for loop, they're function-wide. (JavaScript's var doesn't have block scope, only function or global scope; ES6 will get let which will have block scope.)
On modern JavaScript engines (or with an "ES5 shim"), you can of course do this:
users.forEach(function(user) {
  // ...
});
...which has the advantage of brevity and not having to declare i or numUsers or even user (since it's an argument to the iteration callback, and nicely scoped to that). If you're worried about the runtime cost of doing a function call for each entry, don't be. It'll be washed out by whatever actual work you're doing in the function.
I'm amazed the second syntax works at all; the middle expression should evaluate to true for each iteration you want to run and false as soon as you want to stop looping. As for any issues with your first for loop: JavaScript is function-scoped, so that inner var statement will still leak into the containing function (as will that i). This is different from most other languages, which have block scoping. It's not much of a problem, but something to keep in mind when debugging.
If you are already using jQuery, you can use the jQuery.each function to loop over your arrays.
In any case you can look at the source code of that function and copy the relevant parts for your own foreach function: http://james.padolsey.com/jquery/#v=1.10.2&fn=jQuery.each

javascript functions and arguments object, is there a cost involved

It is commonplace to see code like this around the web and in frameworks:
var args = Array.prototype.slice.call(arguments);
In doing so, you convert the arguments Object into a real Array (as much as JS has real arrays anyway) and it allows for whatever array methods you have in your Array prototypes to be applied to it, etc etc.
I remember reading somewhere that accessing the arguments Object directly can be significantly slower than an Array clone or than the obvious choice of named arguments. Is there any truth to that and under what circumstances / browsers does it incur a performance penalty to do so? Any articles on the subject you know of?
Update: an interesting find from http://bonsaiden.github.com/JavaScript-Garden/#function.arguments invalidates what I read previously. Hoping the question gets some more answers from the likes of Ivo Wetzel, who wrote it.
At the bottom of that section it says:
Performance myths and truths
The arguments object is always created, with the only two exceptions being the cases where it is declared as a name inside of a function or as one of its formal parameters. It does not matter whether it is used or not.
This conflicts with http://www.jspatterns.com/arguments-considered-harmful/, which states:
However, it's not a good idea to use arguments, for reasons of:
- performance
- security
The arguments object is not automatically created every time the function is called, the JavaScript engine will only create it on-demand, if it's used. And that creation is not free in terms of performance. The difference between using arguments vs. not using it could be anywhere between 1.5 times to 4 times slower, depending on the browser
Clearly both can't be correct, so which one is it?
ECMA die-hard Dmitry Soshnikov said:
Which exactly "JavaScript engine" is meant? Where did you get this exact info? It can be true in some implementations (yep, it's a good optimization, as all the needed info about the context is available while parsing the code, so there's no need to create the arguments object if it was not found during parsing). But as you know, ECMA-262-3 states that the arguments object is created each time on entering the execution context.
Here's some quick-and-dirty testing. Using predefined arguments seems to be the fastest, but it's not always feasible. If the arity of the function is unknown beforehand (that is, if a function can or must receive a variable number of arguments), I think calling Array.prototype.slice once is the most efficient way, because in that case the performance loss of using the arguments object is minimal.
The arguments object has two problems: one is that it's not a real array, and the other is that it always includes all of the arguments, including the ones that were explicitly declared. So, for example:
function f(x, y) {
  // arguments also include x and y
}
This is probably the most common problem: you want the rest of the arguments, without the ones you already have in x and y, so you would like something like this:
var rest = arguments.slice(2);
but you can't, because arguments doesn't have the slice method, so you have to apply Array.prototype.slice manually.
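That is, something like this:
function f(x, y) {
  // copy everything after the two named parameters into a real array
  var rest = Array.prototype.slice.call(arguments, 2);
  console.log(rest);
}
f(1, 2, 3, 4); // [3, 4]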
I must say that I haven't seen all of the arguments converted to a real array just for the sake of performance, only as a convenience for calling Array methods. You'd have to do some profiling to know what is actually faster (and it may depend on faster for what), but my guess is that there's not much of a difference unless you want to call the Array methods, in which case you have no choice but to convert it to a real array or apply the methods manually using call or apply.
The good news is that in new versions of ECMAScript (Harmony?) we'll be able to write just this:
function f(x, y, ...rest) {
  // ...
}
and we'll be able to forget all of those ugly workarounds.
No one's done testing on this in a while, and all the links are dead. Here's some fresh results:
function loop() {
  var res = []
  for (var i = 0, l = arguments.length; i < l; i++) {
    res.push(arguments[i])
  }
  return res
}
function loop_variable() {
  var res = []
  var args = arguments
  for (var i = 0, l = args.length; i < l; i++) {
    res.push(args[i])
  }
  return res
}
function slice() {
  return Array.prototype.slice.call(arguments);
}
function spread() {
  return [...arguments];
}
function do_return() {
  return arguments;
}
function literal_spread() {
  return [arguments[0], arguments[1], arguments[2], arguments[3], arguments[4], arguments[5], arguments[6], arguments[7], arguments[8], arguments[9]];
}
function spread_args(...args) {
  return args;
}
I tested these here: https://jsben.ch/bB11y, as do_return(0,1,2,3,4,5,6,7,8,9) and so on. Here are my results on my Ryzen 2700X, on Linux 5.13:
                Firefox 90.0    Chromium 92.0
do_return       89%             100%
loop_variable   74%             77%
spread          63%             29%
loop            73%             94%
literal_spread  86%             100%
slice           68%             81%
spread_args     100%            98%
I would argue against the accepted answer.
I edited the tests, see here: http://jsperf.com/arguments-performance/6
I added a test for the slice method and a test for a memory copy into a preallocated array. The latter is several times more efficient on my computer.
As you can see, the first two memory-copy methods on that performance test page are slow not because of the loops, but because of the push call.
In conclusion, slice seems to be almost the worst method for working with arguments (not counting the push methods, since they aren't even much shorter in code than the much more efficient preallocation method).
It may also be of interest that apply by itself behaves quite well and does not carry much of a performance hit.
First existing test:
// (res is a variable defined in the test page's setup code)
function f1() {
  for (var i = 0, l = arguments.length; i < l; i++) {
    res.push(arguments[i])
  }
}
Added tests:
function f3() {
  var len = arguments.length;
  res = new Array(len);
  for (var i = 0; i < len; i++)
    res[i] = arguments[i];
}
function f4() {
  res = Array.prototype.slice.call(arguments);
}
function f5_helper() {
  res = arguments;
}
function f5() {
  f5_helper.apply(null, arguments);
}
function f6_helper(a, b, c, d) {
  res = [a, b, c, d];
}
function f6() {
  f6_helper.apply(null, arguments);
}

Number of elements in a javascript object

Is there a way to get (from somewhere) the number of elements in a JavaScript object? (i.e. in constant-time complexity)
I can't find a property or method that retrieves that information. So far I can only think of iterating through the whole collection, but that's linear time.
It's strange that there is no direct access to the size of the object, don't you think?
EDIT:
I'm talking about the Object object (not objects in general):
var obj = new Object();
Although JS implementations might keep track of such a value internally, there's no standard way to get it.
In the past, Mozilla's Javascript variant exposed the non-standard __count__, but it has been removed with version 1.8.5.
For cross-browser scripting you're stuck with explicitly iterating over the properties and checking hasOwnProperty():
function countProperties(obj) {
  var count = 0;
  for (var prop in obj) {
    if (obj.hasOwnProperty(prop))
      ++count;
  }
  return count;
}
In ECMAScript 5 capable implementations, this can also be written as (kudos to Avi Flax):
function countProperties(obj) {
  return Object.keys(obj).length;
}
Keep in mind that you'll also miss properties which aren't enumerable (e.g. an array's length).
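If you also need non-enumerable own properties, ES5's Object.getOwnPropertyNames picks those up too:
var arr = [1, 2, 3];
console.log(Object.keys(arr).length);                // 3: "0", "1", "2"
console.log(Object.getOwnPropertyNames(arr).length); // 4: includes "length"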
If you're using a framework like jQuery, Prototype, Mootools, $whatever-the-newest-hype, check if they come with their own collections API, which might be a better solution to your problem than using native JS objects.
To do this in any ES5-compatible environment
Object.keys(obj).length
(Browser support from here)
(Doc on Object.keys here, includes method you can add to non-ECMA5 browsers)
If you are already using jQuery in your build, just do this:
$(yourObject).length
It works nicely for me on objects, and I already had jQuery as a dependency.
function count() {
  var c = 0;
  for (var p in this) if (this.hasOwnProperty(p)) ++c;
  return c;
}
var O = { a: 1, b: 2, c: 3 };
count.call(O);
AFAIK, there is no way to do this reliably, unless you switch to an array. Which, honestly, doesn't seem strange; it seems pretty straightforward to me that arrays are countable and objects aren't.
Probably the closest you'll get is something like this
// Monkey patching on purpose to make a point
Object.prototype.length = function() {
  var i = 0;
  for (var p in this) i++;
  return i;
};
alert( {foo: "bar", bar: "baz"}.length() ); // alerts 3
But this creates problems, or at least questions. All user-created properties are counted, including the length function itself! And while in this simple example you could avoid that by just using a normal function, that doesn't mean you can stop other scripts from doing it. So what do you do? Ignore function properties?
Object.prototype.length = function() {
  var i = 0;
  for (var p in this) {
    if ('function' == typeof this[p]) continue;
    i++;
  }
  return i;
};
alert( {foo: "bar", bar: "baz"}.length() ); // alerts 2
In the end, I think you should probably ditch the idea of making your objects countable and figure out another way to do whatever it is you're doing.
The concept of number/length/dimensionality doesn't really make sense for an Object, and needing it suggests to me that you really want an Array.
Edit: it was pointed out to me that you want O(1) for this. To the best of my knowledge, no such way exists, I'm afraid.
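(For completeness: if switching data structures is an option, a modern Map does track its size in constant time.)
var m = new Map([["a", 1], ["b", 2], ["c", 3]]);
console.log(m.size); // 3, available in O(1)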
With jQuery:
$(parent)[0].childElementCount
