According to this answer to 'Is object empty?':
// Speed up calls to hasOwnProperty
var hasOwnProperty = Object.prototype.hasOwnProperty;
I've seen several implementations of something similar in small JavaScript libraries, like:
var slice = Array.prototype.slice;
//or
function slice(collection) {
return Array.prototype.slice.call(collection);
}
I did a quick jsperf to test this sort of thing, and caching looked a bit quicker overall than not caching, but my test could be flawed.
(I am using the word 'cache' to mean storing the method inside a variable.)
The context of this question is when a developer needs to call the native method multiple times, and what the observable difference would be.
Does caching the native method prevent the engine from having to look the method up on the object every time it is called, thus making cached calls faster whenever the developer needs the same native method more than once?
When you're using Array.prototype.slice a lot in, say, a library, it makes sense to create a variable holding that function (var slice = Array.prototype.slice;) because the variable name can be shortened by a JavaScript minifier, whereas the full property path can't be.
Assigning the function to a variable also avoids having to traverse the object's prototype chain, which might result in a slightly better performance.
Note that this is micro-optimization, which you (generally speaking) shouldn't concern yourself too much with – leave that up to a modern JavaScript engine.
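To make that concrete, here is a minimal sketch of the caching pattern (toArray is a hypothetical helper name):
// Look up the method once; the local alias is also minifier-friendly.
var slice = Array.prototype.slice;

function toArray(collection) {
    // slice must be invoked with .call() so the collection becomes 'this'
    return slice.call(collection);
}

function demo() {
    var args = toArray(arguments); // 'arguments' is array-like, not a real Array
    console.log(args.join(', '));
}

demo(1, 2, 3); // logs "1, 2, 3"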
Saving the value in a variable presents some optimization opportunities, because if it's a local variable the interpreter could do analysis to realize that the variable never gets mutated. On the other hand, you always need to dereference globals like Array, since anyone could potentially change them at any time.
That said, I have no idea if this is going to matter for performance, especially once you consider the JIT optimizations.
Usually, the biggest reason people use var slice is to keep the source code short.
Related
What are the advantages/disadvantages in terms of memory management of the following?
Assigning to a variable then passing it to a function
const a = {foo: 'bar'}; // won't be reused anywhere else, for readability
myFunc(a);
Passing directly to a function
myFunc({foo: 'bar'});
The first and the second snippet pass the value in exactly the same way; there is absolutely no difference between them (unless you also need to use a later in your code).
There are only two cases in which the first might be preferred over the second:
You need to use the variable elsewhere.
The variable declaration is too long and you want to split it across two lines, or you are using a complex algorithm and want to give a name to each step for readability.
It depends on the implementation of the JavaScript engine. One engine might allocate memory for the variable in the first example and not in the directly-passed example, while another might be smart enough to compile the first example so that no memory is allocated for the variable, making it behave like the directly-passed example.
I don't know enough about the specific engines to tell you what each one does specifically. You'd have to take a look at each JS engine (or ask authors of each) to get a more conclusive answer.
It gets said a lot that local variables are faster than globals in JavaScript. E.g.:
function example()
{
    // Local variable
    var a = 9;
}
For instance, I've been thinking of aliasing the global Math object to a local Mathl variable, or alternatively aliasing specific (much-used) functions to a local variable/function, like Mathround() instead of Math.round().
Now, the things I'm thinking of doing this with (e.g. Math.round()) can be used plenty of times per animation frame (50ish), and there could be 60 frames per second, so quite a few lookups would be avoided if I do this - and that's just one example; there'd be lots of similar variables I could alias.
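For concreteness, the aliasing I have in mind would look something like this (a rough sketch; drawFrame is a made-up name):
// Rough sketch of the aliasing idea; drawFrame is made up.
function drawFrame() {
    // Alias the much-used global methods once, at the top of the hot function.
    var round = Math.round;
    var cos = Math.cos;
    for (var i = 0; i < 50; i++) {
        // Each call is now a local-variable lookup instead of a global
        // object lookup plus a property lookup.
        var x = round(cos(i) * 100);
        // ... draw something at x ...
    }
}
(Math.round and Math.cos don't rely on this, so detaching them is safe; methods that use this internally, like console.log, need bind.)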
So my question is - is this really worth it? Is the difference tangible when there's so many lookups being avoided?
If you don't know whether it's worth it, then it's probably not. In other words, you should address performance issues when they happen by identifying, measuring, and testing situation-specific alternatives.
It's hard enough to write clear, readable, maintainable code when you set out to write something humans can understand. It's a lot harder if you set out trying to write something that computers can execute faster (before you even know what faster means in terms of your application). Focus on clarity and address performance problems as they arise.
As for your specific example: if you're dying to know the answer, then test it.
Lookups will be faster if you alias parent-scope variables locally. How much faster? Is the difference noticeable? Only you can tell by measuring the performance of your code.
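A rough way to measure it (a sketch; micro-benchmarks like this are easily skewed by JIT warm-up and dead-code elimination):
var N = 1e7;
var sum1 = 0, sum2 = 0; // accumulate so the engine can't discard the work

console.time('global lookup');
for (var i = 0; i < N; i++) {
    sum1 += Math.round(i * 0.5); // global + property lookup each call
}
console.timeEnd('global lookup');

var round = Math.round; // local alias
console.time('local alias');
for (var j = 0; j < N; j++) {
    sum2 += round(j * 0.5); // local-variable lookup each call
}
console.timeEnd('local alias');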
It isn't that rare to see the document object being aliased to a local variable.
Have a look at this part of the ExtJS library for instance.
(function() {
    var DOC = document; // excerpt - the original declares more aliases here
    // ...
})();
However, like already stated, beware of blindly aliasing object member functions, because you might run into issues with the this value.
E.g.
var log = console.log;
log('test'); //TypeError: Illegal invocation
//you could use bind however
log = console.log.bind(console);
Say you have this code:
for(...) {
someFunc();
var y = Math.cos(x);
...
}
To execute Math.cos(x), the JS VM must a) compute the location of the global Math object and b) get its cos property. It must do this every time, just in case something inside someFunc() does crazy things, such as:
Math = {};
In general, in JS, access to a local variable is similar (if not identical) to accessing an array element by a known index. Access to global objects is pretty much always a look-up by key in a map.
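In other words, if you hoist the lookup into a local variable yourself, you hand that guarantee to the engine (a sketch; n, x and someFunc are placeholders as in the loop above):
var cos = Math.cos; // resolve the global and its property once
for (var i = 0; i < n; i++) {
    someFunc();
    var y = cos(x); // now a cheap local-variable access
    // ...
}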
We have been debating how best to handle objects in our JS app, studying Stoyan Stefanov's book, reading endless SO posts on 'new', 'this', 'prototype', closures etc. (The fact that there are so many, and they have so many competing theories, suggests there is no completely obvious answer).
So let's assume that we don't care about private data. We are content to trust users and developers not to mess around in objects outside the ways we define.
Given this, what (other than it seeming to defy decades of OO style and history) would be wrong with this technique?
// namespace to isolate all PERSON's logic
var PERSON = {};
// return an object which should only ever contain data.
// The Catch: it's 100% public
PERSON.constructor = function (name) {
return {
name: name
}
}
// methods that operate on a Person
// the thing we're operating on gets passed in
PERSON.sayHello = function (person) {
alert (person.name);
}
var p = PERSON.constructor ("Fred");
var q = PERSON.constructor ("Me");
// normally this would be coded like 'p.sayHello()'
PERSON.sayHello(p);
PERSON.sayHello(q);
Obviously:
There would be nothing to stop someone from mutating 'p' in unholy ways, or the logic of PERSON simply ending up spread all over the place. (That is true of the canonical 'new' technique as well.)
It would be a minor hassle to pass 'p' in to every function that wanted to use it.
This is a weird approach.
But are those good enough reasons to dismiss it? On the positive side:
It is efficient, as (arguably) opposed to closures with repetitive function declarations.
It seems very simple and understandable, as opposed to fiddling with 'this' everywhere.
The key point is the forgoing of privacy. I know I will get slammed for this, but I'm looking for any feedback. Cheers.
There's nothing inherently wrong with it. But it does forgo many advantages inherent in using Javascript's prototype system.
Your object does not know anything about itself other than that it is an object literal. So instanceof will not help you to identify its origin. You'll be stuck using only duck typing.
Your methods are essentially namespaced static functions, where you have to repeat yourself by passing in the object as the first argument. By having a prototyped object, you can take advantage of dynamic dispatch, so that p.sayHello() can do different things for PERSON or ANIMAL depending on the type Javascript knows about. This is a form of polymorphism. Your approach requires you to name (and possibly make a mistake about) the type each time you call a method.
You don't actually need a constructor function, since functions are already objects. Your PERSON variable may as well be the constructor function.
What you've done here is create a module pattern (like a namespace).
Here is another pattern that keeps what you have but supplies the above advantages:
function Person(name)
{
var p = Object.create(Person.prototype);
p.name = name; // or other means of initialization, use of overloaded arguments, etc.
return p;
}
Person.prototype.sayHello = function () { alert (this.name); }
var p = Person("Fred"); // you can omit "new"
var q = Person("Me");
p.sayHello();
q.sayHello();
console.log(p instanceof Person); // true
var people = ["Bob", "Will", "Mary", "Alandra"].map(Person);
// people contains array of Person objects
Yeah, I'm not really understanding why you're trying to dodge the constructor approach, or why people even felt a need to layer syntactic sugar over function constructors (Object.create and, soon, classes) when constructors by themselves are an elegant, flexible, and perfectly reasonable approach to OOP, no matter how many lame reasons are given by people like Crockford for not liking them (because people forget to use the new keyword - seriously?). JS is heavily function-driven and its OOP mechanics are no different. It's better to embrace this than hide from it, IMO.
First of all, your points listed under "Obviously":
Hardly even worth mentioning in JavaScript. A high degree of mutability is by design. We're not afraid of ourselves or other developers in JavaScript. The private vs. public paradigm isn't useful because it protects us from stupidity, but rather because it makes it easier to understand the intention behind the other dev's code.
The effort in invoking isn't the problem. The hassle comes later when it's unclear why you've done what you've done there. I don't really see what you're trying to achieve that the core language approaches don't do better for you.
This is JavaScript. It's been weird to all but JS devs for years now. Don't sweat that if you find a better way to do something that works better at solving a problem in a given domain than a more typical solution might. Just make sure you understand the point of the more typical approach before trying to replace it as so many have when coming to JS from other language paradigms. It's easy to do trivial stuff with JS but once you're at the point where you want to get more OOP-driven learn everything you can about how the core language stuff works so you can apply a bit more skepticism to popular opinions out there spread by people who make a side-living making JavaScript out to be scarier and more riddled with deadly booby traps than it really is.
Now, your points under "positive side":
First of all, repetitive function definition was really only something to worry about in heavy looping scenarios. If you were regularly producing objects in large enough quantity, fast enough for the non-prototyped public method definitions to be a perf problem, you'd probably be running into memory-usage issues with non-trivial objects in short order regardless. I speak in the past tense, however, because it's no longer really a relevant issue either way. In modern browsers, functions defined inside other functions are actually typically performance-enhancing due to the way modern JIT compilers work. Regardless of what browsers you support, a few funcs defined per object is a non-issue unless you're expecting tens of thousands of objects.
On the question of simple and understandable: it's not, to me, because I don't see what win you've garnered here. Now, instead of having one object to use, I have to use both the object and its pseudo-constructor together, which, if I weren't looking at the definition, would imply to me a function that you use with the 'new' keyword to build objects. If I were new to your codebase I'd be wasting a lot of time trying to figure out why you did it this way, to avoid breaking some other concern I didn't understand.
My questions would be:
Why not just add all the methods to the object literal in the constructor in the first place? There's no performance issue there, and there never really has been, so the only other possible win is that you want to be able to add new methods to PERSON after you've created objects with it - but that's what we use prototype for on proper constructors. (Prototype methods, by the way, are great for memory in older browsers because they are only defined once.)
And if you have to keep passing the object in for the methods to know what the properties are, why do you even want objects? Why not just functions that expect simple data structure-type objects with certain properties? It's not really OOP anymore.
But my main point of criticism
You're missing the main point of OOP which is something JavaScript does a better job of not hiding from people than most languages. Consider the following:
function Person(name){
//var name = name; //<--this might be more clear but it would be redundant
this.identifySelf = function(){ alert(name); }
}
var bob = new Person("Bob");
bob.identifySelf();
Now, change the name bob identifies with, without overwriting the object or the method - both things you'd only do if it were clear you didn't want to work with the object as originally designed and constructed. You of course can't. That makes it crystal clear to anybody who sees this definition that the name is effectively a constant in this case. In a more complex constructor, it would establish that the only thing allowed to alter or modify name is the instance itself - unless the user added a non-validating setter method, which would be silly because that would basically (looking at you, Java Enterprise Beans) MURDER THE CENTRAL PURPOSE OF OOP.
Clear Division of Responsibility is the Key
Forget the key words they put in every book for a second and think about what the whole point is. Before OOP, everything was just a pile of functions and data structures all those functions acted on. With OOP you mostly have a set of methods bundled with a set of data that only the object itself actually ever changes.
So let's say something's gone wrong with output:
In our strictly procedural pile of functions there's no real limit to the number of hands that could have messed up that data. We might have good error-handling but one function could branch in such a way that the original culprit is hard to track down.
In a proper OOP design, where data typically sits behind an object gatekeeper, I know that only one object could actually have made the offending change.
Objects exposing all of their data most of the time is really only marginally better than the old procedural approach. All that really does is give you a name to categorize loosely related methods with.
Much Ado About 'this'
I've never understood the undue attention given to the 'this' keyword being messy and confusing. It's really not that big of a deal. 'this' identifies the instance you're working with. That's it. If the method isn't called as a property, it's not going to know what instance to look for, so it defaults to the global object. That was dumb (undefined would have been better), but its not working properly in that scenario should be expected in a language where functions are portable like data and can be attached to other objects very easily. Use 'this' in a function when:
It's defined and called as a property of an instance.
It's passed as an event handler (which will call it as a member of the thing being listened to).
You're using call or apply methods to call it as a property of some other object temporarily without assigning it as such.
But remember, it's the calling that really matters. Assigning a public method to some var and calling from that var will do the global thing or throw an error in strict mode. Without being referenced as object properties, functions only really care about the scope they were defined in (their closures) and what args you pass them.
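A short sketch of those rules (counter is a made-up example):
'use strict';

var counter = {
    count: 0,
    increment: function () {
        this.count += 1; // 'this' is whatever the function was called on
    }
};

counter.increment();         // called as a property: 'this' is counter
console.log(counter.count);  // 1

var inc = counter.increment; // detached reference: no receiver attached
// inc();                    // TypeError in strict mode ('this' is undefined)

inc.call(counter);           // call() supplies the receiver explicitly
console.log(counter.count);  // 2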
I'm making a pretty simple game just for fun/practice, but I still want to code it well, regardless of how simple it is now, in case I want to come back to it - and just to learn.
So, in that context, my question is:
How much overhead is involved in object allocation? And how well does the interpreter already optimize this? I'm going to be repeatedly checking object grid positions, and if an object is still in the same grid square, the grid array won't be updated:
if (obj.gridPos==obj.getCurrentGridPos()){
//etc
}
But, should I keep an outer "work" point object that getCurrentGridPos() changes each time, or should it return a new point object each time?
Basically, even if the overhead of creating a point object isn't enough to matter in this scenario, which is faster?
EDIT:
This, which will get called for every object each frame:
function getGridPos(x,y){
return new Point(Math.ceil(x/25),Math.ceil(y/25));
}
or
//outside the frame by frame update function looping through every object each frame
var tempPoint = new Point();
//and each object each frame calls this, passing in tempPoint and checking that value
function makeGridPos(pt,x,y){
pt.x = Math.ceil(x/25);
pt.y = Math.ceil(y/25);
}
Between your two code examples that you have now added, I know of no case where the first would be more efficient than the second. So, if you're trying to optimize for performance or memory use, then re-using an existing object will likely be more efficient than creating a new object each time you call the function.
Note: since JS refers to objects by reference, you will have to make sure that your code is not elsewhere hanging on to that object and expecting it to keep its value.
Prior answer:
In all programming (regardless of how good the optimizer is), you are better off caching a result that was calculated from several member variables and that you use over and over again in the same function, rather than re-calling the function that calculates it each time.
So, if you are calling obj.getCurrentGridPos() more than once, and conditions have not changed such that it might return a different result, then you should cache its value locally (in any language). This is just good programming.
var currentPos = obj.getCurrentGridPos();
And, then use that locally cached value:
if (obj.gridPos == currentPos) {
The interpreter may not be able to do this type of optimization for you because it may not be able to tell whether other operations might cause obj.getCurrentGridPos() to return something different from one call to another, but you the programmer can know that.
One other thing. If obj.getCurrentGridPos() returns an actual object, then you probably don't want to be using == or === to compare objects. That compares ONLY to see if they are literally the same object - it does not compare to see if the two objects have the same properties.
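For example (two hypothetical grid positions):
var a = { x: 1, y: 2 };
var b = { x: 1, y: 2 };

console.log(a === b); // false - different objects, despite identical properties
console.log(a.x === b.x && a.y === b.y); // true - compare the properties instead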
This question is VERY difficult to answer because of all the different JavaScript engines out there. The "big 4" browsers all have their own JavaScript engine/interpreter, and each one is going to do its allocation, caching, GCing, etc. differently.
The Chrome (and Safari) dev tools have a profiling tab where you can profile memory allocation, timings, etc. of your code. This is a place to start (at least for Chrome and Safari).
I'm not certain if IE or Firefox offer such tools, but I wouldn't be surprised if some third party tools exist for these browsers for testing such things...
Also, for reference -
Chrome uses the V8 JavaScript engine
IE uses the Chakra JavaScript engine (Trident is its layout engine)
Firefox uses SpiderMonkey
Safari uses JavaScriptCore, which is part of WebKit
It's my understanding that garbage collection stops execution on most JS engines. If you're going to be making many objects per iteration through your game loop and letting them go out of scope that will cause slowdown when the garbage collector takes over.
For this kind of situation you might consider making a singleton to pool your objects with a method to recycle them for reuse by deleting all of their properties, resetting their __proto__ to Object.prototype, and storing them in an array. You can then request recycled objects from the pool as needed, only increasing the pool size when it runs dry.
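A minimal sketch of such a pool (the names are hypothetical, and this version just clears own properties rather than touching __proto__):
var pool = {
    free: [],
    acquire: function () {
        // Reuse a recycled object if one is available; otherwise grow the pool.
        return this.free.length ? this.free.pop() : {};
    },
    release: function (obj) {
        // Strip the object's own properties so it can be reused cleanly.
        for (var key in obj) {
            if (Object.prototype.hasOwnProperty.call(obj, key)) {
                delete obj[key];
            }
        }
        this.free.push(obj);
    }
};

var p = pool.acquire();
p.x = 1;
p.y = 2;
pool.release(p); // back to the pool instead of waiting for the GC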
The short answer is to keep one current-position "work" object and check against it, as you will literally use more memory if you create a new object every time you call getCurrentGridPos().
There may be a better place for the "is this a new position?" check, since you should only do that check once per iteration.
It seems optimal to set the currentGridPos inside a requestAnimationFrame callback and check its current x/y/z positions before updating it, so you can then trigger a changedPosition-type event.
var currentPos = {
x:0,
y:0,
z:0
}
window.requestAnimationFrame(function(){
  var newPos = getCurrentPos(); // however the new position is obtained
  if (currentPos.x != newPos.x || currentPos.y != newPos.y || currentPos.z != newPos.z) {
    $.publish("positionChanged"); // tinypubsub https://gist.github.com/661855
  }
})
So, just to be clear, yes I think you should keep an outer "work" point object that updates every iteration... and while it's updating you could check to see if its position has changed - this would be a more intentful way to organize the logic and ensure you don't call getCurrentPos more than once per iteration.
I want to extend Object.prototype to, basically, support notifications on JSON data and HTML elements through a UI framework.
Object.prototype.setValue = function(key,value){
// this simply sets value as this[key] = value
// and raises an event
Binder.setValue(this,key,value);
};
Object.prototype.getValue = function(key){
return Binder.getValue(this,key);
};
However, based on this question, Extending Object.prototype JavaScript, and a few others, people say that we should avoid extending Object.prototype; extending any other type is fine.
If I do not do this, then my code becomes more verbose; for example,
window.myModel.setValue("currentStatus","empty");
will have to be written like,
Binder.setValue(window.myModel,"currentStatus","empty");
I want to know: what will go wrong if I use these methods? Will it cause jQuery to behave unexpectedly? I have seen that jQuery's ajax request can invoke prototype methods as well (since they are references to functions, which it uses for event handling).
What are the other side effects of this? I know it fails in for(var x in obj), but mostly we can use obj.hasOwnProperty, so that should help, right?
I know it fails in for(var x in obj), but mostly we can use obj.hasOwnProperty, so that should help, right?
That is the only major pitfall, and .hasOwnProperty() helps, but only in your own code. In jQuery, for example, they've taken the deliberate decision not to call .hasOwnProperty in the $.each() method.
I have used Object.defineProperty(...) in my own code to get around the enumerable-property problem, with no ill effects (albeit on Array.prototype).
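For example, defining the method as non-enumerable keeps it out of for...in loops entirely; a sketch (with a stand-in body in place of the question's Binder call):
Object.defineProperty(Object.prototype, 'setValue', {
    value: function (key, value) {
        this[key] = value; // stand-in for Binder.setValue(this, key, value)
    },
    enumerable: false,  // hidden from for...in enumeration
    writable: true,
    configurable: true
});

var obj = { a: 1 };
for (var k in obj) {
    console.log(k); // logs only "a" - setValue never shows up
}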
You just don't want to mess with the prototypes of host or native objects.
You cannot know what side effects it will have on third-party scripts
You may confuse third-party code
You don't know whether some day those methods will be implemented natively
Overall, extending Object.prototype affects every other object on the entire site. Again, you just don't want to do it unless you are in a sandboxed environment where every single piece of ECMAScript is written by you and you are 100% sure no third-party script will ever be loaded.