The Ember.js documentation at http://emberjs.com/documentation/#toc_computed-properties-getters says that there are two ways to define computed properties. The first is through prototype extensions and the second is by wrapping the function in a call to Ember.computed.
Can anyone tell me what the difference between them is, and whether one way is better than the other? In the example code there is no obvious difference (or am I missing something?)
There is no difference between the two variants, except that the Ember.computed approach is more verbose. In fact, the property variant internally invokes Ember.computed; see the definition in function.js.
packages/ember-runtime/lib/ext/function.js:
Function.prototype.property = function() {
  var ret = Ember.computed(this);
  return ret.property.apply(ret, arguments);
};
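For illustration, the two styles from the documentation look roughly like this (classic Ember syntax; App.Person and the property names are placeholders, not from the original question):

App.Person = Ember.Object.extend({
  // Prototype-extension style (relies on Function.prototype.property):
  fullName: function () {
    return this.get('firstName') + ' ' + this.get('lastName');
  }.property('firstName', 'lastName'),

  // Equivalent Ember.computed style (no prototype extension required):
  fullNameComputed: Ember.computed('firstName', 'lastName', function () {
    return this.get('firstName') + ' ' + this.get('lastName');
  })
});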
I suppose there is no difference in the end. As stated, you can use Ember.computed if you don't like having your Function prototype extended. It might be useful for metaprogramming as well.
I was working on an AJAX-enabled asp.net application.
I've just added some methods to Array.prototype like
Array.prototype.doSomething = function(){
...
}
This solution worked for me, making it possible to reuse code in a 'pretty' way.
But when I tested it with the entire page, I had problems.
We had some custom AJAX extenders, and they started to behave unexpectedly: some controls displayed 'undefined' around their content or value.
What could be the cause of that? Am I missing something about modifying the prototype of standard objects?
Note: I'm pretty sure that the error begins when I modify the prototype of Array. It only needs to be compatible with IE.
While the potential for clashing with other bits of code that override a function on a prototype is still a risk, if you want to do this with modern versions of JavaScript, you can use the Object.defineProperty method, e.g.
// functional sort
Object.defineProperty(Array.prototype, 'sortf', {
  value: function(compare) { return [].concat(this).sort(compare); }
});
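For example, assuming the sortf definition above: the original array is left untouched, and because Object.defineProperty defaults to enumerable: false, the method won't show up in for..in loops:

var nums = [3, 1, 2];
var sorted = nums.sortf(function (a, b) { return a - b; }); // [1, 2, 3]
console.log(nums); // still [3, 1, 2]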
Modifying the built-in object prototypes is a bad idea in general, because it always has the potential to clash with code from other vendors or libraries that loads on the same page.
In the case of the Array object prototype, it is an especially bad idea, because it has the potential to interfere with any piece of code that iterates over the members of any array, for instance with for .. in.
To illustrate using an example (borrowed from here):
Array.prototype.foo = 1;
// somewhere deep in other javascript code...
var a = [1,2,3,4,5];
for (x in a){
  // Now foo is a part of EVERY array and
  // will show up here as a value of 'x'
}
Unfortunately, the existence of questionable code that does this has made it necessary to also avoid plain for..in for array iteration, at least if you want maximum portability, just to guard against cases where some other nuisance code has modified the Array prototype. So you really need to do both: avoid plain for..in in case some n00b has modified the Array prototype, and avoid modifying the Array prototype so you don't break any code that uses plain for..in to iterate over arrays.
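For array iteration specifically, a plain index-based loop sidesteps the issue, because it never sees properties added to the prototype:

var a = [1, 2, 3, 4, 5];
for (var i = 0; i < a.length; i++) {
  console.log(a[i]); // only the real elements; an added 'foo' never shows up
}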
It would be better for you to create your own type of object constructor complete with doSomething function, rather than extending the built-in Array.
What about Object.defineProperty?
There now exists Object.defineProperty as a general way of extending object prototypes without the new properties being enumerable, though this still doesn't justify extending the built-in types, because even besides for..in there is still the potential for conflicts with other scripts. Consider someone using two Javascript frameworks that both try to extend the Array in a similar way and pick the same method name. Or, consider someone forking your code and then putting both the original and forked versions on the same page. Will the custom enhancements to the Array object still work?
This is the reality with Javascript, and why you should avoid modifying the prototypes of built-in types, even with Object.defineProperty. Define your own types with your own constructors.
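A minimal sketch of that approach (the names here are made up for illustration): keep the array inside your own object and put doSomething on your own prototype, leaving Array.prototype alone:

function MyCollection(items) {
  this.items = items || [];
}

MyCollection.prototype.doSomething = function () {
  // custom behaviour lives on your own prototype, not on Array's
  return this.items.length;
};

var c = new MyCollection([1, 2, 3]);
c.doSomething(); // 3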
There is one caution! Maybe you did this: fiddle demo
Let us say we have an array and a method foo which returns the first element:
var myArray = ["apple","ball","cat"];
foo(myArray) // <- 'apple'
function foo(array){
  return array[0]
}
The above is okay because function declarations are hoisted to the top of their scope during interpretation.
But this DOES NOT work (because the prototype method has not been defined yet):
myArray.foo() // <- 'undefined function foo'
Array.prototype.foo = function(){
  return this[0]
}
For this to work, simply define prototypes at the top:
Array.prototype.foo = function(){
  return this[0]
}
myArray.foo() // <- 'apple'
And YES! You can override prototypes!!! It is ALLOWED. You can even define your own add method for Arrays.
You augmented generic types, so to speak. You've probably overwritten some other lib's functionality, and that's why it stopped working.
Suppose that some lib you're using extends Array with a function Array.remove(). After the lib has loaded, you also add remove() to Array's prototype, but with your own functionality. When the lib calls your function, it will probably behave differently than expected and break its execution... That's what's happening here.
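A hypothetical illustration of that kind of clash (neither remove() implementation below comes from a real library):

// -- some library's version: removes an item and returns the array for chaining --
Array.prototype.remove = function (item) {
  var i = this.indexOf(item);
  if (i !== -1) { this.splice(i, 1); }
  return this;
};

// -- your version, loaded later: returns the removed item instead --
Array.prototype.remove = function (item) {
  var i = this.indexOf(item);
  return i !== -1 ? this.splice(i, 1)[0] : undefined;
};

// The library expects remove() to hand back the array, but now it gets a
// single element (or undefined), so its chained calls blow up:
[1, 2, 3].remove(2).remove(3); // TypeError: .remove is not a function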
Using Recursion
function forEachWithBreak(someArray, fn){
  let breakFlag = false
  function breakFn(){
    breakFlag = true
  }
  function loop(indexIntoSomeArray){
    if(!breakFlag && indexIntoSomeArray < someArray.length){
      fn(someArray[indexIntoSomeArray], breakFn)
      loop(indexIntoSomeArray + 1)
    }
  }
  loop(0)
}
Test 1 ... break is not called
forEachWithBreak(["a","b","c","d","e","f","g"], function(element, breakFn){
  console.log(element)
})
Produces
a
b
c
d
e
f
g
Test 2 ... break is called after element c
forEachWithBreak(["a","b","c","d","e","f","g"], function(element, breakFn){
  console.log(element)
  if(element == "c"){breakFn()}
})
Produces
a
b
c
There are 2 problems (as mentioned above)
It's enumerable (i.e. will be seen in for .. in)
Potential clashes (js, yourself, third party, etc.)
To solve these 2 problems we will:
Use Object.defineProperty
Give our methods a unique id
const arrayMethods = {
  doSomething: "uuid() - a real function"
}
Object.defineProperty(Array.prototype, arrayMethods.doSomething, {
  value() {
    // Your code, log as an example
    this.forEach(v => console.log(v))
  }
})
const arr = [1, 2, 3]
arr[arrayMethods.doSomething]() // 1, 2, 3
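The chaining example below also uses an arrayMethods.log method that the snippet never defines; one possible definition (an assumption, following the same pattern) would be:

arrayMethods.log = "uuid() - another unique key"

Object.defineProperty(Array.prototype, arrayMethods.log, {
  value() {
    this.forEach(v => console.log(v))
    return this // returning this is what makes the chaining work
  }
})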
The syntax is a bit weird but it's nice if you want to chain methods (just don't forget to return this):
arr
  .map(x => x + 1)
  [arrayMethods.log]()
  .map(x => x + 1)
  [arrayMethods.log]()
In general, messing with the core JavaScript objects is a bad idea. You never know what any third party libraries might be expecting, and changing the core objects in JavaScript changes them for everything.
If you use Prototype it's especially bad, because Prototype messes with the global scope as well, and it's hard to tell whether you are going to collide or not. Actually, modifying core parts of any language is usually a bad idea, even in JavaScript.
(lisp might be the small exception there)
Hello JavaScript ninjas! I have a pretty tough issue to solve and did not find any satisfying solution.
For a very specific JavaScript framework I am developing, I need to be able to set the __proto__ property of a dynamically created function. I have some kind of generic function factory and need to have common definitions for the created functions.
I'd like not to argue whether or not this is a good practice, as I really need to achieve this for perfectly valid reasons.
Here is a small QUnit sample that runs perfectly on the latest version of Chrome and shows what I need:
var oCommonFunctionProto = {};
var fnCreateFunction = function () {
  var fnResult = function () {};
  fnResult.__proto__ = oCommonFunctionProto; // DOES NOT WORK WITH IE9 OR IE10
  return fnResult;
};
var fn1 = fnCreateFunction();
oCommonFunctionProto.randomMethod = function() { return 10; };
equal(fn1.randomMethod(), 10, "__proto__ has been set properly");
var oInstance = new fn1(); // fn1 is instantiable
As you can see in this code, anything added to oCommonFunctionProto will be available directly on any function returned by the fnCreateFunction method. This allows building a prototype chain on Function objects (much like prototype chains are commonly built for plain objects).
Here is the problem: the __proto__ property is immutable in IE9 and IE10, and sadly, I really need to be compatible with those browsers.
Moreover:
I cannot use any third party library. I need fully functional code that does not depend on anything else.
As you can see, the randomMethod was added after the creation of the function. I really need prototype chaining because in my scenarios these objects will be modified after the functions are created. Simply duplicating the oCommonFunctionProto properties onto each function's prototype will not work.
I'm perfectly okay with suboptimal code as long as it does the job. This will be a compatibility hack just for IE9/IE10. As long as it does the job, I'll be happy.
It would be okay to set __proto__ at function creation. It's better if I can do it afterwards, but if I have no choice, this can be acceptable.
I tried every hack I could but did not find any way to bypass this limitation in IE9/IE10.
TL;DR: I have to be able to set __proto__ on a JavaScript function, without the help of any third party, in IE9 and IE10.
Based on other answers and discussions, it appears this is just not possible in IE < 11.
I finally dropped prototype chains, be it for Objects or Functions, in favor of a flattened prototype plus a notification mechanism: when a logical "parent" prototype changes, the "child" prototypes are updated accordingly.
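For what it's worth, a rough sketch of that idea (the helper below is hypothetical, not the framework's actual code): parent members are copied onto each registered child, and the parent updates its children whenever a member changes:

function createFlatProto() {
  var members = {};
  var children = [];

  function copyInto(target) {
    for (var key in members) {
      if (members.hasOwnProperty(key)) {
        target[key] = members[key];
      }
    }
  }

  return {
    register: function (child) {
      children.push(child);
      copyInto(child); // flatten existing members onto the new child
    },
    set: function (name, value) {
      members[name] = value;
      for (var i = 0; i < children.length; i++) {
        children[i][name] = value; // notify/update already-created children
      }
    }
  };
}

var common = createFlatProto();
var fn1 = function () {};
common.register(fn1);
common.set('randomMethod', function () { return 10; });
fn1.randomMethod(); // 10, even though the member was added after creation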
I'm looking at Addy Osmani's gist for a publication/subscription pattern here:
https://github.com/addyosmani/pubsubz/blob/master/pubsubz.js
He surfaces his object as a global like this:
;(function ( window, doc, undef ) {
  var topics = {},
      subUid = -1,
      pubsubz = {};

  ....

  getPubSubz = function(){
    return pubsubz;
  };

  window.pubsubz = getPubSubz();
What is the value of creating that getPubSubz function? Wouldn't it be more straightforward to simply write:
window.pubsubz = pubsubz;
Yes, in this case, because getPubSubz is only called in one place, immediately after declaring it, it could safely be inlined.
It's hard to say exactly what the author had in mind, but in a growing code base there may be some value to having a "getter" function which could be modified if the act of getting the pubsubz object required more advanced logic.
It absolutely would be.
There are only two potential reasons why a getter would be used in this case:
There was previously some additional code inside the getter (logging, perhaps)
Addy Osmani is just following good practice*, and including a getter gives him the opportunity to add additional code in the future.
Through the power of GitHub, we can actually eliminate option one, as the getter was added in its current state—so I think we can conclusively say that it's just a matter of good practice here.
*As jantimon alludes to in the comments below, this isn't particularly advantageous in most cases (including this one), and this code does not necessarily need to be followed as an example.
Excuse me first, because I don't know whether this question is valid or not. If anyone can clear up my doubt, I'll be happy.
Basically: what is the difference between calling a method like:
object.methodname();
$('#element').methodname();
Calling it both ways works, but what is the difference between them, and what criteria decide between the first and the second type of method? Is this available in core JavaScript as well?
If I have a function, is it possible to make both types of method call available?
Can anyone give some good references to understand this correctly?
Thanks in advance.
The first syntax:
object.methodName();
Says to call a function, methodName(), that is defined as a property of object.
The second syntax:
$('#element').methodname();
Says to call a function called $() which (in order for this to work) must return an object and then call methodname() on that returned object.
You said that "calling both way is working," - so presumably you've got some code something like this:
var myObject = $('#element');
myObject.methodname();
This concept of storing the result of the $() function in a variable is commonly called "caching" the jQuery object, and is more efficient if you plan to call a lot of methods on that object because every time you call the jQuery $() function it creates another jQuery object.
"Is it available in the core javascript as well?" Yes, if you implement functions that return objects. That is, JS supports this (it would have to, since jQuery is just a JS library) but it doesn't happen automatically, you have to write appropriate function code. For example:
function getObject() {
  return {
    myMethod1 : function() { alert("myMethod1"); return this; },
    myMethod2 : function() { alert("myMethod2"); return this; }
  };
}
getObject().myMethod1().myMethod2();
In my opinion explaining this concept in more depth is beyond the scope of a Stack Overflow answer - you need to read some JavaScript tutorials. MDN's Working With Objects article is a good place to start once you have learned the JS fundamentals (it could be argued that working with objects is a JS fundamental, but obviously I mean even more fundamental stuff than that).
The difference is very subtle.
object.methodname();
This is when JavaScript has the object at hand.
$('#element').methodname();
If you are using jQuery, you are asking jQuery to select the element whose id is element. After that you invoke the method on the jQuery object that wraps it.
I've always passed arguments to a function like so:
setValue('foo','#bar')
function setValue(val,ele){
  $(ele).val(val);
};
Forgive the silly example. But recently I have been working on a project that has some functions that take a lot of arguments. So I started passing the arguments through as an object (not sure if that's the correct way to put that), like so:
setValue({
  val: 'foo',
  ele: '#bar'
});
And then in the function:
function setValue(options){
  var value = options.val;
  var element = options.ele;
  $(element).val(value);
};
My question is, is there a better way to do that? Is it common practice (or okay) to call these 'options'? And do you typically need to 'unpack' (for lack of a better term) the options and set local vars inside the function? I have been doing it this way in case one of them was not defined.
I'm really trying not to create bad habits and write a bunch of ugly code. Any help is appreciated and + by me. Thanks.
I do the exact same thing, except I don't declare a new variable for each option inside the function.
I think options is a good name for it although I shorten it to opts.
I always have a "defaults" object within the function that specifies default values for each available option, even if it's simply null. I use jQuery, so I can just use $.extend to merge the defaults and the user-specified options like this: var opts = $.extend({}, defaults, opts);
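Applied to the question's setValue, that pattern looks roughly like this (the default values are made up for illustration):

function setValue(options) {
  var defaults = {
    val: '',      // assumed default value
    ele: 'body'   // assumed default target selector
  };
  var opts = $.extend({}, defaults, options);

  $(opts.ele).val(opts.val);
}

setValue({ val: 'foo', ele: '#bar' }); // omitted options fall back to the defaults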
I believe this is a great pattern. I've heard an options object like this referred to as a "builder object" in other languages (at least in the context of object creation). Here are some of the advantages:
Users of your function don't have to worry about what order the parameters are in. This is especially helpful in cases like yours where the method takes a lot of arguments. It's easy to get those mixed up, and JavaScript will not complain!
It's easy to make certain parameters optional (this comes in handy when writing a plugin or utility).
There are some pitfalls though. Specifically, the user of your function could leave out some of the options and your code would choke (note that this could also happen with a normal JS function: the user still doesn't have to supply the correct arguments). A good way of handling this is to provide default values for parameters that are not required:
var value = options.val || 0;
var element = options.ele || {};
$(element).val(value);
You could also return from the function immediately or throw an exception if the correct arguments aren't supplied.
A good resource for learning how to handle builder objects is to check out the source of things like jQueryUI.
I realize this question is a year old, but I think the cleanest way to pass an arbitrary number of arguments to a JavaScript function is using an array and the built-in apply method:
fun.apply(object, [argsArray])
Where fun is the function, object is the scope/context in which you want the function to be executed, and argsArray is an array of the arguments (which can hold any number of arguments to be passed).
The current pitfall is that the arguments must be an array (literal or object) and not an array-like object such as {'arg' : 6, 'arg2' : "stuff"}. ECMAScript 5 will let you pass array-like objects, but at the moment it only seems to work in Firefox and not in IE9 or Chrome.
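In terms of the question's positional setValue, the call would look something like this:

function setValue(val, ele) {
  $(ele).val(val);
}

var args = ['foo', '#bar'];

// equivalent to setValue('foo', '#bar'); the first argument to apply sets 'this'
setValue.apply(null, args);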
If you look at the jQuery implementation, it uses an options class to handle most of the arbitrary-number-of-parameters functions, so I think you are in good company.
The other way is to test for arguments.length, but that only works if your arguments are always in the same order of optionality.
It's worth remembering that all functions have a bonus parameter called arguments, an object very much like a JS array (it has length but none of the array functions) that contains all the parameters passed in.
It's useful if you want to pass in a variable number of parameters, e.g.:
function Sum() {
  var i, sum = 0;
  for (i = 0; i < arguments.length; i++){
    sum += arguments[i];
  }
  return sum;
};
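For example:

Sum(1, 2, 3, 4); // returns 10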
If this isn't the case and you just have a lot of parameters, use the params object as you've described.
Nothing wrong with that practice.
"Options" seems like as good a name as any.
You don't need to "unpack" them, but if you'll be accessing the same item several times, it will be a little more efficient to reference them in local variables because local variable access is generally quicker than property lookups.