In Backbone 1.1.2, at line 279
// Return a copy of the model's `attributes` object.
toJSON: function(options) {
    return _.clone(this.attributes);
},
options is clearly not used, so why have it there at all? It is just wasted memory.
What am I missing here?
Per a comment, here is one way of calling this code - so why pass options when it will not be used?
toJSON: function(options) {
    return this.map(function(model){ return model.toJSON(options); });
},
It doesn't waste memory, since the argument would have to be available as arguments[0] anyway (either the options value comes from a function call that the VM has to evaluate for its side effects anyway, or it's just an object, in which case only a reference is passed).
It also serves as documentation of what subclasses can implement.
Since JS uses prototypes for its object orientation, if you define a toJSON function in one of your subclasses, it'll be used instead.
There's no functional reason for including that parameter in the Model.toJSON signature; it is strictly for developers' benefit.
From the link that @t.niese uncovered:
Morning @aoboturov! Thanks for pointing this out. It's actually intentional and is meant to remind you that collection.toJSON(options) passes along the options argument to each of its models by default. See #1222 and #1098 for details.
(Note that the referenced #1222 is basically just a dupe, and #1098 is where they added the feature in the first place.)
In other words, the parameter is put there for the sake of clarity for developers who may want to override the Model.toJSON implementation. The collection passes along the original Backbone.sync call's options object, since some implementations of Model.toJSON might want to use it.
Including that options parameter in the signature doesn't affect memory usage at all (and even if it did, the effect would be minuscule), since the Collection.toJSON implementation passes that options object as an argument either way.
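To make that concrete, here is a minimal sketch (Note and the includeId option are made-up names, and Backbone plus Underscore are assumed to be loaded) of a model subclass whose toJSON override actually uses the options that Collection.toJSON forwards:

var Note = Backbone.Model.extend({
    toJSON: function(options) {
        var json = _.clone(this.attributes);
        // "includeId" is purely illustrative, not a Backbone option
        if (options && options.includeId) json.cid = this.cid;
        return json;
    }
});

var notes = new Backbone.Collection([{text: 'hi'}], {model: Note});
notes.toJSON({includeId: true}); // each model's toJSON receives the same options object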
Below is a code snippet I found online on a blog which contains a simple example of using the stream Transform class to alter data streams and output the altered result. There are some things about it that I don't really understand.
var stream = require('stream');
var util = require('util');
// node v0.10+ use native Transform, else polyfill
var Transform = stream.Transform ||
require('readable-stream').Transform;
Why does the program need to check if the this var points to an instance of the Upper constructor? The Upper constructor is being used to construct the upper object below, so what is the reason to check for this? Also, I tried logging options, but it returns null/undefined, so what's the point of that parameter?
function Upper(options) {
    // allow use without new
    if (!(this instanceof Upper)) {
        return new Upper(options);
    }
I assume that this Transform.call is being made to explicitly set the this variable? But why does the program do that, seeing as Transform is never called anywhere else?
    // init Transform
    Transform.call(this, options);
}
After googling the util package, I know that it is being used here to allow Upper to inherit Transform's prototypal methods. Is that right?
util.inherits(Upper, Transform);
The function below is what really confuses me. I understand that the program is setting a method on Upper's prototype which is used to transform data being input into it. But, I don't see where this function is being called at all!
Upper.prototype._transform = function (chunk, enc, cb) {
    var upperChunk = chunk.toString().toUpperCase();
    this.push(upperChunk);
    cb();
};
// try it out - from the original code
var upper = new Upper();
upper.pipe(process.stdout); // output to stdout
After running the code through a debugger, I can see that upper.write calls the aforementioned Upper.prototype._transform method, but why does this happen? upper is an instance of the Upper constructor, and write is a method that doesn't seem to have any relation to the _transform method being applied to the prototype of Upper.
upper.write('hello world\n'); // input line 1
upper.write('another line'); // input line 2
upper.end(); // finish
First, if you haven't already, take a look at the Transform stream implementer's documentation here.
Q: Why does the program need to check if the this var points to an instance of the Upper constructor? The Upper constructor is being used to construct the upper object below, so what is the reason to check for this?
A: It needs to check because anyone can call Upper() without new. So if it's detected that a user called the constructor without new, out of convenience (and to make things work correctly), new is implicitly called on the user's behalf.
Q: Also, I tried logging options, but it returns null/undefined, so what's the point of that parameter?
A: options is just a constructor/function parameter. If you don't pass anything to the constructor, then obviously it will be undefined (or whatever value you pass to it). You can have as many parameters as you want/need, just like any ordinary function. In the case of Upper() however, configuration isn't really needed due to the simplicity of the transform (just converting all input to uppercase).
Q: I assume that this Transform.call method is being made to explicitly set the this variable? But why does the program do that, seeing as how Transform is never being called anyway.
A: No, the Transform.call() allows the base "class" (Transform) to perform its own initialization on this instance, such as setting up internal state variables. You can think of it as calling super() in ES6 classes.
Q: After googling the util package, I know that it is being used here to allow Upper to inherit Transform's prototypal methods. Is that right?
A: Yes, that is correct. However, these days you can also use ES6 classes to do real inheritance. The node.js stream implementers documentation shows examples of both inheritance methods.
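For reference, here is a minimal sketch of the same Upper transform written with the ES6 class syntax mentioned above (assuming a current Node.js version):

const { Transform } = require('stream');

class Upper extends Transform {
    _transform(chunk, enc, cb) {
        this.push(chunk.toString().toUpperCase());
        cb();
    }
}

const upper = new Upper();
upper.pipe(process.stdout);
upper.write('hello world\n');
upper.end();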
Q: The function below is what really confuses me. I understand that the program is setting a method on Upper's prototype which is used to transform data being input into it. But, I don't see where this function is being called at all!
A: This function is called internally by node when it has data for you to process. Think of the method as being part of an interface (or a "pure virtual function" if you are familiar with C++) that you are required to implement in your custom Transform.
Q: After running the code through a debugger, I can see that upper.write calls the aforementioned Upper.prototype._transform method, but why does this happen? upper is an instance of the Upper constructor, and write is a method that doesn't seem to have any relation to the _transform method being applied to the prototype of Upper.
A: As noted in the Transform documentation, Transform streams are merely simplified Duplex streams (meaning they accept input and produce output). When you call .write() you are writing to the Writable (input) side of the Transform stream. This is what triggers the call to ._transform() with the data you just passed to .write(). When you call .push() you are writing to the Readable (output) side of the Transform stream. That data is what you see when you either call .read() on the Transform stream or attach a 'data' event handler.
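A small sketch of that flow, reusing the Upper stream from the question (the log message is purely illustrative):

var upper = new Upper();

// 'data' events come from the Readable side, i.e. whatever _transform() pushes
upper.on('data', function (chunk) {
    console.log('readable side produced: ' + chunk.toString());
});

upper.write('abc'); // Writable side; node then calls upper._transform() with this data
upper.end();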
Generally, JavaScript allows you to override (extend with new behavior) any function, except on objects that are frozen or sealed. In JavaScript, Math is a built-in object. So why does JavaScript give access to override the existing properties of a built-in object?
Initially I found that the min function is available on the Math object. I then updated the "min" property with my own function, and this replaced the existing implementation.
For more clarity, I then deleted the "min" property. Deletion should remove the extended behavior, not the core one, but it removes the core property. Why?
Extending or modifying native code is called monkey-patching, and it's a design feature rather than a design flaw. Virtually everything is mutable and extensible in JavaScript, and therefore you have the power to change fundamentals to suit your own needs (e.g. you could overload the min method so that it works with different variable types than just integers and floats), but with that power comes responsibility, so it's generally not advised to change these standard functions unless you know what you're doing. Likewise, you have to be aware that if your JS file will be running in someone else's environment, you may not be able to rely on everything you think you can (however, you should generally be able to expect the usual global methods and properties, which is why you may call the likes of Object.keys or Array.prototype.slice directly rather than expect a method to be on the prototype of any one particular object).
In short, when you delete a function that you've modified, you will be deleting it entirely, not reverting it back to some sort of original state. You basically overwrote the original, and so there's no way of getting it back (except by deleting the code that overwrites it!).
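For example (a quick sketch), the only way to get the original back later is to have saved a reference to it before overwriting:

var originalMin = Math.min;      // keep a reference to the native function

Math.min = function () { return 'patched'; };
delete Math.min;                 // removes the property entirely; there is nothing to revert to

Math.min = originalMin;          // restoring is only possible because we saved the reference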
Thanks to everyone for responding to my question; I got valuable information from all of you. I did some analysis on this myself and went through the ECMA-262 specification, where I found the attribute configuration for properties like 'E' on Math.
According to specification document http://www.ecma-international.org/ecma-262/5.1/#sec-15.8.1
15.8.1.1 E
The Number value for e, the base of the natural logarithms, which is approximately 2.7182818284590452354.
This property has the attributes { [[Writable]]: false, [[Enumerable]]: false, [[Configurable]]: false }.
From this I learned that some properties of Math can't be deleted because their [[Configurable]] attribute is false.
When I executed the following code, the returned descriptor shows that the 'min' property has configurable: true.
Object.getOwnPropertyDescriptor(Math, "min");
Object {value: function, writable: true, enumerable: false, configurable: true}
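By contrast, a non-configurable property such as Math.E cannot be deleted (a quick sketch):

Object.getOwnPropertyDescriptor(Math, "E");
// Object {value: 2.718281828459045, writable: false, enumerable: false, configurable: false}
delete Math.E;   // returns false (and throws a TypeError in strict mode)
Math.E;          // still 2.718281828459045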
I agree with user162097: as he said, 'it's a design feature rather than a design flaw.'
Thanks
No, it is working correctly. The delete operator removes the property from your object; it doesn't know whether the value was the original one or an override, which is why overriding the behavior of built-in functions is not best practice.
Instead of changing the behavior of native functions, you can add your own method to the object's prototype. For instance, let's create a remove function for the Array object:
Array.prototype.remove = function(member) {
    var index = this.indexOf(member);
    if (index > -1) {
        this.splice(index, 1);
    }
    return this;
}
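For example, a quick usage sketch of the helper above:

var list = ['a', 'b', 'c'];
list.remove('b');   // ['a', 'c']; the helper splices in place and returns the array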
Nevertheless, you can inherit from native objects; you can read more about this in this article: inheriting from native objects.
One nature of scripting languages is fast prototyping. You could attribute this ability to the non-strict, declaration-free side of JavaScript.
At the moment I'm writing a small app and have come to a point where I thought it would be clever to clone an object instead of using a reference.
The reason I'm doing this is that I'm collecting objects in a list. Later I will only work with this list, because it's part of a model. I don't need the reference, and I want to avoid having references to outside objects in the list, because I don't want someone to build a construct where the model can be changed from an inconsiderate place in their code. (The integrity of the information in the model is very important.)
Additionally, I thought I would get better performance out of it when I don't use references.
So my overall question still is: when should I prefer a clone over a reference in JavaScript?
Thanks!
If stability is important, then clone it. If testing shows that this is a bottleneck, consider changing it to a reference. I'd be very surprised if it is a bottleneck though, unless you have a very complicated object which is passed back and forth very frequently (and if you're doing that it's probably an indication of a bad design).
Also remember that you can only do so much to save other developers from their own stupidity. If they really want to break your API, they could just replace your functions with their own by copying the source or modifying it at runtime. If you document that the object must not be changed, a good developer (yes, there are some) will follow that rule.
For what it's worth, I've used both approaches in my own projects. For small structs which don't get passed around much, I've made copies for stability, and for larger data (e.g. 3D vertex data which may be passed around every frame), I don't copy.
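As an illustration, here is a minimal sketch of that defensive-copy approach (shallowCopy and addToModel are made-up names; deeply nested data would need a deeper clone):

function shallowCopy(obj) {
    var copy = {};
    for (var key in obj) {
        if (obj.hasOwnProperty(key)) copy[key] = obj[key];
    }
    return copy;
}

function addToModel(list, item) {
    // store a copy so later changes to the caller's object can't silently mutate the model
    list.push(shallowCopy(item));
}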
Why not just make the objects stored in the list immutable? Instead of storing simple JSON-like objects you would store closures.
Say you have an object with two properties A and B. It looks like that:
myObj = {
    "A" : "someValue",
    "B" : "someOtherValue"
}
But then, as you said, anyone could alter the state of this object by simply overriding its properties A or B. Instead of passing such objects in a list to the client, you could pass read-only data created from your actual objects.
First define a function that takes an ordinary object and returns a set of accessors to it:
var readOnlyObj = function(builder) {
    return {
        getA : function() { return builder.A; },
        getB : function() { return builder.B; }
    }
}
Then, instead of your object myObj, give the user readOnlyObj(myObj) so that they can access the properties via the getA and getB methods.
This way you avoid the costs of cloning and provide a clear set of valid actions that a user can perform on your objects.
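A quick usage sketch of the wrapper above:

var safe = readOnlyObj(myObj);
safe.getA();   // "someValue"
safe.A;        // undefined; the underlying object is never exposed directly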
Using Ext.js or sencha, what is the point of doing the following:
Ext.apply(app.views, {
    contactsList: new app.views.ContactsList(),
    contactDetail: new app.views.ContactDetail(),
    contactForm: new app.views.ContactForm()
});
As opposed to this standard javascript:
app.views.contactsList = new app.views.ContactsList();
app.views.contactDetail = new app.views.ContactDetail();
app.views.contactForm = new app.views.ContactForm();
Is there any difference?
Ext.apply is generally more convenient (and possibly more efficient if there are fewer activation chain lookups required, as in your example, though that's a minor point). There is also a variant, Ext.applyIf, that only applies members from the source object that do not already exist in the target object, which is even more useful, as it saves you from a boatload of manual if() checks. That's really handy, for example, when applying defaults to a config object that may already have user- or app-defined properties assigned.
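For example, a small sketch of that defaults use case:

var config = { width: 300 };                       // user-supplied config
Ext.applyIf(config, { width: 100, height: 200 });  // defaults
// config is now { width: 300, height: 200 }; existing keys are left untouched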
A note to future readers who look at the accepted answer: Ext also has Ext.extend which actually means "inherit from" a class, as opposed to Ext.apply[If] which merely merges an object instance into another, or Ext.override which overrides (without subclassing) a class definition. Lots of options, depending on what you need.
It's mostly there as a convenience method for code that is accepting an object as an argument, and needs to merge it. Merging objects is a common use case in JavaScript, and this type of helper is implemented by most frameworks. ($.extend in jQuery, Object.extend in Prototype, Object.append in MooTools, etc.)
In your case, there is little difference, other than offering a bit more readable code.
You need Ext.apply if you use Ajax requests to get the configuration from the server. Because Ajax responses are received later on, after the window is rendered, the static assignments in the second part of your code won't work.
I've always passed arguments to a function like so:
setValue('foo','#bar')
function setValue(val,ele){
    $(ele).val(val);
};
Forgive the silly example. But recently I have been working on a project that has some functions that take a lot of arguments. So I started passing the arguments through as an object (not sure if that's the correct way to put that), like so:
setValue({
    val:'foo',
    ele:'#bar'
});
And then in the function:
function setValue(options){
    var value = options.val;
    var element = options.ele;
    $(element).val(value);
};
My question is, is there a better way to do that? Is it common practice (or okay) to call these 'options'? And do you typically need to 'unpack' (for lack of a better term) the options and set local vars inside the function? I have been doing it this way in case one of them was not defined.
I'm really looking to avoid creating bad habits and writing a bunch of ugly code. Any help is appreciated and +1'd by me. Thanks.
I do the exact same thing, except I don't declare a new variable for each option inside the function.
I think options is a good name for it although I shorten it to opts.
I always have a "defaults" object within the function that specifies default values for each available option, even if it's simply null. I use jQuery, so I can just use $.extend to merge the defaults and the user-specified options like this: var opts = $.extend({}, defaults, opts);
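For example, a minimal sketch of that pattern applied to the setValue function from the question (the default values are just placeholders):

function setValue(options) {
    var defaults = { val: '', ele: null };
    var opts = $.extend({}, defaults, options);
    $(opts.ele).val(opts.val);
}

setValue({ val: 'foo', ele: '#bar' });  // any missing options fall back to the defaults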
I believe this is a great pattern. I've heard an options object like this referred to as a "builder object" in other languages (at least in the context of object creation). Here are some of the advantages:
Users of your function don't have to worry about what order the parameters are in. This is especially helpful in cases like yours where the method takes a lot of arguments. It's easy to get those mixed up, and JavaScript will not complain!
It's easy to make certain parameters optional (this comes in handy when writing a plugin or utility).
There are some pitfalls though. Specifically, the user of your function might not specify some of the options, and your code would choke (note that this could also happen with a normal JS function: the user still doesn't have to supply the correct arguments). A good way of handling this is to provide default values for parameters that are not required:
var value = options.val || 0;
var element = options.ele || {};
$(element).val(value);
You could also return from the function immediately or throw an exception if the correct arguments aren't supplied.
A good resource for learning how to handle builder objects is to check out the source of things like jQueryUI.
I realize this question is a year old, but I think the cleanest way to pass an arbitrary number of arguments to a JavaScript function is using an array and the built-in apply method:
fun.apply(object, [argsArray])
Where fun is the function, object is the scope/context in which you want the function to be executed, and argsArray is an array of the arguments (which can hold any number of arguments to be passed).
The current pitfall is that the arguments must be an array (literal or object) and not an array-like object such as {'arg' : 6, 'arg2' : "stuff"}. ECMAScript 5 will let you pass array-like objects, but it only seems to work in Firefox at the moment and not IE9 or Chrome.
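A quick sketch, reusing the setValue example from the question:

function setValue(val, ele) {
    $(ele).val(val);
}

// same as setValue('foo', '#bar'), but the arguments come from an array
setValue.apply(null, ['foo', '#bar']);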
If you look at the jQuery implementation, it uses an options class to handle most of the arbitrary-number-of-parameters functions, so I think you are in good company.
The other way is to test for arguments.length, but that only works if your arguments are always in the same order of optionality.
It's worth remembering that all functions have a bonus object available called arguments that is very much like a JS array (it has length but none of the array methods) and that contains all the parameters passed in.
Useful if you want to pass in a variable number of parameters, e.g.:
function Sum() {
    var i, sum = 0;
    for (i = 0; i < arguments.length; i++) {
        sum += arguments[i];
    }
    return sum;
};
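For example, Sum(1, 2, 3, 4) returns 10.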
If this isn't the case and you just have a lot of parameters, use the params object as you've described.
Nothing wrong with that practice.
"Options" seems like as good a name as any.
You don't need to "unpack" them, but if you'll be accessing the same item several times, it will be a little more efficient to reference them in local variables because local variable access is generally quicker than property lookups.