You can extend native objects in JavaScript. For example, sugar.js extends Array, String, and Function, among other things. Native-object extensions can be very useful, but they inherently break encapsulation: if someone else uses the same extension name, one extension overwrites the other and things break.
It would be incredibly nice if you could extend objects for a particular scope only, e.g. being able to do something like this in Node.js:
// myExtension1.js
Object.prototype.x = 5
exports.speak = function() {
  var six = ({}.x + 1)
  console.log("6 equals: " + six)
}

// myExtension2.js
Object.prototype.x = 20
exports.speak = function() {
  var twenty1 = ({}.x + 1)
  console.log("21 equals: " + twenty1)
}
and have this work right:
// test.js
var one = require('myExtension1')
var two = require('myExtension2')
one.speak(); // 6 equals: 6
two.speak(); // 21 equals: 21
Of course in reality, this will print out "6 equals: 21" for the first one.
Is there any way, via any mechanism, to make something like this possible? The mechanisms I'm interested in hearing about include:
Pure javascript
Node.js
C++ extensions to Node.js
Unfortunately you cannot currently do that in Node.js, because Node shares the same built-in objects across all modules.
This is bad because it can lead to unexpected side effects, as has happened in browsers in the past, and that's why everyone now says "don't extend built-in objects".
Other CommonJS environments follow the original CommonJS spec more closely, so built-in objects are not shared: every module gets its own. For instance, Jetpack, the Mozilla SDK for building Firefox add-ons, works this way; built-in objects are per module, so if you extend one you can't clash with anyone else.
Anyway, in general I believe that extending built-in objects is not really necessary nowadays and should be avoided.
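In the meantime, for the example in the question, one way to sidestep the problem entirely (just a sketch, and only one of several options) is to keep the value module-local and expose behaviour through the module's own exports instead of touching Object.prototype:
// myExtension1.js - module-scoped value instead of Object.prototype.x
var x = 5 // private to this module, so no other module can clash with it
exports.speak = function() {
  var six = x + 1
  console.log("6 equals: " + six)
}
test.js from the question then prints the expected output for both modules, because neither one modifies the shared built-ins.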
This is not possible since a native type only has a single source for its prototype. In general I would discourage mucking with the prototypes of native types. Not only are you limiting your portability (as you pointed out), but you may also be unknowingly overwriting existing properties or future properties. This also creates a lot of "magic" in your code that a future maintainer will have a hard time tracking down. The only real exception to this rule is polyfills: if your environment has not implemented a new feature yet, a polyfill can provide it.
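For example, a guarded polyfill (a sketch using String.prototype.trim, which old engines lacked; adapt the feature test to whatever method you are filling in) only touches the prototype when the environment does not already provide the method:
// Only define the method if the environment lacks it; native implementations win.
if (!String.prototype.trim) {
  String.prototype.trim = function () {
    return this.replace(/^\s+|\s+$/g, '');
  };
}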
Related
ES5 added a number of methods to Object, which seem to break the semantic consistency of JavaScript.
For instance, prior to this extension, the JavaScript API always revolved around operating on the object itself:
var arrayLength = [].length;
var firstPosInString = "foo".indexOf("o");
... whereas the new Object methods look like this:
var obj = { };
Object.defineProperty(obj, 'foo', {
  value: 'a',
  writable: false
});
... when the following would have been much more consistent:
var obj = { };
obj.defineProperty('foo', {
  value: 'a',
  writable: false
});
Can anyone satisfy my curiosity as to why this is? Are there any code snippets that this would break? Are there any public discussions by the standards committee about why they chose this approach?
This is all explained very nicely in the "Proposed ECMAScript 3.1 Static Object Functions: Use Cases and Rationale" document (PDF) by Allen Wirfs-Brock himself (editor of the ES5 spec and a member of TC39).
I would suggest reading all of it. It's pretty short, easily digestible, and gives a nice glimpse of the thought process behind these ES5 additions.
But to quote relevant section (emphasis mine):
A number of alternative API designs were considered before the
proposed API was chosen. In the course of considering alternatives we
developed a set of informal guidelines that we applied when
considering the alternatives. These guidelines are:
Cleanly separate the meta and application layers.
Try to minimize the API surface area (i.e., the number of methods and the complexity of their arguments).
Focus on usability in naming and parameter design.
Try to repeatedly apply basic elements of a design.
If possible, enable programmers or implementations to statically optimize uses of the API.
[...]
Here are some of the alternatives that were considered that lead to
the selected design.
The obvious initial idea, following the example of the already
existing standard method Object.prototype.propertyIsEnumerable, was to
add additional “propertyIs...” query methods on Object.prototype for
the other attributes and a parallel set of attribute changing methods.
[...]
As we considered this approach there were a number of things about it
that we didn’t like and that seemed contrary to the above API design
guidelines:
It merges rather than separates the meta and application layers. As methods on Object.prototype the methods would be part of the public
interface of every application object in a program. As such, they need
to be understood by every developer, not just library designers.
[...]
the JavaScript API always revolved around operating on the object itself;
This is not correct. For example, JSON and Math have always had their own static methods. Nobody does things like this:
var x = 0;
x.cos(); // 1.0
({"a":[0,1],"p":{"x":3,"y":4}}).toJSON();
There are numerous articles on the web about why extending Object.prototype is a bad thing. Yes, they're about client code, but several of the same points apply to built-in methods as well.
I'd like something like Python's defaultdict in JavaScript, except without using any libraries. I realize this won't exist in pure JavaScript. However, is there a way to define such a type in a reasonable amount of code (not just copying-and-pasting some large library into my source file) that won't hit undesirable corner cases later?
I want to be able to write the following code:
var m = defaultdict(function() { return [] });
m["asdf"].push(0);
m["qwer"].push("foo");
Object.keys(m).forEach(function(key) {
  // Should give me "asdf" -> [0] and "qwer" -> ["foo"]
});
I need this to work on recent versions of Firefox, Chrome, Safari, and ideally Edge.
Again, I do not want to use a library if at all possible. I want a way to do this in a way that minimizes dependencies.
Reasons why previous answers don't work:
This answer uses a library, so it fails my initial criteria. Also, the defaultdict it provides doesn't actually behave like a Javascript object. I'm not looking to write Python in Javascript, I'm looking to make my Javascript code less painful.
This answer suggests defining get. But you can't use this to define a defaultdict over collection types (e.g. a map of lists). And I don't think the approach will work with Object.keys either.
This answer mentions Proxy, but it's not obvious to me how many methods you have to implement to avoid holes that would lead to bad corner cases later. Writing all of the Proxy methods certainly seems like a pain, but if you skip any, you might cause painful bugs for yourself down the road when you use something you didn't implement a handler for. (Bonus question: what is the minimal set of Proxy methods you'd need to implement to avoid such holes?) On the other hand, the suggested getter approach doesn't follow standard object syntax, and you also can't do things like Object.keys.
You really seem to be looking for a Proxy. It is available in the modern browsers you mention, is not a library, and is the only technology that lets you keep the standard object syntax. Using a proxy is actually quite simple; all you need to implement is the get trap, which should automatically create non-existing properties:
function defaultDict(createValue) {
  return new Proxy(Object.create(null), {
    get(storage, property) {
      if (!(property in storage))
        storage[property] = createValue(property);
      return storage[property];
    }
  });
}
var m = defaultDict(function() { return [] });
m["asdf"].push(0);
m["qwer"].push("foo");
Object.keys(m).forEach(console.log);
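As a usage note (a sketch relying only on the default behaviour of the traps we did not define; the keys are just the ones from the example above): operations that are not trapped, such as the in operator and Object.keys, forward straight to the target object, so merely checking for a key does not create it:
var m = defaultDict(function() { return [] });
m["asdf"].push(0);
console.log("asdf" in m);    // true  - default `has` behaviour sees the stored key
console.log("missing" in m); // false - checking a key does not auto-create it
console.log(Object.keys(m)); // ["asdf"] - default `ownKeys` behaviour lists stored keys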
If I implement a method x on String like this:
String.prototype.x = function (a) {...}
and then a new version of JavaScript actually implements an x method, but in a different way (either returning something different from my implementation, or taking more or fewer arguments), will that break my implementation and override it?
You'll overwrite the default implementation.
Any code that uses it will use yours instead.
There was a proposal for scoped extension methods and it was rejected because it was too expensive computationally to implement in JS engines. There is talk about a new proposal (protocols) to address the issue. ES6 symbols will also give you a way around that (but with ugly syntax).
However, that's not the punch line; here's a fun fact no one is going to tell you:
No one is ever going to implement a method called x on String.prototype
You can implement it and get away with it. Seriously, prollyfilling and polyfilling are viable, expressive and interesting solutions to many use cases. If you're not writing a library, I think it's acceptable.
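To sketch the Symbol route mentioned above (the symbol and the method body are made up for illustration), a symbol-keyed method can never collide with a present or future String.prototype.x:
// A symbol is a unique key, so it cannot clash with any current or future built-in name.
var x = Symbol("x");
String.prototype[x] = function (suffix) {
  return this + suffix;
};
console.log("foo"[x]("bar")); // "foobar" - the bracket access is the "ugly syntax" part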
No: you'll be overriding the default implementation of said function from the point at which you've declared/defined it. The "new" implementation will keep its native behavior until your implementation is defined.
var foo = 'some arbitrary string';
console.log(foo.indexOf('s')); // logs 0
String.prototype.indexOf = function(foo, bar) { return 'foo'; };
console.log(foo.indexOf()); // logs "foo"
Illustration: http://jsfiddle.net/Z4Fq9/
Your code will be overriding the default implementation.
However, if the interface of your method is not compatible with the standard one, libraries you use could depend on the standard behavior, so the program as a whole could break anyway with newer versions of those libraries.
In general it is a bad idea to do something that could break if others do the same: what if another library thinks it's a good idea to add a method x to the standard String prototype? Trying to avoid conflicts is a must for libraries, but it's also good for applications (and if an application is written nicely, then a lot of its code is probably quite similar to a library, and may evolve into a library later).
This kind of "patching" makes sense only to provide a standard method for broken or old JavaScript implementations where that method is absent. Patching standard prototypes just because you can is a bad idea and will make your code a bad neighbor that is difficult to share a page with.
If the implementation of x comes from a new version of JavaScript, it's part of the core, so when you write String.prototype.x = ... it will already be there, and you will overwrite it.
The best practice in this kind of situation is to write:
if (!String.prototype.x) {
  String.prototype.x = function (a) {
    // your implementation, used only when no native version exists
  };
}
Not sure if this has been asked before, so please point me to any good source if you have one.
My team is working on a big JS chart project we inherited from previous developers, who made intensive use of built-in objects' prototypes for adding reusable code. We have a lot of utility functions added to Date, Object, and other intrinsic objects; I guess they went this way because altering prototypes provides a slightly more intuitive API.
On the other hand, our component suffers from performance/memory gotchas, and we apply every optimization and best practice we can find, but I can't find one regarding API design. I'm trying to figure out whether it's better to seed built-in objects' prototypes with library code or to group that code in dedicated objects via something like the namespace pattern.
The question is: which design is better, and will one of them perform better or use less memory than the other?
Date.prototype.my_custom_function = function () { /* ... */ };
var period = new Date();
period.my_custom_function();
vs
DateLib.my_custom_function = function (period) { /* ... */ }; // defined on the DateLib namespace object
var period = new Date();
DateLib.my_custom_function(period);
Thanks, guys, any help is appreciated!
EDIT:
The main problem is that our component ended up as an awkward JS beast that slows down some mobile devices, especially old ones such as the iPad 1 and early Androids. We've done a lot of optimization, but I still see several questionable parts, and I want to find out whether built-in prototype pollution is another candidate to look into.
In particular, Date and Object are heavily loaded with typical library code. For example, my_custom_function is in fact a very big function, and it's not the only additional member that sits on the Date prototype at startup.
Object is loaded even more heavily. Most of the client code doesn't make use of those additional features (they're only used deliberately in specific places), so we're about to decide which approach we're better off sticking with:
Keep using the prototype-pollution design
Refactor reusable APIs into separate static library objects
To be honest, I haven't run perf benchmarks yet; I will once I have time. If someone has results or ideas, that would be very helpful.
Modifying objects you don't own is definitely a bad idea. The choice here is rather architectural: if you have to store a date persistently, then use a private property set in the constructor:
function DateLib() {
  this._dateObject = new Date();
}
DateLib.prototype.getDateString = function () {
  return this._dateObject.toDateString();
};
var dateLib = new DateLib();
dateLib.getDateString();
If you just want to do some manipulation with a date, create a method:
var DateLib = {
  toDateString: function (date) {
    return date.toDateString();
  }
};
DateLib.toDateString(new Date());
As for performance, all approaches are equally fast (thanks to Bergi; alternative test by Esailija).
Note: this is not a browser comparison test. The tests were performed on different machines, so only method-vs-method performance should be analysed here.
Are there any reliable techniques for storing prototype-based libraries/frameworks in MongoDB's system.js? I came across this issue when trying to use dateJS formats within a map-reduce. JIRA #SERVER-770 explains that objects' closures - including their prototypes - are lost when serialized to the system.js collection, and that this is the expected behavior. Unfortunately, this excludes a lot of great frameworks such as dojo, Google Closure, and jQuery.
Is there a way to somehow convert or contain libraries such that they don't rely on prototyping? There's some promise in initializing them before the map-reduce and passing them in through the scope object, but I haven't had much luck so far. If my approach is flawed, what is a better way to enable server-side JavaScript reuse in Mongo?
Every query using JS may reuse an existing JS context or get a brand new one, into which the stored JS objects are loaded.
In order to do what you want, you need either:
mongod to run the stored code automatically when installing it
mapreduce to have an init method
The first is definitely the more interesting feature.
It turns out that the MongoDB V8 build does this automatically (though it's not officially supported), but the official SpiderMonkey build does not.
Say you store code like:
db.system.js.save({ _id: "mylib", value: "myprint = function() { print('installed'); return 'installed';" }
Then with V8 you can use myprint() freely in your code, but with SpiderMonkey you would need to call mylib() explicitly first.
As a workaround you can create another method:
db.system.js.save({ _id: "installLib", value: "if (!libLoaded) mylib(); libLoaded = true;" }
And call it from your map() function.
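For example (a sketch; the collection, the emitted field, and the output options are made up for illustration), the map function can invoke the install helper before emitting:
var mapFn = function () {
  installLib();           // ensures mylib() has run once in this JS context
  emit(this.category, 1); // illustrative emit; adjust to your own schema
};
var reduceFn = function (key, values) {
  return Array.sum(values);
};
db.mycollection.mapReduce(mapFn, reduceFn, { out: { inline: 1 } });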
I created a ticket to standardize the engines and allow automatic loading:
https://jira.mongodb.org/browse/SERVER-4450