Is it possible to use Symbol.species for plain objects?

The MDN gives the following working example of Symbol.species:
class MyArray extends Array {
// Overwrite species to the parent Array constructor
static get [Symbol.species]() { return Array; }
}
var a = new MyArray(1,2,3);
var mapped = a.map(x => x * x);
console.log(mapped instanceof MyArray); // false
console.log(mapped instanceof Array); // true
The ECMAScript 2015 specification says:
A function valued property that is the constructor function that is used to create derived objects.
The way I understand it, it would mainly be used to duck-type custom objects in some way, in order to trick the instanceof operator.
It seems very useful but I have not managed to use it on plain objects (FF 44.0a2):
let obj = {
get [Symbol.species]() {
return Array
}
}
console.log(obj instanceof Array) //false =(
console.log(obj instanceof Object) //true
Is there any way to use Symbol.species on plain objects in order to trick the instanceof operator?

The way I understand it, it would mainly be used to duck-type custom objects in some way, in order to trick the instanceof operator.
Nope, that's what Symbol.hasInstance is good for (though it makes tricky constructors, e.g. for mixins, not tricky instances).
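For the record, here is a minimal sketch of how Symbol.hasInstance (not Symbol.species) customises what instanceof reports; the Even class name is made up for illustration:
// Symbol.hasInstance lets a constructor (or any object) decide what
// `x instanceof Thing` returns.
class Even {
  static [Symbol.hasInstance](value) {
    return typeof value === "number" && value % 2 === 0;
  }
}
console.log(4 instanceof Even); // true
console.log(5 instanceof Even); // false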
The point of Symbol.species is to let built-in methods determine the proper type for derived objects. Whenever a function is supposed to return a new instance of the same kind, it usually instantiates a new this.constructor, which might be a subclass of the class that defined the method. But you might not always want this when subclassing, and that's where Symbol.species comes in.
The example given by MDN is quite good, you just must not miss the distinction between a and mapped:
var a = new MyArray(1,2,3);
var mapped = a.map(x => x * x);
Object.getPrototypeOf(a) == MyArray.prototype; // true
Object.getPrototypeOf(mapped) == Array.prototype; // true
(As you know, instanceof is just sugar for the reverse of isPrototypeOf.)
So what happens here is that the Array map method is called on an instance of MyArray, and creates a derived object that is now an instance of Array - because a.constructor[Symbol.species] said so. Without it, map would have created another MyArray.
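For contrast, a minimal sketch of the default behaviour when no Symbol.species getter is overridden (the PlainSubArray name is just for illustration):
// Without a species override, map() consults this.constructor and
// produces another instance of the subclass.
class PlainSubArray extends Array {}
var mapped2 = new PlainSubArray(1, 2, 3).map(x => x * x);
console.log(mapped2 instanceof PlainSubArray); // true
console.log(mapped2 instanceof Array);         // true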
This trick would work with plain objects just as well:
var b = {
length: 0,
map: Array.prototype.map,
constructor: { // yes, usually a function
[Symbol.species]: MyArray
}
};
var mapped = b.map(x => x*x);
console.log(mapped instanceof MyArray); // true
I can't say whether it's of any use in this example, though :-)

Good question. I admit that I'm not 100% sure, but I've gotten the impression that the answer is no.
I used two methods to determine this answer:
Finding an environment that clearly supports all of the features necessary to test this, and running the code in that environment directly
Referencing the spec
The first point wasn't easy to test, and I wasn't entirely sure that I successfully did it. You used Firefox 44, but I decided to test another environment due to the fact that Kangax's support table says it only supports the existence of Symbol.species, and none of the Array-like features. To see this, you can expand the Symbol.species section in the link above.
In this case, even Kangax is a bit confusing. I would expect there to be non-Array-specific tests for Symbol.species aside from the existence test, but maybe I'm just not knowledgeable enough to understand that those tests cover all of the functionality of Symbols. I'm not sure!
While browsing the table, I saw that Edge supports all of the Symbol.species features. I loaded up my Windows VM, hopped on over to Edge 20, and pasted your code.
In Edge 20, the code had the same result as your test in Firefox 44.
So this began to give me confidence that the feature doesn't work in the way that you want it to. Just to make sure that Edge had its JavaScript in order, I decided to run the code from the MDN article you linked, and...I got a syntax error! It appears Edge is still updating its Class implementation due to late-in-the-game changes to the Class spec, so it's not enabled.
Their current implementation is, however, available through an experimental flag. So I flipped that on and was able to reproduce the result that MDN got.
And that was the extent of my JavaScript environment testing. I don't have 100% confidence in this test, but I would say that I'm fairly confident in the result.
So onto the second method I used to determine my result: the specification.
I admit that I'm no expert at the ECMAScript specification. I do my best to understand it, but it is likely that I might misinterpret some of the information in there, because some of the technical jargon used is just way over my head.
Nevertheless, I can sometimes get the gist of what is being said. Here's a snippet that's about the symbol @@species:
Methods that create derived collection objects should call @@species to determine the constructor to use to create the derived objects. Subclass constructor may over-ride @@species to change the default constructor assignment.
The phrase "derived objects" is used quite frequently in describing this symbol, but it's not defined anywhere. One might interpret it to be a synonym of sorts with instantiated. If that's a correct interpretation, then this can likely only be used with constructors or classes.
Either way, it sounds as if @@species is an active symbol, rather than a passive one. By this I mean that it sounds like it's a function that's actually called during the creation of an object, which is then what determines the instanceof value, rather than a function that's called at the time instanceof is evaluated.
But anyway, take that interpretation with a grain of salt. Once again, I'm not 100% certain of any of this. I considered posting this as a comment rather than an answer, but it ended up getting quite long.

Related

How to make a slice function closer to the native one, in the form in which it is invoked? [duplicate]

Google JavaScript Style Guide advises against extending the Array.prototype.
However, I used Array.prototype.filter = Array.prototype.filter || function(...) {...} as a way to have it (and similar methods) in browsers where they do not exist. MDN actually provides a similar example.
I am aware of the Object.prototype issues, but Array is not a hash table.
What issues may arise while extending Array.prototype that made Google advise against it?
Most people missed the point on this one. Polyfilling or shimming standard functionality like Array.prototype.filter so that it works in older browsers is a good idea in my opinion. Don't listen to the haters. Mozilla even shows you how to do this on the MDN. Usually the advice for not extending Array.prototype or other native prototypes might come down to one of these:
for..in might not work properly
Someone else might also want to extend Array with the same function name
It might not work properly in every browser, even with the shim.
Here are my responses:
You don't usually need to use for..in on Arrays. If you do, you can use hasOwnProperty to make sure it's legit.
Only extend natives when you know you're the only one doing it OR when it's standard stuff like Array.prototype.filter.
This is annoying and has bitten me. Old IE sometimes has problems with adding this kind of functionality. You'll just have to see if it works on a case-by-case basis. For me the problem I had was adding Object.keys to IE7. It seemed to stop working under certain circumstances. Your mileage may vary.
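To make point 2 concrete, the usual guarded-shim pattern looks something like this (a minimal sketch in the spirit of the MDN Array.prototype.filter polyfill):
// Only define filter if the environment doesn't already provide it.
if (!Array.prototype.filter) {
  Array.prototype.filter = function (callback, thisArg) {
    var result = [];
    for (var i = 0; i < this.length; i++) {
      if (i in this && callback.call(thisArg, this[i], i, this)) {
        result.push(this[i]);
      }
    }
    return result;
  };
}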
Check out these references:
http://perfectionkills.com/extending-native-builtins/
http://blip.tv/jsconf/jsconf2011-andrew-dupont-everything-is-permitted-extending-built-ins-5211542
https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Array/filter
https://github.com/kriskowal/es5-shim
Good luck!
I'll give you the bullet points, with key sentences, from Nicholas Zakas' excellent article Maintainable JavaScript: Don’t modify objects you don’t own:
Dependability: "The simple explanation is that an enterprise software product needs a consistent and dependable execution environment to be maintainable."
Incompatible implementations: "Another peril of modifying objects that you don’t own is the possibility of naming collisions and incompatible implementations."
What if everyone did it?: "Simply put: if everyone on your team modified objects that they didn’t own, you’d quickly run into naming collisions, incompatible implementations, and maintenance nightmares."
Basically, don't do it. Even if your project is never going to be used by anyone else, and you're never going to import third party code, don't do it. You'll establish a horrible habit that could be hard to break when you start trying to play nice with others.
As a modern update to Jamund Ferguson's answer:
Usually the advice for not extending Array.prototype or other native prototypes might come down to one of these:
for..in might not work properly
Someone else might also want to extend Array with the same function name
It might not work properly in every browser, even with the shim.
Points 1. and 2. can now be mitigated in ES6 by using a Symbol to add your method.
It makes for a slightly more clumsy call structure, but adds a property that isn't iterated over and can't be easily duplicated.
// Any string works but a namespace may make library code easier to debug.
var myMethod = Symbol('MyNamespace::myMethod');
Array.prototype[ myMethod ] = function(){ /* ... */ };
var arr = [];
// slightly clumsier call syntax
arr[myMethod]();
// Also works for objects
Object.prototype[ myMethod ] = function(){ /* ... */ };
Pros:
For..in works as expected, symbols aren't iterated over.
No clash of method names as symbols are local to scope and take effort to retrieve.
Cons:
Only works in modern environments
Slightly clunky syntax
Extending Array.prototype in your own application code is safe (unless you use for .. in on arrays, in which case you need to pay for that and have fun refactoring them).
Extending native host objects in libraries you intend others to use is not cool. You have no right to corrupt the environment of other people in your own library.
Either do this behind an optional method like lib.extendNatives() or have [].filter as a requirement.
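Something along these lines, where nothing touches Array.prototype unless the consumer explicitly opts in (the lib and last names are illustrative):
var lib = {
  last: function (arr) { return arr[arr.length - 1]; },
  extendNatives: function () {
    // Hypothetical opt-in: only now does the library modify the native prototype.
    if (!Array.prototype.last) {
      Array.prototype.last = function () { return lib.last(this); };
    }
    return lib;
  }
};

lib.last([1, 2, 3]);  // 3 - works without touching natives
lib.extendNatives();
[1, 2, 3].last();     // 3 - only after the explicit opt-in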
Extending Natives and Host Objects
Prototype does this. It's evil. The following snippet demonstrates how doing so can produce unexpected results:
<script language="javascript" src="https://ajax.googleapis.com/ajax/libs/prototype/1.7.0.0/prototype.js"></script>
<script language="javascript">
a = ["not", "only", "four", "elements"];
for (var i in a)
document.writeln(a[i]);
</script>
The result:
not only four elements function each(iterator, context) { var index = 0; . . .
and about 5000 characters more.
I want to add an additional answer that allows extending the Array prototype without breaking for .. in loops, and without requiring use of hasOwnPropery:
Don't use this bad approach which causes prototype values to appear in for .. in:
Array.prototype.foo = function() { return 'foo'; };
Array.prototype.bar = function() { return 'bar'; };
let a = [ 1, 2, 3, 4 ];
console.log(`Foo: ${a.foo()}`);
console.log(`Bar: ${a.bar()}`);
console.log('==== Enumerate: ====');
for (let v in a) console.log(v);
Instead use Object.defineProperty, with enumerable: false - it exists for pretty much exactly this reason!
Object.defineProperty(Array.prototype, 'foo', {
value: function() { return 'foo'; },
enumerable: false
});
Object.defineProperty(Array.prototype, 'bar', {
value: function() { return 'bar'; },
enumerable: false
});
let a = [ 1, 2, 3, 4 ];
console.log(`Foo: ${a.foo()}`);
console.log(`Bar: ${a.bar()}`);
console.log('==== Enumerate: ====');
for (let v in a) console.log(v);
Note: Overall, I recommend avoiding enumerating Arrays using for .. in. But this knowledge is still useful for extending prototypes of classes where enumeration is appropriate!
Some people use for ... in loops to iterate through arrays. If you add a method to the prototype, the loop will also try to iterate over that key. Of course, you shouldn't use it for this, but some people do anyway.
You can easily create some kind of sandbox with the poser library.
Take a look on https://github.com/bevacqua/poser
var Array2 = require('poser').Array();
// <- Array
Array2.prototype.eat = function () {
var r = this[0];
delete this[0];
console.log('Y U NO .shift()?');
return r;
};
var a = new Array2(3, 5, 7);
console.log(Object.keys(Array2.prototype), Object.keys(Array.prototype))
I believe this question deserves an updated ES6 answer.
ES5
First of all, as many people have already stated, extending native prototypes to shim or polyfill new standards or fix bugs is standard practice and not harmful. For example, if a browser doesn't support the .filter method (if (!Array.prototype.filter)), you are free to add this functionality on your own. In fact, the language is designed to do exactly this to manage backwards compatibility.
Now, you'd be forgiven for thinking that since JavaScript objects use prototypal inheritance, extending a native object like Array.prototype without interfering should be easy, but up until ES6 it hasn't been feasible.
Unlike with plain objects, for example, you had to rely on modifying Array.prototype to add your own custom methods. As others have pointed out, this is bad because it pollutes the global namespace, can interfere with other code in an unexpected way, has potential security issues, is a cardinal sin, etc.
In ES5 you can try hacking this but the implementations aren't really practically useful. For more in depth information, I recommend you check out this very informative post: http://perfectionkills.com/how-ecmascript-5-still-does-not-allow-to-subclass-an-array/
You can add a method to an array, or even an array constructor but you run into issues trying to work with the native array methods that rely on the length property. Worst of all, these methods are going to return a native Array.prototype and not your shiny new sub-class array, ie: subClassArray.slice(0) instanceof subClassArray === false.
ES6
However, now with ES6 you can subclass builtins using class combined with extends Array that overcomes all these issues. It leaves the Array.prototype intact, creates a new sub-class and the array methods it inherits will be of the same sub-class! https://hacks.mozilla.org/2015/08/es6-in-depth-subclassing/
See the fiddle below for a demonstration:
https://jsfiddle.net/dmq8o0q4/1/
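A minimal sketch along the lines of what the fiddle demonstrates (the SubArray and last names are illustrative):
// Subclassing Array in ES6: Array.prototype stays untouched, and the
// inherited methods return instances of the subclass.
class SubArray extends Array {
  last() { return this[this.length - 1]; }
}

const sub = SubArray.of(1, 2, 3);
console.log(sub.last());                       // 3
console.log(sub.slice(0) instanceof SubArray); // true - unlike the ES5 hacks
console.log(Array.prototype.last);             // undefined - nothing leaked onto the native prototype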
Extending the prototype is a trick that only works once. You do it, then you use a library that also does it (in an incompatible way), and boom!
The function you are overriding could be used by internal JavaScript calls, and that could lead to unexpected results. That's one of the reasons for the guideline.
For example, I overrode the indexOf function of Array and it messed up accessing the array using [].

How can I make a sub-function for a pre-existing data type? [duplicate]

I was working on an AJAX-enabled asp.net application.
I've just added some methods to Array.prototype like
Array.prototype.doSomething = function(){
...
}
This solution worked for me, making it possible to reuse code in a 'pretty' way.
But when I tested it with the entire page, I had problems.
We had some custom AJAX extenders, and they started to behave unexpectedly: some controls displayed 'undefined' around their content or value.
What could be the cause of that? Am I missing something about modifying the prototype of standard objects?
Note: I'm pretty sure that the error begins when I modify the prototype for Array. It only needs to be compatible with IE.
While the potential for clashing with other bits o' code that override a function on a prototype is still a risk, if you want to do this with modern versions of JavaScript you can use the Object.defineProperty method, e.g.
// functional sort
Object.defineProperty(Array.prototype, 'sortf', {
value: function(compare) { return [].concat(this).sort(compare); }
});
Modifying the built-in object prototypes is a bad idea in general, because it always has the potential to clash with code from other vendors or libraries that loads on the same page.
In the case of the Array object prototype, it is an especially bad idea, because it has the potential to interfere with any piece of code that iterates over the members of any array, for instance with for .. in.
To illustrate using an example (borrowed from here):
Array.prototype.foo = 1;
// somewhere deep in other javascript code...
var a = [1,2,3,4,5];
for (x in a){
// Now foo is a part of EVERY array and
// will show up here as a value of 'x'
}
Unfortunately, the existence of questionable code that does this has made it necessary to also avoid using plain for..in for array iteration, at least if you want maximum portability, just to guard against cases where some other nuisance code has modified the Array prototype. So you really need to do both: you should avoid plain for..in in case some n00b has modified the Array prototype, and you should avoid modifying the Array prototype so you don't mess up any code that uses plain for..in to iterate over arrays.
It would be better for you to create your own type of object constructor complete with doSomething function, rather than extending the built-in Array.
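For instance, a minimal sketch of such a wrapper type (the MyList name and the doSomething body are illustrative):
// A small wrapper that owns its own prototype instead of touching Array's.
function MyList(items) {
  this.items = items || [];
}

MyList.prototype.doSomething = function () {
  // illustrative behaviour only
  return this.items.map(function (x) { return x * 2; });
};

var list = new MyList([1, 2, 3]);
list.doSomething();   // [2, 4, 6]
var plain = [1, 2, 3]; // plain arrays and for..in over them remain unaffected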
What about Object.defineProperty?
There now exists Object.defineProperty as a general way of extending object prototypes without the new properties being enumerable, though this still doesn't justify extending the built-in types, because even besides for..in there is still the potential for conflicts with other scripts. Consider someone using two Javascript frameworks that both try to extend the Array in a similar way and pick the same method name. Or, consider someone forking your code and then putting both the original and forked versions on the same page. Will the custom enhancements to the Array object still work?
This is the reality with Javascript, and why you should avoid modifying the prototypes of built-in types, even with Object.defineProperty. Define your own types with your own constructors.
A word of caution! Maybe you did this (fiddle demo):
Let us say we have an array and a method foo which returns the first element:
var myArray = ["apple","ball","cat"];
foo(myArray) // <- 'apple'
function foo(array){
return array[0]
}
The above is okay because function declarations are hoisted to the top at parse time.
But this DOES NOT work (because the prototype method has not been defined yet at the point of the call):
myArray.foo() // <- 'undefined function foo'
Array.prototype.foo = function(){
return this[0]
}
For this to work, simply define prototypes at the top:
Array.prototype.foo = function(){
return this[0]
}
myArray.foo() // <- 'apple'
And YES! You can override prototypes!!! It is ALLOWED. You can even define your own add method for Arrays.
You augmented generic types so to speak. You've probably overwritten some other lib's functionality and that's why it stopped working.
Suppose that some lib you're using extends Array with a function Array.remove(). After the lib has loaded, you also add remove() to Array's prototype, but with your own functionality. When the lib calls your function it will probably work differently than expected and break its execution... That's what's happening here.
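A contrived sketch of that collision (both remove definitions are made up for illustration):
// The library registers remove(value): strip the first matching element.
Array.prototype.remove = function (value) {
  var i = this.indexOf(value);
  if (i !== -1) this.splice(i, 1);
  return this;
};

// Your code later redefines remove(index): strip by position.
Array.prototype.remove = function (index) {
  this.splice(index, 1);
  return this;
};

// The library still calls arr.remove(someValue) internally and now
// silently removes the wrong element (or nothing at all).
[10, 20, 30].remove(20); // library expects [10, 30], but gets [10, 20, 30]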
Using Recursion
function forEachWithBreak(someArray, fn) {
  let breakFlag = false

  function breakFn() {
    breakFlag = true
  }

  function loop(indexIntoSomeArray) {
    if (!breakFlag && indexIntoSomeArray < someArray.length) {
      fn(someArray[indexIntoSomeArray], breakFn)
      loop(indexIntoSomeArray + 1)
    }
  }

  loop(0)
}
Test 1 ... break is not called
forEachWithBreak(["a","b","c","d","e","f","g"], function(element, breakFn){
console.log(element)
})
Produces
a
b
c
d
e
f
g
Test 2 ... break is called after element c
forEachWithBreak(["a","b","c","d","e","f","g"], function(element, breakFn){
console.log(element)
if(element =="c"){breakFn()}
})
Produces
a
b
c
There are 2 problems (as mentioned above)
It's enumerable (i.e. will be seen in for .. in)
Potential clashes (js, yourself, third party, etc.)
To solve these 2 problems we will:
Use Object.defineProperty
Give a unique id for our methods
const arrayMethods = {
  // Any unique string works; in real code generate it with a UUID helper.
  doSomething: "MyNamespace::doSomething"
}
Object.defineProperty(Array.prototype, arrayMethods.doSomething, {
value() {
// Your code, log as an example
this.forEach(v => console.log(v))
}
})
const arr = [1, 2, 3]
arr[arrayMethods.doSomething]() // 1, 2, 3
The syntax is a bit weird but it's nice if you want to chain methods (just don't forget to return this):
// assumes arrayMethods.log was registered the same way and its function returns this
arr
  .map(x => x + 1)
  [arrayMethods.log]()
  .map(x => x + 1)
  [arrayMethods.log]()
In general messing with the core javascript objects is a bad idea. You never know what any third party libraries might be expecting and changing the core objects in javascript changes them for everything.
If you use Prototype it's especially bad because prototype messes with the global scope as well and it's hard to tell if you are going to collide or not. Actually modifying core parts of any language is usually a bad idea even in javascript.
(lisp might be the small exception there)

Why would I need to freeze an object in JavaScript?

It is not clear to me when anyone would need to use Object.freeze in JavaScript. MDN and MSDN don't give real-life examples of when it is useful.
I get that an attempt to change such an object at runtime means a crash. The question is rather, when would I appreciate this crash?
To me the immutability is a design time constraint which is supposed to be guaranteed by the type checker.
So is there any point in having a runtime crash in a dynamically typed language, besides detecting a violation better later than never?
The Object.freeze function does the following:
Makes the object non-extensible, so that new properties cannot be added to it.
Sets the configurable attribute to false for all properties of the object. When configurable is false, the property attributes cannot be changed and the property cannot be deleted.
Sets the writable attribute to false for all data properties of the object. When writable is false, the data property value cannot be changed.
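All three effects are easy to observe directly; a minimal sketch (run as non-strict code so the failed writes fail silently instead of throwing):
var frozen = Object.freeze({ answer: 42 });

frozen.newProp = 1;       // ignored - object is not extensible
frozen.answer = 0;        // ignored - property is not writable
delete frozen.answer;     // ignored - property is not configurable

console.log(frozen);                   // { answer: 42 }
console.log(Object.isFrozen(frozen));  // true
console.log(Object.getOwnPropertyDescriptor(frozen, "answer"));
// { value: 42, writable: false, enumerable: true, configurable: false }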
That's the what part, but why would anyone do this?
Well, in the object-oriented paradigm, the notion exists that an existing API contains certain elements that are not intended to be extended, modified, or re-used outside of their current context. The final keyword in various languages is the most suitable analogy of this. Even in languages that are not compiled and therefore easily modified, it still exists, i.e. PHP, and in this case, JavaScript.
You can use this when you have an object representing a logically immutable data structure, especially if:
Changing the properties of the object or altering its "duck type" could lead to bad behavior elsewhere in your application
The object is similar to a mutable type or otherwise looks mutable, and you want programmers to be warned on attempting to change it rather than obtain undefined behavior.
As an API author, this may be exactly the behavior you want. For example, you may have an internally cached structure that represents a canonical server response that you provide to the user of your API by reference but still use internally for a variety of purposes. Your users can reference this structure, but altering it may result in your API having undefined behavior. In this case, you want an exception to be thrown if your users attempt to modify it.
In my nodejs server environment, I use freeze for the same reason I use 'use strict'. If I have an object that I do not want being extended or modified, I will freeze it. If something attempts to extend or modify my frozen object, I WANT my app to throw an error.
To me this relates to consistent, quality, more secure code.
Also,
Chrome is showing significant performance increases working with frozen objects.
Edit:
In my most recent project, I'm sending/receiving encrypted data between a government entity. There are a lot of configuration values. I'm using frozen object(s) for these values. Modification of these values could have serious, adverse side effects. Additionally, as I linked previously, Chrome is showing performance advantages with frozen objects, I assume nodejs does as well.
For simplicity, an example would be:
var US_COIN_VALUE = {
QUARTER: 25,
DIME: 10,
NICKEL: 5,
PENNY: 1
};
return Object.freeze( US_COIN_VALUE );
There is no reason to modify the values in this example. And enjoy the benefits of speed optimizations.
Object.freeze() is mainly used in Functional Programming (Immutability)
Immutability is a central concept of functional programming because without it, the data flow in your program is lossy. State history is abandoned, and strange bugs can creep into your software.
In JavaScript, it’s important not to confuse const, with immutability. const creates a variable name binding which can’t be reassigned after creation. const does not create immutable objects. You can’t change the object that the binding refers to, but you can still change the properties of the object, which means that bindings created with const are mutable, not immutable.
Immutable objects can’t be changed at all. You can make a value truly immutable by deep freezing the object. JavaScript has a method that freezes an object one-level deep.
const a = Object.freeze({
foo: 'Hello',
bar: 'world',
baz: '!'
});
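Since Object.freeze is shallow, a deep freeze has to be applied recursively; a minimal sketch (deepFreeze is not built in, this is one common way to write it):
// Recursively freeze an object and everything reachable from it.
function deepFreeze(obj) {
  Object.freeze(obj);
  for (const key of Object.getOwnPropertyNames(obj)) {
    const value = obj[key];
    if (value !== null && typeof value === 'object' && !Object.isFrozen(value)) {
      deepFreeze(value);
    }
  }
  return obj;
}

const config = deepFreeze({ server: { host: 'example.com', port: 443 } });
config.server.port = 80;         // ignored (throws in strict mode)
console.log(config.server.port); // 443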
When you're writing a library/framework in JS and you don't want some developer to break your dynamic language creation by re-assigning "internal" or public properties.
This is the most obvious use case for immutability.
With the V8 release v7.6 the performance of frozen/sealed arrays is greatly improved. Therefore, one reason you would like to freeze an object is when your code is performance-critical.
What is a practical situation when you might want to freeze an object?
One example, on application startup you create an object containing app settings. You may pass that configuration object around to various modules of the application. But once that settings object is created you want to know that it won't be changed.
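Something along these lines (module shape and names are illustrative):
// settings.js - built once at startup, then handed out frozen.
const settings = Object.freeze({
  apiBase: 'https://api.example.com',
  retryLimit: 3,
  logLevel: 'warn'
});

module.exports = settings;

// In any other module, settings.retryLimit = 10 either throws (strict mode)
// or is silently ignored, so the configuration every module sees stays canonical.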
This is an old question, but I think I have a good case where freeze might help. I had this problem today.
The problem
class Node {
  constructor() {
    this._children = [];
    this._parent = undefined;
  }

  get children() { return this._children; }
  get parent() { return this._parent; }

  set parent(newParent) {
    // 1. if _parent is not undefined, remove this node from _parent's children
    // 2. set _parent to newParent
    // 3. if newParent is not undefined, add this node to newParent's children
  }

  addChild(node) { node.parent = this; }
  removeChild(node) { node.parent === this && (node.parent = undefined); }
  ...
}
As you can see, when you change the parent, it automatically handles the connection between these nodes, keeping children and parent in sync. However, there is one problem here:
let newNode = new Node();
myNode.children.push(newNode);
Now, myNode has newNode in its children, but newNode does not have myNode as its parent. So you've just broken it.
(OFF-TOPIC) Why are you exposing the children anyway?
Yes, I could just create lots of methods: countChildren(), getChild(index), getChildrenIterator() (which returns a generator), findChildIndex(node), and so on... but is it really a better approach than just returning an array, which provides an interface all javascript programmers already know?
You can access its length to see how many children it has;
You can access the children by their index (i.e. children[i]);
You can iterate over it using for .. of;
And you can use some other nice methods provided by an Array.
Note: returning a copy of the array is out of question! It costs linear time, and any updates to the original array do not propagate to the copy!
The solution
get children() { return Object.freeze(Object.create(this._children)); }
// OR, if you deeply care about performance:
get children() {
return this._PUBLIC_children === undefined
? (this._PUBLIC_children = Object.freeze(Object.create(this._children)))
: this._PUBLIC_children;
}
Done!
Object.create: we create an object that inherits from this._children (i.e. has this._children as its __proto__). This alone solves almost the entire problem:
It's simple and fast (constant time)
You can use anything provided by the Array interface
If you modify the returned object, it does not change the original!
Object.freeze: however, the fact that you can modify the returned object BUT the changes do not affect the original array is extremely confusing for the user of the class! So, we just freeze it. If he tries to modify it, an exception is thrown (assuming strict mode) and he knows he can't (and why). It's sad no exception is thrown for myFrozenObject[x] = y if you are not in strict mode, but myFrozenObject is not modified anyway, so it's still not-so-weird.
Of course the programmer could bypass it by accessing __proto__, e.g:
someNode.children.__proto__.push(new Node());
But I like to think that in this case they actually know what they are doing and have a good reason to do so.
IMPORTANT: notice that this doesn't work so well for objects: using hasOwnProperty in the for .. in will always return false.
UPDATE: using Proxy to solve the same problem for objects
Just for completion: if you have an object instead of an Array you can still solve this problem by using Proxy. Actually, this is a generic solution that should work with any kind of element, but I recommend against (if you can avoid it) due to performance issues:
get myObject() { return Object.freeze(new Proxy(this._myObject, {})); }
This still returns an object that can't be changed, but keeps all the read-only functionality of it. If you really need, you can drop the Object.freeze and implement the required traps (set, deleteProperty, ...) in the Proxy, but that takes extra effort, and that's why the Object.freeze comes in handy with proxies.
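For completeness, a minimal sketch of the trap-based variant (the readOnlyView name and handler shape are illustrative):
// A read-only view built from Proxy traps instead of Object.freeze.
function readOnlyView(target) {
  return new Proxy(target, {
    set() { throw new TypeError('This view is read-only'); },
    deleteProperty() { throw new TypeError('This view is read-only'); },
    defineProperty() { throw new TypeError('This view is read-only'); }
  });
}

const view = readOnlyView({ a: 1 });
console.log(view.a); // 1 - reads pass through to the target
// view.a = 2;       // TypeError: This view is read-only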
I can think of several places that Object.freeze would come in very handy.
The first real world implementation that could use freeze is when developing an application that requires 'state' on the server to match what's in the browser. For instance, imagine you need to add in a level of permissions to your function calls. If you are working in an application there may be places where a Developer could easily change or overwrite the permission settings without even realizing it (especially if the object were being passed through by reference!). But permissions by and large can never change and error'ing when they are changed is preferred. So in this case, the permissions object could be frozen, thereby limiting developer from mistakenly 'setting' permissions erroneously. The same could be said for user-like data like a login name or email address. These things can be mistakenly or maliciously broken with bad code.
Another typical solution would be in a game loop code. There are many settings of game state that you would want to freeze to retain that the state of the game is kept in sync with the server.
Think of Object.freeze as a way to make an object as a Constant. Anytime you would want to have variable constant, you could have an object constant with freeze for similar reasons.
There are also times where you want to pass immutable objects through functions and data passing, and only allow updating the original object with setters. This can be done by cloning and freezing the object for 'getters' and only updating the original with 'setters'.
Are any of these not valid things? It can also be said that frozen objects could be more performant due to the lack of dynamic variables, but I haven't seen any proof of that yet.
The only practical use for Object.freeze is during development. For production code, there is absolutely no benefit for freezing/sealing objects.
Silly Typos
It could help you catch this very common problem during development:
if (myObject.someProp = 5) {
doSomething();
}
In strict mode, this would throw an error if myObject was frozen.
Enforce Coding Protocol / Restriction
It would also help in enforcing a certain protocol in a team, especially with new members who may not have the same coding style as everyone else.
A lot of Java guys like to add a lot of methods to objects to make JS feel more familiar. Freezing objects would prevent them from doing that.
I could see this being useful when you're working with an interactive tool. Rather than:
if ( ! obj.isFrozen() ) {
obj.x = mouse[0];
obj.y = mouse[1];
}
You could simply do:
obj.x = mouse[0];
obj.y = mouse[1];
Properties will only update if the object isn't frozen.
Don't know if this helps, but I use it to create simple enumerations. It helps me avoid getting duff data into a database, because I know the source of the data can't be changed without someone purposefully trying to break the code. From a statically typed perspective, it allows for reasoning about code construction.
All the other answers pretty much answer the question.
I just wanted to summarise everything here along with an example.
Use Object.freeze when you need utmost surety regarding its state in the future. You need to make sure that other developers or users of your code do not change internal/public properties. Alexander Mills's answer
Object.freeze has had better performance since 19 June 2019, when V8 v7.6 was released. Philippe's answer. Also take a look at the V8 docs.
Here is what Object.freeze does, and it should clear out doubts for people who only have surface level understanding of Object.freeze.
const obj = {
name: "Fanoflix"
};
const mutateObject = (testObj) => {
testObj.name = 'Arthas' // NOT Allowed if parameter is frozen
}
obj.name = "Lich King" // Allowed
obj.age = 29; // Allowed
mutateObject(obj) // Allowed
Object.freeze(obj) // ========== Freezing obj ==========
mutateObject(obj) // passed by reference NOT Allowed
obj.name = "Illidan" // mutation NOT Allowed
obj.age = 25; // addition NOT Allowed
delete obj.name // deletion NOT Allowed

status of the "arguments[n] cannot be set if n is greater than the number of formal or actual parameters" bug?

It's possible to set the individual elements of a function arguments property (what Mozilla calls an 'array-like' property), however Mozilla reports it is not possible to add elements to this property in SpiderMonkey 1.5 though this is fixed in 1.6 (ref. the note on SpiderMonkey here... https://developer.mozilla.org/en/JavaScript/Reference/Functions_and_function_scope/arguments).
This is a useful property, chaining constructors from subclasses, creating a list of arguments to pass to a function (e.g., myclassmethod.apply(this, arguments)), etc.
However I've found V8 will not extend the length in the same way Mozilla reports of SpiderMonkey 1.5. (Not sure what the status is with other JavaScript engines, Opera, Rhino, etc.).
Is this actually an ECMA feature? Has Mozilla got it wrong in considering this a bug, or does V8 have a bug to fix?
[Update] I've found using V8 that the arguments.length property can be assigned to, and so effectively arguments can be extended (or set to whatever length you require otherwise). However JSLint does complain this is a bad assignment.
[Update] Some test code if anyone wants to try this in Opera, FF etc., create an instance of a subclass calling the constructor with one arg while adding an element to arguments in the subclass constructor and calling the superclass constructor, the superclass should report two arguments:
function MyClass() {
if (arguments.length) {
console.log("arguments.length === " + arguments.length);
console.log("arguments[0] === " + arguments[0]);
console.log("arguments[1] === " + arguments[1]);
}
}
function MySubClass() {
console.log(arguments.length);
//arguments.length = 2; // uncomment to test extending `length' property works
arguments[1] = 2;
MyClass.apply(this, arguments);
}
MySubClass.prototype = new MyClass();
new MySubClass(1);
[Update] JSLint actually complains when you make any kind of assignment to arguments by the looks (e.g., arguments[0] = "foo"). So maybe JSLint has some work to do here also.
Not sure if this is what you are looking for, but you can make the arguments object into a standard array:
var args = [].slice.call(arguments, 0);
[EDIT] so you could do:
myclassmethod.apply(this, [].slice.call(arguments, 0).concat([newValue])); // concat rather than push - push returns the new length, not the array
ECMA-262 10.1.8 says that the prototype of the arguments object is the Object prototype and that the arguments object is initialized in a certain way, using the phrase "initial value". So, it would seem like you could assign any of its properties to whatever you want. I'm not sure if that's the correct interpretation though.
OK, I think we can conclude on this one.
Running the above example code on FF, Comodo Dragon, Opera and IE (all current versions, Windoze), they will all accept a change of the arguments.length property to resize the (as Mozilla phrases it, 'array-like') arguments object. None will increase the size by assigning a value as is usual for an array (i.e., arguments[arguments.length] = "new arg").
It seems the consensus of the javascript engines is that as an 'array-like' object arguments behaves like an array, but is otherwise inclined to throw a few surprises ;) Either way the interpretation of the ECMA standard appears to be on the face of it at least consistent across the board, and so the semantics are not really an issue (the whys and wherefores being the only remaining question but of historical interest only).
Neither MSDN nor Mozilla seem to have this documented (not sure where V8 or Opera hide their JS docs) - in a nutshell to conclude some documentation would definitely not go amiss. At which point I will bow out, however if someone could opportunistically drop the hint to the relevant parties at some point it is a useful technique to use once in a while.
Thanks for all who have taken the time to look at this question.
[Update] BishopZ has pointed out above that .apply will take an array of arguments anyway (this had slipped my mind since I last used it many years ago, and it solves my problem). However the 'array-like' arguments can still be used, and might even be preferable in some situations (easier, for example, to pop a value onto, or just to pass on arguments to a superclass). Thanks again for looking at the issue.

Why doesn't my object inherit the keys functions?

If I have an object:
A = {a:true}
Why do I have to use:
Object.keys(A)
and not:
A.keys()
If keys is a method of Object, and everything inherits from Object, shouldn't A be able to call keys?
Object.keys is, so to speak, a "static" method attached strictly to the Object function, not to its instances.
For it to be inherited, it would need to be defined as Object.prototype.keys.
You can certainly add it yourself if you so desire:
Object.prototype.keys = function () {
return Object.keys(this);
};
Just note, as Rocket mentioned in the comments, "own" properties take precedence over prototype properties:
var foo = {};
foo.keys(); // Array of enumerable key names, if any
var bar = { keys: true };
bar.keys(); // TypeError: not a function
The real answer is "because that's how the ES committee members designed it." But specifically, the second comment on your original question (by #RobW) is exactly what they wanted to avoid.
Moreover, I would ask that you remove the "accepted answer" designation from #JonathonLonowski's answer, because he is clearly giving (and avidly but unwisely defending) advice (modifying built-in prototype's) which most of the web community knows to be of poor form. The ONLY time when it's OK to modify a built-in's prototype is when you are absolutely certain that no other code will trip over it... the only time that is true is you write 100% of all code that runs (or ever will run) on a page. The number of times that is true is so low that this advice is considered not-good-practice.
