When should I prefer a clone over a reference in JavaScript? - javascript

At the moment I'm writing a small app and have come to a point where I thought it would be clever to clone an object instead of using a reference.
The reason I'm doing this is that I'm collecting objects in a list, and later I will only work with this list, because it's part of a model. I don't need the references, and I want to avoid having references to outside objects in the list, because I don't want someone to build a construct where the model can be changed from an inconsiderate place in their code. (The integrity of the information in the model is very important.)
Additionally, I thought I would get better performance when I don't use references.
So my overall question still is: when should I prefer a clone over a reference in JavaScript?
Thanks!

If stability is important, then clone it. If testing shows that this is a bottleneck, consider changing it to a reference. I'd be very surprised if it is a bottleneck though, unless you have a very complicated object which is passed back and forth very frequently (and if you're doing that it's probably an indication of a bad design).
Also remember that you can only do so much to save other developers from their own stupidity. If they really want to break your API, they could just replace your functions with their own by copying the source or modifying it at runtime. If you document that the object must not be changed, a good developer (yes, there are some) will follow that rule.
For what it's worth, I've used both approaches in my own projects. For small structs which don't get passed around much, I've made copies for stability, and for larger data (e.g. 3D vertex data which may be passed around every frame), I don't copy.
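To illustrate that trade-off, here is a minimal sketch (assuming a modern runtime where structuredClone is available; addToModel and the model array are made up for the example):
function addToModel(model, item) {
    // Deep-clone small, infrequently added items so outside code cannot
    // mutate the model later through a retained reference.
    model.push(structuredClone(item));
}

var model = [];
addToModel(model, { id: 1, name: "example" });
// For large, hot-path data (e.g. per-frame vertex buffers), push the
// reference instead and document that callers must not modify it.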

Why not just make the objects stored in the list immutable? Instead of storing simple JSON-like objects you would store closures.
Say you have an object with two properties, A and B. It looks like this:
var myObj = {
    "A" : "someValue",
    "B" : "someOtherValue"
};
But then, as you said, anyone could alter the state of this object by simply overriding its properties A or B. Instead of passing such objects in a list to the client, you could pass read-only data created from your actual objects.
First define a function that takes an ordinary object and returns a set of accessors to it:
var readOnlyObj = function(builder) {
    return {
        getA : function() { return builder.A; },
        getB : function() { return builder.B; }
    };
};
Then, instead of your object myObj, give the user readOnlyObj(myObj) so that they can access the properties through the methods getA and getB.
This way you avoid the cost of cloning and provide a clear set of valid actions that a user can perform on your objects.
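Usage might look like this (a quick illustration of the wrapper above):
var myObj = { "A" : "someValue", "B" : "someOtherValue" };
var view = readOnlyObj(myObj);

console.log(view.getA()); // "someValue"
console.log(view.getB()); // "someOtherValue"
// view exposes no way to overwrite A or B directly.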

Related

Object unexpectedly being modified after push into array

I have what seems like it should be a simple operation. For each bridgedSection, I check for a potentialSection with an id that matches bridged.referenceSection.
Then I take that result, parse the HTML on the object with Cheerio, make a slight modification (using an id for testing), and then store both the bridgedSection and the modified result on an object, then push that object to the array.
If I log the new object BEFORE pushing, I get the correct object values. If I log it from the array, I get incorrect values only for referenceSection. bridgedSection is fine, but referenceSection matches across all entries in the array.
To say that I'm thoroughly flummoxed is an understatement. Can anyone shed some light on what I am (clearly) doing wrong?
var sectionCount = 0;
bridgedSections.forEach(bridged => {
    var obj = potentialSections.find(obj => obj._id == bridged.referenceSection);
    $ = cheerio.load(obj.html);
    $(".meditor").html(bridged._id); // dropping the id here so it's easy to see if it was updated
    obj.html = $.html();
    obj.rand = Math.floor(Math.random() * 1000); // can't seem to add to obj either
    var thisSection = {
        referenceSection: obj,
        bridgedSection: bridged,
    };
    console.log(thisSection); // correct value logged
    currentSections.push(thisSection);
    sectionCount++;
});
console.log(currentSections);
// this logs an array of the correct length but each
// {}.referenceSection is identical to the last entry pushed above
To try to clarify what both of the above folks are saying, the JavaScript language (like many others) has the concept of references, and makes very heavy use of that concept.
When one variable "refers to" another, there is only one copy of the value in question: everything else is a reference to that one value. Changes made through any of those references will therefore change the one underlying value (and be reflected instantaneously in all of the references).
The advantage of references is, of course, that they are extremely "lightweight."
If you need to make a so-called "deep copy" of an array or structure or what-have-you, you can do so. If you want to push the value and be sure that it cannot be changed, you need to make sure that what you've pushed is either such a "deep copy," or that there are no references (as there obviously are, now ...) to whatever it contains. Your choice.
N.B. References – especially circular references – also have important implications for memory management (and "leaks"), because a thing will not be "reaped" by the memory manager until all references to it have ceased to exist. (Everything is "reference counted.")
And, all of what I've just said pretty much applies equally to every language that supports this – as most languages now do.
JavaScript passes objects to functions by reference (strictly speaking, the reference itself is passed by value). This means the following happens:
var derp = { a: 1 };

function passedByRef(param) {
    param['a'] = 2;
}

passedByRef(derp);
console.log(derp['a']); // 2
So when you pass an object to a function and modify that object inside the function, you change the original object. You probably want to make a deep copy of bridged before you assign it to thisSection, because if you later modify the version of bridged stored in thisSection, you will modify the original object.
Here is a post that talks about cloning objects, or you could look into something like Immutable.js.
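For example, one way to apply that suggestion inside the loop above is a sketch like this (assuming obj and bridged are plain, JSON-serializable data; structuredClone would also work):
var thisSection = {
    referenceSection: JSON.parse(JSON.stringify(obj)),   // snapshot, no shared reference
    bridgedSection: JSON.parse(JSON.stringify(bridged)),
};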
I think you need to look into JavaScript deep copying.
You are modifying the original object when you modify the second assigned variable, because both are pointing to the same object. What you really need is to duplicate the object, not simply make another pointer to it.
Take a look at this:
https://scotch.io/bar-talk/copying-objects-in-javascript#toc-deep-copying-objects

Is partial immutability a good practice?

Forgive the extremely simplistic example, but I was wondering what the difference is between the following two approaches when you need to use an object's value without modifying its deeply nested properties, using Immutable.js or any other similar library:
//case1
var imm_a = {'names':['Joe', 'Jack'], 'owns':{'car': ['Toyota','Ferrari']}};
var a = Immutable.fromJS(imm_a);
a = a.setIn(['owns','car', 0], 'Ford');
//...a bunch of other modifications using immutable.js methods
console.log(a.toJS()); // using the changed value
console.log(imm_a); // imm_a has not changed
//case2
var imm_b = {'names':['Joe', 'Jack'], 'owns':{'car': ['Toyota','Ferrari']}};
var b = Immutable.fromJS(imm_b);
b = b.toJS();
b.owns.car[0] = 'Ford';
//...a bunch of other modifications using native javascript methods
console.log(b); // using the changed value
console.log(imm_b); // imm_b has not changed
In both cases above we have an object (imm_a and imm_b) with deeply nested props that we don't want to mutate, but we need to use their modified values. In case 1 we create an Immutable object and modify it directly using Immutable.js methods; in case 2, after creating an Immutable object, we create a native JavaScript value from it so that we can work with it more easily using direct assignments and other native JavaScript methods. In both approaches we reach our goal, which was to keep imm_a and imm_b unmutated, but I find the second approach much easier because of the simplicity of the modifications. But is there any difference? Which one is recommended as better practice?
PS: In neither of those cases do I care about the immutability of a or b; I only need imm_a and imm_b to remain unchanged. A common use case would be a Redux app where the new state depends on the old state and you need to modify the old state value before returning it as the new state. a and b are local temporary variables here. The question is: is it OK to directly modify them as long as I don't rely on them having persistent values? Or if I use immutable practices, do I need to treat everything as immutable?
The second example entirely defeats the purpose of using ImmutableJS, because you're mutating the contents of the b Javascript object. One of the main points of ImmutableJS is that if you modify an object, you get a completely new object back.
In this simple example it won't have any effects, but the benefit of writing immutable code is that anyone else who has a reference to a knows it will never change and it will always have the same value after they get it. In your second example, since you mutate b in place, any other code that has a reference to b will have the "rug pulled out from under them" so to speak, since your local updates will also happen to their reference to the object.
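A minimal plain-JavaScript sketch of that "rug pulled out" problem (the names here are made up for illustration; no ImmutableJS involved):
var shared = { theme: 'dark' };

function rememberSettings(s) {
    // Some other module keeps a reference, expecting it never to change.
    return function () { return s.theme; };
}

var readTheme = rememberSettings(shared);
shared.theme = 'light';   // a local "update" mutates the shared object in place
console.log(readTheme()); // 'light' -- the other module's view changed too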
If you don't use ImmutableJS to ensure actual immutability, you are better off not using it for now. It adds non-trivial overhead work to a program.

Why does JavaScript give access to override the existing properties of a built-in object?

Generally, JavaScript allows you to override (extend with new behavior) any function, except on objects that are frozen or sealed. In JavaScript, Math is a built-in object. But why does JavaScript give access to override the existing properties of a built-in object?
Please see the screenshot: initially I find that the min function is available on the Math object. I then updated the "min" property with my own function, and this action replaced the existing code.
For more clarity, I deleted the "min" property. I expected deletion to remove the extended behavior, not the core one. But it removes the core property. Why?
Extending or modifying native code is called monkey-patching, and it's a design feature rather than a design flaw. Virtually everything is mutable and extensible in JavaScript, so you have the power to change fundamentals to suit your own needs (e.g. you could overload the min method so that it works with variable types other than just integers and floats). With that power comes responsibility, so it's generally not advised to change these standard functions unless you know what you're doing. Likewise, you have to be aware that if your JS file will be running in someone else's environment, you may not be able to rely on everything you think you can (however, you should generally be able to expect the usual global methods and properties, which is why you may call the global Object.keys or Array.prototype.slice rather than expect the method to be on the prototype of any one particular object).
In short, when you delete a function that you've modified, you will be deleting it entirely, not reverting it back to some sort of original state. You basically overwrote the original, and so there's no way of getting it back (except by deleting the code that overwrites it!).
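If you do need to patch a built-in and still be able to get it back, the usual sketch is to keep a reference to the original before overwriting it (illustrative only):
var originalMin = Math.min;

Math.min = function () {
    console.log('min called with', arguments.length, 'arguments');
    return originalMin.apply(Math, arguments);
};

Math.min(3, 1, 2);      // logs, then returns 1
Math.min = originalMin; // restore the built-in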
Thanks to everyone for responding to my question; I got valuable information from everyone. I did some analysis on this myself and went through the ECMA-262 specification, where I found the configuration of properties like 'E' in Math.
According to specification document http://www.ecma-international.org/ecma-262/5.1/#sec-15.8.1
15.8.1.1 E
The Number value for e, the base of the natural logarithms, which is approximately 2.7182818284590452354.
This property has the attributes { [[Writable]]: false, [[Enumerable]]: false, [[Configurable]]: false }.
Then I learned that some properties of Math can't be deleted because their 'Configurable' attribute is false.
When I executed the following code, the returned descriptor shows that the 'min' property has 'configurable: true'.
Object.getOwnPropertyDescriptor(Math, "min");
Object {value: function, writable: true, enumerable: false, configurable: true}
I agree with user162097: as he said, "it's a design feature rather than a design flaw."
Thanks
No, it is working correctly. The delete operator deletes the property from your object. It doesn't know whether the property held the original function or an override, which is why overriding the behavior of built-in functions is not best practice.
Instead of changing the behavior of a native function, you can create your own on that object's prototype. For instance, let's create a remove function for the Array object:
Array.prototype.remove = function(member) {
    var index = this.indexOf(member);
    if (index > -1) {
        this.splice(index, 1);
    }
    return this;
};
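Used like this (a quick illustration):
var list = ['a', 'b', 'c'];
list.remove('b');
console.log(list); // ['a', 'c']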
Nevertheless, you can inherit from native objects; you can read more about this in this article on inheriting from native objects.
One characteristic of scripting languages is fast prototyping. You could attribute this ability to the non-strict declaration aspect of JavaScript.

Why would I need to freeze an object in JavaScript?

It is not clear to me when anyone would need to use Object.freeze in JavaScript. MDN and MSDN don't give real life examples when it is useful.
I get it that an attempt to change such an object at runtime means a crash. The question is rather, when would I appreciate this crash?
To me, immutability is a design-time constraint which is supposed to be guaranteed by the type checker.
So is there any point in having a runtime crash in a dynamically typed language, besides detecting a violation better later than never?
The Object.freeze function does the following:
Makes the object non-extensible, so that new properties cannot be added to it.
Sets the configurable attribute to false for all properties of the object. When configurable is false, the property attributes cannot be changed and the property cannot be deleted.
Sets the writable attribute to false for all data properties of the object. When writable is false, the data property value cannot be changed.
That's the what part, but why would anyone do this?
Well, in the object-oriented paradigm, there is the notion that an existing API contains certain elements that are not intended to be extended, modified, or re-used outside of their current context. The final keyword in various languages is the most suitable analogy for this. Even in languages that are not compiled, and are therefore easy to modify, it still exists, e.g. in PHP, and in this case, JavaScript.
You can use this when you have an object representing a logically immutable data structure, especially if:
Changing the properties of the object or altering its "duck type" could lead to bad behavior elsewhere in your application
The object is similar to a mutable type or otherwise looks mutable, and you want programmers to be warned on attempting to change it rather than obtain undefined behavior.
As an API author, this may be exactly the behavior you want. For example, you may have an internally cached structure that represents a canonical server response that you provide to the user of your API by reference but still use internally for a variety of purposes. Your users can reference this structure, but altering it may result in your API having undefined behavior. In this case, you want an exception to be thrown if your users attempt to modify it.
In my nodejs server environment, I use freeze for the same reason I use 'use strict'. If I have an object that I do not want being extended or modified, I will freeze it. If something attempts to extend or modify my frozen object, I WANT my app to throw an error.
To me this relates to consistent, quality, more secure code.
Also,
Chrome is showing significant performance increases working with frozen objects.
Edit:
In my most recent project, I'm exchanging encrypted data with a government entity. There are a lot of configuration values. I'm using frozen objects for these values, because modification of these values could have serious, adverse side effects. Additionally, as I linked previously, Chrome is showing performance advantages with frozen objects, and I assume Node.js does as well.
For simplicity, an example would be:
var US_COIN_VALUE = Object.freeze({
    QUARTER: 25,
    DIME: 10,
    NICKEL: 5,
    PENNY: 1
});
There is no reason to modify the values in this example, and you enjoy the benefits of the speed optimizations.
Object.freeze() is mainly used in functional programming (immutability).
Immutability is a central concept of functional programming because without it, the data flow in your program is lossy. State history is abandoned, and strange bugs can creep into your software.
In JavaScript, it’s important not to confuse const with immutability. const creates a variable name binding which can’t be reassigned after creation. const does not create immutable objects. You can’t change the object that the binding refers to, but you can still change the properties of the object, which means that bindings created with const are mutable, not immutable.
Immutable objects can’t be changed at all. You can make a value truly immutable by deep freezing the object. JavaScript has a method that freezes an object one-level deep.
const a = Object.freeze({
    foo: 'Hello',
    bar: 'world',
    baz: '!'
});
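Since Object.freeze only freezes one level, a small recursive helper is commonly used to deep-freeze a whole tree (a sketch, not a built-in method):
function deepFreeze(obj) {
    Object.getOwnPropertyNames(obj).forEach(function (name) {
        var value = obj[name];
        if (value && typeof value === 'object') {
            deepFreeze(value); // freeze nested objects first
        }
    });
    return Object.freeze(obj);
}

const b = deepFreeze({ foo: 'Hello', nested: { bar: 'world' } });
// b.nested.bar = '!' now fails as well (throws in strict mode).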
When you're writing a library/framework in JS and you don't want some developer to break your dynamic language creation by re-assigning "internal" or public properties.
This is the most obvious use case for immutability.
With the V8 release v7.6 the performance of frozen/sealed arrays is greatly improved. Therefore, one reason you would like to freeze an object is when your code is performance-critical.
What is a practical situation when you might want to freeze an object?
One example, on application startup you create an object containing app settings. You may pass that configuration object around to various modules of the application. But once that settings object is created you want to know that it won't be changed.
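A minimal sketch of that settings pattern (the property names are made up for illustration):
const settings = Object.freeze({
    apiUrl: 'https://example.com/api',
    retries: 3
});

// Modules can read settings.apiUrl, but any attempt to reassign or add
// a property throws in strict mode, so the configuration stays trustworthy.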
This is an old question, but I think I have a good case where freeze might help. I had this problem today.
The problem
class Node {
    constructor() {
        this._children = [];
        this._parent = undefined;
    }

    get children() { return this._children; }
    get parent() { return this._parent; }

    set parent(newParent) {
        // 1. if _parent is not undefined, remove this node from _parent's children
        // 2. set _parent to newParent
        // 3. if newParent is not undefined, add this node to newParent's children
    }

    addChild(node) { node.parent = this; }
    removeChild(node) { node.parent === this && (node.parent = undefined); }
    // ...
}
As you can see, when you change the parent, it automatically handles the connection between these nodes, keeping children and parent in sync. However, there is one problem here:
let newNode = new Node();
myNode.children.push(newNode);
Now, myNode has newNode in its children, but newNode does not have myNode as its parent. So you've just broken it.
(OFF-TOPIC) Why are you exposing the children anyway?
Yes, I could just create lots of methods: countChildren(), getChild(index), getChildrenIterator() (which returns a generator), findChildIndex(node), and so on... but is it really a better approach than just returning an array, which provides an interface all javascript programmers already know?
You can access its length to see how many children it has;
You can access the children by their index (i.e. children[i]);
You can iterate over it using for .. of;
And you can use some other nice methods provided by an Array.
Note: returning a copy of the array is out of the question! It costs linear time, and any updates to the original array do not propagate to the copy!
The solution
get children() { return Object.freeze(Object.create(this._children)); }

// OR, if you deeply care about performance:
get children() {
    return this._PUBLIC_children === undefined
        ? (this._PUBLIC_children = Object.freeze(Object.create(this._children)))
        : this._PUBLIC_children;
}
Done!
Object.create: we create an object that inherits from this._children (i.e. has this._children as its __proto__). This alone solves almost the entire problem:
It's simple and fast (constant time)
You can use anything provided by the Array interface
If you modify the returned object, it does not change the original!
Object.freeze: however, the fact that you can modify the returned object BUT the changes do not affect the original array is extremely confusing for the user of the class! So, we just freeze it. If he tries to modify it, an exception is thrown (assuming strict mode) and he knows he can't (and why). It's sad no exception is thrown for myFrozenObject[x] = y if you are not in strict mode, but myFrozenObject is not modified anyway, so it's still not-so-weird.
Of course the programmer could bypass it by accessing __proto__, e.g:
someNode.children.__proto__.push(new Node());
But I like to think that in this case they actually know what they are doing and have a good reason to do so.
IMPORTANT: notice that this doesn't work so well for objects: using hasOwnProperty in the for .. in will always return false.
UPDATE: using Proxy to solve the same problem for objects
Just for completeness: if you have an object instead of an Array, you can still solve this problem by using a Proxy. Actually, this is a generic solution that should work with any kind of element, but I recommend against it (if you can avoid it) due to performance issues:
get myObject() { return Object.freeze(new Proxy(this._myObject, {})); }
This still returns an object that can't be changed, but keeps all the read-only functionality of it. If you really need, you can drop the Object.freeze and implement the required traps (set, deleteProperty, ...) in the Proxy, but that takes extra effort, and that's why the Object.freeze comes in handy with proxies.
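If you do drop the Object.freeze, the traps might look something like this (a sketch, not a drop-in implementation):
get myObject() {
    return new Proxy(this._myObject, {
        set() { throw new TypeError('This view is read-only'); },
        deleteProperty() { throw new TypeError('This view is read-only'); },
        defineProperty() { throw new TypeError('This view is read-only'); }
    });
}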
I can think of several places that Object.freeze would come in very handy.
The first real-world use is when developing an application that requires 'state' on the server to match what's in the browser. For instance, imagine you need to add a level of permissions to your function calls. In an application there may be places where a developer could easily change or overwrite the permission settings without even realizing it (especially if the object is being passed around by reference!). But permissions by and large never change, and raising an error when they are changed is preferred. So in this case the permissions object could be frozen, thereby preventing a developer from mistakenly 'setting' permissions erroneously. The same could be said for user data like a login name or email address. These things can be mistakenly or maliciously broken by bad code.
Another typical case is game loop code. There are many pieces of game state that you would want to freeze to ensure that the state of the game is kept in sync with the server.
Think of Object.freeze as a way to make an object a constant. Any time you would want a constant variable, you could have a constant object with freeze, for similar reasons.
There are also times when you want to pass immutable objects through functions and data flow, and only allow updating the original object with setters. This can be done by cloning and freezing the object for 'getters' and only updating the original with 'setters', as sketched below.
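A rough sketch of that getter/setter idea (the names are made up):
function Store(initial) {
    var state = Object.assign({}, initial);
    return {
        get: function () { return Object.freeze(Object.assign({}, state)); },
        set: function (key, value) { state[key] = value; }
    };
}

var store = Store({ count: 0 });
var snapshot = store.get();
// snapshot.count = 5; // throws in strict mode; use store.set('count', 5) instead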
Are any of these not valid things? It can also be said that frozen objects could be more performant due to the lack of dynamic variables, but I haven't seen any proof of that yet.
The only practical use for Object.freeze is during development. For production code, there is absolutely no benefit to freezing/sealing objects.
Silly Typos
It could help you catch this very common problem during development:
if (myObject.someProp = 5) {
    doSomething();
}
In strict mode, this would throw an error if myObject was frozen.
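A minimal illustration (doSomething is just a placeholder, as in the snippet above):
'use strict';
const myObject = Object.freeze({ someProp: 4 });

if (myObject.someProp = 5) { // TypeError: cannot assign to read-only property 'someProp'
    doSomething();
}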
Enforce Coding Protocol / Restriction
It would also help in enforcing a certain protocol in a team, especially with new members who may not have the same coding style as everyone else.
A lot of Java guys like to add a lot of methods to objects to make JS feel more familiar. Freezing objects would prevent them from doing that.
I could see this being useful when you're working with an interactive tool. Rather than:
if (!Object.isFrozen(obj)) {
    obj.x = mouse[0];
    obj.y = mouse[1];
}
You could simply do:
obj.x = mouse[0];
obj.y = mouse[1];
Properties will only update if the object isn't frozen.
Don't know if this helps, but I use it to create simple enumerations. It hopefully keeps duff data out of a database, because the source of the data has been made unchangeable unless someone purposefully tries to break the code. From a statically typed perspective, it allows for reasoning about code construction.
All the other answers pretty much answer the question.
I just wanted to summarise everything here along with an example.
Use Object.freeze when you need utmost surety regarding an object's state in the future. You need to make sure that other developers or users of your code do not change internal/public properties. Alexander Mills's answer
Object.freeze has better performance since 19 June 2019, when V8 v7.6 was released. Philippe's answer. Also take a look at the V8 docs.
Here is what Object.freeze does, and it should clear out doubts for people who only have surface level understanding of Object.freeze.
const obj = {
    name: "Fanoflix"
};

const mutateObject = (testObj) => {
    testObj.name = 'Arthas'; // NOT allowed if the object passed in is frozen
};

obj.name = "Lich King"; // allowed
obj.age = 29;           // allowed
mutateObject(obj);      // allowed

Object.freeze(obj); // ========== freezing obj ==========

mutateObject(obj);    // NOT allowed, even though obj is passed by reference
obj.name = "Illidan"; // mutation NOT allowed
obj.age = 25;         // addition NOT allowed
delete obj.name;      // deletion NOT allowed

What is the point of Ext.apply versus simply setting values on the target?

Using Ext JS or Sencha Touch, what is the point of doing the following:
Ext.apply(app.views, {
    contactsList: new app.views.ContactsList(),
    contactDetail: new app.views.ContactDetail(),
    contactForm: new app.views.ContactForm()
});
As opposed to this standard javascript:
app.views.contactsList = new app.views.ContactsList();
app.views.contactDetail = new app.views.ContactDetail();
app.views.contactForm = new app.views.ContactForm();
Is there any difference?
Ext.apply is generally more convenient (and possibly more efficient if fewer activation chain lookups are required, as in your example, though that's a minor point). There is also a variant, Ext.applyIf, that only applies members from the source object that do not already exist in the target object, which is even more useful as it saves you from a boatload of manual if() checks. That's really useful, for example, when applying defaults to a config object that may already have user- or app-defined properties assigned.
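For example, the defaults pattern might look roughly like this (the config property names here are made up for illustration):
function createPanel(config) {
    config = Ext.applyIf(config || {}, {
        width: 300,
        collapsible: false
    });
    // Only properties the caller did not supply are filled in,
    // with no manual if() checks.
    return new Ext.Panel(config);
}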
A note to future readers who look at the accepted answer: Ext also has Ext.extend which actually means "inherit from" a class, as opposed to Ext.apply[If] which merely merges an object instance into another, or Ext.override which overrides (without subclassing) a class definition. Lots of options, depending on what you need.
It's mostly there as a convenience method for code that is accepting an object as an argument, and needs to merge it. Merging objects is a common use case in JavaScript, and this type of helper is implemented by most frameworks. ($.extend in jQuery, Object.extend in Prototype, Object.append in MooTools, etc.)
In your case, there is little difference, other than offering a bit more readable code.
You need Ext.apply if you use Ajax requests to get the configuration from the server, because the Ajax responses are received later, after the window is rendered; in that case the second version of your code won't work.
