JavaScript Object Identities

Objects in JavaScript have unique identities. Every object you create via an expression such as a constructor or a literal is distinct from every other object.
What is the reason behind this?
{} === {} // output: false
Why are they treated as different? What makes them different from each other?

{} creates a new object.
When you compare two separate new objects (references), they will never be equal.
Laying it out:
var a = {}; // New object, new reference in memory, stored in `a`
var b = {}; // New object, new reference in memory, stored in `b`
a === b; // Compares (different) references in memory
If it helps, {} is a "shortcut" for new Object(), so more explicitly:
var a = new Object();
var b = new Object();
a === b; // Still false
Maybe the explicitness of new makes it clearer that the comparison is between two different objects.
On the other hand, references can be equal if they point to the same object. For example:
var a = {};
var b = a;
a === b; // TRUE

They are different instances of objects and can be modified independently. Even if they (currently) look alike, they are not the same. Comparing them by their (property) values can be useful sometimes, but in stateful programming languages object equality usually means identity.
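If a value-based comparison is what you want, you have to write it yourself (or use a library helper such as Lodash's _.isEqual). Here is a minimal sketch of a shallow, property-by-property check; the shallowEqual name is just for illustration:
// Identity comparison: false, because these are two distinct objects
var x = { a: 1 };
var y = { a: 1 };
console.log(x === y); // false

// A value-based (shallow) comparison has to be written explicitly
function shallowEqual(p, q) {
  var pKeys = Object.keys(p);
  var qKeys = Object.keys(q);
  if (pKeys.length !== qKeys.length) return false;
  return pKeys.every(function (key) { return p[key] === q[key]; });
}

console.log(shallowEqual(x, y)); // true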

The fact that they're different is important in this scenario:
var a = {};
var b = {};
a.some_prop = 3;
At this point you'll obviously know that b.some_prop will be undefined.
The == and === operators thus let you check whether two variables refer to the same object, so you can be sure you are not changing properties on an object you don't want changed.

This question is quite old, but I think the actual solution does not pop out clearly enough in the given answers, so far.
Why are they treated as different? What makes them different from each other?
I understand your pain; many sources on the internet do not state the key fact plainly:
Variables holding objects (the complex JS types: objects, arrays, and functions) store only references (the address of the instance in memory) as their value. Object identity is determined by reference identity.
You expected something like an ID or reference inside the object, which you could use to tell them apart (maybe that's actually done transparently, under the hood). But every time you instantiate an object, a new instance is created in memory and only the reference to it is stored in the variable.
So, when the description of the === operator says that it compares the values, it actually means it compares the references (not the properties and their values), which are only equal if they point to the exact same object.
This article explains it in detail: https://codeburst.io/explaining-value-vs-reference-in-javascript-647a975e12a0

Both of the objects are created as separate entities in memory. To be precise, both of them are created as separate entities on the heap (JavaScript engines use heap and stack memory models for managing running scripts). So the two objects may look the same (structure, properties, etc.), but under the hood they have two separate addresses in memory.
Here is some intuition for you. Imagine a new neighborhood where all the houses look the same. You decide to build another two identical buildings, and after finishing the construction both buildings look the same and even sit next to each other, but they are still not the same building. They have two separate addresses.

I think the simplest answer is "they are stored in different locations in memory". This is not always obvious in languages that hide pointers (if you know C, C++ or assembly language, you know what pointers are; if not, learning a low-level language is a useful exercise), but since such languages effectively make every object a pointer, each "object" variable is actually a pointer to a location in memory where the object exists. In some cases, two variables will point to the same location in memory. In others, they will point to different locations that happen to have similar or identical content. It's like having two different URLs, each of which points to an identical page. The web pages are equal to each other, but the URLs are not.
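To make the analogy concrete, a small sketch; comparing the JSON.stringify output only works as a content check here because both objects are plain data with keys in the same order:
// Two "URLs" (references) to two identical but separate "pages" (objects)
var page1 = { title: "Home", body: "Welcome" };
var page2 = { title: "Home", body: "Welcome" };

console.log(page1 === page2); // false: different locations in memory
console.log(JSON.stringify(page1) === JSON.stringify(page2)); // true: identical content

// Two "URLs" to the *same* page
var alias = page1;
console.log(alias === page1); // true: same location in memory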

Related

Object unexpectedly being modified after push into array

I have what seems like it should be a simple operation. For each bridgedSection, I check for a potentialSection with an id that matches the bridged.referenceSection
Then I take that result, parse the HTML on the object with Cheerio, make a slight modification (using an id for testing), and then store both the bridgedSection and the modified result on an object, then push that object to the array.
If I log the new object BEFORE pushing, I get the correct object values. If I log it from the array, I get incorrect values only for referenceSection. bridgedSection is fine, but referenceSection matches across all entries in the array.
To say that I'm thoroughly flummoxed is an understatement. Can anyone shed some light on what I am (clearly) doing wrong?
var sectionCount = 0;
bridgedSections.forEach(bridged => {
  var obj = potentialSections.find(obj => obj._id == bridged.referenceSection);
  $ = cheerio.load(obj.html);
  $(".meditor").html(bridged._id); // dropping the id here so it's easy to see if it was updated
  obj.html = $.html();
  obj.rand = Math.floor(Math.random() * 1000); // can't seem to add to obj either
  var thisSection = {
    referenceSection: obj,
    bridgedSection: bridged,
  };
  console.log(thisSection); // correct value logged
  currentSections.push(thisSection);
  sectionCount++;
});
console.log(currentSections);
// this logs an array of the correct length but each
// {}.referenceSection is identical to the last entry pushed above
To try to clarify what both of the above folks are saying, the JavaScript language (like many others) has the concept of references, and makes very heavy use of that concept.
When one variable "refers to" another, there is only one copy of the value in question: everything else is a reference to that one value. Changes made to any of those references will therefore change the [one ...] underlying value (and, be reflected instantaneously in all of the references).
The advantage of references is, of course, that they are extremely "lightweight."
If you need to make a so-called "deep copy" of an array or structure or what-have-you, you can do so. If you want to push the value and be sure that it cannot be changed, you need to make sure that what you've pushed is either such a "deep copy," or that there are no references (as there obviously are, now ...) to whatever it contains. Your choice.
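As a rough sketch of the difference (structuredClone is available in modern browsers and recent Node versions; a JSON round-trip is a common fallback for plain data):
var item = { value: 1 };
var list = [];

list.push(item); // pushes a reference, not a copy
item.value = 2;
console.log(list[0].value); // 2: the array entry changed too

// Pushing a deep copy keeps the array entry independent
var copy = typeof structuredClone === "function"
  ? structuredClone(item)
  : JSON.parse(JSON.stringify(item));
list.push(copy);
item.value = 3;
console.log(list[1].value); // still 2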
N.B. References – especially circular references – also have important implications for memory management (and "leaks"), because a thing will not be reclaimed by the memory manager while it is still reachable through some reference. (Modern JavaScript engines use tracing garbage collection rather than simple reference counting, but the practical effect is similar.)
And, all of what I've just said pretty much applies equally to every language that supports this – as most languages now do.
JavaScript passes objects to functions by reference (strictly speaking, the reference itself is passed by value, but the object it points to is shared). This means the following happens:
var derp = { a: 1 };
function passedByRef(param) {
  param['a'] = 2;
}
passedByRef(derp);
console.log(derp['a']); // 2
So when you pass an object to a function and modify that object inside the function, you change the original object. You probably want to make a deep copy of bridged before you assign it to thisSection, because if you later modify the version of bridged stored in thisSection, you will also modify the original object.
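One possible way to apply that in the loop from the question (just a sketch, assuming obj and bridged hold only plain data such as strings and ids, so a JSON round-trip or structuredClone is safe; it also declares $ locally so it does not become an implicit global):
bridgedSections.forEach(bridged => {
  var found = potentialSections.find(o => o._id == bridged.referenceSection);

  // Work on a copy so the object inside potentialSections is never mutated
  var obj = JSON.parse(JSON.stringify(found)); // or structuredClone(found)

  var $ = cheerio.load(obj.html);
  $(".meditor").html(bridged._id);
  obj.html = $.html();

  currentSections.push({
    referenceSection: obj,
    bridgedSection: bridged,
  });
});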
Here is a post that talks about cloning objects, or you could look into something like Immutable.js.
I think you need to look into Javascript deep copy.
You are modifying the original object when you modify the second assigned variable, because both variables point to the same object. What you really need is to duplicate the object, not simply keep another pointer to it.
Take a look at this:
https://scotch.io/bar-talk/copying-objects-in-javascript#toc-deep-copying-objects

Storage and 'weight' of Arrays and Object variables

So...
var testArray=new Array("hello");
testArray.length=100;
console.log(testArray.length);
I believe the above means I have created an array that has 100 elements. The first element contains the word hello, the others are null, but their position reserved. While small, I suspect reserving these "slots" uses memory.
What about
var myObj={ testArray: new Array(), testVar: "elvis", anotherArray: new Array() };
myObj.testArray.length=1000;
What is the impact or weight of this setup within JavaScript? Does the engine reserve three containers, each of similar size, for testArray, testVar and anotherArray since they fall under myObj?
I tend to create a single global object (called DATA), and then I create variables under that. These variables contain temporary session data within an intranet app which is used several hours a day. Some of the variables are arrays, some are strings. I do it this way because with a single command DATA=null I can empty everything which would not be the case if I were to have several global variables.
I'm just wondering if my thinking is poor or acceptable/understandable. The answer will also help me to better understand how javascript stores data.
Comments welcome...
I believe the above means I have created an array that has 100
elements. The first element contains the word hello, the others are
null, but their position reserved. While small, I suspect reserving
these "slots" uses memory.
You created an array with 1 element, then changed the length of that array to 100. However, that does not mean you now have 99 null elements; you still have only 1 defined element. In other words, the length property of an array does not necessarily tell you the number of defined elements. The process of reserving this length does take some memory, but it is negligible for a small number of elements. Still, I do not recommend assigning to the .length property; you are introducing the potential for unexpected behavior in your code, as well as mismanaging resources. Instead, you should let the array grow and shrink as needed through its methods: .push(), .pop(), and .splice(). By doing this, you minimize the space required by the array and improve performance.
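A small sketch of the difference (the exact values are just for illustration):
var arr = ["hello"];
arr.length = 100;
console.log(arr.length); // 100
console.log(1 in arr);   // false: index 1 is an empty slot, not a defined element

// Growing and shrinking through the array methods instead
arr = ["hello"];
arr.push("world");  // grow by one element
arr.pop();          // shrink by one element
arr.splice(0, 1);   // remove one element starting at index 0
console.log(arr.length); // 0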
What is the impact or weight of this setup within JavaScript. Does the engine reserve three containers, each of similar size for testArray, testVar and anotherArray since they fall under myObj?
There are 3 containers created for the object:
1) The DATA object gets a container
2) testArray gets a container, because you used the constructor approach (not best practice)
3) anotherArray gets a container, because you used the constructor approach (not best practice)
In your example, DATA is the container for all of the name:value pairs that exist within this container. The "weight" is exactly the same as your first approach (except you are allocating 1000 slots, instead of 100).
I highly suggest that you do not have statements that assign a value to the length of the array. The array should only use as much space as is needed, no more and no less.
You should also create arrays with the literal approach: var array = []; NOT with the new keyword, constructor approach, as you are doing. By using the new keyword, you raise the potential to have bugs in your code. Also, JavaScript variables declared using the new keyword are always created as objects. See the below example.
var data = new Array(2, 10); // Creates an array with two elements (2 and 10)
var data = new Array(2); // Creates an array with length 2 and no defined elements (two empty slots)
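For comparison, the literal form has no such ambiguity:
var data = [2, 10]; // An array with two elements (2 and 10)
var data = [2];     // An array with one element (2)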
Avoid polluting the global namespace!!!! Using an object is a good approach in your situation, but it's very important to keep the global namespace clean and free of clutter as much as possible.
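A minimal sketch of that single-global-object approach (the property names are made up for illustration):
// One global "namespace" object instead of many separate globals
var DATA = {
  sessionStart: Date.now(),
  cache: [],
  userName: ""
};

DATA.cache.push("some temporary session value");

// Tear everything down with a single assignment, as described in the question
DATA = null;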
Further supporting resources:
MDN Arrays
MDN array.length
MDN Objects
Writing Fast, Memory-Efficient JavaScript

Which way is best to define a JavaScript array? [duplicate]

I want to create an array in JavaScript and remember two ways of doing it, so I just want to know what the fundamental differences are and whether there is a performance difference between these two "styles".
var array_1 = new Array("fee","fie","foo","fum");
var array_2 = ['a','b','c'];
for (let i = 0; i < array_1.length; i++) {
  console.log(array_1[i]);
}
for (let i = 0; i < array_2.length; i++) {
  console.log(array_2[i]);
}
They do the same thing. Advantages to the [] notation are:
It's shorter.
If someone does something silly like redefine the Array symbol, it still works.
There's no ambiguity when you only define a single entry, whereas when you write new Array(3), if you're used to seeing entries listed in the constructor, you could easily misread that to mean [3], when in fact it creates a new array with a length of 3 and no entries.
It may be a tiny little bit faster (depending on JavaScript implementation), because when you say new Array, the interpreter has to go look up the Array symbol, which means traversing all entries in the scope chain until it gets to the global object and finds it, whereas with [] it doesn't need to do that. The odds of that having any tangible real-world impact in normal use cases are low. Still, though...
So there are several good reasons to use [].
Advantages to new Array:
You can set the initial length of the array, e.g., var a = new Array(3);
I haven't had any reason to do that in several years (not since learning that arrays aren't really arrays and there's no point trying to pre-allocate them). And if you really want to, you can always do this:
var a = [];
a.length = 3;
There's no difference in your usage.
The only real usage difference is passing an integer parameter to new Array() which will set an initial array length (which you can't do with the [] array-literal notation). But they create identical objects either way in your use case.
This benchmark on JSPerf shows the array literal form to be generally faster than the constructor on some browsers (and not slower on any).
This behavior is, of course, totally implementation dependent, so you'll need to run your own test on your own target platforms.
I believe the performance benefits are negligible.
See http://jsperf.com/new-array-vs-literal-array/4
I think both ways are the same in terms of performance, since they both create an "Array object" eventually, so once you start accessing the array the mechanism will be the same. I'm not too sure how different the construction mechanisms are (in terms of performance), though there shouldn't be any noticeable gain from using one over the other.

When should I prefer a clone over a reference in JavaScript?

At the moment I'm writing a small app and have come to the point where I thought it would be better to clone an object instead of using a reference.
The reason I'm doing this is that I'm collecting objects in a list. Later I will only work with this list, because it's part of a model. The reference isn't something I need, and I want to avoid having references to outside objects in the list, because I don't want someone to build a construct where the model can be changed from an inconsiderate place in their code. (The integrity of the information in the model is very important.)
Additionally, I thought I would get better performance out of it when I don't use references.
So my overall question still is: when should I prefer a clone over a reference in JavaScript?
Thanks!
If stability is important, then clone it. If testing shows that this is a bottleneck, consider changing it to a reference. I'd be very surprised if it is a bottleneck though, unless you have a very complicated object which is passed back and forth very frequently (and if you're doing that it's probably an indication of a bad design).
Also remember that you can only do so much to save other developers from their own stupidity. If they really want to break your API, they could just replace your functions with their own by copying the source or modifying it at runtime. If you document that the object must not be changed, a good developer (yes, there are some) will follow that rule.
For what it's worth, I've used both approaches in my own projects. For small structs which don't get passed around much, I've made copies for stability, and for larger data (e.g. 3D vertex data which may be passed around every frame), I don't copy.
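As a rough sketch of that rule of thumb (the names and sizes are invented for illustration):
// Small, flat settings object: cheap to copy, so store a copy for stability
var settings = { theme: "dark", fontSize: 12 };
var storedSettings = Object.assign({}, settings); // shallow copy is enough for flat data
settings.theme = "light";
console.log(storedSettings.theme); // still "dark"

// Large per-frame data: copying every frame would be wasteful, so pass the reference
var vertices = new Float32Array(3 * 100000); // x, y, z for 100k vertices
function drawFrame(buffer) {
  // read from buffer directly; no copy is made
}
drawFrame(vertices);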
Why not just make the objects stored in the list immutable? Instead of storing simple JSON-like objects you would store closures.
Say you have an object with two properties A and B. It looks like this:
var myObj = {
  "A": "someValue",
  "B": "someOtherValue"
};
But then, as you said, anyone could alter the state of this object by simply overwriting its properties A or B. Instead of passing such objects in a list to the client, you could pass read-only data created from your actual objects.
First define a function that takes an ordinary object and returns a set of accessors to it:
var readOnlyObj = function(builder) {
  return {
    getA: function() { return builder.A; },
    getB: function() { return builder.B; }
  };
};
Then, instead of handing out myObj itself, give the user readOnlyObj(myObj) so that they can access the properties through the methods getA and getB.
This way you avoid the costs of cloning and provide a clear set of valid actions that a user can perform on your objects.
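For example, using the readOnlyObj helper above:
var readOnly = readOnlyObj(myObj);

console.log(readOnly.getA()); // "someValue"
console.log(readOnly.getB()); // "someOtherValue"

// There is no setter, and the underlying object is not exposed, so consumers of the
// list cannot change A or B through this wrapper. Note that the wrapper still reflects
// later changes made through myObj itself, since the closure reads from the original object.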
