Array length incorrect - javascript

I have an array filled with objects. It contains 23 items in total, however when I perform a .length on it, it returns 20.
// Get the new routes
var array = PostStore.getPostList();
console.log(array.objects);
console.log(array.objects.length);
Image of what my console returns:
What's going wrong here?

The problem is probably that the array changed between the time you logged it and the time you opened it in the console.
To get the array at the logging time, clone it:
console.log(array.objects.slice());
console.log(array.objects.length);
Note that this won't protect against the array element properties changing. If you want to freeze those too, you need a deep clone, which is most often done with
console.log(JSON.parse(JSON.stringify(array.objects)));
This won't work if the objects aren't stringifyable, though (cyclic objects, very deep objects, properties throwing exceptions at reading, etc.). In that case you'll need a more specific cloning utility, like my own JSON.prune.log.
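For illustration, here's a minimal cycle-safe variant built on a JSON.stringify replacer (a sketch, not JSON.prune itself; snapshotLog is a made-up name):
function snapshotLog(value) {
  var seen = new WeakSet();
  console.log(JSON.parse(JSON.stringify(value, function (key, val) {
    if (typeof val === "object" && val !== null) {
      if (seen.has(val)) return "[Circular]"; // drop repeats/cycles instead of throwing
      seen.add(val);
    }
    return val;
  })));
}
snapshotLog(array.objects);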
In such a case, an alternative to logging is to debug: set a breakpoint and look at the objects while the code is stopped.
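If you'd rather pause than log, a debugger statement acts as a programmatic breakpoint (the condition here is made up for illustration):
if (array.objects.length !== 23) debugger; // pauses whenever DevTools is open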

Object unexpectedly being modified after push into array

I have what seems like it should be a simple operation. For each bridgedSection, I check for a potentialSection with an id that matches bridged.referenceSection.
Then I take that result, parse the HTML on the object with Cheerio, make a slight modification (using an id for testing), and then store both the bridgedSection and the modified result on an object, then push that object to the array.
If I log the new object BEFORE pushing, I get the correct object values. If I log it from the array, I get incorrect values only for referenceSection. bridgedSection is fine, but referenceSection matches across all entries in the array.
To say that I'm thoroughly flummoxed is an understatement. Can anyone shed some light on what I am (clearly) doing wrong?
var sectionCount = 0;
bridgedSections.forEach(bridged => {
  var obj = potentialSections.find(obj => obj._id == bridged.referenceSection);
  $ = cheerio.load(obj.html);
  $(".meditor").html(bridged._id); // dropping the id here so it's easy to see if it was updated
  obj.html = $.html();
  obj.rand = Math.floor(Math.random() * 1000); // can't seem to add to obj either
  var thisSection = {
    referenceSection: obj,
    bridgedSection: bridged,
  };
  console.log(thisSection); // correct value logged
  currentSections.push(thisSection);
  sectionCount++;
});
console.log(currentSections);
// this logs an array of the correct length but each
// {}.referenceSection is identical to the last entry pushed above
To try to clarify what both of the above folks are saying, the JavaScript language (like many others) has the concept of references, and makes very heavy use of that concept.
When one variable "refers to" another, there is only one copy of the value in question: everything else is a reference to that one value. Changes made through any of those references will therefore change the one underlying value (and be reflected instantaneously in all of the references).
The advantage of references is, of course, that they are extremely "lightweight."
If you need to make a so-called "deep copy" of an array or structure or what-have-you, you can do so. If you want to push the value and be sure that it cannot be changed, you need to make sure that what you've pushed is either such a "deep copy," or that there are no references (as there obviously are, now ...) to whatever it contains. Your choice.
N.B. References – especially circular references – also have important implications for memory management (and "leaks"), because a thing will not be "reaped" by the memory manager until all references to it have ceased to exist. (Conceptually, everything is reference-tracked, though modern engines use tracing garbage collectors rather than literal reference counting.)
And, all of what I've just said pretty much applies equally to every language that supports this – as most languages now do.
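To make that concrete, here is a small sketch contrasting a shared reference with a deep copy (structuredClone is built into modern browsers and Node 17+):
var original = { nested: { n: 1 } };
var byRef = original;                 // same underlying object
var deep = structuredClone(original); // independent copy
byRef.nested.n = 2;
console.log(original.nested.n); // 2, changed through the reference
console.log(deep.nested.n);     // 1, the deep copy is untouched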
JavaScript passes objects to functions by reference (strictly speaking, a copy of the reference is passed by value, but mutations through it affect the original). This means the following happens:
var derp = { a: 1 };
function passedByRef(param) {
  param['a'] = 2; // mutates the object the caller passed in
}
passedByRef(derp);
console.log(derp['a']); // 2
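A quick contrast showing that the reference itself is copied: reassigning the parameter inside the function does not affect the caller (a sketch continuing the example above):
function reassign(param) {
  param = { a: 3 }; // rebinds only the local copy of the reference
}
reassign(derp);
console.log(derp['a']); // still 2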
So when you pass an object to a function, if you modify that object in the function it will change the original object. You probably want to make a deep copy of bridged before you assign it to thisSection, because if you later modify the version of bridged inside thisSection, it will modify the original object.
Here is a post that talks about cloning objects, or you could look into something like Immutable.js.
I think you need to look into JavaScript deep copying.
You are modifying the original object when you modify the second assigned variable, because both variables point to the same object. What you really need is to duplicate the object, not simply make another pointer to it.
Take a look at this:
https://scotch.io/bar-talk/copying-objects-in-javascript#toc-deep-copying-objects
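Applied to the loop in the question, the minimal change is to clone the section returned by find before mutating it. A sketch, assuming the sections survive a JSON round-trip (structuredClone would be an alternative):
bridgedSections.forEach(bridged => {
  var found = potentialSections.find(s => s._id == bridged.referenceSection);
  var obj = JSON.parse(JSON.stringify(found)); // detached copy; potentialSections stays untouched
  var $ = cheerio.load(obj.html);
  $(".meditor").html(bridged._id);
  obj.html = $.html();
  currentSections.push({ referenceSection: obj, bridgedSection: bridged });
});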

Dynamic property as key VS List of objects in Javascript - which one will be more efficient

I need a data structure something like HashMap (in Java) where I want to map a user's status using the user id as the key. So I can easily create a map dynamically like this:
var mapOfIdVsStatus = {
  123: "true",
  456: "false"
};
alert(JSON.stringify(mapOfIdVsStatus));
In my scenario, new user ids and statuses will be added/updated very frequently, which means
Object.keys(mapOfIdVsStatus).length;
will keep increasing. But this structure makes searching for a user's status fast.
I can do this in an alternative way like this:
var user1 = {
  id: 123,
  status: true
};
var user2 = {
  id: 456,
  status: false
};
var listOfUser = [];
listOfUser.push(user1);
listOfUser.push(user2);
alert(JSON.stringify(listOfUser));
I can also search the list for a user's status, but that needs extra effort, such as looping. Considering lots of adds and searches plus memory optimization, which one will be my best choice?
Behaviors would be like:
Are keys usually unknown until run time? - YES
Do you need to look them up dynamically? - YES
Do all values have the same type? - YES
Can they be used interchangeably? - NO
Do you need keys that aren't strings? - Always strings
Are key-value pairs often added or removed? - Always added, no removal
Do you have an arbitrary (easily changing) amount of key-value pairs? - YES
Is the collection iterated? - I want to avoid iteration for faster access.
The best choice is an ES6 Map. MDN gives some hints on when to use it instead of a plain object:
If you're still not sure which one to use, ask yourself the following questions:
Are keys usually unknown until run time? Do you need to look them up dynamically?
Do all values have the same type? Can they be used interchangeably?
Do you need keys that aren't strings?
Are key-value pairs often added or removed?
Do you have an arbitrary (easily changing) amount of key-value pairs?
Is the collection iterated?
If you answered 'yes' to any of those questions, that is a sign that you might want to use a Map.
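A minimal sketch of the question's use case with a Map (note that a Map keeps number keys as numbers, where a plain object would coerce them to strings):
var statusById = new Map();
statusById.set(123, true);  // add
statusById.set(456, false);
statusById.set(123, false); // an update is just another set
console.log(statusById.get(456)); // false, direct lookup with no iteration
console.log(statusById.has(789)); // false
console.log(statusById.size);     // 2, no Object.keys(...).length needed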
The object, hands-down. Property lookup will be, at worst, a kind of hashmap-style lookup rather than a linear search.
If you do this, it's best to create the object without a prototype:
var mapOfIdVsStatus = Object.create(null);
...so it doesn't inherit properties from Object.prototype.
On ES2015-compliant JavaScript engines, you might also look at Map. Map is specifically designed to be a simple key/value map, and so is unencumbered by any object-ness. In my simple tests on Chrome and Firefox, it performed as well as or better than object lookup with string keys, but object lookup easily outpaced Map with number keys (just under three times as fast on Chrome; on Firefox the object was only about twice as fast).
But, those tests only test querying values once they've been added, not the tumult of adding new ones. (The updates you're talking about wouldn't matter, as you're changing the properties on the stored objects, not the thing actually stored, if I'm reading right.)
One really key thing is that if you need to remove entries, with an object you're better off setting the property's value to null or undefined rather than using delete. Using delete on an object can really, really slow down subsequent operations on it.
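A sketch combining both tips, with statusMap as a stand-in name (a prototype-less object, and removal by overwriting rather than delete):
var statusMap = Object.create(null); // no inherited keys like "toString" to collide with ids
statusMap[123] = true;
statusMap[456] = false;
statusMap[456] = undefined; // "removes" the value without delete's slowdown
console.log(statusMap[123]); // true
console.log(statusMap[456]); // undefined (the key itself still exists, though)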

My JavaScript object has a length field which I didn't assign

I have a rather strange problem which I haven't been able to find a solution to. It might be because no one has had this issue, or it could be the fact that I'm rather lost on what to Google.
Here's what's going on. I'm building a Rock-Paper-Scissors machine learning demo, and I have two JavaScript objects which are important to the script. Those two objects are winners (an object which stores what beats what) and history (which keeps track of all the plays).
At the very top of my script, I make both of those objects, as such:
var winners = {};
var history = {};
Now I'm getting to the problem. The rest of the code is done, but it's irrelevant here because I've commented it all out. Since I'm using p5.js, I have a preload function where I simply print out both objects.
function preload() {
  console.log(winners);
  console.log(history);
}
Then I go into Chrome, run the HTML page that uses the script, and open the console. I should see:
Object {}
Object {}
But NOPE!
Object {}
History {length: 1, scrollRestoration: "auto", state: null}
The winners object is just an empty object, but my history object has a length attribute as well as some other stuff. All of my other code is commented out, I promise.
Why does my history object have the stuff in it and how can I get rid of it?
The browser already defines window.history, which is what you are seeing in the console log. At global scope, var history = {} doesn't create a fresh variable; it reuses the existing window.history property, and since that property is read-only, the assignment is silently ignored outside strict mode. See more information about this object here: https://developer.mozilla.org/en-US/docs/Web/API/Window/history
To circumvent this problem, use a slightly different name for your history object.
There is already a built-in object called history in the browser.
You can read more about it in:
https://developer.mozilla.org/en-US/docs/Web/API/History
http://www.w3schools.com/jsref/obj_history.asp
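A minimal sketch of both options: rename the variable, or keep the name by declaring it in a non-global scope (an IIFE is one common way to do that):
// Option 1: rename so it can't collide with window.history
var playHistory = {};
// Option 2: keep the name, but avoid the global scope
(function () {
  var history = {};     // shadows window.history only inside this function
  console.log(history); // {} as expected
})();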

Does anyone know how consistent `[1,,2]` only having two keys is?

I'm wondering if I can rely on the fact that [1,,2] only has two keys: 0 and 2, or if anyone happens to know if any JS engines will also give me a 1.
Every browser I've tested shows keys 0, 2, but I don't have older versions available at the moment, or an android phone, or ...
Reasoning:
I'm writing a custom timer library on top of requestAnimationFrame and so am returning cancelable ids based on internal array indices. I'm trying to figure out if simply delete ary[ix] is sufficient to be able to walk all of the object keys on the array without extra sanity checks.
You are covered if you are willing to assume that your code will be running on a conforming implementation of ECMAScript. From the spec:
Array elements may be elided at the beginning, middle or end of the element list. Whenever a comma in the element list is not preceded by an AssignmentExpression (i.e., a comma at the beginning or after another comma), the missing array element contributes to the length of the Array and increases the index of subsequent elements. Elided array elements are not defined. If an element is elided at the end of an array, that element does not contribute to the length of the Array.
[1,,3] will produce an array of length 3 with a hole at index 1: reading that index yields undefined, but the index does not actually exist as a key.
Edit: to clarify, even though reading the elided index gives you undefined, no key is returned for it.
Object.keys([1,,3]) returns ["0", "2"]
Edit 2: a great answer about checking if an array key exists.
Checking if a key exists in a JavaScript object?
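A quick sketch verifying the hole behavior, including the delete case from the question:
var a = [1, , 3];
console.log(a.length);       // 3
console.log(1 in a);         // false, index 1 is a hole rather than an undefined value
console.log(Object.keys(a)); // ["0", "2"]
var b = [1, 2, 3];
delete b[1];                 // leaves a hole; length stays 3
console.log(Object.keys(b)); // ["0", "2"], the same shape as the literal above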

If I set only a high index in an array, does it waste memory?

In JavaScript, if I do something like
var alpha = [];
alpha[1000000] = 2;
does this waste memory somehow? I remember reading something about JavaScript arrays still setting values for unspecified indices (maybe setting them to undefined?), but I think this may have had something to do with delete. I can't really remember.
See this topic:
are-javascript-arrays-sparse
In most implementations of JavaScript (probably all modern ones), arrays are sparse. That means no, it's not going to allocate memory up to the maximum index.
If it's anything like a Lua implementation there is actually an internal array and dictionary. Densely populated parts from the starting index will be stored in the array, sparse portions in the dictionary.
This is an old myth. The other indexes on the array will not be assigned.
When you assign a property name that is an "array index" (e.g. alpha[10] = 'foo', a name that represents an unsigned 32-bit integer) and it is greater than the current value of the length property of an Array object, two things will happen:
The "index named" property will be created on the object.
The length will be incremented to be that index + 1.
Proof of concept:
var alpha = [];
alpha[10] = 2;
alpha.hasOwnProperty(0); // false, the property doesn't exist
alpha.hasOwnProperty(9); // false
alpha.hasOwnProperty(10); // true, the property exist
alpha.length; // 11
As you can see, the hasOwnProperty method returns false when we test the presence of the 0 or 9 properties, because they don't physically exist on the object, whereas it returns true for 10, because that property was created.
This misconception probably comes from popular JS consoles, like Firebug, because when they detect that the object being printed is an array-like one, they will simply make a loop, showing each of the index values from 0 to length - 1.
For example, Firebug detects array-like objects simply by looking if they have a length property whose value is an unsigned 32-bit integer (less than 2^32 - 1), and if they have a splice property that is a function:
console.log({length:3, splice:function(){}});
// Firebug will log: `[undefined, undefined, undefined]`
In the above case, Firebug will internally make a sequential loop to show each of the property values, but none of the indexes really exist, and showing [undefined, undefined, undefined] gives you the false impression that those properties exist, or that they were "allocated", but that's not the case...
This has always been the case; it's specified even in the ECMAScript 1st Edition Specification (from 1997), so you shouldn't worry about implementation differences.
About a year ago, I did some testing on how browsers handle arrays (obligatory self-promotional link to my blog post.) My testing was aimed more at CPU performance than at memory consumption, which is much harder to measure. The bottom line, though, was that every browser I tested with seemed to treat sparse arrays as hash tables. That is, unless you initialized the array from the get-go by putting values in consecutive indexes (starting from 0), the array would be implemented in a way that seemed to optimize for space.
So while there's no guarantee, I don't think that setting array[100000] will take any more room than setting array[1] -- unless you also set all the indexes leading up to those.
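A rough, unscientific way to see this for yourself, sketched with Node's process.memoryUsage (numbers vary a lot by engine and version):
var before = process.memoryUsage().heapUsed;
var alpha = [];
alpha[100000000] = 2; // one element at index 100,000,000
var after = process.memoryUsage().heapUsed;
console.log(after - before); // a small delta, nowhere near 100 million slots' worth
console.log(alpha.length);   // 100000001, but length is just a number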
I don't think so, because JavaScript treats arrays kinda like dictionaries, but with integer keys: alpha[1000000] and alpha["1000000"] name the same property.
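A quick check of that in a console (a sketch):
var alpha = [];
alpha[1000000] = 2;
console.log(alpha[1000000] === alpha["1000000"]); // true, same property after key coercion
console.log(Object.keys(alpha));                  // ["1000000"], only one key exists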
I don't really know JavaScript, but it would be pretty odd behaviour if it DIDN'T allocate space for the entire array. Why would you think it wouldn't take up space? You're asking for a huge array; if it didn't give it to you, that would be a specific optimisation.
This obviously ignores OS optimisations such as memory overcommit and other kernel and implementation specifics.
