var myObservableArray = ko.observableArray();
myObservableArray.push('Some value');
or
myObservableArray().push('Some value');
In my opinion only the second one should work, because myObservableArray() is an array while myObservableArray is a function. However, to my surprise, both of them work. Could someone explain how the push method can be applied to a function without any problem?
Knockout is open source, so you can find out by looking at the observableArray source code!
// Populate ko.observableArray.fn with read/write functions from native arrays
// Important: Do not add any additional functions here that may reasonably be used to *read* data from the array
// because we'll eval them without causing subscriptions, so ko.computed output could end up getting stale
ko.utils.arrayForEach(["pop", "push", "reverse", "shift", "sort", "splice", "unshift"], function (methodName) {
    ko.observableArray['fn'][methodName] = function () {
        // Use "peek" to avoid creating a subscription in any computed that we're executing in the context of
        // (for consistency with mutating regular observables)
        var underlyingArray = this.peek();
        this.valueWillMutate();
        this.cacheDiffForKnownOperation(underlyingArray, methodName, arguments);
        var methodCallResult = underlyingArray[methodName].apply(underlyingArray, arguments);
        this.valueHasMutated();
        // The native sort and reverse methods return a reference to the array, but it makes more sense to return the observable array instead.
        return methodCallResult === underlyingArray ? this : methodCallResult;
    };
});
https://github.com/knockout/knockout/blob/master/src/subscribables/observableArray.js#L101
As you can see, knockout copies some of Array.prototype's methods onto the observable array function itself. Each copy uses apply on the underlying array (this.peek()) so that the original native methods do the actual work (instead of mimicking them).
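The reason this works at all is that JavaScript functions are objects, so they can carry properties (including methods) of their own. A minimal sketch, with made-up names:

function fakeObservable() { /* ... */ }
fakeObservable.push = function (value) {
    console.log("pushed", value);
};
fakeObservable.push(42); // fine: push is just a property of the function object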
There is one important difference between calling push on the observableArray or on the underlying array:
If you push to the underlying array, knockout will not automatically trigger an update. (Notice the this.valueHasMutated call in the extension code above; that is what notifies subscribers.)
var array1 = [1,2,3];
var array2 = [1,2,3];
var obsArr1 = ko.observableArray(array1);
var obsArr2 = ko.observableArray(array2);
obsArr1.subscribe(function() { console.log("Obs. Array 1 changed!"); });
obsArr2.subscribe(function() { console.log("Obs. Array 2 changed!"); });
obsArr1.push(4);   // logs "Obs. Array 1 changed!"
obsArr2().push(4); // mutates the underlying array silently: subscribers are not notified
<script src="https://cdnjs.cloudflare.com/ajax/libs/knockout/3.2.0/knockout-min.js"></script>
Can anyone confirm or deny the below scenario, and explain your reasoning? I contend that this would cause two UI renders and is therefore less performant.
Suppose in Angular you have a data model that is hooked up to a dropdown in the UI. You start with a data model that is an array of objects; you clear the array; then you re-fill it with equivalent objects that differ only in that one property has changed:
[obj1, obj2, obj3, obj4]
// clear the array
[] // the first UI render event occurs
// you fill the array with new objects that are the same except the value
// of one property has changed from true to false
[obj1, obj2, obj3, obj4] // a second UI render event occurs
I contend that this is more performant:
[obj1, obj2, obj3, obj4]
// change a property on each object from true to false
[obj1, obj2, obj3, obj4] // a single render event occurs
Thank you for looking at this.
If the steps in your first example run synchronously, the assumption is false. Since JavaScript is single-threaded, Angular won't get a chance to even notice that you emptied the array before re-filling it.
For example:
// $scope.model === [obj1, obj2, obj3, obj4];
$scope.model.length = 0; // clear the array
// $scope.model === [] but no UI render occurs here
$scope.model = [obj5, obj6, obj7, obj8]; //re-fill with new objects
//UI render will happen later, and angular will only see the change
//from [obj1, obj2, obj3, obj4] to [obj5, obj6, obj7, obj8]
If the changes are supposed to involve asynchronicity, the delay in these asynchronous operations is likely to take much more time than the empty array render in between, so I wouldn't be concerned about that either.
The possible performance differences come from other things, like from creating new objects or angular needing to do deep equality checks when references haven't changed.
I doubt that this would be the bottleneck of any angular app, though, so I suggest you go with whatever suits your code style better. (Especially as mutable vs immutable objects is quite an important design decision to make).
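To make the deep-equality point concrete, here is a sketch of the two watch flavours in AngularJS, reusing the $scope.model from the example above. $watch compares by reference unless its third argument is true, in which case Angular performs the more expensive deep comparison on every digest:

// Fires only when $scope.model is assigned a different array:
$scope.$watch('model', function () { console.log('reference changed'); });
// Passing true requests deep equality checks, which cost more:
$scope.$watch('model', function () { console.log('deep change detected'); }, true);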
Assuming that the process is in steps which require user interaction, the steps will be as follows. The numbers represent the high-level process that Angular uses; the indented items are what happens inside each step.

1. Angular renders the view with the default array [obj1, obj2, obj3]
   - A watcher is created which watches the array reference
   - Angular sets up watchers on the total array of objects
   - The watcher also watches properties of objects within the array
2. The user interacts with the view, causing the array to be set to []
   - Watcher #1 above fires on the new array reference, builds watchers for any new objects, and deconstructs watchers for old references
3. The user interacts again to build a new array of items with one property change [obj1, obj2′, obj3]
   - Watcher #1 fires, noticing a new array reference
   - Angular builds watchers for each object and the properties of those objects within the array
In terms of speed, step #2 is essentially a no-op. What you're probably running into is the time it takes Angular to construct and deconstruct the watchers when a new array of objects is created.
When Angular sets up watching on an array of objects (for example via ng-repeat), it adds a $$hashKey property to the objects in the array. Now imagine that a new array is created which looks the same as the old one, but all of the object references in memory are new and the $$hashKey values are gone or changed. This causes Angular to fire all of the $watch statements for that scope variable.
You shouldn't be creating a new array or assigning the original scope array to a new value. This is a bad pattern. The code below shows two methods: one which will cause the watchers to fire more than desirable, and a better method which will only fire the watchers that need to be fired.
scope.myArr = [{n:1, a:true}, {n:2, a:true}, {n:3, a:true}];

//BAD PATTERN
scope.doSomething = function(n){
    scope.myArr = [];
    //somehow it gets a new array
    getNewArrayFromAPI(n)
        .then(function(response){
            scope.myArr = response.data;
        });
    //[{n:1, a:true}, {n:2, a:false}, {n:3, a:true}]
};

//user interaction
scope.doSomething(2);
The following good pattern updates the array in place, never changing the references to the original objects unless it needs to add a new object to the array.
//GOOD PATTERN
scope.myArr = [{n:1, a:true}, {n:2, a:true}, {n:3, a:true}];

scope.doSomething = function(n){
    //this method shows an in-place non-HTTP change
    scope.myArr.forEach(function(curr){
        if(curr.n === n){
            curr.a = false;
        }
    });
    scope.getChangedArray(n);
};

//represents an HTTP call, POST or PUT maybe, that updates the array
scope.getChangedArray = function(n){
    $http.post("/api/changeArray", n)
        .then(function(response){
            //response.data represents the original array with n:2 changed
            response.data.forEach(function(curr){
                var match = scope.myArr.filter(function(currMyArr){
                    return currMyArr.n === curr.n;
                });
                if(match.length){
                    //update the existing properties on the matched object
                    angular.extend(match[0], curr);
                }else{
                    //new object
                    scope.myArr.push(curr);
                }
            });
        });
};

//user interaction
scope.doSomething(2);
I want to have a list of strings that is unique, so every time I get a new string that should be pushed onto the list I need to check whether the list already contains it before pushing. This seems inefficient.
However, if I use a hash structure and store the items as keys, is there some way to make this more performant than a simple array?
I guess I am simply wondering what the most performant set data structure in JavaScript is.
Yes, using a Set will be much faster than scanning an array for an existing value (O(1) for a set vs. O(n) for an array).
var s = new Set();
s.add(1); // s is {1}
s.add(2);
s.add(3);
s.add(1); // duplicates are ignored
s.add(1);
// s is now {1, 2, 3}
In modern browsers (Chrome 38+, IE11+) the Set type is defined, it is documented here: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set
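Since the question is about membership tests, here is a short sketch of the relevant calls (Set.prototype.has and Array.from are both part of the same standard):

var s = new Set(["a", "b"]);
console.log(s.has("a"));    // true – an O(1) average-time lookup
console.log(s.has("z"));    // false
console.log(Array.from(s)); // ["a", "b"] – back to a plain array if needed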
Otherwise, in JavaScript, Object values (a fundamental type in ECMAScript) are internally implemented as hashtables - so the fastest conceptual "HashSet" structure would be a generalisation of a hashtable with a disregarded value type.
Here's how I'd do it (if Set was unavailable):
function StringSet() {
    this.values = {};
    this.add = function(value) {
        value = value.toUpperCase(); // use toUpperCase for string normalization because of how casing rules work in different languages, especially Turkish
        this.values[value] = true; // use boolean values as stubs
    };
    this.contains = function(value) {
        value = value.toUpperCase();
        return value in this.values; // the `in` operator runs in roughly O(1) time
    };
}

var foo = new StringSet();
foo.add("bar");
console.assert(foo.contains("bar"));
Copying an array of objects into another array in JavaScript using slice(0) or concat() doesn't work.
I have tried the following to test whether I get the expected deep-copy behaviour, but the original array is also modified after I make changes in the copied array.
var tags = [];
for (var i = 0; i < 3; i++) {
    tags.push({
        sortOrder: i,
        type: 'miss'
    });
}

for (var tag in tags) {
    if (tags[tag].sortOrder == 1) {
        tags[tag].type = 'done';
    }
}

console.dir(tags);

var copy = tags.slice(0);
console.dir(copy);
copy[0].type = 'test';
console.dir(tags); // tags[0].type is now 'test' as well

var another = tags.concat();
another[0].type = 'miss';
console.dir(tags); // tags[0].type is back to 'miss'
How can I do a deep copy of an array into another, so that the original array is not modified when I make a change in the copied array?
Try
var copy = JSON.parse(JSON.stringify(tags));
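One caveat: a JSON round-trip only suits plain data, because values JSON cannot represent are converted or dropped. A quick sketch:

var source = [{ when: new Date(), fn: function () {}, missing: undefined }];
var copied = JSON.parse(JSON.stringify(source));
console.log(copied[0].when);         // an ISO string now, no longer a Date
console.log(copied[0].fn);           // undefined – function properties are dropped
console.log('missing' in copied[0]); // false – undefined values are dropped too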
Try the following
// Deep copy
var newArray = jQuery.extend(true, [], oldArray);
For more details check this question out What is the most efficient way to deep clone an object in JavaScript?
As mentioned here, .slice(0) is effective for cloning an array with primitive-type elements. However, in your example the tags array contains anonymous objects, so any changes to those objects in the cloned array are reflected in the tags array.
#dangh's reply above dereferences those element objects and creates new ones.
Here is another thread addressing a similar situation.
A nice way to clone an array of objects with ES6 is to use spread syntax:
const clonedArray = [...oldArray];
MDN
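Be aware that this is a shallow clone: the array itself is new, but the objects inside it are still shared with the original. For example:

const oldArray = [{ type: 'miss' }];
const clonedArray = [...oldArray];
clonedArray[0].type = 'test';
console.log(oldArray[0].type); // 'test' – the inner object is shared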
The easiest way of doing this in a single line is to use Underscore/Lodash:
let a = _.map(b, _.clone);
(_.clone makes a shallow copy of each element, which is enough when the objects are only one level deep.)
You just need to use the '...' notation.
// THE FOLLOWING LINE COPIES all elements of 'tags' INTO 'copy'
var copy = [...tags]
When you have an array, say x, [...x] creates a new array with all the values of x. Be careful, because this notation works slightly differently on objects: it spreads an object into all of its key/value pairs. So if you want to pass all the key/value pairs of an object into a function, you just need to pass function({...obj}).
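For instance (someFunction is just a placeholder name):

function someFunction(copy) { console.log(copy); }
const obj = { a: 1, b: 2 };
const objCopy = { ...obj }; // { a: 1, b: 2 } – a new top-level object
someFunction({ ...obj });   // the function receives a shallow copy of obj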
The same issue happened to me. I had data from a service and saved it to another variable; whenever I updated my array, the copied array was updated too. The old code was like below:

//$scope.MyData comes from a service
$scope.MyDataOriginal = $scope.MyData;

So whenever $scope.MyData changed, $scope.MyDataOriginal changed as well. I found that angular.copy solves this; the right code is as below:
$scope.MyDataOriginal = angular.copy($scope.MyData);
I know that this is a bit of an older post, but I had the good fortune to find a decent way to deep copy arrays, even those containing arrays and objects, and even objects containing arrays. The only issue I can see with this code is memory: it could choke on very large arrays of arrays and objects. But for the most part it should work. The reason I am posting it here is that it accomplishes the OP's request to copy an array of objects by value and not by reference. (The type checks are from SO; the main copy function I wrote myself.)
var isArray = function(a){ return (!!a) && (a.constructor === Array); };
var isObject = function(a){ return (!!a) && (a.constructor === Object); };

Array.prototype.copy = function(){
    var newvals = [],
        self = this;
    for(var i = 0; i < self.length; i++){
        var e = self[i];
        if(isObject(e)){
            var tmp = {},
                oKeys = Object.keys(e);
            for(var x = 0; x < oKeys.length; x++){
                var oks = oKeys[x];
                if(isArray(e[oks])){
                    tmp[oks] = e[oks].copy();
                } else {
                    // note: plain objects nested inside objects are copied by reference here
                    tmp[oks] = e[oks];
                }
            }
            newvals.push(tmp);
        } else {
            if(isArray(e)){
                newvals.push(e.copy());
            } else {
                newvals.push(e);
            }
        }
    }
    return newvals;
};
This function (Array.prototype.copy) calls itself recursively whenever it meets a nested array, returning the copied values as needed. The process is decently speedy and copies an array by value, with the caveat noted in the comment above: plain objects nested inside objects are still shared. Tested in Chrome and IE11, and it works in both browsers.
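A quick usage sketch for the copy() helper above:

var original = [{ list: [1, 2] }, 'plain'];
var cloned = original.copy();
cloned[0].list.push(3);
console.log(original[0].list); // [1, 2] – the nested array was copied, not shared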
The way to deeply copy an array in JavaScript with JSON.parse:
let originalArray = [
    { firstName: "Choton", lastName: "Mohammad", age: 26 },
    { firstName: "Mohammad", lastName: "Ishaque", age: 26 }
];

let copyArray = JSON.parse(JSON.stringify(originalArray));
copyArray[0].age = 27;

console.log("copyArray", copyArray);
console.log("originalArray", originalArray);
For this I use the new ECMAScript 6 Object.assign method. Note that it makes a shallow copy, and the target should be an array (not {}) if you want an array back:

let oldArray = [1, 3, 5, "test"];
let newArray = Object.assign([], oldArray);

The first argument of this method is the target to be updated; we pass an empty array because we want a completely new array. You can also pass further objects to be copied in:

let newArray = Object.assign([], oldArray, o2, o3, ...)
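Because the copy is one level deep only, nested objects are still shared between the two arrays:

let nested = [{ done: false }];
let copy = Object.assign([], nested);
copy[0].done = true;
console.log(nested[0].done); // true – the inner object is shared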
I have found a behavior I did not expect when trying to use a loop in order to change the value set for a property in an object.
Basically, I declare my object outside the loop.
Then I loop over an array of numeric values, which are used to update the object's property.
Inside the loop, I store the current object state inside an external array.
The result is that instead of having an array containing a series of objects with different numeric values, I end up having the same numeric values in each object stored.
Here is the fiddle http://jsfiddle.net/fAypL/1/
jQuery(function(){
    var object_container = [];
    var numeric_values = [1, 2, 3, 4];
    var my_object = {};

    jQuery.each(numeric_values, function(index, value){
        my_object['value'] = value;
        object_container.push(my_object);
    });

    jQuery.each(object_container, function(index, value){
        jQuery('#content').prepend(value['value']);
    });
});
I would expect to get 1 2 3 4 as the stored values; however, what I get is 4 4 4 4, which does not make sense to me.
Any hint on this behavior is more than welcome, thanks
When your code calls .push() and passes my_object, what's being passed is a reference to the object. No copy is made.
Thus, you've pushed four references to the exact same object into the array.
JavaScript objects always participate in expressions in the form of references. There's no other way to deal with objects. Thus when you create a variable, and set its value to be an object, you're really setting its value to be a reference to the object. Same with parameter passing, and anywhere else an object can appear in an expression.
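A two-line illustration of that reference behaviour:

var a = { n: 1 };
var b = a;        // copies the reference, not the object
b.n = 2;
console.log(a.n); // 2 – both variables point at the same object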
In this case, you can create new objects pretty easily; just dispense with my_object and push a fresh one on each iteration:
object_container.push( { value: value } );
You are not creating a new object each time around the loop - you are just updating the same existing object and pushing references to it into the array. To create a new object you want to do something like:
my_object = { 'value': value };
object_container.push(my_object);
In this case you now will get something more like what you were looking for. See the updated fiddle here: http://jsfiddle.net/fAypL/2/.
Best of luck!
One more thought (clone!): if you are really tied to using the same object each time, just clone the object before you add it to the array. There is a great solution for that here.
You are using jQuery, so if what you want is to merge without affecting the original, look at:
var both_obj = $.extend( {}, default_obj, adding_obj );
This will leave your original objects unchanged, which also makes it a good way to take a copy.
jquery docs - extend()
An alternate version is to use an object with a constructor and the new keyword:
var object_container = [];
var numeric_values = [1, 2, 3, 4];

function MyObject(value) {
    this.value = value;
}

jQuery.each(numeric_values, function(index, value){
    object_container.push(new MyObject(value));
});

jQuery.each(object_container, function(index, value){
    jQuery('#content').prepend(value['value']);
});
Fiddle
My Task
In my JavaScript code I often use objects to "map" keys to values so I can later access them directly through a certain key. For example:
var helloMap = {};
helloMap.de = "Hallo";
helloMap["en"] = "Hello";
helloMap.es = "Hola";
So I build up the map object step by step in my source code, using the two available notations: object (dot) style and array (bracket) style.
Later I can access the values I added through helloMap["de"], for example. That's all fine as long as I don't care about the order in which the attributes were set on the object.
If I now want to iterate over the object's properties, as far as I know there is no way to ensure that I'll iterate over them in the order they were added (insertion order).
Note: I can't use some wrapper object that simply holds an array and uses its methods to add the values, so something like this:
var HelloMap = function(){
    this.myMap = [];
    this.addProperty = function(key, value){
        this.myMap.push({key: key, value: value});
    };
}
or something similar won't work for me. So the solution needs to be absolutely transparent to the programmer using the object.
That said, the object I need would be an empty object which maintains the order of the properties added to it. Something like this would do:
var helloMap = {};
helloMap = getOrderAwareObject(helloMap);
so that every further assignment of the form helloMap.xy = "foo" or helloMap["yz"] = "bar" would be tracked by the object "in order".
Possible Solutions
Since I did not find any solution in Underscore or jQuery giving me such a special object, I came across the possibility of defining getters and setters for object properties with Object.defineProperty. Since I can rely on the ECMAScript 5 standard, I can use it.
The problem with this approach is that you have to know all the possible properties that can be set on the object before they are actually set, because to define a property you have to name it.
What I am searching for is something like a default getter and setter, which applies when no getter or setter has been defined for the property. Then I could hide the sorted map behind the object interface.
Is there already a solution for this in any framework you know?
Is there a mechanism like a "default getter/setter"?
You'll need a wrapper of some kind using an array internally, I'm afraid. ECMAScript 5 (which is the standard on which current browser JavaScript implementations are based) simply doesn't allow for ordered object properties.
However, ECMAScript 6 will have a Map implementation that has ordered properties. See also http://www.nczonline.net/blog/2012/10/09/ecmascript-6-collections-part-2-maps/.
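For illustration, here is a short sketch of the ES6 Map, which remembers the insertion order of its keys:

var helloMap = new Map();
helloMap.set('de', 'Hallo');
helloMap.set('en', 'Hello');
helloMap.set('es', 'Hola');
helloMap.forEach(function (value, key) {
    console.log(key, value); // logs de, en, es – in insertion order
});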
There may also be other options in ECMAScript 6. See the following question:
How can I define a default getter and setter using ECMAScript 5?
Adding a link to a custom JavaScript library which provides sorted maps and other implementations, for future reference in this thread. Check out https://github.com/monmohan/dsjslib
I don't know of a general solution but non-general solutions are very simple to construct.
Typically, you maintain an Array of objects, with several methods defined as properties of the Array. At least, that's my approach.
Here's an example, taken (in a modified form) from a larger application:
var srcs = [];

srcs.find = function(dist) {
    var i;
    for(i = 0; i < this.length; i++) {
        if(dist <= this[i].dist) { return this[i]; }
    }
    return null;
};

srcs.add = function(dist, src) {
    this.push({ dist: dist, src: src });
};

srcs.remove = function(dist) {
    var i;
    for(i = 0; i < this.length; i++) {
        if(this[i].dist === dist) {
            this.splice(i, 1);
            return true;
        }
    }
    return false;
};
srcs.add(-1, 'item_0.gif' );
srcs.add(1.7, 'item_1.gif');
srcs.add(5, 'item_2.gif');
srcs.add(15, 'item_3.gif');
srcs.add(90, 'item_4.gif');
Unfortunately, you lose the simplicity of a plain js object lookup, but that's the price you pay for having an ordered entity.
If you absolutely must have order and dot.notation, then maintain a plain js Object for lookup and an Array for order. With care, the two can be maintained with total integrity.
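A minimal sketch of that dual structure (the helper names here are just for illustration):

var lookup = {}; // plain object for fast key access
var order = [];  // array that records first-insertion order of keys

function set(key, value) {
    if (!(key in lookup)) { order.push(key); }
    lookup[key] = value;
}

function forEachInOrder(fn) {
    order.forEach(function (key) { fn(key, lookup[key]); });
}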
See my answer to this question. I implemented a basic ordered hashtable (ES5+ only; I didn't bother to polyfill).
var map = {}; // the map being maintained; its keys are kept sorted

var put = function(k, v){
    if(map[k]){
        console.log("Key " + k + " is already present");
    } else {
        var newMap = {};
        map[k] = v;
        // rebuild the map so its keys end up in sorted order
        Object.keys(map).sort().forEach(function(key){
            newMap[key] = map[key];
        });
        map = newMap;
        return map;
    }
};
The put method takes a key/value pair, internally creates another map with sorted keys from the actual map, updates the value, and returns the updated map with sorted keys. No external library needs to be included. Note that this keeps the keys sorted rather than in insertion order, and it relies on the engine enumerating string keys in insertion order when the map is rebuilt.
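A quick usage sketch:

put('b', 2); // map is { b: 2 }
put('a', 1); // map is { a: 1, b: 2 } – keys are re-sorted on every insert
put('a', 9); // logs "Key a is already present" and returns undefined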