Angular rendering performance and optimization - javascript

Can anyone confirm or deny the scenario below, and explain your reasoning? I contend that it would cause two UI renders and is therefore less performant.
Suppose in Angular you have a data model that is hooked up to a dropdown in the UI. You start with an array of objects, clear the array, then re-fill it with equivalent objects that differ only in that one property has changed:
[obj1, obj2, obj3, obj4]
// clear the array
[] // the first UI render event occurs
// you fill the array with new objects that are the same except the value
// of one property has changed from true to false
[obj1, obj2, obj3, obj4] // a second UI render event occurs
I contend that this is more performant:
[obj1, obj2, obj3, obj4]
// change a property on each object from true to false
[obj1, obj2, obj3, obj4] // a single render event occurs
Thank you for looking at this.

If the steps in your first example are supposed to run synchronously, the assumption is false. JavaScript is single-threaded, and Angular's digest cycle only runs after your code returns, so Angular won't even get a chance to notice that you emptied the array before re-filling it.
For example:
// $scope.model === [obj1, obj2, obj3, obj4];
$scope.model.length = 0; // clear the array
// $scope.model === [] but no UI render occurs here
$scope.model = [obj5, obj6, obj7, obj8]; // re-fill with new objects
// UI render will happen later, and Angular will only see the change
// from [obj1, obj2, obj3, obj4] to [obj5, obj6, obj7, obj8]
If the changes are supposed to involve asynchronicity, the asynchronous operations themselves will almost certainly take far longer than the render of the empty array in between, so I wouldn't be concerned about that either.
The possible performance differences come from other things, like from creating new objects or angular needing to do deep equality checks when references haven't changed.
I doubt that this would be the bottleneck of any Angular app, though, so I suggest you go with whatever suits your code style better. (Especially since mutable vs. immutable data is an important design decision in its own right.)
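For illustration, here is a minimal sketch (not from the question) of the one case where the intermediate empty array would actually render: when a digest runs between the two assignments, e.g. because the refill is asynchronous ($timeout stands in here for any async source):
$scope.model = [];                // the digest after this turn renders the empty dropdown
$timeout(function () {
    $scope.model = [obj5, obj6];  // a later digest renders the refilled list
}, 1000);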

Assuming that the process happens in steps which require user interaction, the sequence is as follows. The numbers in the indented lists represent the high-level process that Angular uses.
1. Angular renders the view with the default array [obj1, obj2, obj3]
   1. A watcher (watcher #1) is created which watches the array reference
   2. Angular sets up watchers on the whole array of objects
   3. The watcher also watches the properties of the objects within the array
2. The user interacts with the view, causing the array to be set to []
   1. Watcher #1 fires on the new array reference, builds watchers for any new objects, and deconstructs the watchers for the old references
3. The user interacts again to build a new array of items with one property changed: [obj1, obj2′, obj3]
   1. Watcher #1 fires, noticing a new array reference
   2. Angular builds watchers for each object and for the properties of those objects within the array
In terms of speed, step 2 is essentially a no-op. What you're probably running into is the time it takes Angular to construct and deconstruct the watchers when a new array of objects is created.
When Angular tracks an array of objects (as ng-repeat does) it adds a $$hashKey property to each object in the array. Now imagine that a new array is created which looks the same as the old one, but all of the object references in memory are new and the $$hashKeys are gone or changed. This causes Angular to fire all of the $watch listeners for that scope variable.
You shouldn't be creating a new array or assigning the original scope array to a new value; that is the bad pattern here. The code below shows two methods: one which causes the watchers to fire more often than necessary, and a better method which only fires the watchers that need to be fired.
scope.myArr = [{n:1, a:true}, {n:2, a:true}, {n:3, a:true}];

// BAD PATTERN
scope.doSomething = function(n){
    scope.myArr = [];
    // somehow it gets a new array
    getNewArrayFromAPI(n)
        .then(function(response){
            scope.myArr = response.data;
        });
    // [{n:1, a:true}, {n:2, a:false}, {n:3, a:true}]
};

// user interaction
scope.doSomething(2);
The following good pattern updates the array in place, never changing the references to the original objects unless it needs to add a new object to the array.
// GOOD PATTERN
scope.myArr = [{n:1, a:true}, {n:2, a:true}, {n:3, a:true}];

scope.doSomething = function(n){
    // this method shows an in-place, non-HTTP change
    scope.myArr.forEach(function(curr){
        if(curr.n === n){
            curr.a = false;
        }
    });
    scope.getChangedArray(n);
};

// represents an HTTP call, POST or PUT maybe, that updates the array
scope.getChangedArray = function(n){
    $http.post("/api/changeArray", n)
        .then(function(response){
            // response.data represents the original array with n:2 changed
            response.data.forEach(function(curr){
                var match = scope.myArr.filter(function(currMyArr){
                    return currMyArr.n === curr.n;
                });
                if(match.length){
                    // update the existing properties on the matched object
                    angular.extend(match[0], curr);
                }else{
                    // new object
                    scope.myArr.push(curr);
                }
            });
        });
};

// user interaction
scope.doSomething(2);
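A side note, as a sketch rather than part of the answer above: if the server really does hand you a brand-new array, you can still keep the original array reference by replacing its contents in place, so watchers bound to the array itself never see a reference change (replaceContents is an illustrative name, not an Angular API):
// replace the contents of an array without changing its reference
function replaceContents(targetArr, freshArr){
    targetArr.length = 0;                            // empty in place
    Array.prototype.push.apply(targetArr, freshArr); // refill in place
}

// usage: the dropdown keeps watching the same array reference
replaceContents(scope.myArr, response.data);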

Related

Difference between returning a copy or manipulating original objects in array.prototype.map (In RxJS pipe)

I am working on an Angular 9, RxJS 6 app and have a question regarding the different outcomes of piping subject values and doing unit conversion in that pipe.
Please have a look at this stackblitz. There, inside the backend.service.ts file, an observable is created that does some "unit conversion" and returns everything that is emitted to the _commodities Subject. If you look at the convertCommodityUnits function, please notice that I commented out the working example and instead left in the way I solved it initially.
My question: when you use the unsubscribe buttons on the screen and subscribe again, the "conversion solution" that just overwrites the object without making a copy converts the values in the HTML multiple times, so the pipe does not use the original data that the subject provides. If you use the other code, i.e. creating a clone of the commodity object inside convertCommodityUnits, it works as expected.
Now, I don't understand why the two ways of converting the data behave so differently. I get that one manipulates the data directly, because JS passes objects by sharing, and the other returns a new object. But the object that is passed to the convertCommodityUnits function is created by Array.prototype.map, so it should not overwrite anything, right? I expected RxJS to pass the original, last data that was emitted to the subject into the pipe/map operators, but that does not seem to be the case in the example, right?
How/Why are the values converted multiple times here?
This is more or less a follow-up question on this: Angular/RxJS update piped subject manually (even if no data changed), "unit conversion in rxjs pipe", so it's the same setup.
When you use map you get a new reference for the array, but you don't get new objects in the newly generated array (it's a shallow copy of the array), so you're still mutating the data inside each element.
In the destructuring solution, because each object in the array contains only primitive types, you generate completely brand-new elements every time the conversion method is called (this is important: not only a new array but also new elements in the array, i.e. you have performed a deep copy of the array). So you don't accumulate the converted values across successive subscriptions.
That doesn't mean a 1-level destructuring like the one in the provided stackblitz demo will work in all cases. I've seen this mistake made a lot, particularly in redux-pattern frameworks that require you not to mutate the stored data, like ngrx, ngxs, etc. If you had complex objects in your array, the 1-level destructuring would have left all the embedded objects in each element untouched (still shared by reference). I think it's easier to describe this behavior with examples:
const obj1 = {a: 1};
const array = [{b: 2, obj: obj1}];
// after every newArray assignment in the below code,
// console.log(newArray === array) prints false to the console
let newArray = [...array];
console.log(array[0] === newArray[0]); // true
newArray = array.map(item => item);
console.log(array[0] === newArray[0]); // true
newArray = array.map(item => ({...item}));
console.log(array[0] === newArray[0]); // false
console.log(array[0].obj === newArray[0].obj); // true
newArray = array.map(item => ({
    ...item,
    obj: {...item.obj}
}));
console.log(array[0] === newArray[0]); // false
console.log(array[0].obj === newArray[0].obj); // false
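To tie this back to the original question, here is a minimal sketch (illustrative names and data, not the stackblitz code) of why the mutating version converts the values again on every new subscription to a BehaviorSubject:
import { BehaviorSubject } from 'rxjs';
import { map } from 'rxjs/operators';

const makeSubject = () =>
    new BehaviorSubject([{ name: 'gold', price: 100 }]);

// BAD: map returns a new array, but mutates the objects stored in the subject
const mutating$ = makeSubject().pipe(
    map(items => items.map(item => {
        item.price = item.price * 2; // same reference => the subject's stored value changes
        return item;
    }))
);
mutating$.subscribe(items => console.log(items[0].price)); // 200
mutating$.subscribe(items => console.log(items[0].price)); // 400 -- converted again!

// GOOD: new objects each time, so the subject's stored value stays at 100
const pure$ = makeSubject().pipe(
    map(items => items.map(item => ({ ...item, price: item.price * 2 })))
);
pure$.subscribe(items => console.log(items[0].price)); // 200
pure$.subscribe(items => console.log(items[0].price)); // 200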

Immutables and collections in JavaScript

I'm trying to get my head around how to use immutables in JavaScript/TypeScript without taking all day about it. I'm not quite ready to take the dive into Immutable.js, because it seems to leave you high and dry as far as type safety is concerned.
So let's take an example where I have an Array where the elements are all of Type MyType. In my Class, I have a method that searches the Array and returns a copy of a matching element so we don't edit the original. Say now that at a later time, I need to look and see if the object is in the Array, but what I have is the copy, not the original.
What is the standard method of handling this? Any method I can think of for determining whether I already have this item involves looping through the collection, visiting each element, and doing a clunky equality check, whether that's converting both objects to strings or using a third-party library.
I'd like to use Immutables, but I keep running into situations like this that make them look pretty unattractive. What am I missing?
I suspect that my solution is not "...the standard method of handling this." However, I think it at least is a way of doing what I think you're asking.
You write that you have a method that "...returns a copy of a matching element so we don't edit the original". Could you change that method so that it instead returns both the original and a copy?
As an example, the strategy below involves retrieving both an original element from the array (which can later be used to search by reference) as well as a clone (which can be manipulated as needed without affecting the original). There is still the cost of cloning the original during retrieval, but at least you don't have to do such conversions for every element in the array when you later search the array. Moreover, it even allows you to differentiate between array elements that are identical-by-value, something that would be impossible if you only originally retrieved a copy of an element. The code below demonstrates this by making every array element identical-by-value (but, by definition of what objects are, different-by-reference).
I don't know if this violates other immutability best practices by, e.g., keeping copies of references to elements (which, I suppose, leaves the code open to future violations of immutability, though you could deep-freeze the original to prevent future mutations). However, it at least allows you to keep everything technically immutable while still being able to search by reference: you can mutate your clone as much as you want and still hold an associated reference to the original.
const retrieveDerivative = (array, elmtNum) => {
    const orig = array[elmtNum];
    const clone = JSON.parse(JSON.stringify(orig));
    return {orig, clone};
};

const getIndexOfElmt = (array, derivativeOfElement) => {
    return array.indexOf(derivativeOfElement.orig);
};
const obj1 = {a: {b: 1}}; // Object #s are irrelevant.
const obj3 = {a: {b: 1}}; // Note that all objects are identical
const obj5 = {a: {b: 1}}; // by value and thus can only be
const obj8 = {a: {b: 1}}; // differentiated by reference.
const myArr = [obj3, obj5, obj1, obj8];
const derivedFromSomeElmt = retrieveDerivative(myArr, 2);
const indexOfSomeElmt = getIndexOfElmt(myArr, derivedFromSomeElmt);
console.log(indexOfSomeElmt);
The situation you've described is one where a mutable data structure has obvious advantages, but if you otherwise benefit from using immutables there are better approaches.
While keeping it immutable means that your new updated object is completely new, that cuts both ways: you may have a new object, but you also still have access to the original object! You can do a lot of neat things with this, e.g. chain your objects so you have an undo-history, and can go back in time to roll back changes.
So don't use some hacky looking-up-the-properties in the array. The problem with your example is that you're building a new object at the wrong time: don't have a function return a copy of the object. Have the function return the original object, and perform your update using the original object as the index.
let myThings = [new MyType(), new MyType(), new MyType()];

// We update by taking the thing and replacing it with a new one.
// I'll keep the array immutable too.
function replaceThing(oldThing, newThing) {
    const oldIndex = myThings.indexOf(oldThing);
    myThings = myThings.slice();
    myThings[oldIndex] = newThing;
    return myThings;
}

// Then, when I want to update it,
// I keep it immutable by copying:
const redThing = myThings.find(({ red }) => red);
if (redThing) {
    // in this example, MyType has a 'clone' method
    replaceThing(redThing, Object.assign(redThing.clone(), {
        newProperty: 'a new value in my immutable!',
    }));
}
All that said, classes make this a whole lot more complex. It's much easier to keep simple objects immutable, since you can simply spread the old object into the new one, e.g. { ...redThing, newProperty: 'a new value' }. Once your objects are more than one level deep, you may find Immutable.js far more useful, since you can mergeDeep.
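For example, here is a quick sketch of what mergeDeep buys you (the data shape is illustrative):
import { fromJS } from 'immutable';

const state = fromJS({ user: { name: 'Ada', prefs: { theme: 'dark' } } });
const next = state.mergeDeep({ user: { prefs: { theme: 'light' } } });

console.log(state.getIn(['user', 'prefs', 'theme'])); // 'dark'  -- the original is untouched
console.log(next.getIn(['user', 'prefs', 'theme']));  // 'light'
console.log(next.getIn(['user', 'name']));            // 'Ada'   -- untouched branches are preserved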

Both observableArray itself and its inner value have push methods

var myObservableArray = ko.observableArray();
myObservableArray.push('Some value');
or
myObservableArray().push('Some value');
In my opinion only the second one should work, because myObservableArray() is an array while myObservableArray is a function. However, much to my surprise, both of them work. Could someone explain how the push method can be applied to a function without any problem?
Knockout is open source, so you can find out by looking at the observableArray source code!
// Populate ko.observableArray.fn with read/write functions from native arrays
// Important: Do not add any additional functions here that may reasonably be used to *read* data from the array
// because we'll eval them without causing subscriptions, so ko.computed output could end up getting stale
ko.utils.arrayForEach(["pop", "push", "reverse", "shift", "sort", "splice", "unshift"], function (methodName) {
    ko.observableArray['fn'][methodName] = function () {
        // Use "peek" to avoid creating a subscription in any computed that we're executing in the context of
        // (for consistency with mutating regular observables)
        var underlyingArray = this.peek();
        this.valueWillMutate();
        this.cacheDiffForKnownOperation(underlyingArray, methodName, arguments);
        var methodCallResult = underlyingArray[methodName].apply(underlyingArray, arguments);
        this.valueHasMutated();
        // The native sort and reverse methods return a reference to the array, but it makes more sense to return the observable array instead.
        return methodCallResult === underlyingArray ? this : methodCallResult;
    };
});
https://github.com/knockout/knockout/blob/master/src/subscribables/observableArray.js#L101
As you can see, knockout exposes some of Array.prototype's methods. It uses apply on the underlying array (this.peek()) to actually use the original methods (instead of mimicking them).
There is one important difference between calling push on the observableArray and on the underlying array: if you push to the underlying array, Knockout will not automatically trigger an update. (Notice the this.valueHasMutated() call in the extension code above.)
var array1 = [1, 2, 3];
var array2 = [1, 2, 3];
var obsArr1 = ko.observableArray(array1);
var obsArr2 = ko.observableArray(array2);

obsArr1.subscribe(function() { console.log("Obs. Array 1 changed!"); });
obsArr2.subscribe(function() { console.log("Obs. Array 2 changed!"); });

obsArr1.push(4);   // logs "Obs. Array 1 changed!"
obsArr2().push(4); // mutates the underlying array, but logs nothing
<script src="https://cdnjs.cloudflare.com/ajax/libs/knockout/3.2.0/knockout-min.js"></script>
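If you do mutate the underlying array directly, you can still notify subscribers yourself by calling valueHasMutated on the observable array (a sketch reusing obsArr2 from above):
obsArr2().push(5);         // silent: Knockout is not notified
obsArr2.valueHasMutated(); // manually triggers subscribers => "Obs. Array 2 changed!"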

$watchCollection() with Nested Arrays

I have a nested array of the form :
$scope.itinerary = [
    [
        {name: 'x'},
        {name: 'y'},
        {name: 'z'}
    ],
    [
        {name: 'a'},
        {name: 'b'},
        {name: 'c'}
    ]
];
And I am doing a $watchCollection using the following :
$scope.$watchCollection(function () {
    return $scope.itinerary;
}, function () {
    console.log("Changed");
});
But the console.log() only executes if one of the sub-arrays is deleted or a new array is inserted. If I move an element from one array to another, nothing happens (e.g. when I move {name:'a'} from one array to the other). How do I put a watch on the nested array?
Use a deep watch.
The $watch() function takes a third, optional argument for "object equality." If you pass in true for this argument, AngularJS will actually perform a deep object-tree comparison. This means that within each $digest, AngularJS will check whether the new and old values have the same structure (not just the same physical reference). This allows you to monitor a larger landscape; however, the deep object-tree comparison is far more computationally expensive.
$scope.$watch('itinerary', function (newVal, oldVal) {
    console.log(newVal);
}, true);
Rather than $watchCollection, use $watch with the third argument set to true.
This works, but it is also bad for performance if the array is large, so use it with caution.
The comparison is done using angular.equals, against a copy of the old value made with angular.copy.
More details at https://docs.angularjs.org/api/ng/type/$rootScope.Scope#$watch
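If the deep watch turns out to be too expensive, a middle ground is to $watch a cheap derived "fingerprint" of the nested structure instead. A sketch, assuming each item only has a name property as in the question:
$scope.$watch(function () {
    // build a string fingerprint of the nested names; comparing strings is cheap
    return $scope.itinerary.map(function (leg) {
        return leg.map(function (item) { return item.name; }).join(',');
    }).join('|');
}, function () {
    console.log("Changed");
});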

UnderscoreJS find-and-replace

In my application, I have a very large array of objects on the front-end, and these objects all have some kind of long ID under the key "object_id". I'm using UnderscoreJS for all my manipulations of this list. This is a prototype for an application that will eventually be handling most of this effort on the backend.
Merging the list is a big part of the application's requirement. See, the list that I work with initially has many distinct objects with identical object_ids. Initially I was merging them all in one go with a groupBy and a map-reduce, but now the requirements have changed and I'm merging them one at a time (about a second apart, to simulate a stream of input) into an initially empty array.
My naive implementation was something like this:
function(newObject, objects) {
    var obj_id = newObject["object_id"]; // id of object to merge, let's say
    var tempObject = null;
    var objectToMerge = _.find(objects, function(obj) {
        return obj_id == obj["object_id"];
    });
    if (objectToMerge) {
        tempObject = merge(objectToMerge, newObject);
        objects = _.reject(objects, /*same function as findWhere*/ );
    } else {
        tempObject = newObject;
    }
    objects.push(tempObject);
    return objects;
}
This is ridiculously more efficient than before, when I was re-merging from the mock data "source" array every time a new object was supposed to be pushed, so it's down from what I think was at least O(N^2) to O(N), but N here is so large (for JavaScript, anyway!) that I'd like to optimize it further. Currently the worst case, where the object_id is not already present, is that the entire list is traversed twice. So what I'd like is a find-and-replace: an operation which returns a new version of the list, but with the merged object in place of the old one.
I could do a map where the iterator returns a new, merged object iff the object_id matches, but that doesn't have the short-circuit evaluation that _.find has (which means the worst-case runtime becomes the default runtime), and it doesn't easily account for pushing the object if there wasn't a match.
I'd also like to avoid mutating the original array in place. I know objects.push(tempObject) does that very thing, but for data-binding reasons I'm ignoring that and returning the mutated list as though it were new.
It's also unavoidable that I'll have to check the array to see whether the new object was merged or appended. Using closures I could keep a flag to track whether the merge happened, but I'm trying to be as idiomatically LISPy as possible for my own sanity. Also, past a certain point most objects will be merged, so extra runtime overhead for adding new items isn't a huge problem, as long as it is only incurred when it has to happen.
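For what it's worth, here is a sketch of the single-pass map-with-flag idea described above (mergeIntoList is an illustrative name, and merge is assumed to exist as in the naive implementation; the flag lives in a closure, so no second traversal is needed):
function mergeIntoList(newObject, objects) {
    var merged = false;
    var result = _.map(objects, function (obj) {
        if (obj["object_id"] === newObject["object_id"]) {
            merged = true;
            return merge(obj, newObject); // `merge` as in the question
        }
        return obj;
    });
    if (!merged) {
        result = result.concat([newObject]); // append without mutating the new list
    }
    return result;
}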
