I have an Angular (9) component which gets BehaviorSubjects. I have learned from many sources like this one to use the async pipe when displaying the content of Observables (instead of subscribing to them in ngOnInit). There's also the trick of using *ngIf* with *as* so you don't have to repeat the pipe all the time. But since they are BehaviorSubjects after all, I could simply do
<div>{{behaviourSubject.getValue()}}</div>
or whatever. Actually it seems much cleaner to me than using 'async' and in practice it leads to fewer problems here and there. But I am not sure whether this is an okay pattern or whether it has serious disadvantages.
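To make the comparison concrete, the two variants look roughly like this (component and field names simplified, module setup omitted):

import { Component } from '@angular/core';
import { BehaviorSubject } from 'rxjs';

@Component({
  selector: 'app-demo',
  template: `
    <!-- async pipe plus the *ngIf ... as trick -->
    <div *ngIf="message$ | async as message">{{ message }}</div>

    <!-- reading the BehaviorSubject directly, as asked about -->
    <div>{{ message$.getValue() }}</div>
  `,
})
export class DemoComponent {
  message$ = new BehaviorSubject<string>('hello');
}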
I'd refer you to Ben Lesh's (author of RxJS) answer on this topic here
99.9% of the time you should NOT use getValue()
There are multiple reasons for that...
In Angular, you won't be able to use the OnPush ChangeDetectionStrategy. Not using it makes your app slower, because Angular will constantly try to sync the value with the cached view value; in your case, it even needs to call the getValue function first.
When the BehaviorSubject errors or completes, you'll no longer be able to call getValue.
Generally, the use of getValue, and I'd argue even BehaviorSubject, is not necessary, because you can express most Observables using only pipeable operators on another source Observable. The only real place where Subjects are necessary is when you need to convert an otherwise unobservable event into an Observable.
While it might look cleaner not to use async, you're actually moving the hard work to Angular, which needs to figure out when it should call getValue().
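For illustration, a minimal sketch of what the async pipe with OnPush might look like (the component name and source Observable are invented):

import { ChangeDetectionStrategy, Component } from '@angular/core';
import { interval, Observable } from 'rxjs';

// With the async pipe the component can opt into OnPush, so Angular
// only re-checks this view when the Observable actually emits.
@Component({
  selector: 'app-ticks',
  template: `<div>{{ ticks$ | async }}</div>`,
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class TicksComponent {
  ticks$: Observable<number> = interval(1000);
}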
BehaviorSubjects often live inside services in order to dispatch new values to other services/components and keep them up to date.
A good practice is to declare the BehaviorSubject as private and to expose it only via .asObservable(), so consumers aren't allowed to change its value directly.
That's why we have to use the async pipe on the provided observable source.
Second reason: async pipes automatically unsubscribe from the Observables they are fed with. [Edit]: since the comparison is with .getValue(), which provides the value of the subject without the need to subscribe, there is no explicit benefit to piping a subject in this use case.
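A minimal sketch of that service pattern (all names are illustrative):

import { Injectable } from '@angular/core';
import { BehaviorSubject, Observable } from 'rxjs';

// The BehaviorSubject stays private; consumers only ever see the
// read-only Observable returned by asObservable().
@Injectable({ providedIn: 'root' })
export class CounterService {
  private counterSubject = new BehaviorSubject<number>(0);
  readonly counter$: Observable<number> = this.counterSubject.asObservable();

  increment(): void {
    this.counterSubject.next(this.counterSubject.getValue() + 1);
  }
}

A consuming component then binds with {{ counterService.counter$ | async }} in its template and never touches the subject itself.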
Calling methods within template expressions is one of the first things you would want to avoid in Angular. It is considered bad practice to call a method within the template. Click here for more information about that.
As Gerome mentioned, the right approach is to expose the BehaviorSubject as an Observable and subscribe to it within the template using the async pipe. Since it's a BehaviorSubject, it will always emit its latest value on subscription as well, so you can avoid using the getValue() method.
If your property is of type BehaviorSubject, it's totally fine to use getValue() in the template. The difference between getValue() and | async as value is that getValue() is called on every change-detection run to check whether a re-render is needed, but because there's nothing behind it other than return this._value, it's totally fine.
Background
I'm the maintainer of a low-level library for fast object traversal in Node.js. The focus of the library is speed, and it is heavily optimised. However, there is one big slowdown: callback parameters.
The Problem
Callbacks are provided by the library consumer and can be invoked many, many times per scan. For every invocation all parameters are computed and passed to the callback. In most cases only a fraction of the parameters are actually used by the callback.
The Goal
The goal is to eliminate the unnecessary computation of these parameters.
Solution Ideas
Ideally, Node.js would expose the callback parameters as defined by the callback. However, obtaining them doesn't seem to be possible without a lot of black magic (string parsing). It would also not solve the situation where parameters are only required conditionally.
Instead of trying to obtain the parameters from the callback, we could require the callback to declare the parameters it needs. That sounds very inconvenient and error-prone, and it would also not solve conditional requirements.
We could introduce a different callback for every parameter combination. This sounds like a bad idea.
Instead of passing in the parameters directly, we could pass in a function for each parameter that computes and returns the parameter value. Inside the callback, the parameter function would then be invoked as required. It's ugly, but might be the best approach?
Questions
How do other libraries solve this?
What are other ways this can be solved?
This is a very fundamental design decision and I'm trying to get this right.
Thank you very much for your time! As always appreciated!
You could pass to the callback an object that has various methods on it that the client using the callback could call to fetch whatever parameters they actually need. That way, you'd have a clean object interface and you'd only compute the necessary information that was actually requested.
This general design pattern is sometimes called "lazy computation" where you only do the computation as required. You can use either accessor functions or getters, depending upon the type of interface you want to expose.
For performance reasons, you can perhaps reuse the same object for each time you call the callback rather than building a new one (depends upon details of your implementation).
Note that you don't even have to put all the information needed for the computation into the object itself as the methods on the object can, in some cases, refer to your own local context and locally scoped variables when doing their computation.
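For example, a rough sketch of such an interface, with invented names and reusing one object per visit as suggested above:

// Reusable "info" object: each accessor computes its value only when the
// callback actually asks for it.
class NodeInfo {
  private currentKey = '';
  private parentKeys: string[] = [];

  // Called by the traversal before each callback invocation instead of
  // allocating a fresh object every time.
  reset(key: string, parentKeys: string[]): void {
    this.currentKey = key;
    this.parentKeys = parentKeys;
  }

  key(): string {
    return this.currentKey; // cheap
  }

  path(): string {
    // potentially expensive; only runs if the consumer calls path()
    return [...this.parentKeys, this.currentKey].join('.');
  }
}

// Inside the traversal loop:
const info = new NodeInfo();
function visit(key: string, parentKeys: string[], callback: (info: NodeInfo) => void): void {
  info.reset(key, parentKeys);
  callback(info);
}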
However there is one big slowdown: Callback Parameters
Did you actually benchmark this? I doubt constructing the argument values is that costly. Notice that if this is a really heavily used call, V8 might be able to inline it and then optimise away unused argument values.
Ideally NodeJs would expose the callback parameters as defined by the callback.
Actually, it does. If you do want to rely on this property though, you should properly document that you do, otherwise this magic could lead to obscure bugs.
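Presumably this refers to a function's length property, which reports how many parameters a callback declares:

// fn.length is the number of declared parameters (rest and defaulted
// parameters are not counted).
const shortCallback = (key: string) => true;
const longCallback = (key: string, value: unknown, context: object) => true;

console.log(shortCallback.length); // 1 -> the library could skip computing value/context
console.log(longCallback.length);  // 3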
We could introduce a different callback for every parameter combination. This sounds like a bad idea.
It doesn't seem to be that much of a problem to provide two options, filter(key, value) and filterDetailed(key, value, context). If the optimisation is really worth it, and as you say this is a low-level library, just go for it.
Instead of passing in the parameters directly, we could pass in a function for each parameter that computes and returns the parameter value. Inside the callback the parameter would then be invoked as required. It's ugly but might be the best approach?
Constructing a closure object to pass instead of a parameter does have some overhead as well, so you will need to benchmark this properly. It might not be worth it.
However, I see that you are actually passing a single context object as the argument on which the computed values are accessed as properties. In that case, you can simply make these properties getters that will compute the value when they are accessed, not when the object is constructed.
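A sketch of that getter variant (property names invented):

// The context object exposes cheap values as plain properties and expensive
// ones as getters, so they are computed at access time, not construction time.
function makeContext(key: string, parentKeys: string[]) {
  return {
    key, // cheap, computed eagerly
    get path(): string {
      // runs only if the callback actually reads context.path
      return [...parentKeys, key].join('.');
    },
  };
}

const context = makeContext('name', ['user', 'profile']);
console.log(context.key);  // 'name'
console.log(context.path); // 'user.profile.name' (joined only now)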
I have yet to work with the static getDerivedStateFromProps, so I am trying to understand it.
I understand React has deprecated componentWillReceiveProps in React v16+ by introducing a new lifecycle method called static getDerivedStateFromProps(). OK, but I am wondering why React changed it to a static method instead of a normal method.
Why
static getDerivedStateFromProps(nextProps, prevState){
}
Why not
getDerivedStateFromProps(nextProps, prevState){
}
I am unable to understand why it’s a static method.
To understand what React is trying to achieve with static methods, you should have a good understanding of the following:
Side-effects
Why is asynchronous code considered a bad approach up until the componentDidMount hook
Asynchronous rendering
How static methods aid in discouraging impure and asynchronous coding
A side-effect is nothing but the manipulation of data outside the current scope. So side-effects in getDerivedStateFromProps would mean changes to any variable other than its own local variables.
Functions that don't cause side-effects are called pure functions; as for their arguments, these are cloned before they are manipulated, thereby preserving the state of the objects those arguments point to.
These functions simply return modified values from within their scope and the caller can decide the course of action with the returned data.
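A trivial illustration of the difference:

// Impure: reaches outside its own scope and mutates shared data.
const shared = { count: 0 };
function impureIncrement(): number {
  shared.count += 1; // side-effect
  return shared.count;
}

// Pure: clones the argument, returns the new value, and leaves the
// original untouched; the caller decides what to do with the result.
function pureIncrement(state: { count: number }): { count: number } {
  return { ...state, count: state.count + 1 };
}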
Introducing custom asynchronous code into a library like React, which has its own lifecycle flows, is not a great idea. It should be carefully inserted at the right moment. Let's understand why by analysing the component-creation lifecycle of a custom class component (to keep this short, let's assume it is also the root element).
At the beginning, the ReactDOM.render method invokes react.createElement().
react.createElement() => calls the new ClassElement(props) constructor => returns the ClassElement instance.
After the constructor call, react.createElement() calls the ClassElement.getDerivedStateFromProps(props) method.
After the above method returns, react.createElement() calls the instance.render() method.
(this can be skipped)
This is followed up with other synchronous calls such as diffing with the virtual DOM and updating the real DOM, etc., and there are no hooks provided to tap into these calls (mostly because there is no strong need). A key point to note here is that JavaScript execution, real DOM updates and UI painting all happen within a single thread in the browser, thus forcing them to be synchronous. This is one reason why you can write something synchronous like:
let myDiv = document.getElementById("myDiv");
myDiv.style.width = "300px"; // myDiv already holds a reference to the real DOM element
console.log(myDiv.style.width); // the width is already set!
because you know, at the end of each of those statements, that the earlier statement has completed in the DOM and in the browser window (the UI, I mean).
Finally, after the render method returns, react.createElement() calls componentDidMount to mark the end of the lifecycle. Since it is the end, componentDidMount naturally serves as the best junction at which to attach asynchronous as well as impure functions.
What we must understand is that the lifecycle methods are constantly being tweaked for performance and flexibility reasons, and this is completely under the control of the React engineers. It's not just React; in fact, this is true of any third-party code's flow. So introducing impure functions or asynchronous calls could lead to issues, because you would be forcing the React engineers to be careful with their optimisations.
For example, if the React engineers decide to run getDerivedStateFromProps two or more times in a single lifecycle flow, both impure functions and asynchronous calls would fire two or more times, directly affecting some part of the application. With pure functions, however, this would not be a problem, because they only return values and it is up to the React engineers to decide what to do with the multiple getDerivedStateFromProps calls (they can simply discard all the returned values up until the last call and make use of that one).
Yet another example would be if the React engineers decided to make the render call asynchronous. Maybe they would want to batch all the render calls (from the parent down to all the nested children) and fire them at once asynchronously to improve performance.
Now this would mean that asynchronous calls written in the render method or prior to it (like in the constructor or getDerivedStateFromProps) could interfere with the render process, because of the unpredictability of when the asynchronous work completes. One call could complete before or after another, triggering their respective callbacks unpredictably. This unpredictability could show up as multiple renders, unpredictable state, and so on.
Importantly, both of these ideas aren't just examples; they were actually expressed by the React engineers as possible future optimisation approaches. Read here: https://stackoverflow.com/a/41612993/923372
In spite of all this, the React engineers know that developers out there could still write asynchronous code or impure functions, and to discourage that they made one of the lifecycle methods static. The constructor, render, getSnapshotBeforeUpdate, componentDidMount and componentDidUpdate methods can't be static because they need access to instance properties like this.state, this.props and other custom event handlers (the constructor initialises them, render uses them to control the UI logic, and the other lifecycle methods need them to compare against earlier states).
Considering getDerivedStateFromProps, however, this hook is only provided to return an updated clone of the state if the previous props differ from the current props. By that very definition, it sounds pure, with no need for access to instance properties. Let's analyse why.
For this hook to work, the developer first needs to store the previous props in the instance state (say, in the constructor call). This is because getDerivedStateFromProps receives the instance state as well as the new props as arguments. The developer can then diff the desired property and return an updated clone of the state (without ever needing to access this.props or this.state).
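A sketch of that pattern (component and prop names are made up; written without JSX so it stays plain TypeScript):

import React from 'react';

interface Props { defaultEmail: string; }
interface State { email: string; prevDefaultEmail: string; }

class EmailInput extends React.Component<Props, State> {
  // The previous prop is mirrored into state so the static method can diff
  // it against the incoming props without touching `this`.
  state: State = {
    email: this.props.defaultEmail,
    prevDefaultEmail: this.props.defaultEmail,
  };

  static getDerivedStateFromProps(nextProps: Props, prevState: State): Partial<State> | null {
    if (nextProps.defaultEmail !== prevState.prevDefaultEmail) {
      // Return an updated clone of the relevant state slice; React merges it.
      return {
        email: nextProps.defaultEmail,
        prevDefaultEmail: nextProps.defaultEmail,
      };
    }
    return null; // nothing changed, keep the current state
  }

  render() {
    return React.createElement('input', {
      value: this.state.email,
      onChange: (event: React.ChangeEvent<HTMLInputElement>) =>
        this.setState({ email: event.currentTarget.value }),
    });
  }
}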
By making getDerivedStateFromProps static, React is not only forcing you to write pure functions, it is also making it difficult to write asynchronous calls, because you have no access to the instance from within this method. Usually an asynchronous call would provide a callback, which would most probably be an instance method.
Now this doesn't mean developers can't write them; it just makes doing so difficult and pushes them away from such approaches.
A simple rule of thumb is to stay away from impure and asynchronous approaches for the duration of third-party-driven flows. You should only introduce such approaches at the end of such flows.
According to the description of this Proposal:
This proposal is intended to reduce the risk of writing async-compatible React components. It does this by removing many¹ of the potential pitfalls in the current API while retaining important functionality the API enables. I believe this can be accomplished through a combination of:
Choosing lifecycle method names that have a clearer, more limited purpose.
Making certain lifecycles static to prevent unsafe access of instance properties.
And here:
Replace error-prone render phase lifecycle hooks with static methods to make it easier to write async-compatible React components.
Eventually, after lots of discussion, the goal of using a static method was also described officially here:
The goal of this proposal is to reduce the risk of writing async-compatible React components. I believe that can be accomplished by removing many¹ of the potential pitfalls in the current API while retaining important functionality the API enables. This can be done through a combination of:
Choosing lifecycle method names that have a clearer, more limited purpose.
Making certain lifecycles static to prevent unsafe access of instance properties.
It is not possible to detect or prevent all side-effects (e.g. mutations of global/shared objects).
You are not supposed to touch any internal data in that method so it is defined as static. This way there is no object you can touch and the only things you’re allowed to do are to use the provided previous state and next props to do whatever you’re doing.
getDerivedStateFromProps exists only to enable a component to update its internal state as a result of changes in props. As we only update state on the basis of props, there is no reason to compare nextProps with this.props. Here we should compare only the next props and the previous state: if they differ, update the state; otherwise there should be no update.
If we compared this.props with the next props, we would need to store the old props value, which impacts performance. Keeping a copy of a past value is called memoization. To avoid misuse of "this" and memoization, getDerivedStateFromProps was made static.
We can consider the above as the reason for the componentWillReceiveProps deprecation too.
getDerivedStateFromProps is a new API that was introduced so that it stays compatible once async rendering is released as a feature. According to Dan Abramov in a tweet:
This method is chosen to be static to help ensure purity which is important because it fires during interruptible phase.
The idea is to move all unstable things and side effects after the render method. Giving access to component instance variables during an interruptible phase could lead to people using it with all sorts of side effects, causing inconsistencies in async rendering.
Is it safe to assume that RxJS will trigger the next function of each of its observers in the order they subscribed? I have a class with a public property of type BehaviorSubject. The first subscription made to it will be from within the class's constructor. I would like to make sure that the next of this private subscription runs before any other's.
Practically speaking, yes, this is safe; the implementation of the Subject class (from which BehaviorSubject inherits) always processes subscriptions in the order they are taken. While I've not seen a guarantee from the RxJS team that this will always be the case, I imagine changing it would break a lot of code (mine included).
Strictly speaking, no, no guarantee is made regarding subscription processing order. This goes back to Rx under .NET, when the team tried to align the subscription behavior with that of multicast delegates (you can find a good explanation from Bart De Smet at https://social.msdn.microsoft.com/Forums/en-US/ac721f91-4dbc-40b8-a2b2-19f00998239f/order-of-subscriptions-order-of-observations?forum=rx).
I have run across scenarios before where the "process in subscription order" hasn't suited me, and I've needed to take direct control. In this case, I've used a simple function to turn one observable into two, one of which is guaranteed to be notified before the other. You could use a similar method to avoid making the assumption that subscriptions will always be processed in order, though I personally do not think it's necessary. If interested, you can find details here: RxJs: Drag and Drop example : add mousedragstart
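For illustration only (this is a simplified sketch, not the code from that link), one way to split a source so that one set of subscribers is always notified first:

import { Observable, Subject } from 'rxjs';

// Splits `source` into [first$, second$]: for every emission, all subscribers
// of first$ are notified before any subscriber of second$.
function prioritize<T>(source: Observable<T>): [Observable<T>, Observable<T>] {
  const first = new Subject<T>();
  const second = new Subject<T>();
  source.subscribe({
    next: value => { first.next(value); second.next(value); },
    error: err => { first.error(err); second.error(err); },
    complete: () => { first.complete(); second.complete(); },
  });
  return [first.asObservable(), second.asObservable()];
}

Note that this sketch subscribes to the source eagerly and, unlike a BehaviorSubject, the derived Subjects do not replay the current value to late subscribers; it is only meant to show the ordering idea.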
As for BehaviorSubject, and Subjects in general, they are "hot" Observables that produce and consume values. You can assume that the next function will always trigger as long as nothing calls the observer.complete() method.
The first subscription you have will set and initialise the state (an assumption here), so every subsequent subscriber will be able to hook into that subscription and receive the next(value) emissions.
Hope this helps.
Problem
While the $digest cycle in my app still runs quite fast, I noticed that some callbacks (which are bound in the templates, for example via ng-if) are called way more often than I expected. This goes up to 100+ calls for a single UI interaction (where I would generally expect something between 3 and 10 calls at most).
I would like to understand why the callbacks are called this often and possibly reduce the number of calls to prevent future performance issues.
What I tried
From my understanding, the described behaviour means that the $digest cycle takes up to a few hundred loops to clear all dirty flags and make sure that all rendered nodes are up to date.
I simplified several callbacks to just return true - instead of evaluating some model values - which had no effect on the number of $digest calls at all. I also checked the Performance tab in the Chrome developer tools, which only told me that the calls themselves execute within a few ms.
For troubleshooting I also removed several ng-repeat blocks and Angular filters throughout the application, since those obviously add several watches to be evaluated in the $digest loop. This had no impact on the number of calls to the callback functions either.
Thus I guess I need a more sophisticated tool or method to debug the (number of) $digest calls throughout my application, to figure out where all those calls are coming from and how to reduce them.
Questions
Which tools and methods can I use to evaluate the performance of the $digest loop (and especially the number of loops) in my Angular application?
How do I reduce the number of calls to callbacks which are bound in a template?
I think that, to answer the second question, it would already help to understand what can cause additional calls to foo() in a setup like this:
<div ng-if="ctrl.foo()">
<!--<span>content</span> -->
</div>
First of all, what does the digest cycle in AngularJS actually do?
1. It is the process in which the Angular framework continuously checks, on its own, whether any two-way-bound variable has changed.
2. It fires whenever the user interacts and changes a two-way-bound variable.
3. It also fires when a two-way-bound variable is changed programmatically (in a controller, service or factory).
These are the reasons a digest cycle is fired...
Which entities are part of the digest cycle?
1. $watch functions added on variables.
2. ngModel; ng-model itself internally adds a $watch on the variable.
Basically, the $watch function.
What can we do to avoid $digest / avoid calls to $watch?
Think about each variable used in the UI: does it really need to be two-way bound?
If the answer is no, just go for the one-way binding syntax (a small sketch appears after this list of suggestions).
Avoid using the watch function from controllers, services and factories.
Then how can I watch it...
RxJS is currently the best library that can help overcome this issue. It's just one option.
Use getters and setters.
How?
class CtrClass {
  constructor($scope) {
    // backing fields, so the setter below does not call itself recursively
    this._myVar1 = null;
    this._myVar2 = null;
  }
  get myVar1() {
    return this._myVar1;
  }
  set myVar1(value) {
    // either the code you would have put in a watcher,
    // or some function you want to execute after the value is set
    this._myVar1 = value;
    this.afterSet();
  }
  afterSet() {
  }
}
mymodule.controller('ctrName', CtrClass);
Use the controllerAs feature of Angular.
Create directives with isolated scopes.
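If one-way binding here means AngularJS's one-time binding syntax (::), and taking the ctrl.foo() template from the question as an example, a rough sketch of both ideas (names invented):

// Controller sketch: compute the flag when the model changes instead of
// letting the template call a method on every digest loop.
class PanelController {
  title = 'Report';    // never changes -> good candidate for ::
  items: string[] = [];
  hasItems = false;    // precomputed flag used by ng-if

  addItem(item: string): void {
    this.items.push(item);
    this.hasItems = this.items.length > 0; // updated here, not per digest
  }
}

// Template:
//   <h1 ng-if="::ctrl.title">{{ ::ctrl.title }}</h1>   <!-- watched only until stable -->
//   <div ng-if="ctrl.hasItems">...</div>               <!-- plain property, no function call -->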
About tools:
To analyse an Angular application, Batarang is a good tool.
I know enough jQuery/JavaScript to be dangerous. I have a JSON array that I'm interacting with using two different elements (a calendar and a table, to be precise). Is there an event handler (or any other way) I could bind to so that the table would refresh when the JSON changes?
Basic programming: parse the JSON (= a string) into a JavaScript object or array (you have probably already done that). Then use an implementation of the observer pattern.
I suggest taking a good look at Adam Merrifield's interesting links.
Most of the time, using getters and setters, where you can fire a custom event (or call a callback method) inside the setter, is the key to this.
KnockoutJS is a good framework to help you do such binding. It also uses the observable/observer (subscriber) pattern.
Using timers is not a really good idea: there is a little too much overhead (you do work even when nothing has changed), and you will always lag x ms behind (depending on the polling frequency).
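A bare-bones sketch of that idea (all names invented):

// Minimal observer pattern: set() stores the new data and notifies every
// registered listener, so the table can re-render whenever the JSON changes.
type Listener<T> = (value: T) => void;

class ObservableValue<T> {
  private listeners: Listener<T>[] = [];
  constructor(private value: T) {}

  get(): T {
    return this.value;
  }

  set(next: T): void {
    this.value = next;
    this.listeners.forEach(listener => listener(next)); // the "custom event"
  }

  subscribe(listener: Listener<T>): void {
    this.listeners.push(listener);
  }
}

// Usage: the calendar and the table both subscribe to the same data.
const events = new ObservableValue<Array<{ date: string; title: string }>>([]);
events.subscribe(() => { /* refresh the table here */ });
events.set([{ date: '2013-01-01', title: 'Meeting' }]);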
You might want to consider Knockout.JS
It allows bi-directional mapping, so a change to your model should be reflected in your view and vice versa.
http://knockoutjs.com/documentation/json-data.html
However, it might be late in the stages of your dev cycle, but it's something to consider.