I wonder how to subscribe to changes of a JavaScript object, e.g. like Redux does. I read through a lot of JS documentation but I couldn't find a non-deprecated way to handle this problem (Object.prototype.watch() as well as Object.observe() are deprecated). Moreover, I read a few Stack Overflow questions regarding this topic, but they are all at least 5 years old. To visualize my problem I'll show an example.
This could be an object I want to watch:
const store = {
  anArray: [
    'Hi',
    'my',
    'name',
    'is'
  ]
}
... and this is a function which changes the store object:
function addAName() {
store.anArray.push('Bob')
}
My goal in this example is to trigger the following function every time the store object changes
function storeChanged() {
console.log('The store object has changed!')
}
Thank you in advance!
Have you tried using Proxy from ES6 (ECMAScript 2015)? I think this is what you are looking for.
You only have to define a set trap in the Proxy's handler (the validator), like this:
let validator = {
  set: function(target, key, value) {
    console.log(`The property ${key} has been updated with ${value}`);
    target[key] = value; // actually store the value on the wrapped object
    return true;         // tell the Proxy the assignment succeeded
  }
};
let store = new Proxy({}, validator);
store.a = 'hello';
// console => The property a has been updated with hello
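Note that this only traps assignments on the top-level object. Applied to the store from the question (a sketch), reassigning a property is observed, but mutating the nested array in place is not; the next answer shows how to handle nested objects by proxying them recursively.
const validator = {
  set(target, key, value) {
    console.log(`The property ${key} has been updated with ${value}`);
    target[key] = value; // keep the underlying object in sync
    return true;
  }
};
const store = new Proxy({ anArray: ['Hi', 'my', 'name', 'is'] }, validator);

store.greeting = 'hello';                  // logs: The property greeting has been updated with hello
store.anArray = [...store.anArray, 'Bob']; // logs, because the top-level property is reassigned
store.anArray.push('Eve');                 // does NOT log: the nested array itself is not proxied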
To solve this problem without any indirection (you keep working with the object directly), you can use a Proxy.
By wrapping all nested objects with observable, you can edit your store freely, and _base keeps track of the path of the property that changed.
const observable = (target, callback, _base = []) => {
  // Recursively wrap nested objects so changes deep in the tree are observed too.
  for (const key in target) {
    if (typeof target[key] === 'object')
      target[key] = observable(target[key], callback, [..._base, key])
  }
  return new Proxy(target, {
    set(target, key, value) {
      if (typeof value === 'object') value = observable(value, callback, [..._base, key])
      callback([..._base, key], target[key] = value)
      return true // the set trap must return true to signal success
    }
  })
}
const a = observable({
a: [1, 2, 3],
b: { c: { d: 1 } }
}, (key, val) => {
console.log(key, val);
})
a.a.push(1)
a.b.c.d = 1
a.b = {}
a.b.c = 1
You can use Object.defineProperty() to create reactive getters/setters. It has good browser support and looks handy.
function Store() {
let array = [];
Object.defineProperty(this, 'array', {
get: function() {
console.log('Get:', array);
return array;
},
set: function(value) {
array = value;
console.log('Set:', array)
}
});
}
var store = new Store();
store.array;                                // Get: []
store.array = [11];                         // Set: [11]
store.array.push(5);                        // Get: [11] (push mutates the array in place; the setter is not triggered)
store.array = store.array.concat(1, 2, 3);  // Get: [11, 5], then Set: [11, 5, 1, 2, 3]
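As the log comments above show, mutating the array in place (push) only goes through the getter, so the setter never fires. If you want every change to be observable with this approach, reassign the property with a new array instead of mutating it, for example:
store.array = store.array.concat('Bob'); // Get, then Set: [11, 5, 1, 2, 3, 'Bob']
store.array = [...store.array, 'Eve'];   // Get, then Set: [..., 'Bob', 'Eve']
store.array.push('silent');              // only Get fires; the setter is bypassed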
It's non-trivial.
There are several approaches that different tools (Redux, Angular, KnockoutJS, etc.) use.
1. Channeling changes through functions - This is the approach Redux uses (more). You don't directly modify things; you pass them through reducers, which means Redux is aware that you've changed something.
2. Diffing - Literally comparing the object tree to a previous copy of the object tree and acting on the changes made. At least some versions of Angular/AngularJS use(d) this approach.
3. Wrapping - (kind of a variant on #1) Wrapping all modification operations on all objects in the tree (such as the push method on your array) with wrappers that notify a controller that the object they belong to has been modified, by wrapping those methods (and replacing simple data properties with accessor properties) and/or by using Proxy objects. KnockoutJS uses a version of this approach. A minimal sketch of this wrapping idea follows below.
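For illustration, here is a minimal, hypothetical sketch of the wrapping idea from approach 3, applied to the array in the question: the array's mutating methods are replaced with wrappers that notify a callback. This is a toy version of the idea, not how KnockoutJS (or any library) actually implements it.
function watchArray(arr, onChange) {
  // Replace the mutating methods with wrappers that notify after the change.
  ['push', 'pop', 'shift', 'unshift', 'splice'].forEach(function(method) {
    const original = arr[method];
    arr[method] = function() {
      const result = original.apply(arr, arguments);
      onChange(method, arr);
      return result;
    };
  });
  return arr;
}

const store = { anArray: ['Hi', 'my', 'name', 'is'] };
watchArray(store.anArray, function(method, arr) {
  console.log('The store object has changed via ' + method + ':', arr);
});
store.anArray.push('Bob'); // logs: The store object has changed via push: [...]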
You can try the npm package tahasoft-event-emitter.
Your code will look like this:
import { EventEmitter } from "tahasoft-event-emitter";
const store = {
anArray: ["Hi", "my", "name", "is"]
};
const onStoreChange = new EventEmitter();
function addAName(name) {
  store.anArray.push(name);
  onStoreChange.emit(name);
}
function storeChanged(name) {
console.log("The store object has changed!. New name is " + name);
}
onStoreChange.add((name) => storeChanged(name));
addAName("Bob");
If you are not interested in the new value of name, you can write it more simply, like this:
import { EventEmitter } from "tahasoft-event-emitter";
const store = {
anArray: ["Hi", "my", "name", "is"]
};
const onStoreChange = new EventEmitter();
function addAName() {
store.anArray.push("Bob");
onStoreChange.emit();
}
function storeChanged() {
console.log("The store object has changed!");
}
onStoreChange.add(storeChanged);
addAName();
Whenever you call addAName, storeChanged will be called.
Here is the example on CodeSandbox:
EventEmitter on an object status change
You can check the package's GitHub repository to see how it is implemented. It is very simple.
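If you would rather not add a dependency, a minimal event emitter along these lines is only a few lines of code (a sketch, not the package's actual implementation):
class EventEmitter {
  constructor() {
    this.listeners = [];
  }
  add(listener) {
    this.listeners.push(listener);
  }
  emit(...args) {
    // Call every registered listener with the emitted arguments.
    this.listeners.forEach((listener) => listener(...args));
  }
}

const onStoreChange = new EventEmitter();
onStoreChange.add((name) => console.log('changed:', name));
onStoreChange.emit('Bob'); // changed: Bob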
I am not sure there is a way to do this with native functionality alone, and even if there is, I think you won't be able to do it without some sort of abstraction.
It doesn't work natively in React/Redux either, as you need to call setState explicitly to trigger changes. I recommend a simple observer pattern, whose implementation roughly looks like this:
var store = {
names: ['Bob', 'Jon', 'Doe']
}
var storeObservers = [];
Now, simply push your observer functions; it doesn't matter if they are part of some component or module:
storeObservers.push(MyComponent.handleStoreChange)
Now simply expose a function to change the store, similar to setState:
function changeStore(newStore) {
  store = newStore;
  storeObservers.forEach(function(observer) {
    observer();
  });
}
You can obviously modularize this to handle more complex situations, or if you only want to change part of the state and allow observers to bind callbacks to partial state changes (a sketch of that follows below).
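A hypothetical sketch of that kind of modularization, where observers subscribe to a single key of the store (the key-based subscribe/changeStoreKey API below is my own illustration, not part of the code above):
var store = { names: ['Bob', 'Jon', 'Doe'], title: 'My store' };
var observersByKey = {};

// Subscribe to changes of one key of the store.
function subscribe(key, observer) {
  (observersByKey[key] = observersByKey[key] || []).push(observer);
}

// Change one key and notify only the observers bound to that key.
function changeStoreKey(key, value) {
  store[key] = value;
  (observersByKey[key] || []).forEach(function(observer) {
    observer(value);
  });
}

subscribe('names', function(names) {
  console.log('names changed:', names);
});
changeStoreKey('names', store.names.concat('Ann')); // names changed: ['Bob', 'Jon', 'Doe', 'Ann']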
Alejandro Riera's answer using Proxy is probably the correct approach in most cases. However, if you're willing to include a (small-ish, ~90kb) library, Vue.js can have watched properties on its instances, which can sort of do what you want.
It is probably overkill to use it for just change observation, but if your website has other uses for a reactive framework, it may be a good approach.
I sometimes use it as an object store without an associated element, like this:
const store = new Vue({
data: {
anArray: [
'Hi',
'my',
'name',
'is'
]
},
watch: {
// whenever anArray changes, this function will run
anArray: function () {
console.log('The store object has changed:', this.anArray);
}
}
});
function addAName() {
// push random strings as names
const newName = '_' + Math.random().toString(36).substr(2, 6);
store.anArray.push(newName);
}
// demo
setInterval(addAName, 5000);
Related
I'm currently working on a page with parent/child components. Somehow my child component gets updated when I manipulate its variables in the parent component.
What I'm trying to do:
1. My child component has a 'site' variable with all the data I need to send via the API
2. My parent component has a Save button to send the child's data to the back-end
3. When 'site' changes in the child component, I emit a @change event to the parent component
4. The @change event contains all the data I need, but not in the format I want
5. There is a function submit() that gets this data and modifies one of the arrays so that this: ['foo','bar'] becomes this: 'foo,bar'
The problem: when I do step 5, my child component gets updated.
The child component inside the parent component:
<configuracoes :configuracoes="configuracoes" @change="setData"
v-if="currentPage === 'configs'"/>
The change event emitted by the child component
this.$emit("change", this.site);
The important part of 'site' var
site: {
seo: {
keywords: [],
...
},
...
},
The setData() function
setData(data) {
this.data = data;
},
The submitData() function
submitData() {
if (this.currentPage === "configs") {
let data = ({}, Object.assign(this.data))
let keywords = data.seo.keywords.join(',')
data.seo.keywords = keywords
this.$store.dispatch("sites/updateSite", {
empresa_id: this.user.empresa_id,
site_id: this.siteId,
dados: data,
});
}
}
As you can see, I'm declaring another variable, let data, to avoid updating the this.site variable, but with no success.
First of all, there is an issue with how you're "copying" your this.data object.
let data = ({}, Object.assign(this.data)); // this doesn't work
console.log(data === this.data); // true
const dataCopy = Object.assign({}, this.data); // this works
console.log(dataCopy === this.data); // false
The way Object.assign works, all the properties will be copied over into the first argument. Since you only pass a single argument, it doesn't change and is still pointing to the same old object.
If you use the correct way, you will most likely still run into the same issue. The reason is that data.seo is not a primitive value (a number or a string), but is an object.
This means that only a reference to the seo object is copied into the new copy, so both objects share the same seo. In other words, even though dataCopy !== this.data, dataCopy.seo === this.data.seo. This is known as a "shallow copy".
You want to make sure you DO NOT modify the original seo object; here are a few ways to do that:
let goodCopy;
const newKeywords = this.data.seo.keywords.join(',');
// use object spread syntax
goodCopy = {
...this.data,
seo: {
...this.data.seo,
keywords: newKeywords,
},
};
// use Object.assign
goodCopy = Object.assign(
{},
this.data,
{
seo: Object.assign(
{},
this.data.seo,
{keywords: newKeywords}),
});
// create a copy of "seo", and then change it to your liking
const seoCopy = {...this.data.seo};
seoCopy.keywords = newKeywords;
goodCopy = Object.assign({}, this.data, {seo: seoCopy});
this.$store.dispatch('sites/updateSite', {
empresa_id: this.user.empresa_id,
site_id: this.siteId,
dados: goodCopy,
});
If you want to read up on ways to copy a JavaScript object, here's a good question.
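For completeness, when every nested level needs to be detached from the original, a deep copy can be used instead of nesting spreads. Two common options, still inside the same component method (whether they fit depends on the data):
// JSON round-trip: fine for plain data, but it drops functions/undefined
// and turns Date objects into strings.
const jsonCopy = JSON.parse(JSON.stringify(this.data));
jsonCopy.seo.keywords = newKeywords; // the original this.data.seo is untouched

// structuredClone (modern browsers, Node 17+): handles Dates, Maps, Sets, etc.,
// but throws if the object contains functions.
const structuredCopy = structuredClone(this.data);
structuredCopy.seo.keywords = newKeywords;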
I am a relative beginner in Angular, and I am struggling to understand some source I am reading from the ng-bootstrap project. The source code can be found here.
I am very confused by the code in ngOnInit:
ngOnInit(): void {
const inputValues$ = _do.call(this._valueChanges, value => {
this._userInput = value;
if (this.editable) {
this._onChange(value);
}
});
const results$ = letProto.call(inputValues$, this.ngbTypeahead);
const processedResults$ = _do.call(results$, () => {
if (!this.editable) {
this._onChange(undefined);
}
});
const userInput$ = switchMap.call(this._resubscribeTypeahead, () => processedResults$);
this._subscription = this._subscribeToUserInput(userInput$);
}
What is the point of calling .call(...) on these Observable functions? What kind of behaviour is this trying to achieve? Is this a normal pattern?
I've done a lot of reading/watching about Observables (no pun intended) as part of my Angular education, but I have never come across anything like this. Any explanation would be appreciated.
My personal opinion is that they were using this style for RxJS prior to 5.5, which introduced lettable (now pipeable) operators. The same style is used internally by Angular. For example: https://github.com/angular/angular/blob/master/packages/router/src/router_preloader.ts#L91.
The reason for this is that by default they would have to patch the Observable class with rxjs/add/operators/XXX. The disadvantage of this is that some 3rd party library is modifying a global object that might unexpectedly cause problems somewhere else in your app. See https://github.com/ReactiveX/rxjs/blob/master/doc/lettable-operators.md#why.
You can see at the beginning of the file that they import each operator separately https://github.com/ng-bootstrap/ng-bootstrap/blob/master/src/typeahead/typeahead.ts#L22-L25.
So by using .call() they can use any operator and still avoid patching the Observable class.
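For comparison, since RxJS 5.5 the same code would typically be written with pipeable operators, which avoids both the global patching and the .call() style. A rough sketch of the equivalent ngOnInit body (not the actual ng-bootstrap source; do is called tap in the pipeable API):
import { tap, switchMap } from 'rxjs/operators';

const inputValues$ = this._valueChanges.pipe(
  tap(value => {
    this._userInput = value;
    if (this.editable) {
      this._onChange(value);
    }
  })
);

// let(fn) simply applies fn to the source, which is what pipe does with a
// user-supplied function:
const results$ = inputValues$.pipe(this.ngbTypeahead);

const processedResults$ = results$.pipe(
  tap(() => {
    if (!this.editable) {
      this._onChange(undefined);
    }
  })
);

const userInput$ = this._resubscribeTypeahead.pipe(
  switchMap(() => processedResults$)
);
this._subscription = this._subscribeToUserInput(userInput$);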
To understand it, first have a look at the built-in JavaScript Function method call:
var person = {
firstName:"John",
lastName: "Doe",
fullName: function() {
return this.firstName + " " + this.lastName;
}
}
var myObject = {
firstName:"Mary",
lastName: "Doe",
}
person.fullName.call(myObject); // Will return "Mary Doe"
The reason for calling call is to invoke the function defined on the person object while passing it myObject as the context.
Similarly, the reason for calling call below:
const inputValues$ = _do.call(this._valueChanges, value => {
this._userInput = value;
if (this.editable) {
this._onChange(value);
}
});
is to provide the context, this._valueChanges, and also to provide the function to be called based on that context, which is the second parameter: the anonymous function
value => {
this._userInput = value;
if (this.editable) {
this._onChange(value);
}
}
In the example that you're using:
this._valueChanges is the input event Observable
The _do.call is for performing some side effects whenever the input event happens; it then returns a mirrored Observable of the source Observable (the event Observable)
UPDATED
Example code: https://plnkr.co/edit/dJNRNI?p=preview
About calling do:
You can call it on an Observable like this:
const source = Rx.Observable.of(1,2,3,4,5);
const example = source
.do(val => console.log(`BEFORE MAP: ${val}`))
.map(val => val + 10)
.do(val => console.log(`AFTER MAP: ${val}`));
const subscribe = example.subscribe(val => console.log(val));
In this case you don't have to pass the first parameter as the context "Observable".
But when you call it as a standalone function, as in the code you posted, you need to pass the Observable you want it to operate on as the first parameter. That's the difference.
As @Fan Cheung mentioned, if you don't want to call it as a standalone function, you can do it like this:
const inputValues$=this._valueChanges.do(value=>{
this._userInput = value;
if (this.editable) {
this._onChange(value);
}
})
I suppose
const inputValues$ = _do.call(this._valueChanges, value => {
this._userInput = value;
if (this.editable) {
this._onChange(value);
}
});
is equivalent to
const inputValues$=this._valueChanges.do(value=>{
this._userInput = value;
if (this.editable) {
this._onChange(value);
}
})
In my opinion it's not an unusual pattern (I think it is the same pattern, just written in a different fashion) for working with Observables. _do() in the code is being used as a standalone function that takes a callback as its argument and needs to be bound to the scope of the source Observable.
https://github.com/ReactiveX/rxjs/blob/master/src/operator/do.ts
I use the following code, which works great, but I wonder if in JS there is a way to avoid the if and do the check inside the loop. I'd also like to use lodash if it helps:
for (provider in config.providers[0]) {
if (provider === "save") {
....
You can chain calls together using _.chain, filter by a value, and then use each to call a function for each filtered result. However, you have to add a final .value() call at the end for it to evaluate the expression you just built.
I'd argue that for short, simple conditional blocks, an if statement is easier and more readable. I'd use lodash (and more specifically chaining) if you are combining multiple operations or performing sophisticated filtering, sorting, etc. over an object or collection.
var providers = ['hello', 'world', 'save'];
_.chain(providers)
.filter(function(provider) {
return provider === 'save';
}).each(function(p) {
document.write(p); // your code here
}).value();
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/3.8.0/lodash.js"></script>
Edit: My mistake; filter does not have an overload where you can just supply a literal value. If you want to do literal value checking you have to supply a function as in my amended answer above.
I'd argue that what you have there is pretty good, clean and readable, but since you mentioned lodash, I will give it a try.
_.each(_.filter(config.providers[0], p => p === 'save'), p => {
// Do something with p
...
});
Note that the arrow function/lambda of ECMAScript 6 doesn't come to Chrome until version 45.
Basically, you are testing to see if config.providers[0], which is an object, contains a property called save (or some other dynamic value, I'm using a variable called provider to store that value in my example code below).
You can use this instead of using a for .. in .. loop:
var provider = 'save';
if (config.providers[0][provider] !== undefined) {
...
}
Or using @initialxy's (better!) suggestion:
if (provider in config.providers[0]) {
...
}
How about:
for (provider in config.providers[0].filter(function(a) { return a === "save" })) {
...
}
Strategy: you are looking for some kind of strategy pattern, as
"Currently the save is hardcoded but what will you do if it's coming from another variable" – Al Bundy
var actions = {
save: function() {
alert('saved with args: ' + JSON.stringify(arguments))
},
delete: function() {
alert('deleted')
},
default: function() {
alert('action not supported')
}
}
var config = {
providers: [{
'save': function() {
return {
action: 'save',
args: 'some arguments'
}
},
notSupported: function() {}
}]
}
for (provider in config.providers[0]) {
(actions[provider] || actions['default'])(config.providers[0][provider]())
}
Pushing the „Run code snippet” button will show two pop-ups - be careful.
It is not clearly stated by the original poster whether the desired output
should be a single save - or an array containing all occurrences of
save.
This answer shows a solution to the latter case.
const providers = ['save', 'hello', 'world', 'save'];
const saves = [];
_.forEach(_.filter(providers, elem => { return elem==='save' }),
provider => { saves.push(provider); });
console.log(saves);
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.19/lodash.js"></script>
For example, I have a simple implementation of Angular's data binding mechanism - notification when data on an object changes:
// implementation
var watchers = [];
function watch(fn, cb) {
var oldValue = fn();
watchers.push(function() {
var newValue = fn();
if (newValue !== oldValue) {
oldValue = newValue;
cb(newValue);
}
})
}
function digest() {
watchers.forEach(function(fn) {
fn()
});
}
// using implementation
var some = {
prop: 3
}
watch(function() {
return some.prop;
}, function(newValue) {
console.log('value changed to ' + newValue);
});
Now I have two options for modifying some:
// option 1
delete some.prop; // this is optional here
some.prop = 5;
digest(); // outputs "value changed to 5"
// option 2
// instead of changing the property on the same object,
// I assign reference to the new object
some = {
prop: 11
}
digest(); // outputs "value changed to 11"
My question is: which of the two options is better to use?
Usually objects contain a lot of fields, so reassigning them one by one as in option 1 is not always convenient. However, by using the second option I'm losing the reference to the original object, and that could potentially lead to memory leaks if something else still had a reference to this object. Is my thinking correct?
Why are you creating an implementation of core functionality that already exists in AngularJS? You should use the built-in feature. The way data gets propagated through the scopes is the other thing in Angular that makes what you are concerned with a non-issue.
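For reference, a minimal sketch of the built-in AngularJS way, using $scope.$watch inside a controller (Angular runs its own digest cycle, so you normally never call digest yourself):
angular.module('app', []).controller('SomeCtrl', function($scope) {
  $scope.some = { prop: 3 };

  // Watch an expression on the scope; the listener runs whenever its value changes.
  $scope.$watch('some.prop', function(newValue, oldValue) {
    console.log('value changed to ' + newValue);
  });

  // Changing the value from Angular-managed code (e.g. an ng-click handler)
  // triggers the watcher on the next digest cycle automatically.
  $scope.update = function() {
    $scope.some.prop = 5;
  };
});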
My state is:
[
{type: "translateX", x: 10},
{type: "scaleX", x: 1.2}
]
I’m using Two-Way Binding Helpers and I can’t provide a valid key string for linkState:
this.state.map(function(item, i) {
  return <div><input valueLink={this.linkState( ??? )} /></div>;
})
Would be nice if this.linkState accepted some query syntax, such as "0.type" to retrieve "translateX" from my example.
Are there any workarounds?
I wrote DeepLinkState mixin which is a drop-in replacement for React.addons.LinkedStateMixin. Usage example:
this.state.map(function(item, i) {
  return <div><input valueLink={this.linkState([i, "x"])} /></div>;
}, this)
linkState("0.x") is also acceptable syntax.
Edit:
I realized that deep-path support for LinkedState is pretty cool, so I tried to implement it.
The code: https://gist.github.com/tungd/8367229
Usage: http://jsfiddle.net/uHm6k/3/
As the documentation states, LinkedState is a wrapper around onChange/setState and is meant for simple cases. You can always write the full onChange/setState code to achieve what you want. If you really want to stick with LinkedState, you can use the non-mixin version, for example:
getInitialState: function() {
return { values: [
{ type: "translateX", x: 10 },
{ type: "scaleX", x: 1.2 }
]}
},
handleTypeChange: function(i, value) {
this.state.values[i].type = value
this.setState({ values: this.state.values })
},
render: function() {
...
this.state.values.map(function(item, i) {
var typeLink = {
value: this.state.values[i].type,
requestChange: this.handleTypeChange.bind(null, i)
}
return <div><input valueLink={typeLink}/></div>
}, this)
...
}
Here is working JSFiddle: http://jsfiddle.net/srbGL/
You can implement your own mixin if the base mixin doesn't satisfy you.
See how this mixin is implemented:
var LinkedStateMixin = {
/**
* Create a ReactLink that's linked to part of this component's state. The
* ReactLink will have the current value of this.state[key] and will call
* setState() when a change is requested.
*
* @param {string} key state key to update. Note: you may want to use keyOf()
* if you're using Google Closure Compiler advanced mode.
* @return {ReactLink} ReactLink instance linking to the state.
*/
linkState: function(key) {
return new ReactLink(
this.state[key],
ReactStateSetters.createStateKeySetter(this, key)
);
}
};
/**
* @param {*} value current value of the link
* @param {function} requestChange callback to request a change
*/
function ReactLink(value, requestChange) {
this.value = value;
this.requestChange = requestChange;
}
https://github.com/facebook/react/blob/fc73bf0a0abf739a9a8e6b1a5197dab113e76f27/src/addons/link/LinkedStateMixin.js
https://github.com/facebook/react/blob/fc73bf0a0abf739a9a8e6b1a5197dab113e76f27/src/addons/link/ReactLink.js
So you can easily try to write your own linkState function based on the above.
linkState: function(key, key2) {
  var component = this;
  return new ReactLink(
    this.state[key][key2],
    function(newValue) {
      // Update the nested value, then call setState so React re-renders.
      component.state[key][key2] = newValue;
      component.setState(component.state);
    }
  );
}
Notice that I didn't use the ReactStateSetters.createStateKeySetter(this, key).
https://github.com/facebook/react/blob/fc73bf0a0abf739a9a8e6b1a5197dab113e76f27/src/core/ReactStateSetters.js
By looking at the source code again, you can see that this method doesn't do much except create a function and do a little caching optimization:
function createStateKeySetter(component, key) {
// Partial state is allocated outside of the function closure so it can be
// reused with every call, avoiding memory allocation when this function
// is called.
var partialState = {};
return function stateKeySetter(value) {
partialState[key] = value;
component.setState(partialState);
};
}
So you should definitely try to write your own mixin.
This can be very useful if you have a complex object in your state and you want to modify it through the object's API.
I do it without using the valueLink addon.
Here is a demo: http://wingspan.github.io/wingspan-forms/examples/form-twins/
The secret sauce is to only define one onChange function:
onChange: function (path, /* more paths,*/ value) {
// clone the prior state
// traverse the tree by the paths and assign the value
this.setState(nextState);
}
use it like this:
<input
value={this.state['forms']['0']['firstName']}
onChange={_.partial(this.onChange, 'forms', '0', 'firstName')} />
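For reference, here is one way such an onChange could be implemented. This is only a sketch, not wingspan-forms' actual code, and it assumes a lodash version with _.cloneDeep and _.set (the _.partial call above suggests lodash is already in scope):
onChange: function (/* ...paths, value */) {
  var args = _.toArray(arguments);
  var value = args.pop(); // last argument is the new value
  var paths = args;       // the rest are the path segments

  // Clone the prior state so the old tree is never mutated,
  // then assign the value at the given path.
  var nextState = _.cloneDeep(this.state);
  _.set(nextState, paths, value);

  this.setState(nextState);
}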
If you have many (value, onChange) pairs that you have to pass around everywhere, it might make sense to define an abstraction around this similar to ReactLink, but I personally got pretty far without using ReactLink.
My colleagues and I recently open sourced wingspan-forms, a React library that helps with deeply nested state. We leverage this approach heavily. You can see more example demos with linked state on the GitHub page.
I wrote a blogpost about it: http://blog.sendsonar.com/2015/08/04/angular-like-deep-path-data-bindings-in-react/
But basically I created a new component that would accept the 'state' of parent and a deep path, so you don't have to write extra code.
<MagicInput binding={[this, 'account.owner.email']} />
There's a JSFiddle too so you can play with it
Here's the tutorial explaining how to handle things like this.
State and Forms in React, Part 3: Handling the Complex State
TL;DR:
0) Don't use standard links. Use these.
1) Change your state to look like this:
collection : [
{type: "translateX", x: 10},
{type: "scaleX", x: 1.2}
]
2) Take link to the collection:
var collectionLink = Link.state( this, 'collection' );
3) Iterate through the links to its elements:
collectionLink.map(function( itemLink, i ) {
return <div><input valueLink={itemLink} /></div>
})
I took a different approach which does not employ mixins and does not automatically mutate the state:
See github.com/mcmlxxxviii/react-value-link