"Unsubscribe" function callback/hook in Observable "executor" function - javascript

I am confused about what the "dispose" or "unsubscribe" function is for, the one optionally returned from an observable "executor" function, like so:
const Rx = require('rxjs');

const obs = Rx.Observable.create(obs => {
  // we are in the Observable "executor" function
  obs.next(4);
  // we return this function, which gets called if we unsubscribe
  return function () {
    console.log('disposed');
  };
});
const s1 = obs.subscribe(
  function (v) {
    console.log(v);
  },
  function (e) {
    console.log(e);
  },
  function () {
    console.log('complete');
  }
);

const s2 = obs.subscribe(
  function (v) {
    console.log(v);
  },
  function (e) {
    console.log(e);
  },
  function () {
    console.log('complete');
  }
);
s1.unsubscribe();
s2.unsubscribe();
What confuses me is that such a function would actually be more likely to hold on to references in your code and therefore prevent garbage collection.
Can anyone tell me what the purpose is of returning a function in that scenario, what the function is called, and what its signature is? I am having trouble finding information about it.
I also see much more complex examples of returning a subscription from the executor function, for example this:
let index = 0;
let obsEnqueue = this.obsEnqueue = new Rx.Subject();

this.queueStream = Rx.Observable.create(obs => {
  const push = Rx.Subscriber.create(v => {
    if ((index % obsEnqueue.observers.length) === obsEnqueue.observers.indexOf(push)) {
      obs.next(v);
    }
  });
  return obsEnqueue.subscribe(push);
});
This seems to return a subscription instead of just a plain function. Can anyone explain what's going on with this?
To make it a clear question, what is the difference between doing this:
const sub = new Rx.Subject();
const obs = Rx.Observable.create($obs => {
  $obs.next(4);
  return sub.subscribe($obs);
});
and not returning the result of the subscribe call:
const sub = new Rx.Subject();
const obs = Rx.Observable.create($obs => {
  $obs.next(4);
  sub.subscribe($obs);
});

The unsubscribe (teardown) function that Rx.Observable.create expects you to return is invoked when downstream no longer listens to the stream, giving you a chance to clean up resources.
Regarding your question: .subscribe() returns a Subscription, on which you can call .unsubscribe(). So if you want cleanup to be tied to another subscription, you can return that subscription to your downstream:
const obs = Rx.Observable.create($obs => {
  const timer = Rx.Observable.interval(300)
    .do(i => console.log('emission: ' + i));
  return timer.subscribe($obs);
});

obs.take(4).subscribe(i => console.log('outer-emission:' + i));
<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/5.0.2/Rx.js"></script>
Without a real unsubscribe function you would stop listening to the observable, but the interval created internally would keep running:
const obs = Rx.Observable.create($obs => {
  const timer = Rx.Observable.interval(300)
    .do(i => console.log('emission: ' + i))
    .take(10)
    .subscribe(
      val => $obs.next(val),
      err => $obs.error(err),
      () => $obs.complete()
    );
  return function () {}; // empty unsubscribe function, internal subscription will keep on running
});

obs.take(4).subscribe(i => console.log('outer-emission:' + i));
<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/5.0.2/Rx.js"></script>
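The teardown contract is easy to see without RxJS. Below is a deliberately simplified model of create/subscribe/unsubscribe, an illustration of the pattern only, not the actual library internals:

```javascript
// Simplified model of the Observable teardown contract (not RxJS internals):
// subscribe() runs the executor; unsubscribe() invokes whatever the
// executor returned.
function createObservable(executor) {
  return {
    subscribe(next) {
      const teardown = executor({ next }) || (() => {});
      return { unsubscribe: teardown };
    }
  };
}

const log = [];
const obs = createObservable(observer => {
  const id = setInterval(() => observer.next('tick'), 1000);
  // the returned teardown releases the resource the executor acquired
  return () => {
    clearInterval(id);
    log.push('disposed');
  };
});

const sub = obs.subscribe(v => log.push(v));
sub.unsubscribe();
console.log(log); // ['disposed']: the interval never fired, but it was cleaned up
```

This is why the teardown helps rather than hurts garbage collection: it is the one place where the executor can release the references (timers, sockets, inner subscriptions) it created.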

Related

Promisify a function that creates a mongoose model instance (i.e. a document), saves it to the model, and then returns the model

I'm trying to Promisify the following function
let Definition = mongoose.model('Definition', mySchema);

let saveDefinition = (newDefinition) => {
  var newDef = new Definition(newDefinition);
  newDef.save();
  return Definition.find();
};
to achieve the following sequence of events
let saveDefinition = (newDefinition) => {
  return new Promise((res, rej) => {
    // newDef = new Definition(newDefinition)
    // then
    // newDef.save()
    // then
    // return Definition.find()
  });
};
The goal is to invoke this function upon a request from the client, save a document to the model called "Definition", and return all of the documents within the model back to the client. Any help or guidance would be greatly appreciated.
I'm not really sure on how to approach the problem
a function that creates a mongoose model instance (i.e. a document), saves it to the model, and then returns the model
There is nothing special you need to do. .save() already returns (a promise for) the saved document. Return it.
const Definition = mongoose.model('Definition', mySchema);

const saveDefinition = (data) => {
  const newDef = new Definition(data);
  return newDef.save();
};
Done.
I would write it differently to get rid of that global Definition variable:
const saveObject = modelName => {
  const Model = mongoose.model(modelName, mySchema);
  return (data) => new Model(data).save();
};

const saveDefinition = saveObject('Definition');
const saveWhatever = saveObject('Whatever');
Usage is the same in both cases
saveDefinition({ /* ... */ }).then(def => {
  // success
}).catch(err => {
  // failure
});
or
async () => {
  try {
    const def = await saveDefinition({ /* ... */ });
    // success
  } catch (err) {
    // failure
  }
};
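The core of the answer (return the promise from save() instead of swallowing it) can be modeled without mongoose at all. FakeModel below is a stand-in for a mongoose model, not the mongoose API:

```javascript
// FakeModel stands in for a mongoose model; its save() resolves to the
// saved document, just as mongoose's Document.prototype.save() does.
class FakeModel {
  constructor(data) { this.data = data; }
  save() { return Promise.resolve({ ...this.data, _id: 1 }); }
}

// Return the promise; the caller decides how to consume it.
const saveDefinition = (data) => new FakeModel(data).save();

saveDefinition({ word: 'example' })
  .then(def => console.log(def._id)); // 1
```

The point is that there is no need to wrap an already-promise-returning API in `new Promise(...)`; just return what it gives you.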

How to return a closure from an async function and assign it to useState?

I'm trying to return a function (a closure) from an async function. So far, the way I have things, space is always a promise and not a function, even though it should be a closure.
// ...
const [space, set_space] = React.useState(false);

React.useEffect(
  () => {
    API.SERVER(query, PAGE_SIZE)
      .then(response => set_space(response));
  },
  [query]
);

React.useEffect(
  () => {
    console.log(space.constructor);
    if (space) {
      space(page)
        .then(response => set_titles(Array.from(response.data)));
    }
  },
  [space, page]
);
// ...
API
// ...
export default {
  SERVER: async function (search, width) {
    if (!search) {
      // return () => {};
      search = DEFAULT_QUERY;
    }
    const all = (await axios.get(SEARCH_URL + `?search=${search}`)).data;
    console.log(all);
    const pages = chunk(all, width);
    return function (results) {
      const dex = (results > width ? results / width : 0);
      const next = pages[dex];
      return axios.get(SEARCH_URL + `?fromids=${next}`);
    };
  }
};
space.constructor says it's a Promise, but it should be a function that returns axios' Promise, not a Promise itself. I can't really change the way the API is set up, because I need to hold the index client-side and then walk through it by changing space's parameter.
The problem is that set_space doesn't handle functions the same way as other values.
That's because React's state setters can take a callback that receives the previous state as an argument.
However, React has no way of differentiating between a callback and a value that happens to be a function. So React assumes it's a callback, calls it, and sets the state to its return value, which here is another Promise.
To put the function itself into the state, you can create a callback that returns the function, like this:
API.SERVER(query, PAGE_SIZE)
  .then(response => set_space(() => response));
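Why the extra arrow fixes it can be shown with a minimal stand-in for React's setter. This is a simplified model of the updater-function behavior, not React's actual implementation:

```javascript
// Minimal model of a React state setter: a function argument is treated
// as an updater and *called* with the previous state; anything else is
// stored as-is.
let state;
const set_state = (next) => {
  state = typeof next === 'function' ? next(state) : next;
};

const pager = (page) => 'results for page ' + page;

set_state(pager);           // pager is mistaken for an updater and called
console.log(typeof state);  // 'string'

set_state(() => pager);     // the wrapper is called; it returns pager itself
console.log(typeof state);  // 'function'
```

After the wrapped call, the state holds the function you wanted, ready to be invoked later with a page number.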
Hmmm... you just want a function that, when called, returns the promise? What about useCallback then?
const triggerAsync = React.useCallback(
  () => {
    return API.SERVER(query, PAGE_SIZE)
      .then(response => set_space(response));
  },
  [query]
);
triggerAsync should then return a promise when you execute it.

Using promise in loop results in Promise failure

I'd like to reuse the same code in a loop. This code contains promises. However, when iterating, this code results in an error.
I've tried using for and while loops. There seems to be no issue when I use the for loop for a single iteration.
Here is a minimal version of my code:
var search_url = /* Some initial URL */
var glued = "";

for (var i = 0; i < 2; i++) {
  const prom = request(search_url)
    .then(function success(response /* An array from an XMLHttpRequest */) {
      if (/* Some condition */) {
        search_url = /* Gets next URL */
        glued += processQuery(response[0]);
      } else {
        console.log("Done.");
      }
    })
    .catch(function failure(err) {
      console.error(err.message); // TODO: do something w/ error
    });
}

document.getElementById('api-content').textContent = glued;
I expect the results to append to the variable glued, but instead I get an error (failure Promise.catch (async) (anonymous)) after the first iteration of the loop.
Answer:
You can use Symbol.iterator together with for await...of to execute your promises one after another. This can be packaged up into a constructor; in the example below it's called Serial (because we're going through the promises one by one, in order).
function Serial(promises = []) {
  return {
    promises,
    resolved: [],
    addPromise: function (fn) {
      promises.push(fn);
    },
    resolve: async function (cb = i => i, err = (e) => console.log("trace: Serial.resolve " + e)) {
      try {
        for await (let p of this[Symbol.iterator]()) {}
        return this.resolved.map(cb);
      } catch (e) {
        err(e);
      }
    },
    [Symbol.iterator]: async function* () {
      this.resolved = [];
      for (let promise of this.promises) {
        let p = await promise().catch(e => console.log("trace: Serial[Symbol.iterator] :: " + e));
        this.resolved.push(p);
        yield p;
      }
    }
  };
}
What is the above?
It's a constructor called Serial.
It takes as an argument an array of functions that return Promises.
The functions are stored in Serial.promises.
It has an empty array stored in Serial.resolved; this will store the resolved promise results.
It has two methods:
addPromise: takes a function that returns a Promise and adds it to Serial.promises.
resolve: asynchronously drives the custom Symbol.iterator. The iterator goes through every single promise, waits for it to settle, and pushes the result onto Serial.resolved. Once this is completed, resolve returns Serial.resolved mapped through a callback. This allows you to simply call resolve and then work with the array of responses, e.g. .resolve().then(resolved_requests => /* do something with resolved_requests */).
Why does it work?
Although many people don't realize it, Symbol.iterator is much more powerful than a standard for loop, for two big reasons.
The first reason, and the one applicable in this situation, is that it allows asynchronous calls that can affect the state of the applied object.
The second reason is that it can be used to provide two different views of the data in the same object. E.g. you may have an array whose contents you would like to read:
let arr = [1,2,3,4];
You can use a for loop or forEach to get the data:
arr.forEach(v => console.log(v));
// 1, 2, 3, 4
But if you adjust the iterator:
arr[Symbol.iterator] = function* () {
  yield* this.map(v => v + 1);
};
You get this:
arr.forEach(v => console.log(v));
// 1, 2, 3, 4
for(let v of arr) console.log(v);
// 2, 3, 4, 5
This is useful for many different reasons, including timestamping requests/mapping references, etc. If you'd like to know more please take a look at the ECMAScript Documentation: For in and For Of Statements
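The piece doing the heavy lifting in Serial is an async generator consumed with for await...of, which waits for each promise to settle before asking for the next one. Stripped to its core, the mechanism looks like this (a minimal sketch, separate from the Serial code above):

```javascript
// Runs promise-returning functions strictly one after another.
async function* inOrder(fns) {
  for (const fn of fns) {
    yield await fn(); // the next fn() does not start until this one settles
  }
}

(async () => {
  const fns = [1, 2, 3].map(n => () => Promise.resolve(n * 10));
  const results = [];
  for await (const value of inOrder(fns)) results.push(value);
  console.log(results); // [10, 20, 30]
})();
```

Because the functions, not the promises, are stored, nothing starts running until the loop pulls it; that is the same reason Serial takes functions that return promises rather than promises themselves.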
Use:
It can be used by calling the constructor with an array of functions that return Promises. You can also add promise-returning functions to the object by using
new Serial([])
.addPromise(() => fetch(url))
It doesn't run the Function Promises until you use the .resolve method.
This means that you can add promises ad hoc before you do anything with the asynchronous calls. E.g. these two are equivalent:
With addPromise:
let promises = new Serial([() => fetch(url), () => fetch(url2), () => fetch(url3)]);
promises.addPromise(() => fetch(url4));
promises.resolve().then((responses) => responses)
Without addPromise:
let promises = new Serial([() => fetch(url), () => fetch(url2), () => fetch(url3), () => fetch(url4)])
.resolve().then((responses) => responses)
Data:
Since I can't really replicate your data calls, I opted for JSONPlaceholder (a fake online rest api) to show the promise requests in action.
The data looks like this:
let searchURLs = ["https://jsonplaceholder.typicode.com/todos/1",
                  "https://jsonplaceholder.typicode.com/todos/2",
                  "https://jsonplaceholder.typicode.com/todos/3"]
  // since our constructor takes functions that return promises, I map over the URLs:
  .map(url => () => fetch(url));
To get the responses we can call the above data using our constructor:
let promises = new Serial(searchURLs)
  .resolve()
  .then((resolved_array) => console.log(resolved_array));
Our resolved_array gives us an array of fetch Response objects. You can see that here:
// Serial constructor repeated here verbatim; see the definition above.
let searchURLs = ["https://jsonplaceholder.typicode.com/todos/1", "https://jsonplaceholder.typicode.com/todos/2", "https://jsonplaceholder.typicode.com/todos/3"].map(url => () => fetch(url));
let promises = new Serial(searchURLs).resolve().then((resolved_array) => console.log(resolved_array));
Getting Results to Screen:
I opted to use a closure function to simply add text to an output HTMLElement.
This is added like this:
HTML:
<output></output>
JS:
let output = ((selector) => (text) => document.querySelector(selector).textContent += text)("output");
Putting it together:
If we use the output snippet along with our Serial object the final functional code looks like this:
let promises = new Serial(searchURLs).resolve()
.then((resolved) => resolved.map(response =>
response.json()
.then(obj => output(obj.title))));
What's happening above is this:
we pass in all our functions that return promises: new Serial(searchURLs)
we tell it to resolve all the requests: .resolve()
after it resolves all the requests, we tell it to map over the array: .then(resolved => resolved.map(...))
we turn each response into an object using the .json() method (necessary for JSON, but may not be necessary for you)
after this is done, we use .then(obj => ...) to do something with each computed response
we output the title to the screen using output(obj.title)
Result:
let output = ((selector) => (text) => document.querySelector(selector).textContent += text)("output");
// Serial constructor repeated here verbatim; see the definition above.
let searchURLs = ["https://jsonplaceholder.typicode.com/todos/1", "https://jsonplaceholder.typicode.com/todos/2", "https://jsonplaceholder.typicode.com/todos/3"].map(url => () => fetch(url));
let promises = new Serial(searchURLs).resolve()
.then((resolved) => resolved.map(response =>
response.json()
.then(obj => output(obj.title))));
<output></output>
Why go this route?
It's reusable, functional, and if you import the Serial Constructor you can keep your code slim and comprehensible. If this is a cornerstone of your code, it'll be easy to maintain and use.
Using it with your code:
To fully answer your question and make things concrete, here is how to use this with your code specifically.
NOTE: glued will be populated with the requested data, but it's unnecessary. I left it in because you may want it stored for a reason outside the scope of your question, and I don't want to make assumptions.
// setup urls:
var search_urls = ["https://jsonplaceholder.typicode.com/todos/1", "https://jsonplaceholder.typicode.com/todos/2"];
var request = (url) => () => fetch(url);
let my_requests = new Serial(search_urls.map(request));

// setup glued (you don't really need to, but if for some reason you want the info stored...)
var glued = "";

// setup helper function to grab the title (this is necessary for my specific data)
var addTitle = (req) => req.json().then(obj => (glued += obj.title, document.getElementById('api-content').textContent = glued));

// put it all together:
my_requests.resolve().then(requests => requests.map(addTitle));
Using it with your code - Working Example:
// Serial constructor repeated here verbatim; see the definition above.
// setup urls:
var search_urls = ["https://jsonplaceholder.typicode.com/todos/1", "https://jsonplaceholder.typicode.com/todos/2"];
var request = (url) => () => fetch(url);
let my_requests = new Serial(search_urls.map(request));

// setup glued (you don't really need to, but if for some reason you want the info stored...)
var glued = "";

// setup helper function to grab the title (this is necessary for my specific data)
var addTitle = (req) => req.json().then(obj => (glued += obj.title, document.getElementById('api-content').textContent = glued));

// put it all together:
my_requests.resolve().then(requests => requests.map(addTitle));
<div id="api-content"></div>
Final Note
It's likely that we will be seeing a prototypal change to the Promise object in the future that allows for easy serialization of Promises. Currently (7/15/19) there is a TC39 proposal that adds a lot of functionality to the Promise object, but it hasn't been fully vetted yet, and as with many ideas trapped within the proposal stage, it's almost impossible to tell when it will be implemented in browsers, or whether the idea will stagnate and fall off the radar.
Until then, workarounds like this are necessary and useful (the reason I even went through the motions of constructing this Serializer object was for a transpiler I wrote in Node, but it's been very helpful beyond that!), but do keep an eye out for any changes, because you never know!
Hope this helps! Happy Coding!
Your best bet is probably going to be building up that glued variable with recursion.
Here's an example using recursion with a callback function:
var glued = "";

requestRecursively(/* Some initial URL string */, function () {
  document.getElementById('api-content').textContent = glued;
});

function requestRecursively(url, cb) {
  request(url).then(function (response) {
    if (/* Some condition */) {
      glued += processQuery(response[0]);
      var next = /* Gets next URL string */;
      if (next) {
        // There's another URL. Make another request.
        requestRecursively(next, cb);
      } else {
        // We're done. Invoke the callback.
        cb();
      }
    } else {
      console.log("Done.");
    }
  }).catch(function (err) {
    console.error(err.message);
  });
}
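Since request returns a promise, the same sequential walk can also be written with async/await, which avoids the explicit recursion. A sketch: the in-memory pages object, the next property, and the stubbed request/processQuery below are stand-ins for the question's API and its next-URL logic:

```javascript
// Stand-ins for the question's API: request(url) resolves to an array
// whose first element is the page; processQuery extracts its text.
const pages = {
  'url-1': { data: 'first ', next: 'url-2' },
  'url-2': { data: 'second', next: null },
};
const request = (url) => Promise.resolve([pages[url]]);
const processQuery = (page) => page.data;

async function requestAll(url) {
  let glued = '';
  while (url) {               // a falsy next URL ends the loop
    const [page] = await request(url);
    glued += processQuery(page);
    url = page.next;
  }
  return glued;
}

requestAll('url-1').then(glued => console.log(glued)); // "first second"
```

Each await pauses the loop until the current request settles, which is exactly the ordering the original for loop failed to provide.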

Wait for mongo writes before returning from a find

I want a store js object that manages a mongodb collection and behaves like that:
store.insert(thing); // called from a pub/sub system that doesn't wait for the insert to finish
store.get(); // returns a promise that resolves to the things in the collection;
             // even if called immediately after insert, it must contain the last thing inserted
I implemented it manually like that:
let inserts = 0;
let afterInserts = [];

const checkInsertsFinished = () => {
  if (inserts === 0) {
    afterInserts.forEach(resolve => resolve());
    afterInserts = [];
  }
};

const decrementInserts = () => {
  inserts -= 1;
  checkInsertsFinished();
};

const insertsFinished = () =>
  new Promise((resolve) => {
    afterInserts.push(resolve);
    checkInsertsFinished();
  });

const insert = (thing) => {
  inserts += 1;
  db.collection('mycollection').insertOne(thing).then(decrementInserts);
};

const get = async () => {
  await insertsFinished(); // if there are inserts happening, wait for them to finish
  return db.collection('mycollection').find({}).toArray();
};

return { insert, get };
I suppose that there are more standard ways to accomplish this, but I'm missing the vocabulary to find libraries or built-in features... How would you do that?
Thanks for your advice.
JavaScript is single-threaded; none of the code you write runs concurrently on multiple threads, so you should be able to do it this way:
let inserting = Promise.resolve(),
    startGetting = {};

const insert = (thing) => {
  startGetting = {}; // new reference: invalidates any get in progress
  inserting = db.collection('mycollection').insertOne(thing);
  return inserting;
};

const get = () => {
  const rec = () =>
    inserting.then(
      _ =>
        new Promise((resolve, reject) => {
          // the next op is truly async (although I wonder why no promise);
          // someone can insert while it is completing
          const start = startGetting;
          db.collection('mycollection').find({}).toArray(
            (err, results) =>
              err
                ? reject(err)
                // an insert could have started by the time this completed
                // (in theory), so use startGetting reference equality
                : startGetting === start
                  ? resolve(results) // while getting, nobody inserted
                  : resolve(rec())
          );
        })
    );
  return rec();
};

return { insert, get };
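Another common way to get the same guarantee is to funnel every operation through one shared promise chain, so a get never starts before the inserts queued ahead of it finish. A sketch of that idea, with plain async functions and an in-memory array standing in for the MongoDB calls:

```javascript
// makeStore serializes operations on a single promise chain.
// fakeInsert stands in for insertOne; the array stands in for the collection.
const makeStore = () => {
  const things = [];
  let pending = Promise.resolve(); // tail of the chain

  const fakeInsert = (thing) =>
    new Promise(resolve => setTimeout(() => { things.push(thing); resolve(); }, 10));

  const insert = (thing) => {
    pending = pending.then(() => fakeInsert(thing));
    return pending;
  };

  // get waits for everything queued so far before reading
  const get = () => pending.then(() => things.slice());

  return { insert, get };
};

const store = makeStore();
store.insert('a');
store.insert('b');
store.get().then(all => console.log(all)); // ['a', 'b']
```

The trade-off is that inserts are now serialized with respect to each other as well, which the counter-based version above avoids.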

Toggling API call by a stream

Here:
import Rx from 'rxjs';

function fakeApi(name, delay, response) {
  return new Rx.Observable(observer => {
    console.log(`${name}: Request.`);
    let running = true;
    const id = setTimeout(() => {
      console.log(`${name}: Response.`);
      running = false;
      observer.next(response);
      observer.complete();
    }, delay);
    return () => {
      if (running) console.log(`${name}: Cancel.`);
      clearTimeout(id);
    };
  });
}
function apiSearch() { return fakeApi('Search', 4000, "This is a result of the search."); }
//============================================================
const messages$ = new Rx.Subject();
const toggle$ = messages$.filter(m => m === 'toggle');

const searchDone$ = toggle$.flatMap(() =>
  apiSearch().takeUntil(toggle$)
);

searchDone$.subscribe(m => console.log('Subscriber:', m));

setTimeout(() => {
  // This one starts the API call.
  toggle$.next('toggle');
}, 2000);

setTimeout(() => {
  // This one should only cancel the API call in progress, not start a new one.
  toggle$.next('toggle');
}, 3000);

setTimeout(() => {
  // And this should start a new request again...
  toggle$.next('toggle');
}, 9000);
my intent is to start the API call, and to stop it while it is in progress, by the same toggle$ signal. The problem with the code is that toggle$ starts a new API call every time. I would like it not to start a new call when one is already running, but only to stop the one in progress. Somehow I should "unsubscribe" the outermost flatMap from the toggle$ stream while apiSearch() is running. I guess the code needs restructuring to achieve this behaviour... What is the RxJS way of doing that?
UPDATE: After some more investigations and user guide lookups, I came with this:
const searchDone$ = toggle$.take(1).flatMap(() =>
  apiSearch().takeUntil(toggle$)
).repeat();
Works like it should. Still feels a little cryptic, though. Is this how you RxJS guys would solve it?
I think your solution will work only once since you're using take(1). You could do it like this:
const searchDone$ = toggle$
  .let(observable => {
    let pending;
    return observable
      .switchMap(() => {
        let innerObs;
        if (pending) {
          innerObs = Observable.empty();
        } else {
          pending = innerObs = apiSearch();
        }
        return innerObs.finally(() => pending = null);
      });
  });
I'm using let() only to wrap pending without declaring it in the parent scope. The switchMap() operator unsubscribes for you automatically, without using take*().
The output with your test setTimeouts will be as follows:
Search: Request.
Search: Cancel.
Search: Request.
Search: Response.
Subscriber: This is a result of the search.
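The discipline the let/switchMap code enforces can also be stated without RxJS: a toggle either starts the work or cancels the in-flight work, never both. A plain-JavaScript model of that logic (start and the fake API below are illustrative stand-ins, not the RxJS mechanics):

```javascript
// A toggle either starts the work or cancels the work in flight.
function makeToggler(start) {
  let cancel = null; // non-null while a call is in flight
  return () => {
    if (cancel) {
      const c = cancel;
      cancel = null;
      c(); // in flight: cancel it, do not start a new one
    } else {
      cancel = start(() => { cancel = null; }); // idle: start
    }
  };
}

// Fake API: start(done) begins work and returns a cancel function.
const log = [];
let finish; // call to simulate the response arriving
const start = (done) => {
  log.push('Request');
  finish = () => { log.push('Response'); done(); };
  return () => log.push('Cancel');
};

const toggle = makeToggler(start);
toggle();         // Request
toggle();         // Cancel (a call was in flight)
toggle();         // Request again
finish();         // Response arrives
console.log(log); // ['Request', 'Cancel', 'Request', 'Response']
```

This matches the output above: the second toggle cancels rather than starts, and the third starts fresh once nothing is pending.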