I am using Apache Cordova and I've run into a multi-platform issue in regards to the Promise object.
Currently, I have to instantiate a promise like this:
var promise = new Promise(function (resolve, reject) {
    //Implementation
});
This is fine, however if the app is running on the Windows platform, I have to use WinJS instead. Like this:
var promise = new WinJS.Promise(function (resolve, reject) {
    //Implementation
});
This results in the following code:
var promise;
if (cordova.platformId == "windows") {
    promise = new WinJS.Promise(function (resolve, reject) {
        //Implementation
    });
}
else {
    promise = new Promise(function (resolve, reject) {
        //Exactly the same implementation as above
    });
}
The main issue here is that I am duplicating the implementation inside each promise, resulting in two blocks of code that are exactly the same, which makes it harder to maintain.
Is there a way I can instantiate the correct Promise based on the current platform without having to duplicate the code?
If Promise doesn't exist, you could just assign WinJS.Promise to it and then use Promise like you normally would.
Like:
if (typeof Promise === 'undefined' && cordova.platformId === 'windows') {
Promise = WinJS.Promise; // global assignment
}
// At this point you can use new Promise() as usual
Since you are developing in JS/Angular, why not use the Angular promise implementation?
I mean $q, an implementation of promises/deferred objects.
See the docs for $q.
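For example, a minimal sketch assuming $q has been injected into your controller or service (the values are just illustrative):
// deferred style
var deferred = $q.defer();
deferred.resolve(42);
deferred.promise.then(function (value) {
    console.log(value); // 42
});

// constructor style (AngularJS 1.3+)
var promise = $q(function (resolve, reject) {
    resolve(42);
});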
I have this jQuery code fragment:
$.get('/api/' + currentPage).done(function(data) { ... })
.fail(...)
I want to replace $.get('/api/'+currentPage) with a promise that always succeeds and returns a specific value for data. Something like:
let myData = { ... } // value of data I want to pass to the done function
(new AlwaysSucceeds(myData)).done(function(data) { ... })
.fail(...)
I could cobble up a dummy object, or I could extract out the done function but I want to keep changes to the code to a minimum.
Is there a way to do this?
UPDATE: To help clarify what's going on, the code I am working with is (here). Normally this app is served from a nodejs server which implements the /api/... call, but I am converting it to be served from a static page server. I know what is going to be returned from the $.get call. To keep changes to the code clean, I simply want to change that line to:
let myData = {...}
// $.get('/api/' + currentPage) -- comment out the $.get call
(SOMETHINGHERE(myData)).done(function(data) {
The SOMETHINGHERE expression needs to implement .done(f), which will call the function f with myData and then return some object which implements .fail(...) that does nothing.
You can just replace $.get(...) with a function that returns a promise that is already resolved with the data you already have. And, the shortest way to get an already resolved jQuery promise, resolved with a particular value, is this:
$.when(myData).done(...)
The more text book way to do it in jQuery is:
$.Deferred().resolve(myData).done(...)
And, if you care to switch your logic to the ES6 standard (instead of the non-standard jQuery promise behaviors), then you could use this:
Promise.resolve(myData).then(...).catch(...)
You can achieve this by implementing an AlwaysSucceeds constructor function. Please see the example below.
function AlwaysSucceeds(data) {
    this.data = data;
}
AlwaysSucceeds.prototype.done = function(fn) {
    fn(this.data);
    return this;
}
AlwaysSucceeds.prototype.fail = function(fn) {
    return this;
}
var myData = {
    a: 1
};
(new AlwaysSucceeds(myData)).done(function(data) {
    console.log(data)
}).fail(function(data){
})
Since jQuery Ajax functions just return $.Deferred objects, you can just substitute an immediately-resolved Deferred:
$.Deferred().resolve(myData).then(...)
In this particular case, if you want to make it easy to switch between synchronous and asynchronous code, and you have access to async/await (the snippets below are assumed to run inside an async function), you can just use those directly:
try {
    const data = await Promise.resolve($.get('/api/' + currentPage));
    // code in done
} catch (err) {
    // code in fail
}
would become
try {
    const data = myData;
    // code in done
} catch (err) {
    // code in fail (never runs unless other code throws exceptions)
}
It's not clear what you actually want, but be careful using jQuery Deferred with native promises; the Deferred has some non-standard methods that native promises don't have.
So to be safe, I always assume there is a thenable: something that has a then, with which you can pretty much do whatever you want.
jQuery Deferreds do not behave like native promises either (depending on the version):
$.Deferred().reject("hello world")
.then(
undefined
,x=>x
)
.then(
x=>console.log("Never happens",x)
)
Promise.reject("hello world")
.then(
undefined
,x=>x
);
.then(
x=>console.log("Well behaved",x)
);
Promise.resolve().then(x=>{throw "nope"})
.then(undefined,err=>console.warn(err));
$.Deferred().resolve().then(x=>{throw "nope"})//crashes
.then(undefined,err=>err);
So it will be safer to use native promises and polyfill with something that behaves like the native implementation.
To answer the question about a non-failing promise: if you want to make a request but return a default when it rejects, and keep returning the same result once it has resolved or rejected, you can do:
const get = (p => url => {
    p = p ||
        //return native promise or properly polyfilled one
        Promise.resolve($.get(url))
        .then(
            undefined,
            _ => ({defaultobject: true})
        );
    return p;
})();
Your get function will return a native promise, so there is no fail, done or the other non-standard methods. When combining "promises" from different libraries with native promises, it is best to only use then.
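A usage sketch under those assumptions, reusing the URL from the question:
get('/api/' + currentPage).then(data => {
    // data is the API response, or {defaultobject: true} if the request rejected
    console.log(data);
});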
Cannot figure out the proper way to define a js API wrapper for a native promise created for react native (mainly following the original docs). Defining the native function like this:
RCT_REMAP_METHOD(getMyPromise,
                 getMyPromiseResolver:(RCTPromiseResolveBlock)resolve
                 rejecter:(RCTPromiseRejectBlock)reject)
{
    NSString *result = @"Kept!";
    if (result) {
        resolve(result);
    } else {
        reject(@"no_status", @"My promise failed",
               [NSError errorWithDomain:@"MyModule" code:200 userInfo:nil]);
    }
}
Accessing the native function directly works without any problems (as given in the example in the docs):
var NativeRNMyModule = require('NativeModules').RNMyModule;
async function localGetMyPromise() {
    try {
        var result = await NativeRNMyModule.getMyPromise();
        console.log("MyPromise kept: ", result);
    } catch (e) {
        console.log("MyPromise rejected: ", e.message);
    }
}
localGetMyPromise();
However, since I have a javascript wrapper API for all my other functions in index.js of the module, I would like to have one also for native promises. I've been through a few dozen iterations, this being the simplest one:
var NativeRNMyModule = require('NativeModules').RNMyModule;
var RNMyModule = {
    getMyPromise: async function() {
        NativeRNMyModule.getMyPromise();
    }
};
module.exports = RNMyModule;
Other iterations include adding an await before the call to NativeRNMyModule.getMyPromise(), as well as creating a completely new intermediate promise. Even with the simplest approach, the promise functions accessed directly and through the wrapper look otherwise identical on the console, but the one coming through the wrapper has a return value of undefined. Consequently, the result of querying through the wrapper is also undefined.
What is the best way to create a working pass-through wrapper for a promise?
When using Promises, why can't triggers for resolve and reject be defined elsewhere in the codebase?
I don't understand why resolve and reject logic should be localized where the promise is declared. Is this an oversight, or is there a benefit to mandating the executor parameter?
I believe the executor function should be optional, and that its existence should determine whether the promise encapsulates resolution or not. The promise would be much more extensible without such mandates, since you wouldn't have to initiate the async work right away. The promise should also be resettable. It's a one-shot switch, 1 or 0, resolve() or reject(). There are a multitude of parallel and sequential outcomes that can be attached: promise.then(parallel1) and promise.then(parallel2), and also promise.then(seq1).then(seq2), but reference-privileged players cannot resolve/reject INTO the switch.
You can construct a tree of outcomes at a later time, but you can't alter them, nor can you alter the roots (the input triggers).
Honestly, the tree of sequential outcomes should be editable as well: say you want to splice out one step and do something else instead, after you've declared many promise chains. It doesn't make sense to reconstruct the promise and every sequential function, especially since you can't reject or destroy the promise either...
This is called the revealing constructor pattern coined by Domenic.
Basically, the idea is to give you access to parts of an object while that object is not fully constructed yet. Quoting Domenic:
I call this the revealing constructor pattern because the Promise constructor is revealing its internal capabilities, but only to the code that constructs the promise in question. The ability to resolve or reject the promise is only revealed to the constructing code, and is crucially not revealed to anyone using the promise. So if we hand off p to another consumer, say
The past
Initially, promises worked with deferred objects; this is true of the Twisted promises that JavaScript promises originated from. This is still true (but often deprecated) in older implementations like Angular's $q, Q, jQuery and old versions of Bluebird.
The API went something like:
var d = Deferred();
d.resolve();
d.reject();
d.promise; // the actual promise
It worked, but it had a problem. Deferreds and the promise constructor are typically used for converting non-promise APIs to promises. There is a "famous" problem in JavaScript called Zalgo - basically, it means that an API must be synchronous or asynchronous but never both at once.
The thing is - with deferreds it's possible to do something like:
function request(param) {
    var d = Deferred();
    var options = JSON.parse(param);
    // assume some callback-style ajax(options, callback) helper
    ajax(options, function(err, value) {
        if (err) d.reject(err);
        else d.resolve(value);
    });
    return d.promise;
}
There is a hidden subtle bug here - if param is not a valid JSON this function throws synchronously, meaning that I have to wrap every promise returning function in both a } catch (e) { and a .catch(e => to catch all errors.
The promise constructor catches such exceptions and converts them to rejections which means you never have to worry about synchronous exceptions vs asynchronous ones with promises. (It guards you on the other side by always executing then callbacks "in the next tick").
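For comparison, here is a sketch of the same request() written with the Promise constructor (assuming the same hypothetical callback-style ajax(options, callback) helper); the synchronous throw from JSON.parse simply becomes a rejection:
function request(param) {
    return new Promise(function(resolve, reject) {
        var options = JSON.parse(param); // if this throws, the promise rejects
        ajax(options, function(err, value) {
            if (err) reject(err);
            else resolve(value);
        });
    });
}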
In addition, deferreds require an extra type that every developer has to learn about, where the promise constructor does not, which is pretty nice.
FYI, if you're dying to use the deferred interface rather than the Promise executor interface despite all the good reasons against the deferred interface, you can code one trivially once and then use it everywhere (personally I think it's a bad idea to code this way, but your volume of questions on this topic suggests you think differently, so here it is):
function Deferred() {
    var self = this;
    var p = this.promise = new Promise(function(resolve, reject) {
        self.resolve = resolve;
        self.reject = reject;
    });
    this.then = p.then.bind(p);
    this.catch = p.catch.bind(p);
    if (p.finally) {
        this.finally = p.finally.bind(p);
    }
}
Now, you can use the interface you seem to be asking for:
var d = new Deferred();
d.resolve();
d.reject();
d.promise; // the actual promise
d.then(...) // can use .then() on either the Deferred or the Promise
d.promise.then(...)
Here is a slightly more compact ES6 version:
function Deferred() {
    const p = this.promise = new Promise((resolve, reject) => {
        this.resolve = resolve;
        this.reject = reject;
    });
    this.then = p.then.bind(p);
    this.catch = p.catch.bind(p);
    if (p.finally) {
        this.finally = p.finally.bind(p);
    }
}
Or, you can do what you asked for in your question using this Deferred() constructor:
var request = new Deferred();
request.resolve();
request.then(handleSuccess, handleError);
But, it has the downsides pointed out by Benjamin and is not considered the best way to code promises.
I know that it's not the best approach to overwrite a native JS API, and I'm doing it more as an experiment.
I would like to overwrite the Promise resolve method handler to do some extra logic on each resolve. Is it possible?
Yes, it's possible. You have to wrap Promise.prototype.then method.
Promise.prototype.then = (oldThen => {
    return function then(_successHandler, _rejectHandler) {
        /* your logic here;
           remember: both successHandler and rejectHandler can be non-functions */
        return oldThen.call(this, wrappedSuccessHandler, wrappedRejectHandler);
    };
})(Promise.prototype.then);
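For instance, a concrete sketch of that wrapper which just logs every fulfillment value before passing it on (the logging is only illustrative):
Promise.prototype.then = (oldThen => {
    return function then(onFulfilled, onRejected) {
        // only wrap functions; non-function handlers must pass through untouched
        const wrappedFulfilled = typeof onFulfilled === 'function'
            ? value => { console.log('resolved with', value); return onFulfilled(value); }
            : onFulfilled;
        return oldThen.call(this, wrappedFulfilled, onRejected);
    };
})(Promise.prototype.then);

Promise.resolve(42).then(v => v + 1); // logs "resolved with 42"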
This code won't be able to intercept new Promise() calls, but there is another workaround for this:
class SubPromise extends Promise {
    constructor(executor) {
        super(function(_resolve, _reject) {
            /* your code goes here */
            return executor(wrappedResolve, wrappedReject);
        });
    }
    then(success, reject) {
        return super.then(wrappedSuccessHandler, wrappedRejectHandler);
    }
}
window.Promise = SubPromise;
It replaces global Promise property with your implementation, so all subsequent calls resolving to window.Promise will return your implementation.
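As a concrete sketch of that subclass approach (again, the logging is just illustrative), a subclass that logs every value passed to resolve could look like this:
class LoggingPromise extends Promise {
    constructor(executor) {
        super((resolve, reject) => {
            executor(
                value => { console.log('resolving with', value); resolve(value); },
                reason => { console.log('rejecting with', reason); reject(reason); }
            );
        });
    }
}

new LoggingPromise(resolve => resolve(42)); // logs "resolving with 42"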
See 25.4.5.3 Promise.prototype.then in spec for further details (with default handlers "thrower" and "identity").
I have a node@4.3.1 + mongo + mongoose@4.4.4 project which I think is providing me with the mpromise library (the mongoose docs imply this)
Assuming it is mpromise, I'm stumped by the very simple job of creating an already fulfilled promise (so I can stub a function that should return a promise).
The mpromise doc (see the "Chain" section) says I can do this:
function makeMeAPromise(i) {
var p = new Promise;
p.fulfill(i);
return p;
}
But this fails with the exception "Promise resolver undefined is not a function".
Am I using mpromise? Is the doc lying? How do I create a resolved promise?
edit: This works
return new Promise(function(fulfill, reject) { fulfill("really? this can't be the only way"); });
but that can't be the simplest way, right?
Mongo doesn't override the default Promise object in Node as far as I know. So the default ways should work just fine:
const resolvedPromise1 = new Promise(f => f("Fullfilled!"));
const resolvedPromise2 = Promise.resolve("Fullfilled!");