I've got a problem that I couldn't find a solution for by googling.
I'm using a library (which I do not want to edit unless it's really necessary) that lets the user select an item, then calls my custom callback function to modify the item, and then continues working with it.
I need to perform some asynchronous tasks on the item, which may take some time. This creates a race condition: my async tasks have not finished by the time the callback returns and the library continues its work on the item.
library.onItemSelectionCallback = function (item) {
    myService.modifyItem(item).then(
        function (modifiedItemProperty) {
            item.newProperty = modifiedItemProperty;
        });
    myService.anotherModifyItem(item).then(
        function (modifiedItemProperty) {
            item.existingProperty = modifiedItemProperty;
        });
};
How do I wait for both of my async tasks to finish before allowing this callback to return?
The only thing I could think of is looping with a while and a sleep of a hundred or so milliseconds until both promises have resolved, but that doesn't seem like a very good solution.
I understand that this makes the async requests effectively synchronous and might be detrimental to UX, but I don't really see another way out.
EDIT: At the risk of removing the generic nature of the question and thus making it too localized, I will say that I'm trying to use the angular-file-upload module. Specifically, I'm trying to mount a custom imageService that would resize the picture before its upload. I'm mounting it on the onBeforeUploadItem callback. The idea is that creating the resized image may take a while, which is why I need to return a promise from my imageService that needs to be resolved before upload.
If modifyItem and anotherModifyItem work independently (that is, one does not rely on the other), you can just pipe them both into $q.all, e.g.:
library.onItemSelectionCallback = function (item) {
    var promises = {
        newProperty: myService.modifyItem(item),
        existingProperty: myService.anotherModifyItem(item)
    };
    return $q.all(promises).then(function (values) {
        return angular.extend(item, values);
    });
};
This will return a promise that resolves with item.
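For reference, outside Angular the same approach works with the native Promise API. A minimal sketch, assuming both service methods return standard promises (note that native Promise.all takes an array rather than an object hash):
library.onItemSelectionCallback = function (item) {
    // Start both modifications in parallel and wait for both to settle.
    return Promise.all([
        myService.modifyItem(item),
        myService.anotherModifyItem(item)
    ]).then(function (results) {
        item.newProperty = results[0];
        item.existingProperty = results[1];
        return item; // resolves with the fully modified item
    });
};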
For the first part of my question -- yes, I guess the only way to really wait for those two promises to resolve would be something with a while and a sleep, making them synchronous. That would probably work and not even be that bad (except for the site pausing until the requests are fulfilled), but it would make me feel very, very bad about myself as a person and about how my actions affect this world.
As far as I know, it is not possible to correctly mix callbacks and promises without hacks.
For the second part of my question -- as per the comments of @georgeawg, I figured that an AngularJS module that uses HTML5 APIs and callbacks instead of the $http service and promises is not how a good AngularJS module should be implemented, so I moved to a different module, ng-file-upload. Even though one could argue it is less stylish, it does the job very well and in an Angular way (ng-file-upload provides a simple $upload service that returns a promise; if you want to modify files before upload, the suggested way is to simply $watch and catch the moment the user drag-drops or selects a file).
Related
I think I know the answer to this but I'm not sure; I'm wanting to confirm more than anything. The question applies (I think equally) to all three async approaches (callbacks, promises, async/await), but I'll ask it in the context of promises.
As I understand it, asynchronous programming is intended (at least) to let event-driven applications perform tasks in response to events, without the process of doing those tasks blocking the application's ability to do any other tasks (presumably in response to other events). E.g. one event might trigger this series of tasks:
1. Query a database for some data
2. Wait for the response
3. Manipulate the data from the response (or handle errors as needed)
4. Write changes back to the database
5. done.
In more traditional programming (e.g. C/C++) there'd be a main function that calls all that, and might get something back from it. But generally, that main function, sitting around waiting for the result to come back, blocks other operations (unless you start manually manipulating threads, or other machinery that this JS async programming is presumably supposed to spare us from, right?).
But in the above example, supposedly, the (conceptual) "main function" doesn't need anything back from it. Step 5 isn't "return some result I need to act on", it's "done", end of story. If I understand correctly, the above is more likely called by a listener. That listener was set up by JS/node's equivalent of a "main function" -- the code that runs from the entry point of the app -- which has long since ended and the listeners are now running the show. Anything else this entire app needs to do will be triggered by some other independent event, caught by a listener. (Not that this is the only way but as I understand it this is pretty common). So perhaps the user sees the results, and hits another button or whatever to initiate some other, separate, independent, task with it. The above task is long deceased.
Ok... if all that's correct then there's this: For each of those 5 steps, we need to call them in succession. Each step relies on something provided by the one before it. So that process basically needs to be synchronous.
In promise code I believe it looks something like this:
askDbForData()                                              // step 1
    .then(responseFromDB => {                               // step 2
        // return the changed data so the next .then() receives it
        return makeTheDesiredChangesToTheData(responseFromDB); // step 3
    })
    .then(changedData => writeBackToDB(changedData));       // step 4
Looks to me like ultimately this is chaining functions one after the other to perform what is otherwise essentially a synchronous task.
But but but... Synchronous... = blocking?
I've just realized how this doesn't appear to be very clear to me in most of the documentation/articles I've read on this. This is what I need to clear up...
I think the point is: the part where .then(...) picks up the result and sends it off to the next piece in the chain -- that part is "blocking" (though it happens in the blink of an eye, so it's kinda moot), but each of those (presumably time-consuming) functions (like askDbForData()) -- which are supposed to be asynchronous and return promises -- do their thing separately and independently of any other control flow, etc., thus not blocking anything.
In other words, the promise chain itself is synchronous, but each piece along the way is asynchronous. The kicker, I think: Not asynchronous to other tasks in the same chain, but asynchronous to everything else the app is doing in other chains initiated by other events.
It might be that anyone reading this will go "yeah, well, duh, that's the whole point". Perhaps that's what I'm hoping. But if it is, it hasn't been clear in any of my research so far, so it'd be great to get it confirmed by someone(s) who "get" it. I think if all this IS correct, then it pretty much clears up nearly every other confusion I've had with this topic.
So... Yes this is long, but it is only one question. The question is:
Is that it -- is that the point -- or if not, what am I missing?
Thanks!
Looks to me like ultimately this is chaining functions one after the other to perform what is otherwise essentially a synchronous task
No, the code you show there only performs the synchronous part of askDbForData(), then returns. If properly coded, this first part is usually nothing.
I think the point is: the part where .then(...) picks up the result and sends it off to the next piece in the chain -- that part is "blocking" (though it happens in the blink of an eye so it's kinda moot)
Again no, the continuations are only called once the previous step is done. Once a continuation happens, the same split happens: first the synchronous (and hopefully minimal or non-existent) part runs, then the rest of the function is registered as a continuation.
In other words, the promise chain itself is synchronous, but each piece along the way is asynchronous.
Absolutely not, the chain of promises is asynchronous. .then() stores a function reference to be called at a later date; it neither calls nor waits for the call of that function code.
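A quick way to see this for yourself (a minimal sketch, not tied to your database example): the chain below is built synchronously, but the callbacks run later, after the surrounding code has finished.
console.log('before building the chain');
Promise.resolve(1)
    .then(value => {
        console.log('first then:', value); // runs later, as a microtask
        return value + 1;
    })
    .then(value => console.log('second then:', value));
console.log('after building the chain');
// Logs: "before building the chain", "after building the chain",
// then "first then: 1" and "second then: 2".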
The kicker, I think: Not asynchronous to other tasks in the same chain, but asynchronous to everything else the app is doing in other chains initiated by other events.
I've tried to avoid the word "chain" because it seems to be confusing you even more. It's not a chain, it's a tree: you can have promises executed in parallel or sequentially. So again, no.
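For example (a sketch using hypothetical stepA, stepB and stepC functions that each return a promise), the same pieces can be composed sequentially or in parallel:
// Sequential: each step starts only after the previous one resolves.
stepA()
    .then(a => stepB(a))
    .then(b => stepC(b));

// Parallel: both steps start immediately; Promise.all resolves when both are done.
Promise.all([stepA(), stepB()])
    .then(([a, b]) => stepC(a + b));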
Consider this.
You have a function A that returns a promise... a promise for something to come.
You also have a function B that does something else.
FunctionA()
    .then(result => { console.log(result); })
    .catch(error => { console.error(error); });
FunctionB();
If the promise in function A takes a long time to resolve... why not go ahead and run FunctionB while we wait?
Get it?
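A runnable version of that idea (a sketch; the setTimeout just stands in for any slow asynchronous work):
function FunctionA() {
    // Resolves after one second, standing in for a slow async operation.
    return new Promise(resolve => setTimeout(() => resolve('A is done'), 1000));
}

function FunctionB() {
    console.log('B runs right away');
}

FunctionA().then(result => { console.log(result); }).catch(error => { console.error(error); });
FunctionB();
// Logs "B runs right away" immediately, and "A is done" about a second later.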
If you know that the Promise has already been resolved, why can't you just call get() on it and receive the value, as opposed to using then(..) with a callback function?
So instead of doing:
promise.then(function (value) {
    // do something with value
});
I want to be able to do the much simpler:
var value = promise.get();
Java offers this for its CompletableFuture, and I see no reason why JavaScript couldn't offer the same.
Java's get method "Waits if necessary for this future to complete", i.e. it blocks the current thread. We absolutely never want to do that in JavaScript, which has only one "thread".
It would have been possible to integrate methods in the API to determine synchronously whether and with what results the promise completed, but it's a good thing they didn't. Having only one single method, then, to get results when they are available, makes things a lot easier, safer and more consistent. There's no benefit in writing your own if-pending-then-this-else-that logic, it only opens up possibilities for mistakes. Asynchrony is hard.
Of course not, because the task runs asynchronously, so you can't get the result immediately.
But you can use async/await to write sequential asynchronous code.
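A minimal sketch of that, assuming promise is the promise from your example: inside an async function, await gives you the resolved value in a way that reads like the get() you were hoping for, without blocking the thread.
async function useValue() {
    try {
        const value = await promise; // suspends this function only, not the thread
        // do something with value
        console.log(value);
    } catch (error) {
        console.error(error); // a rejection ends up here instead of in .catch()
    }
}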
Can anyone help me understand how NodeJS behaves, and the performance impact, for the scenario below?
a. A request is made to the REST API endpoint "/api/XXX". In this request, I return the response immediately after triggering the asynchronous function, like below.
function update(req, res) {
    executeUpdate(req.body); // asynchronous function, not awaited
    res.send(200);
}
b. Here, I send the response back without waiting for the function to complete; the function executes four MongoDB updates on different collections.
Questions:
1. As I understand it, NodeJS runs on a single thread, so how is this asynchronous function executed?
2. If there are multiple requests for the same endpoint, what is the performance impact on NodeJS?
3. How exactly does NodeJS handle the asynchronous function of each request? Since NodeJS runs on a single thread, is there any possibility of a memory issue?
In short, it depends on what you are doing in your function.
Synchronous functions in Node are executed on the main thread; they are not preempted and run until the end of the function or until a return statement is encountered.
Asynchronous operations, on the other hand, are handed off (to the operating system or to libuv's worker thread pool); their callbacks are queued and run back on the main thread once the underlying work has completed.
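A small illustration of that split (a sketch using Node's fs module and an arbitrary file path): the call to fs.readFile returns immediately, the file is read off the main thread, and the callback is queued to run back on the main thread once the read completes.
const fs = require('fs');

console.log('before readFile');
fs.readFile('/etc/hosts', 'utf8', (err, contents) => {
    // This callback runs later, on the main thread, via the event loop.
    if (err) {
        console.error(err);
        return;
    }
    console.log('file length:', contents.length);
});
console.log('after readFile'); // logs before the callback above runs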
There are, I think, two different parts in the answer to your question.
Actual Performance - which includes CPU & memory performance. It also obviously includes speed.
Understanding, as the previous poster said, sync and async.
In dealing with #1 (actual performance), the only real way to test it is to create or use a testing environment for your code. In a rudimentary way, depending on the system you are using, top (Linux) or Glances will give you a basic idea of performance, but in order to know exactly what is going on you will need to apply one of the various testing environments or write your own tests.
Approaching #2 - It is not only sync and async processes you have to understand, but also the ramifications of both. This includes the use of callbacks and promises.
It really all depends on the current process you are attempting to code. For instance, many Node programmers seem to prefer using promises when they make calls to MongoDB, especially when one requires more than one call based upon the return of the cursor.
There is really no written-in-stone formula for when you use sync or async processes. Avoiding callback hell is something all Node programmers try to do. Catching errors etc. is something you always need to be careful about. As I said some programmers will always opt for Promises or Async when dealing with returns of data. The famous Async library coupled with Bluebird are the choice of many for certain scenarios.
All that being said -- and remember, your question is general and therefore so is my answer -- in order to properly know the implications for performance (memory, CPU and speed, as well as returning information or passing it to the browser), it is a good idea to understand sync, async, callbacks, promises and error catching as best as you can. You will discover certain situations are great for sync (and much faster), while others do require async and/or promises.
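As a small illustration of the callbacks-versus-promises point (a sketch, with a hypothetical loadUser Node-style callback function): util.promisify turns a callback API into one that returns a promise, which then plays nicely with async/await and try/catch for error handling.
const { promisify } = require('util');

// Hypothetical Node-style callback API: loadUser(id, (err, user) => ...)
const loadUserAsync = promisify(loadUser);

async function showUser(id) {
    try {
        const user = await loadUserAsync(id);
        console.log(user.name);
    } catch (err) {
        console.error('failed to load user:', err); // errors surface here
    }
}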
Hope this helps somewhat.
I know there is similar questions on here about this, but I cannot make sense of them for the life of me.
Here's an example, where I need to click a button and check the url.
My initial thought is I would write it as
element(by.id('button')).click();
expect(browser.getCurrentUrl()).toContain('asyncisconfusing');
I know expect handles its promise, but what about the .click()? Shouldn't I have to write it like this?
element(by.id('button')).click().then(() => {
    expect(browser.getCurrentUrl()).toContain('asyncisconfusing');
});
Or is protractor/webdriver auto-magically doing this?
In theory, since Protractor maintains a queue of promises via the Control Flow and works in sync with the AngularJS application under test, you should not resolve promises explicitly unless you need the actual value for further processing. In other words, this should be the preferred form:
element(by.id('button')).click();
expect(browser.getCurrentUrl()).toContain('asyncisconfusing');
In practice though, explicitly resolving click() promises, or adding explicit waits via browser.wait() helps to deal with occasional and random timing issues.
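For example, an explicit wait on the URL (a minimal sketch; the 5000 ms timeout is arbitrary):
element(by.id('button')).click();
// Wait until the URL contains the expected fragment before asserting.
browser.wait(protractor.ExpectedConditions.urlContains('asyncisconfusing'), 5000);
expect(browser.getCurrentUrl()).toContain('asyncisconfusing');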
http://seleniumhq.github.io/selenium/docs/api/javascript/module/selenium-webdriver/lib/promise.html
The first section talks about how the control flow is used to manage promises without having to chain together every single command.
I'm wondering if there's a way to cause JavaScript to wait for some variable-length code execution to finish before continuing using events and loops. Before answering with using timeouts, callbacks or referencing this as a duplicate, hear me out.
I want to expose a large API to a web worker. I want this API to feel 'native' in the sense that you can access each member using a getter which gets the information from the other thread. My initial idea was to compile the API and rebuild the entire object on the worker. While this works (and was a really fun project), it's slow at startup and cannot reflect changes made to the API without sending it to the worker again after modification. Observers would solve part of this, and web workers' transferable objects would solve all of it, but they aren't widely adopted yet.
Since worker round-trip calls happen in a matter of milliseconds, I think stalling the thread for a few milliseconds may be an alright solution. Of course I would think about terminating in cases where calls take too long, but I'm trying to create a proof of concept first.
Let's say I want to expose the api object to the worker. I would define a getter for self.api which would fetch the first layer of properties. Each property would then be another getter and the process would continue until the final object is found.
worker.js
self.addEventListener('message', function (event) {
    self.dataReceived = true;
    self.data = event.data; // would actually build new getters here
});

Object.defineProperty(self, 'api', {
    get: function () {
        self.dataReceived = false;
        self.postMessage('request api first-layer properties');
        while (!self.dataReceived); // busy-wait until the reply arrives
        return self.data; // whatever properties were received from host
    }
});
For experimentation, we'll do a simple round-trip with no data processing:
index.html (only JS part)
var worker = new Worker("worker.js");
worker.onmessage = function () {
    worker.postMessage('api first-layer properties'); // echo something back for the round-trip
};
If onmessage interrupted the loop, the script should theoretically work. Then the worker could access objects like window.document.body.style on the fly.
My question really boils down to: is there a way to guarantee that an event will interrupt an executing code block?
From my understanding of events in JavaScript, I thought they did interrupt the current thread. Do they not, because it's executing a blank statement over and over? What if I generated code to be executed and kept doing that until the data returned?
is there a way to guarantee that an event will interrupt an executing code block
As #slebetman suggests in comments, no, not in Javascript running in a browser's web-worker (with one possible exception that I can think of, see suggestion 3. below).
My suggestions, in decreasing order of preference:
Give up the desire to feel "native" (or maybe "local" might be a better term). Something like the infinite while loop that you suggest also seems to be very much fighting against the cooperative multitasking environment offered by Javascript, including when thinking about a single web worker.
Communication between workers in Javascript is asynchronous. Perhaps it can fail, take longer than just a few milliseconds. I'm not sure what your use case is, but my feeling is that when the project grows, you might want to use those milliseconds for something else.
You could change your defined property to return a promise, and then the caller would do a .then on the response to retrieve the value, just like any other asynchronous API.
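A sketch of that idea (assuming a hypothetical requestFromMain helper that posts a message to the main thread and naively resolves with the next reply):
// Hypothetical helper: resolve with the next message from the main thread.
function requestFromMain(message) {
    return new Promise(function (resolve) {
        self.addEventListener('message', function handler(event) {
            self.removeEventListener('message', handler);
            resolve(event.data);
        });
        self.postMessage(message);
    });
}

Object.defineProperty(self, 'api', {
    get: function () {
        // Returns a promise instead of blocking the worker.
        return requestFromMain('request api first-layer properties');
    }
});

// Caller inside the worker:
self.api.then(function (firstLayerProperties) {
    // work with the received properties here
});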
Angular Protractor/Webdriver has an API that uses a control flow to simulate a synchronous environment using promises, by always passing promises about. Taking the code from https://stackoverflow.com/a/22697369/1319998
browser.get(url);
var title = browser.getTitle();
expect(title).toEqual('My Title');
By my understanding, each line above adds a promise to the control flow to execute asynchronously. title isn't actually the title, but a promise that resolves to the title for example. While it looks like synchronous code, the getting and testing all happens asynchronously later.
You could implement something similar in the web worker. However, I do wonder whether it will be worth the effort. There would be a lot of code to do this, and I can't help feeling that the main consequence would be that it would end up harder to write code using this, and not easier, as there would be a lot of hidden behaviour.
The only thing that I know of that can be made synchronous in Javascript, is XMLHttpRequest when setting the async parameter to false https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest#Parameters. I wonder if you could come up with some sort of way to request to the server that maintains a connection with the main thread and pass data along that way. I have to say, my instinct is that this is quite an awful idea, and would be much slower than just requesting data from the main thread.
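For completeness, a minimal sketch of that synchronous XMLHttpRequest inside a worker (the /api/... URL is hypothetical, and as said above this blocks the worker while it waits):
// Inside the worker: a synchronous request (third argument false) blocks
// until the server responds; this is allowed in workers but not recommended.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/first-layer-properties', false); // false = synchronous
xhr.send();
var data = JSON.parse(xhr.responseText);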
As far as I know, there is nothing native in JS to do this, but it is relatively easy to do something similar. I made one for myself some time ago: https://github.com/xpy/whener/blob/master/whener.js .
You use it like when(condition, callback), where condition is a function that should return true when your condition is met, and callback is the function that you want to execute at that time.
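A rough equivalent of that helper, in case the link goes away (a sketch, not the library's actual code): it simply polls the condition with setInterval, so it stays asynchronous rather than blocking.
function when(condition, callback, interval) {
    // Poll until condition() returns true, then run callback once.
    var timer = setInterval(function () {
        if (condition()) {
            clearInterval(timer);
            callback();
        }
    }, interval || 50);
}

// Usage:
when(function () { return self.dataReceived; }, function () {
    console.log('data has arrived:', self.data);
});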