I am developing an app in Aurelia. Development started many months ago, and now the back-end developers want to version their services. So I have a web service to call to get the version of each server-side (Web API) app, and then, for further requests, call the right API address including its version.
So, in app.js I request the system meta and store it somewhere. But some components get initialized before this request completes, so they don't find the version and request the wrong server data.
I want to make the app.js constructor wait until this data is retrieved. For example something like this:
export class App {
    async constructor(...) {
        ...
        await this.initializeHttp();
        ...
    }

    initializeHttp() {
        // get the system meta from the server
    }
}
But this solution is not applicable, because a constructor can't be async. So how should I block until the system meta is retrieved?
UPDATE
This question is not a duplicate of that one. In that question, there is a place in an outer class to await the initialization job; in my question, the main problem is where to put this await. So the question is not just about an async function in a constructor, but about blocking all Aurelia jobs until the async job resolves.
Aurelia provides many ways to handle asynchronous flow. If your custom element is a routed component, you can leverage the activate lifecycle hook: return a promise from it and initialize the http service asynchronously.
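A minimal sketch of that hook (the component name and the injected initializer are placeholders, not the question's code):

```javascript
// Hypothetical routed component; the injected initializer stands in for the
// service that fetches the system meta.
class Dashboard {
  constructor(initializer) {
    this.initializer = initializer;
  }

  activate() {
    // Aurelia's router waits on this promise before composing the view,
    // so nothing renders until the system meta is available.
    return this.initializer.initialize();
  }
}
```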
Otherwise, you can use CompositionTransaction to halt the composition process until you are done with initialization. You can see a preliminary example at https://tungphamblog.wordpress.com/2016/08/15/aurelia-customelement-async/
You can also leverage the async nature of the configure function when bootstrapping an Aurelia application and do the initialization there:
export async function configure(aurelia) {
    ...
    await aurelia.container.get(HttpServiceInitializer).initialize();
}
Related
I'm writing a Node.js server script that uses a shared text list for multiple clients asynchronously.
The clients can read, add, or update items of this shared list.
static getitems() {
    if (list == undefined) list = JSON.parse(fs.readFileSync("./list.json"));
    return list;
}

static additem(newitem) {
    var key = Object.keys(newitem)[0];
    list[key] = newitem[key];
    fs.writeFileSync("./list.json", JSON.stringify(list));
}
clients can modify and get the list data using the following express APIs
app.get("/getlist", (req, res) => {
    res.send(TempMan.getitems());
});

app.post("/addlist", (req, res) => {
    TempMan.additem(req.body.newitem);
    res.status(204).end();
});
Coming from a long background in C#, C++, and other desktop programming languages, I am worried that resource sharing is going to be a problem, although I've read that JavaScript doesn't run into race conditions. I first thought of semaphores, shared locks, or other multi-threading solutions from other languages, but I've also read that JavaScript doesn't need such methods.
Does such a Node.js implementation run into resource-sharing problems, such as simultaneous attempts to read/write the file? How can I solve this? Do I need some kind of transaction functions in JavaScript?
Generally speaking, a Node.js program may encounter the resource-sharing problem you describe; usually we call it a "race condition". It is not due to two threads/processes, but to an intrinsic property: async. Assume there are two async functions: if the first has started but is not finished and has some await inside, the second async function can start. This may cause race conditions if they access the same resource in their code blocks.
I have made a slide to introduce this issue: https://slides.com/grimmer/intro_js_ts_task_queuelib_d4c/fullscreen#/0/12.
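The interleaving described above can be reproduced in a few lines. This is a deliberately contrived sketch: the bare await stands in for an await on real I/O.

```javascript
let counter = 0;

async function increment() {
  const current = counter;   // read the shared value
  await Promise.resolve();   // yield, simulating an await on real I/O
  counter = current + 1;     // write back a possibly stale value
}

async function demo() {
  counter = 0;
  // Both increments read counter (still 0) before either writes,
  // so one update is lost even though Node.js is single-threaded.
  await Promise.all([increment(), increment()]);
  return counter; // 1, not 2
}
```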
Going back to your example, your code WILL NOT have any race conditions, even if you put an async function inside an Express routing callback instead of fs.writeFileSync. The reason is that the implementation of Express awaits the first async routing callback handler and only starts to execute the second async routing callback handler after the first one has finished.
For example:
app.post('/testing1', async (req, res) => {
// Do something here
});
app.post('/testing2', async (req, res) => {
// Do something here
});
is like the below code in the implementation of Express,
async function expressCore() {
await 1st_routing_call_back()
await 2nd_routing_call_back()
}
But please keep in mind that other server frameworks may not have the same behavior. https://www.apollographql.com/ and https://nestjs.com/ both allow two async routing methods to be executed concurrently, like below:
async function otherServerFrameworkCore() {
1st_routing_call_back()
2nd_routing_call_back()
}
and you need to find a way to avoid race conditions if this is your concern: either use transactions for DB access, or an npm synchronization library that is lightweight and suitable for a single-instance Node.js program, e.g. https://www.npmjs.com/package/d4c-queue, which is made by me. Multiple Node.js instances are multiple processes, which can have race-condition issues, and there a DB transaction is the more suitable solution.
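For the single-instance case, the kind of lightweight synchronization those libraries provide can be sketched with a promise chain. This is a hypothetical helper, not the d4c-queue API:

```javascript
// Serialize async tasks: each task starts only after the previous one settles.
function createLock() {
  let tail = Promise.resolve();
  return function runExclusive(task) {
    const result = tail.then(() => task());
    tail = result.catch(() => {}); // keep the chain alive even if a task throws
    return result;
  };
}
```

Wrapping each read-modify-write of the shared list in runExclusive prevents two handlers from interleaving around their await points.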
I'm creating a testing application with thousands of questions hosted on Firebase. To prevent downloading the questions multiple times, I've implemented a questions service that downloads the questions in its constructor:
this.db.list("questions/", { preserveSnapshot: true}).subscribe(snapshots => {...}
This downloads the questions and pushes them to a questions array so that I don't have to re-download until the next session. I also have a function to serve the questions:
getQuestion(){
return this.questions[0];
}
However, because of the asynchronous nature of firebase, often times the data is not yet downloaded before getQuestion() is called, so it returns undefined.
Is there a proper way to implement this data store type pattern in angular, and make sure the async call in the constructor finishes before getQuestion() gets called?
I've tried adding a variable ready, initializing it to false, and setting it to true when the async call returns. Then, getQuestion() is modified to look like:
getQuestion(){
    while(!this.ready){}
    return this.questions[0];
}
However this just causes the app to hang.
It's almost never necessary to use preserveSnapshot. Not having to worry about snapshots is one of the main benefits of using AngularFire. Just write this.db.list(PATH).subscribe(list => …).
You're confusing "downloading" with "subscribing". It is hardly ever a good idea to subscribe inside a service and store the data locally; you'll never be exactly sure when the subscribe handler has run, as you have found.
Instead, the service should provide an observable, which consumers (usually components) will consume. Those consumers can subscribe to the observable and do whatever they want, including storing the data statically, or, preferably, you can subscribe to the observable directly within a template using the async pipe.
The general rule is to subscribe as late as possible--ideally in the template. Write your code as a set of observables which you map and filter and compose.
Firebase caches results and in general you don't need to worry about caching yourself.
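If a promise-based API fits better than an observable, the same "start the download once, share the result" idea can be sketched framework-free. Here fetchQuestions is a hypothetical loader, not the AngularFire API:

```javascript
class QuestionService {
  constructor(fetchQuestions) {
    this.fetchQuestions = fetchQuestions;
    this.questionsPromise = null; // cached in-flight/completed download
  }

  getQuestions() {
    if (!this.questionsPromise) {
      // The first caller triggers the download; later callers share the promise.
      this.questionsPromise = this.fetchQuestions();
    }
    return this.questionsPromise;
  }

  getQuestion(index) {
    // Always async: resolves once the download has finished,
    // so it never returns undefined questions.
    return this.getQuestions().then((questions) => questions[index]);
  }
}
```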
Call the getQuestion() function after the data from Firebase has been downloaded.
Use the code below:
this.db.list("questions/").subscribe(list => {...} //etc
Where should long running processes "live" in a react+redux app?
For a simple example, consider a class which sends and receives messages over a websocket:
class WebsocketStreamer {
sendMessage(message) {
this.socket.send(…);
}
onMessageReceive(event) {
this.dispatch({
type: "STREAMER_RECV",
message: event.data,
})
}
}
How should the lifecycle of this class be managed?
My first instinct is to keep it on the store:
var stores = {
streamer: function(state={}, action) {
if (action.type == "##INIT")
return { streamer: new WebsocketStreamer() }
if (action.type == "STREAMER_SEND")
state.streamer.sendMessage(action.message)
return state;
}
}
But, aside from being a bit strange, there's also no way for the WebsocketStreamer to get access to the dispatch() function, and it breaks hot reloading.
Another potential solution is to keep it in a global somewhere:
const streamer = new WebsocketStreamer();
But that has obvious testability implications, and breaks hot reloading too.
So, where should a long running process live in a react + redux app?
Note: I realize that this simple example could be built with just stores + action providers. But I would specifically like to know where long-lived processes should live when they are needed.
In my experience, there are two options. First, you can pass the store to any non-Redux code and dispatch actions from there. I did this with a socket connection and all was fine. Second, if you need the socket (or whatever else) to react to Redux actions, it's a good idea to put the connection and its management in a custom middleware. You'll have access to the store API, and you'll be informed of every action dispatched, so you can do anything you need.
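A sketch of that second option, reusing the question's STREAMER_* action types; the factory shape and socket wiring are assumptions, not a specific library's API:

```javascript
// Keep the socket inside a custom Redux middleware.
const createSocketMiddleware = (socket) => (store) => {
  // Incoming messages become actions flowing through the normal Redux cycle.
  socket.onmessage = (event) =>
    store.dispatch({ type: 'STREAMER_RECV', message: event.data });

  return (next) => (action) => {
    if (action.type === 'STREAMER_SEND') {
      socket.send(action.message); // outgoing messages go to the socket
    }
    return next(action); // always let the action continue to the reducers
  };
};
```

With a real store this would be installed via applyMiddleware(createSocketMiddleware(socket)), which gives the socket access to dispatch without living in the store or in a global.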
I'm doing something similar with websockets. In my case, I simply wrap the websocket client in a React component that renders null and inject it as close to the root as possible.
<App>
<WebSocketClientThingy handlers={configuredHandlers}/>
....
</App>
Here's a quick example. It's pretty naive, but gets stuff done.
https://github.com/trbngr/react-example-pusher
Quick note:
The websocket doesn't live in the store. It's simply there and publishes actions.
EDIT:
I decided to explore setting the client (long-lived object) into the global state. I gotta say that I'm a fan of this approach.
https://github.com/trbngr/react-example-pusher/tree/client_as_state
I open-sourced a demo issue tracker with long-running ops using React/Redux/Node; all the involved code is open source and MIT-licensed.
Sometimes I need to pull or push the repo, and depending on the connection this might take a long time; this is where the long-running operation comes in.
Overall the key points of the approach are:
A redux store with the active operations and their status.
A redux store with the events of the operations.
Initialize the operations store with all the ongoing operations during page initialization.
Use an HTTP events server to update an operation's status: data, error, complete, progress, etc.
Connect components like buttons to the ongoing operation status.
For each component, keep a list of involved operations and parameters; if the operation and parameters match, change the button state to loading/done/etc.
Change the status of the operation store with the event updates or request results (I am using GraphQL, and all the mutations return an "Operation" type).
The involved repositories are:
https://github.com/vicjicaman/tracker-common ---> GO TO ---> src/pkg/app-operation/src
https://github.com/vicjicaman/tracker-app-events
https://github.com/vicjicaman/tracker-operation
https://github.com/vicjicaman/tracker-events
This is how it looks running:
https://user-images.githubusercontent.com/36018976/60389724-4e0aa280-9ac7-11e9-9129-b8e31b455c50.gif
Keeping the state this way also helps:
The ongoing operations state is kept even if the window is refreshed.
If you open two or more windows and perform operations in one, the running-operation state and UI stay in sync across all the windows.
I hope the approach or code helps.
We are getting through the long slog of updating our ember-cli application to its latest iteration. We fell very much behind. I am at the stage where instance initializers have been introduced and I am getting the feeling this is going to break the way in which I have implemented a certain initializer currently.
export function initialize(container, application) {
var store = container.lookup('store:main');
// We need a basket to be present when
// the application loads. Wait for this
// to happen before continuing.
application.deferReadiness();
store.findOrCreateRecord('order', basketToken).then(function(basket) {
container.register('basket:main', basket, { instantiate: false });
application.inject('controller:basket', 'model', 'basket:main');
// Let the application know we have
// a basket and can continue.
application.advanceReadiness();
});
}
What is now recommended is that I split this up into a "normal" initializer to register the basket object and an instance initializer to grab the store and make the call to our API server. Doing this, however, I would not have access to the registry within the instance initializer to register the object returned from my promise, which I would then inject into my controller. I assume I am thinking about this all wrong, but I have not been able to wrap my head around it. Any suggestions on how I should update this?
I think it's reasonable to post @tomdale's explanation here as an answer to help others understand initializers.
@tomdale: "It's not possible to defer app readiness in an instance initializer, since by definition instance initializers are only run after the app has finished booting.
Sidebar on the semantics of application booting: "App readiness" (as in, deferReadiness() and advanceReadiness()) refers to whether all of the code for the application has loaded. Once all of the code has loaded, a new instance is created, which is your application.
To restate, the lifecycle of an Ember application running in the browser is:
1. Ember loads.
2. You create an Ember.Application instance global (e.g. App). At this point, none of your classes have been loaded yet.
3. As your JavaScript file is evaluated, you register classes on the application (e.g. App.MyController = Ember.Controller.extend(…);)
4. Ember waits for DOM ready to ensure that all of your JavaScript included via <script> tags has loaded.
5. Initializers are run. If you need to lazily load code or wait for additional setup, you can call deferReadiness(). Once everything is loaded, you can call advanceReadiness().
6. At this point, we say that the Application is ready; in other words, we have told Ember that all of the classes (components, routes, controllers, etc.) that make up the app are loaded.
7. A new instance of the application is created, and instance initializers are run.
8. Routing starts and the UI is rendered to the screen.
If you want to delay showing the UI because there is some runtime setup you need to do (for example, you want to open a WebSocket before the app starts running), the correct solution is to use the beforeModel/model/afterModel hooks in the ApplicationRoute. All of these hooks allow you to return a promise that will prevent child routes from being evaluated until they resolve.
Using deferReadiness() in an initializer is an unfortunate hack that many people have come to rely on. I call it a hack because, unlike the model promise chain in the router, it breaks things like error and loading substates. By blocking rendering in initializers, IMO you are creating a worse experience for users because they will not see a loading or error substate if the promise is slow or rejects, and most of the code I've seen doesn't have any error handling code at all. This leads to apps that can break with just a blank white screen and no indication to the user that something bad has happened."
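A framework-free sketch of that beforeModel recommendation: the hook returns a promise, and routing waits on it. The findOrCreateRecord call mirrors the question's code, but the plain-object route and adapted signature are assumptions for illustration; a real app would use Ember.Route.extend on the ApplicationRoute.

```javascript
// Hypothetical application route: beforeModel returns a promise, and the
// router pauses child routes (showing the loading substate) until it resolves.
const ApplicationRoute = {
  beforeModel(store, basketToken) {
    return store.findOrCreateRecord('order', basketToken).then((basket) => {
      this.basket = basket; // stand-in for injecting the basket into a controller
    });
  },
};
```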
I understand that this image has been the ultimate guide of most, if not all, Flux programmers. Having this flow in mind, I have a few questions:
Is it correct/highly advisable to have all of my $.ajax calls inside my Web API Utils?
Callbacks call the action creators, passing the data in the process
If I want my Store to make an AJAX call, I do have to call the Action Creator first, right? Is it fundamentally incorrect to call a function in Web API Utils directly from Store?
Is there like a virtual one-sided arrow connecting from Store to Action Creators?
I have a lot of operations that do not go through views
What are the Callbacks between Dispatcher and Store?
What's the Web API here? Is this where you'd apply a RESTful API? Is there an example of this somewhere?
Is it okay to have logic involved (to decide which action to dispatch) in one of my action creators? Basically, this action creator receives the response from my AJAX call. This is a snippet:
var TransportActions = {
    receiveProxyMessage: function (message, status, xhr) {
        switch (message) {
            case ProxyResponses.AUTHORIZED:
                AppDispatcher.dispatch({
                    type: ActionTypes.LOGIN_SUCCESS,
                    reply: message
                });
                break;
            case ProxyResponses.UNAUTHORIZED:
                AppDispatcher.dispatch({
                    type: ActionTypes.LOGIN_FAIL,
                    reply: message
                });
                break;
            ...
        }
    }
}
I've seen a lot of different answers online, but I am still not sure how I would incorporate all of them in my application. TYIA!
Is it correct/highly advisable to have all of my $.ajax calls inside my Web API Utils? Callbacks call the action creators, passing the data in the process.
Yes, you should put all your request into a single entity, i.e. the Web API Utils. They should dispatch the responses so any store can choose to act on them.
I wrote a blogpost a while ago showing one way on how to handle requests http://www.code-experience.com/async-requests-with-react-js-and-flux-revisited/
If I want my Store to make an AJAX call, I do have to call the Action Creator first, right? Is it fundamentally incorrect to call a function in Web API Utils directly from Store?
This is a good question, and as far as I have seen it, everybody does it a little different. Flux (from Facebook) does not provide a full answer.
There are generally two approaches you could take here:
You can make the argument that a Store should not "ask" for any data, but simply digest actions and notify the view. This means that you have to fire "fetch" actions from the components if a store is empty, and check in every data-listening view whether it has to fetch data. This can lead to code duplication if multiple views listen to the same Store.
The Stores are "smart" in the sense that if they get asked for data, they check if they actually have state to deliver. If they do not, they tell the API Utils to fetch data and return a pending state to the views.
Please note that this "tell the API to fetch data" is NOT a callback based operation, but a "fire and forget" one. The API will dispatch actions once the request returns.
I prefer option 2 to option 1, and I've heard Bill Fisher from the Facebook team say that they do it like this as well. (see comments somewhere in the blogpost above)
So no, in my opinion it is not fundamentally wrong to call the API directly from the Store.
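A sketch of option 2, the "smart" store. ProfileStore, fetchProfile, and the action type are illustrative names, not a specific Flux library's API:

```javascript
const ProfileStore = {
  profiles: {},

  get(id, api) {
    if (!(id in this.profiles)) {
      // Return a pending state immediately and fire-and-forget the fetch;
      // the result comes back later as a server action, not a callback.
      this.profiles[id] = { state: 'loading' };
      api.fetchProfile(id);
    }
    return this.profiles[id];
  },

  // Registered with the dispatcher; the only place state actually changes.
  handleServerAction(action) {
    if (action.type === 'PROFILE_RECEIVED') {
      this.profiles[action.id] = { state: 'ready', data: action.data };
    }
  },
};
```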
Is there like a virtual one-sided arrow connecting from Store to Action Creators?
Depending on your Flux implementation there might very well be.
What are the Callbacks between Dispatcher and Store?
They are the only functions that can actually change the state in a store! Each Store registers a callback with the Dispatcher. All the callbacks get invoked whenever an action is dispatched. Each callback decides if it has to mutate the store given the action type. Some Flux libraries try to hide this implementation detail from you.
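A minimal sketch of that registration mechanism (not Facebook's actual Dispatcher, which also provides waitFor and re-entrancy guards):

```javascript
function createDispatcher() {
  const callbacks = [];
  return {
    register(callback) {
      callbacks.push(callback); // each store registers exactly one callback
    },
    dispatch(action) {
      // Every registered store callback sees every action and decides
      // for itself whether to mutate its state.
      callbacks.forEach((callback) => callback(action));
    },
  };
}
```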
What's the Web API here? Is this where you'd apply a RESTful API? Is there an example of this somewhere?
I think in the picture the Web API rectangle represents the actual server; the API Utils are the ones that make calls to the server (i.e. $.ajax or superagent). It is most likely a RESTful API serving JSON.
General advice:
Flux is a quite loose concept, and exact implementations change from team to team. I noticed that Facebook has changed some approaches here and there over time as well. The exact cycle is not strictly defined. There are some quite "fixed" things though:
There is a Dispatcher that dispatches all actions to all stores and permits only one action at a time, to prevent event-chain hell.
Stores are action receivers, and all state must be changed through actions.
Actions are "fire and forget" (no callbacks!).
Views receive state from the stores and fire actions.
Other things are done differently from implementation to implementation.
Is it correct/highly advisable to have all of my $.ajax calls inside my Web API Utils? Callbacks call the action creators, passing the data in the process.
Absolutely.
If I want my Store to make an AJAX call, I do have to call the Action Creator first, right? Is it fundamentally incorrect to call a function in Web API Utils directly from Store?
First, ask yourself why your store needs to do an API call. The only reason I can think of is that you want to cache the received data in the stores (I do this).
In the most simple Flux implementations, all Actions are created from only the View and Server. For example, a user visits a "Profile" view, the view calls a profileRequest action creator, the ApiUtils is called, some data comes in, a ServerAction is created, the ProfileStore updates itself and the ProfileView does accordingly.
With caching: the ProfileView asks the ProfileStore for some profile; the store doesn't have it, so it returns an empty object with state 'loading' and calls ApiUtils to fetch that profile (and forgets about it). When the call finishes, a ServerAction will be created, the ProfileStore will update itself, etc. This works fine. You could also call an ActionCreator from the store, but I don't see the benefit.
MartyJS does something similar. Some flux implementations do the same with promises.
I think the important part is: when data comes back into the system, a ServerActionCreator is called with the new data. This then flows back into the stores.
I believe stores should only query data, all state-changing actions (updating stuff) should be user-initiated (come from views). Engineers from Facebook wrote about this here: https://news.ycombinator.com/item?id=7719957
Is there like a virtual one-sided arrow connecting from Store to Action Creators?
If you want your stores to be smart: yes. If you want your app to be simple: no, go through Views.
What are the Callbacks between Dispatcher and Store?
These are the dispatch handlers you find in stores. The dispatcher fires an action, stores listen to this fire event and do something with the action/payload.
What's the Web API here? Is this where you'd apply a RESTful API? Is there an example of this somewhere?
This is where your ajax calls go. Typically this will mean some REST API, but it could also be WebSockets or whatever. I've always loved this tutorial: http://fancypixel.github.io/blog/2015/01/29/react-plus-flux-backed-by-rails-api-part-2/
Disclaimer: these are my interpretations of Flux. Flux itself doesn't really solve data fetching; that's why they came up with Relay and GraphQL at FB.