I'm creating a testing application that has 1000s of questions hosted on firebase. To prevent downloading the questions multiple times, I've implemented a questions service where in the constructor I download the questions:
this.db.list("questions/", { preserveSnapshot: true }).subscribe(snapshots => {...});
This downloads the questions and pushes them to a questions array so that I don't have to re download until the next session. I also have a function to serve the questions:
getQuestion(){
  return this.questions[0];
}
However, because of the asynchronous nature of Firebase, the data often has not finished downloading by the time getQuestion() is called, so it returns undefined.
Is there a proper way to implement this data store type pattern in angular, and make sure the async call in the constructor finishes before getQuestion() gets called?
I've tried adding a variable ready, initializing it to false, and setting it to true when the async call returns. Then getQuestion() is modified to look like:
getQuestion(){
  while(!this.ready){} // busy-wait on the ready flag
  return this.questions[0];
}
However, this just causes the app to hang: the busy-wait blocks the event loop, so the subscribe callback that would set ready to true can never run.
It's almost never necessary to use preserveSnapshot. Not having to worry about snapshots is one of the main benefits of using AngularFire. Just write this.db.list(PATH).subscribe(list =>.
You're confusing "downloading" with "subscribing". It is hardly ever a good idea to subscribe inside a service and store the data locally; as you have found, you'll never be exactly sure when the subscribe handler has run.
Instead, the service should provide an observable for consumers (usually components) to consume. Those consumers can subscribe to the observable and do whatever they want, including storing the data statically, or, preferably, you can subscribe to the observable directly within a template using the async pipe.
The general rule is to subscribe as late as possible--ideally in the template. Write your code as a set of observables which you map and filter and compose.
Firebase caches results and in general you don't need to worry about caching yourself.
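If you do still need an imperative getQuestion() outside a template, one framework-free option is to cache the in-flight request as a promise rather than storing the raw array. This is a minimal sketch, not AngularFire code; fetchQuestions is a hypothetical stand-in for the Firebase call:

```javascript
// Minimal framework-free sketch: store the in-flight request as a promise so
// every caller awaits the same single download. fetchQuestions is a
// hypothetical stand-in for the Firebase call.
class QuestionsService {
  constructor(fetchQuestions) {
    // Kick off the download once; keep the promise, not just the data.
    this.questionsPromise = fetchQuestions();
  }

  async getQuestion(index = 0) {
    // Awaiting the cached promise guarantees the data has arrived,
    // without busy-waiting and without blocking the event loop.
    const questions = await this.questionsPromise;
    return questions[index];
  }
}
```

A caller then writes const q = await service.getQuestion(); instead of reading an array that may not be filled yet.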
Call the getQuestion() function after the data from Firebase has been downloaded.
Use the code below:
this.db.list("questions/").subscribe(list => {...}) // etc.
Straight to the point: I am running an HTTP server in Node.js managing a hotel's check-in/out info, where I write all the JSON data from memory to the same file using fs.writeFile.
The data usually doesn't exceed 145 kB, but since I need to write it every time I get an update from my database, I get data loss/bad JSON format when calls to fs.writeFile happen in quick succession.
Currently I have solved this problem using fs.writeFileSync; however, I would like to hear a more sophisticated solution rather than the easy/blunt sync approach.
Using fs.promises results in the same error, since again I make multiple overlapping calls to fs.promises.writeFile.
According to Node's documentation, calling fs.writeFile multiple times on the same file without waiting is not safe, and it suggests using a file stream; however, this is not currently an option.
To summarize, I need to wait for fs.writeFile to finish normally before attempting any repeated write action, and using the callback is not useful since I don't know a priori when a write action needs to be done.
Thank you very much in advance
I assume you mean you are overwriting or truncating the file while the last write request is still being written. If I were you, I would use the promises API and heed the warning from the documentation:
It is unsafe to use fsPromises.writeFile() multiple times on the same file without waiting for the promise to be settled.
You can await the result in a traditional loop, or very carefully use .then() to "synchronize" your callbacks, but if you're not doing anything else in your event loop except reading from your database and writing to this file, you might as well just use writeFileSync to keep things simple/safe. The asynchronous APIs (callback and Promises) are intended to allow your program to do other things in the meantime; if this is not necessary and the async APIs add troublesome complexity for your code, just use the synchronous APIs. That's true for any node API or library function, not just fs.writeFile.
There are also libraries that will perform atomic filesystem operations for you and abstract away the implementation details, but I think these are probably overkill for you unless you describe your use case in more detail. For example, why you're dumping a database to disk as JSON as fast/frequently as you can, rather than keeping things in memory or using event-based incremental updates (e.g. a real, local database with atomicity and consistency guarantees).
thank you for your response!
Since my app is mainly an HTTP server, yes, I do other things besides simple input/output, although without a great number of requests. I will review the promises solution again, but the first time I had no luck.
To explain more, I have:
function updateRoom(data){
  // ...update things in memory...
  writetoDisk();
}
and the function:
function writetoDisk(){
  fs.writeFile(...);
}
Making writetoDisk an async function and using await inside it still does not solve the problem, since updateRoom calls writetoDisk without waiting for it to end.
The .then approach cannot easily be applied either, since my updateRoom is being called constantly and dynamically.
If you happen to know a thing or two about async/await, you are more than welcome to explain a bit more to me; thanks again nevertheless!
I have an Angular (9) component which gets BehaviorSubjects. Many sources like this one recommend using the async pipe when displaying observable content (instead of subscribing in ngOnInit). There's also the trick of using *ngIf with as so you don't have to repeat the pipe all the time. But since these are BehaviorSubjects after all, I could simply do
<div>{{behaviourSubject.getValue()}}</div>
or whatever. It actually seems much cleaner to me than using async, and in practice leads to fewer problems here and there. But I am not sure whether this is an okay pattern or whether it has serious disadvantages.
I'd refer you to Ben Lesh's (author of RxJS) answer on this topic here
99.9% of the time you should NOT use getValue()
There are multiple reasons for that...
In Angular, you won't be able to use the OnPush ChangeDetectionStrategy. Not using it makes your app slower because Angular will constantly try to sync the value with the cached view value. In your case, it even needs to call the getValue function first.
When the BehaviourSubject errors or completes you'll not be able to call getValue.
Generally the use of getValue, and I'd argue even BehaviourSubject, is not necessary, because you can express most Observables by only using pipeable operators on another source Observable. The only real place where Subjects are necessary is when you need to convert an otherwise unobservable event to an Observable.
While it might look cleaner not to use async, you're actually moving the hard work to Angular, which needs to figure out when it should call getValue().
BehaviorSubjects often live inside services in order to dispatch new values to other services/components and keep them up to date.
A good practice is to declare the BehaviorSubject as private and to expose it only via .asObservable(), so consumers aren't allowed to change its value directly.
That's why we have to use the async pipe on the provided observable source.
Second reason: async pipes automatically unsubscribe from the observables they're fed with. [Edit]: since the comparison is with .getValue(), which provides the value of the subject without needing to subscribe, there is no explicit benefit from the pipe in this particular use case.
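The "private subject, public observable" idea above can be sketched without any framework. MiniSubject here is a toy, hypothetical stand-in for RxJS's BehaviorSubject (not the real class), just to show what asObservable() buys you:

```javascript
// Toy, framework-free sketch of the "private subject, public observable"
// pattern. MiniSubject is a hypothetical stand-in for RxJS's BehaviorSubject.
class MiniSubject {
  constructor(value) {
    this.value = value;
    this.subscribers = new Set();
  }
  next(v) {
    this.value = v;
    this.subscribers.forEach(fn => fn(v));
  }
  subscribe(fn) {
    fn(this.value); // BehaviorSubject-like: replay current value immediately
    this.subscribers.add(fn);
    return { unsubscribe: () => this.subscribers.delete(fn) };
  }
  asObservable() {
    // Expose only subscribe, hiding next() from consumers.
    return { subscribe: this.subscribe.bind(this) };
  }
}

class CounterService {
  #count = new MiniSubject(0);           // private: only the service emits
  count$ = this.#count.asObservable();   // public, read-only view

  increment() {
    this.#count.next(this.#count.value + 1);
  }
}
```

Consumers can subscribe to count$ but have no way to push values into it, which is exactly why the template has to consume it via a subscription (the async pipe in Angular) rather than getValue().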
Calling methods within template expressions is one of the first things you would want to avoid in Angular. It is considered bad practice to call a method within the template; click here for more information around that.
As Gerome mentioned, the right approach is to expose the BehaviorSubject as an observable and subscribe to it within the template using the async pipe. Since it's a BehaviorSubject, it will always emit the latest value on subscription, so you can avoid using the getValue() method as well.
If your property is of BehaviourSubject type, it's totally fine to use getValue() in the template. The difference between getValue() and | async as value is that getValue() is called on every change-detection cycle to check whether a re-render is needed, but because it does nothing more than return this._value, that is cheap and totally fine.
Is writing to firebase cloud firestore asynchronous in javascript?
And if it is, what is a way to deal with it?
My program is supposed to update a value into the database, and then pull that value out soon after, to do something with it.
Most of the time the correct value comes out; however, sometimes the previous value is output, and yet when the page is refreshed and the same value is read from the database again, the correct value appears.
So, I am assuming the issue must be that writing to the database is asynchronous, just like reading a database is.
".then" didn't seem to work.
Here is a simplified version of my code:
function updateDB() {
  db.collection("rooms").doc("roomsDoc").update({
    roomTime: timeInput // timeInput is a variable defined elsewhere (not shown here)
  }).then
  {
    readDB();
  }
}
function readDB(){
  db.collection("rooms").doc("roomsDoc").get().then(function(doc) {
    console.log(doc.data().roomTime);
  });
}
The console.log is what outputs the wrong value sometimes.
Yes, it is asynchronous.
A good way to handle the problem you are having is async/await.
First, make the function you are doing this in an async function.
Then do something like this:
async function updateAndRead() {
  // do some initial stuff
  await db.collection("rooms").doc("roomsDoc").update({ roomTime: timeInput });
  // do some intermediate stuff: this read now sees the completed write
  const doc = await db.collection("rooms").doc("roomsDoc").get();
  console.log(doc.data().roomTime);
}
All JavaScript client/browser APIs that work with files and networks are asynchronous - that is the nature of JavaScript.
then() does, in fact, work on the promises returned by Firestore's APIs (and all promises, for that matter). There are lots of examples of this in the documentation.
Without seeing your code, we don't know what you might be doing wrong.
Yes, it is asynchronous.
You should check the official documentation on this topic; it was very useful to me when trying to fetch data from the database/API:
Firebase Cloud Firestore
First you have to initialize your document ref, fetch the data with the async get() function, and wait for the response using either then() or async/await.
Hope this helps!
I think you made a typo here:
.then
{
readDB();
}
in this way, you are not really waiting for the completion of the first operation. To get what you want you need to write:
.then(() => readDB())
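To see why the original form doesn't wait, note that .then without parentheses is just a property access, and the { ... } that follows is an ordinary block statement that runs immediately and synchronously. A framework-free demo (slowWrite is a hypothetical stand-in for the Firestore update):

```javascript
// Demo of why `.then { ... }` does not wait: the block runs right away,
// before the promise has settled.
const order = [];
const slowWrite = () =>
  new Promise(resolve => setTimeout(() => { order.push('write'); resolve(); }, 10));

slowWrite().then
{
  order.push('read'); // runs immediately, before the write has completed
}

// The correct form passes a callback, which runs only after the write:
slowWrite().then(() => order.push('read-after-write'));
```

Immediately after this code runs, order already contains 'read' but no 'write' yet, which mirrors the stale reads described in the question.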
I am developing an app in Aurelia. Development started many months ago, and now the back-end developers want to make their services versioned. So I have a web service to call to get the version of each server-side (web API) app, and then, for further requests, I call the right API address including its version.
So, in app.js I request the system meta and store it somewhere. But some components get initialized before this request completes, so they won't find the version initialized and will request the wrong server data.
I want to make the app.js constructor wait until this data is retrieved. For example something like this:
export class App {
async constructor(...) {
...
await this.initializeHttp();
...
}
initializeHttp(){
// get the system meta from server
}
}
but this solution is not applicable, because a constructor can't be async. So how should I block the job until the system meta is retrieved?
UPDATE
The question is not a duplicate of this question. In that question, there is a place in an outer class to await the initialization job; in my question, the main problem is where to put this await. So the question is not just about async functions in constructors, but about blocking all Aurelia work until the async job resolves.
Aurelia provides many ways to handle asynchronous flow. If your custom element is a routed component, then you can leverage activate lifecycle to return a promise and initialize the http service asynchronously.
Otherwise, you can use CompositionTransaction to halt further processing until you are done with initialization. You can see a preliminary example at https://tungphamblog.wordpress.com/2016/08/15/aurelia-customelement-async/
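The activate() approach pairs naturally with memoizing the metadata request as a single promise, so whichever component activates first triggers the request and every later component reuses it. A framework-agnostic sketch, where fetchMeta is a hypothetical stand-in for the real HTTP call:

```javascript
// Framework-agnostic sketch: memoize the metadata request as one promise.
// fetchMeta is a hypothetical stand-in for the real HTTP call.
class SystemMeta {
  static promise = null;

  static load(fetchMeta) {
    // The first caller starts the request; everyone else awaits the same one.
    if (!SystemMeta.promise) {
      SystemMeta.promise = fetchMeta();
    }
    return SystemMeta.promise;
  }
}
```

A routed component's activate() can then do this.meta = await SystemMeta.load(fetchMeta); and Aurelia will delay composing the view until that promise resolves.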
You can also leverage async nature of configure function in bootstrapping an Aurelia application to do initialization there:
export async function configure(aurelia) {
  ...
  await aurelia.container.get(HttpServiceInitializer).initialize();
}
I've been following lots of Meteor examples and working through Discover Meteor, and now I'm left with lots of questions. I understand subscribe and fetch are ways to get "reactivity" to work properly, but I still feel unsure about the relationship between find operations and subscriptions/fetch. I'll try to ask some questions in order to probe for some holistic/conceptual answers.
Question Set 1:
In the following example we are fetching 1 object and we are subscribing to changes on it:
Meteor.subscribe('mycollection', someID);
Mycollection.findOne(someID);
Does order of operations matter here?
When does this subscription "expire"?
Question Set 2:
In some cases we want to foreign key lookup and use fetch like this:
MyCollection2.find({myCollection1Id: doc1Id}).fetch();
Do we also need a MyCollection2.subscribe when using fetch?
How does subscribe work with "foreign keys"?
Is fetch ~= to a subscription?
Question Set 3:
What is an appropriate use of Tracker.autorun?
Why/when should I use it instead of subscribe or fetch?
what happens when you subscribe and find/fetch
The client calls subscribe which informs the server that the client wants to see a particular set of documents.
The server accepts or rejects the subscription request and publishes the matching set of documents.
Sometime later (after network delay) the documents arrive on the client. They are stored in a database in the browser called minimongo.
A subsequent fetch/find on the collection in which the aforementioned documents are stored will query minimongo (not the server).
If the subscribed document set changes, the server will publish a new copy to the client.
Recommended reading: understanding meteor publications and subscriptions.
question 1
The order matters. You can't see documents that you haven't subscribed for (assuming autopublish is off). However, as I point out in common mistakes, subscriptions don't block, so a subscription followed by an immediate findOne will typically return undefined (and a fetch will return an empty array).
Subscriptions don't stop on their own. Here's the breakdown:
A global subscription (one made outside of your router or template) will never stop until you call its stop method.
A route subscription (iron router) will stop when the route changes (with a few caveats).
A template subscription will stop when the template is destroyed.
question 2
This should be mostly explained by the first section of my answer. You'll need both sets of documents in order to join them on the client. You may publish both sets at once from the server, or individually - this is a complex topic and depends on your use case.
question 3
These two are somewhat orthogonal. An autorun is a way for you to create a reactive computation (a function which runs whenever its reactive variables change) - see the section on reactivity from the docs. A find/fetch or a subscribe could happen inside of an autorun depending on your use case. This probably will become more clear once you learn more about how meteor works.
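The idea of a reactive computation can be sketched without Meteor at all. MiniReactiveVar and autorun below are toy, hypothetical stand-ins for Meteor's ReactiveVar and Tracker.autorun, just to show the "re-run when a dependency changes" mechanic:

```javascript
// Toy sketch of what Tracker.autorun does: a computation re-runs whenever a
// reactive variable it read changes. MiniReactiveVar and autorun are
// hypothetical stand-ins for Meteor's ReactiveVar and Tracker.autorun.
let currentComputation = null;

class MiniReactiveVar {
  constructor(value) {
    this.value = value;
    this.deps = new Set();
  }
  get() {
    // Record which computation is reading us, so we can re-run it later.
    if (currentComputation) this.deps.add(currentComputation);
    return this.value;
  }
  set(value) {
    this.value = value;
    this.deps.forEach(run => run()); // invalidate: re-run dependents
  }
}

function autorun(fn) {
  const run = () => {
    currentComputation = run;
    fn();
    currentComputation = null;
  };
  run(); // runs once immediately, like Tracker.autorun
}

// Usage: the computation re-runs when roomId changes.
const roomId = new MiniReactiveVar('a');
const log = [];
autorun(() => log.push(roomId.get()));
roomId.set('b');
```

In real Meteor code the body of the autorun would typically contain a subscribe or a find whose arguments depend on reactive state, so the subscription is torn down and recreated as that state changes.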
Essentially, when you subscribe to a dataset, it fills minimongo, the in-memory client-side cache of your Mongo data, with those documents. This is what populates the client's instance of the collection; otherwise, basically all queries will return undefined data or empty lists.
To summarize: Subscribe and Publish are used to give different data to different users. The most common example would be giving different data based on roles. Say, for instance, you have a web application where you can see a "public" and a "friend" profile.
Meteor.publish('user_profile', function (userId) {
if (Roles.userIsInRole(this.userId, 'can-view', userId)) {
return Meteor.users.find(userId, {
fields: {
public: 1,
profile: 1,
friends: 1,
interests: 1
}
});
} else {
return Meteor.users.find(userId, {
fields: { public: 1 }
});
}
});
Now if you logged in as a user who was not friends with this user, and did Meteor.subscribe('user_profile', 'theidofuser'), and did Meteor.users.findOne(), you would only see their public profile. If you added yourself to the can-view role of the user group, you would be able to see public, profile, friends, and interests. It's essentially for security.
Knowing that, here's how the answers to your questions breaks down:
Order of operations matters, in the sense that you will get undefined unless it's in a reactive block (like Tracker.autorun or Template.helpers).
You still need to subscribe when using fetch. All fetch really does is return an array instead of a cursor. Publishing with foreign keys is a pretty advanced problem at times; I recommend reywood:publish-composite, which handles the annoying details for you.
Tracker.autorun watches reactive variables within the block and will rerun the function when one of them changes. You don't really ever use it instead of subscribing, you just use it to watch the variables in your scope.