Firebase offers some overall analytics in its App Dashboard; however, I need to know, on a per-node basis, whether my stored data is ever used or is just lying idle.
Why? It's simple: we are learning while developing, which makes the app a fast-evolving one. Not only does the logic change, but the stored data also needs to be refactored from time to time. I would like to get rid of abandoned and forgotten data. Any ideas?
In the best case, I would like to know the following:
When was a node last used? (Was it used at all?)
How many times was it used in the last 1h/24h/1w/1M?
Differentiate between read and write operations
2017 update
Cloud Functions trigger automatically and run on Google's servers.
https://firebase.google.com/docs/functions/
https://howtofirebase.com/firebase-cloud-functions-753935e80323
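With Cloud Functions, a database trigger can at least stamp write activity per node automatically. A minimal sketch, assuming the firebase-functions v1 API; the /data/{node} and /_stats paths are made up, and note that database triggers fire on writes only, so reads still cannot be observed this way:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Record the server time of the last write under a stats node whenever
// the watched node changes. Writing to /_stats (outside the watched path)
// avoids retriggering the function.
exports.trackWrites = functions.database.ref('/data/{node}')
  .onWrite((change, context) => {
    return admin.database()
      .ref(`/_stats/${context.params.node}/lastWrite`)
      .set(admin.database.ServerValue.TIMESTAMP);
  });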
2016 answer
So apparently Firebase itself doesn't provide any of this.
The only way I can think of right now is to create wrappers for the Firebase query and write functions, and then either compute the statistics in a client app or create a dedicated node for storing the statistical data.
If the statistics are stored in Firebase, the wrapper for the write functions (set, update, push, remove, setWithPriority) is relatively easy. The wrappers for the query functions (on, once) will have to record their statistics in a success callback.
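A minimal sketch of that wrapper idea, assuming the v3 web SDK; the /_stats location, the key encoding, and the wrapper names are made up for illustration:

// Log one access event; kind is 'read' or 'write'.
function logAccess(path, kind) {
  var key = path.replace(/\//g, '|'); // flatten the path into a legal key
  return firebase.database().ref('/_stats/' + key).push({
    kind: kind,
    at: firebase.database.ServerValue.TIMESTAMP
  });
}

// Wrapper for a write function.
function trackedSet(path, value) {
  return firebase.database().ref(path).set(value)
    .then(function () { return logAccess(path, 'write'); });
}

// Wrapper for a query function: the statistics are written in the
// success callback, as described above.
function trackedOnce(path, eventType) {
  return firebase.database().ref(path).once(eventType)
    .then(function (snap) {
      logAccess(path, 'read');
      return snap;
    });
}

Storing one event per access means the 1h/24h/1w/1M counts can later be computed with a range query such as orderByChild('at').startAt(Date.now() - windowMs).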
Related
I am listening for new Firebase Realtime Database documents with code something like this:
firebase.database().ref(path)
.orderByChild('timestamp')
.on('child_added', snap => {
...
});
where timestamp is set on the server with firebase.database.ServerValue.TIMESTAMP. I would like to have documents always handled in timestamp order, but I am aware that documents I add locally may arrive in the above code out of order.
I can check for and fix mis-ordered arrivals but I'd prefer not to if there is some way to have this not happen. I know about this answer (and answers that link to it) but I believe that applies to an earlier API without ordering methods like orderByChild.
I believe that I should be able to get timestamp order if I always add documents using a transaction and pass false in the applyLocally argument. I am wondering if it also works to add documents from a separate JavaScript context on the same client (e.g. from a Web Worker) without a transaction.
Will either or both of these approaches guarantee timestamp ordering? Is there any other way to achieve this? Among approaches that work, is one clearly superior or are there trade-offs among them?
The local estimate/latency compensation event is only fired on the client that performs the write operation. So if you perform a write operation in a different context, the original context will only see the operation when it comes from the server.
You might even be able to accomplish this by using two FirebaseApp instances, although I couldn't get that working in a quick test here myself.
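For reference, a minimal sketch of the transaction approach from the question: passing applyLocally = false suppresses the local latency-compensation event, so the child_added handler above fires only once the server has assigned the final timestamp. The path and payload are illustrative:

var itemRef = firebase.database().ref(path).push();
itemRef.transaction(function (current) {
  return { text: 'hello', timestamp: firebase.database.ServerValue.TIMESTAMP };
}, function (error, committed, snapshot) {
  if (error) console.error('Write failed', error);
}, /* applyLocally = */ false);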
In my app, I have a list that requires an "or" condition. But, as the docs say:
In this case, you should create a separate query for each OR condition and merge the query results in your app.
As a result, in my service, I'm managing two queries and surfacing them as a single observable list to consumers.
The problem comes in with updating. I have the choice of doing extra work to match the item needing an update to the correct collection, so that I can do the following:
myCollection.doc(item.id).update(item);
or I can make this much simpler and just:
angularFirestore.doc(`path/to/${item.id}`).update(item);
I'm operating under the assumption that the first method will result in faster updates, since I'm using the same reference, so it would be optimistically updated instantly; and that the latter will be slower because it takes a more roundabout path, updating the persistence layer first, with the referencing collection only getting notified later (probably still a small delay).
All of the above is assumption, however. I back it up only with a few random instances where I've seen it take a second or two for an update or delete to show up in another part of the view, but I haven't been able to actually inspect the process.
Does anyone know if the above is correct? Should I be doing the extra work to write through the collection references, or does AngularFire (and/or Firestore) handle this and make them effectively the same operation under the hood?
AngularFire2 is a thin wrapper around RxFire, which itself is a relatively thin wrapper around the Firebase JavaScript SDK.
There should be no significant performance difference between updating a document through AngularFire or updating it directly through the JavaScript SDK. In both cases the majority of the time is spent in the JavaScript SDK, and on the wire between the client and server. For this reason I typically update directly through the JavaScript SDK, since it's often a bit more direct and the AngularFire abstraction has little advantage for me in write operations. Given that AngularFire is built on top of this SDK, it picks up the changes instantly even when they're not made through AngularFire.
If you have an instance where this does not seem to be the case, I recommend creating a question with the minimal, complete/standalone code that reproduces that problem.
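To illustrate that last point, here is a minimal sketch, assuming AngularFire's Firestore API; the collection name and field are made up. The list is read through AngularFire, the update goes straight through the underlying JavaScript SDK, and the observable still emits the change immediately:

// Read through AngularFire.
const items$ = angularFirestore.collection('items').valueChanges({ idField: 'id' });

// Write through the raw SDK instance that AngularFire exposes.
angularFirestore.firestore.doc(`items/${item.id}`).update({ done: true });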
I'm wondering if it is common practice to use an in-memory database for a test environment, instead of MySQL (that has to be used for development/production).
If it makes sense, how do I set this up?
I think I can create config/test.json as shown in their chat example, but my app.js still requires Knex.
Should I do something along the lines of
const knex = (NODE_ENV !== 'test') ? require('./knex') : undefined;
and then configure it only if knex !== undefined?
If I do so, all my models have to be set up twice (once for Knex, once for testing without it).
What is the right/standard way to go about this?
EDIT:
As suggested below, I use a different schema for testing.
This is done by declaring a different connection string in config/test.json.
This question is solved, thank you!
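For anyone landing here later, a minimal sketch of that setup, assuming Feathers configuration follows node-config semantics (config/test.json overrides config/default.json when NODE_ENV=test), so a single Knex setup can read its connection from the app config. The "mysql" key and connection strings are illustrative:

// config/default.json: { "mysql": "mysql://user:pass@localhost/myapp" }
// config/test.json:    { "mysql": "mysql://user:pass@localhost/myapp_test" }

const knex = require('knex')({
  client: 'mysql',
  connection: app.get('mysql')
});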
if it is common practice to use an in-memory database for a test environment, instead
Sadly, it is a common practice, but not a particularly good one. When you use one database for testing and another for production, your tests are not actually verifying that the application code works with the real database.
Another negative effect is that you cannot use the special features of either database; the code has to restrict itself to the subset of DB features that both databases support.
I would run all the tests against all the supported real databases to actually make sure the code works on every targeted setup.
PS. Someone suggested using mocks to abstract the database away; that is another bad practice. Mocks work for some small parts of testing, but in the general case you need to run tests against a real database to be sure the code works correctly. The important thing is to set up the tests so that you have a fast way of truncating old data and populating new test data.
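A minimal sketch of that truncate-and-repopulate setup, assuming Knex and a Mocha-style beforeEach; the table and rows are made up:

const knex = require('./knex'); // the same Knex instance the app uses

beforeEach(async () => {
  await knex('messages').truncate(); // fast removal of old data
  await knex('messages').insert([    // fresh, known test data
    { id: 1, text: 'hello' }
  ]);
});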
According to this documentation, and this accompanying example, Firebase follows this flow when transforming newly written data:
1. The client writes data to Firebase, where it is immediately accepted.
2. The supplied Cloud Function is triggered and transforms the data (in the example above, it removes swear words).
3. The transformed data is written again, overwriting the original data written in step 1.
Maybe I'm missing something here, but this flow seems to present some problems. For example, if there is an error in step 2 above, and step 3 is never fired, the un-transformed data will just linger in the database. It seems like it would be better to transform the data as soon as it hits the server, but before writing. This would be followed by a single write operation, which will leave no loose artifacts behind if it fails. Is there any way in the current Firebase + Google Cloud Functions stack to add these types of pre-write data transforms?
My (tentative and weird) solution so far is to have a "shadow" /_temp/{endpoint} area in my Firebase db, so that when I want to write to /{endpoint}, I write there instead, which then triggers the relevant cloud function to do the transformation before writing to /{endpoint}. This at least prevents potentially incomplete data from leaking into my database, but it seems very inelegant and "hacky."
I'd also be interested to know if there are any server-side methods for transforming data before responding to read requests.
There is no hook in the Firebase Database (neither through Cloud Functions nor elsewhere) that allows you to modify values before they're written to the database. The temporary queue is the idiomatic way to address this use case. It functions much like a moderator queue in most forum software.
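A minimal sketch of that queue pattern, assuming the firebase-functions v1 API; the /_temp path and the sanitize() transform are placeholders:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

const sanitize = value => value; // placeholder; put the real transform here

exports.moderate = functions.database.ref('/_temp/{endpoint}/{pushId}')
  .onCreate((snapshot, context) => {
    const clean = sanitize(snapshot.val());
    const { endpoint, pushId } = context.params;
    // Write the transformed value to the real location, then delete the
    // temporary copy so nothing half-processed lingers there.
    return admin.database().ref(`/${endpoint}/${pushId}`).set(clean)
      .then(() => snapshot.ref.remove());
  });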
You could use an HTTP function to create an endpoint that your code calls and then perform the transformation there. You could use a similar pattern for reading data, although you'd have to rebuild Firebase's realtime synchronization capabilities yourself.
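Continuing the sketch above, the HTTP variant could look like this; the request shape is made up, and no auth or path validation is shown:

exports.writeClean = functions.https.onRequest(async (req, res) => {
  // Transform before the single write, so untransformed data never lands.
  const clean = sanitize(req.body.value);
  await admin.database().ref(req.body.path).set(clean);
  res.status(200).send('ok');
});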
I created an app that stores, compares, filters, and computes statistics over a collection of records. I've made it work offline, as in some use cases the user might not have constant (or any) access to the internet.
My problem is that after I've added ~60 records, the app starts to behave really slowly. For instance, I list a collection of simple objects from LocalStorage in an ng-model (a select list), and once those ~60 records are in, opening the select box is seriously slowed down.
What could the problem be? I'm thinking either some function is consuming more resources than necessary, or LocalStorage is not intended for such use.
I'm starting to get into PouchDB; would you say that migrating everything to PouchDB instead of LocalStorage would be a good move?
I can't paste the whole controller here as it's huge, but I've put an online version for testing. You can see it here.
For you not to have to create 60 records just to see the effect, you can download this CSV and import it in the app.
In order to import, the password for Edit Mode is: admin
Let's see if someone has a tip for this one!
I see you are storing all your records inside a single LocalStorage value (with the key being recordspax). So yeah, that will get quite slow, because your app has to 1) JSON parse/stringify and 2) store/retrieve the entire list every time you read/write data to the database.
Basically you are reading your entire database in and out of disk for every operation. Since both LocalStorage and JSON stringify/parse happen synchronously on the main thread, it can block DOM rendering and will thus slow down your app.
PouchDB could be a help here, but you could also benefit from something simpler like LocalForage, or simply changing your DB design so that every record has its own key/value rather than storing everything into a single key with a single value.
(Both LocalForage and PouchDB use IndexedDB/WebSQL rather than LocalStorage, meaning that database operations are not synchronous and do not block the DOM. However, you still don't want to stuff everything into a single document and therefore read the entire DB in and out of disk. :))
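A minimal sketch of that per-record key design; the key prefix is illustrative:

// Each record gets its own key, so only the touched record is
// stringified/parsed instead of the whole list.
function saveRecord(record) {
  localStorage.setItem('record:' + record.id, JSON.stringify(record));
}

function loadRecord(id) {
  var raw = localStorage.getItem('record:' + id);
  return raw ? JSON.parse(raw) : null;
}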