Firebase child() vs. Angularfire $firebaseObject - javascript

I am trying to understand WHEN Firebase actually loads the data to the client, versus doing a "lazy load" (only downloading the data when it's needed). The reason is that I am saving images (base64) in Firebase (please don't ask why; it's only a few hundred MBs). So there are two choices:
// With typical Firebase
var imagesRef = Ref.child('images');
// With Angularfire
var imagesObj = $firebaseObject(Ref.child('images'));
Ref is just a reference to my Firebase URL.
I know that with Angularfire there is $loaded(), which makes me think Angularfire actually loads all the data AT ONCE and makes it available right away when you call $firebaseObject(). Is that correct?
As for using child(), I don't see any load() event to catch in the documentation. Maybe I missed it. But does it load all the data from the server to the client?
If I have like 500MB of images, I definitely don't want this load-all-at-once happening.

Firebase retrieves the data when you call .on() on a ref.
It's not widely known, but all the data is retrieved in one piece (whether you call .on('value') or .on('child_added')), so you'd better paginate your results using the orderBy*() queries with limitToFirst() / limitToLast(), or use firebase-util.
What AngularFire does when you instantiate a $firebaseObject / $firebaseArray is call .on('value') / .on('child_added') from within the constructor function of the instance. So yes, the data is retrieved almost immediately (but inside a deferred, hence the $loaded() function).
Check out the source code of the ObjectManager and the constructor of $firebaseObject, for instance; it's pretty clear.
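For example, a minimal sketch of paginating with the legacy (pre-3.0) JavaScript SDK, assuming Ref is the same root reference as in the question:
// Only the first 10 children under 'images' are downloaded;
// .on('child_added') then fires once per retrieved child
var imagesRef = Ref.child('images');
imagesRef.orderByKey().limitToFirst(10).on('child_added', function (snap) {
  console.log('loaded image', snap.key()); // snap.key (a property) in SDK 3.x+
});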

Related

How many times should I call firebase.analytics()?

When using Firestore, I see people using this pattern:
export const db = firebase.firestore();
And then use the db reference across the app to access the Firestore interface.
But I don't usually do this. I prefer to use it like:
firebase.firestore().collection("myCollection").get() // I USE IT LIKE THIS ACROSS THE APP
Whenever I need Firestore, I always call firebase.firestore()
Basically I'm getting the same reference over and over again to the Cloud Firestore service interface.
And it works just fine.
QUESTION
Can the same concept be applied to the firebase.analytics() call?
I.e., can I call it multiple times throughout my app (like the Firestore example), or will I be logging the same default events over and over again on each call?
Like: calling firebase.analytics() every time I need the Analytics interface.
Because I know that just by calling it once, you're already logging (sending) some default events.
Would it make any difference to use it like this:
export const analytics = firebase.analytics();
And then use analytics to log events, instead of calling firebase.analytics().logEvent() every time?
firebase.analytics() just returns a singleton object, the same one every time. All of the Firebase product entry points exposed by the firebase object work that way. Whichever way you choose to get that singleton object is completely up to you.
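A quick sketch to illustrate, assuming the v8 namespaced SDK that your snippets suggest:
import firebase from 'firebase/app';
import 'firebase/analytics';

export const analytics = firebase.analytics();
console.log(analytics === firebase.analytics()); // true: the same singleton
// So these two calls are equivalent; repeated firebase.analytics() calls
// return the existing instance rather than re-initializing anything:
analytics.logEvent('page_view');
firebase.analytics().logEvent('page_view');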

Firebase "update" operation downloads data?

I was profiling a "download leak" in my Firebase database (I'm using the JavaScript SDK / Firebase Functions on Node.js) and finally narrowed it down to the update() call, which surprisingly causes a data download (which impacts billing in my case quite significantly: ~50% of the bill comes from this leak):
Firebase Functions index.js:
exports.myTrigger = functions.database.ref("some/data/path").onWrite((data, context) => {
  var dbRootRef = data.after.ref.root;
  return dbRootRef.child("/user/gCapeUausrUSDRqZH8tPzcrqnF42/wr").update({field1: "val1", field2: "val2"});
});
This function generates a download at the "/user/gCapeUausrUSDRqZH8tPzcrqnF42/wr" node.
If I change the paths to something like this:
exports.myTrigger = functions.database.ref("some/data/path").onWrite((data, context) => {
  var dbRootRef = data.after.ref.root;
  return dbRootRef.child("/user/gCapeUausrUSDRqZH8tPzcrqnF42").update({"wr/field1": "val1", "wr/field2": "val2"});
});
it generates a download at the "/user/gCapeUausrUSDRqZH8tPzcrqnF42" node.
Here are the results of firebase database:profile:
How can I get rid of the download while updating data, or at least reduce the usage, since I only need to upload?
I don't think it is possible to avoid this in a Firebase Cloud Functions trigger.
The .onWrite((data, context) => ...) handler receives a data argument, which is the complete DataSnapshot change, and there is no way to configure it not to fetch its value.
Still, there are two things you can do to help reduce the data cost:
Watch a smaller set for the trigger, e.g. functions.database.ref("some/data/path") rather than ("some").
Use a more specific hook, i.e. onCreate() or onUpdate() rather than onWrite().
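A sketch of both suggestions combined, assuming the v1 Functions SDK from your code:
exports.myTrigger = functions.database.ref("some/data/path")
  .onUpdate((change, context) => {
    // onUpdate() fires only for updates (not creations or deletions),
    // and the watched path is kept as narrow as possible.
    return change.after.ref.root
      .child("/user/gCapeUausrUSDRqZH8tPzcrqnF42/wr")
      .update({field1: "val1", field2: "val2"});
  });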
You should expect that all operations will round trip with your client code. Otherwise, how would the client know when the work is complete? It's going to take some space to express that. The screenshot you're showing (which is very tiny and hard to read - consider copying the text directly into your question) indicates a very small amount of download data.
To get a better sense of what the real cost is, run multiple tests and see if that tiny cost is actually just part of the one-time handshake between the client and server when the connection is established. That cost might not be an issue, since your function code maintains a persistent connection over time as the Cloud Functions instance is reused.

What is the proper way to get data out of Relay Modern's internal store?

We are currently exploring some of the undocumented apis in Relay Modern, and so far the best way we have found to grab data out of the Relay Store for use in the application is to call environment.lookup with a selector derived from a graphql query.
We resorted to this because the RecordSource.get method seems to return the object from the store without fetching any data for the nodes nested under it. Is there a better way to fetch an object along with all the nodes connected to it?
Our use case is we are using applyOptimisticUpdate to update Relay's store so that changes that are made prior to saving on our admin application are visible to all components that have requested that piece of data. So once we are done making these changes we'd like to requery the relay Store to get the current state of a record, clean it up for real mutation purposes, and send the updated payload to the server.
Any insights would be appreciated and I will add documentation to Relay with findings (if so desired).
Relay exposes a commitLocalUpdate function, which takes the environment and an updater function as arguments. The updater works the same way as the ones you use in mutations: it receives a proxy of the store, which you can read from and modify. You could use the body of this function to read data from the store and emit any side effects you need.
It feels a bit like abusing the API though, so there's probably a more correct way of doing this.
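For reference, a minimal sketch of that approach (recordId and the 'title' field are hypothetical):
import { commitLocalUpdate } from 'relay-runtime';

commitLocalUpdate(environment, store => {
  const record = store.get(recordId); // RecordProxy for the record, or null
  if (record) {
    // Read whatever you need off the proxy and emit your side effects here.
    console.log(record.getValue('title'));
  }
});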

Google Realtime undocumented feature: 'toJson'?

There is a method 'toJson' on the Google Realtime model object, as returned from Document.getModel()
doc.getModel().toJson()
It seems to take two parameters, but there is no documentation available from Google developers or elsewhere. How should it be used?
We are working on updating our documentation, but I can provide a short summary:
toJson(opt_appId, opt_revision) converts a Realtime model to a JSON representation in the same format as returned by the export REST endpoint (https://developers.google.com/drive/v2/reference/realtime/get). You can upload that JSON to the import REST endpoint (https://developers.google.com/drive/v2/reference/realtime/update) to update an existing Realtime document or create a new document. If you supply opt_appId or opt_revision, they are passed through to the returned JSON. The export REST endpoint supplies these values automatically, but the import endpoint ignores them, so they are optional and purely for your reference.
gapi.drive.realtime.loadFromJson(json) creates an in-memory Realtime document from JSON. The JSON can come either from toJson() or from the export REST endpoint. An in-memory document never communicates with the Realtime server (isInGoogleDrive is always false), so you are responsible for persisting the contents using toJson() and any storage you want (e.g. HTML5 local storage, storage on your server, or anything else that can store JSON).
gapi.drive.realtime.newInMemoryDocument() creates a new blank in-memory document. As with loadFromJson() this in-memory document is not synchronized to Google Drive, so you are responsible for persisting it.
In-memory documents fire event listeners like any other Realtime document, and most of the API works normally, including undo/redo. However, there is no collaboration, data is not automatically stored, and some features (such as getCollaborators()) return generic data instead of user-specific data.
We will be releasing additional documentation soon, but this is a supported feature, not an internal API.
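A short sketch of the round trip this describes, assuming doc is a loaded Realtime document:
// Serialize the model; both arguments of toJson(opt_appId, opt_revision)
// are optional and purely informational.
var json = doc.getModel().toJson();

// Rebuild an in-memory (non-synchronized) document from that JSON.
var restored = gapi.drive.realtime.loadFromJson(json);
console.log(restored.isInGoogleDrive); // false: never talks to the server
Persisting the JSON between those two steps (localStorage, your server, etc.) is up to you, since in-memory documents are never saved automatically.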
So your last comment jogged my memory. While undocumented, I think this is actually the endpoint for the following Drive API calls: https://developers.google.com/drive/v2/reference/realtime/get. It's a bit weird because it's not under the Realtime API and is instead documented under the Drive API, but you can both export as JSON and import a JSON structure. Presumably the toJson function will spit out compliant JSON to be used with these calls, but I have not tested this.
Edit: Also, it looks like they have more fun undocumented features that go hand in hand with this. In particular, gapi.drive.realtime.loadFromJson is available, which is presumably the actual loading counterpart to toJson. There is also an exposed gapi.drive.realtime.newInMemoryDocument function, which is likely an internal function to initialize the document loaded from loadFromJson. Additionally there is Document.isInGoogleDrive, which very likely determines whether you are using an in-memory document or a Drive-backed one. Fun stuff :).

dojo.store.Observable, JSON REST and queryEngine

Does anybody know how to use the JsonRest store in Dojo with an Observable wrapper, like the one in dojo.store.Observable?
What do I need, server side, to implement the store and make it work as an Observable one? What about the client side?
The documentation (http://dojotoolkit.org/reference-guide/1.7/dojo/store/Observable.html) says:
If you are using a server side store like the JsonRest store, you will need to provide a queryEngine in order for the update objects to be properly included or excluded from queries. If a queryEngine is not available, observe listener will be called with an undefined index.
But, I have no idea what they mean. I have never created a store myself, and am not 100% familiar with queryEngine (to be honest, I find it a little confusing). Why is queryEngine needed? What does the doc mean by "undefined index"? And how do you write a queryEngine for a JsonRest store? Shouldn't I use some kind of web socket for an observable REST store, since other users might change the data as well?
Confused!
I realize this question is a bit old, but here's some info for future reference. Since this is a multi-part question, I'll break it down into separate pieces:
1) Server-side Implementation of JsonRest
There's a pretty decent write-up on implementing the server side of the JsonRest store. It shows exactly what headers JsonRest will generate and what content will be included in the request. It helps form a mental model of how the JsonRest API is converted into HTTP.
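On the client, the corresponding store is just a JsonRest instance pointed at your endpoint. A minimal sketch (the target URL is hypothetical):
require(['dojo/store/JsonRest'], function (JsonRest) {
  var store = new JsonRest({ target: '/books/' });
  // Becomes GET /books/?categoryid=5 with a 'Range: items=0-24' header.
  store.query({ categoryid: 5 }, { start: 0, count: 25 }).then(function (results) {
    console.log(results);
  });
});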
2) Query Engine
Earlier in the same page, how query() works client side is explained. Basically, the query() function needs to be able to receive an object literal (ex: {title:'Learning Dojo',categoryid:5}) and return the objects in the store that match those conditions. "In the store" meaning already loaded into memory on the client, not on the server.
Depending on what you're trying to do, there's probably no need to write your own queryEngine anyway; just use the built-in SimpleQueryEngine if you're building your own custom store. The engine just needs to be handed an object literal, and it adds the whole Dojo query() API for you.
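For instance, a small sketch of SimpleQueryEngine matching an object literal against objects already in client memory:
require(['dojo/store/util/SimpleQueryEngine'], function (SimpleQueryEngine) {
  var data = [
    { title: 'Learning Dojo', categoryid: 5 },
    { title: 'Another Book', categoryid: 2 }
  ];
  // Compiling the query returns a filter function for in-memory arrays.
  var matching = SimpleQueryEngine({ categoryid: 5 })(data);
  console.log(matching); // [{ title: 'Learning Dojo', categoryid: 5 }]
});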
3) Observables
My understanding is that the Observables monitor client side changes in the collection of objects (ex: adding or removing a result) or even within a specific object (ex: post 5 has changed title). It does NOT monitor changes that happen server-side. It simply provides a mechanism to notify other aspects of the client-side app that data changed so that all aspects of the page stay synchronized.
There's a whole write-up on using Observables under the headings 'Collection Data Binding' and 'Object Data Binding: dojo/Stateful'.
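To make that concrete, a sketch of observing a result set on a client-side store (a Memory store here, for brevity):
require(['dojo/store/Memory', 'dojo/store/Observable'], function (Memory, Observable) {
  var store = Observable(new Memory({ data: [{ id: 1, title: 'Post 1' }] }));
  var results = store.query({}); // the active, observable result set
  results.observe(function (object, removedFrom, insertedInto) {
    // removedFrom === -1 means added; insertedInto === -1 means removed.
    console.log('changed:', object, removedFrom, insertedInto);
  }, true); // true: also be notified of in-place object updates
  store.put({ id: 2, title: 'Post 2' }); // triggers the observer
});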
4) Concurrency
There are two things you'd want to do in order to keep your client-side data synchronized with the server-side data: a) polling for changes from other users on the server, and b) using transactions to send data to the server.
a) To poll for changes to the data, you'd want to have your object store track the active query in a variable. Then, use setTimeout() or setInterval() to run the query in the background again every so often. Make sure that widgets or other aspects of your application use Observables to monitor changes in the query result set(s) they depend on. That way, changes on the server by other users would automatically be reflected throughout your application.
b) Use transactions to combine actions that must be combined. Then, make sure the server sends back an HTTP 200 status code (meaning 'it worked!'). If the transaction returns an HTTP status in the 400s, it didn't work for some reason and you need to re-query the data, because something changed on the backend. For example, the record you want to update was deleted, so you can't update it. There's a write-up on transactions as well, under the heading 'Transactional'.
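A rough sketch of the polling half of this, where store is assumed to be the Observable-wrapped JsonRest store and activeQuery is whatever query the UI currently depends on:
var activeQuery = { categoryid: 5 }; // hypothetical tracked query

setInterval(function () {
  // Each poll re-fetches from the REST endpoint; parts of the app observing
  // the data can refresh themselves when the server-side state has changed.
  store.query(activeQuery).forEach(function (item) {
    // iterate so the fresh results are materialized client-side
  });
}, 60000); // every 60 seconds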
