There is a method 'toJson' on the Google Realtime model object, as returned from Document.getModel()
doc.getModel().toJson()
Seems to take two parameters, but there is no documentation available from Google developers or elsewhere. How should it be used?
We are working on updating our documentation, but I can provide a short summary:
toJson(opt_appId, opt_revision) converts a Realtime model to a JSON representation in the same format as returned by the export REST endpoint (https://developers.google.com/drive/v2/reference/realtime/get). You can upload that JSON to the import REST endpoint (https://developers.google.com/drive/v2/reference/realtime/update) to update an existing Realtime document or create a new document. If you supply opt_appId or opt_revision, they are passed through to the returned JSON. The export REST endpoint supplies these values automatically, but the import endpoint ignores them, so they are optional and purely for your reference.
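For illustration, a hedged sketch of the round trip: export the model with toJson() and upload it to the import endpoint (realtime.update) to replace another document's contents. Here doc is an already-loaded Realtime document and fileId is an assumed target Drive file ID:

// Export the current model as JSON (same format as the export endpoint).
var json = doc.getModel().toJson();

// PUT the JSON to the realtime.update upload endpoint.
gapi.client.request({
  path: '/upload/drive/v2/files/' + fileId + '/realtime',
  method: 'PUT',
  params: { uploadType: 'media' },
  body: json
}).execute(function(response) {
  console.log('Realtime contents replaced:', response);
});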
gapi.drive.realtime.loadFromJson(json) creates an in-memory Realtime document from JSON. The JSON can come either from toJson() or from the export REST endpoint. An in-memory document never communicates with the Realtime server (isInGoogleDrive is always false), so you are responsible for persisting the contents using toJson() and any storage you want (e.g. HTML5 local storage, storage on your server, or anything else that can store JSON).
gapi.drive.realtime.newInMemoryDocument() creates a new blank in-memory document. As with loadFromJson() this in-memory document is not synchronized to Google Drive, so you are responsible for persisting it.
In-memory documents fire event listeners like any other Realtime document, and most of the API works normally, including undo/redo. However, there is no collaboration, data is not automatically stored, and some features (such as getCollaborators()) return generic data instead of user-specific data.
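As a minimal sketch of that persistence responsibility, assuming HTML5 localStorage as the backing store:

// Create a blank in-memory document and put some data in it.
var doc = gapi.drive.realtime.newInMemoryDocument();
var model = doc.getModel();
model.getRoot().set('title', model.createString('Hello'));

// Save: serialize the model (wrap in JSON.stringify if toJson returns an object).
localStorage.setItem('myDoc', doc.getModel().toJson());

// Restore later: rebuild an in-memory document from the saved JSON.
var restored = gapi.drive.realtime.loadFromJson(localStorage.getItem('myDoc'));
console.log(restored.isInGoogleDrive); // false for in-memory documents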
We will be releasing additional documentation soon, but this is a supported feature, not an internal API.
So your last comment jogged my memory. While undocumented, I think this is actually the endpoint for the following Drive API calls: https://developers.google.com/drive/v2/reference/realtime/get. It's a bit weird because it's not under the Realtime API and is instead documented under the Drive API, but you can both export as JSON and import a JSON structure. Presumably the toJson function will spit out compliant JSON to be used with these functions, but I have not tested this.
Edit: Also, it looks like they have more fun undocumented features that go hand-in-hand with this. In particular, gapi.drive.realtime.loadFromJson is available, which is presumably the actual loading counterpart to toJson. There is also the exposed gapi.drive.realtime.newInMemoryDocument function, which is likely an internal function to initialize the document loaded via loadFromJson. Additionally, there is Document.isInGoogleDrive, which very likely determines whether you are using an in-memory document vs. a Drive-backed one. Fun stuff :).
We are currently exploring some of the undocumented APIs in Relay Modern, and so far the best way we have found to grab data out of the Relay store for use in the application is to call environment.lookup with a selector derived from a graphql query.
We ended up here because the RecordSource.get method returns the object from the store but doesn't fetch any data for nodes nested under it. Is there a better way to fetch an object along with all the nodes connected to it?
Our use case: we are using applyOptimisticUpdate to update Relay's store so that changes made prior to saving in our admin application are visible to all components that have requested that piece of data. Once we are done making these changes, we'd like to re-query the Relay store to get the current state of a record, clean it up for real mutation purposes, and send the updated payload to the server.
Any insights would be appreciated and I will add documentation to Relay with findings (if so desired).
Relay exposes a commitLocalUpdate function, which takes the environment and an updater function as arguments. The updater works the same way as the ones you use in mutations -- it receives a proxy of the store that you can read from and modify. You could use the body of this function to read data from the store and emit any side effects you need.
It feels a bit like abusing the API though, so there's probably a more correct way of doing this.
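A minimal sketch of that idea; the data ID, field names, and sendToServer helper are all hypothetical:

const { commitLocalUpdate } = require('relay-runtime');

commitLocalUpdate(environment, store => {
  const record = store.get('client:record-id'); // hypothetical data ID
  if (record) {
    const title = record.getValue('title');
    const author = record.getLinkedRecord('author'); // a nested node
    const name = author && author.getValue('name');
    sendToServer({ title: title, authorName: name }); // hypothetical side effect
  }
});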
According to this documentation, and this accompanying example, Firebase tends to follow this flow when transforming newly written data:
1. The client writes data to Firebase, which is immediately accepted
2. The supplied Cloud Function is triggered, which transforms the data (in the example above, it removes swear words)
3. The transformed data is written again, overwriting the original data written in step 1
Maybe I'm missing something here, but this flow seems to present some problems. For example, if there is an error in step 2 above, and step 3 is never fired, the un-transformed data will just linger in the database. It seems like it would be better to transform the data as soon as it hits the server, but before writing. This would be followed by a single write operation, which will leave no loose artifacts behind if it fails. Is there any way in the current Firebase + Google Cloud Functions stack to add these types of pre-write data transforms?
My (tentative and weird) solution so far is to have a "shadow" /_temp/{endpoint} area in my Firebase db, so that when I want to write to /{endpoint}, I write there instead, which then triggers the relevant cloud function to do the transformation before writing to /{endpoint}. This at least prevents potentially incomplete data from leaking into my database, but it seems very inelegant and "hacky."
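For what it's worth, a hedged sketch of that shadow-path pattern, assuming a /messages endpoint, your own sanitize() transform, and the firebase-functions database trigger API:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Clients write to /_temp/messages; the function transforms the value,
// writes the clean copy to /messages, then removes the temporary one.
exports.moderateMessage = functions.database
  .ref('/_temp/messages/{pushId}')
  .onCreate((snapshot, context) => {
    const cleaned = sanitize(snapshot.val()); // your transform
    return admin.database()
      .ref('/messages/' + context.params.pushId)
      .set(cleaned)
      .then(() => snapshot.ref.remove());
  });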
I'd also be interested to know if there are any server-side methods for transforming data before responding to read requests.
There is no hook in the Firebase Database (neither through Cloud Functions nor elsewhere) that allows you to modify values before they're written to the database. The temporary queue is the idiomatic way to address this use case. It functions much like a moderation queue in most forum software.
You could use an HTTP Function to create an endpoint that your code calls and then perform the transformation there. You could use a similar pattern for reading data, although you'd have to rebuild the realtime synchronization capabilities of Firebase yourself.
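A minimal sketch of that HTTP-endpoint approach, again assuming a /messages path and your own sanitize() transform:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// The client POSTs here instead of writing to the database directly,
// so only transformed data ever reaches /messages.
exports.writeMessage = functions.https.onRequest((req, res) => {
  const cleaned = sanitize(req.body.text); // your transform
  admin.database().ref('/messages').push({ text: cleaned })
    .then(() => res.status(200).send('OK'))
    .catch(err => res.status(500).send(err.message));
});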
I am building a Chrome Extension that requires Parse User sessions. Because localStorage is domain-specific, I need to use chrome.storage so it can be accessed while on any site.
The current Parse JavaScript SDK uses localStorage to store a user object and create sessions. How can I switch this so Parse uses chrome.storage?
Here's my current thinking:
On login, store the session token:

chrome.storage.local.set({ sessionToken: user.getSessionToken() });

Then when I visit a site:

chrome.storage.local.get('sessionToken', function(items) {
  if (items.sessionToken) {
    Parse.User.become(items.sessionToken); // create a Parse session from the token
  }
});
I was wondering if anyone's come across a simpler way?
I would love to just do:
window.localStorage = chrome.storage
But I imagine that will lead to problems. Anyone solved this?
Basically you need to write your own storage controller (with the Chrome API, of course) and then let Parse use it.
Since this thread is newer, here's the detailed solution:
By default, the Parse JavaScript SDK uses the window.localStorage API to store data.
However, in a Chrome app, localStorage is replaced by the chrome.storage API, which offers a better, non-blocking storage solution.
The Parse SDK is actually prepared for different types of storage (both sync and async) and lets you set your own custom storage controller.
As @andrewimm (from GitHub) pointed out, you need to call Parse.CoreManager.setStorageController(YOUR_STORAGE_CONTROLLER) before you call Parse.initialize.
An example custom storage controller is the Parse React Native storage controller (which is also async), which can be found at: https://github.com/ParsePlatform/Parse-SDK-JS/blob/master/src/StorageController.react-native.js
A sync controller object needs to set its async property to 0 and have getItem, setItem, removeItem, and clear implemented.
An async controller object needs to set its async property to 1 and have getItemAsync, setItemAsync, removeItemAsync, and clear implemented.
All you need to do is follow the React Native example and build your own storage controller (with the chrome.storage API), and let Parse use it instead of localStorage.
Original GitHub issue thread: https://github.com/ParsePlatform/Parse-SDK-JS/issues/72
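A minimal sketch of such an async controller backed by chrome.storage.local, following the contract above (treat it as a starting point rather than a tested implementation; the app id is a placeholder):

var ChromeStorageController = {
  async: 1,
  getItemAsync: function(path) {
    return new Promise(function(resolve) {
      chrome.storage.local.get(path, function(items) {
        resolve(items[path] || null);
      });
    });
  },
  setItemAsync: function(path, value) {
    return new Promise(function(resolve) {
      var entry = {};
      entry[path] = value;
      chrome.storage.local.set(entry, resolve);
    });
  },
  removeItemAsync: function(path) {
    return new Promise(function(resolve) {
      chrome.storage.local.remove(path, resolve);
    });
  },
  clear: function() {
    chrome.storage.local.clear();
  }
};

// Must be set before Parse.initialize is called.
Parse.CoreManager.setStorageController(ChromeStorageController);
Parse.initialize('YOUR_APP_ID'); // placeholder app id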
You can't directly substitute localStorage with chrome.storage, because one is synchronous and the other is not, not to mention the API methods are different.
There is no way to wrap it so that it becomes fully synchronous. However, here are some ideas:
Work with storage in the background script. There, the domain for localStorage is fixed.
Make a local synchronous copy of the storage; something like
var localData = {};
chrome.storage.local.get(null, function(data) {
  localData = data;   // cache the entire storage contents
  doSomethingElse();  // mind that this is async
});
However, keeping this cache saved is going to be a problem. You have to intercept writes and commit them to chrome.storage, and likewise update the cache on the onChanged event - but all of that will be asynchronous, which may not work.
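For instance, a sketch of keeping the cache updated via the onChanged event (still asynchronous, so very recent writes may not be visible yet):

chrome.storage.onChanged.addListener(function(changes, areaName) {
  if (areaName !== 'local') return; // only mirror chrome.storage.local
  for (var key in changes) {
    localData[key] = changes[key].newValue;
  }
});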
In short, if a library uses localStorage internally, you won't be able to replace it adequately without rewriting the library or keeping it to background.
I am trying to understand WHEN Firebase actually loads the data to the client vs. doing a "lazy load" (only downloading the data when it's needed). The reason is that I am saving images (base64) in Firebase (please don't ask why, as it's only a few hundred MBs). So there are two choices:
// With typical Firebase
var imagesRef = Ref.child('images');
// With Angularfire
var imagesObj = $firebaseObject(Ref.child('images'));
Ref is just a reference to my Firebase URL.
I know that with AngularFire there is $loaded(), which makes me think AngularFire actually loads all the data AT ONCE and makes it available right away when you call $firebaseObject(). Is that correct?
As for using child(), I don't see any load() event to catch based on the documentation. Maybe I missed it. But does it load all the data from the server to the client?
If I have like 500MB of images, I definitely don't want this load-all-at-once happening.
Firebase retrieves the data when you call .on() on a ref.
As is not widely known, all the data is retrieved in one piece (whether you call .on('value') or .on('child_added')), so you'd better paginate your results using limitToFirst() / limitToLast(), or using firebase-util.
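For example, a hedged sketch of paging the images node instead of pulling everything; renderImage is a hypothetical helper:

// Pull only the first 10 images instead of the whole node.
var firstPage = Ref.child('images').orderByKey().limitToFirst(10);
firstPage.on('child_added', function(snapshot) {
  renderImage(snapshot.val()); // hypothetical helper
});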
What AngularFire does when you instantiate a $firebaseObject / $firebaseArray is call .on('value') / .on('child_added') from within the constructor function of the instance, so yes, the data is retrieved almost instantly (but in a deferred, hence the $loaded() function).
Check out the source code of the object manager and the constructor of $firebaseObject, for instance; it's pretty clear.
Does anybody know how to use the JsonRest store in Dojo with an Observable wrapper, like the one in dojo.store.Observable?
What do I need, server side, to implement the store and make it work as an Observable one? What about the client side?
The documentation says http://dojotoolkit.org/reference-guide/1.7/dojo/store/Observable.html
If you are using a server side store like the JsonRest store, you will need to provide a queryEngine in order for the update objects to be properly included or excluded from queries. If a queryEngine is not available, observe listener will be called with an undefined index.
But, I have no idea what they mean. I have never created a store myself, and am not 100% familiar with queryEngine (to be honest, I find it a little confusing). Why is queryEngine needed? What does the doc mean by "undefined index"? And how do you write a queryEngine for a JsonRest store? Shouldn't I use some kind of web socket for an observable REST store, since other users might change the data as well?
Confused!
I realize this question is a bit old, but here's some info for future reference. Since this is a multi-part question, I'll break it down into separate pieces:
1) Server-side Implementation of JsonRest
There's a pretty decent write-up on implementing the server side of the JsonRest store. It shows exactly which headers JsonRest will generate and what content will be included in the request. It helps form a mental model of how the JsonRest API is converted into HTTP.
2) Query Engine
Earlier in the same page, how query() works client side is explained. Basically, the query() function needs to be able to receive an object literal (ex: {title:'Learning Dojo',categoryid:5}) and return the objects in the store that match those conditions. "In the store" meaning already loaded into memory on the client, not on the server.
Depending on what you're trying to do, there's probably no need to write your own queryEngine anyway -- just use the built-in SimpleQueryEngine if you're building your own custom store. The engine just needs to be handed an object literal and it adds the whole dojo query() api for you.
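To make that concrete, a hedged sketch of wrapping JsonRest in Observable with the built-in SimpleQueryEngine (the target URL and query fields are made up):

require([
  'dojo/store/JsonRest',
  'dojo/store/Observable',
  'dojo/store/util/SimpleQueryEngine'
], function(JsonRest, Observable, SimpleQueryEngine) {
  var store = Observable(new JsonRest({
    target: '/api/posts/', // hypothetical REST endpoint
    queryEngine: SimpleQueryEngine
  }));

  var results = store.query({ categoryid: 5 });
  results.observe(function(object, removedFrom, insertedInto) {
    // the indexes are computed client-side by the queryEngine;
    // without one, removedFrom / insertedInto would be undefined
    console.log('changed:', object, removedFrom, insertedInto);
  }, true); // true = also be notified of updates within objects
});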
3) Observables
My understanding is that the Observables monitor client side changes in the collection of objects (ex: adding or removing a result) or even within a specific object (ex: post 5 has changed title). It does NOT monitor changes that happen server-side. It simply provides a mechanism to notify other aspects of the client-side app that data changed so that all aspects of the page stay synchronized.
There's a whole write up on using Observables under the headings 'Collection Data Binding' and 'Object Data Binding: dojo/Stateful'.
4) Concurrency
There are two things you'd want to do in order to keep your client-side data synchronized with the server-side data: a) polling for changes made by other users on the server, and b) using transactions to send data to the server.
a) To poll for changes to the data, you'd want to have your object store track the active query in a variable. Then, use setTimeout() or setInterval() to run the query in the background again every so often. Make sure that widgets or other aspects of your application use Observables to monitor changes in the query result set(s) they depend on. That way, changes on the server by other users would automatically be reflected throughout your application (see the polling sketch after this list).
b) Use transactions to combine actions that must be combined. Then, make sure the server sends back HTTP 200 status codes (meaning 'it worked!'). If the transaction returns an HTTP status in the 400s, it didn't work for some reason, and you need to re-query the data because something changed on the back end. For example, the record you want to update was deleted, so you can't update it. There's a write-up on transactions as well under the heading 'Transactional'.
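A hedged sketch of the polling idea from (a), reusing the store from the earlier sketch and pushing fresh objects into it through Observable's notify(); the endpoint, the id field, and the 30-second interval are all assumptions:

require(['dojo/request'], function(request) {
  setInterval(function() {
    request.get('/api/posts/', { handleAs: 'json' }).then(function(latest) {
      latest.forEach(function(item) {
        // notify(object, existingId) tells the Observable wrapper that this
        // object changed, so observe() listeners re-evaluate their results
        store.notify(item, item.id);
      });
    });
  }, 30000); // poll every 30 seconds
});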