I have a simple CRUD application: the backend is written in Flask, the datastore is Mongo, and the frontend is AngularJS.
I would like to augment the application to allow CRUD operations even when it's offline, and to sync automatically when a data connection is available. What is the most suitable technology for this with the minimum amount of extra development?
I've looked at Meteor, which could solve the problem but would involve rewriting the app in Meteor.
I've also looked at Breeze, which looks like it might be a better option and would allow me to keep using Angular and Flask.
Is it a lot of data? Adding offline capabilities to an existing application will always have some impact.
You could try using some HTML5 features directly: HTML5 Application Caching is made for offline access and lets you download all the artifacts the webapp needs to work offline, so that solves part of the problem.
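For example, a minimal cache manifest might look like the following (the file names are placeholders; the manifest is referenced from the page as <html manifest="app.appcache">):

```
CACHE MANIFEST
# v1 - bump this comment to make clients re-download the files below

CACHE:
index.html
app.js
styles.css

NETWORK:
# everything else (e.g. the REST API) requires a connection
*
```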
The other part is the data: it can be kept in the browser using either HTML5 Local Storage or IndexedDB.
Local Storage stores strings associated with keys, so to store JSON you need to stringify it first. IndexedDB supports more data types; both datastores have a JavaScript API.
So it would be a matter of choosing one datastore and writing a sync module that periodically pings the server and, when the connection is available, syncs the local datastore.
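A minimal sketch of that idea, assuming localStorage as the datastore and a hypothetical /api/items/sync endpoint: queue writes locally (stringified, since Local Storage only holds strings) and flush the queue whenever the server becomes reachable.

```js
// Queue a change locally; Local Storage only stores strings, so stringify.
function queueChange(change) {
  var queue = JSON.parse(localStorage.getItem('pendingChanges') || '[]');
  queue.push(change);
  localStorage.setItem('pendingChanges', JSON.stringify(queue));
}

// Periodically try to push queued changes to the server.
function sync() {
  var queue = JSON.parse(localStorage.getItem('pendingChanges') || '[]');
  if (queue.length === 0) return;
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/api/items/sync'); // hypothetical endpoint
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.onload = function () {
    if (xhr.status === 200) {
      localStorage.removeItem('pendingChanges'); // server accepted everything
    }
  };
  xhr.send(JSON.stringify(queue));
}

setInterval(sync, 30000);                 // ping back every 30 seconds
window.addEventListener('online', sync);  // and as soon as connectivity returns
```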
The alternative is to use an offline-first framework like Hoodie; this is an article from a Hoodie developer on how to use Hoodie together with Angular. He took the Angular TODO sample application and modified it to use Hoodie.
This, together with the application cache, might be a lower-impact way to add offline capability to your Angular app.
I am creating an app in Electron using React. The first version will be a purely desktop-based app using a local database for data storage etc. However, eventually I would like to utilise as much of the same code base as possible and deploy the same app as a cloud-based alternative (by just leaving out the Electron component).
I'd like some advice, thoughts, and opinions from the community on whether I should use axios to build internal REST API infrastructure, rather than an API made of JavaScript modules/functions, for performing requests such as getting data from a database.
My thinking is that when I port the application to the cloud, I really just need to change the REST API locations to point to a cloud version, which would then get the data from a cloud database rather than a local one.
Would exposing such APIs locally within Electron using axios pose any security risks or other issues that I may need to consider?
Perhaps this answer/discussion may also prove useful for the community in the future.
Looking forward to thoughts, or even new suggestions on how this may be best approached.
Thanks.
I have created an Ionic app which uses PouchDB to store items locally and then syncs them remotely with CouchDB.
I am looking to create a REST API for this app that serves the items the app stored in CouchDB. A web app will also show all of the items from the mobile Ionic app.
I have experience with Node using Express. Would Node/Express be the best/easiest thing to use when building the API? Or is there another, simpler and more popular, way?
You can use Firebase (https://firebase.google.com/); it's a mobile platform that helps you quickly develop high-quality apps.
If you're not completely set on using NoSQL for your API, you should really take a look at Django REST Framework.
If you can map your data to objects, I'm sure you can go from zero knowledge to having an API up in hours. The quickstart tutorial is fully functional and you can set that up in just minutes! Plus, having more frameworks and languages under your belt can only be a good thing.
There's a Node client for CouchDB called Nano which you can use in your Express server, just like you would use Mongoose for MongoDB. Since CouchDB already exposes its data over HTTP, you could even use an HTTP wrapper like request, but Nano gives you a higher-level abstraction.
You can then go ahead and build an API that simply proxies those requests to your CouchDB instance.
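A minimal sketch of such a proxy, assuming a database named items on a local CouchDB:

```js
var express = require('express');
var nano = require('nano')('http://localhost:5984'); // your CouchDB instance
var items = nano.use('items'); // assumed database name

var app = express();
app.use(express.json());

// Read one item by id, proxied straight through to CouchDB.
app.get('/api/items/:id', function (req, res) {
  items.get(req.params.id, function (err, body) {
    if (err) return res.status(err.statusCode || 500).json(err);
    res.json(body);
  });
});

// Create or update an item.
app.post('/api/items', function (req, res) {
  items.insert(req.body, function (err, body) {
    if (err) return res.status(err.statusCode || 500).json(err);
    res.json(body);
  });
});

app.listen(3000);
```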
I'm developing an app for Firefox OS and I need to retrieve/send data from/to my DB. I also need to use this data in my logic implementation, which is in JS.
I've been told that I cannot implement PHP in Firefox OS, so is there any other way to retrieve the data and use it?
PS: This is my first app that I'm developing, so my programming skills are kind of rough.
You can use a local database in JS, e.g. PouchDB, TaffyDB, PersistenceJS, LokiJS or jStorage.
You can also save data to a backend server e.g. Parse or Firebase, using their APIs.
Or you can deploy your own backend storage and save data to it using REST.
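For the REST variant, the app side is plain XMLHttpRequest; the endpoint and payload below are placeholders:

```js
// Save a record to your own backend over REST; /api/records is a placeholder.
function saveRecord(record, done) {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', 'https://example.com/api/records');
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.onload = function () { done(null, JSON.parse(xhr.responseText)); };
  xhr.onerror = function () { done(new Error('network error')); };
  xhr.send(JSON.stringify(record));
}

saveRecord({ task: 'water plants' }, function (err, saved) {
  if (err) return console.error(err);
  console.log('stored with id', saved.id); // assumes the backend returns an id
});
```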
You should stick to the basic communication paradigms when sending/receiving data to/from a DB. In your case, you need to pass data to the DB via the web through an application layer.
Never, ever let an app communicate with your DB directly!
So what you need to do first is implement a wrapper application that gives controlled access to your DB. That's often done in PHP, for example. Your PHP application then offers the interfaces through which external applications (like your FFOS app) can communicate with the DB.
Since this touches on very basic programming knowledge, please give an idea of how much you know about programming at all; I'll then consider offering further details.
It might be a bit harder to do than you expect, but it can also be easier than you think. Using MySQL as a backend has serious implications. For example, MySQL doesn't provide any HTTP interface as far as I know. In other words, for most SQL-based databases you'll have to use some kind of middleware to connect your application to the database.
Usually the middleware is a server that publishes some kind of HTTP API, probably REST-style or even RPC such as JSON-RPC. The language in which you write the middleware doesn't really matter. The serious problem you'll face with this variant is restricting data: preventing users from accessing data they shouldn't have access to.
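A sketch of that kind of middleware in Node/Express over MySQL, with the access restriction enforced in the query itself (the table, credentials, and auth scheme are all assumptions):

```js
var express = require('express');
var mysql = require('mysql'); // MySQL has no HTTP interface, so we bridge it

var db = mysql.createConnection({
  host: 'localhost', user: 'app', password: 'secret', database: 'mydb'
});

// Placeholder auth: map a bearer token to a user id however you like.
function lookupUser(token) {
  return token === 'Bearer demo' ? 42 : null;
}

var app = express();

// Resolve the user before touching the DB; reject unknown clients.
app.use(function (req, res, next) {
  req.userId = lookupUser(req.headers['authorization']);
  if (!req.userId) return res.status(401).end();
  next();
});

// Only ever return rows that belong to the authenticated user.
app.get('/api/notes', function (req, res) {
  db.query('SELECT * FROM notes WHERE user_id = ?', [req.userId],
    function (err, rows) {
      if (err) return res.status(500).end();
      res.json(rows);
    });
});

app.listen(3000);
```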
There is also another variant, if you want a database plus synchronization on the server: CouchDB + PouchDB give you that for free. I mean it's really easy to set up, but you'll have to redesign some parts of your application. If your application makes a lot of data changes it might end up filling your disks, but if you're just starting out, it's possible that this setup will be more than enough.
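The client side of the CouchDB + PouchDB variant really is only a few lines (the remote URL is an assumption):

```js
var db = new PouchDB('todos'); // local, in-browser database

// Continuous two-way replication with a CouchDB instance.
db.sync('http://localhost:5984/todos', { live: true, retry: true })
  .on('change', function (info) { console.log('replicated', info); })
  .on('error', function (err) { console.error('sync error', err); });

// Reads and writes go to the local db and replicate in the background.
db.put({ _id: new Date().toISOString(), title: 'buy milk', done: false });
```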
I'm building a relatively complex and data-heavy web application in AngularJS. I'm planning to use PHP as a RESTful backend (with Symfony2 and FOSRestBundle). I have spent weeks looking around for different on/offline synchronization solutions, and there seem to be many half-solutions (see the list below for some examples), but none of them seem to fit my situation perfectly. How do I go about deciding which strategy will suit me?
Which issues might determine "best practices" for building an on/offline synchronization system in AngularJS and Symfony2 needs some research, but off the top of my head I want to consider things like speed, ease of implementation, future-proofing (a lasting solution), extensibility, resource usage/requirements on the client side, multiple offline users editing the same data, and how much and what type of data to store.
Some of my requirements that I'm presently aware of are:
The users will often be offline and then need to synchronize (locally created) data with the database.
Multiple users share some of the editable data (potential merging issues need to be considered).
Users might be logged in from multiple devices at the same time.
A large amount of data may be stored offline (up to a gigabyte).
I probably want the user to be able to decide what he wants to store locally.
Even if the user is online, I probably want him to be able to choose whether he uses all (backend) data or only what's available locally.
Some potential example solutions
PouchDB - Interesting strategies for synchronizing changes from multiple sources
Racer - Node lib for realtime sync, built on ShareJS
Meteor - DDP and strategies for sync
ShareJS - Node.js operational transformation, inspired by Google Wave
Restangular - Alternative to $resource
EmberData - EmberJS’s ORM-like data persistence library
ServiceWorker
IndexedDB Polyfill - Polyfills IndexedDB in browsers that support WebSQL (Safari)
BreezeJS
JayData
Loopback’s ORM
ActiveRecord
BackBone Models
lawnchair - Lightweight client-side DB lib from Brian Leroux
TogetherJS - Mozilla Labs’ multi-client state sync/collaboration lib.
localForage - Mozilla’s DOMStorage improvement library.
Orbit.js - Content synchronization library
(https://docs.google.com/document/d/1DMacL7iwjSMPP0ytZfugpU4v0PWUK0BT6lhyaVEmlBQ/edit#heading=h.864mpiz510wz)
Any help would be much appreciated :)
You seem to want a lot; the sync part is hard... I have a solution to some of this in an OSS library I am developing. The idea is that it versions local data, so you can figure out what has changed and therefore do a meaningful sync, which also includes conflict resolution etc. It is sort of an offline Meteor, as it is really tuned for offline use (built for the London Underground, where we have no mobile data signal).
I have also developed an ecosystem around it, which includes a connection manager and server. The main project is at https://github.com/forbesmyester/SyncIt and is very well documented and tested. The test app for the ecosystem will be at https://github.com/forbesmyester/SyncItTodoMvc, but I have yet to write virtually any docs for it.
It currently uses LocalStorage but will be easy to move to localForage, as it actually uses a wrapper around localStorage to give it an async API... Another one for the list, maybe?
To work offline with your requirements, I suggest dividing the problem into two scenarios: content (HTML, JS, CSS) and data (the REST API).
The content
This will be stored offline via AppCache for small apps or, for advanced cases, with the awesome Service Workers (Chrome 40+).
The data
This requires solving storage and synchronization, and it becomes a more difficult problem.
I suggest a deep reading of the Differential Synchronization algorithm, and take the following tips into consideration:
Frontend
Store the resource and its shadow (using, for example, the URL as the key) in localStorage for small apps, or in more advanced alternatives (PouchDB, IndexedDB, ...). With the resource you can work offline; when you need to synchronize with the server, compute a JSON diff between the resource and its shadow and send it to the server in a PATCH request, as sketched below.
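A rough sketch of that frontend flow, assuming localStorage and a hypothetical jsonDiff helper (any JSON diff/patch library would do here):

```js
// Keep a shadow copy alongside the working resource, keyed by URL.
function loadResource(url) {
  return {
    resource: JSON.parse(localStorage.getItem(url)),
    shadow: JSON.parse(localStorage.getItem(url + ':shadow'))
  };
}

// On sync, diff the resource against its shadow and PATCH only the changes.
function syncResource(url) {
  var pair = loadResource(url);
  var patch = jsonDiff(pair.shadow, pair.resource); // hypothetical diff helper
  var xhr = new XMLHttpRequest();
  xhr.open('PATCH', url);
  xhr.setRequestHeader('Content-Type', 'application/json-patch+json');
  xhr.onload = function () {
    if (xhr.status >= 200 && xhr.status < 300) {
      // The server applied the patch: the resource becomes the new shadow.
      localStorage.setItem(url + ':shadow', localStorage.getItem(url));
    }
  };
  xhr.send(JSON.stringify(patch));
}
```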
Backend
On the backend, consider storing the shadow copies in Redis.
The two sides (frontend/backend) need to identify the client node; to do so, you could use an x-sync-token HTTP header (sent on every client request via Angular interceptors), as sketched below.
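A sketch of that client-node identification as an AngularJS interceptor (the module name and token scheme are assumptions):

```js
angular.module('app', [])
  .factory('syncTokenInterceptor', function ($window) {
    // Identify this client node with a stable random token.
    var token = $window.localStorage.getItem('x-sync-token');
    if (!token) {
      token = Math.random().toString(36).slice(2);
      $window.localStorage.setItem('x-sync-token', token);
    }
    return {
      request: function (config) {
        config.headers['x-sync-token'] = token; // sent with every request
        return config;
      }
    };
  })
  .config(function ($httpProvider) {
    $httpProvider.interceptors.push('syncTokenInterceptor');
  });
```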
https://www.firebase.com/
It's reliable and proven, and can be used as a backend and sync library for what you're after. But it costs money and requires some integration coding.
https://goinstant.com/ is also a good hosted option.
In some of my apps, I prefer to have both: a syncing DB source AND another main database (Mongo/Express, PHP/MySQL, etc.). Then each DB handles what it's best at, with its own features (real-time vs. security, etc.). This is true regardless of the sync-DB provider (be it Racer, Firebase, GoInstant, ...).
The app I am developing has many of the same requirements and is being built in AngularJS. In terms of future-proofing, there are two main concerns I have found: one is hacking attempts, which require encryption and possibly one-time keys and a backend key manager; the other is support for WebSQL being dropped by the standards consortium in preference to IndexedDB, so finding an abstraction layer that can support both is important.
The solution set I have come up with is fairly straightforward: offline data is loaded into the UI first, and a request goes out to the REST server if the app is in an online state. As for resolving data conflicts in a multi-user environment, that becomes a business-rule decision. My decision was to simplify the matter and not delve into data merges, but to use a microtimestamp comparison to determine which version should be kept and pushed out to clients. When in offline mode, data is stored as a dirty write and then pushed to the server when returning to an online state, as sketched below.
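A sketch of that timestamp comparison and dirty-write queue (the field names and storage layout are my own):

```js
// Last-write-wins: keep whichever version carries the newer timestamp.
function resolve(local, remote) {
  return local.modifiedAt > remote.modifiedAt ? local : remote;
}

// While offline, mark writes dirty instead of sending them.
function save(record) {
  record.modifiedAt = Date.now() * 1000; // microsecond-style stamp
  record.dirty = !navigator.onLine;
  localStorage.setItem('rec:' + record.id, JSON.stringify(record));
  if (!record.dirty) pushToServer(record);
}

// On reconnect, push every dirty record to the REST server.
window.addEventListener('online', function () {
  Object.keys(localStorage)
    .filter(function (k) { return k.indexOf('rec:') === 0; })
    .map(function (k) { return JSON.parse(localStorage.getItem(k)); })
    .filter(function (r) { return r.dirty; })
    .forEach(function (r) {
      r.dirty = false;
      localStorage.setItem('rec:' + r.id, JSON.stringify(r));
      pushToServer(r);
    });
});

function pushToServer(record) {
  // placeholder transport; substitute your REST client of choice
  console.log('PUT /api/records/' + record.id, record);
}
```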
Or use ydn-db, which I am evaluating now, as it has built-in support for AWS and Google cloud storage.
Another suggestion:
Yjs leverages an OT-like algorithm to share a wide range of supported data types, and you have the option to store the shared data in IndexedDB (so it is available for offline editing).
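For example, with the current Yjs API and its IndexedDB persistence provider (the document and array names here are arbitrary):

```js
import * as Y from 'yjs';
import { IndexeddbPersistence } from 'y-indexeddb';

const doc = new Y.Doc();
// Persist the shared document locally so edits survive offline.
const persistence = new IndexeddbPersistence('my-app-doc', doc);

const todos = doc.getArray('todos'); // one of Yjs's shared types
persistence.whenSynced.then(() => {
  todos.push([{ title: 'works offline', done: false }]);
  console.log(todos.toArray());
});
```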
I have a simple offline HTML5/JavaScript single-HTML-file web application that I store in my Dropbox. It's a sort of time-tracking tool I wrote, and it saves the application data to local storage. Since it's for my own use, I like the convenience of an offline app.
But I have several computers, and I've been trying to come up with any sort of hacky way to synchronize this app's data (which is currently using local storage) between my various machines.
It seems that chrome allows synchronization of data, but only for chrome extensions. I also thought I could perhaps have the web page automatically save/load its data from a file in a dropbox folder, but there doesn't appear to be a way to automatically sync with a specific file without user prompting.
I suppose the "obvious" solution is to put the page on a server and store the data in a database. But suppose I don't want a solution which requires me to maintain apps on a server - is there another way, however hacky, to cobble together synchronization?
I even looked for a while to see if there was a vendor offering a web database service - where I could, say, post/get a blob of JSON on demand - and then somehow have my offline app sync with this service, but the same-origin policy seems to invalidate that plan (and besides, I couldn't find such a service).
Is there a tricky/sneaky solution to this problem using chrome, or google drive, or dropbox, or some other tool I'm not aware of? Or am I stuck setting up my own server?
I have been working on a Project that basically gives you versioned localStorage with support for conflict resolution if the same resource ends up being edited by two different clients. At this point there are no drivers for server or client (they are async in-memory at the moment for testing purposes) but there is a lot of code and abstraction to make writing your own drivers really easy... I was even thinking of doing a dropbox/google docs driver myself, except I want DynamoDB/MongoDB and Lawnchair done first.
The code does not depend on jQuery or any other libraries, and there's a pretty full-featured (though ugly) demo for it as well.
Anyway the URL is https://github.com/forbesmyester/SyncIt
Apparently, I have exactly the same issue and have investigated it thoroughly. The best choice would be remoteStorage, if you can manage to make it work. It allows you to use a third-party server for data storage or to run your own instance.