Updatable offline storage / database - javascript

Currently I'm trying to learn NativeScript, and for that I thought about building an app like 'Anki'.
But while thinking about data storage I stumbled upon the problem of how to save my flash cards locally so the app works offline (for example with SQLite), save each card's review schedule for the user (e.g. show it again in 10 minutes or 1 day), AND have an update mechanism that adds new cards to the database without deleting the user's data.
What's the best way to solve that problem, especially if I want to ship the updates with an app update and without fetching everything from an external database?
I don't have any code yet, so a recommendation on how to approach this would be nice.

There are several methods in NativeScript you can use:
NativeScript-Sqlite (disclaimer: I'm the author)
This gives you full access to SQLite for saving and loading items; you can have databases as big as you need, and SQLite is very fast at reading. SQLite's biggest drawback is write speed; if you have a LOT of writing it can be slower than just writing to a file yourself.
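A minimal sketch for the flash-card question above, assuming the plugin's promise-based execSQL API (table and column names are made up): keep the bundled deck in one table and the user's review state in another, so an app update can upsert cards without touching the user's data.

const Sqlite = require("nativescript-sqlite");

async function applyCardUpdates(bundledCards) {
    // Opening without a callback returns a promise for the database object.
    const db = await new Sqlite("flashcards.db");

    // Card content shipped with the app and the user's scheduling data live apart.
    await db.execSQL("CREATE TABLE IF NOT EXISTS cards (id INTEGER PRIMARY KEY, front TEXT, back TEXT)");
    await db.execSQL("CREATE TABLE IF NOT EXISTS reviews (card_id INTEGER PRIMARY KEY, due_at INTEGER)");

    // Upsert the cards from the app update; rows in `reviews` are left alone.
    for (const card of bundledCards) {
        await db.execSQL("INSERT OR REPLACE INTO cards (id, front, back) VALUES (?, ?, ?)",
            [card.id, card.front, card.back]);
    }
    return db;
}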
NativeScript-LocalStorage (disclaimer again: I'm the author)
This is geared more toward smaller data sets, since the entire JSON-backed data store has to be loaded into memory when the app starts and saves. It is really fast overall, but not something you want to use for tens of thousands of records.
NativeScript-Couchbase
This uses SQLite for local storage and can use Couchbase for the remote storage; very nice for having syncable storage - Couchbase can be your own server or a leased or rented server.
NativeScript-Firebase
This is also very useful for having syncable storage; however, Google charges for Firebase beyond a certain point.
Built-in AppSettings.
This is really designed for a few application settings, not lots of data, but it is useful for smaller amounts.
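For example, a tiny sketch using the ApplicationSettings module from @nativescript/core (the key names are made up):

import { ApplicationSettings } from "@nativescript/core";

// A couple of scalar settings -- fine for AppSettings, too small to justify a database.
ApplicationSettings.setNumber("lastStudiedAt", Date.now());
ApplicationSettings.setBoolean("soundEnabled", true);
const lastStudiedAt = ApplicationSettings.getNumber("lastStudiedAt", 0);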
Roll your own on the file system.
I have done this in a couple of my projects; basically a hybrid between my LocalStorage plugin and a mini-SQL type system. One project was very write-heavy, so it made more sense to generate the 20 or so separate files on the phone (one per table), because I could save them much quicker than inserting/replacing more than 100,000 records into SQLite each time the app started up. Searching needs were minimal.
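A rough sketch of that file-per-table idea using the core file-system module (file names and data shape are made up):

import { knownFolders } from "@nativescript/core";

// Each "table" is just a JSON file in the app's documents folder.
async function saveTable(name, rows) {
    const file = knownFolders.documents().getFile(name + ".json");
    await file.writeText(JSON.stringify(rows));
}

async function loadTable(name) {
    const file = knownFolders.documents().getFile(name + ".json");
    const text = await file.readText();
    return text ? JSON.parse(text) : [];
}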
Your storage choice really depends on what you are doing; it is a balancing act. Lots of searchable data: SQLite wins in almost all cases. Lots of frequent writing: something you create yourself might be a lot faster.

Related

How do I manage data for a Point of Sale Android app written in react-native

I am making a react-native POS application on Android. I am storing data (items, employees, etc.) in an online MySQL database. I also need to sync orders across all POS's of a restaurant.
The implementation I have right now fetches all the data needed when the app opens and stores it in redux, then accesses the data from redux, essentially using redux as a local database. Is this the best way to do things? Will there be performance issues if the dataset is large? Should I use a local SQLite database?
This, in my opinion, is not the right way of doing it.
1) If the dataset is large, it is going to take a long time to download all that state into redux, and it is also going to make the application allocate a huge amount of memory to store all the data.
2) If you have multiple users modifying the database locally, how do you sync all the changes? What do you do about concurrency, two users modifying the same record, etc.?
I think what you need to do is implement a backend service that centralises all the changes, and that pages and allows searching over the dataset. That way your UI only downloads the data it needs, and all CRUD operations are in sync and managed by the server.
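A minimal sketch of what the UI calls against such a service could look like (the endpoint and parameter names are invented):

// Fetch one page of items instead of pulling the whole dataset into redux.
async function fetchItemsPage(page, pageSize = 50, search = "") {
    const params = new URLSearchParams({ page, pageSize, search });
    const res = await fetch("https://api.example.com/items?" + params);
    if (!res.ok) throw new Error("Request failed: " + res.status);
    return res.json(); // e.g. { items: [...], total: 1234 }
}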

Pure JSON database

I have a ReactJS application which is a personal/news site (half of both). It has a publications section which requires a database to list and store all of the entries. The only user who will be able to edit/add/delete publications is the admin. Also, the format of every publication will be the same. So a separate backend application would be like reinventing the wheel: too complex for something as simple as publications.
The first thing I tried was Firebase from Google (https://firebase.google.com/). It is really simple, has real-time synchronization (which is useless in my case) and does all the dirty work for you. That is really awesome, but I ran into a problem. The website contains many high-resolution images and many of them have to be loaded at the same time. I realised that the website takes about 2-3 seconds to load, and that's with only a couple of images.
Then I thought about local storage on the same host. I did a few tests with local storage and, of course, it's many times faster (3 s vs milliseconds). So now I'm thinking about a JSON file which will contain all the information (image paths, paragraph text, names, etc.).
1 - Will it be safe to store everything on the same host in such an easy-to-access DB? It's okay if users can read the files, since they will be available on the website anyway. But is there a possibility that they will also be able to change them?
2 - To manipulate the DB I'm thinking about making a simple admin interface which will allow editing the publications visually. But will it be safe (x2) to store that interface on the same host? Of course it won't be running on a public IP and will be stored in another directory.
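For illustration, a minimal sketch of the static-JSON plan described above (the file name and field names are invented; this does not address the write/security questions):

import { useEffect, useState } from "react";

function Publications() {
    const [posts, setPosts] = useState([]);

    // Load the static JSON once; the file sits on the same host as the site.
    useEffect(() => {
        fetch("/data/publications.json")
            .then(res => res.json())
            .then(setPosts)
            .catch(() => setPosts([]));
    }, []);

    return (
        <ul>
            {posts.map(p => (
                <li key={p.id}>
                    <img src={p.image} alt={p.title} loading="lazy" />
                    <p>{p.text}</p>
                </li>
            ))}
        </ul>
    );
}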

Best way to periodically save javascript client data to server database and stay in sync?

I have created an application using the JavaScript library D3. Users will constantly click and drag to change graphical elements, and I currently save the data in 3-4 local JavaScript objects and arrays. I want to save the data to the server periodically rather than after each change, and I also want users to be able to keep working while not connected. Thinking back twenty years, I can imagine doing this manually: on the client side, records are flagged as "new", "revised", and "deleted"; every 10 seconds client data is saved via AJAX and either an object is updated or a SQL statement is executed; an id is returned from the database and saved on the client side to track each record for future modifications.
Note the data must be organized in a database for ease of separating elements for reuse. When the user is connected, updates every 5-10 seconds are fine. Then I can use an inexpensive and slow server. Of course a tool that deals with records that might not fully update is good, perhaps some transactional functionality.
There will be no separate mobile application. I can modify my JavaScript objects to be JSON-compliant if need be. I see there are "offline-first" frameworks and JavaScript "state containers". Redux caught my eye, especially when I saw its use climbing over the years according to Google Trends. I've read about so many options and am thoroughly confused by all of them. Here is a mishmash of tools I looked at: Store.js, now.js, IndexedDB, CouchDB, PouchDB, Cloudant, localForage, WebSQL, Polymer App Toolbox, the Hoodie framework, Ionic and Angular, and Loopback. Not to mention XHR and WebSockets.
I have used MVC frameworks like Laravel and Zend, both with PHP and MySQL. I wonder whether I could integrate the suggested solution there. Thanks.
Related: How do I sync data with remote database in case of offline-first applications?
Saving the data locally using PouchDb and then syncing it with a CouchDb database (or IBM's Cloudant service) when a network connection is available is a well-trodden path for this sort of requirement. But your question is asking for an opinion, so there will be many other perfectly valid solutions to this.
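A minimal sketch of that approach (the database names and remote URL are placeholders):

import PouchDB from "pouchdb";

const local = new PouchDB("graph-elements");
const remote = new PouchDB("https://couch.example.com/graph-elements");

// Save each client-side change locally; PouchDB tracks revisions for you.
async function saveElement(element) {
    try {
        const existing = await local.get(element._id);
        element._rev = existing._rev;   // update rather than conflict
    } catch (err) {
        // not found yet -- first write for this element
    }
    return local.put(element);
}

// Continuous, bidirectional replication; retries automatically after going offline.
local.sync(remote, { live: true, retry: true })
    .on("error", err => console.error("sync error", err));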

Best practice for on/off line data synchronization using AngularJS and Symfony 2

I'm building a relatively complex and data-heavy web application in AngularJS. I'm planning to use PHP as a RESTful backend (with Symfony2 and FOSRestBundle). I have spent weeks looking around for different solutions to on/offline synchronization and there seem to be many half-solutions (see the list below for some examples). But none of them seem to fit my situation perfectly. How do I go about deciding which strategy will suit me?
What issues might determine "best practice" for building an on/offline synchronization system in AngularJS and Symfony2 needs some research, but off the top of my head I want to consider things like speed, ease of implementation, future-proofing (a lasting solution), extensibility, resource usage/requirements on the client side, multiple offline users editing the same data, and how much and what type of data to store.
Some of my requirements that I'm presently aware of are:
The users will often be offline and will then need to synchronize (locally created) data with the database.
Multiple users share some of the editable data (potential merging issues need to be considered).
Users might be logged in from multiple devices at the same time.
Allowing a large amount of data to be stored offline (up to a gigabyte).
I probably want the user to be able to decide what he wants to store locally.
Even if the user is online, I probably want the user to be able to choose whether to use all (backend) data or only what's available locally.
Some potential example solutions
PouchDB - Interesting strategies for synchronizing changes from multiple sources
Racer - Node lib for realtime sync, build on ShareJS
Meteor - DDP and strategies for sync
ShareJS - Node.js operational transformation, inspired by Google Wave
Restangular - Alternative to $resource
EmberData - EmberJS’s ORM-like data persistence library
ServiceWorker
IndexedDB Polyfill - Polyfills IndexedDB for browsers that only support WebSQL (Safari)
BreezeJS
JayData
Loopback’s ORM
ActiveRecord
BackBone Models
lawnchair - Lightweight client-side DB lib from Brian Leroux
TogetherJS - Mozilla Labs’ multi-client state sync/collaboration lib.
localForage - Mozilla’s DOMStorage improvement library.
Orbit.js - Content synchronization library
(https://docs.google.com/document/d/1DMacL7iwjSMPP0ytZfugpU4v0PWUK0BT6lhyaVEmlBQ/edit#heading=h.864mpiz510wz)
Any help would be much appreciated :)
You seem to want a lot of stuff, and the sync part is hard... I have a solution to some of this in an OSS library I am developing. The idea is that it versions local data, so you can figure out what has changed and therefore do meaningful sync, which also includes conflict resolution etc. It is sort of an offline Meteor, as it is really tuned to offline use (for the London Underground, where we have no mobile data signal).
I have also developed an eco system around it which includes a connection manager and server. The main project is at https://github.com/forbesmyester/SyncIt and is very well documented and tested. The test app for the ecosystem will be at https://github.com/forbesmyester/SyncItTodoMvc but I have yet to write virtually any docs for it.
It currently uses LocalStorage but will be easy to move to localForage, as it actually uses a wrapper around localStorage to expose it as an async API... Another one for the list, maybe?
To work offline with your requirements, I suggest dividing the problem into two scenarios: content (HTML, JS, CSS) and data (REST API).
The content
This will be stored offline by AppCache for small apps, or for advanced cases with the awesome Service Workers (Chrome 40+).
The data
This requires solving storage and synchronization, and it becomes a more difficult problem.
I suggest a deep reading of the Differential Synchronization algorithm, and take the following tips into consideration:
Frontend
Store the resource and its shadow (using, for example, the URL as the key) in localStorage for small apps, or in more advanced alternatives (PouchDB, IndexedDB, ...). With the resource you can work offline, and when you need to synchronize with the server, use a JSON diff (e.g. a JSON Patch library) to compute the differences between the resource and the shadow and send them to the server in a PATCH request.
Backend
On the backend, consider storing the shadow copies in Redis.
Both sides (frontend/backend) need to identify the client node; to do so you could use something like an x-sync-token HTTP header (sent in every request from the client, e.g. via Angular interceptors).
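A rough frontend sketch of that resource/shadow idea, using fast-json-patch for the diff (the header name and localStorage key scheme are assumptions):

import { compare } from "fast-json-patch";

function load(key) {
    return JSON.parse(localStorage.getItem(key) || "null");
}

async function syncResource(url, clientId) {
    const resource = load("resource:" + url);
    if (!resource) return;                     // nothing stored locally yet
    const shadow = load("shadow:" + url) || {};

    const patch = compare(shadow, resource);   // RFC 6902 operations
    if (patch.length === 0) return;            // nothing changed locally

    await fetch(url, {
        method: "PATCH",
        headers: {
            "Content-Type": "application/json-patch+json",
            "x-sync-token": clientId           // identifies this client node
        },
        body: JSON.stringify(patch)
    });

    // After a successful PATCH the shadow catches up with what the server has seen.
    localStorage.setItem("shadow:" + url, JSON.stringify(resource));
}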
https://www.firebase.com/
It's reliable and proven, and can be used as a backend and sync library for what you're after. But it costs, and requires some integration coding.
https://goinstant.com/ is also a good hosted option.
In some of my apps, I prefer to have both: a syncing DB source AND another main database (mongo/express, php/mysql, etc.). Then each DB handles what it's best at, using its own features (real-time vs. security, etc.). This is true regardless of the sync-DB provider (be it Racer or Firebase or GoInstant ...).
The app I am developing has many of the same requirements and is being built in AngularJS. In terms of future proofing, there are two main concerns I have found: one is hacking attempts, which call for encryption and possibly one-time keys and a backend key manager; the other is support for WebSQL being dropped by the standards consortium in preference to IndexedDB. So finding an abstraction layer that can support both is important. The solution set I have come up with is fairly straightforward: offline data is loaded into the UI first, and a request goes out to the REST server if in an online state. As for resolving data conflicts in a multi-user environment, that becomes a business-rule decision. My decision was to simplify the matter and not delve into data merging, but to use a microtime-stamp comparison to determine which version should be kept and pushed out to clients. When in offline mode, data is stored as a dirty write and then pushed to the server when returning to an online state.
Or use ydn-db, which I am evaluating now as it has built-in support for AWS and Google cloud storage.
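A bare-bones sketch of the last-write-wins / dirty-write approach described above (the endpoint and field names are invented):

const dirtyQueue = [];

function saveRecord(record) {
    record.updatedAt = Date.now();            // timestamp used for conflict resolution
    if (!navigator.onLine) {
        dirtyQueue.push({ ...record, dirty: true });
        return Promise.resolve(record);
    }
    return pushToServer(record);
}

async function pushToServer(record) {
    const res = await fetch("/api/records/" + record.id, {
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(record)
    });
    const serverCopy = await res.json();
    // Keep whichever copy is newer; no field-level merging.
    return serverCopy.updatedAt > record.updatedAt ? serverCopy : record;
}

// Flush queued offline writes when connectivity returns.
window.addEventListener("online", () => {
    while (dirtyQueue.length) pushToServer(dirtyQueue.shift());
});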
Another suggestion:
Yjs leverages an OT-like algorithm to share a wide range of supported data types, and you have the option to store the shared data in IndexedDB (so it is available for offline editing).
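For example, a small sketch with the y-indexeddb persistence provider (the document and array names are made up):

import * as Y from "yjs";
import { IndexeddbPersistence } from "y-indexeddb";

const doc = new Y.Doc();
const persistence = new IndexeddbPersistence("my-app-doc", doc);   // persists updates to IndexedDB

persistence.whenSynced.then(() => {
    const todos = doc.getArray("todos");
    todos.push(["edit made while offline"]);   // survives reloads, merges when peers reconnect
});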

Key/value store with good performance for multiple tenants

I'm running a multi-tenant GAE app where each tenant could have from a few thousand to 100k documents.
At the moment I'm trying to build an MVC JavaScript client app (the admin part of my app, with spine.js) and I need CRUD endpoints and the ability to fetch a large number of serialized objects at once. For this specific job App Engine is way too slow. I tried to store serialized objects in the blobstore, but between reading/writing and updating stuff in the blobstore it takes too much time and the app gets really slow.
I thought of using a NoSQL DB on an external machine to do these operations instead of App Engine.
A few options would be MongoDB, CouchDB or Redis, but I am not sure how well they perform with that much data and with concurrent requests/inserts from different tenants.
Let's say I have 20 tenants and each tenant has 50k docs. Can these DBs handle this load?
Is this even the right way to go?
Why not use the much faster regular App Engine datastore instead of the blobstore? Simply store your documents in regular entities as a Blob property. Just make sure the entity size doesn't exceed 1 MB, in which case you have to split your data across more than one entity. I run an application with millions of large Blobs that way.
To further speed things up, use memcache or even an in-memory cache. Consider fetching your entities with eventual consistency, which is MUCH faster. Run as many datastore operations in parallel as possible, using either bulk operations or the async API.
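A rough Node.js sketch of the chunking idea using the @google-cloud/datastore client (the kind names and chunk size are assumptions; the original app may well have used a different runtime):

const { Datastore } = require("@google-cloud/datastore");
const datastore = new Datastore();

const CHUNK_SIZE = 900 * 1024;   // stay safely under the ~1 MB entity limit

async function saveDocument(tenantId, docId, buffer) {
    const entities = [];
    for (let i = 0, n = 1; i < buffer.length; i += CHUNK_SIZE, n++) {
        entities.push({
            key: datastore.key(["Tenant", tenantId, "Document", docId, "Chunk", n]),
            excludeFromIndexes: ["data"],                  // blob data must not be indexed
            data: { data: buffer.slice(i, i + CHUNK_SIZE) }
        });
    }
    await datastore.save(entities);
}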
The overhead of making calls from App Engine to these external machines is going to be worse than the performance you're seeing now (I would expect). Why not just move everything to a non-App Engine machine?
I can't speak for Couch, but Mongo or Redis are definitely capable of handling serious load, as long as they are set up correctly and given enough horsepower for your needs.
