Business logic with CouchDB - javascript

I need to design an offline-first application with sync capabilities, so I decided to go with CouchDB. Since I will deploy this application on the user's workstation, he/she has the ability to tamper with the data in his/her local database (e.g. PouchDB). AFAIK, CouchDB only offers validation functions (which only have access to the incoming document, its previous version and userCtx) to guard against this, but most of the time this validation depends on the business logic. Is there any way to handle this scenario?

Anyway, if the user tampers with the "local" db, those modifications will be overwritten when the sync occurs (if the remote db is the master).
Or, if you choose two-way sync, the master db gets modified by the user...
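For reference, the validation functions mentioned in the question are validate_doc_update functions stored in a design document on the remote (master) database. They can reject tampered documents that violate rules expressible from the incoming document, its previous version and userCtx alone, but not rules that depend on other documents. A minimal sketch, where the fields and the role check are hypothetical:

```javascript
// Stored as the "validate_doc_update" field of a design document on the
// remote CouchDB. It only sees the new doc, the stored doc and the user
// context, so only per-document rules can be enforced here.
function (newDoc, oldDoc, userCtx) {
  // Hypothetical rule: only users with the "accountant" role may change
  // the (hypothetical) "balance" field.
  if (oldDoc && newDoc.balance !== oldDoc.balance &&
      userCtx.roles.indexOf('accountant') === -1) {
    throw({ forbidden: 'Only accountants may change the balance.' });
  }
  // Hypothetical rule: quantities must never be negative.
  if (typeof newDoc.quantity === 'number' && newDoc.quantity < 0) {
    throw({ forbidden: 'quantity must be >= 0' });
  }
}
```

Business rules that need more context than this (totals across documents, cross-document consistency, etc.) generally have to be enforced by a trusted middle tier that is the only writer to the master database.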

Related

Using Firestore with offline persistence - Desktop (web)

Offline persistence in Firestore enables the browser to store records that were not yet uploaded to the server (while offline), even after the session is closed (browser exit).
Please see: https://firebase.google.com/docs/firestore/manage-data/enable-offline
However, Firestore does not offer any officially supported way to clear the cache when a user logs out of their session. Please refer to: https://youtu.be/qGAIimfrBB4?t=257
Recently they released the function clearPersistence, but they clearly state that it is not meant for security purposes and recommend disabling persistence if security is an important factor for you. Please see: https://firebase.google.com/docs/reference/android/com/google/firebase/firestore/FirebaseFirestore#clearPersistence()
Note: clearPersistence() is primarily intended to help write reliable tests that use Cloud Firestore. It uses an efficient mechanism for dropping existing data but does not attempt to securely overwrite or otherwise make cached data unrecoverable. For applications that are sensitive to the disclosure of cached data in between user sessions, we strongly recommend not enabling persistence at all.
I want to better understand what the security hole is when calling clearPersistence() on user logout.
Has anyone any experience with that? Is there any other working solution that lets you remove all of the Firestore cache after a logout?
There is no guarantee that your code will run in the browser (or any other client). For example, a malicious user can take the configuration data from your application and call the API to get access to the same data in your project, but then store it wherever they want.
Another malicious user might prevent the app from clearing the local cache, or quickly copy the local cache file to another location to keep a copy before it is cleared.
And these are just two of the simplest examples. The simple fact is that you should assume that any data that exists/persists on the client can be seen by any user who has access to that client.
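For completeness, this is roughly how the cache is enabled and cleared with the modular web SDK; note that clearIndexedDbPersistence() may only be called while the instance is not started, and, as the quote above says, it is a best-effort cleanup rather than a security boundary (the config object below is a placeholder):

```javascript
import { initializeApp } from 'firebase/app';
import {
  getFirestore, enableIndexedDbPersistence,
  terminate, clearIndexedDbPersistence
} from 'firebase/firestore';

const app = initializeApp({ /* your config */ });
const db = getFirestore(app);

// Opt in to the local cache discussed above.
enableIndexedDbPersistence(db).catch(err => console.warn('persistence unavailable', err));

// On logout: stop the instance, then drop the cached data.
// Best effort only -- the data is not securely overwritten.
async function clearFirestoreCacheOnLogout() {
  await terminate(db);
  await clearIndexedDbPersistence(db);
}
```

Even with this in place, the points above stand: anything that was ever written to the client's disk should be assumed readable by whoever controls that client.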

Saving Firebase RTDB bandwidth by performing initial sync from JSON file

Firebase Realtime Database offers powerful synchronisation across clients... But it does seem to have rather high bandwidth charges.
I was hence wondering if it is possible to perform an "initial sync" on clients, so to speak, where we load a reasonably recent version of the database from elsewhere (with lower bandwidth charges), and then have the Firebase SDK (flutter, web) sync from there instead of downloading all the required parts of the database. Maybe a "load from JSON" sort of function or similar.
Seeing as to how the Firebase SDKs do seem to store some state locally on clients (for offline operation) and sync up with Firebase once online again, I was wondering if it might be possible to:
set the local state to our "recent version" downloaded from another source, along with that version's timestamp
trick the client SDK into thinking that we are recovering from an "offline" state
then let the SDK communicate with the Firebase server to get changes since our last timestamp
Are there any approaches to achieve the effect of saving bandwidth through performing an initial sync from a cheaper source? Thanks in advance!
Unfortunately, every get() and every realtime listener will load all the data it needs from the database. I had the same idea when I started working with the Firebase RTDB, but unfortunately, at the moment it's not possible.
The only way to reduce the bandwidth is to read data in chunks that are as small as possible.
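As an illustration of reading in small chunks, queries can be limited and paginated with the modular web SDK so that each get() only transfers a slice of the data; the "items" path and page size below are made up:

```javascript
import {
  getDatabase, ref, query, orderByKey, startAfter, limitToFirst, get
} from 'firebase/database';

const db = getDatabase();

// Fetch one page of the (hypothetical) /items node instead of the whole thing.
async function fetchItemsPage(lastKey, pageSize = 100) {
  const base = ref(db, 'items');
  const q = lastKey
    ? query(base, orderByKey(), startAfter(lastKey), limitToFirst(pageSize))
    : query(base, orderByKey(), limitToFirst(pageSize));
  const snapshot = await get(q);
  return snapshot.val(); // null when there is nothing left to read
}
```

This doesn't give you the "initial sync from elsewhere" you asked about, but it at least keeps each read, and each attached listener, as narrow as possible.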
You might want to take a look at AceBase, which is an open source alternative to the Firebase RTDB. It offers the same functionality, has powerful indexing and querying options, offline support, synchronization, etc. It is easy to set up and you can host it anywhere you want. You can even use it as a standalone realtime database in the browser, so you could also use it in combination with Firebase to perform custom synchronization between your front-end and back-end DBs. AceBase is free and its full source code is public.

storing logged user data from front-end

I'm using AngularJS on the front end and the Laravel framework on the back end. My database is MySQL.
I need to store all of the users' logs. Since I don't want to store these logs inside MySQL, I chose MongoDB.
Now I don't want to use Laravel to store the logs; instead, I want to use Node.js.
At a glance:
Laravel - MySQL: to store application data
Node.js - MongoDB: to store user logs
My question:
I want to send user logs from AngularJS to Node.js and store them inside MongoDB.
I think most systems store user logs from the server side (here, that would be Laravel to MongoDB or MySQL), but I would send the logs from the front end to Node.js for storage. Of course, the connection between AngularJS and Node.js uses a hashing method.
What are the advantages and disadvantages of this method?
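To make the proposed setup concrete, the Node.js side could be little more than an endpoint that accepts a batch of log entries and writes them to MongoDB. A rough sketch using Express and the official MongoDB driver, where the /logs route, database name and collection name are placeholders:

```javascript
// Hypothetical Node.js log receiver: Express + the official MongoDB driver.
const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();
app.use(express.json());

const client = new MongoClient('mongodb://localhost:27017');
const logs = client.db('applogs').collection('user_logs'); // placeholder names

// The AngularJS app POSTs an array of log entries to this route.
app.post('/logs', async (req, res) => {
  const entries = (req.body || []).map(e => ({ ...e, receivedAt: new Date() }));
  if (entries.length) await logs.insertMany(entries);
  res.sendStatus(204);
});

client.connect().then(() => app.listen(3000));
```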
I have built a framework that logs all kinds of user actions from the front-end JS app via an AJAX call. For performance reasons it is based on asynchronous events and buffers actions until some number are queued (generally 10) or a window close event fires. The data is stored in a back-end database (ours is a fixed SQL schema, so we have normalized all logs to a specific format).
We have found this to be a very good debugging and auditing tool. It does consume a modest amount of bandwidth, and we only have a few hundred simultaneous users, but I would think that with a good server design it could scale well (we use two sets of servers in two data centers).
It does help to classify the "actions", so we have nav actions for focus changes, api actions for other AJAX calls, error logs for anything unusual, input actions for form data edits, etc. We always log a session id, the user id, a timestamp, etc. The best part is that the framework does it all; the app writer does not need to think about it unless they explicitly want to call a logging function.
Your mileage will vary based on your requirements/environment, but it works very well for us.
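A buffered front-end logger along the lines described above might look roughly like the sketch below; the flush threshold, endpoint URL and identifiers are assumptions, not the answerer's actual framework:

```javascript
// Minimal sketch of a buffered client-side action logger.
const LOG_ENDPOINT = '/logs';            // hypothetical logging endpoint
const FLUSH_AT = 10;                     // flush once this many entries are queued
const sessionId = 'session-placeholder'; // assume these are set at login
const userId = 'user-placeholder';

let buffer = [];

function logAction(type, details) {
  buffer.push({ type, details, sessionId, userId, ts: Date.now() });
  if (buffer.length >= FLUSH_AT) flush();
}

function flush() {
  if (!buffer.length) return;
  const batch = JSON.stringify(buffer);
  buffer = [];
  // sendBeacon keeps working during page unload, unlike a plain XHR.
  navigator.sendBeacon(LOG_ENDPOINT, batch);
}

// Flush whatever is left when the page is closed or hidden.
window.addEventListener('pagehide', flush);
```

Calls from the app then look like logAction('nav', { to: '/clients' }) or logAction('error', { message: err.message }).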

How to handle users in `_users` database with many applications in the same CouchDB instance?

In his blog post The Definitive Guide to CouchDB Authentication and Security, Matt Woodward points out some things about CouchDB that I'm not sure I understand completely.
He says:
"Basically the way security works in CouchDB is that users are stored in the _users database (or elsewhere if you like; this can be changed in the config file)...".
So, all users of the whole CouchDB instance are stored in a single database, right? Which means that if I have more than one application running in different databases within the same CouchDB instance, I'd have to deal with users who want to access both applications, correct?
He also says
"Database readers can only read documents and views on a specific database, and have no other permissions".
Then, he adds
"By default all databases are read/write enabled for anonymous users, even if you define database admins on a database".
So can anonymous users read documents in a specific database or not?
I'll start out by saying that those articles, while still informative, are several years old and possibly outdated. I would recommend reading through the official documentation if you are trying to learn about CouchDB.
Now to answer your question. (more information about security here and here)
In CouchDB, security is something you can incrementally build up as you are developing your application. The default is very open, and you lock things down by adding configuration. (in what I think is a pretty intuitive fashion)
By default, CouchDB is in "Admin Party" mode, which means anyone can read and write anything (because every user, including anonymous users, is treated as an admin).
Once you add any admin users to your server (via configuration, not the _users database), the party is over. What this means is that some actions can now only be performed by the admins you've explicitly defined (such as creating databases, setting config, etc.).
In this state, anonymous users can still read/write normal documents in any database that has been created. (design documents can only be modified by admins) If you are ready to start locking down individual databases, you can do that by specifying users/roles in the security object for a given database.
When people use the term "database reader", they mean that a user has been added as a "member" in the security object. (either by their username, or their role) By specifying any members or admins in the security object for a database, then only those users will have permissions inside the database, all others will be disallowed.
To summarize, anonymous users can read/write anything by default. Once an admin is designated, security tightens and certain actions can only be done by that admin. If you specify database members/admins, the security for a database tightens even more, only allowing those users to even read the database.
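Concretely, the per-database members and admins described above live in the database's _security object, which a server admin can set with a single PUT. A sketch using fetch, where the database name, user names, roles and credentials are placeholders:

```javascript
// Lock down one database by writing its _security object.
// Afterwards, only the listed members/roles can even read it.
const securityObject = {
  admins:  { names: ['app1_admin'],   roles: ['app1_admin_role'] },
  members: { names: ['alice', 'bob'], roles: ['app1_user'] }
};

fetch('http://localhost:5984/app1_db/_security', {
  method: 'PUT',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Basic ' + btoa('admin:password') // server admin credentials
  },
  body: JSON.stringify(securityObject)
}).then(res => console.log('security object saved:', res.ok));
```

This also answers the multi-application concern: users can stay in the shared _users database, and you control which applications' databases each of them can reach by assigning roles and referencing those roles in each database's security object.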

When to call the backend and when to store locally (angularjs)

I have an Ionic app and a Parse.com backend. My users can perform CRUD functions on exercise programmes, changing every aspect of the programme, including adding, deleting, and editing the exercises within it.
I am confused about when to save, when to call the server, and how much data can be held in services / $rootScope.
Typical user flow is as below:
Create Programme and Client (Create both on server and store data in $localStorage).
User goes to the edit screen where they can perform CRUD functions on all exercises within the programme. Currently I perform a server call on each function so it is synced to the backend.
The user may go back and select a different programme - downloading the data and storing it in localStorage again.
My question is how I can ensure that my users' data is always saved to the server while offering them a responsive, fast user experience.
Would it be normal to have a timeout function that triggers a save periodically? On mobile, the number of calls to the server is quite painful over a poor connection.
Any ideas on full local / remote sync with Ionic and Parse.com would be welcome.
From my experience, the best way to think of this is as follows:
localStorage is essentially a cache layer, which, if up to date, is great because it can reduce network calls. However, it can be cleared or go stale at any time and should be treated as volatile storage.
Your server is your source of truth, and as such, should always be updated.
What this means is: for reads, localStorage is great, since you don't need to fetch your data a million times if it hasn't changed. For writes, always trust your server for long-term storage.
The pattern I suggest is: on load, fetch any relevant data and save it to localStorage. Any further reads should come from localStorage. Edits should go directly to the server, and on success you can write those changes to localStorage. This way, if you have an error on save, the user can be informed, and/or you can use localStorage as a queue and keep trying to post the data to the server until it fully succeeds.
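A bare-bones version of that read-from-cache, write-through-to-server pattern might look like the sketch below. It assumes the Parse JS SDK is loaded; the 'Programme' class, cache keys and helper names are made up for illustration:

```javascript
const CACHE_KEY = 'programmes';           // hypothetical cache key
const RETRY_KEY = 'pendingProgrammeSaves';

// Reads: serve from localStorage when possible, otherwise hit the server once.
async function loadProgrammes() {
  const cached = localStorage.getItem(CACHE_KEY);
  if (cached) return JSON.parse(cached);
  const results = await new Parse.Query('Programme').find();
  const json = results.map(r => r.toJSON());
  localStorage.setItem(CACHE_KEY, JSON.stringify(json));
  return json;
}

// Writes: server first, then refresh the cache; queue locally on failure.
async function saveProgramme(programme) {
  try {
    await programme.save();
    updateCache(programme.toJSON());
  } catch (err) {
    queueForRetry(programme.toJSON());    // retry later, e.g. on reconnect
  }
}

function updateCache(json) {
  const items = JSON.parse(localStorage.getItem(CACHE_KEY) || '[]')
    .filter(p => p.objectId !== json.objectId);
  items.push(json);
  localStorage.setItem(CACHE_KEY, JSON.stringify(items));
}

function queueForRetry(json) {
  const queue = JSON.parse(localStorage.getItem(RETRY_KEY) || '[]');
  queue.push(json);
  localStorage.setItem(RETRY_KEY, JSON.stringify(queue));
}
```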
This is called "offline sync" or sometimes "4-way data binding". The point is to cache data locally and sync it with a remote backend. This is a very common need, but the solutions are unfortunately not that common... The ideal flow would follow this philosophy:
save data locally
try to sync it with the server (performing auto merges)
And
periodically sync, using a timer and maybe a "connection resumed" event.
This is very hard to achieve manually. I've been searching for modules for a long time, and the only ones that come to mind don't really fit your needs (because they tend to be backend providers that give you frontend connectors, and you already have an opinionated backend), but here they are anyway (a minimal PouchDB sync sketch follows the list):
Strongloop's Loopback.io
Meteor
PouchDB
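Since PouchDB pairs naturally with an offline-first setup, here is a rough idea of what the save-locally-then-sync flow looks like with it; the database name and remote URL are placeholders:

```javascript
// Sketch: write locally first, let PouchDB replicate in the background.
const localDB  = new PouchDB('programmes');
const remoteDB = new PouchDB('https://example.com/db/programmes'); // placeholder URL

// Continuous two-way sync; "retry" resumes automatically when the
// connection comes back, which covers the "connection resumed" case above.
PouchDB.sync(localDB, remoteDB, { live: true, retry: true })
  .on('change', info => console.log('synced', info.direction))
  .on('error', err => console.error('sync error', err));

// Writes always go to the local database; they reach the server when they can.
localDB.put({ _id: 'programme:1', name: 'Strength A', exercises: [] });
```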
