How to validate reads and secure documents in CouchDB? - javascript

In 10 Common Misconceptions about CouchDB, Joan Touzet is asked (30:16) if CouchDB will have a way to secure/validate reads on specific documents and/or specific fields of a document.
Joan says that if someone has access to the database, he/she can access all documents in that database.
So she says that there are a few ways to accomplish that:
(30:55) Cloudant was working on field level security access. Have they implemented it yet? Is it open-sourced?
(32:10) You should create a separate document in a separate database.
(32:20) Filtered replications. She mentions that it slows 'things' down. She means that the filter slows the replication, correct?
Also, according to the rcouch wiki (https://github.com/rcouch/rcouch/wiki/Validate-documents-on-read), it implements a validate_doc_read function (I haven't tested it, though). Does CouchDB have anything like it?
As far as I can see, the best approach is to model the database according to my problem (one database for this, another for that, one for this person, another for that person) and do filtered replications when necessary. Any suggestions?
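For reference, filtered replication works by putting a filter function in a design document on the source database and naming it in the replication request. A minimal sketch of the setup (the database names, the owner field and the filter name are my own assumptions, not from the talk):

// Design document in the source database, e.g. PUT /shared/_design/auth
{
  "_id": "_design/auth",
  "filters": {
    "by_owner": "function (doc, req) { return doc.owner === req.query.owner; }"
  }
}

// Replication request that copies only one owner's documents, e.g. POST /_replicate
{
  "source": "https://example.com/shared",
  "target": "https://example.com/alice_db",
  "filter": "auth/by_owner",
  "query_params": { "owner": "alice" }
}

The filter function is run against every candidate document during replication, which is the slowdown mentioned at 32:20.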

Related

Meteor.js - Should you denormalize data?

This question has been driving me crazy and I can't get my head around it. I come from a MySQL relational background and have been using Meteorjs and Mongo. For the purposes of this question take the example of posts and authors. One Author to Many Posts. I have come up with two ways in which to do this:
Have a single collection of posts - each post has the author information embedded in the document. This of course leads to denormalization and issues such as: if the author's name changes, how do you keep the data correct?
Have two collections: posts and authors - Each post has an author ID which references the authors collection. I then attempt to do a "join" on a non relational database while trying to maintain reactivity.
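For illustration, the two document shapes might look roughly like this (collection and field names are just made up for the example):

// Option 1: author embedded in every post (denormalised)
Posts.insert({
  title: 'My first post',
  body: '...',
  author: { name: 'Jane Doe', bio: '...' }   // copied into each post
});

// Option 2: posts reference a separate authors collection (normalised)
var authorId = Authors.insert({ name: 'Jane Doe', bio: '...' });
Posts.insert({
  title: 'My first post',
  body: '...',
  authorId: authorId                          // "join" resolved at read time
});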
It seems to me that with MongoDB a degree of denormalization is acceptable, and I am tempted to embed, as implementing joins really does feel like going against the ideals of Mongo.
Can anyone shed any light on what is the right approach especially in terms of wanting my app data to scale well and be manageable?
Thanks
Denormalisation is useful when you're scaling your application and you notice that some queries are taking too much time to complete. I have also noticed that most MongoDB developers tend to forget about data normalisation, but that's another topic.
Some developers say things like: "Don't use observe and observeChanges because it's slow". But we're building real-time applications, so that is a normal thing to happen; it's a CPU-intensive app design.
In my opinion, you should always aim for a normalised database design and then decide, try and test which fields, if duplicated/denormalised, could improve your app's performance. Examples: you remove one query per user; the UI needs an extra field and it's faster to duplicate it; etc.
With denormalisation you have an extra price to pay: you have to keep the denormalised fields in sync with the main collection.
Example:
Let's say that you have Authors and Articles collections. On each article you store the author's name. The author might change his name. In the normalised scenario this works fine. In the denormalised scenario you have to update the Author document's name AND every single article owned by this author with the new name.
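A rough sketch of what that second case costs, assuming authorId and authorName fields on each article:

// Normalised: a single update is enough
Authors.update(authorId, { $set: { name: newName } });

// Denormalised: the copied name has to be fixed on every article as well
Articles.update(
  { authorId: authorId },
  { $set: { authorName: newName } },
  { multi: true }   // update all of this author's articles
);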
Keeping a normalised design makes you life easier but denormalisation, eventually, becomes necessary.
From a MeteorJs perspective: With the normalised scenario you're sending data from 2 Collections to the client. With the denormalised scenario, you only send 1 collection. You can also reactively join on the server and send 1 collection to the client, although it increases the RAM usage because of MergeBox on the server.
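As an example of the normalised scenario, a publication can return cursors from both collections (names assumed; note that the author list here is computed once per subscription, so a fully reactive join needs more work or a package):

// Publish posts together with the authors they reference
Meteor.publish('postsWithAuthors', function () {
  var authorIds = Posts.find({}, { fields: { authorId: 1 } })
    .fetch()
    .map(function (p) { return p.authorId; });

  return [
    Posts.find({}),
    Authors.find({ _id: { $in: authorIds } })
  ];
});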
Denormalisation is something that is very specific to your application's needs. You can use Kadira to find ways of making your application faster. The database design is only one factor out of many that you play with when trying to improve performance.

How do I only allow access to one document for each client using CouchDB (Cloudant)?

I have a JavaScript application which uses a PouchDB instance to store data. I'd like to replicate that data to a Cloudant instance.
Most of the clients using my app and generating the data are anonymous. I'd like to still collect their data without requiring them to log in or sign up. All of the data they generate is stored in a single document. As you can probably tell, this presents a security challenge.
On the one hand, I'd like to permit anyone to read and write to my CouchDB instance, but I only want to give them access to their data. So, if an anonymous user creates a document, I'd like to only allow them to read/write to that document and not others. I don't want them to simply be able to download my entire database.
Reading the Cloudant and CouchDB documentation, it doesn't seem entirely clear how to achieve this. It looks like the following are possibilities:
Create a new database user each time an anonymous user starts generating data and only give that user access to the document they're going to create.
Create a new database for each anonymous user and somehow replicate that into the centralized database.
Figure out how to securely and transparently authenticate anonymous users.
I'm at a loss, probably due to my inexperience with Couch. How would you implement this?
I'm sure the explanation above will need clarification, so please ask away. Thanks in advance for everyone's help.
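For what it's worth, the second option (a database per anonymous user) is the pattern CouchDB handles most naturally: create a user in the _users database, create a database for them, and restrict its _security document to that user. A rough sketch with fetch, where the host, names and passwords are placeholders (Cloudant's native API keys/permissions model also differs slightly from the plain CouchDB _security format shown here):

// Sketch: provision one private database per anonymous user
// (admin credentials are placeholders; run this from trusted code, not the browser)
const server = 'https://example.cloudant.com';
const adminAuth = 'Basic ' + btoa('admin:password');

async function provisionUser(userName, userPass) {
  const headers = { 'Content-Type': 'application/json', 'Authorization': adminAuth };

  // 1. Create the user document in the _users database
  await fetch(server + '/_users/org.couchdb.user:' + userName, {
    method: 'PUT',
    headers: headers,
    body: JSON.stringify({ name: userName, password: userPass, roles: [], type: 'user' })
  });

  // 2. Create that user's private database
  const dbName = 'userdb-' + userName;
  await fetch(server + '/' + dbName, { method: 'PUT', headers: headers });

  // 3. Lock the database down so only that user is a member
  await fetch(server + '/' + dbName + '/_security', {
    method: 'PUT',
    headers: headers,
    body: JSON.stringify({
      admins:  { names: [], roles: [] },
      members: { names: [userName], roles: [] }
    })
  });
}

The PouchDB client would then replicate only to and from its own userdb-* database.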

How to store documents like Google Docs?

I'm interested in how Google Docs stores documents on the server side, because I need to create a similar application.
Does it use pure RTF/ODF files or its own database?
How do they make the versioning and undo/redo features possible?
If anybody has any knowledge about this, please share it with me.
To answer your question specifically as to how Google Docs works: they use a technology called
Operational Transformation
You may be able to use one of operational transformation engines listed on: https://en.wikipedia.org/wiki/Operational_transform#OT_software
The basic idea is that every operation has a context, e.g. "delete the fourth word in the fifth paragraph" or "add an input box after the button". The clients all send each other operations through the server. The clients and server each keep their own version of the document and apply operations as they come in.
When operations have overlapping contexts, there are a bunch of rules that kick in to resolve conflicts. Like you can't modify something that's been deleted, so the delete must come last in a sequence of concurrent operations on that context.
It's possible that the various clients and server will get out of sync, so you need a secondary algorithm to maintain consistency. One way would be to reload the data from the server whenever a conflict is detected.
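As a toy illustration of the idea (not how Google's engine actually works), here is the classic insert-versus-insert transform on plain text:

// Toy operational transform: adjust opA so it still applies after opB
// has already been applied. Ops look like { pos: number, text: string }.
function transformInsert(opA, opB) {
  if (opB.pos <= opA.pos) {
    // B inserted at or before A's position, so A's position shifts right
    return { pos: opA.pos + opB.text.length, text: opA.text };
  }
  return opA;
}

function applyInsert(doc, op) {
  return doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
}

// Two users edit "Hello world" concurrently
var base = 'Hello world';
var a = { pos: 5, text: ',' };    // user A wants "Hello, world"
var b = { pos: 11, text: '!' };   // user B wants "Hello world!"
console.log(applyInsert(applyInsert(base, b), transformInsert(a, b)));
// -> "Hello, world!"; applying them in the other order converges to the same text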
--This is an answer I got from a professor when I asked the same thing a couple of years ago.
You should use a database. Perhaps a table storing each document revision. First, find a way to determine whether an update is significant or not. You can store minor changes client side for redo/undo, and then, either periodically or per some condition (e.g., user hits save), create a database entry per revision (you can store things like bytes changed, bytes added, bytes deleted, etc.).
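A sketch of what such a per-revision record could hold; the schema and the storage call (the MongoDB Node.js driver, picked arbitrarily) are my own assumptions:

// Hypothetical revision record, written whenever a change is judged significant
function saveRevision(db, docId, oldText, newText, userId) {
  return db.collection('revisions').insertOne({
    docId: docId,
    userId: userId,
    savedAt: new Date(),
    bytesAdded: Math.max(newText.length - oldText.length, 0),
    bytesDeleted: Math.max(oldText.length - newText.length, 0),
    content: newText   // or store a diff against the previous revision instead
  });
}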
Take a look at MediaWiki, which is open source, and essentially does what you're asking (i.e., take a look at their tables and code).
RTF/ODF would typically be generated, and served, when a user requests exporting the document.
Possibly, you should consider utilizing Google Drive's public API. See link for details.

Which is better, searching in JavaScript or in the database?

I have a grid (an employee grid) which has, say, 1000-2000 rows.
I display employee name and department in the grid.
When I get the data for the grid, I get other details for the employee too (date of birth, location, role, etc.).
So the user has the option to edit the employee details. When he clicks edit, I need to display the other employee details in a pop-up. Since I have stored all the data in JavaScript, I search for the particular id and display all the details. So the code will be like
function getUserDetails(employeeId) {
    // employeeInformation already holds all the employee details,
    // collected while getting the data for the grid.
    for (var i = 0; i < employeeInformation.length; i++) {
        if (employeeInformation[i].employeeID == employeeId) {
            // display employee details.
        }
    }
}
The second solution would be to pass the employee id to the database and get all the information for that employee. The code will be like
function getUserDetails(employeeId) {
    // make an ajax call to the controller, which will call a procedure
    // in the database to get the employee details,
    // then display the employee details
}
So, which solution do you think will be optimal when I am handling 1000-2000 records?
I don't want to make the JavaScript heavy by storing a lot of data in the page.
UPDATED:
So one of my friends came up with a simple solution.
I am storing 4 columns for 500 rows (on average), so I don't think there should be any noticeable slowness in the webpage.
While loading the rows into the grid, I give each edit link a data-rowId attribute so that it is easy to retrieve the data.
Say I store all the employee information in a variable called employeeInfo.
When someone clicks the edit link, $(this).attr('data-rowId') will give the rowId, and employeeInfo[$(this).attr('data-rowId')] should give all the information about that employee.
Instead of storing the employee id and looping over the employee table to find the matching employee id, the row id should do the trick. This is very simple, but it did not strike me.
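Something like this is what that boils down to (a sketch; the markup and the showEditPopup helper are assumptions):

// employeeInfo is indexed by row id, filled while building the grid
var employeeInfo = [];

function addRow(rowId, employee) {
    employeeInfo[rowId] = employee;
    $('#employeeGrid tbody').append(
        '<tr><td>' + employee.name + '</td><td>' + employee.department + '</td>' +
        '<td><a href="#" class="edit" data-rowId="' + rowId + '">edit</a></td></tr>'
    );
}

// Direct lookup by row id - no loop over all employees needed
$('#employeeGrid').on('click', 'a.edit', function () {
    var employee = employeeInfo[$(this).attr('data-rowId')];
    showEditPopup(employee);   // hypothetical function that fills the popup
});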
I would suggest you make an AJAX call to the controller, for two main reasons:
It is not advisable to handle database activity in JavaScript, due to security issues.
JavaScript runs on the client-side machine, which should carry the least load and computation.
JavaScript should be as light as possible, so I suggest you do it in the database itself.
Don't count on JavaScript performance, because it depends heavily on the computer it is running on. I suggest you store and search on the server side rather than loading a heavy payload of data into the browser, which is restricted to the resources of the end user's machine.
Running long loops in JavaScript can lead to an unresponsive and irritating UI. As a good practice, use Ajax calls to fetch only the data you need.
Are you using HTML5? Will your users typically have relatively fast multicore computers? If so, a web-worker (http://www.w3schools.com/html/html5_webworkers.asp) might be a way to offload the search to the client while maintaining UI responsiveness.
Note, I've never used a Worker, so this advice may be way off base, but they certainly look interesting for something like this.
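A minimal sketch of how that could look, assuming a separate search-worker.js file served next to the page:

// search-worker.js - runs off the main UI thread
self.onmessage = function (e) {
    var employees = e.data.employees;
    for (var i = 0; i < employees.length; i++) {
        if (employees[i].employeeID == e.data.employeeId) {
            self.postMessage(employees[i]);
            return;
        }
    }
    self.postMessage(null);   // not found
};

// main page - hand the search off to the worker
var worker = new Worker('search-worker.js');
worker.onmessage = function (e) {
    if (e.data) {
        // display the employee details from e.data in the popup
    }
};
// selectedId would come from the clicked row
worker.postMessage({ employees: employeeInformation, employeeId: selectedId });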
In terms of separation of concerns, and recommended best approach, you should be handling that domain-level data retrieval on your server, and relying on the client-side for processing and displaying only the records with which it is concerned.
By populating your client with several thousand records for it to then parse, sort, search, etc., you not only take a huge performance hit and diminish user experience, but you also create many potential security risks. Obviously this also depends on the nature of the data in the application, but for something such as employee records, you probably don't want to be storing that on the client-side. Anyone using the application will then have access to all of that.
The more pragmatic approach to this problem is to have your controller populate the client with only the specific data which pertains to it, eliminating the need for searching through many records. You can also retrieve a single object by making an ajax query to your server to retrieve the data. This has the dual benefit of guaranteeing that you're displaying the current state of the DB, as well as being far more optimized than anything you could ever hope to write in JS.
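In practice that can be as small as the following (the endpoint URL and response shape are assumptions):

// Fetch a single employee on demand instead of shipping the whole table to the client
function getUserDetails(employeeId) {
    $.getJSON('/employees/' + encodeURIComponent(employeeId), function (employee) {
        // display the employee details, e.g. fill the edit popup
        showEditPopup(employee);   // hypothetical display function
    });
}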

EnsureIndex for likes in MongoDB

Well, I am creating a network that allows users to create posts and like them.
By asking on Stack Overflow I've understood how to structure my database:
A collection which includes a document for each post.
A collection which includes a document for each like; each of these documents holds a reference to the post it belongs to.
When I want to get ALL the likes for a post, I can query the likes collection looking for the reference to that post.
Up to here I am OK. But assuming I'll have millions of documents in the likes collection, I wondered how I could query and search among them without it taking too long.
I was advised to use ensureIndex; in this case, I have to create an index on the field which contains the reference to a post.
But when do I have to create this index? Is it enough to create it once (for example when I set up my database) and it will stay in place in MongoDB, or do I have to do it during the application's lifetime? Thank you.
But assuming I'll have millions of documents in the likes collection, I wondered how I could query and search among them without it taking too long.
I assume you would most likely want to do a count on the likes as an example?
You can't, really; instead you use optimizations to combat this. A count on millions of rows might get a bit slow.
A typical example is the counter columns in SQL databases that you use to amend the parent row with a summary figure of its children.
Same applies to MongoDB.
You would aggregate important data to the top.
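For example, you could keep a likeCount on the post itself and bump it whenever a like document is inserted (collection and field names are assumptions), as in this mongo-shell sketch:

// Record the like, then keep an aggregated count on the parent post
db.likes.insert({ postId: postId, userId: userId, createdAt: new Date() });
db.posts.update({ _id: postId }, { $inc: { likeCount: 1 } });

// Showing the count is now a single document read, not a count over millions of likes
db.posts.findOne({ _id: postId }).likeCount;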
If you require to actually query the likes to show some who have liked it then you limit those likes. Google+ and other networks tend to limit the amount of likes they show to about 1,000.
I was advised to use ensureIndex,
Adding indexes to a database does help with actually searching for documents.
But when do I have to create this index? Is it enough to create it once
Yes, MongoDB will manage the index itself. You only need to ensure it once.
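So a one-time call like the following, run when you set up the database (from the mongo shell or a driver), is enough; newer MongoDB versions call it createIndex:

// Index the post reference once; MongoDB keeps it up to date from then on
db.likes.ensureIndex({ postId: 1 });
// equivalent on recent versions:
db.likes.createIndex({ postId: 1 });

// Queries on the reference now use the index
db.likes.find({ postId: somePostId }).count();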
