How to deal with user permissions in single page application - javascript

I'm working on a single-page enterprise application with pretty complex logic around user permissions. A huge part of it runs entirely on the client, communicating with the backend server via AJAX and sending JSON back and forth. The tricky part is that I need to implement the permission mechanism on a per-entity basis, and I don't know the right way to do it.
To explain myself clearly, here is the example code. I have two entity classes on the backend, User and Node:
class User {
    Long id;
}

class Node {
    Long id;
    String name;
    Status status;
    Node parent;
    List<User> admins;
}

enum Status {
    STATUS_1, STATUS_2
}
I send the JSON of the parent node to the server:
{id: 1, name: "Node name 1", status: 'STATUS_1'}
And receive JSON with a bunch of child nodes:
[
    {id: 11, name: "Node name 1.1", status: 'STATUS_1'},
    {id: 12, name: "Node name 1.2", status: 'STATUS_1'}
]
On the client they are displayed in a tree-like structure.
Now the tricky part:
A simple user who works with the application can see the tree but can't change anything.
A user can change a node's name if they are among the admins of that node or of any of its parent nodes.
Admins can also change the status of a node from STATUS_1 to STATUS_2, but only if all child nodes have status STATUS_2.
There is a list of super administrators who can do whatever they want: change the properties of any node and change statuses at will.
So somehow, during rendering of the tree on the client, I need to know what the user can or cannot do with each node on the page. I can't just assign the user a role within the whole application, because the user's rights vary from one node to another. I also can't see the whole picture on the client side, because child nodes may not be loaded yet. How can I manage user permissions in a situation like this? What's the proper way or pattern to use?
Should I attach some role object to each node, or maybe a bunch of flags representing what the user can or cannot do, like this:
{
    id: 12,
    name: "Node name 1.2",
    status: "STATUS_1",
    canChangeName: true,
    canChangeStatus: false
}
That looks pretty silly to me.

I usually solve complex (and not so complex) permission-based tasks in an application using ACL classes.
These are simple, lightweight classes that take the model whose permissions are being checked and a user object into their constructor. They have a bunch of methods named canXXXX(), which can optionally take extra parameters where needed.
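For illustration, here is a minimal sketch of such an ACL class in JavaScript, based on your rules; the isSuperAdmin flag and the children array are assumptions, not part of your model:

class NodeAcl {
    constructor(node, user) {
        this.node = node; // the node whose permissions are being checked
        this.user = user; // the current user
    }

    // True if the user is among the admins of the node or any loaded parent.
    isAdmin(node = this.node) {
        if (!node) return false;
        if ((node.admins || []).some(a => a.id === this.user.id)) return true;
        return this.isAdmin(node.parent);
    }

    canChangeName() {
        return this.user.isSuperAdmin || this.isAdmin();
    }

    canChangeStatus() {
        if (this.user.isSuperAdmin) return true;
        // Only decidable when all children are loaded; otherwise defer to the server.
        const children = this.node.children || [];
        return this.isAdmin() && children.every(c => c.status === 'STATUS_2');
    }
}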
If you have the same model classes on the front end and the back end, you might even be able to reuse the ACLs in both places.
Can you use this approach?

Related

How best can I retrieve all comments for a given chat room in Firebase on the web?

I have a real-time Firebase app with chat rooms and comments. A comment belongs to a single chat room, and a chat room can have many comments. I'd like to retrieve just the comments for a given room, but right now I'm only able to get all of them.
Every time a comment is saved, I also save its id to the room it belongs to. That way, every room has a list of its comment ids, and I can retrieve a chat room's list of child comment ids from chatRooms/${id}/commentIds.
// data structure is basically like this:
chatRooms: {
    ROOMID123: {
        title: "Room A",
        commentIds: {
            COMMENTIDABC: true,
            COMMENTIDXYZ: true
        }
    }
},
comments: {
    COMMENTIDABC: {
        message: "some message",
        parentRoomId: ROOMID123
    },
    COMMENTIDXYZ: {
        message: "Another message",
        parentRoomId: ROOMID123
    }
}
I can get the comment ids for a given room, based on the room's id, like this:
firebase.database().ref(`chatRooms/${chatRoomId}/commentIds`).on('value',
    snapshot => {
        const commentsObject = snapshot.val();
        const commentIdsList = Object.keys(commentsObject);
    });
Would it be better for me to
a) use that list of commentIds to retrieve only the comments for a given room? If so, what query should I use?
b) use the chatRoom's id to retrieve every comment with a matching parentRoomId? If so, I don't know how to do this despite reading through the docs.
Thank you!
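(For reference, option (b) corresponds to a child-ordered query. A sketch, assuming an ".indexOn": ["parentRoomId"] rule under /comments in the security rules so the query stays efficient:)

// Option (b): retrieve every comment whose parentRoomId matches the room.
firebase.database().ref('comments')
    .orderByChild('parentRoomId')
    .equalTo(chatRoomId)
    .on('value', snapshot => {
        const commentsForRoom = snapshot.val() || {};
        console.log(Object.values(commentsForRoom));
    });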
I'd propose a third option: store the comments for each chat room in a separate parent node. So something like:
commentsPerRoom: {
    ROOMID123: {
        COMMENTIDABC: {
            message: "some message"
        },
        COMMENTIDXYZ: {
            message: "Another message"
        }
    }
}
With the above structure you can retrieve the comments for a specific room with a single direct lookup:
firebase.database().ref(`commentsPerRoom/${chatRoomId}`).on('value',
    snapshot => {
        const commentsForRoom = snapshot.val();
    });
Reasons I'd use this data structure over your current one:
Storing the comments as a single list means you'd have to query that list. And while the Firebase Realtime Database scales quite well, querying for data always has scalability limits. That's why the above structure allows a direct lookup instead of requiring a query.
Loading the individual comments through the comment IDs is definitely also possible, and not nearly as slow as some developers think, because Firebase pipelines the requests over a single connection. But it seems unneeded here: each comment already has a strong 1:n association with the room it belongs to. Since each comment belongs to exactly one room, you might as well model that relationship in how you structure the data and save yourself all the lookups.
Retrieving the comments for a specific room is by far the most common use case, and this data structure serves that use case most efficiently.
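As a follow-on, a write under this structure can stay atomic by using a multi-path update. A sketch (keeping the commentIds index on the room is optional):

// Save a comment under its room and index it on the room in one atomic write.
function addComment(chatRoomId, message) {
    const commentId = firebase.database().ref('commentsPerRoom').push().key;
    const updates = {};
    updates[`commentsPerRoom/${chatRoomId}/${commentId}`] = { message };
    updates[`chatRooms/${chatRoomId}/commentIds/${commentId}`] = true;
    return firebase.database().ref().update(updates);
}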

Write an object containing an array of objects to a mongo database in Meteor

In my user collection, I have an object that contains an array of contacts.
The object definition is below.
How can this entire object, with the full array of contacts, be written to the user database in Meteor from the server, ideally in a single command?
I have spent considerable time reading the mongo docs and meteor docs, but can't get this to work.
I have also tried a large number of different commands and approaches using both the whole object and iterating through the component parts to try to achieve this, unsuccessfully. Here is an (unsuccessful) example that attempts to write the entire contacts object using $set:
Meteor.users.update({ _id: this.userId }, {$set: { 'Contacts': contacts}});
Thank you.
Object definition (this is a field within the user collection):
"Contacts" : {
"contactInfo" : [
{
"phoneMobile" : "1234567890",
"lastName" : "Johnny"
"firstName" : "Appleseed"
}
]
}
This update should absolutely work. What I suspect is happening is that the Contacts data isn't being published back to the client, because Meteor doesn't automatically publish every key in the current user document. So your update is working and saving data to Mongo, but you're not seeing it back on the client. You can check this by running meteor mongo on the command line and inspecting the user document in question.
Try:
server:
Meteor.publish('me', function() {
    if (this.userId) {
        return Meteor.users.find(this.userId, { fields: { profile: 1, Contacts: 1 } });
    }
    this.ready();
});
client:
Meteor.subscribe('me');
The command above is correct. The issue was schema verification: Simple Schema was defeating the ability to write to the database while it ran 'in the background'. It doesn't produce an error; it just fails to produce the expected outcome.
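For anyone hitting the same wall, here is a sketch of how such a field can be declared so Simple Schema stops silently rejecting the write (assuming the aldeed:simple-schema and aldeed:collection2 packages; the field name follows the question):

const Schema = {};
Schema.User = new SimpleSchema({
    // ...other user fields...
    Contacts: {
        type: Object,
        optional: true,
        blackbox: true // accept arbitrary nested keys without validating them
    }
});
Meteor.users.attachSchema(Schema.User);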

Synchronize Data across multiple occasionally-connected-clients using EventSourcing (NodeJS, MongoDB, JSON)

I'm facing a problem implementing data-synchronization between a server and multiple clients.
I read about Event Sourcing and I would like to use it to accomplish the syncing-part.
I know that this is not a technical question, more of a conceptional one.
I would just send all events live to the server, but the clients are designed to be used offline from time to time.
This is the basic concept:
The server stores all events that every client should know about; it does not replay those events to serve data, because the main purpose is to sync the events between the clients, enabling them to replay all events locally.
The clients each have their own JSON store, also keeping all events and rebuilding all the different collections from the stored/synced events.
Since clients can modify data offline, it is not that important to have consistent syncing cycles. With this in mind, the server should handle conflicts when merging the different events, and ask the specific user in case of a conflict.
So the main problem for me is to determine the diff between the client and the server, to avoid sending all events to the server. I'm also having trouble with the order of the synchronization process: push changes first, or pull changes first?
What I've currently built is a default MongoDB implementation on the server side, which isolates all documents of a specific user group in all my queries (currently only handling authentication and server-side database work).
On the client, I've built a wrapper around a NeDB store, enabling me to intercept all query operations to create and manage events per query, while keeping the default query behaviour intact. I've also compensated for the different ID systems of NeDB and MongoDB by implementing custom ids, generated by the clients and kept as part of the document data, so that recreating a database won't mess up the IDs (when syncing, these IDs should be consistent across all clients).
The event format will look something like this:
{
    type: 'create/update/remove',
    collection: 'CollectionIdentifier',
    target: ?ID, // the global custom ID of the document updated
    data: {}, // the inserted/updated data
    timestamp: '',
    creator: // some way to identify the author of the change
}
To save some memory on the clients, I will create snapshots every certain number of events, so that fully replaying all events becomes more efficient; a sketch of what that could look like follows.
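A sketch of replaying from the latest snapshot, using the event format above (the snapshot shape and applyEvent helper are assumptions for illustration):

// Apply one event to an in-memory state of collections keyed by custom id.
function applyEvent(state, e) {
    const coll = state[e.collection] || (state[e.collection] = {});
    if (e.type === 'remove') delete coll[e.target];
    else coll[e.target] = Object.assign({}, coll[e.target], e.data);
    return state;
}

// Rebuild collections from the newest snapshot plus only the events after it.
function rebuild(latestSnapshot, events) {
    const snap = latestSnapshot || { state: {}, lastTimestamp: 0 };
    return events
        .filter(e => e.timestamp > snap.lastTimestamp)
        .reduce(applyEvent, snap.state);
}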
So, to narrow down the problem: I'm able to replay events on the client side, and I'm able to create and maintain the events on both the client and the server side. Merging the events on the server side should also not be a problem. Replicating a whole database with existing tools is not an option either, because I'm only syncing certain parts of the database (not even entire collections, as documents are assigned to different groups within which they should sync).
But what I am having trouble with is:
The process of determining what events to send from the client when syncing (Avoid sending duplicate events, or even all events)
Determining what events to send back to the client (Avoid sending duplicate events, or even all events)
The right order of syncing the events (Push/Pull changes)
Another question I would like to ask is whether storing the updates directly on the documents, in a revision-like style, would be more efficient?
If my question is unclear, a duplicate (I found some questions, but they didn't help me in my scenario), or something is missing, please leave a comment; I will maintain it as best I can to keep it simple, as I've just written down everything that could help you understand the concept.
Thanks in advance!
This is a very complex subject, but I'll attempt some form of answer.
My first reflex upon seeing your diagram is to think of how distributed databases replicate data between themselves and recover when a node goes down. This is most often accomplished via gossiping.
Gossip rounds make sure that data stays in sync. Time-stamped revisions are kept on both ends and merged on demand, say when a node reconnects, or simply at a given interval (publishing bulk updates via a socket or the like).
Database engines like Cassandra or Scylla use 3 messages per merge round.
Demonstration:
Data in Node A
{ id: 1, timestamp: 10, data: { foo: '84' } }
{ id: 2, timestamp: 12, data: { foo: '23' } }
{ id: 3, timestamp: 12, data: { foo: '22' } }
Data in Node B
{ id: 1, timestamp: 11, data: { foo: '50' } }
{ id: 2, timestamp: 11, data: { foo: '31' } }
{ id: 3, timestamp: 8, data: { foo: '32' } }
Step 1: SYN
Node A lists the ids and last-upsert timestamps of all its documents (feel free to change the structure of these data packets; here I'm using verbose JSON to better illustrate the process):
Node A -> Node B
[ { id: 1, timestamp: 10 }, { id: 2, timestamp: 12 }, { id: 3, timestamp: 12 } ]
Step 2: ACK
Upon receiving this packet, Node B compares the received timestamps with its own. For each document, if its local copy is older, it places just the id and timestamp in the ACK payload (effectively asking for the newer data); if its copy is newer, it places it along with its data. If the timestamps are the same, it does nothing, obviously.
Node B -> Node A
[ { id: 1, timestamp: 11, data: { foo: '50' } }, { id: 2, timestamp: 11 }, { id: 3, timestamp: 8 } ]
Step 3: ACK2
Node A updates its documents where ACK data was provided, then sends back the latest data to Node B for those where no ACK data was provided.
Node A -> Node B
[ { id: 2, timestamp: 12, data: { foo: '23' } }, { id: 3, timestamp: 12, data: { foo: '22' } } ]
That way, both nodes now have the latest data, merged both ways (in case the client did offline work), without having to send all your documents.
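A sketch of Node B's ACK step under the structures above (localDocs is assumed to be a Map keyed by id):

function buildAck(localDocs, synPayload) {
    const ack = [];
    for (const { id, timestamp } of synPayload) {
        const mine = localDocs.get(id);
        if (!mine) { ack.push({ id, timestamp: 0 }); continue; } // unknown: ask for it
        if (mine.timestamp === timestamp) continue;              // already in sync
        ack.push(mine.timestamp > timestamp
            ? { id, timestamp: mine.timestamp, data: mine.data } // B newer: send data
            : { id, timestamp: mine.timestamp });                // B older: ask for data
    }
    return ack;
}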
In your case, your source of truth is your server, but you could easily implement peer-to-peer gossiping between your clients with WebRTC, for example.
Hope this helps in some way.
Further reading: Cassandra training video; Scylla explanation.
I think the best solution to avoid all the event-order and duplication issues is to use the pull method. That way every client maintains its last imported event state (with a tracker, for example) and asks the server for the events generated after that last one.
An interesting problem will be detecting the breaking of business invariants. For that you could also store the log of applied commands on the client, and in case of a conflict (events were generated by other clients in the meantime) you could retry the execution of commands from the command log. You need to do that because some commands will not succeed after re-execution; for example, a client saves a document after another user deleted that document at the same time.
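A minimal sketch of that pull-then-push cycle (the endpoint, the seq field, and the store helpers are assumptions for illustration):

async function sync(store) {
    // Pull: fetch only the events generated after the last imported one.
    const lastSeq = (await store.getMeta('lastSyncedSeq')) || 0;
    const pulled = await fetch(`/api/events?after=${lastSeq}`).then(r => r.json());
    for (const e of pulled) await store.applyEvent(e);
    if (pulled.length) {
        await store.setMeta('lastSyncedSeq', pulled[pulled.length - 1].seq);
    }

    // Push: send local events the server has not seen yet.
    const unpushed = await store.getUnpushedEvents();
    await fetch('/api/events', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(unpushed)
    });
    await store.markPushed(unpushed);
}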

How to handle model / API response translation in AngularJS?

I have an Angular application that requests data from a JSON API. A sample API response might be:
{
    id: 1,
    name: 'JJ',
    houseId: 2
}
In my Angular application I will have models representing a User, which also holds a reference to a House object:
{
    id: 1,
    firstName: 'JJ',
    surname: '',
    house: {
        id: 2,
        address: 'XXX'
    }
}
The application model and the API response differ in that the API response has one field for the name while my application model has two. Is there an 'Angular' way to transform an API call response object into my application model, so that I am always dealing with consistent objects in my controllers/services?
Related to this, the API responds with the database id of the house object associated with the user, not with the full house object embedded in the JSON. Is there a way to set my object up to resolve this automatically when needed?
As an example, I would like to display this user with his address. If the object were fully resolved I could use user.house.address; with the plain JSON response object, however, this would be undefined. Instead of having to explicitly resolve the house object through the house API using the houseId, I would like this to happen 'behind the scenes' by stating up front how such an id should be resolved when the object is accessed.
Or am I approaching this the wrong way: should the API response dictate the data structure of my application, with explicit lookups via object ids being the preferred way?
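For what it's worth, one common shape for this is a service that owns the mapping in one place and resolves the house reference lazily. A sketch, not a built-in Angular mechanism; the module name and endpoints are assumptions:

var app = angular.module('myApp', []);

app.factory('UserService', ['$http', '$q', function($http, $q) {
    // Map the raw API shape to the application model in one place.
    function fromApi(raw) {
        return { id: raw.id, firstName: raw.name, surname: '', houseId: raw.houseId, house: null };
    }

    return {
        get: function(id) {
            return $http.get('/api/users/' + id).then(function(res) {
                return fromApi(res.data);
            });
        },
        // Resolve the referenced house only when a view actually needs it.
        resolveHouse: function(user) {
            if (user.house) return $q.when(user.house);
            return $http.get('/api/houses/' + user.houseId).then(function(res) {
                user.house = res.data;
                return user.house;
            });
        }
    };
}]);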

Use JS to execute MySQL queries and the security issues it involves

I've been searching around the internet for a way to define a query in JavaScript and pass it to PHP; PHP then sets up a MySQL connection, executes the query, and returns the results JSON-encoded.
However, my concern is the security of this method, since users could tamper with the queries and do things you don't want them to do, or request data you don't want them to see.
Question
In an application/plugin like this, what kind of security measures would you suggest to prevent users from requesting information I don't want them to?
Edit
The end result of my plugin will be something like
var data = Querier({
    table: "mytable",
    columns: ["column1", "column2", "column3"],
    where: "column2='blablabla'",
    limit: "10"
});
I'm going to let that function make an AJAX request and execute a query in PHP using the above data. I would like to know what security risks this throws up and how to prevent them.
It's unclear from your question whether you're allowing users to type queries that will be run against your database, or whether it's your code running in the browser that builds them (i.e., not the user).
If it's the user: You'd have to really trust them, since they can (and probably will) destroy your database.
If it's your code running in the browser that's creating them: Don't do that. Instead, have client-side code send data to the server, and formulate the queries on the server using full precautions to prevent SQL Injection (parameterized queries, etc.).
Re your update:
I can see at least a couple issues:
Here's a risk right here:
where: "column2='blablabla'"
Now, suppose I decide to get my hands on that before it gets sent to the server and change it to:
where: "column2=');DROP TABLE Stuff; --"
You can't send a complete WHERE clause to the server, because you can't trust it. This is the point of parameterized queries: instead, specify the columns by name, and on the PHP side make sure you handle parameter values correctly, with bound parameters.
var data = Querier({
    table: "mytable",
    columns: ["column1", "column2", "column3"],
    where: {
        column2: {
            op: '=',
            value: 'blablabla'
        }
    },
    limit: "10"
});
Now you can build your query without blindly trusting the text from the client; you'll need to do thorough validation of the column names, operators, etc., as sketched below.
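For example, a sketch of that validation, shown in JavaScript for brevity (the same checks apply on the PHP side; table and column names are illustrative):

// Whitelist of queryable tables/columns; anything else is rejected outright.
const ALLOWED_COLUMNS = { mytable: ['column1', 'column2', 'column3'] };
const ALLOWED_OPS = ['=', '<', '>', '<=', '>=', 'like'];

function validateQuery(q) {
    const cols = ALLOWED_COLUMNS[q.table];
    if (!cols) throw new Error('unknown table');
    q.columns.forEach(c => {
        if (!cols.includes(c)) throw new Error('unknown column');
    });
    Object.entries(q.where || {}).forEach(([col, cond]) => {
        if (!cols.includes(col)) throw new Error('unknown column in where');
        if (!ALLOWED_OPS.includes(cond.op)) throw new Error('unknown operator');
        // cond.value is never interpolated into SQL; it becomes a bound parameter.
    });
    if (!/^\d+$/.test(String(q.limit))) throw new Error('bad limit');
    return q;
}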
Exposing information about your schema to the entire world is giving up information for free. Security is an onion, and one of the outer layers of that onion is obscurity. It's not remotely sufficient unto itself, but it's a starting point. So don't let your client code (and therefore anyone reading it) know your table and column names. Consider using server-side name mapping, etc.
Depending on how you go about this, you might end up with a hole bigger than the one made in this economy, or with no hole at all.
If you are going to write the query on the client side and send it to PHP, I would create a MySQL user with only SELECT, INSERT, UPDATE, and DELETE privileges, and no permission to access any other database.
Ignore this if you use SQLite.
I advise against this approach!
If you build the query on the server side, just send the server the data you want!
I would change the code into something like this:
var link = QuerierLink('sql.php'); // filename to use for the query
var data = Querier('users', link); // locks access to only this table

data.select({
    columns: ['id', 'name', 'email'],
    where: [
        {id: {'>': 5}},
        {name: {'like': '%david%'}}
    ],
    limit: 10
});
Which, on server-side, would generate the query:
select `id`,`name`,`email` from `db`.`users` where `id`>5 and `name` like '%david%' limit 10
This would be a lot better to use.
With prepared statements, you use:
select `id`,`name`,`email` from `db`.`users` where `id`>:id and `name` like :name limit 10
Passing to PDO (a prepared statement, sketched):
$query = 'select `id`,`name`,`email` from `'.$database_name.'`.`users` '
       . 'where `id` > :id and `name` like :name limit 10';
$stmt = $PDO->prepare($query);
$stmt->execute(array(
    'id' => 5,
    'name' => '%david%'
));
$result = $stmt->fetchAll(PDO::FETCH_ASSOC);
This is the preferred way, since you have more control over what is passed.
Also, set the exact database name along with the table name, so you prevent users from accessing data in other tables/databases.
Other databases include information_schema, which exposes every single piece of information from your entire database, including the user list and restrictions.
Ignore this for SQLite.
If you are going to use MySQL/MariaDB/others, you should disable all file read/write permissions.
You really don't want anyone writing files onto your server! Especially not into any location they wish.
The risk: attackers get a new puppy to do whatever they wish with! This is a massive hole.
Solution: disable FILE privileges, or limit access to a directory where you block external access using .htaccess, via the --secure-file-priv option or the secure_file_priv system variable.
If you use SQLite, just create a .sqlite(3) file, based on a template file, for each connecting client. Then delete the file when the user closes the connection, or sweep every n minutes for files older than x.
The risk: Filling your disk with .sqlite files.
Solution: Clear the files sooner or use a ramdisk with a cron job.
I've wanted to implement something like this for a long time, and it was a good way to exercise my mind.
Maybe I'll implement it like this!
Introducing easy JavaScript data access
So you want to rapidly prototype a really cool Web 2.0 JavaScript application, but you don't want to spend all your time writing the wiring code to get to the database? Traditionally, to get data all the way from the database to the front end, you need to write a class for each table in the database with all the create, read, update, and delete (CRUD) methods. Then you need to put some marshalling code atop that to provide an access layer to the front end. Then you put JavaScript libraries on top of that to access the back end. What a pain!
This article presents an alternative method in which you use a single database class to wrap multiple database tables. A single driver script connects the front end to the back end, and another wrapper class on the front end gives you access to all the tables you need.
Example/Usage
// Sample functions to update authors
function updateAuthorsTable() {
    dbw.getAll(function(data) {
        $('#authors').html('<table id="authors"><tr><td>ID</td><td>Author</td></tr></table>');
        $(data).each(function(ind, author) {
            $('#authors tr:last').after('<tr><td>' + author.id + '</td><td>' + author.name + '</td></tr>');
        });
    });
}

$(document).ready(function() {
    dbw = new DbWrapper();
    dbw.table = 'authors';
    updateAuthorsTable();
    $('#addbutton').click(function() {
        dbw.insertObject({ name: $('#authorname').val() },
            function(data) {
                updateAuthorsTable();
            });
    });
});
I think this is exactly what you're looking for. This way you won't have to build it yourself.
The most important thing is to be careful about the rights you grant to your MySQL user for this kind of operation.
For instance, you don't want them to DROP a database, nor to execute a request such as:
LOAD DATA LOCAL INFILE '/etc/passwd' INTO TABLE test FIELDS TERMINATED BY '\n';
You have to limit the operations enabled for this MySQL user and the tables it can access.
Access to a whole database:
grant select on database_name.*
to 'user_name'@'localhost' identified by 'password';
Access to a single table:
grant select on database_name.table_name
to 'user_name'@'localhost' identified by 'password';
Then... what else? This should prevent unwanted SQL injection from updating/modifying tables or accessing other tables/databases, at least as long as SELECT on a specific table/database is the only privilege you grant to this user.
But it won't prevent a user from launching a silly, badly performing request that might eat all your CPU:
var data = Querier({
    table: "mytable, mytable9, mytable11, mytable12",
    columns: ["mytable.column1", "count(distinct mytable11.column2)",
              "SUM(mytable9.column3)"],
    where: "column8 IN(SELECT column7 FROM mytable2 " +
           "WHERE column4 IN(SELECT column5 FROM mytable3))",
    limit: "500000"
});
You have to run some checks on the data passed if you don't want your MySQL server to go down.
