In my application I receive JSON data in a POST request and store it as raw JSON in a table. I use PostgreSQL (9.5) and Node.js.
In this example, the data is an array of about 10 quiz questions answered by a user, which looks like this:
[{"QuestionId":1, "score":1, "answerList":["1"], "startTime":"2015-12-14T11:26:54.505Z", "clickNb":1, "endTime":"2015-12-14T11:26:57.226Z"},
{"QuestionId":2, "score":1, "answerList":["3", "2"], "startTime":"2015-12-14T11:27:54.505Z", "clickNb":1, "endTime":"2015-12-14T11:27:57.226Z"}]
I need to store (temporarily or permanently) several indicators computed by aggregating data from this JSON at the quiz level, as I need these indicators to perform other procedures in my database.
Until now I have been computing the indicators in JavaScript functions while handling the POST request, and inserting the values into my table alongside the raw JSON data. I'm wondering whether it would be more performant to have the calculation performed by a stored trigger function in my PostgreSQL DB (knowing that the SQL function would need to retrieve the data from inside the raw JSON).
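For reference, the kind of JavaScript aggregation I mean looks roughly like this (the indicator names are just examples):

function computeIndicators(questions) {
    var totalScore = 0, totalClicks = 0, totalTimeMs = 0;
    questions.forEach(function (q) {
        totalScore += q.score;
        totalClicks += q.clickNb;
        // time spent on the question, from the ISO timestamps
        totalTimeMs += new Date(q.endTime) - new Date(q.startTime);
    });
    return {
        totalScore: totalScore,
        totalClicks: totalClicks,
        avgTimePerQuestionMs: totalTimeMs / questions.length
    };
}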
I have read other posts on this topic, but they were written many years ago and not with Node.js in mind, so I thought people might have new insight on the pros and cons of SQL stored procedures vs server-side JavaScript functions.
Edit: I should probably have mentioned that most of my application's logic already lies in PostgreSQL stored procedures and views.
Generally, I would not use that approach, due to the risk of the triggers getting out of sync with the application code. The single responsibility principle should be the guide: the DB stores data, the code manipulates it. Unless you have a really pressing business need to break this pattern, I'd advise against it.
Do you have a migration that will recreate the triggers if you wipe the DB and start from scratch? Will you or a coworker fail to realise the triggers are there when reading the app code later, and wonder what is going on? If there is a standardised way to manage the triggers, where the configuration is stored as code with the rest of your app, then maybe it's not a problem. If not, be wary. A small performance gain may well not be worth the potential for lost developer time and shipped bugs.
I'm currently working somewhere that has gone all-in on SQL functions. We have over a thousand. I'd strongly advise against it.
Having logic split between JavaScript and SQL is a real pain when debugging issues, especially if, like me, you are much more familiar with JS.
The functions are at least all tracked in source control and get updated/created in the DB as part of the deployment process, but this means you have two places to look when trying to follow the code.
I fully agree with the other answer: single responsibility principle, DB for storage, server/app for logic.
In the design stage for an app that collects large amounts of data...
Ideally, I want it to be an offline-first app and was looking at PouchDB/CouchDB. However, the data needs to be kept for years for legal reasons, and my concern is that this will consume too much local storage over time.
My thoughts were:
Handle the sync between PouchDB and CouchDB myself, allowing me to purge inactive documents from the local store without impacting the CouchDB. This feels messy and probably a lot of work.
Build a local store using dexie.js and write the sync function entirely myself. That also looks like hard work, but maybe less, since I'm not trying to fight an existing sync function.
Search harder :)
Conceptually, I guess I'm looking for a 'DB cache': holding active JSON document versions and removing documents that have not been touched for X period. It might be that 'offline' mode is handled separately from the DB cache.
Not sure yet if this is the correct answer, but here is my current approach:
Set up a filter on CouchDB to screen out old documents (let's say we have a 'date_modified' field in the doc, and we filter out any docs with a date_modified older than one month).
Have a local routine on the client that deletes documents older than one month from the local PouchDB (actually using the remove() method against the local PouchDB, not updating them with _deleted:true); from https://pouchdb.com/2015/04/05/filtered-replication.html it appears removed documents don't sync (see the sketch after this list).
Docs updated in the PouchDB will replicate normally.
There might be a race condition here for replication; we'll see.
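A sketch of what I have in mind, following that filtered-replication article (the database names, the design doc, and the exact cutoff are illustrative):

// server side: a filter saved in a design doc on the CouchDB, e.g.
// _design/app with filters.recent =
//   function(doc, req) { return doc.date_modified >= req.query.cutoff; }

var local = new PouchDB('mydata');
var remote = new PouchDB('https://myserver.example/mydata');
var cutoff = new Date(Date.now() - 30 * 24 * 3600 * 1000).toISOString();

// pull only recent docs from the CouchDB
local.replicate.from(remote, {
    filter: 'app/recent',
    query_params: { cutoff: cutoff }
});

// local purge routine: remove() old docs from the local store;
// per the article above, removed docs should not sync back
local.allDocs({ include_docs: true }).then(function (result) {
    result.rows.forEach(function (row) {
        if (row.doc.date_modified && row.doc.date_modified < cutoff) {
            local.remove(row.doc);
        }
    });
});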
I have some doubts about the best approaches and performance with Firebase 3.x.
My question might be stupid, but I am relatively new to this and need to understand it better.
If I have, for example, a Firebase object with thousands or even millions of entries, say user comments, and I do a simple:
$scope.user_comments = $firebaseArray(ref.child('user_comments'));
What actually happens? Is the entire data set transferred to my browser, or is this more like an open DB connection, where only the data I ask for later (for example, the comments of a single user_id) gets transferred?
What I mean is: is this more like MySQL, where I connect to the DB but no data is sent to my browser until I select it, or does the simple command
$scope.user_comments = $firebaseArray(ref.child('user_comments'));
already transfer the entire object into my browser's local memory?
Sorry if this is somehow a stupid question, but I wonder how best to handle big object structures and how to distribute them later, to make sure I don't constantly transfer unneeded data.
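To make it concrete: I would hope that something like the following only transfers one user's comments instead of the whole node (a sketch based on the Firebase query API, untested):

// query a subset instead of binding the entire user_comments node
var query = ref.child('user_comments')
    .orderByChild('user_id')
    .equalTo(someUserId)
    .limitToLast(50);
$scope.user_comments = $firebaseArray(query);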
Thanks in advance for any input on this.
I recently followed a tutorial to create a Node.js server connecting to an Orchestrate.io database. The problem is that I now want to point the server at a MongoDB database hosted on MongoLab. Currently I am declaring a variable:
var db = require('orchestrate')(APIKEY);
which allows me to retrieve data using something like:
db.get('collection', key)
  .then(function (result) {
    console.log(result.body);
  });
My question is: is there any way I can switch the value of 'db' to point at a MongoLab database without changing the structure of the get request?
I work at Orchestrate and we do not believe in data lock-in. I hope you'll reconsider using our service, but here's some advice if you choose to leave...
It sounds like your code is fairly minimal, so you may be best off recreating your Node server with another tutorial specific to Mongo.
That said, if you are using simple key-value storage, it should be as easy as rewriting the db.get Orchestrate lines to be db.find functions from MongoDB. If you've loaded a lot of data you could export it from Orchestrate, then import into Mongo (either manually, or using another tool).
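For example, a rough equivalent of that key-value get with the official MongoDB Node.js driver might look like this (the connection string is a placeholder for the one MongoLab gives you, and error handling is minimal):

var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/mydb', function (err, db) {
    if (err) throw err;
    // look the document up under the same key you used in Orchestrate
    db.collection('collection').findOne({ _id: key }, function (err, doc) {
        console.log(doc);
        db.close();
    });
});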
If you're using some advanced, built-in Orchestrate features, such as full-text search, relation graphing, time-series data, and geographic look-ups, it may take some more effort (and MongoDB experience) to switch. If you'd like these features in a highly scalable database-as-a-service that you don't have to maintain, you know where to find us.
I have a grid (an employee grid) which has, say, 1000-2000 rows.
I display employee name and department in the grid.
When I get the data for the grid, I also get other details for each employee (date of birth, location, role, etc.).
The user has the option to edit the employee details. When he clicks edit, I need to display those other details in a popup. Since I have all the data stored in JavaScript, I search for the particular id and display the details, so the code will be like:
function getUserDetails(employeeId) {
    // employeeInformation holds all the employee details; it was
    // populated while getting the data for the grid
    for (var i = 0; i < employeeInformation.length; i++) {
        if (employeeInformation[i].employeeID == employeeId) {
            // display employee details
        }
    }
}
The second solution would be to pass the employeeId to the database and get all the information for that employee. The code will be like:
function getUserDetails(employeeId) {
    // make an AJAX call to the controller, which calls a procedure
    // in the database to get the employee details
    $.ajax({
        url: '/employee/details/' + employeeId, // URL is illustrative
        success: function (employee) {
            // display employee details
        }
    });
}
So, which solution do you think will be optimal when I am handling 1000-2000 records?
I don't want to make the JavaScript heavy by storing a lot of data in the page.
UPDATED:
One of my friends came up with a simple solution:
I am storing 4 columns for 500 rows (on average), so I don't think there should be any noticeable slowness in the webpage.
While loading the rows into the grid, I add a data-rowId attribute to each edit link, so that the data is easy to retrieve.
Say I store all the employee information in a variable called employeeInfo.
When someone clicks the edit link, $(this).attr('data-rowId') gives the rowId, and employeeInfo[$(this).attr('data-rowId')] gives all the information about the employee.
Instead of storing the employeeId and looping over the employee array to find the matching entry, the rowId does the trick as a direct index. It is very simple, but it did not strike me.
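In code, the lookup becomes a direct index instead of a loop, something like this (the selector is illustrative):

// each row's edit link carries its index into employeeInfo:
//   <a class="edit-link" data-rowId="42">Edit</a>
$('.edit-link').on('click', function () {
    var employee = employeeInfo[$(this).attr('data-rowId')];
    // display employee details in the popup
});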
I would suggest you make an AJAX call to the controller, for two main reasons:
It is not advisable to handle database activity in JavaScript, due to security issues.
JavaScript runs on the client's machine, so it should carry the least load and computation.
JavaScript should be as light as possible, so I suggest you do the work in the database itself.
Don't count on JavaScript performance, because it depends heavily on the computer it is running on. I suggest you store and search on the server side rather than loading a heavy payload of data into the browser, which is restricted to the end user's resources.
Running long loops in JavaScript can also lead to an unresponsive and irritating UI. Using Ajax calls to fetch the needed data is good practice.
Are you using HTML5? Will your users typically have relatively fast multicore computers? If so, a web-worker (http://www.w3schools.com/html/html5_webworkers.asp) might be a way to offload the search to the client while maintaining UI responsiveness.
Note, I've never used a Worker, so this advice may be way off base, but they certainly look interesting for something like this.
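If it helps, the basic shape would be something like this (untested; the file name, message format, and display callback are made up):

// main page: hand the data to a worker and let it search off the UI thread
var worker = new Worker('employee-search.js');
worker.onmessage = function (e) {
    // e.data is the matching employee record, or null
    showEmployeeDetails(e.data);
};
worker.postMessage({ employees: employeeInformation, employeeId: employeeId });

// employee-search.js: the same linear search, off the main thread
onmessage = function (e) {
    var list = e.data.employees;
    for (var i = 0; i < list.length; i++) {
        if (list[i].employeeID == e.data.employeeId) {
            postMessage(list[i]);
            return;
        }
    }
    postMessage(null);
};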
In terms of separation of concerns, and recommended best approach, you should be handling that domain-level data retrieval on your server, and relying on the client-side for processing and displaying only the records with which it is concerned.
By populating your client with several thousand records for it to then parse, sort, search, etc., you not only take a huge performance hit and diminish user experience, but you also create many potential security risks. Obviously this also depends on the nature of the data in the application, but for something such as employee records, you probably don't want to be storing that on the client-side. Anyone using the application will then have access to all of that.
The more pragmatic approach to this problem is to have your controller populate the client with only the specific data which pertains to it, eliminating the need for searching through many records. You can also retrieve a single object by making an ajax query to your server to retrieve the data. This has the dual benefit of guaranteeing that you're displaying the current state of the DB, as well as being far more optimized than anything you could ever hope to write in JS.
I am playing around with CouchDB to test if it is "possible" [1] to store scientific data (simulated and experimental raw data + metadata). A big pro is the schema-less approach of CouchDB: we have to be very flexible with the metadata, as the set of parameters changes very often.
So far I have some code to feed raw data, plots (both as attachments), and hierarchical metadata (as JSON) into CouchDB documents, and I have written some prototype JavaScript for filtering and display. But the filtering is done on the client side (a.k.a. the browser): the map function simply returns everything.
How could I change (or push a second) map function in a specific _design document using simple browser-side JS?
I do not think that a temporary view would yield any performance gain...
Thanks for your time and answers.
[1]: of course it is possible, but is it also useful? feasible? reasonable?
[added]
Ah, jquery.couch.js (version 0.9.0) provides a saveDoc() function, which can update the _design document with the new map function.
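Something like this, I suppose (the database and view names are placeholders):

var db = $.couch.db("scidata");
db.openDoc("_design/filters", {
    success: function (ddoc) {
        ddoc.views = ddoc.views || {};
        // add (or replace) a second map function
        ddoc.views.by_parameter = {
            map: "function(doc) { if (doc.metadata) emit(doc.metadata.parameter, null); }"
        };
        db.saveDoc(ddoc, {
            success: function () {
                // the view can now be queried via db.view('filters/by_parameter')
            }
        });
    }
});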
But I also tried out the query function, which uses a temporary view. Okay, "do not use this in the real product, only during development"... But scientific research is steady development, right?
Temporary views get cached, as I noticed, and this works well for ~1000 documents per DB. A second plus: all users (think 1 to 3, so full user management would be quite an overkill) can work with their own temporary view.
Never ever use temporary views. They are really only there for dev and debugging purposes. For more information, see http://wiki.apache.org/couchdb/Introduction_to_CouchDB_views (specifically the bold "NOTE").
And yes, because design documents are really just documents with special powers, you can run your GET/POST/PUT/DELETE methods on them. However, you will usually need admin privileges to do this. So, if you are allowing a piece of client-side software to do that, you are making your entire database public for read/write access; this may be fine for your application, but it is important to remember.
For example, if you restrict access to your database but put the username and password in client-side JavaScript, then anyone can see that username and password.
Cheers.
I've written some helper functions for jquery.couch and design docs; take a look at:
https://github.com/grischaandreew/jquery.couch.js