Object inside Array as External File for Express - javascript

I searched around and found pieces of what I want to do and how to do it, but I have a feeling that combining them all is a process I shouldn't follow and that I should write it a different way.
I currently have a small application that uses an MSSQL library in Node to query a SQL Server with a SQL command, get the results, and store them as an object. I then use Express and some JavaScript to decipher or modify that object before an AJAX call requests it and it is returned as a proper JSON object.
SQLDB -> NodeJs -> API -> localhost
My problem now is that I want to repurpose this and expand it. Storing the SQLDB responses as objects inside an array is becoming a huge memory problem. Considering some of these requests can be hundreds of thousands of rows with hundreds of columns, the Node process begins eating up outrageous amounts of RAM.
I then thought maybe I could just take that object when it comes through in the result and write it to a file. Then, when AJAX calls come in to Express, it can read from the file and respond with res.json from there.
Will this work if, say, 50-200 people request data at the same time? Or should I look for another method?
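For what it's worth, here is a minimal sketch of that "write once, read per request" idea, streaming the file instead of parsing it, assuming the query result has already been written to disk (the file path and route below are placeholders):

// Hypothetical sketch: stream a pre-written JSON file to each request instead of
// holding the parsed result set in the Node process's memory.
const express = require('express');
const fs = require('fs');

const app = express();

app.get('/api/report', (req, res) => {
  res.type('application/json');
  // Each concurrent request gets its own read stream, so only small chunks
  // of the file are in memory at any time.
  fs.createReadStream('/data/report.json')
    .on('error', () => res.destroy()) // simplistic error handling for the sketch
    .pipe(res);
});

app.listen(3000);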

Related

How to handle aws dynamo db 400 KB record limit without changing my codebase

In AWS DynamoDB we cannot store more than 400 KB of data in a single record [Reference].
Based on suggestions online I can either compress the data before storing it or upload part of it to an AWS S3 bucket, which I am fine with.
But my application (a JavaScript/Express server plus many JS lambdas/microservices) is too large, and adding the above logic would require a heavy rewrite and extensive testing. Currently there is an immediate requirement from a big client that demands >400 KB storage in the DB, so is there any alternative way to solve the problem that doesn't make me change my existing code for fetching records from the DB?
I was thinking more in these lines:
My backend makes a DynamoDB call to fetch the record as it does now (we use a mix of vogels and aws-sdk to make DB calls) -> the call is intercepted by a Lambda (or something else) which handles the necessary compression/decompression/S3 work with DynamoDB and returns the data to the backend.
Is the above approach possible, and if yes, how can I go about implementing it? Or if you have a better way, please do tell.
PS: Going forward I will definitely rewrite my codebase to take care of this; what I am asking for is an immediate stopgap solution.
Split the data into multiple items. You’ll have to change a little client code but hopefully you have a data access layer so it’s just a small change in one place. If you don’t have a DAL, from now on always have a DAL. :)
For the payload of a big item, use the regular item as a manifest that points at the segmented items. Then use BatchGetItem to fetch those segmented items.
This assumes compression alone isn’t always sufficient. If it is, do that.
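A rough sketch of that manifest/segment read path with the aws-sdk DocumentClient (the table, key, and attribute names below are made up for illustration):

// Hypothetical sketch of reading a "manifest" item and batch-getting its segments.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

async function getLargeRecord(pk) {
  // 1. Fetch the regular item, which acts as a manifest for oversized payloads.
  const { Item: manifest } = await docClient
    .get({ TableName: 'MyTable', Key: { pk } })
    .promise();

  if (!manifest || !manifest.segmentKeys) return manifest; // small item, nothing to reassemble

  // 2. Batch-get the segment items listed in the manifest.
  const { Responses } = await docClient
    .batchGet({
      RequestItems: {
        MyTable: { Keys: manifest.segmentKeys.map((sk) => ({ pk: sk })) },
      },
    })
    .promise();

  // 3. Reassemble the payload in manifest order.
  const byKey = {};
  Responses.MyTable.forEach((seg) => { byKey[seg.pk] = seg.payload; });
  return { ...manifest, payload: manifest.segmentKeys.map((sk) => byKey[sk]).join('') };
}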

Getting only one part of an HTTP GET Request

I want to use the REST API of GitHub, but I only need a small part of the info returned as JSON from the GET request. Is it possible to get exactly what I need in order to avoid the extra network usage? This will be done in JavaScript, but I guess if it's possible it would work in different frameworks.
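As far as I know the REST endpoints return the full resource, but GitHub specifically also exposes a GraphQL endpoint where the query names exactly the fields wanted; a hedged sketch (TOKEN, owner and name are placeholders):

// Hedged sketch: GitHub's GraphQL API returns only the fields named in the query.
const query = `
  query {
    repository(owner: "octocat", name: "Hello-World") {
      description
      stargazerCount
    }
  }`;

fetch('https://api.github.com/graphql', {
  method: 'POST',
  headers: {
    Authorization: `bearer ${TOKEN}`, // TOKEN is a placeholder for a personal access token
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ query }),
})
  .then((res) => res.json())
  .then(({ data }) => console.log(data.repository));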

Choosing the right pattern for filter large arrays - Rest vs Local Array

Which is the best approach for an app that aims to filter data, like 5000+ records, while keeping response speed in focus?
1. Filter local in-memory arrays
2. Query the DB through HTTP API request calls
For my app I use AngularJS, PHP and SQLite3. Right now I load all records from the SQLite DB into my table and then filter the fields by search. All works great, but when I exceed 3000 records I notice a certain slowdown. By limiting the search to two fields, I get better performance.
My doubt is whether changing the model and querying the DB would give me better performance or not.
Local array advantages
I can use the JavaScript Array map() method
low data bandwidth consumption
I can see all records in the table before filtering
I can work offline after loading the data
Local array disadvantages
performance slows down beyond 2000 records
So can you help me evaluate the advantages and disadvantages of making an HTTP API call for every filter action, keeping performance in focus?
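For reference, option 1 (filtering the local in-memory array) typically amounts to something like the following, here limited to two fields (the field names are made up):

// Hypothetical sketch of filtering locally loaded records on two fields.
function filterRecords(records, term) {
  var needle = term.toLowerCase();
  return records.filter(function (r) {
    return (
      (r.name && r.name.toLowerCase().indexOf(needle) !== -1) ||
      (r.city && r.city.toLowerCase().indexOf(needle) !== -1)
    );
  });
}

// e.g. in the controller: $scope.filtered = filterRecords($scope.records, $scope.searchText);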
I can't speak to caching in PHP, but for the AngularJS end, there's an approach you can follow:
1. When the user searches for the first time, fetch the result(s) from the DB.
2. Make 2 copies of the data: one presented to the user directly, the other stored in a local JSON with a key-value pair approach.
3. Next time the user searches for anything, look into the local JSON first for the result. If the data is present locally, there is no need for the DB query; otherwise make the DB query and repeat step 2.
The idea is not to make the user wait for every search. You cannot simply call all 5000+ records at once and store them locally, and you definitely cannot make DB queries every time, since an RDBMS with that many records will have performance issues.
So this seems best to me.
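A minimal sketch of that cache-first idea, assuming a searchApi() helper that wraps the existing $http call to the PHP/SQLite backend:

// Hypothetical sketch of the "local cache first, DB second" approach described above.
var searchCache = {}; // key: search term, value: previously fetched results

function search(term) {
  // 1. Look in the local cache first.
  if (searchCache[term]) {
    return Promise.resolve(searchCache[term]);
  }
  // 2. Otherwise query the backend and remember the result for next time (step 2 above).
  return searchApi(term).then(function (results) {
    searchCache[term] = results;
    return results;
  });
}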

Firebase performance doubts with big data

I have some doubts about the best approaches and performance with Firebase 3.x.
My question might be stupid, but I am relatively new to this and need to understand it better.
If I have, for example, a Firebase object with thousands or even millions of entries, say user comments, and I do a simple:
$scope.user_comments = $firebaseArray(ref.child('user_comments'));
What actually happens? Is the entire data set already transferred to my browser in this case, or is this more like an open DB connection where only the data I request later, for example for a single user_id, gets transferred?
What I mean is: is this more like MySQL, for example, where I connect to the DB but no data is sent back to my browser until I select a bunch of it, or does the simple command
$scope.user_comments = $firebaseArray(ref.child('user_comments'));
already transfer the entire object to my browser and local memory?
Sorry if this is somehow a stupid question, but I wonder what to do best with big object structures and how to distribute them later to make sure I don't transfer unneeded data nonstop.
Thanks in advance for any input on this.
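As far as I understand, $firebaseArray on a plain ref synchronizes the entire node down to the client, but the ref can be narrowed with a Firebase query first; a hedged sketch (the 'user_id' child and the targetUserId variable are assumptions about your data):

// Only the matching comments are downloaded and kept in sync, not the whole node.
var commentsRef = ref.child('user_comments');
$scope.user_comments = $firebaseArray(
  commentsRef.orderByChild('user_id').equalTo(targetUserId).limitToLast(100)
);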

postgresql stored procedures vs server-side javascript functions

In my application I receive JSON data in a POST request that I store as raw JSON in a table. I use PostgreSQL (9.5) and Node.js.
In this example, the data is an array of about 10 quiz questions experienced by a user, that looks like this:
[{"QuestionId":1, "score":1, "answerList":["1"], "startTime":"2015-12-14T11:26:54.505Z", "clickNb":1, "endTime":"2015-12-14T11:26:57.226Z"},
{"QuestionId":2, "score":1, "answerList":["3", "2"], "startTime":"2015-12-14T11:27:54.505Z", "clickNb":1, "endTime":"2015-12-14T11:27:57.226Z"}]
I need to store (temporarily or permanently) several indicators computed by aggregating data from this JSON at the quiz level, as I need these indicators to perform other procedures in my database.
As of now I have been computing the indicators using JavaScript functions while handling the POST request and inserting the values in my table alongside the raw JSON data. I'm wondering if it wouldn't be more performant to have the calculation performed by a stored trigger function in my PostgreSQL DB (knowing that the SQL function would need to retrieve the data from inside the raw JSON).
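For reference, a minimal sketch of the kind of request-time aggregation described above (the indicator names are hypothetical):

// Rough sketch of computing quiz-level indicators from the posted array before the INSERT.
function computeQuizIndicators(questions) {
  var totalScore = questions.reduce(function (sum, q) { return sum + q.score; }, 0);
  var totalDurationMs = questions.reduce(function (sum, q) {
    return sum + (new Date(q.endTime) - new Date(q.startTime));
  }, 0);
  return { totalScore: totalScore, totalDurationMs: totalDurationMs, questionCount: questions.length };
}

// With the example payload above: { totalScore: 2, totalDurationMs: 5442, questionCount: 2 }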
I have read other posts on this topic, but they were asked many years ago and not with Node.js, so I thought people might have some new insight into the pros and cons of using SQL stored procedures vs server-side JavaScript functions.
Edit: I should probably have mentioned that most of my application's logic already lies in PostgreSQL stored procedures and views.
Generally, I would not use that approach due to the risk of getting the triggers out of sync with the code. In general, the single responsibility principle should be the guide: DB to store data and code to manipulate it. Unless you have a really pressing business need to break this pattern, I'd advise against it.
Do you have a migration that will recreate the triggers if you wipe the DB and start from scratch? Will you or a coworker not realise they are there at a later point when reading the app code and wonder what is going on? If there is a standardised way to manage the triggers where the configuration will be stored as code with the rest of your app, then maybe not a problem. If not, be wary. A small performance gain may well not be worth the potential for lost developer time and shipping bugs.
I'm currently working somewhere that has gone all-in on SQL functions. We have over a thousand. I'd strongly advise against it.
Having logic split between JavaScript and SQL is a real pain when debugging issues, especially if, like me, you are much more familiar with JS.
The functions are at least all tracked in source control and get updated/created in the DB as part of the deployment process, but this means you have two places to look when trying to follow the code.
I fully agree with the other answer: single responsibility principle, DB for storage, server/app for logic.
