Handling large amounts of data in Angular - javascript

I'm going to build a SPA with Angular. The app will be composed of three tabs, and every tab will hold large amounts of data (basically 100-200 rows plus various text fields, drop-downs, etc.).
I'm facing objections from my colleagues to building this as a real SPA - they would like to split it into three completely independent Angular applications living in the same ASP.NET MVC website.
My question is: will holding that much data on the client side cause browser or rendering issues? Are they right in thinking this is dangerous?

I'd check how big a memory hit this is with a real data set and see whether it would tax the resources available to most mobile devices.
My first thought: 100 rows of data, with 100 columns each holding a 100-byte string, works out to roughly 1 MB of RAM per tab. Even my old phone would have enough RAM to handle that.
"Dangerous"? What could they do to manipulate that data in harmful ways?

The ~1 MB estimate is based on raw data. If the page uses a lot of widgets that scale with the data, it can quickly become a nightmare unless the data is loaded on demand.
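As a quick sanity check, that back-of-the-envelope estimate can be reproduced (and verified against a real data set) with a few lines of JavaScript; the row, column and string sizes below are the hypothetical figures from the answer, not real data:

// Rough payload estimate: 100 rows x 100 fields x ~100-byte strings per tab.
const rows = 100;
const fields = 100;
const bytesPerField = 100; // assumes ~1 byte per character

const rawBytesPerTab = rows * fields * bytesPerField;
console.log((rawBytesPerTab / 1024 / 1024).toFixed(2) + ' MB of raw data per tab'); // ~0.95 MB

// Empirical check: build a sample data set and measure its serialized size.
// (JSON size is only a proxy; live objects and rendered DOM nodes cost more.)
const sampleRow = Object.fromEntries(
  Array.from({ length: fields }, (_, i) => ['field' + i, 'x'.repeat(bytesPerField)])
);
const sampleData = Array.from({ length: rows }, () => ({ ...sampleRow }));
console.log((JSON.stringify(sampleData).length / 1024 / 1024).toFixed(2) + ' MB serialized');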

Related

Create a dynamic datatable dashboard with dynamic values in cells

My website needs to display a dashboard (made of a datatable) on a webpage, in which the data evolves really quickly (every 1 to 2 seconds). The data is received from a RabbitMQ message queue.
I have successfully achieved this with Dash. Everything works well, but I have one major problem: Dash seems to be really slow when there is a huge amount of data (approximately 300 columns and 5000 rows in my case) that is regularly updated.
As far as I can tell, most of the slowness comes from the fact that the values pushed into the datatable are passed as a whole dictionary rather than applied dynamically (no in-cell modification), so all the JavaScript loads slowly and is not that stable.
Which leads me to this question: what would be the best way to achieve the goals mentioned above?
I thought that using Django and writing a custom JavaScript datatable could do the job, but I would like to get some advice and ideas before starting to code that (really) time-consuming solution.
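For what it's worth, the "in-cell modification" idea mentioned in the question could look roughly like this in plain JavaScript; the data attributes and the (row, col, value) message shape are assumptions for illustration, not part of Dash or the original setup:

// Update only the cells whose values changed, batched per animation frame,
// instead of re-rendering the whole table on every message.
const pending = new Map(); // "row:col" -> latest value

function queueCellUpdate(row, col, value) {
  if (pending.size === 0) requestAnimationFrame(flushCellUpdates); // one flush per frame
  pending.set(row + ':' + col, value);
}

function flushCellUpdates() {
  for (const [key, value] of pending) {
    const [row, col] = key.split(':');
    const cell = document.querySelector('td[data-row="' + row + '"][data-col="' + col + '"]');
    if (cell && cell.textContent !== String(value)) cell.textContent = value; // in-cell modification
  }
  pending.clear();
}

// Messages from the RabbitMQ consumer would be forwarded to the browser
// (e.g. over a WebSocket) and fed into queueCellUpdate(row, col, value).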

How much data should we cache in memory in single page applications?

I was curious to know whether there is any limit to data caching in single-page applications using a shared service or NgRx.
Does caching too much data on the front end impact the overall performance of the web application (DOM)?
Let's say I have a very big, complex nested object which I am caching in memory.
Now assume that I want to use different subsets of that object in different modules/components of our application, and for that I may need to do a lot of mapping operations on the UI (looping, matching by ids, etc.).
I was thinking the other way around: instead of doing so many operations on the UI to extract the relevant data, why not use a simple API that takes an id parameter to fetch the relevant information, if getting the data from the backend doesn't take much time?
url = some/url/{id}
So is it worth caching a big, complex nested object if we can't use its subsets simply via property access (obj[prop]) and instead need to do a lot of calculation on the UI (looping, etc.), which is actually more time-consuming than getting the data from the REST API?
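For context, the "fetch by id" alternative described above could be as small as this sketch; the URL is the placeholder from the question and the tiny cache is optional:

// Hypothetical helper: fetch only the subset a component needs, keyed by id,
// with a small in-memory cache so repeated lookups skip the network.
const byIdCache = new Map();

async function getById(id) {
  if (byIdCache.has(id)) return byIdCache.get(id);
  const response = await fetch('/some/url/' + id); // placeholder endpoint from the question
  if (!response.ok) throw new Error('Request failed: ' + response.status);
  const data = await response.json();
  byIdCache.set(id, data);
  return data;
}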
Any help or explanation will be appreciated!
Thanks
Caching too much data in memory is not a good idea; it will affect your application's performance, and causes noticeable degradation on systems with less memory.
In principle, an in-memory cache is meant for small amounts of data. The maximum supported size is about 2 GB, and I think Chrome supports up to roughly that limit as well.
For keeping large data on the client side, never use an in-memory cache; use a client-side database/datastore instead, which uses disk space rather than memory.
There are a number of web technologies that store data on the client side, for example:
Indexed Database
Web SQL
LocalStorage
Cookies
Which one to use depends on the client application framework.
By default, the browser allows around 10% of disk space for these data stores; there is also the option to increase that size.
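If you go the client-side datastore route, a minimal IndexedDB sketch might look like this; the database and store names ("app-cache", "objects") are made up for illustration:

// Persist a large object to disk-backed storage instead of keeping it all in memory.
function openDb() {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open('app-cache', 1);
    request.onupgradeneeded = () => request.result.createObjectStore('objects');
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

async function putObject(key, value) {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const tx = db.transaction('objects', 'readwrite');
    tx.objectStore('objects').put(value, key);
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

async function getObject(key) {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const request = db.transaction('objects').objectStore('objects').get(key);
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}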

Best way to Handle large amounts of Data

Currently I am experiencing speed issues with parts of my application that load large amounts of data into a reporting table. The data in the reporting table is pulled from multiple tables by running some complex, but required, queries.
Besides optimizing the code, my question is: how do you personally handle large amounts of data that need to be displayed to the user, and what is the best practice?
Currently I am processing all the data beforehand and then generating a table via a datatable JavaScript library.
Things I know:
The user doesn't need to see all the data at once
The user needs to be able to search through all the data
The user needs to be able to filter through the data
Is the best way really just to show a loading spinner, load only a small portion of the data when the page first loads, and then retrieve the rest via Ajax?
I feel like there has to be a better way.
Thanks,
I think you're answering your own question a bit. Yes, it is better not to deliver the whole database to the user at once; this is why any RDBMS supports things like LIMIT. Your three criteria exactly match what a database system can do for you – queries over small subsets of data (i.e. pages), optionally filtered or matched against a search query.
For simplicity of the front end, you can make the first page load via AJAX as well, though having it pre-rendered does make the page feel more responsive. Having said that, there are many existing solutions to this problem; some template engines and JS front-end frameworks (e.g. Vue.js with SSR) support server-side pre-rendering.
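To make that concrete, here is a rough sketch of server-side paging with Express and a parameterized SQL query; the endpoint, table and column names are placeholders, and db.query stands in for whatever query helper (mysql2, pg, etc.) the project actually uses:

// Server: only one page of rows is ever sent to the browser.
const express = require('express');
const app = express();
// `db` is assumed to be an already-configured database client with a
// promise-based, parameterized query method.

app.get('/api/reports', async (req, res) => {
  const page = Math.max(parseInt(req.query.page, 10) || 1, 1);
  const pageSize = Math.min(parseInt(req.query.pageSize, 10) || 50, 200);
  const search = '%' + (req.query.search || '') + '%';

  const rows = await db.query(
    'SELECT * FROM reports WHERE name LIKE ? ORDER BY id LIMIT ? OFFSET ?',
    [search, pageSize, (page - 1) * pageSize]
  );
  res.json({ page, pageSize, rows });
});

// Client: fetch one page at a time as the user pages, searches or filters.
async function loadPage(page, search = '') {
  const res = await fetch('/api/reports?page=' + page + '&pageSize=50&search=' + encodeURIComponent(search));
  return res.json();
}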

Performance and memory optimization hints for a Knockout-based application

We have made a web client where you can pin markers on a map, and multiple users can comment on each of these markers. For the map we use Leaflet and (what matters more here) Knockout for the view model of these pushpins and the comments on them.
So the data model isn't too complicated: each pushpin has a lat/lon, a title, some metadata (who created it and when) and an array of comments (each with username, timestamp and text).
There are a couple of computeds in the view model (firstComment, lastComment, etc.) that Knockout has to keep up to date, so I think these are slowing it down a lot.
Every time the app starts, we download the whole set of pushpins (over 600 right now) as JSON and initialize the Knockout view model with it. The JSON is already about 1.2 MB and takes around 6 seconds to download. Initializing the Knockout view model then takes over 20 seconds. I created a splash screen with an animated GIF so the user doesn't think the app is broken, but as the number of pushpins grows this behaviour gets worse.
JavaScript also needs a lot of memory to build up the model: memory is needed for the Knockout model, and also for the markers in the Leaflet layer, which is its own model associated with my Knockout objects.
In Firefox, memory rises to about 700 MB when I open my web app; in IE9 it's over 1 GB (!). Also, while the Knockout model is being built up (mainly creating Knockout observables and pushing them into an observable array), the browsers stop responding. In IE the website (and my splash screen) isn't rendered at all until Knockout has done its job. In Firefox the website gets rendered first, then freezes until the model is built up, then comes back again. Chrome behaves the same as IE.
Another issue is that memory does not get freed until I close the browser. With every page reload another gigabyte is allocated, so I can easily fill every byte of my 8 GB laptop by refreshing my site eight times... :(
I have already thought about extending our REST API with some kind of pagination and lazy-loading as much of the data as late as possible. But first I would like to know what Knockout offers to get a model with a lot of data (in fact it isn't much at the moment, but it could become a lot) up and running without freezing the browser for several seconds.
How do you handle a large number of Knockout objects in the browser's memory, and how do you lazy-bind them?
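Not a full answer, but two Knockout-side mitigations that are often suggested, sketched here under assumptions about the pushpin shape: only wrap the fields that actually change in observables, and build the view model in small batches so the UI thread gets a chance to render in between.

// Sketch only: the pushpin/comment shape is assumed from the question, not the real API.
// Plain properties for data that never changes after load; observables only where needed.
function PushpinViewModel(data) {
  this.lat = data.lat;          // plain, not observable
  this.lon = data.lon;
  this.title = data.title;
  this.createdBy = data.createdBy;
  this.comments = ko.observableArray(data.comments);
  this.lastComment = ko.computed(() => this.comments()[this.comments().length - 1]);
}

// Build the view models in chunks so the browser can render between batches
// instead of freezing while 600+ objects are constructed in one go.
function loadPushpinsInChunks(rawPushpins, targetObservableArray, chunkSize) {
  chunkSize = chunkSize || 50;
  let index = 0;
  (function nextChunk() {
    const viewModels = rawPushpins
      .slice(index, index + chunkSize)
      .map((p) => new PushpinViewModel(p));
    ko.utils.arrayPushAll(targetObservableArray, viewModels); // one notification per chunk
    index += chunkSize;
    if (index < rawPushpins.length) setTimeout(nextChunk, 0); // yield to the UI thread
  })();
}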

Node.js, MongoDB, Express, EJS - opinions on memory caching

I'm building my first site using this stack. I'm remaking a website I had previously built with PHP + MySQL and would like to know a few things about performance... On my website, I have two kinds of content:
Blog posts (for two sections of the site) - these will tend to grow to thousands of records over time and are updated more often.
Static (sort of) data: information I keep in the database, like each site section's data (title, meta tags, header image URL, fixed HTML content, JavaScript and CSS filenames to include in that section), which is rarely updated and very small in size.
While learning the basics of Node.js, I started thinking about a way to improve the website's performance that I couldn't achieve with PHP. So, what I'm doing is:
When I run the app, all the static content is loaded into memory. I have a "model" object for each kind of content that stores the data in an array and has a method to refresh it, i.e. when the administrator updates something, I call refresh() to fetch the new data from the database into that array. This way, on every page load, instead of querying the database, the app reads directly from the object in memory.
What I would like to know is whether there should be any increase in performance from working with objects directly in memory, or whether constant queries to the database would work just as well or even better.
Any documentation supporting your answer will be much appreciated.
Thanks
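For reference, a stripped-down sketch of the in-memory model with a refresh() method described above; the collection name and the commented-out Express usage are placeholders, not the real code:

// Load rarely-changing rows into memory once, serve reads from that array,
// and call refresh() only when an administrator edits something.
// "sections" is a placeholder collection name.
class SectionModel {
  constructor(db) {
    this.db = db;
    this.items = [];
  }

  async refresh() {
    // Re-read the whole (small) collection and swap the in-memory copy.
    this.items = await this.db.collection('sections').find({}).toArray();
  }

  all() {
    return this.items; // served straight from memory on every page load
  }
}

// Usage: load once at startup, refresh again after an admin update.
// const sections = new SectionModel(db);
// await sections.refresh();
// app.post('/admin/save', async (req, res) => { /* ...update DB... */ await sections.refresh(); });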
In terms of the general database performance, MongoDB will keep your working set in memory - that's its basic method of operation.
So, as long as there is no memory contention causing the data to get swapped out, and it is not too large to fit into your physical RAM, queries to the database should be extremely fast (in the sub-millisecond range once your data set is initially paged in).
Of course, if the database is on a different host then you have network latency to think about and such, but theoretically you can treat them as the same until you have a reason to question it.
I don't think there will be any performance difference. First, this static data is probably not that big (up to 100 records?) and querying the DB for it is not a big deal. Second (and more important), most DB engines (including MongoDB) have caching systems built in (although I'm not sure how they work in detail). Third, holding query results in application memory does not scale well (for big websites) unless you use a storage engine like Redis. That's my opinion, although I'm not an expert.
