I am working on an application in AngularJS 1.5.
When I submit a form, based on the input parameters passed, a query is made to the middle layer and data is sent back to the application.
The response received would have around a million records. I need to display this data in the form of graphs on the front end, using the Angular C3 library.
Can someone please guide me on how and where this received data should be stored?
Thanks in advance.
AngularJS is a client-side MVVM framework. The data coming from your middle layer would be loaded into the viewmodel, which is data-bound to the view.
If you aren't worried about support for older browsers, then localStorage or sessionStorage will probably be your best bet. Most modern browsers support around 5MB, so you can determine whether that's enough. Then, in your Angular app, you can load the data from storage into the viewmodel or a custom Angular factory/service, either in your module's run() block or in the controllers themselves.
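For example, here's a minimal sketch of that idea; the module, factory and endpoint names are just placeholders:

```
angular.module('app').factory('reportData', ['$http', '$q', function ($http, $q) {
  var KEY = 'reportData'; // hypothetical sessionStorage key

  return {
    load: function (params) {
      var cached = sessionStorage.getItem(KEY);
      if (cached) {
        // serve the cached copy instead of hitting the API again
        return $q.resolve(JSON.parse(cached));
      }
      return $http.post('/api/report', params).then(function (response) {
        try {
          sessionStorage.setItem(KEY, JSON.stringify(response.data));
        } catch (e) {
          // quota exceeded -- a million records won't fit in ~5MB,
          // so just keep the data in memory for this page load
        }
        return response.data;
      });
    }
  };
}]);
```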
But unless you have a very, very good reason to actually store a million-plus records in browser storage, you should just retrieve the data from your API when you need it.
On a side note, you will probably want to keep an eye on the memory usage of your Angular app. Especially with a lot of data held in memory, things like $watch() and ng-repeat can make your application's performance suffer.
I have the following tech stack:
SQL Server/MVC4/Web API backend and an HTML5/jQuery Mobile frontend. Data is transferred via JSON.
I would like to know how I can reduce the data transferred via JSON, i.e. I don't want to fetch data from the server that I already have.
Are there any libraries or design patterns to use or research that would help me with this? What architecture is commonly used to solve this?
To minimise the data transferred, you can use local caching with HTML5 localStorage and sessionStorage.
http://www.w3schools.com/html/html5_webstorage.asp
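As a rough sketch of that caching idea (the URL and cache key here are made up):

```
function getCustomers() {
  var cached = localStorage.getItem('customers');
  if (cached) {
    // already have it -- resolve immediately without a round trip
    return $.Deferred().resolve(JSON.parse(cached)).promise();
  }
  return $.getJSON('/api/customers').then(function (data) {
    localStorage.setItem('customers', JSON.stringify(data));
    return data;
  });
}

getCustomers().then(function (customers) {
  // render the jQuery Mobile list from the data
});
```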
If you will be handling real-time data, I like using SignalR to enable push updates to active subscribers; it avoids the need for polling.
http://www.asp.net/signalr/
Remember to enable your web server to cache data where appropriate.
Finally, do some good old-fashioned analysis of your objects to make sure you are not sending unnecessary data. JSON.NET gives you very good control over which items appear in your serialized output.
I am new to MVVM, and I wanted to understand:
If you have a model in the back end, say a C# library which is getting data from the database or some other service, how would the model notify the viewmodel?
I understand that in MVVM, INotifyPropertyChanged does that for WPF (just read it somewhere), but what about a web-app scenario? Does the viewmodel in JavaScript always have to ping the model to identify whether there is a change in the model and then propagate it to the UI?
I am assuming that the viewmodel would always have to send an Ajax request to an ASMX or API endpoint at a set interval of 10 minutes or so (just an example).
Is that how it works end to end? Any example would be great.
Because one of the touted properties is automatic UI refresh, I am assuming that fresh data needs to be requested from the server at regular intervals. A lot of the examples I see on the web only show interaction between the ViewModel and the View; I hardly ever see anything with Model, ViewModel and View all combined together.
I could see that if one uses SignalR, which sends a ping from the server to the client, then the ViewModel could be updated and hence the View.
But if you need to ask the server for a fresh set of data every time, then what's with the hype around Knockout? jQuery has been doing that for a while, except for the declarative binding stuff in the Knockout library.
I would appreciate it if somebody could correct me.
Thanks
It is up to your client application to fetch new data, since the server doesn't have any concept of observables. There is also a useful mapping plugin that can automatically map the JavaScript data from the server into observables, so that when you fetch data you don't have to re-map it by hand.
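For example, a minimal polling sketch with the mapping plugin (the URL, field names and interval are just examples):

```
var viewModel = ko.mapping.fromJS({ orders: [] });
ko.applyBindings(viewModel);

function refresh() {
  $.getJSON('/api/orders', function (data) {
    // re-map onto the existing observables instead of rebuilding the view model
    ko.mapping.fromJS({ orders: data }, viewModel);
  });
}

refresh();
setInterval(refresh, 10 * 60 * 1000); // poll every 10 minutes, as in the question
```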
However, it is possible to notify the client of changes from the server. One way to make the server push changes to the client is via HTML5 WebSockets. SignalR is a good library candidate for that task: it will open a WebSocket connection so the server can notify the client of changes. On the server side you would use Service Broker's SqlDependency to trigger event notifications on updates. An example can be found here.
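With SignalR the polling above goes away; continuing the sketch with the jQuery client (the hub name and method are hypothetical):

```
var hub = $.connection.ordersHub;

// called by the server whenever the SqlDependency fires
hub.client.ordersChanged = function (orders) {
  ko.mapping.fromJS({ orders: orders }, viewModel);
};

$.connection.hub.start();
```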
Good luck!
Also, here is a really good video about Knockout.js that will give you an understanding of the framework.
I'm working on a project where there is some debate over how some JS filtering should be implemented, and I would like to ask you guys for some input on this.
Today we have a site that displays a long list of repeated entries of data, and some JS filtering would be nice for users to navigate through it. The usual stuff: keyword, order, date, price, etc. The question is not the use of JS, which is obvious, but the origin of the data. One person argues that the HTML itself should be used and that the JS should parse through it to perform the user's desired filtering. The other argues that we should use JSON generated on the server, and that the JSON should be the data's origin.
What do you guys think about this? What are the pros and cons?
As a final request, I would like you to be as informative as possible, since your answers will be used and referenced by all of us in the company. (Yes, that is how much we trust you! :)
The right choice is a matter of taste and system architecture as well as utility.
I would go with dynamically generated pages using JS and JSON. These days I think you can safely assume that most browsers have JavaScript enabled; however, you may need to make provisions for crawlers (GoogleBot, Bing, Ask, etc.), as they may not fully execute all JS and hence may not index the page, unless you figure out some kind of exception for supporting them.
Using JS+JSON also means that support for mobile devices is handled client side, without the web server having to create anything special.
Doing DOM manipulation as the alternative would not be my best friend, as the logic of page control and layout would be split across two places: partly in the view controller on the web server, and partly in the JavaScript. In my opinion it is better to have it in one place and have the view controller only generate JSON and serve the root pages, etc.
However, this is a matter of taste, and I'm not sure I would say that there is one correct and best solution.
I think it's a lot cleaner if the data is delivered as JSON and the presentation HTML, or view, of that data is then generated from the JSON with JavaScript.
This fits the more classic style of keeping core data structures separate from views. In this manner you can generate all types of views without having to constantly munge/revise the way you store, access and manipulate the data. You can even build classes and methods to develop a clean interface to your data that is entirely independent of how that data is displayed.
The only issue I see with that is if the browser doesn't support JavaScript and that browser is a desired viewer. In that case, you have to include a default HTML version from the server that will obviously not be manipulated, and the JSON will be ignored.
The middle ground is that you include both the JSON and the "default", initial HTML view of that data in the rendered HTML. The view comes up quickly and non-JS browsers can see something useful. Then any future manipulation of the view (sorting, for example) uses the JSON data and generates a new, clean view from it; no data is ever "parsed" out of the HTML view, as in the sketch below.
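A toy sketch of that middle ground, assuming the server embeds the data in a JSON script tag (the element ids and field names are made up):

```
// <script id="items-json" type="application/json">[ ... ]</script>
var items = JSON.parse(document.getElementById('items-json').textContent);

function sortByPrice() {
  items.sort(function (a, b) { return a.price - b.price; });
  // rebuild the view from the JSON, never from the existing HTML
  // (real code should escape item.name before inserting it)
  var html = items.map(function (item) {
    return '<li>' + item.name + ' - ' + item.price + '</li>';
  }).join('');
  document.getElementById('item-list').innerHTML = html;
}
```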
In larger projects, this can also facilitate the separation of presentation from data manipulation, so different people may work on creating HTML views vs. manipulating the data (like sorting).
I would make multiple Ajax calls to the server and have it return the sorted/filtered data. If your server backend is fast, it won't be very taxing, and you could even cache the data between requests.
If you only have 50-100 items, it would be reasonable to send them all to the client and have JavaScript sort and filter them.
Some considerations to help make the decision:
Is the information sensitive and unique? (This voids any benefit of caching from my first point.)
What is the most common request that will happen, and are you optimizing for that?
How much data is there (tens of rows, hundreds, thousands, millions)?
Does your site have to work with JavaScript turned off (supporting older browsers)?
Is your development team more comfortable doing this on the front end or the back end?
The answer is that it depends on your situation.
I'm developing a Sencha Touch application that has multiple data stores. In order to improve performance, I would like to load the data into these stores with a single HTTP request.
For this to work, the server would output different JSON root elements, one for each store. How can this be done in Sencha Touch?
This question may also be useful for Ext JS developers, as I believe Ext JS uses the same data stores as Sencha Touch.
Any advice would be much appreciated.
It's possible! You will have to use an Ajax request to pull all the store data at once, then separate it on the client side and load the appropriate data into each store. You can make use of the MemoryProxy class here.
Remember that you will not set an httpProxy on the stores; the data will be loaded into each store using the loadData method.
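Something along these lines (the URL, store ids and the shape of the JSON are assumptions):

```
Ext.Ajax.request({
    url: '/api/bootstrap',
    success: function (response) {
        var data = Ext.decode(response.responseText);
        // each store is configured with a memory proxy, no url of its own
        Ext.getStore('Users').setData(data.users);       // loadData() in Ext JS
        Ext.getStore('Projects').setData(data.projects);
    }
});
```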
I would like to see a decent example of a mobile web app using the Sencha framework with a client-side DB accessed with SQLite. I'm currently digesting jQTouch and kind of get the binding method used there from reading Jonathan Stark's "iPhone Apps" book, but I can't find any examples of accessing Sencha's features, i.e. list elements, with SQLite. The DB will be small: 30 records, with about 5 fields, mostly numeric, a few of them calculated. All the math is done in JavaScript and I have that part working (in Dashcode). I need to add, delete, and edit the records.
Any pointers or examples would be very much appreciated. I'm an old dog trying to learn new tricks. Thanks.
Sencha is client-side JavaScript, so your application actually runs on top of Safari. That means you can forget about accessing (or installing) your own SQLite database from within the browser sandbox.
Having said that, you want to learn some new tricks, so why don't you read up on localStorage and DOM storage? Basically, the HTML5 specification allows for offline database storage based on SQLite (imagine relational-database cookies). There is one per domain and it can be up to 5MB in size. I believe the iPhone supports this as well.
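For the SQLite-backed option specifically, the Web SQL API looks roughly like this (the database and table names are made up):

```
var db = openDatabase('mydb', '1.0', 'demo database', 5 * 1024 * 1024);

db.transaction(function (tx) {
  tx.executeSql('CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, name TEXT, value REAL)');
  tx.executeSql('INSERT INTO records (name, value) VALUES (?, ?)', ['foo', 42]);
  tx.executeSql('SELECT * FROM records', [], function (tx, result) {
    for (var i = 0; i < result.rows.length; i++) {
      console.log(result.rows.item(i)); // each row comes back as a plain object
    }
  });
});
```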
Here are some links: an introduction, some API information, and a nice little blog entry by a chap called Ben Lister.
Your client-side code (i.e. Sencha/JavaScript) would not access the SQLite database. It will need to read either JSON or XML from the server, so you'll need server-side code to read the data from the database and format it in a way that your Sencha data readers will understand.
What are you using server side? If it's PHP, you should look into MDB2.
I had a very good experience integrating the Lawnchair library with Sencha Touch. Take a look at their guide; it's very easy.
It looks like there is a SQLite proxy available for Sencha 2 now: http://market.sencha.com/addon/sqliteproxy-
Check out this thread on the Sencha Forums; it's a user-created proxy for SQLite which I've successfully used to put data into a SQLite DB. The proxy comes with an example, but I might try to make a slightly more complicated one at some point.
Sencha's local storage doesn't take advantage of SQLite via the JavaScript API in the browser, but it does use local key:value storage and has its own way of referencing data to make it pseudo-relational. This is still part of the WebDB spec, which is probably still SQLite under the hood if I had to guess. It's more persistent than a cookie or a session, regardless.
You can also receive XML/JSON from a server over JSONP, or over Ajax if you're on the same domain, create a model to handle that data as well, and bind it to a local store so that your data is available offline.
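As a rough sketch of that local-store approach in Sencha Touch 2 (the model and store names are just examples):

```
Ext.define('App.model.Record', {
    extend: 'Ext.data.Model',
    config: {
        fields: ['id', 'name', 'value'],
        proxy: { type: 'localstorage', id: 'records' }
    }
});

var store = Ext.create('Ext.data.Store', { model: 'App.model.Record' });
store.load();                            // read whatever was persisted earlier
store.add({ name: 'offline item', value: 42 });
store.sync();                            // write back to local storage
```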