I have a huge array (4 MB) of data on the server side. Right now I'm using jQuery's getJSON method to fetch the array and render the whole thing in the browser. It turns out that getting the array data is too slow; the main cost may be the JSON parsing, but I'm not sure.
What is the best/fastest way to get this kind of array data from the server?
Four megabytes is a lot of data to be sending to the client in one go. Rather than trying to speed up how fast JavaScript can process JSON, I suspect your best bet will be to figure out some tactics to break the data up a bit more (so you can work with less at a time).
I mean, do you really need all of it at once? It's probably worth looking into adding some server-side filtering to the JSON being returned, so as to limit it to only the data your app actually needs for whatever it's supposed to be doing.
For example, if you're planning to display a massive list of products, it may be worth loading just the first 50-100, then making a second call to load the next 50-100 as the user scrolls down the page, and so on.
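Something along these lines is usually enough to get started. This is only a rough sketch: the /products URL, the offset/limit parameters and renderProducts() are all placeholders for whatever your server and page actually use.

    var offset = 0, pageSize = 50, loading = false, done = false;

    function loadMore() {
        if (loading || done) return;
        loading = true;
        // /products is an invented endpoint; point this at whatever your server exposes
        $.getJSON('/products', { offset: offset, limit: pageSize }, function (items) {
            renderProducts(items);           // your own rendering code
            offset += items.length;
            done = items.length < pageSize;  // a short page means we've reached the end
            loading = false;
        });
    }

    $(window).on('scroll', function () {
        // fetch the next slice when the user nears the bottom of the page
        if ($(window).scrollTop() + $(window).height() > $(document).height() - 200) {
            loadMore();
        }
    });

    loadMore(); // first page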
Related
I'm inexperienced at JavaScript, and don't know how to optimize things for my situation.
I've written an autocomplete function using the jQuery UI Autocomplete plugin. The source for the completion is a JSON array, holding a few thousand items, that I load from my same server. This autocomplete will be attached to a search box that will be on every page of my site, so it'll get hit a lot; I don't want to request the same array every time anyone hits any page. The completion depends on database values, so I can't just put the array in static form in the code. However, it doesn't have to be perfectly synced; caching it for some amount of time would be fine.
Right now, I'm loading the array with $.getJSON. It seems that using an actual remote source is meant to be an AJAX thing where the server does the actual search itself as you type; I think this is probably overkill given that there are only a few thousand, rather than millions, of items--I don't want to fire a zillion requests every time someone types into the search box.
What is the correct way of handling this? I'm totally unfamiliar with how caching would work in JS, or if there's some built-in way to accomplish a similar thing.
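For what it's worth, the kind of thing I'm imagining is below. This is purely a sketch: the /terms.json URL, the cache key and the ten-minute lifetime are all made up.

    var CACHE_KEY = 'autocompleteTerms';
    var CACHE_MS = 10 * 60 * 1000;   // ten minutes; no idea what a sensible lifetime is

    function withTerms(callback) {
        var cached = null;
        try { cached = JSON.parse(localStorage.getItem(CACHE_KEY)); } catch (e) {}
        if (cached && Date.now() - cached.time < CACHE_MS) {
            callback(cached.terms);          // fresh enough, skip the request
            return;
        }
        // /terms.json is a placeholder for wherever the array actually lives
        $.getJSON('/terms.json', function (terms) {
            localStorage.setItem(CACHE_KEY, JSON.stringify({ time: Date.now(), terms: terms }));
            callback(terms);
        });
    }

    withTerms(function (terms) {
        $('#search').autocomplete({ source: terms });
    });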
I am working on a data visualisation app that will allow the user to filter the data they see by various criteria.
I want to keep as much logic as possible on the Python/Django side, like this:
1. Data is passed from the Django view to the template.
2. On the frontend, the user filters the data through various controls: dropdowns, sliders etc.
3. The control inputs are sent back to the Django view (via an AJAX POST request?), which returns the filtered data and sends it back to the template.
4. The template - the visualisation - is updated with the filtered data.
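Roughly, I imagine the frontend half of steps 3-4 looking something like this. All of the names (the /filter-data/ URL, the control ids, updateVisualisation()) are placeholders, and CSRF handling is omitted.

    $('#region-select, #year-slider').on('change', function () {
        $.post('/filter-data/', {
            region: $('#region-select').val(),
            year: $('#year-slider').val()
        }, function (filtered) {
            updateVisualisation(filtered);   // redraw the charts with the returned data
        }, 'json');
    });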
Is this a good approach? My concern is that a lot of data will be flying around and the app might be unresponsive.
Another, possibly faster idea is to filter the data on the client side in JavaScript - but I would really like to leverage the great Python data munching libraries instead.
If you want to use a DRF API, then go with it. A lot of websites have filtering features. I'd suggest you take a look at the django_filter package; it's possible to integrate it with DRF.
The worst thing about filtering data on the client side is that you can't use pagination. Imagine that you have 500+ objects to filter: a JavaScript filtering function is what will really make your app slow.
At the same time, if you have 20-30 objects to filter and this number won't grow, then you can go with JS only and a single getAll() endpoint.
A common approach is to set up a JavaScript on-change handler and construct GET requests like this (example from a real project):
https://yourbackend.com/api/product/?status=not_published,published,inactive&search=132&moderation_status=declined,on_moderation,not_ready&ordering=desc&price_max=1000&page=1
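In jQuery terms the handler can just hand $.getJSON an object of parameters and let it build that query string. The selectors and the getCheckedValues()/renderResults() helpers below are invented; the parameter names mirror the example URL above.

    $('.filter-control').on('change', function () {
        var params = {
            status: getCheckedValues('#status-filter'),   // e.g. "not_published,published"
            search: $('#search-box').val(),
            ordering: $('#ordering-select').val(),
            price_max: $('#price-max').val(),
            page: 1
        };
        // jQuery serialises the object into ?status=...&search=...&ordering=... for you
        $.getJSON('https://yourbackend.com/api/product/', params, renderResults);
    });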
DRF + django_filters will work just fine for that, with a minimum of your code involved.
A well-known pitfall on the JS side is making requests without a timeout, e.g. the user types text and a request is sent on every keyup() event, or they move a slider and a flood of requests goes out. You need to make the request only once the user stops, e.g. 300ms after they have chosen a value. See this question for reference.
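A minimal version of that timeout looks like the following; the #search-box selector and fetchFilteredData() are placeholders, and 300ms is just a reasonable starting point.

    var debounceTimer = null;

    $('#search-box').on('keyup', function () {
        clearTimeout(debounceTimer);
        debounceTimer = setTimeout(function () {
            // only fires once the user has stopped typing for 300ms
            fetchFilteredData();
        }, 300);
    });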
One more point: your database has to be normalised and have proper indexes. But you only need to look at that side of things if you actually end up with slow SQL queries.
Summing up: I'd go with a thin JS layer and do most of the work on the backend.
I have some data that I want to display on a web page. There's quite a lot of data so I really need to figure out the most optimized way of loading and parsing it. In CSV format, the file size is 244K, and in JSON it's 819K. As I see it, I have three different options:
1. Load the web page and fetch the data in CSV format as an Ajax request. Then transform the data into a JS object in the browser (I'm using a built-in method of the D3.js library to accomplish this).
2. Load the web page and fetch the data in JSON format as an Ajax request. Data is ready to go as is.
3. Hard code the data in the main JS file as a JS object. No need for any async requests.
Method number one has the advantage of reduced file size, but the disadvantage of having to loop through all (2700) rows of data in the browser. Method number two gives us the data in the end-format so there's no need for heavy client-side operations. However, the size of the JSON file is huge. Method number three has the advantage of skipping additional requests to the server, with the disadvantage of a longer initial page load time.
What method is the best one in terms of optimization?
In my experience, data processing times in JavaScript are usually dwarfed by transfer times and the time it takes to render the display. Based on this, I would recommend going with option 1.
However, what's best in your particular case really does depend on your particular case -- you'll have to try. It sounds like you have all the code/data you need to do that anyway, so why not run a simple experiment to see which one works best for you.
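For option 1, the experiment is only a few lines. This assumes D3 v5 or later, where d3.csv returns a Promise; the file path, the column names and draw() are stand-ins for yours.

    // d3.csv fetches the ~244K CSV and parses every row into an object
    d3.csv('data/records.csv', function (row) {
        // optional row converter: coerce numeric columns while parsing
        return { name: row.name, value: +row.value };
    }).then(function (rows) {
        draw(rows);   // ~2700 row objects, the same shape the JSON would give you
    });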
I have a website that contains graphs which display employee activity records. There are tiers of data (ie: region -> state -> office -> manager -> employee -> activity record) and each time you click the graph it drills down a level to get to display more specific information. The highest level (region) requires me to load ~1000 objects into an array and the lowest level is ~500,000 objects. I am populating the graphs via a JSON formatted text file using:
    $.ajax({
        url: 'data/jsondata.txt',
        dataType: 'json',
        success: function (data) {
            // keep a reference to the full record set for the charts
            largeArray = data.employeeRecords;
        }
    });
Is there an alternative method I could use without hindering response time/performance? I am caught up in the thought that I must pre-load all of the data client-side, otherwise there will be lag time if I need to fetch it on a user click. If anyone can point me to best practices, and maybe even explain what is considered "TOO MUCH" client-side data, I'd appreciate it.
FYI, I'm restricted to using an old web server, and if I want to do anything server-side I'd be limited to classic ASP; otherwise it has to be client-side. Thank you!
If your server responds quickly
In this case, you can probably simply load data on demand when a user clicks. The server is quick, so why bother trying to be smarter for no gain.
If the server is quick, but not quick enough, then you might be able to preload the next level down while drawing the current one. E.g. if you have just rendered the graph at the "office" level, silently preload the "manager" data (the next level down) while the user is still reacting to the screen update.
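A sketch of that "render one level, quietly fetch the next" idea; the file naming scheme, drawGraph() and the level names are all invented here.

    var cache = {};

    function preload(level, parentId) {
        var key = level + ':' + parentId;
        if (cache[key]) return;   // already fetched
        // the file naming is made up; use whatever scheme your data files follow
        $.getJSON('data/' + level + '_' + parentId + '.json', function (data) {
            cache[key] = data;
        });
    }

    function showOffice(officeId, officeData) {
        drawGraph(officeData);           // render the level the user asked for
        preload('manager', officeId);    // quietly fetch the next level down
    }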
If the server is too slow for data on demand
In this case you probably need to model exactly where it is slow and address that. There are several things in play here, and your question doesn't exactly say:
1. Is the server slow to query the database? If yes, fix it; there is little you can do client-side to solve this.
2. Is the server slow to package the data for transmission? Harder to fix - is the server big enough?
3. Is network transmission slow? Hmmm, then you need to send less data or get users onto faster bandwidth.
4. Is browser unpack time slow (i.e. the delay decoding the data before your script can chart it)? Change how you package the data, or send less data, such as chunks.
5. Can browsers handle 500,000 objects? You should be able to just monitor the memory use of the browser you are using; there are opinions both ways on this. It will really depend on your target users' browsers/hardware.
You might like to look at this question - What is the most efficient way of sending data for a very large playlist over http? - as it shows an alternative way of sending and handling data which I've found to be much quicker, for step 4 above. Of course, at 500k objects you will no longer be able to use localStorage, but I've been experimenting with downloading millions of array elements and it works OK (still a WIP). I don't use jQuery, so I'm not sure how usable this is for you either.
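If you do end up chunking, the client side can be as simple as fetching slices until a short one comes back. The paging convention and onAllLoaded() below are made up, so adapt them to however you actually split the data.

    var all = [];

    function loadChunk(page) {
        // ?page=N&size=50000 is an invented convention; any slicing scheme works
        $.getJSON('data/records.json', { page: page, size: 50000 }, function (chunk) {
            all = all.concat(chunk);
            if (chunk.length === 50000) {
                loadChunk(page + 1);     // a full chunk means there is probably more
            } else {
                onAllLoaded(all);        // a short chunk means we are done
            }
        });
    }

    loadChunk(1);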
Best practice? Sorry, I cannot help with that part of the question.
I am continually pulling in a list of entries (over 200) from a remote server and showing them in a table (many pieces of data per row). The data changes and fluctuates over time. What's the most efficient way in JS to render this to the screen? As the data changes, do I simply wipe my DOM and re-render with the new data, or do I keep a list of ids and simply update the correct tr DOM entries?
It's already sorted and filtered to be the correct information (server-side), so it's just displaying it to the user that I am concerned with.
Any suggestions would be helpful. I am using jQuery for my requests but that can be changed. I need to know the quickest (latency wise) way of pulling this large dataset and displaying it in a tabular format.
Edit: Pagination is not possible in this scenario.
Edit2: The backend is Python (WSGI) running in a load balancer and is capable of flushing out 600 reqs / second. The data is only available as JSON and the backend is more of an API.
Are you using HTML tables? Some time ago I came across a Microsoft blog post from the IE team stating that rendering tables is slow. Very slow. Re-rendering the whole table every time will probably be slower than updating its values, but that is just my guess.
In fact, my opinion here is that updating values will be faster than re-rendering elements in the page. To optimize your latency, you could keep a hash of objects mapped by id, so you don't have to look up the DOM every time you update the values.
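As a sketch - the #data-table markup and the id/name/value fields are assumptions about your data:

    var rowsById = {};   // id -> cached <tr> element, so we never query the DOM per update

    function render(entries) {
        entries.forEach(function (entry) {
            var $row = rowsById[entry.id];
            if (!$row) {
                // first time we see this id: build the row once and remember it
                $row = $('<tr><td class="name"></td><td class="value"></td></tr>');
                rowsById[entry.id] = $row;
                $('#data-table tbody').append($row);
            }
            // update the cells in place instead of rebuilding the table
            $row.find('.name').text(entry.name);
            $row.find('.value').text(entry.value);
        });
    }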
I quite like the 'Live Image Search' approach, which provides more data as you scroll down.
If your data list is getting really large, consider not displaying all of that data to the user at one time (if that is possible in your situation). Not only will the user get lost looking at that much data, but the browser will have to render the whole list and slow you down.
You don't mention what server-side technology you are using. If you are in .NET, there are a couple of ASP.NET controls that use a data pager (like a GridView). There is also a PagedDataSource object you might look into (which can be used with any ASP.NET control that has a DataSource property). Both will break your data up into pages, and only the viewed page will be rendered at a time. This decreases latency dramatically.