AngularJS app freezes when loading in new data to $scope - javascript

This is probably a difficult question to answer as I'm not sure what the root problem is here, but I'd appreciate it if someone would take a look.
http://threadfinder.net/search%3FnameTags=jacket/0
If you continually scroll down, more items are loaded in using ngInfiniteScroll and this function:
$scope.moreProducts = function() {
    if ($scope.busy || $scope.noMore) { return; }
    $scope.busy = true;
    $scope.itemsLoaded += 27;
    var theQuery = $routeParams.query.replace(/\?|%3f/gi, '');
    $scope.itemSearch.get({
        query: theQuery,
        page: $scope.itemsLoaded
    }, function(data) {
        if (data.posts.length <= 0) {
            $scope.noMore = true;
        } else {
            $scope.noMore = false;
            for (var i = 0; i < data.posts.length; i++) {
                if ($scope.user) {
                    if ($scope.user.likes.indexOf(data.posts[i]._id) !== -1) {
                        data.posts[i].liked = 'http://i.imgur.com/CcqoATE.png';
                    } else {
                        data.posts[i].liked = 'http://i.imgur.com/tEf77In.png';
                    }
                    $scope.posts.push(data.posts[i]);
                }
            }
            $scope.busy = false;
        }
    });
};
(I'm using AngularJS Deckgrid for the tile layout, but I've tried disabling it and there is no big change in performance.)
If you keep scrolling, after you get to ~300 items loaded on the page performance starts to take a hit and the app freezes while new items are being loaded into scope.
I understand that perhaps it's just a fact that loading in all this data places a big load on the page - with every 27 items loaded (one infiniteScroll GET call) the data fetched weighs about 30kb, so at around 300 items there's ~900kb of data in scope. This data is as small as I can make it (~500 lines of JSON).
The question is:
Are there any recommendations, plugins, directives or best-use practices for AngularJS for when loading a lot of data in the page $scope?
For clarification, the app is built on Node/ExpressJS/MongoDB
EDIT: I've checked out this issue on two computers (both on OSX) and it is MUCH more prevalent in Chrome than in Safari. Chrome totally staggers when loading in the data; Safari is really smooth and only shows noticeable lag once you get past 600+ items, and even then it's nowhere near as bad as Chrome.

I took a look at the page with AngularJS Batarang, and it doesn't appear that your app is spending a lot of time in the digest cycle. If you take a look at Chrome's "Timeline" panel during the periods of UI lag, you can see the following:
Most of the time is spent in lots of "Parse HTML." A quick Google search turned up this Stack Overflow question, which has some answers that may be useful; in particular, the Google Groups post about manual string concatenation may be worth trying here. You may also consider breaking the big block of HTML parses into smaller chunks by adding new items to the scope in small batches (perhaps using $evalAsync or a few timers) to see if that helps.
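For illustration, here is a minimal sketch of that batching idea, reusing the question's $scope.posts array and Angular's $timeout service so the browser gets a render pass between chunks (the function name and chunk size are my own guesses, not the original code):

function pushInBatches(newPosts) {
    var chunkSize = 9; // smaller chunks = smoother UI, longer total load
    function nextChunk() {
        var chunk = newPosts.splice(0, chunkSize);
        if (chunk.length) {
            Array.prototype.push.apply($scope.posts, chunk);
            $timeout(nextChunk, 50); // yield to the browser before the next batch
        }
    }
    nextChunk();
}

The success callback in the question could then call pushInBatches(data.posts) instead of pushing all 27 items inside a single digest.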

Related

What is more effective: every time make an ajax request or slice a result in func?

I have a JSON data of news like this:
{
    "news": [
        {"title": "some title #1", "text": "text", "date": "27.12.15 23:45"},
        {"title": "some title #2", "text": "text", "date": "26.12.15 22:35"},
        ...
    ]
}
I need to get a certain number of items from this list, depending on an argument passed to a function. As I understand it, this is called pagination.
I can get the ajax response and slice it immediately, so every time the function is called it makes an ajax request.
Like this:
function showNews(page) {
    var newsPerPage = 5,
        firstArticle = newsPerPage * (page - 1),
        xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function() {
        if (xhr.readyState == 4) {
            var newsArr = JSON.parse(xhr.responseText);
            newsArr.news = newsArr.news.slice(firstArticle, newsPerPage * page);
            addNews(newsArr);
        }
    };
    xhr.open("GET", url, true);
    xhr.send();
}
Or I can store the whole result in newsArr and slice it in that additional function addNews, sorted by pages.
function addNews(newsArr, newsPerPage) {
    var amount = newsArr.news.length,            // total number of articles
        pages = Math.ceil(amount / newsPerPage), // counts number of pages
        pagesData = {},
        min, max;
    for (var i = 0; i < pages; i++) {
        min = i * newsPerPage;       // min index of current page in loop
        max = (i + 1) * newsPerPage; // max index of current page in loop
        newsArr.news.forEach(createPageData);
    }
    function createPageData(item, j) {
        if (j + 1 <= max && j >= min) {
            if (!pagesData["page" + (i + 1)]) {
                pagesData["page" + (i + 1)] = { news: [] };
            }
            pagesData["page" + (i + 1)].news.push(item);
        }
    }
}
So, the simple question is: which variant is more effective? The first one puts load on the server, and the second uses the user's memory. What would you choose in my situation? :)
Thanks for the answers. I understood what I wanted. But there are so many good answers that I can't choose the best one.
It is actually a primarily opinion-based question.
For me, the pagination approach looks better because it will not produce a "lag" before displaying the news. From the user's POV the page will load faster.
As for me, I would do pagination + preload of the next page. I.e., always store the contents of the next page, so that you can show it without a delay. When a user moves to the last page - load another one.
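A rough sketch of that preloading idea, reusing the question's xhr/url setup (the page query parameter is an assumption about the server, and error handling is omitted):

var cache = {}; // page number -> array of news items

function fetchPage(page, callback) {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function() {
        if (xhr.readyState == 4) {
            cache[page] = JSON.parse(xhr.responseText).news;
            callback(cache[page]);
        }
    };
    xhr.open("GET", url + "?page=" + page, true);
    xhr.send();
}

function showPage(page) {
    if (cache[page]) {
        addNews(cache[page]);               // already preloaded: no delay
    } else {
        fetchPage(page, addNews);           // first visit: fetch, then render
    }
    if (!cache[page + 1]) {
        fetchPage(page + 1, function() {}); // quietly preload the next page
    }
}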
Loading all the news is definitely a bad idea. If you have 1000 news records, then every user will have to load all of them...even if he isn't going to read a single one.
In my opinion, less requests == better rule doesn't apply here. It is not guaranteed that a user will read all the news. If StackOverflow loaded all the questions it has every time you open the main page, then both StackOverflow and users would have huge problems.
If the max number of records your service returns is around 1000, then I don't think it is going to create a huge payload or memory issue (judging by the nature of your data), so I think option 2 is better because:
the number of service calls will be smaller;
the user will not see any lag while paginating, so their experience of using the site will be better.
As a rule of thumb:
less requests == better
but that's not always possible. You may run out of memory/network if the data you store is huge, i.e. you may need pagination on the server side. Actually, server-side pagination should be the default approach, and then you think about improvements (e.g. local caching) if you really need them.
So what you should do is try all scenarios and see how well they behave in your concrete situation.
I prefer to fetch all the data up front and show it on a certain condition: when the user clicks the next button, the data is already there, so just hide and show it on condition using jQuery.
Calling ajax every time is a bad idea.
But you do still need an ajax call for new data if the data changes after some period of time.

Meteor: Slow template render blocks method return value

I'm experiencing a very strange issue with Meteor whereby a single page is taking a very long time to load. I've gone down numerous routes to try and troubleshoot before identifying a cause which isn't really making sense to me.
Fundamentally I have a two-step process that collects some initial data and then inserts a couple of hundred records into the database. On the second page I then need to be able to assign users to each of the new records. I can also go back into the same page and edit the assigned users so in this second scenario the inserts have already happened - same lag.
The insert code is relatively straightforward.
Client Side
setup assessment meta-data & create new assessment record + items:
// set updatingAssessment variable to stop subs re-loading as records are inserted
Session.set('updatingAssessment', true);
Meteor.call('insertAssessment', assessment, function(err, result) {
    if (err) { showError(err.reason); }
    var assessmentId = result.toHexString();
    // some code excluded to get 'statements' array
    Meteor.call('insertAssessmentItems',
        assessmentId,
        statements,
        function(err, result) {
            Session.set('updatingAssessment', false);
        });
});
// hide first modal and show 2nd
Session.set("assessmentStage1", false);
Session.set("assessmentStage2", true);
Server methods
Volume wise, we're inserting 1 assessment record and then ~200 assessment items
insertAssessment: function(assessment) {
    this.unblock();
    return Assessments.insert({
        <<fields>>
    }, function(err, result) { return result; });
},
insertAssessmentItems: function(assessmentId, statements) {
    this.unblock();
    for (var i = 0; i < statements.length; i++) {
        AssessmentItems.insert({
            <<fields>>
        }, function(err, result) { });
    }
}
Initially I tried numerous SO suggestions around not refreshing subscriptions while the inserts were happening, using this.unblock() to free up DDP traffic, making queries on the slow page non-reactive, only returning the bare minimum fields, calling the inserts asynchronously, etc, etc... all of which made no difference with either the initial insert, or subsequent editing of the same records (i.e go out of the page and come back so the inserts have already happened).
After commenting out all database code and literally returning a string from the meteor method (with the same result / lag) I looked at the actual template generation code for the html page itself.
After commenting out various sections, I identified that a helper building up a dropdown list of users (repeated for each of those ~200 records) was causing the massive spike in load time. When I commented out the dropdown creation, the page load happened in <5s, which is within a more acceptable range although still slower than I would like.
Two oddities with this, which I was hoping somebody could shed some light on:
Has anything changed recently in Meteor that would have caused a slowdown? I haven't significantly changed my code for a couple of months and it was working before, albeit still taking around 10 seconds to fully render the page. The page also used to be responsive when loaded, whereas now the CPU is stuck at 100% and selecting users is very laggy.
Why is the actual template loading blocking my server method returns? There is no doubt I need to re-write my template to be more efficient but I spent a lot of time trying to figure out why my server methods were not returning values quickly. I even analysed the DDP traffic and there was no activity while it was waiting for the template to load - no reloaded subs, no method returns, nothing.
Thanks in advance,
David
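(On the dropdown cost specifically: a pattern that commonly addresses this kind of per-row helper expense is to build the user list once per template instance and let every row's dropdown reuse it. A hedged sketch only - the template and field names below are hypothetical, not the original code:)

Template.assessmentItems.onCreated(function() {
    // run the cursor once; every row reads this cached array
    this.userOptions = Meteor.users.find({}, { fields: { username: 1 } })
        .map(function(u) { return { _id: u._id, label: u.username }; });
});

Template.assessmentItems.helpers({
    userOptions: function() {
        return Template.instance().userOptions;
    }
});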

ExtJs 4.2 ref-selector vs Ext.getcmp() vs Ext.ComponentQuery.query()

I was asked to develop a tab panel with 6 tabs, each having 30 to 40 elements. Each tab acts as a form accumulating the details of a person, and the last tab is a summary page which displays all the values entered in the first five tabs. I was asked to provide the summary as a tab because the user can navigate to it at any instance and glance at the details entered so far. I am following the ExtJs MVC pattern. The payload is coming from / going to a Spring MVC application (JSON).
Using the tab change event in the controller, if the new tab is the summary I render the page with show/hide functionality.
Method 1: In the controller I used Ext.getCmp('id of each element inside the tabs') and showed/hid the components in the summary tab based on the values entered by the user. This killed my app in IE8, popping up a message saying "the script is slow and blah blah..." - I had to click No 5 or 6 times for the summary tab to render and display the page.
Method 2: In the controller I used refs and selectors to access all the items in the tabs, with an itemId for each and every field in the summary tab, like this.getXyz().show(). I thought it would be fast. Yes it was in Google Chrome, but my app in IE8 is still slow compared to Chrome/Firefox.
Any suggestions regarding this, and ways to decrease page render time? The summary page has more than 1000 fields. Please feel free to share your thoughts or throw some ideas at this.
thank you!!
I've got a few suggestions you can try. First, to answer your title, I think the fastest simple way to lookup components in javascript is to build a hash map. Something like this:
var map = {};
Ext.each(targetComponents, function(item) {
    map[item.itemId] = item;
});
// Fastest way to retrieve a component
var myField = map[componentId];
For the rendering time, be sure that the layout/DOM is not updated each time you call hide or show on a child component. Use suspendLayouts to do that:
summaryTabCt.suspendLayouts();
// intensive hide-and-seek business
// just one layout calculation and subsequent DOM manipulation
summaryTabCt.resumeLayouts(true);
Finally, if despite your best efforts you can't cut on the processing time, do damage control. That is, avoid freezing the UI the whole time, and having the browser telling the user your app is dead.
You can use setTimeout to limit the time your script will be holding the execution thread at once. The interval will let the browser some time to process UI events, and prevent it from thinking your script is lost into an infinite loop.
Here's an example:
var itemsToProcess = [...],
    // The smaller the chunks, the more responsive the UI,
    // but the longer the whole processing will take...
    chunkSize = 50,
    i = 0,
    slice;

function next() {
    slice = itemsToProcess.slice(i, i + chunkSize);
    i += chunkSize;
    if (slice.length) {
        Ext.each(slice, function(item) {
            // costly business with item
        });
        // defer processing to give the browser some time
        setTimeout(next, 50);
    } else {
        // post-processing
    }
}
// pre-processing (eg. disabling the form submit button)
next(); // start the loop
up().down() did the magic. I have replaced each and every usage of Ext.getCmp('id') with this.up('tabpanel').down('tabName #itemIdOfField'), where the actions are hide(), show(), and setValues(). Booooha... it's fast and NO issues.
Try checking that deferredRender is true. This should render only the active tab.
You can also try a different hideMode; hideMode: 'offsets' in particular sounds promising.
Quote from the sencha API:
hideMode: 'offsets' is often the solution to layout issues in IE specifically when hiding/showing things
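A minimal sketch of both settings together (only the two config names come from the Ext JS 4.2 docs; the tab titles and the rest of the config are placeholders):

Ext.create('Ext.tab.Panel', {
    deferredRender: true, // render only the active tab up front
    items: [
        { title: 'Personal', hideMode: 'offsets' /* cheap show/hide, helps in IE */ },
        // ... remaining tabs, including the summary tab
    ]
});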
As I wrote in the comment, go through this performance guide: http://docs.sencha.com/extjs/4.2.2/#!/guide/performance
In your case, this will be very interesting for you:
Ext.suspendLayouts();
// batch of updates
// show() / hide() elements
Ext.resumeLayouts(true);

Sencha Touch Ext.navigation.View pop to root

I have an Ext.navigation.View in which I have pushed a few views. Certain user interactions require that I go directly back to the top level of the navigation view -- the equivalent of popToRootViewControllerAnimated: on a UINavigationController in iOS.
I have tried various things like:
while(navigationView.getItems().getCount() > 1)
navigationView.pop();
and
while(navigationView.canPop())
navigationView.pop();
Neither works. The first example seems to put me into an infinite loop, which isn't too surprising. The second example only seems to pop one view off.
So the question: What is the proper way to pop to the root view in an Ext.navigation.View in Sencha Touch (version 2 developer preview)?
There have been a number of interim methods for achieving this.
The one I was using was to pop a number higher than the number of levels you would ever have, e.g.
navigationView.pop(10);
That worked fine, but I was never happy with it. However, I see they have now introduced a reset method, which you can call thus...
navigationView.reset();
Internally within the Sencha source code (see below) you can see that it does a similar job to what #Mithralas said, but is just easier to write.
// From the Sencha source code
this.pop(this.getInnerItems().length);

popToRoot: function() {
    this.pop(this.getItems().length - 1);
}
Ensure to use the NavigationView.reset() method. To make it clear, if your main navigation view is Main, you'd do something like this in the controller:
this.getMain().reset();
The solution turned out to be extending the navigation view with the following:
popToRoot: function(destroy) {
    var navBar = this.getNavigationBar(),
        stackLn = this.stack.length,
        stackRm;

    // just return if we're at root
    if (stackLn <= 1) return;

    // just return if we're already animating
    if (navBar && navBar.animating) return;

    // splice the stack to get rid of items between top and root
    stackRm = this.stack.splice(1, stackLn - 2);

    // remove views that were removed from the stack if required
    if (destroy) {
        stackRm.forEach(function(val) {
            this.remove(val, true);
        }, this); // pass `this` so remove() is called on the view
    }

    // clear out back button stack
    navBar.backButtonStack = [];

    // now we can do a normal pop
    this.pop();
}

Loading large amount of data into memory - most efficient way to do this?

I have a web-based documentation searching/viewing system that I'm developing for a client. Part of this system is a search system that allows the client to search for a term[s] contained in the documentation. I've got the necessary search data files created, but there's a lot of data that needs to be loaded, and it takes anywhere from 8-20 seconds to load all the data. The data is broken into 40-100 files, depending on what documentation needs to be searched. Each file is anywhere from 40-350kb.
Also, this application must be able to run on the local file system, as well as through a webserver.
When the webpage loads up, I can generate a list of what search data files I need load. This entire list must be loaded before the webpage can be considered functional.
With that preface out of the way, let's look at how I'm doing it now.
After I know that the entire webpage is loaded, I call a loadData() function
function loadData() {
    var d = new Date();
    var curr_min = d.getMinutes();
    var curr_sec = d.getSeconds();
    var curr_mil = d.getMilliseconds();
    console.log("test.js started background loading, time is: " + curr_min + ":" + curr_sec + ":" + curr_mil);
    recursiveCall();
}

function recursiveCall() {
    if (file_array.length > 0) {
        var string = file_array.pop();
        setTimeout(function() { $.getScript(string, recursiveCall); }, 1);
    } else {
        var d = new Date();
        var curr_min = d.getMinutes();
        var curr_sec = d.getSeconds();
        var curr_mil = d.getMilliseconds();
        console.log("test.js stopped background loading, time is: " + curr_min + ":" + curr_sec + ":" + curr_mil);
    }
}
What this does is process an array of files sequentially, taking a 1 ms break between files. This helps prevent the browser from being completely locked up during the loading process, but the browser still tends to get bogged down by loading the data. Each of the files that I'm loading looks like this:
AddToBookData(0,[0,1,2,3,4,5,6,7,8]);
AddToBookData(1,[0,1,2,3,4,5,6,7,8]);
AddToBookData(2,[0,1,2,3,4,5,6,7,8]);
Where each line is a function call that is adding data to an array. The "AddToBookData" function simply does the following:
function AddToBookData(index1, value1) {
    BookData[BookIndex].push([index1, value1]);
}
This is the existing system. After loading all the data, "AddToBookData" can get called 100,000+ times.
I figured that was pretty inefficient, so I wrote a script to take the test.js file which contains all the function calls above, and processed it to change it into a giant array which is equal to the data structure that BookData is creating. Instead of making all the function calls that the old system did, I simply do the following:
var test_array = [..........(data structure I need).......];
BookData[BookIndex] = test_array;
I was expecting to see a performance increase because I was removing all the function calls above, but in fact this method takes slightly more time to create the exact same data structure. I should note that "test_array" holds slightly over 90,000 elements in my real-world test.
It seems that both methods of loading data have roughly the same CPU utilization. I was surprised to find this, since I was expecting the second method to require little CPU time, since the data structure is being created before hand.
Please advise?
Looks like there are two basic areas for optimising the data loading, that can be considered and tackled separately:
Downloading the data from the server. Rather than one large file you should gain wins from parallel loads of multiple smaller files. Experiment with number of simultaneous loads, bear in mind browser limits and diminishing returns of having too many parallel connections. See my parallel vs sequential experiments on jsfiddle but bear in mind that the results will vary due to the vagaries of pulling the test data from github - you're best off testing with your own data under more tightly controlled conditions.
Building your data structure as efficiently as possible. Your result looks like a multi-dimensional array; this interesting article on JavaScript array performance may give you some ideas for experimentation in this area.
But I'm not sure how far you'll really be able to go with optimising the data loading alone. To solve the actual problem with your application (the browser locking up for too long), have you considered options such as the following?
Using Web Workers
Web Workers might not be supported by all your target browsers, but should prevent the main browser thread from locking up while it processes the data.
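A minimal sketch of the worker approach, assuming the data files could be served as plain text rather than as script files (the file names, the one-JSON-row-per-line format, and the variable names are all hypothetical):

// main.js - fetch raw text and let a worker build the structure
var worker = new Worker('parse-worker.js');
worker.onmessage = function(e) {
    BookData[BookIndex] = e.data; // the finished structure arrives here
};
$.get('searchdata/book1.txt', function(text) {
    worker.postMessage(text);
});

// parse-worker.js - runs off the main thread, so the UI stays responsive
onmessage = function(e) {
    var lines = e.data.split('\n'), result = [];
    for (var i = 0; i < lines.length; i++) {
        if (lines[i]) result.push(JSON.parse(lines[i])); // one JSON row per line
    }
    postMessage(result);
};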
For browsers without workers, you could consider increasing the setTimeout interval slightly to give the browser time to service the user as well as your JS. This will make things actually slightly slower but may increase user happiness when combined with the next point.
Providing feedback of progress
For both worker-capable and worker-deficient browsers, take some time to update the DOM with a progress bar. You know how many files you have left to load so progress should be fairly consistent and although things may actually be slightly slower, users will feel better if they get the feedback and don't think the browser has locked up on them.
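For example, since the original length of file_array is known before loading starts, recursiveCall() from the question could update a progress element on each pass (the element id is hypothetical):

var totalFiles = file_array.length; // capture before pop() shrinks the array

function updateProgress() {
    var done = totalFiles - file_array.length;
    document.getElementById('load-progress').textContent =
        'Loading search data: ' + done + ' of ' + totalFiles + ' files';
}
// call updateProgress() at the top of recursiveCall()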
Lazy Loading
As suggested by jira in his comment. If Google Instant can search the entire web as we type, is it really not possible to have the server return a file with all locations of the search keyword within the current book? This file should be much smaller and faster to load than the locations of all words within the book, which is what I assume you are currently trying to get loaded as quickly as you can?
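A hedged sketch of what that lazy request might look like (the endpoint name, parameters, and response shape are entirely hypothetical):

// ask the server only for the current term's locations in the current book
$.getJSON('search-locations', { book: bookId, term: searchTerm }, function(locations) {
    // e.g. locations = [[pageIndex, wordOffset], ...] - far smaller than the full index
    showResults(locations); // hypothetical render function
});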
I tested three methods of loading the same 9,000,000-point dataset into Firefox 3.6.4:
1) Stephen's $.getJSON method
2) My function-based push method
3) My pre-processed array-appending method
I ran my tests two ways. In the first iteration of testing, I imported 100 files each containing 10,000 rows of data, each row containing 9 data elements: [0,1,2,3,4,5,6,7,8].
In the second iteration I tried combining files, so that I was importing 1 file with 9 million data points.
This was a lot larger than the dataset I'll be using, but it helps demonstrate the speed of the various import methods.
             Separate files:   Combined file:
JSON:        34 seconds        34 seconds
FUNC-BASED:  17.5 seconds      24 seconds
ARRAY-BASED: 23 seconds        46 seconds
Interesting results, to say the least. I closed out the browser after loading each webpage, and ran the tests 4 times each to minimize the effect of network traffic/variation. (ran across a network, using a file server). The number you see is the average, although the individual runs differed by only a second or two at most.
Instead of using $.getScript to load JavaScript files containing function calls, consider using $.getJSON. This may boost performance. The files would now look like this:
{
    "key" : 0,
    "values" : [0,1,2,3,4,5,6,7,8]
}
After receiving the JSON response, you could then call AddToBookData on it, like this:
function AddToBookData(json) {
    BookData[BookIndex].push([json.key, json.values]);
}
If your files have multiple sets of calls to AddToBookData, you could structure them like this:
[
    {
        "key" : 0,
        "values" : [0,1,2,3,4,5,6,7,8]
    },
    {
        "key" : 1,
        "values" : [0,1,2,3,4,5,6,7,8]
    },
    {
        "key" : 2,
        "values" : [0,1,2,3,4,5,6,7,8]
    }
]
And then change the AddToBookData function to compensate for the new structure:
function AddToBookData(json) {
    $.each(json, function(index, data) {
        BookData[BookIndex].push([data.key, data.values]);
    });
}
Addendum
I suspect that regardless what method you use to transport the data from the files to the BookData array, the true bottleneck is in the sheer number of requests. Must the files be fragmented into 40-100? If you change to JSON format, you could load a single file that looks like this:
{
    "file1" : [
        {
            "key" : 0,
            "values" : [0,1,2,3,4,5,6,7,8]
        },
        // all the rest...
    ],
    "file2" : [
        {
            "key" : 1,
            "values" : [0,1,2,3,4,5,6,7,8]
        },
        // yadda yadda
    ]
}
Then you could do one request, load all the data you need, and move on... Although the browser may initially lock up (although, maybe not), it would probably be MUCH faster this way.
Here is a nice JSON tutorial, if you're not familiar: http://www.webmonkey.com/2010/02/get_started_with_json/
Fetch all the data as a string, and use split(). This is the fastest way to build an array in Javascript.
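As a hedged illustration of the split() approach (this assumes the server can send one big delimited string instead of script files; the delimiter and the dataUrl variable are arbitrary choices):

// response text like "0,1,2,3,4,5,6,7,8|0,1,2,3,4,5,6,7,8|..."
$.get(dataUrl, function(text) {
    var rows = text.split('|');
    for (var i = 0; i < rows.length; i++) {
        BookData[BookIndex].push(rows[i].split(',').map(Number));
    }
});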
There's an excellent article about a very similar problem, from the people who built the Flickr search: http://code.flickr.com/blog/2009/03/18/building-fast-client-side-searches/
