We have built a web client where users can drop pushpin markers on a map, and multiple users can comment on each of these markers. For the map we use Leaflet and (what matters more here) Knockout for the view model of the pushpins and their comments.
So the data model isn't too complicated: each pushpin has a lat/lon, a title, some metadata (who created it and when) and an array of comments (each with username, timestamp and text).
There are a couple of computeds in the view model (firstComment, lastComment, etc.) that Knockout has to keep up to date, and I suspect these slow it down a lot.
Every time the app starts, we download the whole set of pushpins (over 600 right now) as JSON and initialize the Knockout view model with it. The JSON is already about 1.2 MB, which takes about 6 seconds to download. Initializing the Knockout view model then takes over 20 seconds. I created a splash screen with an animated GIF so the user doesn't think the app is broken, but as the number of pushpins grows, this behaviour only gets worse.
JavaScript also needs a lot of memory to build up the model. I think memory is needed for the Knockout model, and also for the markers of the Leaflet layer, which is its own model associated with my Knockout objects.
In Firefox, memory rises to about 700 MB when I open my web app. In IE9 it's over 1 GB (!). Also, while the Knockout model is being built up (mainly creating Knockout observables and pushing them to an observable array), the browsers stop responding. In IE the website (and my splash screen) isn't rendered at all until Knockout has done its job. In Firefox the website gets rendered first, then it freezes until the model is built up, then it comes back. Chrome behaves the same as IE.
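For reference, the initialization looks roughly like this (a simplified sketch, not our real code; all names are illustrative):

function PushpinViewModel(data) {
    this.lat = ko.observable(data.lat);
    this.lon = ko.observable(data.lon);
    this.title = ko.observable(data.title);
    this.comments = ko.observableArray(data.comments);
    this.firstComment = ko.computed(function () {
        return this.comments()[0];
    }, this);
}

var pushpins = ko.observableArray([]);
downloadedJson.forEach(function (data) {
    // One push per pushpin; every push notifies the array's subscribers again.
    pushpins.push(new PushpinViewModel(data));
});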
Another issue is that memory does not get freed except when I close the browser. With every page reload another gigabyte is allocated, so I can easily fill up every byte of my laptop's 8 GB by refreshing my site eight times ... :(
I have already thought about extending our REST server API with some kind of pagination, and lazy-loading as much of the data as late as possible. But first I would like to know what Knockout itself offers to get a model with a lot of data (in fact it isn't much at the moment, but it could grow) up and running without freezing the browser for several seconds.
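To make the lazy-loading idea concrete, this is roughly what I have in mind (a sketch only; it reuses PushpinViewModel and pushpins from the sketch above, assumes jQuery for the request, and the URL and page size are made up):

function loadPage(page) {
    $.getJSON("/api/pushpins?page=" + page + "&size=100", function (items) {
        var vms = items.map(function (d) { return new PushpinViewModel(d); });
        // observableArray.push accepts multiple items and fires one notification.
        pushpins.push.apply(pushpins, vms);
        if (items.length === 100) {
            // Yield to the browser before fetching the next page.
            setTimeout(function () { loadPage(page + 1); }, 0);
        }
    });
}
loadPage(0);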
How do you handle a large number of Knockout objects in the browser's memory, and how do you lazy-bind them?
I have built several single-page web applications using CreateJS.
The basic principle is to preload images and sounds between the different screens in the application, and then release those assets from memory when the screen changes.
To properly destroy the assets, I currently employ every single method provided in the documentation, like:
assetLoader.close();
assetLoader.removeAll();
assetLoader.destroy();
element.removeAllEventListeners();
createjs.Sound.removeAllSounds();
createjs.Ticker.removeEventListener("tick", stage);
stage.enableDOMEvents(false);
and I also assign null to every object created during initialisation.
The end result looks pretty good: the JavaScript memory occupied by the application remains constant all the time (30-60 MB), no matter how much I navigate between the screens of the app.
However, even though I have zero memory leaks on the JavaScript side, the total RAM occupied by the web application keeps increasing all the time, without ever decreasing.
This causes all my applications to eventually reach the memory limit on mobile devices, and then crash.
Basically, I do not know what exactly is kept in RAM and never released. I strongly suspect that the image and sound assets preloaded in the page are never removed by the browser, even though the JS variables associated with them have long been released.
There are also no DOM elements that I can access or manually remove.
Chrome dev tools are also insufficient in this case; the heap snapshots and allocation timelines I record always seem to cover the JavaScript memory only.
In that case, how can I release the memory occupied by those image and sound assets, if the methods provided in the documentation are not enough?
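For example, is something along these lines the right direction? (A sketch only, not working code: getResult returns the loaded img tag for image loads, and blanking its src is a workaround I have seen suggested for getting decoded bitmaps discarded, but I have not verified it.)

var BLANK_GIF = "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7";

function releaseScreenImages(assetLoader, imageIds) {
    imageIds.forEach(function (id) {
        var img = assetLoader.getResult(id); // HTMLImageElement for image loads
        if (img && img.tagName === "IMG") {
            img.src = BLANK_GIF; // hint to the browser to drop the decoded bitmap
        }
    });
    assetLoader.removeAll();
    assetLoader.destroy();
}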
EDIT: I know reloading the page will clear that RAM, but since these are single-page web apps, I would very much like to avoid resorting to that.
I'm going to build an SPA with Angular. The app will be composed of three tabs, and every tab will hold a large amount of data (roughly 100-200 rows plus various text fields, drop-downs, etc.).
I'm facing objections from my colleagues to building this as a real SPA; they would like to separate it into three completely independent Angular applications living in the same ASP.NET MVC website.
My question is: will holding that much data on the client side cause browser or rendering issues? Are they right to think this is dangerous?
I'd check how big a memory hit this is with a real data set, and whether it would tax the resources available to most mobile devices.
My first thought: 100 rows of data with 100 columns, each holding a 100-byte string, is 100 × 100 × 100 bytes, i.e. ~1 MB of RAM per tab. Even my old phone would have enough RAM to handle that.
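A quick way to sanity-check the estimate is Chrome's non-standard performance.memory API (loadTabData is a placeholder for whatever materializes one tab's data set):

function heapMB() {
    return performance.memory
        ? (performance.memory.usedJSHeapSize / 1048576).toFixed(1) + " MB"
        : "unavailable";
}

console.log("before:", heapMB());
var rows = loadTabData(); // placeholder: build one tab's data set
console.log("after:", heapMB());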
"Dangerous"? What can they do to manipulate that data in harmful ways?
The ~1 MB estimate is based on raw data; if the page uses a lot of widgets that scale with the data, it can quickly become a nightmare unless the data is loaded on demand.
We have built our service dailymus.es to be mobile friendly, but we are hitting a range of performance issues when accessing it on a mobile phone.
Specifically, it crashes after a few "pages" and when we have a lot of content on the page.
I suspect that we have too many event handlers and/or memory leaks. What methods do you use to eliminate these problems with Backbone?
I suggest you test your site using Google Chrome's developer tools. Use the Profiles panel to examine the state of the heap.
Most leaks of Backbone models/views are due to not detaching the DOM events from views and the event bindings (on) from models.
Make sure to override the remove method of your Backbone view, and make sure you .off() everything you set with .on(). Don't forget to call remove on sub-views as well.
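A minimal sketch of that pattern (assuming Backbone >= 0.9.9, where the built-in remove already detaches the DOM events and calls stopListening; the names are illustrative):

var ItemView = Backbone.View.extend({
    initialize: function () {
        // listenTo (instead of this.model.on) lets stopListening unbind for us.
        this.listenTo(this.model, "change", this.render);
        this.subViews = [];
    },
    remove: function () {
        // Tear down sub-views first, then let Backbone do the rest.
        _.invoke(this.subViews, "remove");
        return Backbone.View.prototype.remove.call(this);
    }
});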
To find leaks:
Take a snapshot
Run your code to create a view and then remove it
Take another snapshot
Compare the snapshots to find the newly created objects that weren't released.
More about the Google Chrome Heap Profiler
Backbone wastes a lot of memory, which is the hardest thing for mobile. There are a lot of techniques that help: pooling DOM elements, updating elements in place instead of recreating templates, putting off image loading until the last minute, and holding any updates until right before the paint cycle.
Mobile web can be performant if the memory is managed properly. PerfView is a good example; it can get 50 FPS on a long scrolling list on an iPad mini. https://github.com/puppybits/BackboneJS-PerfView
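For the last technique, a minimal sketch of batching DOM writes into the frame right before paint (listEl and model are placeholders; assumes requestAnimationFrame is available):

var pendingWrites = [];
var flushScheduled = false;

function queueWrite(fn) {
    pendingWrites.push(fn);
    if (!flushScheduled) {
        flushScheduled = true;
        requestAnimationFrame(function () {
            pendingWrites.forEach(function (write) { write(); });
            pendingWrites.length = 0;
            flushScheduled = false;
        });
    }
}

// Usage: update an existing element instead of re-rendering its template.
queueWrite(function () {
    listEl.children[0].textContent = model.get("title");
});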
I'm building my first site using this framework; I'm remaking a website I had done in PHP + MySQL and wish to know something about performance... On my website, I have two kinds of content:
Blog posts (for 2 sections of the site) - these will tend to add up to thousands of records one day, and are updated more often
Static (sort of) data: information I keep in the database, like each site section's data (title, meta tags, header image URL, fixed HTML content, JavaScript and CSS filenames to include in that section), that is rarely updated and is very small in size.
While I was learning the basics of Node.js, I started thinking of a way to improve the performance of the website in a way I couldn't with PHP. So, what I'm doing is this:
When I run the app, the static content is all loaded into memory. I have a "model" object for each content type that stores the data in an array and has a method to refresh that data: when the administrator updates something, I call refresh() to fetch the new data from the database into that array. This way, for every page load, the app queries the object in memory directly instead of querying the database.
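A sketch of the pattern, to make it concrete (names are illustrative; it assumes a driver with a db.query(sql, callback) style API):

function SectionModel(db) {
    this.db = db;
    this.rows = []; // in-memory copy, read synchronously on every page load
}

SectionModel.prototype.refresh = function (done) {
    var self = this;
    this.db.query("SELECT * FROM sections", function (err, rows) {
        if (err) { return done(err); }
        self.rows = rows; // swap in the fresh copy in one assignment
        done(null);
    });
};

// Page handlers read model.rows directly; admin code calls model.refresh()
// after every write.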
What I would like to know is whether there should be any increase in performance from working with objects directly in memory, or whether constant queries to the database would work just as well or even better.
Any documentation supporting your answer will be much appreciated.
Thanks
In terms of the general database performance, MongoDB will keep your working set in memory - that's its basic method of operation.
So, as long as there is no memory contention causing the data to get swapped out, and it is not too large to fit into your physical RAM, queries to the database should be extremely fast (in the sub-millisecond range once your data set is paged in initially).
Of course, if the database is on a different host then you have network latency to think about and such, but theoretically you can treat them as the same until you have a reason to question it.
I don't think there will be any performance difference. First, this static data is probably not very big (up to 100 records?), and querying the DB for it is not a big deal. Second (and more important), most DB engines (including MongoDB) have caching systems built in (although I'm not sure how they work in detail). Third, holding query results in memory does not scale well (for big websites) unless you use a storage engine like Redis. That's my opinion, though, and I'm not an expert.
I am building an ASP.NET MVC 2 application using jqGrid 3.8.2 (a JavaScript grid component) to present some data I have stored in a DB. On my page I also have a Google map with a tiled overlay.
I have noticed significantly worse performance in the loading times of the map and the tile overlay in this application than in other applications that do not use jqGrid. A slow-down would be natural if both jqGrid and the map were requesting data at the same time, but when I am zooming/panning the map there are no server requests run by the grid.
After doing some debugging in my code (adding/removing functionality bit by bit), I boiled it down to this: if I configure my jqGrid to use "datatype: local", the map's performance comes back! Once I set "datatype: json" and "url: [myAspNetMvcController]", the loading of the map tiles takes a big hit.
My question is: does anyone know why this happens? It seems that jqGrid is doing something continuously in the background even though it has not been asked to fetch any new data. I have breakpoints on the server, so I know that it does not fire requests. As I see it, it must be some jqGrid "magic" that causes the other JavaScript components on the page to run slowly, and hence delays the requests.
It is very important for me to get to the bottom of this, and I really do not want to have to scrap jqGrid, since I really love it.
I will be thankful for any feedback that can point me in the right direction!
Found the answer, and it turned out that the bad guy was not jqGrid but the server-side session store! I used Session as a cache for the grid data, because I needed the filtered data for purposes other than the grid and wanted to avoid redundant trips to the DB. Once I wrote something to the Session object, the server took a hit and started handling all incoming requests more slowly (often by several seconds!). I have since learned that using the Session object for caching is not advised in most cases, but I still don't know why it would cause nasty side effects like this. If someone would enlighten me, that would be great! It cannot be an issue of taking up a lot of RAM on the server, because the performance dropped just by writing
Session["test"] = "test";
Since I actually needed the data to be cached with session scope, I solved the problem by using HttpContext.Cache with a session-specific key instead.