I am building an ASP.NET MVC 2 application using jqGrid 3.8.2 (a JavaScript grid component) to present some data I have stored in a DB. On my page I also have a Google map with a tiled overlay.
I have noticed significantly worse performance in the loading times of the map and the tile overlay in this application compared with other applications that do not use jqGrid. A slow-down would be natural if both jqGrid and the map were requesting data at the same time, but when I am zooming/panning the map the grid runs no server requests.
After doing some debugging in my code (adding/removing functionality bit by bit) I boiled it down to this: if I configure my jqGrid to use "datatype: local", the map's performance comes back!
As soon as I set "datatype: json" and "url: [myAspNetMvcController]", the loading of the map tiles takes a big hit.
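For reference, the two setups I'm comparing look roughly like this (the column model and controller URL below are simplified placeholders, not my actual configuration):

    // Simplified placeholders - only one of these initializations is used at a time.

    // Fast case: the grid works entirely from local data.
    $('#grid').jqGrid({
        datatype: 'local',
        colModel: [{ name: 'Name' }, { name: 'Value' }],
        rowNum: 20
    });

    // Slow case: the same grid bound to the MVC controller.
    $('#grid').jqGrid({
        datatype: 'json',
        url: '/MyController/GridData', // placeholder for [myAspNetMvcController]
        colModel: [{ name: 'Name' }, { name: 'Value' }],
        rowNum: 20
    });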
My question is: does anyone know why this happens? It seems that jqGrid is doing work continuously in the background even though it has not been asked to fetch any new data. I have breakpoints on the server, so I know that it does not fire requests. As I see it, it must be some jqGrid "magic" that causes the other JavaScript components on the page to run slowly, and hence causes the map requests to be delayed.
It is very important for me to get to the bottom of this, and I really do not want to have to scrap jqGrid, since I really love it.
I will be thankful for any feedback that can point me in the right direction!
Found the answer, and it turned out that the bad guy was not jqGrid but the server-side Session store! I used Session as a cache for the grid data, because I needed the filtered data for other purposes than the grid and wanted to avoid redundant trips to the DB. Once I wrote something to the Session object, the server took a hit and started handling all incoming requests more slowly (often by several seconds!). I have since learned that using the Session object for caching is not advised in most cases, but I still don't know why it would cause nasty side effects like this. If someone would enlighten me, that would be great! It cannot be an issue of taking up a lot of RAM on the server, because the performance dropped just by writing
Session["test"] = "test";
Since I actually needed the data to be cached per session, I solved the problem by using HttpContext.Cache with a session-specific key instead.
I have been developing a PHP application for quite a while. The (basic) idea is as follows: users can build web pages using blocks. These blocks can contain images, text, etc., and each block has its own options. The blocks are modelled with Domain Driven Design in PHP.
I've built the application to use a PHP-based Controller that handles the requests from a jQuery/JavaScript front end. Each time the user edits an option, it is sent to this Controller, which unserialises a collection of blocks (PHP objects) from Redis and/or the PHP session, sets the attributes of the blocks that were edited, or adds/removes one of the blocks. This is to enforce the domain logic.
This was fine while developing for myself; I never kept race conditions and such in mind. While moving forward with the product, I noticed that people lose data. I'll explain what happens:
User edits an option of a block
presses save
A request is made to the controller which,
unserialises the collection and
sets the blocks based on their uuid
puts the blocks back in the collection and
serialises the collection again.
There are scenarios where two concurrent requests might be created, and one of them will overwrite the edits made by the other.
I know I need to rewrite this part of the application. The question is what the best approach is. I could:
Implement some JavaScript library, which would take a lot of work because it would require rewriting that entire part of the application. I also do not have much experience implementing JavaScript-based solutions, but I do not mind stepping into something new. I do want JavaScript testing to prevent future problems from occurring and to enable cross-browser testing.
Apply Redis/session locking so that the controller only processes a single request at a time, preventing concurrent requests from overwriting the data set by the previous request. This lowers the chance of concurrent requests causing data loss, but does not eliminate it. People with a really slow internet connection might see their connections drop just when they are producing a lot of concurrent requests.
I'm curious what other approaches I might be missing, or if one of the two I mentioned above will suffice.
As far as I understand your problem, what you may want to implement is optimistic locking.
A simple way to implement it, is to version your aggregate.
Every time someone edits your object, increment its version.
When you POST your edited blocks, you send back the version on which you are trying to apply your changes.
Then, when getting your object back from your persistence storage, compare the versions and ensure you are actually working on an up-to-date object.
If it is up to date, save it; if it is not, reject the modification, notify the user, reload the object, and take the appropriate action (that depends on your needs).
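As a rough illustration from the jQuery side (the /blocks/save endpoint, the 409 convention, and the reloadBlock helper are assumptions made for the sketch, not part of your application), the client would send along the version it last loaded and react to a conflict:

    // Hypothetical save call; endpoint, status code, and helper names are made up.
    function saveBlock(block) {
        $.ajax({
            url: '/blocks/save',
            type: 'POST',
            data: {
                uuid: block.uuid,
                version: block.version,           // the version the client last loaded
                options: block.options
            },
            success: function (response) {
                block.version = response.version; // server increments the version on success
            },
            error: function (xhr) {
                if (xhr.status === 409) {
                    // Another request saved first: notify the user and reload the block.
                    alert('This block was changed by another request and will be reloaded.');
                    reloadBlock(block.uuid);      // hypothetical helper that re-fetches the block
                }
            }
        });
    }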
I need to get about 300,000 final lines (roughly 30 MB) from thousands of online JSON files.
Being a beginner in coding, I prefer to stick to JS: $.getJSON the data, cut it, append the interesting parts to my <body>, and loop over the thousands of online JSON files. But I wonder:
Can my web browser handle 300,000 $.getJSON queries and the resulting 30-50 MB webpage without crashing?
Is it possible to use JS to write the results to a file, so the script's work is constantly saved?
I expect my script to run for about 24 hours. The numbers are estimates.
Edit: I don't have server side knowledge, just JS.
A few things aren't right about your approach for this:
If what you are doing is fetching (and processing) data from another source then displaying it for a visitor, processing of this scale should be done separately and beforehand in a background process. Web browsers should not be used as data processors on the scale you're talking about.
If you try to display a 30-50MB webpage, your user is going to experience lots of frustrating issues - browser crashes, lack of responsiveness, timeouts, long load times, and so on. If you expect any users on older IE browsers, they might as well give up without even trying.
My recommendation is to pull this task out and do it using your backend infrastructure, saving the results in a database which can then be searched, filtered, and accessed by your user. Some options worth looking into:
Cron
Cron will allow you to run a task on a repeated and regular basis, such as daily or hourly. Use this if you want to continually update your dataset.
Worker (Heroku)
If you're running on Heroku, take the task out of the web dyno and use a separate worker so it does not clog up the existing traffic on your app.
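Since the asker says they only know JS, the background job does not have to mean a new language either. A minimal Node.js sketch under assumed names (urls.json, results.ndjson, and the node-fetch package are illustrations, not requirements) could fetch the sources one at a time and append the interesting lines to a file so progress is never lost:

    // Hypothetical Node.js batch job; file names and the "lines" field are made up.
    const fs = require('fs');
    const fetch = require('node-fetch'); // assumes node-fetch is installed

    const urls = JSON.parse(fs.readFileSync('urls.json', 'utf8'));

    async function run() {
        for (const url of urls) {
            try {
                const data = await (await fetch(url)).json();
                const interesting = data.lines || [];              // whatever the "interesting parts" are
                fs.appendFileSync('results.ndjson',
                    interesting.map(line => JSON.stringify(line)).join('\n') + '\n');
            } catch (err) {
                console.error('Failed on', url, err.message);      // keep going; progress is already on disk
            }
        }
    }

    run();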
I'm currently building a portfolio website for an architect that has a whole lot of images on its pages.
Navigation is done with history.js (=AJAX). In order to save loading time and make the whole thing more "snappy", I wrote a script that crawls the page body for links to other pages and fetches these automatically in the background. So far, it works like a charm.
It basically keeps a queue array that holds all the links. A setTimeout() function works through them and fetches each page using jQuery's $.ajax(). The resulting HTML is stored in a JavaScript object.
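For reference, a stripped-down sketch of what that script does (the selector, delay, and variable names are simplified for illustration):

    // Simplified version of the prefetcher described above.
    var queue = $('body a[href]').map(function () { return this.href; }).get();
    var pageCache = {}; // fetched HTML, keyed by URL

    function fetchNext() {
        if (queue.length === 0) { return; }
        var url = queue.shift();
        $.ajax({ url: url, dataType: 'html' })
            .done(function (html) { pageCache[url] = html; })
            .always(function () { setTimeout(fetchNext, 500); }); // work through the queue one by one
    }

    setTimeout(fetchNext, 500);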
Now, here's my question:
What are possible problems that might occur when using this on different machines/browsers/operating systems?
I'm thinking about:
max. JavaScript object/variable size (the fetched HTML is stored in a JavaScript object)
possible performance problems
max. number of asynchronous requests?
… anything you can think of?
Thanks a lot in advance,
a hobby programmer
Although it might be a good idea to cache the whole website on the client side, there are a lot of things that can cause issues:
Memory
Unnecessary load on the webserver
Loading unneeded pages into memory
Some users have a data limit on their internet connection, so loading the entire website is not smart in those cases
Once the user navigates away or refreshes, the entire "cache" is gone
What I would do is first try to optimize the server side.
Add caching mechanisms all the way from the database to the user; the "Expires" header can really help you.
And if that doesn't help, I would then think about caching some pages (which ones is for you to decide) in the offline cache; see HTML5 Offline Features.
That way you are safe even on page reload, you keep memory usage to a minimum, and you only load what you need.
PS: Don't try to reinvent stuff that the browser already has :P
You should queue the async requests, and only launch one at a time.
And since you're storing everything in variables, at some point you (the browser) may consume too much memory, and the whole thing can become very slow. I suggest you limit the size of your cache to a certain number of pages.
You can also try not storing the fetched content at all: just fetch it and throw it away. The browser will still cache the fetched pages and images in its internal storage, so subsequent loads will be much faster (provided the Ajax library does not forcibly disable the cache, e.g. by using POST).
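A hedged sketch of how those two suggestions might look in code (the cap of 20 pages and all names are assumptions made for illustration):

    // Option A: keep at most MAX_PAGES pages in memory, evicting the oldest first.
    var MAX_PAGES = 20; // arbitrary cap
    var pageCache = {};
    var cacheOrder = []; // URLs in the order they were cached

    function storePage(url, html) {
        pageCache[url] = html;
        cacheOrder.push(url);
        if (cacheOrder.length > MAX_PAGES) {
            delete pageCache[cacheOrder.shift()]; // drop the oldest page
        }
    }

    // Option B: keep nothing yourself and just warm the browser's own cache.
    function warmBrowserCache(url) {
        $.ajax({ url: url, type: 'GET', cache: true, dataType: 'html' });
        // the response is intentionally discarded
    }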
I'm rendering a news feed.
I'm planning to use Backbone.js for my JavaScript stuff because I'm sick of doing manual DOM binds with jQuery.
So right now I'm looking at 2 options.
When the user loads the page, the "news feed" container is blank. But the page triggers a javascript which renders the items of the news feed onto the screen. This would tie into Backbone's models and collections, etc.
When the user loads the page, the "news feed" is rendered by the server. Even if javascript was turned off, the items would still show because the server rendered it via a templating engine.
I want to use Backbone.js to keep my JavaScript clean. So I should pick #1, right? But #1 is much more complicated than #2.
By the way, the reason I'm asking this question is because I don't want to use the routing feature of Backbone.js. I would load each page individually, and use Backbone for the individual items of the page. In other words, I'm using Backbone.js halfway.
If I were to use the routing feature of Backbone.js, then the obvious answer would be #1, right? But I'm afraid it would take too much time to build the route system, and time should be balanced into my equation as well.
I'm sorry if this question is confusing: I just want to know the best practice for using Backbone.js while saving time as well.
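For concreteness, here is a minimal sketch of what option #1 could look like (the /feed endpoint, the #news-feed container, the field names, and a reasonably recent Backbone version are all assumptions, not part of the question):

    // Hypothetical Backbone setup for option #1; endpoint and field names are made up.
    var FeedItem = Backbone.Model.extend({});

    var Feed = Backbone.Collection.extend({
        model: FeedItem,
        url: '/feed' // the server returns JSON items, not rendered HTML
    });

    var FeedView = Backbone.View.extend({
        el: '#news-feed', // the blank container the server rendered
        initialize: function () {
            this.listenTo(this.collection, 'reset', this.render);
        },
        render: function () {
            var html = this.collection.map(function (item) {
                return '<li>' + _.escape(item.get('title')) + '</li>';
            }).join('');
            this.$el.html('<ul>' + html + '</ul>');
            return this;
        }
    });

    var feed = new Feed();
    new FeedView({ collection: feed });
    feed.fetch({ reset: true });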
There are advantages and disadvantages to both, so I would say this: choose the option that is best for you, according to your requirements.
I don't know Backbone.js, so I'm going to keep my answer to client- versus server-side rendering.
Client-side Rendering
This approach allows you to render your structure quickly on the server-side, then let the user's JavaScript pick up the actual content.
Pros:
Quicker perceived user experience: if there's enough static content on the initial render, then the user gets their page back (or at least the beginning of it) quicker and they won't be bothered about the dynamic content, because in all likelihood that will render reasonably quickly too.
Better control of caching: By requiring that the browser makes multiple requests, you can set up your server to use different caching headers for each URL, depending on your requirements. In this way, you could allow users to cache the initial page render, but require that a user fetch dynamic (changing) content every time.
Cons:
User must have JavaScript enabled: This is an obvious one and I shouldn't even need to mention it, but you are cutting out a (very small) portion of your user base if you don't provide a graceful alternative to your JS-heavy site.
Complexity: This one is a little subjective, but in some ways it's just simpler to have everything in your server-side language and not require so much back-and-forth. Of course, it can go both ways.
Slow post-processing: This depends on the browser, but the fact is that if a lot of DOM manipulation or other post-processing needs to occur after retrieving the dynamic content, it might be faster to let the server do it if the server is underutilized. Most browsers are good at basic DOM manipulation, but if you have to do JSON parsing, sorting, arithmetic, etc., some of that might be faster on the server.
Server-side Rendering
This approach allows the user to receive everything at once and also caters to browsers that don't have good JavaScript support, but it also means everything takes a bit longer before the browser gets the first <html> tag.
Pros:
Content appears all at once: If your server is fast, it will render everything all at once, and that's that. No messy XmlHttpRequests (does anyone still use those directly?).
Quick post-processing: Just like you wouldn't want your application layer to do sorting of a database queryset because the database is faster, you might also want to reserve a good amount of processing on the server-side. If you design for the client-side approach, it's easy to get carried away and put the processing in the wrong place.
Cons:
Slower perceived user experience: A user won't be able to see a single byte until the server's work is all done. Sure, the server is probably going to zip through it, but it's still a few extra seconds on the user's side and you would do them a favor by rendering what you can right away.
Does not scale as well because server spends more time on requests: It might be that you really want the server to finish a request quickly and move on to the next connection.
Which of these are most important to your requirements? That should inform your decision.
I don't know backbone, but here's a simple thought: if at all possible and secure, do everything on the client instead of the server. That way the server has less work to do and can therefore handle more connections and scale better.
But #1 is much more complicated than #2.
Not really. Once you get the hang of Backbone and jQuery and client-side templating (and maybe throw CoffeeScript into the mix, too), this is not really difficult. In fact, it greatly simplifies your server code, as all the display-related functions are removed. You could even have different clients (a mobile version, for example) running against the same server.
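As a flavour of the client-side templating mentioned here, a hypothetical Underscore template (the markup and data are invented) might look like:

    // Hypothetical Underscore template for one feed item.
    var itemTemplate = _.template('<li><a href="<%- url %>"><%- title %></a></li>');

    // For each item received from the server as JSON:
    $('#news-feed ul').append(itemTemplate({ url: '/stories/42', title: 'Hello world' }));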
Even if javascript was turned off, the items would still show because the server rendered it via a templating engine.
That is the important consideration here. If you want to support users without Javascript, then you need a non-JS version.
If you already have a non-JS version, you can think about if you still need the "enhanced" version, and if you do, if you want to re-use the server-side templating you already have coded and tested and need to maintain anyway, or duplicate the effort client-side, which adds development cost, but as you say may provide a superior experience and lower the load on the server (although I cannot imagine that fetching rendered data versus fetching XML data makes that much of a difference).
If you do not need to support users without Javascript, then by all means, render on the client.
I think Backbone's aim is to organize a JavaScript in-page client application. But first of all you should take a position on the following statement:
Even if JavaScript is turned off, the web app still works in "post-back mode".
Is that one of your requirements? (This is not a simple requirement.) If not, then my advice is: "Do more JS". But if it is, then I believe your best friend is jQuery's load function.
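For example (the selector and URL are assumptions), jQuery's load pulls a fragment of server-rendered HTML straight into the page:

    // Replace the container's contents with server-rendered HTML.
    $('#news-feed').load('/newsfeed');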
A note: I'm a Java programmer, and there are a lot of server-side frameworks that make it possible to write applications that work via Ajax when JS is enabled and switch to post-backs when it isn't. I think Wicket and Echo2 are two of them, but that means they are server-side libraries...
Up to now, for DB-driven web sites, I've used PHP (and CodeIgniter) to populate the data within the page prior to rendering. What I'm thinking about doing now is developing a JavaScript (via jQuery) page, making it as interactive as possible, and then connecting to the DB through AJAX/JSON calls - so NO data is populated on the screen prior to rendering.
WHY? The idea is that I could, some day, hook the same web page up to different data sources - a true separation of page from data - linked only via AJAX.
I think the biggest issue could be performance... are there other things to watch out for? What's the best approach to handling security (stateless/sessionless)?
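As a sketch of what that page might do on load (the /api/articles endpoint and field names are invented for illustration):

    // On page load, pull the data as JSON and render it client-side.
    $(function () {
        $.getJSON('/api/articles', function (articles) {
            $.each(articles, function (i, article) {
                // .text() escapes the content, which helps avoid injecting raw HTML
                $('#articles').append($('<li>').text(article.title));
            });
        });
    });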
The biggest question is accessibility. What about people using screen readers, for whom JavaScript doesn't work? What about those on mobile phones (non-smartphones), again with very limited or no JavaScript functionality? What about people who have simply disabled JS? Even these days, you simply can't assume that everyone can use JS.
I like the original idea, but perhaps this would be better done via a simple server-side wrapper, which calls out to your data source but which can be quickly and easily changed to point at a different one.
Definitely something I've considered doing, but you'd probably want to develop some kind of framework (or see if someone already has) if you're going to do this. Brute-forcing this kind of thing will lead to a lot of redundant code and unnecessary hair loss. Perhaps a jQuery plugin? I'd be very interested to see what you come up with.