Client Side Storage/Caching Large HTML - javascript

I have a web page that references and initializes multiple instances of the same ASP.NET generic user control.
What I want to do is cache/store the entire contents (HTML) of those controls somewhere on the client using the jQuery detach() method.
localStorage does not fit here, as its limit of about 5 MB is too low for my needs.
For now, I am using a global variable, more specifically a JavaScript key-value map, to store the data.
What do you think of this solution? Will I notice any lag or performance issues in the browser? Is there any size limit on that global variable?
Also, is there a better way to implement such a task?

For cross-browser compatibility, you can try pulling your massive data in incrementally with AJAX and caching the calls (stored as JSON/JSONP). jQuery has a cache mechanism, but the meat of the implementation is in the headers of the pages you call. Specifically, you're going to want to set Expires, Last-Modified, and Cache-Control on the pages you AJAX in.
Then you'll want to pull in the data asynchronously and do the appropriate UI manipulation (if needed).
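As a rough illustration of that idea (the endpoint and selector names are made up for this sketch), the client-side call could look something like this, with the real caching driven by the headers the server sends back:

    // A minimal sketch, assuming a hypothetical endpoint that returns JSON.
    // The actual caching behaviour comes from the Expires / Last-Modified /
    // Cache-Control headers the server sets for this URL.
    $.ajax({
        url: '/controls/large-fragment',   // placeholder endpoint
        dataType: 'json',
        cache: true,                       // let jQuery reuse the browser's HTTP cache
        success: function (data) {
            // do the appropriate UI manipulation with the returned data
            $('#placeholder').html(data.html);
        }
    });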
You don't want to store massive data in a single variable, since it's going to take longer every time it passes through the JavaScript engine.
localStorage: still an edge technology, implemented differently across vendors, and not backwards compatible (although there are JavaScript libs that help mitigate backwards compatibility).
Cookies: not big enough.
On-page JSON or JS variable: you lose abstraction and increase initial page weight (which is going to be unsatisfactory if you're on mobile).
Whatever implementation you choose, I would run some simple benchmark performance tests so you have metrics to back up your decision.

This will cause browser lag and multiple issues. You can pretty much guarantee that a mobile browser isn't going to work in this scenario because no sensible mobile browser is going to let you download and store 5MB+ in the LocalStorage object. It is a really bad idea to put 5MB+ of HTML into the DOM of any browser and not expect any kind of performance issue.
If you're not concerned about mobile, then look at IndexedDB. It allows a greater amount of storage and it persists even after the session is closed. It is fairly well supported in recent Chrome and Firefox browsers, but requires IE10 or higher.
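For completeness, here is a minimal IndexedDB sketch for stashing an HTML fragment; the database and store names are invented for the example and error handling is omitted:

    var request = indexedDB.open('controlCache', 1);

    request.onupgradeneeded = function (e) {
        // create a simple key-value store for HTML fragments
        e.target.result.createObjectStore('fragments');
    };

    request.onsuccess = function (e) {
        var db = e.target.result;
        var tx = db.transaction('fragments', 'readwrite');
        tx.objectStore('fragments').put('<div>...control markup...</div>', 'control-1');
        // read it back later with a 'readonly' transaction and store.get('control-1')
    };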

Related

Is there any difference between making DOM on the server/client side? (speed perspective) [duplicate]

I've done some web-based projects, and most of the difficulties I've met (questions, confusions) could be figured out with some help. But I still have an important question, even after asking some experienced developers: when functionality can be implemented with either server-side code or client-side scripting (JavaScript), which one should be preferred?
A simple example:
To render a dynamic HTML page, I can format the page in server-side code (PHP, Python) and use AJAX to fetch the formatted page and render it directly (more logic on the server side, less on the client side).
I can also use AJAX to fetch the data unformatted (as JSON) and use client-side scripting to format and render the page (the server gets the data from a DB or another source and returns it to the client as JSON or XML; more logic on the client side and less on the server).
So how can I decide which one is better? Which one offers better performance? Why? Which one is more user-friendly?
With browsers' JS engines evolving, JS can be interpreted in less time, so should I prefer client-side scripting?
On the other hand, as hardware evolves, server performance grows and the cost of server-side logic will decrease, so should I prefer server-side scripting?
EDIT:
With the answers, I want to give a brief summary.
Pros of client-side logic:
Better user experience (faster).
Less network bandwidth (lower cost).
Increased scalability (reduced server load).
Pros of server-side logic:
Better security (sensitive logic stays on the server).
Better availability and accessibility (mobile devices and old browsers).
Better SEO.
Easily expandable (can add more servers, but can't make the browser faster).
It seems that we need to balance these two approaches when facing a specific scenario. But how? What's the best practice?
I will use client-side logic except in the following conditions:
Security critical.
Special groups (JavaScript disabled, mobile devices, and others).
In many cases, I'm afraid the best answer is both.
As Ricebowl stated, never trust the client. However, I feel that it's almost always a problem if you do trust the client. If your application is worth writing, it's worth properly securing. If anyone can break it by writing their own client and passing data you don't expect, that's a bad thing. For that reason, you need to validate on the server.
Unfortunately if you validate everything on the server, that often leaves the user with a poor user experience. They may fill out a form only to find that a number of things they entered are incorrect. This may have worked for "Internet 1.0", but people's expectations are higher on today's Internet.
This potentially leaves you writing quite a bit of redundant code, and maintaining it in two or more places (some of the definitions such as maximum lengths also need to be maintained in the data tier). For reasonably large applications, I tend to solve this issue using code generation. Personally I use a UML modeling tool (Sparx System's Enterprise Architect) to model the "input rules" of the system, then make use of partial classes (I'm usually working in .NET) to code generate the validation logic. You can achieve a similar thing by coding your rules in a format such as XML and deriving a number of checks from that XML file (input length, input mask, etc.) on both the client and server tier.
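As a tiny illustration of the "rules defined once, checked everywhere" idea (using a plain JavaScript object instead of XML, with made-up field names), the shared definition might look like this, with the same data feeding both the client checks and whatever generates the server-side validation:

    // Hypothetical rule definitions; in practice these could be generated
    // from the XML (or UML model) that also drives the server-side checks.
    var rules = {
        username: { required: true, maxLength: 20, pattern: /^[a-z0-9_]+$/i },
        age:      { required: false, maxLength: 3, pattern: /^\d+$/ }
    };

    function validate(field, value) {
        var r = rules[field];
        if (r.required && !value) return 'required';
        if (value && value.length > r.maxLength) return 'too long';
        if (value && r.pattern && !r.pattern.test(value)) return 'invalid format';
        return null; // valid
    }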
Probably not what you wanted to hear, but if you want to do it right, you need to enforce rules on both tiers.
I tend to prefer server-side logic. My reasons are fairly simple:
I don't trust the client; this may or may not be a real problem, but it's habitual
Server-side reduces the volume per transaction (though it does increase the number of transactions)
Server-side means that I can be fairly sure about what logic is taking place (I don't have to worry about the Javascript engine available to the client's browser)
There are probably more -and better- reasons, but these are the ones at the top of my mind right now. If I think of more I'll add them, or up-vote those that come up with them before I do.
Edit: valya comments that using client-side logic (using Ajax/JSON) allows for the (easier) creation of an API. This may well be true, but I can only half-agree (which is why I've not up-voted that answer yet).
My notion of server-side logic is that it retrieves the data and organises it; if I've got this right, that logic is the 'controller' (the C in MVC). The result is then passed to the 'view'. I tend to use the controller to get the data, and then the 'view' deals with presenting it to the user/client. So I don't see that client/server distinctions are necessarily relevant to the argument about creating an API; basically: horses for courses. :)
...also, as a hobbyist, I recognise that I may have a slightly twisted usage of MVC, so I'm willing to stand corrected on that point. But I still keep the presentation separate from the logic. And that separation is the plus point so far as APIs go.
I generally implement as much as reasonable client-side. The only exceptions that would make me go server-side would be to resolve the following:
Trust issues
Anyone is capable of debugging JavaScript and reading passwords, etc. No-brainer here.
Performance issues
JavaScript engines are evolving fast so this is becoming less of an issue, but we're still in an IE-dominated world, so things will slow down when you deal with large sets of data.
Language issues
JavaScript is a weakly-typed language and makes a lot of assumptions about your code. This can force you into spooky workarounds to get things working the way they should on certain browsers. I avoid this type of thing like the plague.
From your question, it sounds like you're simply trying to load values into a form. Barring any of the issues above, you have 3 options:
Pure client-side
The disadvantage is that your users' loading time would double (one load for the blank form, another load for the data). However, subsequent updates to the form would not require a page refresh. Users will appreciate this if the same form is repeatedly populated with data fetched from the server.
Pure server-side
The advantage is that your page would load with the data. However, subsequent updates to the data would require refreshes to all/significant portions of the page.
Server-client hybrid
You would have the best of both worlds, however you would need to create two data extraction points, causing your code to bloat slightly.
There are trade-offs with each option so you will have to weigh them and decide which one offers you the most benefit.
One consideration I have not heard mentioned is network bandwidth. To give a specific example: an app I was involved with was all done server-side and resulted in a 200 MB web page being sent to the client (it was impossible to do less without a major redesign of a bunch of apps), giving a 2-5 minute page load time.
When we re-implemented this by sending JSON-encoded data from the server and having local JS generate the page, the main benefit was that the data sent shrank to 20 MB, resulting in:
HTTP response size: 200 MB+ => 20 MB+ (with corresponding bandwidth savings!)
Time to load the page: 2-5 min => 20 sec (10-15 of which are taken up by a DB query that was already optimized to hell and further).
IE process size: 200 MB+ => 80 MB+
Mind you, the last two points were mainly due to the fact that the server side had to use a crappy tables-within-tables tree implementation, whereas going client-side allowed us to redesign the view layer to use a much more lightweight page. But my main point was the network bandwidth savings.
I'd like to give my two cents on this subject.
I'm generally in favor of the server-side approach, and here is why.
More SEO friendly. Google cannot execute JavaScript, therefore all that content will be invisible to search engines.
Performance is more controllable. User experience is always variable with a client-rendered approach, because you're relying almost entirely on the user's browser and machine to render things. Even though your server might be performing well, a user with a slow machine will think your site is the culprit.
Arguably, the server-side approach is more easily maintained and readable.
I've written several systems using both approaches, and in my experience, server-side is the way. However, that's not to say I don't use AJAX. All of the modern systems I've built incorporate both components.
Hope this helps.
I built a RESTful web application where all CRUD functionalities are available in the absence of JavaScript, in other words, all AJAX effects are strictly progressive enhancements.
I believe that, with enough dedication, most web applications can be designed this way, which erodes many of the server-logic vs client-logic "differences" raised in your question, such as security and expandability: in both cases the request is routed to the same controller, and the business logic is identical until the last mile, where JSON/XML, instead of the full-page HTML, is returned for those XHR requests.
Only in the few cases where the AJAXified application is vastly more advanced than its static counterpart (GMail being the best example that comes to mind) does one need to create two versions and separate them completely (kudos to Google!).
I know this post is old, but I wanted to comment.
In my experience, the best approach is a combination of client-side and server-side. Yes, AngularJS and similar frameworks are popular now, and they've made it easier to develop web applications that are lightweight, perform well, and work on most web servers. BUT the major requirement in enterprise applications is displaying report data, which can encompass 500+ records on one page. With pages that return large lists of data, users often want functionality to filter, search, and otherwise interact with that huge list. Because IE 11 and earlier IE browsers are the "browser of choice" at most companies, you have to be aware that these browsers still have compatibility issues with modern JavaScript, HTML5, and CSS3. Often, the requirement is to make a site or application compatible with all browsers. This requires adding shims or polyfills which, along with the code needed to build a client-side application, adds to the page load in the browser.
All of this reduces performance and can cause the dreaded IE error "A script on this page is causing Internet Explorer to run slowly", forcing the user to choose whether or not to keep running the script... creating a bad user experience.
Determine the complexity of the application and what the user wants now and could want in the future, based on their preferences in their existing applications. If this is a simple site or app with little-to-medium data, use a JavaScript framework. But if they want to incorporate accessibility or SEO, or need to display large amounts of data, use server-side code to render data and use client-side code sparingly. In both cases, use a tool like Fiddler or Chrome Developer Tools to check page load and response times, and use best practices to optimize code.
Check out MVC apps developed with ASP.NET Core.
At this stage, client-side technology is leading the way. With the advent of many client-side libraries like Backbone, Knockout, and Spine, and the addition of client-side templates like JsRender and Mustache, client-side development has become much easier.
So, if my requirement is an interactive app, I will surely go for the client side.
If you have mostly static HTML content, then yes, go for the server side.
I did some experiments using both, and I must say the server side is comparatively easier to implement than the client side.
As far as performance is concerned, read this and you will see how server-side rendering scores:
http://engineering.twitter.com/2012/05/improving-performance-on-twittercom.html
I think the second variant is better. For example, if you implement something like 'skins' later, you will thank yourself for not formatting HTML on the server :)
It also keeps the separation between view and controller. Ajax data is usually produced by the controller, so let it just return data, not HTML.
If you decide to create an API later, you'll need to make very few changes to your code.
Also, 'naked' data is more cacheable than HTML, I think. For example, if you add some styling to links, you'd need to reformat all the HTML... or just add one line to your JS. And the data isn't as big as the HTML (in bytes).
But if many heavy scripts are needed to format the data, it isn't great to ask users' browsers to do that work.
As long as you don't need to send a lot of data to the client to allow it to do the work, the client side will give you a more scalable system, as you are distributing the load to the clients rather than hammering your server to do everything.
On the flip side, if you need to process a lot of data to produce a tiny amount of HTML to send to the client, or if optimisations can be made to use the server's work to support many clients at once (e.g. process the data once and send the resulting HTML to all the clients), then it may be a more efficient use of resources to do the work on the server.
If you do it with Ajax:
You'll have to consider accessibility issues (search for "web accessibility" on Google) for disabled people, but also for old browsers, those without JavaScript, bots (like Googlebot), etc.
You'll have to flirt with "progressive enhancement", which is not simple if you've never worked much with JavaScript. In short, you'll have to make your app work with old browsers and with those that don't have JavaScript (some mobile browsers, for example) or have it disabled.
But if time and money are not an issue, I'd go with progressive enhancement.
But also consider the "Back button". I hate it when I'm browsing a 100% AJAX website that renders your back button useless.
Good luck!
2018 answer, with the existence of Node.js
Since Node.js allows you to deploy Javascript logic on the server, you can now re-use the validation on both server and client side.
Make sure you set up or restructure the data so that you can re-use the validation without changing any code.
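A minimal sketch of what that reuse can look like (file and function names are illustrative): put the validation in a module that both the browser and Node can load.

    // validators.js -- shared between browser and Node; the wrapper below
    // exposes the same functions in both environments.
    (function (exports) {
        exports.isValidEmail = function (value) {
            return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
        };
    }(typeof module !== 'undefined' ? module.exports : (window.validators = {})));

    // Browser:  validators.isValidEmail(form.email.value)
    // Node:     var validators = require('./validators');
    //           validators.isValidEmail(req.body.email)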

Caching text/image assets in performance-constrained environments

I'm working on extremely performance-constrained devices. Because of the overhead of AJAX requests, I intend to aggressively cache text and image assets in the browser, but I need to configure the cache size per device to as low as 1 MB of text and 9 MB of images -- quite a challenge for a multi-screen, graphical application.
Because the device easily hits the memory limit, I must be very cautious about how I manage my application's size: code file size, # of concurrent HTTP requests, # of JS processor cycles upon event dispatch, limiting CSS reflows, etc. My question today is how to develop a size-restrained cache for text assets and images.
For text, I've rolled my own cache using JSON.encode().length for objects and 'string'.length to approximate size. The application manually gets/sets cache entries. Upon hitting a configurable upper limit, the class garbage collects itself from gcLimit to gcTarget sizes, giving weight to the last-accessed properties (i.e., if something has been accessed recently, skip collecting that object the first time around).
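A rough reconstruction of that kind of cache (the gcLimit/gcTarget handling and the size approximation here are my assumptions, not the asker's actual code):

    function TextCache(gcLimit, gcTarget) {
        this.entries = {};        // key -> { value, size, lastAccess }
        this.totalSize = 0;
        this.gcLimit = gcLimit;   // collect when total size exceeds this
        this.gcTarget = gcTarget; // collect down to this size
    }

    TextCache.prototype.set = function (key, value) {
        var size = (typeof value === 'string' ? value : JSON.stringify(value)).length;
        this.remove(key);
        this.entries[key] = { value: value, size: size, lastAccess: Date.now() };
        this.totalSize += size;
        if (this.totalSize > this.gcLimit) this.collect();
    };

    TextCache.prototype.get = function (key) {
        var e = this.entries[key];
        if (!e) return undefined;
        e.lastAccess = Date.now();   // recently used entries survive collection longer
        return e.value;
    };

    TextCache.prototype.remove = function (key) {
        var e = this.entries[key];
        if (e) { this.totalSize -= e.size; delete this.entries[key]; }
    };

    TextCache.prototype.collect = function () {
        // evict least recently accessed entries until we are back under gcTarget
        var self = this;
        var keys = Object.keys(this.entries).sort(function (a, b) {
            return self.entries[a].lastAccess - self.entries[b].lastAccess;
        });
        for (var i = 0; i < keys.length && this.totalSize > this.gcTarget; i++) {
            this.remove(keys[i]);
        }
    };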
For images, I intend to preload interface elements and let the browser deal with garbage collection itself by removing DOM elements and never persistently storing Image() objects. For preloading, I will probably roll my own again -- I have examples to imitate like FiNGAHOLiC's ImgPreloader and this. I need to keep in mind features like "download window size" and "max cache requests" to ensure I don't inadvertently overload the device.
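Something along these lines for the preloader (the concurrency limit of 2 and the file names are purely illustrative):

    // A minimal preloader with a "download window" (max concurrent requests).
    function preloadImages(urls, maxConcurrent) {
        var queue = urls.slice();
        var active = 0;

        function next() {
            while (active < maxConcurrent && queue.length) {
                active++;
                var img = new Image();
                img.onload = img.onerror = function () {
                    active--;
                    next();   // the browser keeps the decoded image in its own cache
                };
                img.src = queue.shift();
            }
        }
        next();
    }

    preloadImages(['ui/home.png', 'ui/menu.png', 'ui/back.png'], 2);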
This is a huge challenge working in such a constrained environment, and common frameworks like Backbone don't support "max Collection size". Elsewhere on SO, users quote limits of 5MB for HTML5 localStorage, but my goal is not session persistence, so I don't see the benefit.
I can't help feeling there might be better solutions. Ideas?
Edit: @Xotic750: Thanks for the nod to IndexedDB. Sadly, this app is a standard web page built on Opera/Presto. Even better, the platform offers no persistence. Rock and a hard place :-/.
localStorage and sessionStorage (DOM Storage) limits do not apply (or can be overridden) if the application is a browser extension (you don't mention what your application is).
localStorage is persistent
sessionStorage is sessional
Idea
Take a look at IndexedDB it is far more flexible though not as widely supported yet.
Also, some references to Chrome storage
Managing HTML5 Offline Storage
chrome.storage
With modern JavaScript engines, CPU/GPU performance is not an issue for most apps (except games, heavy animation, or Flash) even on low-powered devices, so I suspect your primary issues are memory and I/O. Optimising for one typically harms the other, but I suspect the issues below will be your primary concern.
I'm not sure you have any control over the cache usage of the browser. You can limit the memory taken up by the JavaScript app using methods like those you've suggested, but the browser will still do its own thing, and that is probably the primary issue in terms of memory. By creating your own caches you will probably be duplicating data that is already cached by the browser, and so exacerbate the memory problem. The browser (unless you're using something obscure) will normally do a better job of caching than is possible in JavaScript. In any case, I would strongly recommend letting the browser take care of garbage collection, as there is no way in JavaScript to actually force browsers to free up memory (they do garbage collection when they want, not when you tell them to). Do you have control over which browser is used on the device? If you do, then changing that may be the best way to reduce memory usage (also, can you limit the size of the browser cache?).
For AJAX requests, ensure you fully understand the difference between GET and POST, as this has big implications for caching in the browser and on the proxies routing messages around the web (and therefore also affects the logic of your app). See if you can minimise the number of requests by grouping them together (JSON helps here). It is normally latency rather than bandwidth that is the issue for AJAX requests (but don't go too far, as most browsers can do several requests concurrently). Ensure you construct your AJAX manager to allow prioritisation of requests (i.e. stuff that affects what the user sees is prioritised over preloading, which is prioritised over analytics; half the web makes a Google Analytics call as the first thing after page load, even before ads and other content are loaded).
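A very small sketch of such a prioritised AJAX manager (the one-request-at-a-time policy and the numeric priorities are simplifications for the example; in practice you would likely allow a few concurrent requests):

    var ajaxQueue = {
        pending: [],   // { url, priority, callback }
        busy: false,
        add: function (url, priority, callback) {
            this.pending.push({ url: url, priority: priority, callback: callback });
            this.pending.sort(function (a, b) { return a.priority - b.priority; }); // 0 = most urgent
            this.next();
        },
        next: function () {
            if (this.busy || !this.pending.length) return;
            var job = this.pending.shift();
            var self = this;
            this.busy = true;
            $.get(job.url, function (data) {
                job.callback(data);
                self.busy = false;
                self.next();
            });
        }
    };

    // ajaxQueue.add('/screen/home', 0, render);              // user-visible: high priority
    // ajaxQueue.add('/analytics/ping', 9, function () {});   // analytics: lowest priority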
Beyond that, I would suggest that images are likely to be the primary contributor to memory issues (I doubt code size even registers, but you should ensure code is minified with e.g. Google Closure). Reduce image resolutions to the bare minimum and experiment with file formats (e.g. GIF or PNG might be significantly smaller than JPEG for some images (cartoons, logos, icons) but much larger for others (photos, gradients)).
10MB of cache in your app may sound small but it is actually enormous compared with most apps out there. The majority leave caching to the browser (which in any case will probably still cache the data whether you want it to or not).
You mention Image objects which suggests you are using the canvas. There is a noticeable speed improvement if you create a new canvas to store the image (after which you can discard the Image object). You can use this canvas as the source of any image data you later need to copy to a canvas and as no translation between data types is required this is much faster. Given canvas operations often happen many times a frame this can be a significant boost.
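The trick looks roughly like this (names are illustrative):

    // Draw the Image once into an offscreen canvas, then blit from that canvas
    // on every frame; the Image object itself can be discarded afterwards.
    function toCanvas(img) {
        var c = document.createElement('canvas');
        c.width = img.width;
        c.height = img.height;
        c.getContext('2d').drawImage(img, 0, 0);
        return c;
    }

    var sprite = new Image();
    sprite.onload = function () {
        var cached = toCanvas(sprite);
        // later, per frame: ctx.drawImage(cached, x, y);  no pixel-format translation needed
    };
    sprite.src = 'ui/sprites.png';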
Final note - don't use frameworks / libraries that were developed with a desktop environment in mind. To optimise performance (whether speed or memory) you need to understand every line of code. Look at the source code of libraries (many have some very clever optimised code) but assume that, in general, you are a special case for which they are not optimised.

Preloading (=Caching) a full Website with Ajax – Possible Problems?

I'm currently building a portfolio-website for an architect that has a whole lot of images on its pages.
Navigation is done with history.js (=AJAX). In order to save loading time and make the whole thing more "snappy", I wrote a script that crawls the page body for links to other pages and fetches these automatically in the background. So far, it works like a charm.
It basically keeps a queue array that holds all the links. A setTimeout() loop works through them and fetches each page using jQuery's $.ajax(). The resulting HTML is stored in a JavaScript object.
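Roughly, the setup described above looks like this (reconstructed as a sketch; the 200 ms throttle and the variable names are mine):

    var prefetchQueue = [];   // filled by crawling the page body for links
    var pageCache = {};       // url -> fetched HTML

    $('body a[href]').each(function () {
        prefetchQueue.push(this.getAttribute('href'));
    });

    (function fetchNext() {
        if (!prefetchQueue.length) return;
        var url = prefetchQueue.shift();
        $.ajax({ url: url, dataType: 'html' }).done(function (html) {
            pageCache[url] = html;
            setTimeout(fetchNext, 200);   // throttle so the UI stays responsive
        });
    }());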
Now, here's my question:
What are possible problems that might occur when using this on different machines/browsers/operating systems?
I'm thinking about:
max. JavaScript object/variable size (the fetched HTML is stored in a JavaScript object)
possible performance problems
max. number of asynchronous requests?
… anything you can think of?
Thanks a lot in advance,
a hobby programmer
Although it might be a good idea to cache the whole website on the client side, there are a lot of things that can cause issues:
Memory
Unnecessary load on the web server
Loading unneeded pages into memory
Some users have a data limit on their internet connection, so loading the entire website is not smart in those cases
Once the user navigates away or refreshes, the entire "cache" is gone
What I would do is first try to optimize the server side.
Add a bunch of caching mechanisms all the way from the database to the user; the "Expires" header can really help you.
And if that doesn't help, I would then think about caching some pages (which ones are for you to decide) in the offline cache; see HTML5 Offline Features.
That way you are safe even on page reload, keep the memory to a minimum and only load what you need.
PS: Don't try to reinvent stuff that the browser already has :P
You should queue the async requests, and only launch one at a time.
And since you're storing everything in variables, at some point you (the browser) may consume too much memory, and the whole thing can become very slow. I suggest you limit the size of your cache to a certain number of pages.
You can also try not storing the fetched content at all: just fetch it and throw it away. The browser will still cache fetched pages and images in its internal storage, so subsequent loads will be much faster (provided, of course, that the Ajax library does not forcibly disable the cache, e.g. by using POST).

Making CPU-bound JavaScript feel responsive - Web Workers?

I am writing a CPU-intensive JavaScript application. I am running into a problem where the UI sometimes locks up while a CPU-intensive calculation occurs. I know the standard approach to solving this is to call setTimeout and let the event loop respond to UI events. However, that doesn't work for me, and here's why.
When the page loads, the JavaScript VM needs to do a bunch of parsing and analysis of chunks of data. This is truly background work, and I am calling setTimeout to run each chunk. However, this means the user gets a very choppy UI experience until all chunks have been completed (which can take up to 10 seconds for large files), and on every save. This is not acceptable.
I can think of 2 solutions, neither of which I really like:
Be more granular about the chunks, thus providing more opportunities for the event loop to run. I don't like this because the CPU-bound code is already quite complex (though it typically runs well), and calling setTimeout throughout it would make it far more complicated.
Do more work on the server. However, I am running a Node server, and this would simply push the problem from the client to the server, with the added problem of increased bandwidth.
Fixing this would be trivial on a traditional thread-based VM. What should I do for Javascript?
UPDATE:
Some points that I forgot to mention:
We are not concerned with legacy browsers and all users will be required to use a modern Firefox, Chrome, Opera, Safari, IE, etc.
Our initial prototype has the client and server co-located, but there should be nothing preventing us from moving to a remote server.
The data lives on the client (well...obviously, if the client and server are the same machine, but this will be the case even when we move to remote servers).
Web Workers might be the solution, but they still seem flaky. Does anyone have experience with them? Are they stable? Which modern browsers do not support them well? Are there any general problems with them?
Depending on whether this application will ever become public or not, you have to decide whether you can use Web Workers, split the data up more or do server-side processing. For real-world applications the real solution would be doing heavy computation on the server since you can't expect the user to have the latest processor, it might be a mere netbook which will probably only cough a few times and then crash.
Web workers would be a solution when you can be sure that users have the latest browsers that support it, however if that's not the case, there's no way to shim it like most HTML5 stuff.
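For reference, the basic Web Worker wiring is small; this sketch assumes the heavy parsing can be moved into its own script (the file, function, and message names are placeholders):

    // main page
    var worker = new Worker('parser.js');
    worker.onmessage = function (e) {
        renderChunk(e.data);          // UI thread stays free while parsing runs
    };
    worker.postMessage(rawChunk);     // hand one chunk of data to the worker

    // parser.js
    self.onmessage = function (e) {
        var result = heavyParse(e.data);   // the CPU-bound work, off the UI thread
        self.postMessage(result);
    };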
Based on what I know about your application, I'd say that you should send precomputed data to the client. Furthermore, Node.js is bad at doing hardcore computations so you might want to look into different data processing options on the server. Also, I don't think bandwidth will be a problem since you have to give the client the initial data anyway. How much bigger is the processed data?

Json only web application. What are the cons? (or Pros)

I want to design a web application whose only interface is JSON, i.e. all HTTP requests receive responses only in JSON format and no HTML is rendered on the server side. All form posts convert the form data into a JSON object and then post it as a string. All rendering is done by client-side JavaScript.
The one downside of this approach that I know of is that browsers without JavaScript won't be able to do much with this architecture, but the interaction on the site is rich enough that it would be useless to non-JavaScript browsers anyway.
Are there any other downsides of this approach of designing a web application?
It's an increasingly common pattern, with tools such as GWT and Ext JS. Complex web apps such as GMail have been over 90% JS-created DOM for some time. If you are developing a traditional 'journal'-type website with mainly written content to be read, this approach will be overkill. But for a complex app that wishes to avoid page refreshes, it may well be appropriate.
One downside is that not only does it require a browser that supports JavaScript, it is also easy for the computing resources required by the app to creep up to the point where it needs quite a powerful browser. If you develop in Chrome on a top-end PC you might come to run the app on a less powerful machine such as a netbook or mobile device and find it has become quite sluggish.
Another downside is you lose the opportunity to use HTML tools when working on your pages, and that viewing your application's pages' DOM trees under Firebug or Chrome Developer Tools may be hard work because the relationship between the elements and your code may not be clear.
Edit: another thing to consider is that it is more work to make pages more accessible, as keyboard shortcuts may have to be added (you may not be able to use the browser built in behavior here) and users with special needs may find it more difficult to vary the appearance of the app, for instance by increasing font size.
Another edit: it's unlikely that text content on your website will be successfully crawled by search engines. For this reason you sometimes see server-created, text-only pages representing the same content, which refer browsers to the JS-enabled version of the page.
Other than the issue you point out, there's another: Speed. But it's not necessarily a big issue, and in fact using JSON rather than HTML may (over slower connections) improve rather than hamper speed.
Web browsers are highly optimised to render HTML, both whole pages (e.g., normally) and fragments (e.g., innerHTML and the various wrappers for it, like jQuery's html or Prototype's update). There's a lot you can do to minimize the speed impact of working through your returned data and rendering the result, but nothing is going to be quite as fast as grabbing some HTML markup from the server and dumping it into the browser for display.
Now, that said, it's not necessarily going to be a big problem at all. If you use efficient techniques (some notes in this article), and if you primarily render the results by building up HTML strings you then hand to the browser (again, via innerHTML or wrappers for it), or if you're rendering only a few elements at a time, it's unlikely that there will be any perceptible speed difference.
If, instead, you build up substantial trees by creating and appending individual elements via the DOM API or wrappers for it, you're very likely to notice a performance impact. That seems like the right thing to do, but it involves lots of trips across the DOM/JavaScript boundary and means the browser has to present the DOM version of things to your code at all intermediate steps; in contrast, when you hand it an HTML string, it can do its thing and romp through it full-speed-ahead. You can see the difference in this performance test. It's substantial.
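To make the contrast concrete, here is a sketch of the two styles (it assumes items already contains escaped text and list is an existing element):

    // 1. Build one HTML string and assign it once (usually much faster).
    var parts = [];
    for (var i = 0; i < items.length; i++) {
        parts.push('<li>' + items[i] + '</li>');
    }
    list.innerHTML = parts.join('');

    // 2. Element-by-element DOM construction (many more DOM/JS boundary crossings).
    for (var j = 0; j < items.length; j++) {
        var li = document.createElement('li');
        li.appendChild(document.createTextNode(items[j]));
        list.appendChild(li);
    }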
Over slower connections, the speed impact may be made up for or even overcome if the JSON data is more compact than the HTML would have been, because of the smaller size on the wire.
You've got to be more mindful of high-latency, low-bandwidth connections when you're building something like this. The likelihood is, you're going to be making a lot of Ajax calls to sync data and grab new data from the server, and the lag can be noticeable if there's a lot of latency. You need a strategy in place to keep the user informed about the progress of any communication between client and server.
In development, it's something that can be overlooked, especially if you're working with a local web server, but it can be killer in production. It means looking into prefetching and caching strategies.
You also need an effective way to manage HTML fragments/templates. Obviously, there are some good modules out there for rendering templates - Mustache.js, Underscore template, etc. - but keeping on top of the HTML fragments can cause some maintenance headaches. I tend to store the HTML templates in separate files, and load them dynamically via Ajax calls (plus caching to minimise HTTP requests).
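A small sketch of that pattern (the URL scheme and the Mustache usage are just examples):

    var templateCache = {};

    function getTemplate(name, callback) {
        if (templateCache[name]) {
            callback(templateCache[name]);
            return;
        }
        $.get('/templates/' + name + '.html', function (html) {
            templateCache[name] = html;   // fetched once, reused afterwards
            callback(html);
        });
    }

    // getTemplate('user-row', function (tpl) {
    //     $('#users').append(Mustache.render(tpl, { name: 'Ada' }));
    // });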
Edit - another con:
Data syncing - if you use a server database as your data "authority" then it's important to keep data in sync between the server and client. This is even more relevant if changes to data on one client affects multiple clients. You then get into the realms of dealing with realtime, asynchronous updates, which can cause some interesting conceptual challenges. (Fortunately, using frameworks and libraries such as Socket.IO and Backbone.js can really make things easier).
Edit - pros:
There are some huge advantages to this type of application - it's far more responsive, and can really enhance the user experience. Trivial actions that would normally require a round-trip to the server and incur network overhead can now be performed quickly and seamlessly.
Also, it allows you to more effectively couple data to your views. Chances are, if you're handling the data on the client side, you will have a framework in place that allows you to organise the data and make use of an ORM - whether it's Backbone.js, Knockout.js or something similar. You no longer have to worry about storing data in HTML attributes or in temporary variables. Everything becomes a lot more manageable, and it opens the door for some really sophisticated UI development.
Also also, JavaScript opens up the possibility for event-driven interaction, which is the perfect paradigm for highly interactive applications. By making use of the event loop, you can hook your data directly to user-initiated and custom events, which opens up great possibilities. By hooking your data models directly to user-driven events, you can robustly handle updates and changes to data and render the appropriate output with minimal fuss. And it all happens at high speed.
I think the most important thing is what your requirement is: if you want to build an interactive application with a desktop-like feel, then go for client-side development. Using a JavaScript framework like Backbone.js or Knockout.js will really help in organizing and maintaining the code. The advantages are already detailed in previous answers. As far as rendering performance compared with server-side rendering is concerned, here is a nice blog post that got me thinking:
http://openmymind.net/2012/5/30/Client-Side-vs-Server-Side-Rendering/
