Pros and cons of fully generating HTML using JavaScript

Recently I had an idea for improving the overall performance of a web application:
instead of having the web server generate a ready-to-show HTML page, why not generate it fully on the client side?
Done this way, only pure data (in my case, data in JSON format) needs to be sent to the browser.
This offloads the work of HTML generation from the server to the client's browser and
reduces the size of the response sent back to users.
After some research, I found a framework (http://beebole-apps.com/pure/)
that does the same thing I have in mind.
What I want to know is the pros and cons of doing it this way.
It is surely faster and better for web servers, and since JavaScript code runs fast in modern browsers, page generation on the client can be fast too.
The main downside I can see is probably SEO.
I am not sure whether search engines like Google will index my website properly.
Could you tell me what the downsides of this method are?
PS: I have attached sample code to help describe the method, as follows:
In the head, simple JavaScript code is included like this:
<script type="text/javascript" src="html_generator.js"></script>
<script type="text/javascript">
function onPageLoad() {
    htmlGenerate($('#data').val());
}
window.onload = onPageLoad;   // run once the page has loaded
</script>
In the body, only one element exists, used merely as a data container, as follows:
<input type='hidden' id='data' value='{"a": 1, "b": 2, "c": 3}'/>
When the browser renders the file, the htmlGenerate function defined in html_generator.js is called, and the whole HTML page is generated inside that function. Note that html_generator.js can be large, since it contains a lot of HTML templates, but because it can be cached by the browser, it will be downloaded once and only once.
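For illustration, a minimal sketch of what htmlGenerate could look like (the table markup here is invented; the real html_generator.js would hold many such templates):

// html_generator.js (hypothetical sketch)
function htmlGenerate(rawJson) {
    var data = JSON.parse(rawJson);   // the hidden input holds a JSON string
    var rows = '';
    for (var key in data) {
        if (data.hasOwnProperty(key)) {
            rows += '<tr><td>' + key + '</td><td>' + data[key] + '</td></tr>';
        }
    }
    // Build the whole visible page from the pure data
    document.body.innerHTML = '<table>' + rows + '</table>';
}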

Downsides
Search engines will not be able to index your page. If they do, you're very lucky.
Users with JavaScript disabled, or on mobile devices, will very likely not be able to use it.
The speed advantages might turn out to be minimal, especially if the user is on a slow JavaScript engine like IE's.
Maintainability: unless you automate the generation of your JavaScript, it's going to be a nightmare!
All in all
This method is not recommended if you're doing it only for a speed increase. However, it is often done in web applications where users stay on the same page, but then you would probably be better off using one of the existing frameworks for this, such as Backbone.js, and making sure it remains crawlable by following Google's hashbang guidelines or using HTML5 PushState (as suggested by @rohk).
If it's just performance you're looking for, and your app doesn't strictly need to work like this, don't do it. But if you do go this way, then make sure you do it in the standardized way so that it remains fast and indexable.
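For reference, the core of the pushState technique mentioned above is small; a rough sketch (the loadContentInto helper and selectors are invented):

// Navigate without a reload while keeping real, crawlable URLs
$('a.internal').click(function (e) {
    e.preventDefault();
    history.pushState({url: this.href}, '', this.href);
    loadContentInto('#main', this.href);   // hypothetical AJAX loader
});

// Re-render when the user presses Back/Forward
window.onpopstate = function (e) {
    if (e.state) loadContentInto('#main', e.state.url);
};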

Client-side templating is often used in Single Page Applications.
You have to weigh the pros and cons:
Pros:
More responsive interface
Reduced load on your web server
Cons:
SEO will be harder than for a classic website (unless you use HTML5 PushState)
Accessibility: you are relying heavily on JavaScript...
If you are going this way, I recommend that you look at Backbone.js.
You can find tutorials and examples here : http://www.quora.com/What-are-some-good-resources-for-Backbone-js
Examples of SPAs:
Document Cloud
Grooveshark
Gmail
Also look at this answer : https://stackoverflow.com/a/8372246/467003

Yikes.
jQuery templates are closer to what you are thinking. Sanity says you should have some HTML templates that you can easily modify at will. You want MAINTAINABILITY, not string concatenations that keep you tossing and turning in your worst nightmares.
You can continue with this madness, but you will end up learning the hard way. I implore you to first try some prototypes using jQuery templates alongside your pure-code way. Sanity will surely overcome you, my friend, and I say that having tried your way for a time. :)
It's also possible to load the templates you need dynamically using AJAX. It doesn't make sense to take a panacea approach where every single conceivable template has to be downloaded in a single request.
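As a rough sketch of the jQuery templates idea (using the jquery-tmpl plugin; the template and IDs are invented):

<script type="text/x-jquery-tmpl" id="rowTmpl">
    <li>${name}: ${value}</li>
</script>

<script type="text/javascript">
// Render JSON through a readable template instead of concatenating strings
var items = [{name: 'a', value: 1}, {name: 'b', value: 2}];
$('#rowTmpl').tmpl(items).appendTo('#list');
</script>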

The pros? I really can't think of any. You say that it will be easier on the web server, but serving HTML is what web servers are designed to do.
The cons? It goes against just about every best practice when it comes to building websites:
search engines will not be able to index your site well, if at all
reduced maintainability
no matter how fast JS engines are, the DOM is slow, and will never be as fast as building HTML on the server
one JS error and your entire site doesn't render. Oops. Browsers however are extremely tolerant of HTML errors.
Ultimately HTML is a great tool for displaying content. JavaScript is a great way of adding interaction. Mixing them up like you suggest is just nuts and will only cause problems.

Cons
SEO
Excluding users without JavaScript
Pros
Faster workflow / more fluid interface
Small reduction in server load
If SEO is not important for you and most of your users have JavaScript, you could create a Single Page Application. It will not reduce your load very much, but it can create a faster workflow on the page.
Btw: I would not pack all templates into one file. It would be too big for big projects.

Related

Rails: Not ember, not JS responses, but something in-between

I am developing a standard Rails application, and so far I haven't used any AJAX, just good ol' HTML. My plan is to iteratively add "remote" links and that kind of thing, plus support for JS responses. I know that generating JS server-side is considered very evil, but I find it very handy as well: it's easy, fast, it makes the application snappy enough, and i18n comes out of the box.
Using a pure JSON approach would be lighter, but needs lots of client-side coding.
Now imagine that in this application users have a mailbox. Since the idea is that they will be able to do most or even all of the actions without reloading the page, the mailbox counter will never change unless they refresh the page manually.
So, here comes the question: Which is the best way to handle this?
I thought about using Ember (for data binding), and sharing views with rails, via some sort of handlebars implementation for ruby. That would be quite awesome, but not very transparent for the developer (me). Although I guess that I only need to write handlebars views that will be used by ember, the rest can still be written in their original format, no?
Another option might be to use some sort of event system (EventSource maybe?), go with the handy JS views approach, and listen to those events. I guess those should be JSON objects, and the client must be coded to be able to handle them. This seems a bit cumbersome, and I need a solution that works on Heroku (Faye?), which is where my app is hosted. Any hints?
I think that the ember approach is the more robust one, but seems to be quite complex as well, and I don't want to repeat myself server and client side.
EDIT:
I have seen this, which is more or less option #2.
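For reference, the client side of option two with EventSource might look roughly like this (the /mailbox/events endpoint and the payload shape are assumptions):

// Listen for server-sent events and update the mailbox counter in place
var source = new EventSource('/mailbox/events');
source.onmessage = function (event) {
    var payload = JSON.parse(event.data);   // e.g. {"unread": 3}
    $('#mailbox-counter').text(payload.unread);
};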
One of the advantages of using a JavaScript framework is that the whole application can be concatenated and compressed into one JavaScript file. Provided that modern browsers aggressively cache JavaScript, the browser would no longer need to request those assets after initial page load.
Another advantage of using a JavaScript framework is that it requires you to be a consumer of your own API. Fleshing out the application's API for your own consumption might lead to less work in the future if there is a possibility of mobile applications or third parties having access to it.
If you do not need your application to respond to every request with an equivalent HTML response, I think a compelling case could be made for using a JavaScript framework.
Many of those benefits might be lost if your application needs to respond to every request with an equivalent HTML template. The Ember core team has been fairly vocal in its opposition to supporting this style of progressive enhancement. Considering that the tools for using a JavaScript framework in this way are relatively unstable and immature, I might be prone to using option 2 to accomplish this.

Should I render this template using JavaScript or the server?

I'm rendering a news feed.
I'm planning to use Backbone.js for my JavaScript stuff because I'm sick of doing manual DOM binds with jQuery.
So right now I'm looking at 2 options.
When the user loads the page, the "news feed" container is blank. But the page triggers a javascript which renders the items of the news feed onto the screen. This would tie into Backbone's models and collections, etc.
When the user loads the page, the "news feed" is rendered by the server. Even if javascript was turned off, the items would still show because the server rendered it via a templating engine.
I want to use Backbone.js to keep my javascript clean. So, I should pick #1, right?? But #1 is much more complicated than #2.
By the way, the reason I'm asking this question is because I don't want to use the routing feature of Backbone.js. I would load each page individually, and use Backbone for the individual items of the page. In other words, I'm using Backbone.js halfway.
If I were to use the routing feature of Backbone.js, then the obvious answer would be #1, right? But I'm afraid it would take too much time to build the route system, and time should be balanced into my equation as well.
I'm sorry if this question is confusing: I just want to know the best practice of using Backbone.js and saving time as well.
There are advantages and disadvantages to both, so I would say this: choose the option that is best for you, according to your requirements.
I don't know Backbone.js, so I'm going to keep my answer to client- versus server-side rendering.
Client-side Rendering
This approach allows you to render your structure quickly on the server-side, then let the user's JavaScript pick up the actual content.
Pros:
Quicker perceived user experience: if there's enough static content on the initial render, then the user gets their page back (or at least the beginning of it) quicker and they won't be bothered about the dynamic content, because in all likelihood that will render reasonably quickly too.
Better control of caching: By requiring that the browser makes multiple requests, you can set up your server to use different caching headers for each URL, depending on your requirements. In this way, you could allow users to cache the initial page render, but require that a user fetch dynamic (changing) content every time.
Cons:
User must have JavaScript enabled: This is an obvious one and I shouldn't even need to mention it, but you are cutting out a (very small) portion of your user base if you don't provide a graceful alternative to your JS-heavy site.
Complexity: This one is a little subjective, but in some ways it's just simpler to have everything in your server-side language and not require so much back-and-forth. Of course, it can go both ways.
Slow post-processing: This depends on the browser, but the fact is that if a lot of DOM manipulation or other post-processing needs to occur after retrieving the dynamic content, it might be faster to let the server do it if the server is underutilized. Most browsers are good at basic DOM manipulation, but if you have to do JSON parsing, sorting, arithmetic, etc., some of that might be faster on the server.
Server-side Rendering
This approach allows the user to receive everything at once and also caters to browsers that don't have good JavaScript support, but it also means everything takes a bit longer before the browser gets the first <html> tag.
Pros:
Content appears all at once: If your server is fast, it will render everything all at once, and that's that. No messy XmlHttpRequests (does anyone still use those directly?).
Quick post-processing: Just like you wouldn't want your application layer to do sorting of a database queryset because the database is faster, you might also want to reserve a good amount of processing on the server-side. If you design for the client-side approach, it's easy to get carried away and put the processing in the wrong place.
Cons:
Slower perceived user experience: A user won't be able to see a single byte until the server's work is all done. Sure, the server is probably going to zip through it, but it's still a few extra seconds on the user's side and you would do them a favor by rendering what you can right away.
Does not scale as well because server spends more time on requests: It might be that you really want the server to finish a request quickly and move on to the next connection.
Which of these are most important to your requirements? That should inform your decision.
I don't know backbone, but here's a simple thought: if at all possible and secure, do everything on the client instead of the server. That way the server has less work to do and can therefore handle more connections and scale better.
But #1 is much more complicated than #2.
Not really. Once you get the hang of Backbone and jQuery and client-side templating (and maybe throw CoffeeScript into the mix, too), this is not really difficult. In fact, it greatly simplifies your server code, as all the display-related functions are now removed. You could even have different clients (a mobile version, for example) running against the same server.
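To give a feel for how little client code this takes, a minimal Backbone view with an Underscore template (all names here are illustrative):

var ItemView = Backbone.View.extend({
    tagName: 'li',
    template: _.template('<%= title %>'),   // client-side template
    initialize: function () {
        this.model.on('change', this.render, this);   // re-render when data changes
    },
    render: function () {
        this.$el.html(this.template(this.model.toJSON()));
        return this;
    }
});

// Usage: render a model built from JSON fetched from the server
var view = new ItemView({model: new Backbone.Model({title: 'Hello'})});
$('#feed').append(view.render().el);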
Even if javascript was turned off, the items would still show because the server rendered it via a templating engine.
That is the important consideration here. If you want to support users without Javascript, then you need a non-JS version.
If you already have a non-JS version, you can think about whether you still need the "enhanced" version, and if you do, whether you want to re-use the server-side templating you have already coded, tested, and need to maintain anyway, or duplicate the effort client-side. The latter adds development cost, but as you say it may provide a superior experience and lower the load on the server (although I cannot imagine that fetching rendered data versus fetching XML data makes that much of a difference).
If you do not need to support users without Javascript, then by all means, render on the client.
I think Backbone's aim is to organize an in-page JavaScript client application. But first of all you should take a position on the following statement:
Even if javascript was turned off, the web-app still works in "post-back mode".
Is that one of your requirements? (It is not a simple requirement.) If not, then my advice is: "Do more JS". But if it is, then I believe your best friend is jQuery's load function.
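For example, a link that does a normal post-back without JavaScript can be hijacked to load just a fragment when JS is available (selectors invented):

// Without JS the link performs a full page load; with JS only the feed updates
$('a.feed-page').click(function () {
    $('#newsfeed').load(this.href + ' #newsfeed > *');   // fetch the page, keep only the feed
    return false;
});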
A note: I'm a Java programmer, and there are a lot of server-side frameworks that make it possible to write applications that work AJAX-ly when JS is enabled and switch to post-backs when it isn't. I think Wicket and Echo2 are two of them, but that means they are server-side libraries...

Json only web application. What are the cons? (or Pros)

I want to design a web application whose only interface is JSON, i.e. all HTTP requests receive responses only in JSON format and no HTML is rendered on the server side. All form posts convert the form data into a JSON object and then post it as a string. All the rendering is done by client-side JavaScript.
The one downside of this approach that I know of is that browsers without JavaScript won't be able to do much with this architecture, but the interaction on the site is rich enough to be useless to non-JavaScript browsers anyway.
Are there any other downsides of this approach of designing a web application?
It's an increasingly common pattern, with tools such as GWT and Ext JS. Complex web apps such as GMail have been over 90% JS-created DOM for some time. If you are developing a traditional 'journal' type website with mainly written content to be read, this approach will be overkill. But for a complex app that wishes to avoid page refreshes it may well be appropriate.
One downside is that not only does it require a browser that supports JavaScript, it is also easy for the computing resources required by the app to creep up to the point where it needs quite a powerful browser. If you develop in Chrome on a top-end PC you might come to run the app on a less powerful machine such as a netbook or mobile device and find it has become quite sluggish.
Another downside is you lose the opportunity to use HTML tools when working on your pages, and that viewing your application's pages' DOM trees under Firebug or Chrome Developer Tools may be hard work because the relationship between the elements and your code may not be clear.
Edit: another thing to consider is that it is more work to make pages more accessible, as keyboard shortcuts may have to be added (you may not be able to use the browser built in behavior here) and users with special needs may find it more difficult to vary the appearance of the app, for instance by increasing font size.
Another edit: it's unlikely that text content on your website will be successfully crawled by search engines. For this reason you sometimes see server-created, text-only pages representing the same content, which refer browsers to the JS-enabled version of the page.
Other than the issue you point out, there's another: Speed. But it's not necessarily a big issue, and in fact using JSON rather than HTML may (over slower connections) improve rather than hamper speed.
Web browsers are highly optimised to render HTML, both whole pages (e.g., normally) and fragments (e.g., innerHTML and the various wrappers for it, like jQuery's html or Prototype's update). There's a lot you can do to minimize the speed impact of working through your returned data and rendering the result, but nothing is going to be quite as fast as grabbing some HTML markup from the server and dumping it into the browser for display.
Now, that said, it's not necessarily going to be a big problem at all. If you use efficient techniques (some notes in this article), and if you primarily render the results by building up HTML strings you then hand to the browser (again, via innerHTML or wrappers for it), or if you're rendering only a few elements at a time, it's unlikely that there will be any perceptible speed difference.
If, instead, you build up substantial trees by creating and appending individual elements via the DOM API or wrappers for it, you're very likely to notice a performance impact. That seems like the right thing to do, but it involves lots of trips across the DOM/JavaScript boundary and means the browser has to present the DOM version of things to your code at all intermediate steps; in contrast, when you hand it an HTML string, it can do its thing and romp through it full-speed-ahead. You can see the difference in this performance test. It's substantial.
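A small sketch of the two styles being compared (same output, very different numbers of DOM boundary crossings):

var items = ['one', 'two', 'three'];

// Fast path: build one HTML string, cross the DOM boundary once
var html = '';
for (var i = 0; i < items.length; i++) {
    html += '<li>' + items[i] + '</li>';
}
document.getElementById('list').innerHTML = html;

// Slow path: one DOM call per element (shown here as the alternative)
var list = document.getElementById('list');
list.innerHTML = '';   // start from a clean slate
for (var j = 0; j < items.length; j++) {
    var li = document.createElement('li');
    li.appendChild(document.createTextNode(items[j]));
    list.appendChild(li);
}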
Over slower connections, the speed impact may be made up for or even overcome if the JSON data is more compact than the HTML would have been, because of the smaller size on the wire.
You've got to be more mindful of high-latency, low-bandwidth connections when you're building something like this. The likelihood is, you're going to be making a lot of Ajax calls to sync data and grab new data from the server, and the lag can be noticeable if there's a lot of latency. You need a strategy in place to keep the user informed about the progress of any communication between client and server.
In development, it's something that can be overlooked, especially if you're working with a local web server, but it can be killer in production. It means looking into prefetching and caching strategies.
You also need an effective way to manage HTML fragments/templates. Obviously, there are some good modules out there for rendering templates - Mustache.js, Underscore template, etc. - but keeping on top of the HTML fragments can cause some maintenance headaches. I tend to store the HTML templates in separate files, and load them dynamically via Ajax calls (plus caching to minimise HTTP requests).
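A minimal version of that load-and-cache idea, assuming Mustache.js and an invented template path:

var templateCache = {};

// Fetch each template once, then serve it from memory
function getTemplate(name, callback) {
    if (templateCache[name]) {
        callback(templateCache[name]);
        return;
    }
    $.get('/templates/' + name + '.mustache', function (text) {
        templateCache[name] = text;
        callback(text);
    });
}

// Usage
getTemplate('news-item', function (tmpl) {
    $('#feed').append(Mustache.render(tmpl, {title: 'Hello'}));
});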
Edit - another con:
Data syncing - if you use a server database as your data "authority" then it's important to keep data in sync between the server and client. This is even more relevant if changes to data on one client affects multiple clients. You then get into the realms of dealing with realtime, asynchronous updates, which can cause some interesting conceptual challenges. (Fortunately, using frameworks and libraries such as Socket.IO and Backbone.js can really make things easier).
Edit - pros:
There are some huge advantages to this type of application - it's far more responsive, and can really enhance the user experience. Trivial actions that would normally require a round-trip to the server and incur network overhead can now be performed quickly and seamlessly.
Also, it allows you to more effectively couple data to your views. Chances are, if you're handling the data on the client side, you will have a framework in place that allows you to organise the data and make use of an ORM, whether it's Backbone.js, Knockout.js or something similar. You no longer have to worry about storing data in HTML attributes or in temporary variables. Everything becomes a lot more manageable, and it opens the door for some really sophisticated UI development.
Also also, JavaScript opens up the possibility for event-driven interaction, which is the perfect paradigm for highly interactive applications. By making use of the event loop, you can hook your data directly to user-initiated and custom events, which opens up great possibilities. By hooking your data models directly to user-driven events, you can robustly handle updates and changes to data and render the appropriate output with minimal fuss. And it all happens at high speed.
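The event-driven style described above, in miniature (Backbone.Events used for illustration; names invented):

// Any object can become an event hub
var feed = _.extend({}, Backbone.Events);

// Views subscribe to data events...
feed.on('item:added', function (item) {
    $('#feed').prepend('<li>' + item.title + '</li>');
});

// ...and user actions or server pushes publish them
feed.trigger('item:added', {title: 'Breaking news'});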
I think the most important thing is what your requirements are: if you want to build an interactive application with a desktop-like feel, then go for client-side development. Using a JavaScript framework like Backbone.js or Knockout.js will really help in organizing and maintaining the code. The advantages are already detailed in the previous answers. As far as rendering performance compared with server-side rendering is concerned, here is a nice blog post that got me thinking:
http://openmymind.net/2012/5/30/Client-Side-vs-Server-Side-Rendering/

Why don't I just build the whole web app in Javascript and Javascript HTML Templates?

I'm getting to the point on an app where I need to start caching things, and it got me thinking...
In some parts of the app, I render table rows (jqGrid, slickgrid, etc.) or fancy div rows (like in the New Twitter) by grabbing pure JSON and running it through something like Mustache, jquery.tmpl, etc.
In other parts of the app, I just render the info in pure HTML (server-side HAML templates), and if there's searching/paginating, I just go to a new URL and load a new HTML page.
Now the problem is in caching and maintainability.
On one hand I'm thinking: if everything were built using JavaScript HTML templates, then my app would serve just an HTML layout/shell and a bunch of JSON. If you look at the Facebook and Twitter HTML source, that's basically what they're doing (95% JSON/JavaScript, 5% HTML). This would mean my app only needed to cache JSON (pages, actions, and/or records), so you'd hit the cache whether you were a remote API developer accessing a JSON API or the straight web app. That is, I wouldn't need two caches, one for the JSON and one for the HTML. That seems like it'd cut my cache store down by half and streamline things a little bit.
On the other hand, I'm thinking, from what I've seen/experienced, generating static HTML server-side, and caching that, seems to be much better performance wise cross-browser; you get the graphics instantly and don't have to wait that split-second for javascript to render it. StackOverflow seems to do everything in plain HTML, so does Google, and you can tell... everything appears at once. Notice how though on twitter.com, the page is blank for .5-1 seconds, and the page chunks in: the javascript has to render the json. The downside with this is that, for anything dynamic (like endless scrolling, or grids), I'd have to create javascript templates anyway... so now I have server-side HAML templates, client-side javascript templates, and a lot more to cache.
My question is, is there any consensus on how to approach this? What are the benefits and drawbacks from your experience of mixing the two versus going 100% with one over the other?
Update:
Some reasons that factor into why I haven't yet made the decision to go with 100% javascript templating are:
Performance. Haven't formally tested, but from what I've seen, raw html renders faster and more fluidly than javascript-generated html cross-browser. Plus, I'm not sure how mobile devices handle dynamic html performance-wise.
Testing. I have a lot of integration tests that work well with static HTML, so switching to javascript-only would require 1) more focused pure-javascript testing (jasmine), and 2) integrating javascript into capybara integration tests. This is just a matter of time and work, but it's probably significant.
Maintenance. Getting rid of HAML. I love HAML, it's so easy to write, it prints pretty HTML... It makes code clean, it makes maintenance easy. Going with javascript, there's nothing as concise.
SEO. I know Google handles the AJAX /#!/path, but I haven't grasped how this will affect other search engines or how older browsers handle it. It seems like it'd require a significant setup.
Persistent private data storage.
You need a server to store data with various levels of public/private access. You also need a server for secure, closed-source information, and to do heavy lifting that you don't want to do on the client. Complex data querying is best left up to your database engine; indexing and searching are not yet optimised for JavaScript.
You also have the issue of older browsers being far slower. If you're not running FF4/Chrome or IE9, there is a big difference between doing data manipulation and page construction on the client versus on the server.
I myself am going to try building a web application made entirely with an MVC framework and templates, but still using the server to connect to a secure and optimised database.
But in general, an application can indeed be built entirely in JavaScript using templates. The various constructs and JavaScript engines have advanced enough to make this possible, and there are enough popular frameworks out there for it. Pure JavaScript web applications are no longer experiments and prototypes.
Oh, and if we're recommending frameworks for this, then take a look at Backbone.js.
Security
Let's not forget that we do not trust the client. We need server-side validation. JavaScript is interpreted, dynamic, and can be manipulated at run time. We never trust client input.
I used to mix these two approaches, but then I switched to client-side rendering entirely because it was really hard to handle heavy JavaScript properly otherwise. As a complete solution, I can recommend the approach of the JavaScriptMVC framework.
It has a view rendering engine called EJS which can compress your views into plain JavaScript for a production build of your application. That makes it extremely fast (all your HTML gets preloaded with your single compressed JavaScript file, so it is rendered as soon as you receive your model's JSON data). I personally couldn't notice a performance difference between EJS rendering and transferring plain HTML.
Then for your API, following the REST principles, caching is one of the key constraints. So if your application supports HTTP caching properly for JSON data and uses your compressed client-side templates, you might even see a performance improvement.
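A sketch of what proper HTTP caching for JSON could look like on the server (plain Node.js; the payload is invented):

var http = require('http');
var crypto = require('crypto');

http.createServer(function (req, res) {
    var body = JSON.stringify({items: [1, 2, 3]});   // hypothetical payload
    var etag = crypto.createHash('md5').update(body).digest('hex');

    if (req.headers['if-none-match'] === etag) {
        res.writeHead(304);   // the client's cached copy is still valid
        res.end();
        return;
    }
    res.writeHead(200, {'Content-Type': 'application/json', 'ETag': etag});
    res.end(body);
}).listen(8080);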
I could be way off here, but...
Have you ever looked at CouchDB? (I have no affiliation with them, BTW.) I could be wrong, but your situation sounds like it may be a perfect fit for Apache CouchDB. I haven't really used it myself yet, but I took a good look at it a short while back and it is a very interesting database.
It is a document-based database that uses a REST API for connections (very versatile and easy to use). It is also very JSON-centric, very fast, and has a tiny footprint; they say it can reside on phones and other embedded devices, while at the same time being extremely scalable (upwards, that is). If you're a big JS user (which you sound like you are), then you may be right at home with it.
I was just thinking that it may come in handy in any number of ways that have been proposed here and thought I'd chime in just to give you an idea for storage options :)

Just In General: JS Only Vs Page-Based Web Apps

When developing a web app, versus a web site, what reasons are there to use multiple HTML pages rather than one HTML page that does everything through JavaScript?
I would expect that it depends on the application -- maybe -- but would appreciate any thoughts on the subject.
Thanks in advance.
EDIT:
Based on the responses here, and some of my own research, if you wanted to do a single-page, fully JS-Powered site, some useful tools would seem to include:
jQuery plug-ins:
jQuery History:
http://balupton.com/projects/jquery-history
jQuery Address:
http://plugins.jquery.com/project/jquery-address
jQuery Pagination:
http://plugins.jquery.com/project/pagination
Frameworks:
Sproutcore
http://www.sproutcore.com/
Cappuccino
http://cappuccino.org/
Possibly, JMVC:
http://www.javascriptmvc.com/
Page-based applications provide:
ability to work on any browser or device
simpler programming model
they also provide the following (although these are solvable by many js frameworks):
bookmarkability
browser history
refresh or F5 to repeat action
indexability (in case the application is public and open)
One of the bigger reasons is going to be how searchable your website is.
Doing everything in JavaScript is going to make it complicated for search engines to crawl all the content of your website, and thus not index it fully. There are ways around this (with Google's recent AJAX SEO guidelines), but I'm not sure whether all search engines support this yet. On top of that, it's a little more complex than just making separate pages.
The bigger issue, whether you decide to build multiple HTML pages, or you decide to use some sort of framework or CMS to generate them for you, is that the different sections of your website have URL's that are unique to them. E.g., an about section would have a URL like mywebsite.com/about, and that URL is used on the actual "about" link within the website.
One of the biggest downfalls of single-page, Ajax-ified websites is complexity. What might otherwise be spread across several pages suddenly finds its way into one huge, master page. Also, it can be difficult to coordinate the state of the page (for example, tracking if you are in Edit mode, or Preview mode, etc.) and adjusting the interface to match.
Also, one master page that is heavy on JS can be a performance drag if it has to load multiple, big JS files.
At the OP's request, I'm going to discuss my experience with JS-only sites. I've written four relevant sites: two JS-heavy (Slide and SpeedDate) and two JS-only (Yazooli and GameCrush). Keep in mind that I'm a JS-only-site bigot, so you're basically reading John Hinckley on the subject of Jodie Foster.
The idea really works. It produces graceful, responsive sites at very low operational costs. My estimate is that the cost of bandwidth, CPU, and such drops to 10% of the cost of running a similar page-based site.
You need fewer but better (or at least better-trained) programmers. JavaScript is a powerful and elegant language, but it has huge problems that a more rigid and unimaginative language like Java doesn't have. If you have a whole bunch of basically mediocre guys working for you, consider JSP or Ruby instead of JS-only. If you are required to use PHP, just shoot yourself.
You have to keep basic session state in the anchor tag. Users simply expect that the URL represents the state of the site: reload, bookmark, back, forward. jQuery's Address plug-in will do a lot of the work for you.
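The browser mechanism underneath that plug-in is small enough to sketch here (render is a hypothetical function):

// Put view state in the URL fragment so reload/bookmark/back/forward all work
function showSection(name) {
    window.location.hash = name;   // e.g. http://example.com/#inbox
}

// Restore state on every Back/Forward press (and on load)
window.onhashchange = function () {
    var section = window.location.hash.replace('#', '') || 'home';
    render(section);   // hypothetical: redraw the page for this section
};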
If SEO is an issue for you, investigate Google Ajax Crawling. Basically, you make a very simple parallel site, just for search engines.
When would I not use JS-only? If I were producing a site that was almost entirely content, where the user did nothing but navigate from one place to another, never interacting with the site in a complicated manner. So, Wikipedia and ... well, that's about it. A big reference site, with a lot of data for the user to read.
Modularization.
Multiple files allow you to more cleanly break out different workflow paths and process parts.
Chances are your business rules do not usually directly impact your layout rules, and multiple files help you edit what needs to be edited without the risk of breaking something unrelated.
I actually just developed my first application using only one page.
..it got messy
My idea was to create an application that mimicked the desktop environment as much as possible. In particular, I wanted a detailed view of some app data to live in a popup window that would maintain its state regardless of the section of the application the user was in.
Thus my Frankenstein was born.
What ended up happening due to budget/time constraints was the code got out of hand. The various sections of my JavaScript source got muddled together. Maintaining the proper state of various views I had proved to be... difficult.
With proper planning and technique I think the 'one-page' approach is a very easy way to open up some very interesting possibilities (ex: widgets that maintain state across application sections). But it also opens up many... many potential problem areas. including...
Flooding the global namespace (if you don't already have your own... make one)
Code organization can easily get... out of hand
Context - It's very easy to
I'm sure there are more...
In short, I would urge you to stay away from JavaScript dependency for the compatibility issues alone. What I've come to realize is that there is simply no need to rely on JavaScript for everything.
I'm actually in the process of removing JavaScript dependencies in favour of Progressive Enhancement. It just makes more sense. You can achieve the same or similar effects with properly coded JavaScript.
The idea is to...
Develop a well-formatted, fully functional application without any JavaScript
Style it
Wrap the whole thing with JavaScript
Using Progressive Enhancement, one can develop an application that delivers the best possible experience for the user.
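A small example of that last step, assuming a plain HTML form that already works without JavaScript:

<form id="comment-form" action="/comments" method="post">
    <textarea name="body"></textarea>
    <input type="submit" value="Post"/>
</form>

<script type="text/javascript">
// The "wrap" step: intercept the submit only when JS is available
$('#comment-form').submit(function () {
    $.post(this.action, $(this).serialize(), function (html) {
        $('#comments').append(html);   // hypothetical server-rendered snippet
    });
    return false;   // without JS the form simply posts normally
});
</script>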
For some additional arguments, check out The Single Page Interface Manifesto and some (mostly) negative reaction to it on Hacker News (link at the bottom of the SPI page):
The Single Page Interface Manifesto: http://itsnat.sourceforge.net/php/spim/spi_manifesto_en.php
stofac, first of all, thanks for the link to the Single Page Interface (SPI) Manifesto (I'm the author of that boring text).
That said, SPI != doing everything through JavaScript.
Take a look at this example (server-centric):
http://www.innowhere.com/insites/
The same in GAE:
http://itsnatsites.appspot.com/
More info about the GAE approach:
http://www.theserverside.com/news/thread.tss?thread_id=60270
In my opinion, coding a complex SPI application/web site fully in JavaScript is very, very complex and problematic. The best approach, in my opinion, is "hybrid programming" for SPI: a mix of server-centric for big state management and client-centric (a.k.a. JavaScript by hand) for special effects.
Doing everything on a single page using ajax everywhere would break the browser's history/back button functionality and be annoying to the user.
I utterly despise JS-only sites where it is not needed. That extra condition makes all the difference. By way of example, consider the oft-quoted Google Docs: in this case it not only improves the experience, it is essential. But some parts of Google Help have been JS-only, and yet it adds nothing to the experience; it is only showing static content.
Here are reasons for my upset:
Like many, I am a user of NoScript and love it. Pages load faster, I feel safer and the more distracting adverts are avoided. The last point may seem like a bad thing for webmasters but I don't want anyone to get rewarded for pushing annoying flashy things in my face, if tactless advertisers go out of business I consider it natural selection.
Obviously this means some visitors to your site are either going to be turned away or feel hassled by the need to provide a temporary exclusion. This reduces your audience.
You are duplicating effort. The browser already has a perfectly good history function and you shouldn't need to reinvent the wheel by redrawing the previous page when a back button is clicked. To make matters worse going back a page shouldn't require re-rendering. I guess I am a student of If-it-ain't-broke-don't-fix-it School (from Don't-Repeat-Yourself U.).
There are no HTTP headers when traversing "pages" in JS. This means no cache controls, no expiries, content cannot be adjusted for requested language nor location, no meaningful "page not found" nor "unavailable" responses. You could write error handling routines within your uber-page that respond to failed AJAX fetches but that is more complexity and reinvention, it is redundant.
No caching is a big deal for me, without it proxies cannot work efficiently and caching has the greatest of all load reducing effects. Again, you could mimic some caching in your JS app but that is yet more complexity and redundancy, higher memory usage and poorer user experience overall.
Initial load times are greater. By loading so much Javascript on the first visit you are causing a longer delay.
More JavaScript complexity means more debugging in various browsers. Server-side processing means debugging only once.
Unfuddle (a bug-tracker) left a bad taste. One of my most unpleasant web experiences was being forced to use this service by an employer. On the surface it seems well suited; the JS-heavy section is private so doesn't need to worry about search engines, only repeat visitors will be using it so have time to turn off protections and shouldn't mind the initial JS library load.
But its use of JS is pointless; most content is static. "Pages" were still being fetched (via AJAX), so the delay was the same. With the benefit of AJAX it should have been polling in the background to check for changes, but I wouldn't get notified when the visible page had been modified. Sections had different styles, so there was an awkward re-rendering when traversing them; loading external stylesheets via JavaScript is Bad Practice™. Ease of use was sacrificed for whizz-bang "look at our Web 2.0" features. Such a business-orientated application should concentrate on speed of retrieval, but it ended up slower.
Eventually I had to refuse to use it as it was disrupting the team's work flow. This is not good for client-vendor relationships.
Dynamic pages are harder to save for offline use. Some mobile users like to download in advance and turn off their connection to save power and data usage.
Dynamic pages are harder for screen readers to parse. While the number of blind users is probably smaller than the number with NoScript or a mobile connection, it is inexcusable to ignore accessibility, and in some countries it is even illegal; see the "Disability Discrimination Act" (1999) and "Equality Act" (2010).
As mentioned in other answers, "Progressive Enhancement", née "Unobtrusive JavaScript", is the better approach. When I am required to make a JS-only site (remember, I don't object to it on principle, and there are times when it is valid), I look forward to implementing the aforementioned AJAX crawling, and I hope it becomes more standardised in future.
