I've read about SPAs and their advantages. I find most of them unconvincing. There are 3 advantages that arouse my doubts.
Question: Can you act as an advocate of SPA and show that I am wrong about the first three statements?
=== ADVANTAGES ===
1. SPA is extremely good for very responsive sites:
Server-side rendering is hard to implement for all the intermediate
states - small view states do not map well to URLs.
Single page apps are distinguished by their ability to redraw any part
of the UI without requiring a server roundtrip to retrieve HTML. This
is achieved by separating the data from the presentation of data by
having a model layer that handles data and a view layer that reads
from the models.
What is wrong with keeping a model layer in a non-SPA? Is SPA the only architecture compatible with MVC on the client side?
2. With SPA we don't need to use extra queries to the server to download pages.
Hah, and how many pages can a user download while visiting your site? Two, three? Instead, other security problems appear, and you need to split your login page, admin page, etc. into separate pages, which in turn conflicts with the SPA architecture.
3. Are there any other advantages? I haven't heard of any others.
=== DISADVANTAGES ===
Client must enable javascript.
Only one entry point to the site.
Security.
P.S. I've worked on both SPA and non-SPA projects, and I'm asking these questions because I need to deepen my understanding. I don't mean to offend SPA supporters. Don't ask me to read a bit more about SPA; I just want to hear your thoughts on this.
Let's look at one of the most popular SPA sites, Gmail.
1. SPA is extremely good for very responsive sites:
Server-side rendering is not as hard as it used to be, thanks to simple techniques like keeping a #hash in the URL or, more recently, HTML5 pushState. With this approach the exact state of the web app is embedded in the page URL. As in Gmail: every time you open a mail, a special hash tag is added to the URL; if it is copied and pasted into another browser window, it opens the exact same mail (provided the user can authenticate). This approach maps directly to a more traditional query string; the difference is merely in the execution. With HTML5 pushState() you can eliminate the #hash and use completely classic URLs, which can resolve on the server on the first request and then load via AJAX on subsequent requests.
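For readers unfamiliar with the technique, here is a minimal sketch of URL-driven state with pushState; the /mail/ URL scheme and the loadMail helper are hypothetical, not taken from Gmail:

// Update the address bar without a full page reload (hypothetical loadMail helper).
function openMail(mailId) {
    history.pushState({ mailId: mailId }, '', '/mail/' + mailId);
    loadMail(mailId); // fetch the mail via AJAX and redraw only that part of the UI
}

// Restore the right view when the user navigates with Back/Forward;
// a URL pasted into a new window is resolved by the server on the first request.
window.addEventListener('popstate', function (event) {
    if (event.state && event.state.mailId) {
        loadMail(event.state.mailId);
    }
});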
2. With SPA we don't need to use extra queries to the server to download pages.
The number of pages a user downloads during a visit to my web site? Really, how many mails does someone read when they open their mail account? I read more than 50 in one go. The structure of the mails is almost always the same, yet with a server-side rendering scheme the server would render it on every request (in the typical case).
- Security concern - whether you should or should not keep separate pages for admins/login depends entirely on the structure of your site (take paytm.com, for example). Making a web site an SPA does not mean that you open all the endpoints to all the users; for instance, I use forms authentication with my SPA web site.
- In AngularJS, probably the most used SPA framework, the developer can load HTML templates from the server depending on the user's authentication level. Pre-loading HTML for all the auth types isn't SPA.
3. Are there any other advantages? I haven't heard of any others.
These days you can safely assume the client will have a JavaScript-enabled browser.
Only one entry point to the site: as I mentioned earlier, maintaining state is possible, so you can have as many entry points as you want - but you should certainly have one.
Even in an SPA the user only sees what he has the proper rights to see. You don't have to inject everything at once; loading different HTML templates and JavaScript asynchronously is also a valid part of an SPA.
Advantages that I can think of are:
Rendering HTML obviously takes some resources, and now every user visiting your site is doing that work. Also, it is not only rendering - the major logic is now done client side instead of server side.
Date/time issues - I just give the client UTC time in a preset format and don't even care about time zones; I let JavaScript handle it. This is a great advantage compared to when I had to guess time zones based on a location derived from the user's IP.
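A minimal sketch of that approach, assuming the server sends an ISO 8601 UTC timestamp and that a #sent-at element exists on the page:

// The server sends UTC, e.g. "2016-06-19T14:30:00Z"; the browser renders it in the user's local time zone.
var utcString = '2016-06-19T14:30:00Z';
document.getElementById('sent-at').textContent = new Date(utcString).toLocaleString();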
To me, state is more nicely maintained in an SPA because once you have set a variable you know it will be there. This gives the feel of developing an app rather than a web page, which helps a lot when making sites like foodpanda, flipkart or amazon, because if you are not using client-side state you are using expensive sessions.
Websites surely become extremely responsive - to take an extreme example, try making a calculator in a non-SPA website (I know it's weird).
Updates from Comments
It doesn't seem like anyone mentioned sockets and long-polling. If you log out from another client, say a mobile app, then your browser should also log out. If you don't use an SPA, you have to re-create the socket connection every time there is a redirect. This should also work with any updates in data, like notifications, profile updates, etc.
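A minimal sketch of what that comment describes, assuming a hypothetical /ws endpoint and message format; because the SPA never navigates away, this single connection stays open for the whole session:

// One long-lived connection for the whole session (hypothetical endpoint and message shape).
var socket = new WebSocket('wss://example.com/ws');
socket.onmessage = function (event) {
    var message = JSON.parse(event.data);
    if (message.type === 'logout') {
        window.location.href = '/login';   // another client logged this account out
    } else if (message.type === 'notification') {
        showNotification(message.payload); // hypothetical UI helper
    }
};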
An alternate perspective: aside from your website, will your project involve a native mobile app? If yes, you are most likely going to be feeding raw data to that native app from a server (i.e. JSON) and doing client-side processing to render it, correct? So with this assertion, you're ALREADY doing a client-side rendering model. Now the question becomes why you shouldn't use the same model for the website version of your project - kind of a no-brainer. Then the question becomes whether you want to render server-side pages only for SEO benefits and the convenience of shareable/bookmarkable URLs.
I am a pragmatist, so I will try to look at this in terms of costs and benefits.
Note that for any disadvantage I give, I recognize that they are solvable. That's why I don't look at anything as black and white, but rather, costs and benefits.
Advantages
Easier state tracking - no need to use cookies, form submission, local storage, session storage, etc. to remember state between 2 page loads.
Boilerplate content that is on every page (header, footer, logo, copyright banner, etc.) only loads once per typical browser session.
No overhead latency on switching "pages".
Disadvantages
Performance monitoring - hands tied: Most browser-level performance monitoring solutions I have seen focus exclusively on page load time, like time to first byte, time to build the DOM, network round trip for the HTML, the onload event, etc. Updating the page post-load via AJAX would not be measured. There are solutions which let you instrument your code to record explicit measures, like starting a timer when a link is clicked, ending it after rendering the AJAX results, and sending that feedback. New Relic, for example, supports this functionality. By using a SPA, you have tied yourself to only a few possible tools.
Security / penetration testing - hands tied: Automated security scans can have difficulty discovering links when your entire page is built dynamically by a SPA framework. There are probably solutions to this, but again, you've limited yourself.
Bundling: It is easy to get into a situation when you are downloading all of the code needed for the entire web site on the initial page load, which can perform terribly for low-bandwidth connections. You can bundle your JavaScript and CSS files to try to load in more natural chunks as you go, but now you need to maintain that mapping and watch for unintended files to get pulled in via unrealized dependencies (just happened to me). Again, solvable, but with a cost.
Big bang refactoring: If you want to make a major architectural change, like say, switch from one framework to another, to minimize risk, it's desirable to make incremental changes. That is, start using the new, migrate on some basis, like per-page, per-feature, etc., then drop the old after. With traditional multi-page app, you could switch one page from Angular to React, then switch another page in the next sprint. With a SPA, it's all or nothing. If you want to change, you have to change the entire application in one go.
Complexity of navigation: Tooling exists to help maintain navigational context in SPAs, like history.js or Angular 2, most of which rely on either the URL fragment (#) or the newer history API. If every page were a separate page, you wouldn't need any of that.
Complexity of figuring out code: We naturally think of web sites as pages. A multi-page app usually partitions code by page, which aids maintainability.
Again, I recognize that every one of these problems is solvable, at some cost.
But there comes a point where you are spending all your time solving problems which you could have just avoided in the first place. It comes back to the benefits and how important they are to you.
Disadvantages
1. Client must enable javascript. Yes, this is a clear disadvantage of SPA. In my case I know that I can expect my users to have JavaScript enabled. If you can't then you can't do a SPA, period. That's like trying to deploy a .NET app to a machine without the .NET Framework installed.
2. Only one entry point to the site. I solve this problem using SammyJS. 2-3 days of work to get your routing properly set up, and people will be able to create deep-link bookmarks into your app that work correctly. Your server will only need to expose one endpoint - the "give me the HTML + CSS + JS for this app" endpoint (think of it as a download/update location for a precompiled application) - and the client-side JavaScript you write will handle the actual entry into the application.
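For readers unfamiliar with the idea, here is a framework-agnostic sketch of hash-based deep linking; SammyJS wraps this pattern in a nicer routing API, and showPost/showHome are hypothetical view functions:

// Map #/posts/123-style hashes to client-side views (hypothetical showPost/showHome helpers).
function route() {
    var match = location.hash.match(/^#\/posts\/(\d+)$/);
    if (match) {
        showPost(match[1]); // deep-link straight into a bookmarked post
    } else {
        showHome();
    }
}
window.addEventListener('hashchange', route);
route(); // handle whatever hash is present on the initial load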
3. Security. This issue is not unique to SPAs, you have to deal with security in exactly the same way when you have an "old-school" client-server app (the HATEOAS model of using Hypertext to link between pages). It's just that the user is making the requests rather than your JavaScript, and that the results are in HTML rather than JSON or some data format. In a non-SPA app you have to secure the individual pages on the server, whereas in a SPA app you have to secure the data endpoints. (And, if you don't want your client to have access to all the code, then you have to split apart the downloadable JavaScript into separate areas as well. I simply tie that into my SammyJS-based routing system so the browser only requests things that the client knows it should have access to, based on an initial load of the user's roles, and then that becomes a non-issue.)
Advantages
A major architectural advantage of a SPA (that rarely gets mentioned) in many cases is the huge reduction in the "chattiness" of your app. If you design it properly to handle most processing on the client (the whole point, after all), then the number of requests to the server (read "possibilities for 503 errors that wreck your user experience") is dramatically reduced. In fact, a SPA makes it possible to do entirely offline processing, which is huge in some situations.
Performance is certainly better with client-side rendering if you do it right, but this is not the most compelling reason to build a SPA. (Network speeds are improving, after all.) Don't make the case for SPA on this basis alone.
Flexibility in your UI design is perhaps the other major advantage that I have found. Once I defined my API (with an SDK in JavaScript), I was able to completely rewrite my front-end with zero impact on the server aside from some static resource files. Try doing that with a traditional MVC app! :) (This becomes valuable when you have live deployments and version consistency of your API to worry about.)
So, bottom line: If you need offline processing (or at least want your clients to be able to survive occasional server outages) - dramatically reducing your own hardware costs - and you can assume JavaScript & modern browsers, then you need a SPA. In other cases it's more of a tradeoff.
One major disadvantage of SPAs is SEO. Only recently have Google and Bing started indexing AJAX-based pages by executing JavaScript during crawling, and in many cases pages are still indexed incorrectly.
While developing an SPA, you will be forced to handle SEO issues, probably by post-rendering your entire site and creating static HTML snapshots for crawlers to use. This requires a solid investment in proper infrastructure.
Update 19.06.16:
Since writing this answer a while ago, I have gained much more experience with single-page apps (namely AngularJS 1.x), so I have more info to share.
In my opinion, the main disadvantage of SPA applications is SEO, which limits them to "dashboard"-style apps. In addition, you are going to have a much harder time with caching compared to classic solutions. For example, in ASP.NET caching is extremely easy - just turn on OutputCaching and you are good: the whole HTML page will be cached according to the URL (or any other parameters). In an SPA, however, you will need to handle caching yourself (using solutions like a second-level cache, template caching, etc.).
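To make the template-caching point concrete, here is a minimal AngularJS 1.x sketch; the module name and template path are placeholders, not something from the original answer:

// Pre-populate AngularJS 1.x's $templateCache so ng-include and routing
// never re-fetch this template from the server (module/template names are placeholders).
angular.module('app').run(['$templateCache', function ($templateCache) {
    $templateCache.put('views/product-card.html',
        '<div class="product"><h3>{{product.name}}</h3></div>');
}]);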
I would like to make the case that SPAs are best for data-driven applications. Gmail, of course, is all about data and is thus a good candidate for an SPA.
But if your page is mostly for display, for example, a terms of service page, then a SPA is completely overkill.
I think the sweet spot is having a site with a mixture of both SPA and static/MVC style pages, depending on the particular page.
For example, on one site I am building, the user lands on a standard MVC index page. But then when they go to the actual application, then it calls up the SPA. Another advantage to this is that the load-time of the SPA is not on the home page, but on the app page. The load time being on the home page could be a distraction to first time site users.
This scenario is a little bit like using Flash. After a few years of experience, the number of Flash only sites dropped to near zero due to the load factor. But as a page component, it is still in use.
For companies such as Google, Amazon, etc., whose servers run at maximum capacity 24/7, reducing traffic means real money - less hardware, less energy, less maintenance. Shifting CPU usage from the server to the client pays off, and SPAs shine. The advantages outweigh the disadvantages by far.
So, SPA or not SPA depends much on the use case.
Just to mention another, probably not so obvious (to web developers), use case for SPAs:
I'm currently looking for a way to implement GUIs in embedded systems, and a browser-based architecture seems appealing to me. Traditionally there were not many possibilities for UIs in embedded systems - Java, Qt, wx, etc., or proprietary commercial frameworks. Some years ago Adobe tried to enter the market with Flash, but it doesn't seem to have been very successful.
Nowadays, as "embedded systems" are as powerful as mainframes were some years ago, a browser-based UI connected to the control unit via REST is a possible solution. The advantage is the huge palette of UI tools at no cost (Qt, for example, requires $20-30 in royalty fees per unit sold plus $3000-4000 per developer).
For such an architecture, SPA offers many advantages - e.g. a more familiar development approach for desktop-app developers and reduced server access (in the car industry the UI and the system modules are often separate hardware, where the system part runs an RT OS).
As the only client is the built-in browser, the mentioned disadvantages like JS availability, server-side logging and security no longer apply.
2. With SPA we don't need to use extra queries to the server to download pages.
I still have to learn a lot but since I started learn about SPA, I love them.
This particular point may make a huge difference.
In many web apps that are not SPAs, you will see that they still retrieve and add content to the pages by making AJAX requests. So I think that SPA goes one step further by asking: what if the content that is retrieved and displayed via AJAX is the whole page, and not just a small portion of a page?
Let me present a scenario. Consider that you have 2 pages:
a page with list of products
a page to view the details of a specific product
Consider that you are at the list page. Then you click on a product to view the details. The client side app will trigger 2 ajax requests:
a request to get a json object with the product details
a request to get an html template where the product details will be inserted
Then, the client side app will insert the data into the html template and display it.
Then you go back to the list (no request is made for this!) and you open another product. This time there will be only one AJAX request, to get the details of the product; the HTML template is the same, so you don't need to download it again.
You may say that in a non-SPA, when you open the product details, you make only one request, while in this scenario we made two. Yes, but you gain from an overall perspective: as you navigate across many pages, the number of requests will be lower, and the data transferred between the client and the server will be lower too, because the HTML templates are reused. Also, you don't need to download on every request all those CSS, image and JavaScript files that are present on all the pages.
Also, let's say your server-side language is Java. If you analyze the two requests I mentioned, one downloads data (you don't need to load any view file or call the view-rendering engine) and the other downloads a static HTML template, so a plain HTTP web server can serve it directly without calling the Java application server - no computation is done!
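A minimal jQuery sketch of that flow; the URLs, the #detail container and the {{name}} placeholder are hypothetical, and a real app would use a proper template engine instead of string replacement:

var detailTemplate = null; // cached after the first navigation

function showProduct(id) {
    // Request 1: the product data as JSON (hypothetical endpoint).
    $.getJSON('/api/products/' + id, function (product) {
        if (detailTemplate) {
            render(product); // template already cached: only one request was needed
        } else {
            // Request 2: the HTML template, fetched once and then reused.
            $.get('/templates/product-detail.html', function (html) {
                detailTemplate = html;
                render(product);
            });
        }
    });

    function render(product) {
        $('#detail').html(detailTemplate.replace('{{name}}', product.name));
    }
}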
Finally, the big companies are using SPAs: Facebook, Gmail, Amazon. They don't play around; they have great engineers studying all this. So if you don't see the advantages, you can initially trust them and expect to discover the advantages down the road.
But it is important to use good SPA design patterns. You may use a framework like AngularJS. Don't try to implement an SPA without good design patterns, because you may end up with a mess.
Disadvantages:
Technically, the design and initial development of an SPA is complex and can be avoided. Other reasons for not using an SPA can be:
a) Security: a single-page application is less secure than traditional pages with respect to cross-site scripting (XSS).
b) Memory leaks: a memory leak in JavaScript can slow down even a powerful computer. Because traditional websites encourage navigation between pages, any memory leak caused by a previous page is almost entirely cleaned up, leaving less residue behind.
c) The client must enable JavaScript to run an SPA, whereas in a multi-page application JavaScript can be completely avoided.
d) An SPA can grow to a substantial size, causing long waiting times - e.g. working with Gmail on a slow connection.
Apart from the above, other architectural limitations are loss of navigational data, no log of navigation history in the browser, and difficulty with automated functional testing using Selenium.
Try not to consider using an SPA without first defining how you will address security and API stability on the server side. Then you will see some of the true advantages of using an SPA. Specifically, if you use a RESTful server that implements OAuth 2.0 for security, you achieve two fundamental separations of concerns that can lower your development and maintenance costs.
This moves the session (and its security) onto the SPA and relieves your server of all of that overhead.
Your APIs become both stable and easily extensible.
Hinted at earlier, but not made explicit: if your goal is to deploy Android and Apple applications, writing a JavaScript SPA that is wrapped by a native shell hosting the screen in a browser (Android or Apple) eliminates the need to maintain both an Apple code base and an Android code base.
I understand this is an older question, but I would like to add another disadvantage of Single Page Applications:
If you build an API that returns results in a data language (such as XML or JSON) rather than a formatting language (like HTML), you are enabling greater application interoperability, for example in business-to-business (B2B) applications. Such interoperability has great benefits but does allow people to write software to "mine" (or steal) your data. This particular disadvantage is common to all APIs that use a data language, not to SPAs in general (indeed, an SPA that asks the server for pre-rendered HTML avoids this, but at the expense of poor model/view separation). The risk exposed by this disadvantage can be mitigated by various means, such as request limiting and connection blocking.
In my development I found two distinct advantages for using an SPA. That is not to say that the following can not be achieved in a traditional web app just that I see incremental benefit without introducing additional disadvantages.
Potential for fewer server requests, since rendering new content isn't always (or even ever) an HTTP request to the server for a new HTML page. I say potential because new content could easily require an AJAX call to pull in data, but that data can be considerably lighter than the data plus its markup, providing a net benefit.
The ability to maintain "state". In its simplest terms, set a variable on entry to the app and it will be available to other components throughout the user's experience, without passing it around or resorting to a local-storage pattern. Intelligently managing this ability, however, is key to keeping the top-level scope uncluttered.
Other than requiring JS (which is not a crazy thing to require of web apps) other noted disadvantages are in my opinion either not specific to SPA or can be mitigated through good habits and development patterns.
I am developing a standard Rails application, and so far I haven't used any AJAX, just good ol' HTML. My plan is to iteratively add "remote" links and all that kind of stuff, plus support for JS responses, because I know that generating JS server side is very, very evil, but I find it very handy as well - easy, fast, it makes the application snappy enough, and i18n comes out of the box.
Using a pure JSON approach would be lighter, but needs lots of client-side coding.
Now imagine that in this application users have a mailbox, and since the idea is that they will be able to do most or even all of the actions without reloading the page, the mailbox counter will never change unless they refresh the page manually.
So, here comes the question: Which is the best way to handle this?
I thought about using Ember (for data binding), and sharing views with rails, via some sort of handlebars implementation for ruby. That would be quite awesome, but not very transparent for the developer (me). Although I guess that I only need to write handlebars views that will be used by ember, the rest can still be written in their original format, no?
Another option might be to use some sort of event system (EventSource maybe?), keep the handy JS views approach, and listen to those events. I guess those should be JSON objects, and the client must be coded to be able to handle them. This seems a bit cumbersome, and I need a solution for Heroku (Faye?), which is where my app is hosted. Any hints?
I think that the ember approach is the more robust one, but seems to be quite complex as well, and I don't want to repeat myself server and client side.
EDIT:
I have seen this, which is more or less the option #2.
One of the advantages of using a JavaScript framework is that the whole application can be concatenated and compressed into one JavaScript file. Provided that modern browsers aggressively cache JavaScript, the browser would no longer need to request those assets after initial page load.
Another advantage of using a JavaScript framework is that it requires you to be a consumer of your own API. Fleshing out the application's API for your own consumption might lend to less work in the future if there is a possibility of mobile applications or 3rd parties having access to it.
If you do not need your application to respond to every request with an equivalent HTML response, I think a compelling case could be made for using a JavaScript framework.
Many of those benefits might be lost if your application needs to respond to every request with an equivalent HTML template. The Ember core team has been relatively vocal in its opposition to supporting this style of progressive enhancement. Considering the tools for using a JavaScript framework in this way are relatively unstable and immature, I might be prone to using option 2 to accomplish this.
I am just starting to look at MVC structure. First I looked at how Backbone.js works, and now I have just completed Rails for Zombies by Code School. I know that I haven't delved too far into any of this, but I have a question to begin with.
Can you use these libraries together?
I have learned how to create models, views, etc in both, but when creating a real application do you use both backbone and rails?
If so...
When do you use a backbone.js model vs. a rails model?
Maybe I am just getting ahead of myself and need to keep practicing and doing tutorials but I couldn't seem to find anything directly on this.
Thanks!
Before anything else I'd suggest taking a look at thoughtbot's Backbone.js on Rails book, which is a great starting point, although aimed at an intermediate to advanced audience. I bought this book having already worked with rails but as a total backbone.js beginner and it has served me very well.
Beyond that, there are some fundamental issues with combining these frameworks which go beyond the details covered in this book and other books. Below are some things I'd suggest you think about, from my own experiences pairing RoR and backbone.js. This is a long answer and strays a bit from the specifics of your question, but I hope it might help you out in the "big picture" sense of understanding the problem you're facing.
Rails: Web Framework vs API
The first thing you confront when using backbone.js on top of a rails application is what to do about views, but this is really just the surface of a much deeper issue. The problem goes to the very heart of what it means to create a RESTful web service.
Rails is set up out of the box to encourage its users to create RESTful services, by structuring routing in terms of a set of resources accessed at uniform URIs (defined in your routes.rb file) through standard HTTP actions. So if you have a Post model, you can:
Get all posts by sending GET request to /posts
Create a new post by sending a GET request to /posts/new, filling out the form and sending it (a POST request) to /posts
Update a post with id 123 by sending a GET request to /posts/123/edit, filling out the form and sending it (a PUT request) to /posts/123
Destroy a post with id 123 by sending a DELETE request to /posts/123
The key thing to remember about this aspect of Rails is that it is fundamentally stateless: regardless of what I was doing previously, I can create a new Post simply by sending a POST request with a valid form data to the correct URI, say /posts. Of course there are caveats: I may need to be logged in (have a session cookie identifying me), but in essence Rails doesn't really care what I was doing before I sent that request. I could follow it up by updating another post, or by sending a valid action to whatever other resources are made available to me.
This aspect of how Rails is designed makes it relatively easy to turn a (Javascript-light) Rails web application into an API: the resources will be similar or the same, the web framework returning HTML pages while the API (typically) returns data in JSON or XML format.
Backbone.js: A new stateful layer
Backbone is also based on RESTful resources. Whenever you create, update or destroy a backbone.js model, you do so via the standard HTTP actions sent to URIs which assume a RESTful architecture of the kind described above. This makes it ideal for integrating with RESTful services like RoR.
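As a minimal illustration of that mapping, here is a sketch of a Backbone model pointed at the /posts resource described above (the calls are illustrative, not a sequence meant to run as-is):

// A Backbone model/collection mapped onto the Rails resource above.
var Post = Backbone.Model.extend({ urlRoot: '/posts' });
var Posts = Backbone.Collection.extend({ model: Post, url: '/posts' });

var posts = new Posts();
posts.fetch();                   // GET    /posts
var post = new Post({ title: 'Hello' });
post.save();                     // POST   /posts (the model has no id yet)
post.save({ title: 'Updated' }); // PUT    /posts/:id (once the server has assigned an id)
post.destroy();                  // DELETE /posts/:id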
But there is a subtle point to be stressed here: backbone.js integrates seamlessly with Rails as an API. That is to say, if you strip away the HTML views and just use Rails for serving RESTful resources, integrating with the database, performing session management, etc., then it integrates very nicely with the structure that backbone.js provides for client-side code. Many people argue that there's nothing wrong with using rails this way, and I think in many ways they are right.
The complications arise though from the issue of what to do with that other part of Rails that we've just thrown away: the views and what they represent.
Stateful humans, stateless machines
This is actually more important than it may initially seem. HTML views represent the stateless interface that humans use for accessing the RESTful resources your service provides. Doing away with them leaves you with two access points:
For humans: a rich, client-side interface provided by the backbone.js layer (stateful)
For machines: a resource-oriented RESTful API provided by the rails layer (stateless)
Notice that there is no longer a stateless (RESTful) interface for humans. In contrast, in a traditional rails app with an API, we had something closer to this:
HTML resources for humans (stateless)
JSON/XML resources (API) for machines (stateless)
The latter two interfaces for accessing resources are much closer in nature to each other than the previous two. Just think for example of rails' respond_with, which takes advantage of the similarities to wrap various RESTful responders in a unified method.
Working together
This might all seem very abstract and beside the point, I know. To try to make it more concrete, consider the following problem, which gets back to your question about getting rails and backbone.js to work together. In this problem, you want to:
Create a web service with a rich client-side experience using backbone.js, with rails as the back end serving resources in JSON format.
Use pushState to give each page in the app a URL (e.g. /posts/123) which can be accessed directly (by entering it into the browser bar).
For each of these URLs, also serve an HTML page for clients without javascript.
These are not unusual demands for a modern web service, but they create a complex challenge. To make a long story short, you now have to create two "human-oriented" layers:
Stateful client-side interface (backbone.js templates and views)
Stateless HTML resources (Rails HTML views)
The complexity of actually doing this leads many nowadays to abandon the latter of these two and just offer a rich client-side interface. What you decide to do depends on your goals and what you want to achieve, but it's worth thinking about this problem carefully.
As another possible reference for doing that, I'd suggest having a look at O'Reilly's RESTful Web Services. It might seem odd to be recommending a book on REST in a question about Rails and Backbone.js, but actually I think this is the key piece that fits these very different frameworks together, and understanding it more fully will help you take advantage of the strengths of both.
Yes, you can use both side by side. Backbone is for storing and manipulating data within the client browser. It generally needs a server to talk to and fetch the data from. This is where Rails comes in. You can have a web application without heavy client-side code. Backbone is for building out sites that feel more like apps--think of Gmail or Pandora.
I advise just learning Rails first. Once you can get static pages loading and styled as you wish, then understanding Backbone's place will make more sense
I've used rails as a backend server to serve a fairly large website, which included a few one-page apps (built in backbone).
I'd suggest the backbone-on-rails gem. The idea is that your rails server will serve up the backbone app as a script tag in one of your views. You keep your backbone app itself in the rails app/assets folder.
Backbone understands rails routing conventions, and you just need to give it a path to a json api that rails can almost generate for you with rails generate resource.
Other than the syncing between the models, your backbone apps and rails apps are fairly separate. Backbone and Rails don't have quite the same MVC model, but getting them to cooperate is pretty easy.
This is a pretty generic question, but I come from a few years with Flex, and I am not so much experienced with pure web development.
My question is: if you need to build an AJAX app, which of the following approaches would you prefer:
Classical server-side MVC, where controllers return views supplied with model data. Views can be full-blown or partial. Basically, there will be only a small number of full-blown views, which work as containers, and JavaScript will help fill in the gaps with partial HTML views asynchronously. This approach is one step beyond traditional web development, as JavaScript is used only for maintaining the overall control and user interactions.
A full-blown JS app, such as the ones built with Cappuccino, SproutCore or Backbone.js, where the client side is thick and implements client-side MVC that handles the model, the controller logic and the view interactions. The server side in this case plays the role of a set of JSON/XML services with which the client exchanges data. The disadvantage here is that the view templates have to be loaded at the beginning, when the initial application is bootstrapped, so that JavaScript can lay out the markup based on the data. The advantages are the reduced weight of the server response, as well as better control on the client, which allows things like view-model binding to be applied.
A somewhat mixed approach between those two.
I am favoring the second one, which is natural since I come from a similar environment, but with it I am mostly concerned about issues such as URL routing (or deep linking, as we call it in Flash), state management, modularity, and view layout (when do the view markup templates get loaded? Should there be specific server endpoints that provide those templates on demand, so that the template data does not get loaded at the beginning?).
Please, comment
I prefer #2 myself, but I dig javascript :)
Unfortunately, I have never even seen what flex code looks like. My experience is with rails, so I will talk in those terms, but hopefully the concepts are universal enough that the answer will make sense
As for client side templates, the best is when your server side platform of choice has a story for it (like rails 3.1 asset pipeline or the jammit plugin for pre 3.1). If you are using rails I can give more info, but if you aren't the first thing I would do is look into finding an asset management system that handles this out of the box.
My fallback is generally to just embed templates into my server side templates inside of a script tag like
<script type='text/html' id='foo-template'></script>
To retrieve the string later, you can do something like this (jquery syntax)
var template = $('#foo-template').html();
In my server side templates, I will pull those script tags into their own files as partials, so I still get the file separation (rails syntax)
<%= render :partial => 'templates/foo.html.erb' %>
I much prefer just using Jammit and having my client-side templates in separate files ending in .jst, but the second approach will work anywhere, and you still get most of the same benefits.
I would recommend the second approach. The second approach (thick client, thin server), which you are already familiar with, is preferred by an increasing number of modern developers because the rendering and management of widgets is done on the client, which saves computational and bandwidth overhead on the server. Also, if you have a case of complex widget management, using server-side code for widgets can become increasingly complicated and unmanageable.
The disadvantage pointed by you :
view templates have to be loaded at the beginning, when the initial
application is bootstrapped, so that javascript can layout the markup
based on the data.
is not correct. You can very well load static templates on the fly as required through AJAX and then render them into full-blown widgets using JavaScript.
For example, if you have an image gallery with an image editor component, you may not load the files required for the image editor (including images, templates and widget-rendering code) until the user actually chooses to edit an image.
Using script loaders (e.g. RequireJS, LABjs) you can initially load only a small to medium sized bootstrapping script and then load the rest of the scripts dynamically depending on the requirements.
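A minimal sketch of that lazy-loading idea with RequireJS; the 'imageEditor' module name, the #edit-button element and the openEditor call are hypothetical:

// Load the image editor module only when the user actually asks for it.
document.getElementById('edit-button').addEventListener('click', function () {
    require(['imageEditor'], function (imageEditor) {
        imageEditor.openEditor(); // hypothetical module API
    });
});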
Also, powerful and mature server-side widget libraries are only available for Java backends (e.g. Vaadin). If you are working on a PHP, Python or Ruby backend, writing your own server-side UI framework can be serious overkill. It is much more convenient to use client-side, server-agnostic JavaScript UI frameworks, e.g. Dojo, qooxdoo, etc.
You seem to have an inclination towards client-side MVC frameworks. This is a nice approach, but a dual MVC architecture (on the server as well as the client) often tends to lead to code duplication and confusion. For this reason I won't recommend the mixed approach.
You can have a proper MVC framework on the frontend and only a server-side model layer that interacts with the application through a RESTful API (or RPC if you are so inclined).
Since you are coming from a Flex background, I would recommend you check out the Ajax.org UI platform, http://ui.ajax.org. Their user interface framework is tag-based like Flex, and though the project is new they have a powerful set of widgets and very impressive charting and data-binding solutions. Dojo and Ample SDK also adopt a tag-based widget layout system.
qooxdoo and Ext JS advocate doing everything, from layout to rendering, through JavaScript, which might be inconvenient for you.
I am the architect of a mobile web application that has 100,000 users, with 20,000 of them online at the same time.
For that kind of application (e.g. limited bandwidth), #2 is the only option, I think.
So the server side is just a set of data services, and the client uses pure AJAX RPC.
In our case we use a single static index.htm file that contains everything. We also use the HTML5 application cache manifest to reduce round trips to the server for scripts/styles/images on startup, plus localStorage for app state persistence and caching.
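A minimal sketch of that kind of state persistence; the appState key and its shape are placeholders:

// Persist lightweight app state across sessions in localStorage (key and shape are placeholders).
function saveState(state) {
    localStorage.setItem('appState', JSON.stringify(state));
}

function loadState() {
    var raw = localStorage.getItem('appState');
    return raw ? JSON.parse(raw) : { lastView: 'inbox', draft: null };
}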
As for MVC: there are many template engines out there, so you can use whichever is most convenient for you. The templates themselves are pretty compact, as they do not contain any data, so it is OK (in our case) to include them all.
Yes, the architecture of such an application needs to be well thought out upfront. Option #1 is not so critical - the entry level is lower.
I don't know what platform you are targeting, but as I said, #2 is probably the only option for mobile.
I am facing a decision about the web application architecture I am going to work on.
We are a small team and actually I will work on it alone (everyone else is working on something else).
This application will consist of a front end built on the ExtJS library, and it will use the model "load page, build GUI and never refresh".
On the web "desktop" there will be a lot of data windows, map views (using OpenLayers + GeoExt) and other stuff.
The GUI should be flexible and allow every user to modify (and persist) the layout to fit his/her needs.
It should be possible to divide the application into modules / parts / ... and then let users in specific groups use only specific modules. In other words, each group of users can have a different GUI available on the web "desktop".
Questions are:
First of all, is this approach good?
There will be a lot of AJAX calls from clients; maybe this could be a problem.
How to handle code complexity on client side?
So far I have decided to use the dojo.require / dojo.provide feature and divide the client-side code into modules (for production they will be put together using the Dojo build system).
I am thinking about using some kind of IoC container on the client side, but I'm not sure which one yet. It is very likely that I will write one myself; it should not be difficult in a dynamic language like JavaScript.
How to handle AJAX calls on the server?
Should I use WCF on the server side, or just an ordinary ashx handler?
How to handle code complexity on the server side?
I want to use Spring.NET; maybe this approach could help with the modularity problem.
Data access - here I am pretty sure what to use:
For DAL classes I will use NHibernate, and then compose them with business classes using Spring.NET.
I would really appreciate some advice about which way to go.
I know about a lot of technologies, but I have used only a small part of them.
I don't have time to explore all of them and be fine with the decision.
We do this type of single page interface where I work on a pretty large scale for our clients. (Our site is not an internet site)
This seems to work pretty well for us. The more JS you have, the more difficult it gets to maintain, so have as many automated JS tests as you can and try to break up your JS logic in an MVC fashion. Ext 4.0 is supposed to make this much easier.
Ext 4.0 has this built in if you are trying to limit the code you bring down. If you have the same users day after day, then I think it would be best to just bring all the source down (compressed and minified) and cache it.
We've found asmx to work really well. I have nothing against wcf, but last I looked it seemed like more trouble than it was worth. I know they have made many improvements recently. asmx just works though (with a few request header changes and managing the "d." on the client side).
Our Server side data access layer is pretty complex, but the interface for the ajax calls is pretty simple. You have not really given enough info to answer this part. I would start as simple as possible and refactor often.
We are also using nHibernate. Works fine for us. We have built a DDD model around it. It can take a lot of work to get that right though (not sure if we have it right after months of working at it).
If I were you I'd start with just ExtJS, your web service technology, and NHibernate.
I would recommend ASP.NET MVC 3 with Razor instead. Rather than a lot of JavaScript and calls to a service, you can just make AJAX calls to an action in a controller; that will let you have more maintainable code and use an IoC container like Ninject. And EF instead of NHibernate.
But it's your decision.
I would look into using a tool like Google Closure Compiler, especially if you're dealing with a very large project. I don't have too much experience with ExtJS, but large projects in JavaScript are hard and something like Closure Compiler tends to make it easier.