MEAN Stack - Should I separate servers?

I'm working on a personal project with the MEAN stack and would greatly appreciate advice on which route I should take when setting up the server architecture. Performance and scalability are important to me, since I would like to build enterprise-level web apps in the future.
This project will be an image-hosting application comprising a front-end, a private API, and a file storage system.
Option 1: All on the same server
Option 2: Front-end and Private API on same server. File storage on separate server.
Option 3: Front-end, Private API and File storage would all be on their own server.
I'm thinking Option 2 might be the best option, but I would love to learn from those who have experience in building apps with a similar architecture.
Thanks!

Depending on how much scale you are expecting and how much you want to spend on server infrastructure, different setups are suitable. I will try to describe a best-of-both-worlds approach.
All API-based services should run on infrastructure that can scale. For that reason, they usually take the form of servers sitting behind load balancers, with scaling based on traffic type and region. The current industry favorite for this is Docker-based microservices; a comparable solution is Google App Engine.
All front-end/UI assets should be served from a CDN for optimized delivery. With a CDN you ensure that your application's UI remains available to end users even if your private API is slow. CDNs are cheap, and they will have a great impact on your end users.
For image/file storage, you should use a blob-storage solution. Server disks are mostly SSDs these days, and they are expensive. Moreover, those disks are attached to your servers, which leaves them exposed to failures and security issues. Using blob storage is helpful because it gives you redundant, scalable storage along with some form of security.
With this model, your files stay safe and separate from your business logic, your end users can access your web app even when your core service is slow, and you can easily manage servers for API/service-level scaling.
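To make the blob-storage point concrete, here is a minimal sketch (assuming Node/Express and the aws-sdk v2 package; the bucket name and route are hypothetical) of an endpoint that hands the browser a short-lived pre-signed upload URL, so image bytes go straight to blob storage and never pass through your API server:

    // Sketch: issue a pre-signed S3 upload URL so the browser uploads
    // directly to blob storage instead of through the API server.
    // Assumes `npm install express aws-sdk` and AWS credentials in the
    // environment; the bucket name and route are hypothetical.
    const express = require('express');
    const AWS = require('aws-sdk');

    const app = express();
    const s3 = new AWS.S3({ region: 'us-east-1' });

    app.get('/api/upload-url', (req, res) => {
      const key = `images/${Date.now()}-${req.query.name}`;
      const url = s3.getSignedUrl('putObject', {
        Bucket: 'my-image-bucket',   // hypothetical bucket
        Key: key,
        Expires: 60,                 // URL valid for 60 seconds
        ContentType: req.query.type, // e.g. image/png
      });
      res.json({ url, key });        // the client PUTs the file to `url`
    });

    app.listen(3000);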

You should not bake your server architecture in right upfront. Instead, measure along the way and improve the parts that turn out to be lacking.
So I would go with the same server approach at first.
But design the Frontend as a serverless architecture.
Create a Private API and a separate File storage API for reading/writing. These can be microservices ready to scale when needed.
You can host the pieces on different clouds, or use your own, to scale vertically, horizontally, or geographically.
So you bake the scaling part into your design, but only scale when you need to.
Basically, the only thing you need to take care of now is that your architecture is decoupled into pieces, so you are ready to scale when the need arises.
Note: two good reads on these topics are Infrastructure as Code and Building Microservices.
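To illustrate that decoupling (a minimal sketch, assuming Express; all route names are hypothetical): the private API and the file-storage API start as separate routers in one process, and because they share nothing, each can later be lifted out into its own service behind a load balancer.

    // Sketch: one server today, split-ready tomorrow. The two routers
    // share no state, so each can later move to its own process/host.
    const express = require('express');

    const privateApi = express.Router();
    privateApi.get('/users/:id', (req, res) => {
      res.json({ id: req.params.id }); // placeholder business logic
    });

    const fileApi = express.Router();
    fileApi.get('/files/:key', (req, res) => {
      res.sendStatus(501); // placeholder: stream from disk or a blob store
    });

    const app = express();
    app.use('/api', privateApi);   // later: its own service
    app.use('/storage', fileApi);  // later: its own service
    app.listen(3000);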

Related

Is completely mocking the back-end logic for the front-end reasonable?

My boss asked me to find a way to completely disassociate our front-end application from the back-end in the local environment. I'm currently the sole developer for both our back-end software and the front-end, so using Docker I'm able to mimic a production environment and work on both projects separately (we don't render on the server side). His idea is to mock literally everything, so that in theory you wouldn't need the back-end software at all to develop the front-end.
Two of the (more reasonable) solutions I've thought of are:
Mocking all of the network requests on the frontend, so that these mock functions run instead of real network requests.
The problem with this approach is that it is not persistent: all of the data is randomly generated for every request, and in a system so oriented around forms, tables, and lists, I feel that getting back the data you're expecting after a form submission is a must. To persist data, every request would have to go through some sort of data store (MobX, Redux, etc.), and even then, if the page refreshes, the data is gone.
Spinning up an Express server and DB on top of Docker along with Webpack, and mimicking the production server's requests and responses using DB seeders; this way the front-end data is persistent.
Obviously, this approach would generate plenty of work, and to make sure the Express server correctly mimics the original back-end software, it too would need unit tests and mock requests.
While mocking the data is great for unit tests, this doesn't seem to me like the way to do front-end work with such a small team. Is there a good approach to achieving this that I can't come up with or find? Or is this an exercise in poor decoupling strategies?
What you are looking for is a Mock API. There are plenty of packages for it where you define example requests in a JSON format. Many of these also handle persisting data for a short amount of time.
From a strategy perspective, using these can actually make a lot of sense to automate end-to-end tests, which shouldn't rely on a production API. Whether it's the right use of developer time in a one-man team depends on the long-term perspective, of course ;-)
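As one example of such a package (an assumption, not necessarily what this answer had in mind), json-server generates a full REST API from a single JSON file and writes changes back to it, which addresses the persistence complaint in the question:

    // Sketch: a persistent mock API using json-server
    // (npm install json-server). db.json might contain:
    //   { "products": [ { "id": 1, "name": "Widget" } ] }
    // json-server generates GET/POST/PUT/DELETE routes per collection and
    // persists writes back to db.json, so mock data survives page refreshes.
    const jsonServer = require('json-server');

    const server = jsonServer.create();
    server.use(jsonServer.defaults());        // logging, CORS, static files
    server.use(jsonServer.router('db.json')); // REST routes from the file
    server.listen(4000, () => console.log('Mock API on :4000'));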

Suggestion for the best way to store persistent data for a light-weight, portable JS-based web app

I'm still new to web development. To learn more about JavaScript (JS) and web development, I am thinking of writing a simple web app that pulls and records time-series data (say, the price of a stock) periodically and draws a live chart showing the historical data. In addition to price data, I would like the app to record/maintain some user-related info, such as the ticker of the stock(s) associated with each user.
Ideally, I would like to keep the app lightweight and portable/standalone (meaning: reduce dependencies as much as possible, so the end user hopefully doesn't have to do a lot of configuration or installation). The issue I cannot figure out is where to store the historical data. I looked around for database solutions that would allow the app to write data directly from the browser (that is, using JS) to the client's machine. LocalStorage and IndexedDB are non-persistent, as far as I understand. Some suggested PouchDB, but upon looking at it closer, it seems the user needs to install CouchDB or some compatible DB (say, SQLite). But that means I cannot share my app with users who aren't technical enough to install and configure CouchDB or SQLite on their machine before using my app.
If anyone could share some insight as to which DB might allow a JS-based app to write persistent data to the client's machine (if such a thing even exists), that would be greatly helpful. If there is no such DB solution, please feel free to suggest alternative approaches that would achieve the goal of a simple, portable, JS-based web app. Thank you!
I think the best solution is to use Electron.js. The whole idea of this framework is to create web apps that can reside on client machines. You could package up any DB option you want, or even better, just include an API to your backend through the web app and it will work on your client machine like I think you want it to.
As for DB options, there is a great thread on S.O. that talks about what is possible. It looks like knex.js is your best bet (full disclosure - I haven't used knex).
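For what it's worth, a minimal sketch of the knex route inside an Electron app (assuming the sqlite3 driver; table and column names are hypothetical) - the data ends up in a plain file on the user's machine, with nothing extra for them to install:

    // Sketch: persistent local storage via knex + sqlite3
    // (npm install knex sqlite3). Table/column names are hypothetical.
    const knex = require('knex')({
      client: 'sqlite3',
      connection: { filename: './prices.db' }, // a file on the user's machine
      useNullAsDefault: true,                  // required by the sqlite3 client
    });

    async function recordPrice(ticker, price) {
      // create the table once, then append time-series rows
      if (!(await knex.schema.hasTable('prices'))) {
        await knex.schema.createTable('prices', (t) => {
          t.increments('id');
          t.string('ticker');
          t.float('price');
          t.timestamp('at').defaultTo(knex.fn.now());
        });
      }
      await knex('prices').insert({ ticker, price });
    }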

Best practice for on/off line data synchronization using AngularJS and Symfony 2

I'm building a relatively complex and data-heavy web application in AngularJS. I'm planning to use PHP as a RESTful backend (with Symfony2 and FOSRestBundle). I have spent weeks looking at different on/offline synchronization solutions, and there seem to be many half-solutions (see the list below for some examples), but none of them seem to fit my situation perfectly. How do I go about deciding which strategy will suit me?
Which issues determine “best practice” for building an on/offline synchronization system in AngularJS and Symfony2 needs some research, but off the top of my head I want to consider things like speed, ease of implementation, future-proofing (a lasting solution), extensibility, resource usage/requirements on the client side, multiple offline users editing the same data, and how much and what type of data to store.
Some of my requirements that I'm presently aware of are:
The users will often be offline and then need to synchronize (locally created) data with the database.
Multiple users share some of the editable data (potential merge conflicts need to be considered).
Users might be logged in from multiple devices at the same time.
Allowing a large amount of data to be stored offline (up to a gigabyte).
I probably want the user to be able to decide what to store locally.
Even if the user is online, I probably want the user to be able to choose whether to use all (backend) data or only what's available locally.
Some potential example solutions
PouchDB - Interesting strategies for synchronizing changes from multiple sources
Racer - Node lib for realtime sync, build on ShareJS
Meteor - DDP and strategies for sync
ShareJS - Node.js operational transformation, inspired by Google Wave
Restangular - Alternative to $resource
EmberData - EmberJS’s ORM-like data persistence library
ServiceWorker
IndexedDB Polyfill - Polyfill for IndexedDB in browsers that only support WebSQL (Safari)
BreezeJS
JayData
Loopback’s ORM
ActiveRecord
BackBone Models
lawnchair - Lightweight client-side DB lib from Brian Leroux
TogetherJS - Mozilla Labs’ multi-client state sync/collaboration lib.
localForage - Mozilla’s DOMStorage improvement library.
Orbit.js - Content synchronization library
(https://docs.google.com/document/d/1DMacL7iwjSMPP0ytZfugpU4v0PWUK0BT6lhyaVEmlBQ/edit#heading=h.864mpiz510wz)
Any help would be much appreciated :)
You seem to want a lot, and the sync part is hard... I have a solution to some of this in an OSS library I am developing. The idea is that it versions local data, so you can figure out what has changed and therefore do meaningful sync, including conflict resolution etc. This is sort of the offline Meteor, as it is really tuned for offline use (for the London Underground, where we have no mobile data signal).
I have also developed an eco system around it which includes a connection manager and server. The main project is at https://github.com/forbesmyester/SyncIt and is very well documented and tested. The test app for the ecosystem will be at https://github.com/forbesmyester/SyncItTodoMvc but I have yet to write virtually any docs for it.
It is currently using LocalStorage but will be easy to move to localForage as it actually is using a wrapper around localStorage to make it an async API... Another one for the list maybe?
To work offline with your requirements, I suggest dividing the problem into two scenarios: content (HTML, JS, CSS) and data (REST API).
The content
This will be stored offline via AppCache for small apps or, for advanced cases, with the awesome Service Workers (Chrome 40+).
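A minimal Service Worker sketch of that content caching (file names are hypothetical): the app shell is cached at install time and served cache-first, so the UI loads even with no connection.

    // sw.js - sketch: cache the app shell so the content works offline.
    const CACHE = 'app-shell-v1';

    self.addEventListener('install', (event) => {
      event.waitUntil(
        caches.open(CACHE).then((cache) =>
          cache.addAll(['/', '/app.js', '/app.css']) // hypothetical shell files
        )
      );
    });

    self.addEventListener('fetch', (event) => {
      // serve from cache first, fall back to the network
      event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
      );
    });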
The data
This requires solving both storage and synchronization, and it becomes a more difficult problem.
I suggest a deep reading of the Differential Synchronization algorithm, and take the following tips into consideration:
Frontend
Store the resource and its shadow (using, for example, the URL as key) in localStorage for small apps, or in more advanced alternatives (PouchDB, IndexedDB, ...). With the resource you can work offline, and when you need to synchronize with the server, compute the diff between the resource and the shadow and send it to the server in a PATCH request.
Backend
On the backend, consider storing the shadow copies in Redis.
Both sides (frontend/backend) need to identify the client node; to do so you could use an x-syn-token HTTP header (sent in every client request via Angular interceptors).
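A sketch of the frontend half of that scheme, using the fast-json-patch library to diff the resource against its shadow (the library choice, endpoint, and token handling are assumptions; any JSON-diff library would do):

    // Sketch: diff the working copy against its shadow and PATCH the server.
    // Uses fast-json-patch (npm install fast-json-patch).
    const jsonpatch = require('fast-json-patch');

    async function sync(url, resource, shadow, clientToken) {
      const ops = jsonpatch.compare(shadow, resource); // RFC 6902 operations
      if (ops.length === 0) return shadow;             // nothing to send
      await fetch(url, {
        method: 'PATCH',
        headers: {
          'Content-Type': 'application/json-patch+json',
          'x-syn-token': clientToken, // identifies this client node
        },
        body: JSON.stringify(ops),
      });
      // after a successful sync the resource becomes the new shadow
      return JSON.parse(JSON.stringify(resource));
    }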
https://www.firebase.com/
It's reliable and proven, and can be used as a backend and sync library for what you're after. But it costs money and requires some integration coding.
https://goinstant.com/ is also a good hosted option.
In some of my apps, I prefer to have both: a syncing DB source AND another main database (mongo/express, php/mysql, etc.). Then each DB handles what it's best at, with its own features (real-time vs. security, etc.). This is true regardless of the sync-DB provider (be it Racer or Firebase or GoInstant...).
The app I am developing has many of the same requirements and is being built in AngularJS. In terms of future-proofing, there are two main concerns that I have found: one is hacking attempts, requiring encryption, possibly one-time keys, and a backend key manager; the other is support for WebSQL being dropped by the standards consortium in favor of IndexedDB, so finding an abstraction layer that can support both is important. The solution set I have come up with is fairly straightforward: offline data is loaded into the UI first, and a request goes out to the REST server if in an online state. As for resolving data conflicts in a multi-user environment, that becomes a business-rule decision. My decision was to simplify the matter, not delve into data merges, and use a microtimestamp comparison to determine which version should be kept and pushed out to clients. When in offline mode, store data as a dirty write, then push it to the server when returning to an online state.
Or use ydn-db, which I am evaluating now, as it has built-in support for AWS and Google cloud storage.
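A minimal sketch of the microtimestamp comparison described above (pure last-write-wins, no merging; the field name is hypothetical):

    // Sketch: last-write-wins conflict resolution using timestamps.
    // Each record carries `updatedAt` (ms since epoch); the newer copy wins.
    function resolve(localRecord, serverRecord) {
      return localRecord.updatedAt > serverRecord.updatedAt
        ? localRecord   // the offline edit is newer: push it to the server
        : serverRecord; // the server copy is newer: overwrite the local one
    }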
Another suggestion:
Yjs leverages an OT-like algorithm to share a wide range of supported data types, and you have the option to store the shared data in IndexedDB (so it is available for offline editing).

Single Page Application: advantages and disadvantages [closed]

I've read about SPAs and their advantages. I find most of them unconvincing. There are 3 advantages that arouse my doubts.
Question: Can you act as advocate of SPA and prove that I am wrong about first three statements?
=== ADVANTAGES ===
1. SPA is extremely good for very responsive sites:
Server-side rendering is hard to implement for all the intermediate states - small view states do not map well to URLs.
Single page apps are distinguished by their ability to redraw any part of the UI without requiring a server roundtrip to retrieve HTML. This is achieved by separating the data from the presentation of data: a model layer handles data and a view layer reads from the models.
What is wrong with having a model layer in a non-SPA? Is SPA the only architecture compatible with MVC on the client side?
2. With SPA we don't need to use extra queries to the server to download pages.
Hah, and how many pages can a user download while visiting your site? Two, three? Instead, other security problems appear, and you need to split your login page, admin page, etc. into separate pages, which in turn conflicts with the SPA architecture.
3. Maybe there are other advantages? I haven't heard of any others.
=== DISADVANTAGES ===
Client must enable javascript.
Only one entry point to the site.
Security.
P.S. I've worked on SPA and non-SPA projects, and I'm asking these questions because I need to deepen my understanding. I mean no harm to SPA supporters. Don't ask me to read a bit more about SPAs; I just want to hear your considerations.
Let's look at one of the most popular SPA sites, GMail.
1. SPA is extremely good for very responsive sites:
Server-side rendering is not as hard as it used to be, thanks to simple techniques like keeping a #hash in the URL or, more recently, HTML5 pushState. With this approach, the exact state of the web app is embedded in the page URL. As in GMail: every time you open a mail, a special hash tag is added to the URL; if copied and pasted into another browser window, it can open the exact same mail (provided the user can authenticate). This approach maps directly to a more traditional query string; the difference is merely in the execution. With HTML5 pushState() you can eliminate the #hash and use completely classic URLs, which can resolve on the server on the first request and then load via Ajax on subsequent requests.
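A quick sketch of that technique (the render functions are hypothetical): the app rewrites the address bar on navigation and re-renders from state on back/forward, so every view keeps a classic, copy-pasteable URL.

    // Sketch: classic URLs in an SPA via the HTML5 History API.
    function openMail(mailId) {
      renderMail(mailId); // hypothetical client-side render
      // update the address bar without a page reload
      history.pushState({ mailId }, '', '/mail/' + mailId);
    }

    // re-render when the user navigates back/forward
    window.addEventListener('popstate', (event) => {
      if (event.state) renderMail(event.state.mailId);
    });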
2. With SPA we don't need to use extra queries to the server to download pages.
The number of pages a user downloads during a visit to my web site? Really, how many mails does someone read when opening their mail account? I read more than 50 in one go. The structure of the mails is almost the same, yet with a server-side rendering scheme the server would render each one on every request (the typical case).
- Security concern: whether you should keep separate pages for admin/login depends entirely on the structure of your site (take paytm.com, for example). Making a web site an SPA does not mean you open all endpoints to all users; I use forms auth with my SPA web site.
- In probably the most-used SPA framework, AngularJS, the dev can load HTML templates from the server depending on the user's authentication level. Preloading the HTML for all auth types isn't SPA.
3. May be any other advantages? Don't hear about any else..
These days you can safely assume the client will have a JavaScript-enabled browser.
Only one entry point to the site: as I mentioned earlier, maintaining state is possible, and you can have as many entry points as you want, but you should definitely have one.
Even in an SPA, the user only sees what he has proper rights to. You don't have to inject everything at once; loading different HTML templates and JavaScript asynchronously is also a valid part of SPA.
Advantages that I can think of are:
Rendering HTML obviously takes some resources, and now every user visiting your site is doing this work for you. And not only rendering: major logic is now done client side instead of server side.
Date/time issues: I just give the client UTC time in a preset format and don't even care about time zones; I let JavaScript handle it. This is a great advantage over the days when I had to guess time zones based on the location derived from a user's IP.
To me, state is more nicely maintained in an SPA: once you have set a variable, you know it will be there. This gives the feel of developing an app rather than a web page, which helps a lot with sites like foodpanda, flipkart, or amazon, because if you are not using client-side state you are using expensive sessions.
Websites surely are extremely responsive. I'll take an extreme example: try making a calculator in a non-SPA website (I know it's weird).
Updates from Comments
It doesn't seem like anyone has mentioned sockets and long-polling.
If you log out from another client, say a mobile app, then your browser should also log out. If you don't use an SPA, you have to re-create the socket connection every time there is a redirect. This should also work for any updates in data, like notifications, profile updates, etc.
An alternate perspective: aside from your website, will your project involve a native mobile app? If yes, you are most likely going to be feeding raw data to that native app from a server (i.e., JSON) and doing client-side processing to render it, correct? So with this assertion, you're ALREADY doing a client-side rendering model. Now the question becomes: why shouldn't you use the same model for the website version of your project? Kind of a no-brainer. Then the question becomes whether you want to render server-side pages only for SEO benefits and the convenience of shareable/bookmarkable URLs.
I am a pragmatist, so I will try to look at this in terms of costs and benefits.
Note that for any disadvantage I give, I recognize that they are solvable. That's why I don't look at anything as black and white, but rather, costs and benefits.
Advantages
Easier state tracking - no need to use cookies, form submission, local storage, session storage, etc. to remember state between 2 page loads.
Boilerplate content that is on every page (header, footer, logo, copyright banner, etc.) only loads once per typical browser session.
No overhead latency on switching "pages".
Disadvantages
Performance monitoring - hands tied: Most browser-level performance monitoring solutions I have seen focus exclusively on page load time, like time to first byte, time to build the DOM, network round trip for the HTML, the onload event, etc. Updating the page post-load via AJAX would not be measured. There are solutions which let you instrument your code to record explicit measures, like starting a timer on clicking a link, ending it after rendering the AJAX results, and sending that feedback. New Relic, for example, supports this functionality. By using a SPA, you have tied yourself to only a few possible tools.
Security / penetration testing - hands tied: Automated security scans can have difficulty discovering links when your entire page is built dynamically by a SPA framework. There are probably solutions to this, but again, you've limited yourself.
Bundling: It is easy to get into a situation when you are downloading all of the code needed for the entire web site on the initial page load, which can perform terribly for low-bandwidth connections. You can bundle your JavaScript and CSS files to try to load in more natural chunks as you go, but now you need to maintain that mapping and watch for unintended files to get pulled in via unrealized dependencies (just happened to me). Again, solvable, but with a cost.
Big bang refactoring: If you want to make a major architectural change, like say, switch from one framework to another, to minimize risk, it's desirable to make incremental changes. That is, start using the new, migrate on some basis, like per-page, per-feature, etc., then drop the old after. With traditional multi-page app, you could switch one page from Angular to React, then switch another page in the next sprint. With a SPA, it's all or nothing. If you want to change, you have to change the entire application in one go.
Complexity of navigation: Tooling exists to help maintain navigational context in SPAs, like history.js and Angular 2, most of which rely on either the URL framework (#) or the newer history API. If every page were a separate page, you wouldn't need any of that.
Complexity of figuring out code: We naturally think of web sites as pages. A multi-page app usually partitions code by page, which aids maintainability.
Again, I recognize that every one of these problems is solvable, at some cost.
But there comes a point where you are spending all your time solving problems which you could have just avoided in the first place. It comes back to the benefits and how important they are to you.
Disadvantages
1. Client must enable javascript. Yes, this is a clear disadvantage of SPA. In my case I know that I can expect my users to have JavaScript enabled. If you can't then you can't do a SPA, period. That's like trying to deploy a .NET app to a machine without the .NET Framework installed.
2. Only one entry point to the site. I solve this problem using SammyJS (see the sketch after this list). 2-3 days of work to get your routing properly set up, and people will be able to create deep-link bookmarks into your app that work correctly. Your server will only need to expose one endpoint - the "give me the HTML + CSS + JS for this app" endpoint (think of it as a download/update location for a precompiled application) - and the client-side JavaScript you write will handle the actual entry into the application.
3. Security. This issue is not unique to SPAs, you have to deal with security in exactly the same way when you have an "old-school" client-server app (the HATEOAS model of using Hypertext to link between pages). It's just that the user is making the requests rather than your JavaScript, and that the results are in HTML rather than JSON or some data format. In a non-SPA app you have to secure the individual pages on the server, whereas in a SPA app you have to secure the data endpoints. (And, if you don't want your client to have access to all the code, then you have to split apart the downloadable JavaScript into separate areas as well. I simply tie that into my SammyJS-based routing system so the browser only requests things that the client knows it should have access to, based on an initial load of the user's roles, and then that becomes a non-issue.)
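A minimal sketch of the SammyJS routing mentioned in point 2 (routes and render functions are hypothetical):

    // Sketch: deep-linkable client-side routes with SammyJS.
    const app = Sammy('#main', function () {
      this.get('#/', function () {
        renderHome();                // default view
      });
      this.get('#/orders/:id', function () {
        renderOrder(this.params.id); // bookmarkable deep link
      });
    });

    app.run('#/'); // start routing at the default route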
Advantages
A major architectural advantage of a SPA (that rarely gets mentioned) in many cases is the huge reduction in the "chattiness" of your app. If you design it properly to handle most processing on the client (the whole point, after all), then the number of requests to the server (read "possibilities for 503 errors that wreck your user experience") is dramatically reduced. In fact, a SPA makes it possible to do entirely offline processing, which is huge in some situations.
Performance is certainly better with client-side rendering if you do it right, but this is not the most compelling reason to build a SPA. (Network speeds are improving, after all.) Don't make the case for SPA on this basis alone.
Flexibility in your UI design is perhaps the other major advantage that I have found. Once I defined my API (with an SDK in JavaScript), I was able to completely rewrite my front-end with zero impact on the server aside from some static resource files. Try doing that with a traditional MVC app! :) (This becomes valuable when you have live deployments and version consistency of your API to worry about.)
So, bottom line: If you need offline processing (or at least want your clients to be able to survive occasional server outages) - dramatically reducing your own hardware costs - and you can assume JavaScript & modern browsers, then you need a SPA. In other cases it's more of a tradeoff.
One major disadvantage of SPA - SEO. Only recently Google and Bing started indexing Ajax-based pages by executing JavaScript during crawling, and still in many cases pages are being indexed incorrectly.
While developing an SPA, you will be forced to handle SEO issues, probably by post-rendering your entire site and creating static HTML snapshots for the crawler's use. This will require a solid investment in proper infrastructure.
Update 19.06.16:
Since writing this answer a while ago, I have gained much more experience with Single Page Apps (namely, AngularJS 1.x), so I have more info to share.
In my opinion, the main disadvantage of SPA applications is SEO, which limits them to kind of "dashboard" apps only. In addition, you are going to have a much harder time with caching compared to classic solutions. For example, in ASP.NET caching is extremely easy - just turn on OutputCaching and you are good: the whole HTML page will be cached according to the URL (or any other parameters). However, in an SPA you will need to handle caching yourself (using solutions like a second-level cache, template caching, etc.).
I would like to make the case for SPAs being best for data-driven applications. GMail, of course, is all about data and thus a good candidate for an SPA.
But if your page is mostly for display, for example, a terms of service page, then a SPA is completely overkill.
I think the sweet spot is having a site with a mixture of both SPA and static/MVC style pages, depending on the particular page.
For example, on one site I am building, the user lands on a standard MVC index page. But then when they go to the actual application, then it calls up the SPA. Another advantage to this is that the load-time of the SPA is not on the home page, but on the app page. The load time being on the home page could be a distraction to first time site users.
This scenario is a little bit like using Flash. After a few years of experience, the number of Flash only sites dropped to near zero due to the load factor. But as a page component, it is still in use.
For companies such as Google, Amazon, etc., whose servers run at max capacity in 24/7 mode, reducing traffic means real money - less hardware, less energy, less maintenance. Shifting CPU usage from server to client pays off, and SPAs shine. The advantages outweigh the disadvantages by far.
So, SPA or not SPA depends much on the use case.
Just for mentioning another, probably not so obvious (for Web-developers) use case for SPAs:
I'm currently looking for a way to implement GUIs in embedded systems, and a browser-based architecture seems appealing to me. Traditionally there were not many possibilities for UIs in embedded systems - Java, Qt, wx, etc., or proprietary commercial frameworks. Some years ago Adobe tried to enter the market with Flash, but it seems not to have been very successful.
Nowadays, as "embedded systems" are as powerful as mainframes were some years ago, a browser-based UI connected to the control unit via REST is a possible solution. The advantage is the huge palette of UI tools at no cost (e.g., Qt requires $20-30 per sold unit in royalty fees plus $3000-4000 per developer).
For such an architecture, SPA offers many advantages - e.g., a more familiar development approach for desktop-app developers and reduced server access (often in the car industry the UI and the system modules are separate hardware, where the system part runs an RT OS).
As the only client is the built-in browser, the mentioned disadvantages like JS availability, server-side logging, and security no longer count.
2. With SPA we don't need to use extra queries to the server to download pages.
I still have a lot to learn, but since I started learning about SPAs, I love them.
This particular point may make a huge difference.
In many web apps that are not SPAs, you will see that they still retrieve and add content to the pages by making Ajax requests. So I think that SPAs go one step further by asking: what if the content that is going to be retrieved and displayed via Ajax is the whole page, and not just a small portion of it?
Let me present a scenario. Consider that you have 2 pages:
a page with list of products
a page to view the details of a specific product
Consider that you are at the list page. Then you click on a product to view the details. The client side app will trigger 2 ajax requests:
a request to get a json object with the product details
a request to get an html template where the product details will be inserted
Then, the client side app will insert the data into the html template and display it.
Then you go back to the list (no request is made for this!) and you open another product. This time, there will be only one Ajax request, to get the details of the product; the HTML template is the same, so you don't need to download it again.
You may say that in a non-SPA, when you open the product details, you make only 1 request, while in this scenario we made 2. Yes, but you gain from an overall perspective: as you navigate across many pages, the total number of requests is lower, and the data transferred between client and server is lower too, because the HTML templates are reused. Also, you don't need to re-download on every request all those CSS, image, and JavaScript files that are present on all the pages.
Also, let's say your server-side language is Java. If you analyze the 2 requests I mentioned, one downloads data (you don't need to load any view file or call the view rendering engine) and the other downloads a static HTML template, so a plain HTTP web server can serve it directly without calling the Java application server; no computation is done!
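A sketch of the two-request flow just described (URLs are hypothetical), with the template cached after the first fetch so later product views cost a single JSON request:

    // Sketch: fetch JSON data plus a reusable HTML template, caching the template.
    let productTemplate = null; // downloaded once, reused for every product

    async function showProduct(id) {
      const requests = [fetch(`/api/products/${id}`).then((r) => r.json())];
      if (!productTemplate) {
        requests.push(fetch('/templates/product.html').then((r) => r.text()));
      }
      const [product, template] = await Promise.all(requests);
      if (template) productTemplate = template;

      // naive {{field}} interpolation, purely illustrative
      document.querySelector('#view').innerHTML = productTemplate.replace(
        /\{\{(\w+)\}\}/g,
        (_, field) => product[field]
      );
    }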
Finally, the big companies are using SPA: Facebook, GMail, Amazon. They don't play, they have the greatest engineers studying all this. So if you don't see the advantages you can initially trust them and hope to discover them down the road.
But it is important to use good SPA design patterns. You may use a framework like AngularJS. Don't try to implement an SPA without good design patterns, because you may end up with a mess.
Disadvantages:
Technically, the design and initial development of an SPA is complex, and it can be avoided. Other reasons for not using an SPA can be:
a) Security: a Single Page Application is less secure compared to traditional pages due to cross-site scripting (XSS).
b) Memory leaks: a memory leak in JavaScript can cause even a powerful computer to slow down. Traditional websites encourage navigating among pages, so any memory leaked by a previous page is mostly cleaned up, leaving less residue behind.
c) The client must enable JavaScript to run an SPA, while in a multi-page application JavaScript can be completely avoided.
d) An SPA can grow to a size that causes long waiting times, e.g., working with Gmail on a slower connection.
Apart from the above, other architectural limitations are loss of navigational data, no log of navigational history in the browser, and difficulty in automated functional testing with Selenium.
This link explains Single Page Applications' advantages and disadvantages.
Try not to consider using an SPA without first defining how you will address security and API stability on the server side. Then you will see some of the true advantages of using an SPA. Specifically, if you use a RESTful server that implements OAuth 2.0 for security, you will achieve two fundamental separations of concerns that can lower your development and maintenance costs.
This will move the session (and its security) onto the SPA and relieve your server of all that overhead.
Your APIs become both stable and easily extensible.
Hinted at earlier, but not made explicit: if your goal is to deploy Android and Apple applications, writing a JavaScript SPA that is wrapped by a native call to host the screen in a browser (Android or Apple) eliminates the need to maintain both an Apple code base and an Android code base.
I understand this is an older question, but I would like to add another disadvantage of Single Page Applications:
If you build an API that returns results in a data language (such as XML or JSON) rather than a formatting language (like HTML), you are enabling greater application interoperability, for example in business-to-business (B2B) applications. Such interoperability has great benefits, but it does allow people to write software to "mine" (or steal) your data. This particular disadvantage is common to all APIs that use a data language and not to SPAs in particular (indeed, an SPA that asks the server for pre-rendered HTML avoids it, but at the expense of poor model/view separation). The risk exposed by this disadvantage can be mitigated by various means, such as request limiting and connection blocking, etc.
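As an example of the request-limiting mitigation (a sketch assuming Express and the express-rate-limit middleware; the limits are illustrative):

    // Sketch: throttle API consumers to make bulk "mining" expensive.
    // npm install express express-rate-limit
    const express = require('express');
    const rateLimit = require('express-rate-limit');

    const app = express();
    app.use('/api/', rateLimit({
      windowMs: 15 * 60 * 1000, // 15-minute window
      max: 100,                 // max requests per client IP per window
    }));

    app.get('/api/products', (req, res) => res.json([])); // placeholder
    app.listen(3000);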
In my development I found two distinct advantages for using an SPA. That is not to say that the following can not be achieved in a traditional web app just that I see incremental benefit without introducing additional disadvantages.
Potential for fewer server requests, as rendering new content isn't always (or even ever) an HTTP request for a new HTML page. I say potential because new content could easily require an Ajax call to pull in data, but that data can be considerably lighter than the data plus its markup, providing a net benefit.
The ability to maintain "state". In its simplest terms: set a variable on entry to the app, and it will be available to other components throughout the user's experience without passing it around or resorting to a local-storage pattern. Intelligently managing this ability, however, is key to keeping the top-level scope uncluttered.
Other than requiring JS (which is not a crazy thing to require of web apps) other noted disadvantages are in my opinion either not specific to SPA or can be mitigated through good habits and development patterns.

Is there a web stack optimized to minimize server-side coding?

For a couple recent projects on our corporate intranet, I have used a very simple stack of nginx + redis + webdis + client-side javascript to implement some simple data analysis tools. The experience was absolutely wonderful, especially compared to my previous experience with other stacks (including custom c++, apache/mod_perl, ASP.Net MVC, .Net HttpListener, Ruby on Rails, and a bit of Node.js). Given the availability of client-side templating tools and frontend libraries such as jquery-ui, it seems that I could happily implement much more complicated web-apps using such a no-server-side-code stack (perhaps substituting/augmenting redis with couchdb if warranted)...
The major limitation of this stack, of course, is that my database is directly exposed to the network - acceptable in this case on a firewalled corporate network, but not really an option if I wanted to use the same techniques on the internet. I need to have some level of server-side logic to securely handle authentication and user-role management.
Are there any best practices or common development stacks for this? Ideally I'd like something that is lightweight, and gives me a simple framework for filtering the client-side requests through my custom user-role logic before forwarding them on to the database back-end. I'm not interested in any sort of server-side templating, or ActiveRecord-style storage-level abstractions.
I can't comment on a framework.
You've already mentioned the primary weakness of this, especially on the internet, that being security. The problem there is not just authentication. The problem there is essentially the openness of the client, in this case the web browser, and the protocol, notably HTTP using JSON or XML or some other plaintext protocol.
Consider one example. It's quite simple. Imagine an HTTP service that takes an SQL query and returns a collection of JSON representing the rows. This is straightforward to write. You could probably pound out a nascent one in less than an hour from scratch using any tool that gives you SQL access to an RDBMS.
Arguably, back in the Golden Days of client-server development, this is exactly what folks did, only instead of data tunneled over HTTP, folks used a DB-specific driver and sent SQL text straight to the back-end DB.
The problem today is that the protocols are too open. If you implemented that SQL service mentioned above, you would essentially turn your entire application into a SQL injection vector.
You simply can not secure something like that in the wild. The protocol is open to trivial observation (every browser comes with a built in packet sniffer, effectively today), along with all of the source code for the application. If you try to encrypt the data, that's all done on the client as well -- with the source to the process, as well as any keys involved.
CouchDB, for example, can not be secured this way. If someone has rights to the server, they have rights to all of the data. ALL of the data. The stuff you want them to see, and the stuff you don't.
The solution, naturally, is a service layer: something that speaks at a higher level than raw data streams, something that can be secured and can keep secrets from the clients. But that, naturally, takes server-side programming to enable, and it's ostensibly more work, more layers, more data conversion, more of a pain.
Back in the day, folks would write entire systems using nothing but stored procedures in the DB. The procedures would have rights that the users invoking them did not, so you could limit at the server what a user could or could not see or change. You could give them unlimited SELECT capability on a restricted view, perhaps, while a stored procedure would have rights to actually change data or access some of the hidden columns.
Stored procedures have mostly been replaced by application layers and application servers, with the DB being more and more relegated to "dumb storage". But the concepts are similar.
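A sketch of the thinnest version of such a service layer (assuming Express and a pg-style SQL client; route, schema, and helper names are hypothetical): the client never sends SQL, only parameters to a fixed, named query that runs with the server's rights - the modern analogue of a stored procedure.

    // Sketch: a service layer exposing one fixed, parameterized query
    // instead of raw SQL. `db` stands in for any SQL client module, and
    // `req.user` is assumed to be populated by upstream auth middleware.
    const express = require('express');
    const db = require('./db'); // hypothetical wrapper around a SQL client
    const app = express();

    function requireRole(role) {
      return (req, res, next) =>
        req.user && req.user.roles.includes(role) ? next() : res.sendStatus(403);
    }

    app.get('/api/report/daily-visits', requireRole('analyst'), async (req, res) => {
      // the SQL text lives on the server; the client only supplies a value
      const rows = await db.query(
        'SELECT day, visits FROM stats WHERE site_id = $1',
        [req.query.siteId]
      );
      res.json(rows);
    });

    app.listen(3000);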
There's value in some scenarios to publishing data straight to the web, like your analytics example. That's a specific, read-heavy niche. But beyond that, the concept doesn't work well, I fear. Obfuscated JS is hard to read, but not secure.
This is likely why you may have a little difficulty locating such a framework (I haven't looked at all, myself).
