Why use React server side rendering - javascript

I have just learned React recently and intend to use it for my next project. I have come across React server side rendering a few times, but I wonder why we still need it in the "modern age".
In this article, it is argued that with server side rendering the user does not have to wait for the JS to load from a CDN or elsewhere before seeing the initial static page, and the page resumes full functionality once the JS arrives.
But after building with webpack's production configuration and gzip, the whole bundle (with React, my code and a lot of other stuff) only takes 40kb, and I have an AWS CDN for it. I don't quite see the reason to use server side rendering in my situation.
So the question is: why do people still use server side rendering if the resulting JavaScript bundle is so tiny after gzip?

Load Times
A rendered view of the application can be delivered in the response of the initial HTTP request. In a traditional single page web app, the first request would come back, the browser would parse the HTML, then make subsequent requests for the scripts — which would eventually render the page. Those requests will still happen, but they won't ever get in the way of the user seeing the initial data.
This doesn't make much difference on quick internet connections, but for users on mobiles in low network coverage areas, this initial rendering of data can make apps render 20-30 seconds faster.
Views with static data can also be cached at a network level, meaning that a rendered view of a React application can be served with very little computational overhead.
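As a rough illustration of that network-level caching point, here is a minimal sketch assuming an Express server sits in front of the React renderer; the /products route and the renderPage helper are hypothetical, and the Cache-Control value is only an example.

    const express = require('express');
    const app = express();

    // Hypothetical helper that returns the pre-rendered HTML for a static view.
    const renderPage = () => '<!doctype html><html><body><h1>Products</h1></body></html>';

    app.get('/products', (req, res) => {
      // Let a CDN or reverse proxy cache the rendered view for five minutes,
      // so most requests never reach the React renderer at all.
      res.set('Cache-Control', 'public, max-age=300');
      res.send(renderPage());
    });

    app.listen(3000);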
SEO
When a search engine crawler arrives at a web page, the HTML is served and the static content is inspected and indexed. In a purely client side Javascript application, there is no static content. It is all created and injected dynamically once the appropriate scripts load and run.
Unlike React, most frameworks have no way of serialising their component graph to HTML and then reinflating it. They have to use a more convoluted approach, which often involves rendering their page in a headless browser at the server, then serving up that version whenever a crawler requests it.
React can just render the component tree to an HTML string from a server side JS environment such as Node.js, then serve that straight away. No need for headless browsers or extra complications.
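A minimal sketch of that flow, assuming express, react and react-dom are installed; the App component and the /bundle.js path are placeholders for your own code.

    const express = require('express');
    const React = require('react');
    const ReactDOMServer = require('react-dom/server');

    // Placeholder root component standing in for your real application.
    const App = () => React.createElement('h1', null, 'Hello from the server');

    const app = express();

    app.get('*', (req, res) => {
      // Render the component tree to an HTML string on the server...
      const markup = ReactDOMServer.renderToString(React.createElement(App));
      // ...and send it straight away; the client bundle hydrates it later.
      res.send('<!doctype html><html><body>' +
        '<div id="root">' + markup + '</div>' +
        '<script src="/bundle.js"></script>' +
        '</body></html>');
    });

    app.listen(3000);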
Noscript
It also allows you to write applications which gracefully degrade and ultimately, can be used as thin clients. This means that the processing goes on at the server and the application can be used in a browser with Javascript disabled. Whether or not that's an important market to consider is a debate for another time.

All-or-none rendering
This is a UX concern: some designers decide they don't like incremental rendering. The designer wants the page to show up complete and perfect, without loading spinners, fetched text being inserted here and there, and the layout shifting around.
Scroll bar restoration is a difficult issue for client side rendering. See Keep scroll position when navigating back (When navigating forward and then back, the scroll position is lost). Server side rendering does not suffer from this issue.

I think if you're chasing SEO, it's better to render on the server,
so all content can be read by the search engine bots.

In addition to the SEO points mentioned above, with SSR the browser can present the page right away, even before all the JavaScript files are loaded. I have a tutorial explaining this.

Related

Downsides of client side routing?

I am building a universal/isomorphic javascript app (Express/Redux/React). I am contemplating routing on the client with React Router and/or routing on the server with Express.
I know that client side routing has become popular with single page apps because they make user interaction more seamless.
However, I am trying to get a better understanding of the client vs server side routing. What are potential downsides to client side routing that someone may encounter when building any application (single page or not)? And when is it best to consider routing on the server? Do large scale applications route exclusively on one side (client/server) or do they often blend the two?
Thank you!
I see no good reason to stay away from client side routing. If you're using something like react-router, then this is both client and server routing and there is nothing difficult about it. Some specific areas some people might tell you will be difficult:
SEO. That comes for free: whatever URL you hit will be rendered on the server correctly and sent to the client, so Googlebot will see the page correctly. There is absolutely no truth to the suggestion that SEO is harder with client side routing, provided that you are server-side rendering.
Analytics. Easy, just call ga('send', 'pageview', path) wherever you're handling navigation on the client side (right before you trigger the router to change path); see the sketch after this list.
Resources. If your whole site is quite big, you don't want to send the entire thing to the client on the first page load. This will require a little more complexity (for example, defining multiple entry points with webpack). If you have a site with hundreds of pages then client-side routing is going to give you less benefit anyway.
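For the analytics point, here is a minimal sketch, assuming analytics.js (the global ga function) is already on the page and that you use the history package that react-router builds on; the listener signature differs slightly between history versions.

    import { createBrowserHistory } from 'history';

    const history = createBrowserHistory();

    history.listen((update) => {
      // history v5 passes { location, action }; v4 passes the location directly.
      const location = update.location || update;
      // Report every client-side navigation as a pageview.
      ga('send', 'pageview', location.pathname + location.search);
    });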
This site (my own) uses client side rendering. You'll note that it works just fine with JavaScript turned off (the best way to be certain that Google sees it correctly). The source is here if you want to see how any particular piece was done.
Some of the downsides of client side routing can be:
While server-side routing is a tried and true technique with many techniques and library options available, client-side routing will probably be less robust and manageable.
Monitoring. While server-side routed pages could be verified through any basic web scraper software, client-side routed pages would need to be monitored through a more advanced tool that actually renders HTML and fires client-side scripts.
Difficulties with serving SEO content. Though it's possible, it's a lot more difficult.
Resources. Depending on how you build your application, server-side routing might be more resource-effective, since there'll be less overhead to load client-side for each page.
Compatibility. Based on which browsers you're targeting, your preferred method of client-side routing might not be supported.
You can still use client-side routing for applications where the routed pages don't need to or shouldn't be indexed by search engines.
For pages that are critical to SEO and don't need to be a SPA (for example they just serve informational content), there's little reason not to go server side.

Disadvantages of pre-creating HTML server-side for a PhoneGap app

I want to create a mobile application for an already existing website. This application will allow users to login into their accounts, upload files, view messages etc. This app will be created using Phonegap and as much js/css/html5 as possible. My question is: Most apps I have seen so far use JSON-Responses (or similar) and work with them right in the app. But what is the disadvantage of already creating the necessary html on the server and simply loading it into a div using jQuery? (is it even possible because of Same Origin Policy?) When you want to change some minor things of the application, it is easier in this way, but there must be a reason why I haven't seen this so far...
The HTML and templates are already cached locally, so by passing JSON data instead of HTML fragments you're passing less data; network traffic is usually a limiting factor in mobile apps.
From a design point of view only asking the server for data (and not presentation logic) also makes for better separation of concerns and architecture very often. It's also easier to cache data than presentation because of user preferences and such.
On the upside of sending ready-made HTML, presenting it will be slightly faster since less work has to happen on the client side.
As for the Same Origin Policy, it does not apply to PhoneGap apps loaded from the local file system, so don't worry about it.
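To make the JSON-vs-HTML-fragments comparison concrete, here is a minimal sketch of the JSON approach with jQuery; the endpoint URL and the message fields are hypothetical, and jQuery is assumed to be bundled with the app.

    $.getJSON('https://example.com/api/messages', function (messages) {
      // The markup lives in the locally cached app; only the data crosses the network.
      var items = $.map(messages, function (m) {
        return '<li>' + m.subject + '</li>';
      });
      $('#messages').html(items.join(''));
    });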

Single Page Application: advantages and disadvantages [closed]

I've read about SPAs and their advantages, and I find most of them unconvincing. There are 3 advantages that raise my doubts.
Question: Can you act as an advocate of SPAs and prove that I am wrong about the first three statements?
=== ADVANTAGES ===
1. SPA is extremely good for very responsive sites:
Server-side rendering is hard to implement for all the intermediate states - small view states do not map well to URLs.
Single page apps are distinguished by their ability to redraw any part of the UI without requiring a server roundtrip to retrieve HTML. This is achieved by separating the data from the presentation of data by having a model layer that handles data and a view layer that reads from the models.
What is wrong with keeping a model layer in a non-SPA? Is SPA the only architecture compatible with MVC on the client side?
2. With SPA we don't need to use extra queries to the server to download pages.
Hah, and how many pages can a user download while visiting your site? Two, three? Instead, other security problems appear, and you need to split your login page, admin page, etc. into separate pages, which in turn conflicts with the SPA architecture.
3. Maybe there are other advantages? I haven't heard of any others.
=== DISADVANTAGES ===
Client must enable javascript.
Only one entry point to the site.
Security.
P.S. I've worked on SPA and non-SPA projects. I'm asking these questions because I need to deepen my understanding. I don't mean to offend SPA supporters. Don't ask me to read a bit more about SPAs; I just want to hear your considerations about this.
Let's look at one of the most popular SPA sites, GMail.
1. SPA is extremely good for very responsive sites:
Server-side rendering is not as hard as it used to be, thanks to simple techniques like keeping a #hash in the URL or, more recently, HTML5 pushState. With this approach the exact state of the web app is embedded in the page URL. As in GMail, every time you open a mail a special hash tag is added to the URL. If it is copied and pasted into another browser window, it opens the exact same mail (provided the user can authenticate). This approach maps directly to a more traditional query string; the difference is merely in the execution. With HTML5 pushState() you can eliminate the #hash and use completely classic URLs, which can resolve on the server on the first request and then load via ajax on subsequent requests.
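A minimal sketch of the pushState technique described above; loadMail stands in for whatever ajax fetch-and-render your app actually does.

    function loadMail(id) {
      // Hypothetical: fetch the mail over ajax and render it into the page.
      console.log('rendering mail', id);
    }

    function openMail(id) {
      // Put the exact app state into a classic-looking URL without a full reload...
      history.pushState({ mailId: id }, '', '/mail/' + id);
      loadMail(id);
    }

    // ...and restore the right view when the user navigates back or forward.
    window.addEventListener('popstate', function (event) {
      if (event.state && event.state.mailId) {
        loadMail(event.state.mailId);
      }
    });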
2. With SPA we don't need to use extra queries to the server to download pages.
The number of pages a user downloads during a visit to my web site? Really, how many mails does someone read when they open their mail account? I read more than 50 in one go, and the structure of the mails is almost the same. If you use a server side rendering scheme, the server renders each one on every request (the typical case).
- Security concern: whether or not you should keep separate pages for the admin/login entirely depends upon the structure of your site (take paytm.com for example). Also, making a web site a SPA does not mean that you open all the endpoints to all users; I use forms auth with my SPA web site.
- In probably the most used SPA framework, AngularJS, the developer can load HTML templates from the server, so this can be done depending on the user's authentication level. Preloading HTML for all auth types isn't SPA.
3. Maybe there are other advantages? I haven't heard of any others.
These days you can safely assume the client will have a JavaScript-enabled browser.
Only one entry point to the site: as I mentioned earlier, maintenance of state is possible, so you can have as many entry points as you want, but you should definitely have one.
Even in a SPA, the user only sees what they have proper rights to see. You don't have to inject everything at once; loading different HTML templates and JavaScript asynchronously is also a valid part of a SPA.
Advantages that I can think of are:
Rendering HTML obviously takes some resources, and now every user visiting your site is doing this work. Also, it's not only rendering - major pieces of logic are now done client side instead of server side.
Date/time issues - I just give the client UTC time in a preset format and don't even care about time zones; I let JavaScript handle them. This is a great advantage over having to guess time zones based on the location derived from the user's IP (see the sketch after this list).
To me, state is more nicely maintained in a SPA because once you have set a variable you know it will be there. This gives the feel of developing an app rather than a web page, which helps a lot in making sites like foodpanda, flipkart or amazon, because if you are not using client side state you are using expensive sessions.
Websites are certainly extremely responsive - to take an extreme example, try making a calculator in a non-SPA website (I know it's weird).
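A tiny sketch of the date/time point from the list above: the server always sends UTC in ISO 8601 and the browser localises it.

    // Value as received from the server (always UTC, ISO 8601).
    var createdAt = '2016-06-19T14:30:00Z';

    // The browser renders it in the user's own time zone and locale.
    var local = new Date(createdAt).toLocaleString();
    console.log(local);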
Updates from Comments
It doesn't seem like anyone has mentioned sockets and long-polling.
If you log out from another client, say a mobile app, then your browser should also log out. If you don't use a SPA, you have to re-create the socket connection every time there is a redirect. This should also work with any updates in data, like notifications, profile updates etc.
An alternate perspective: aside from your website, will your project involve a native mobile app? If yes, you are most likely going to be feeding raw data to that native app from a server (i.e. JSON) and doing client-side processing to render it, correct? So with this assertion, you're ALREADY doing a client-side rendering model. Now the question becomes, why shouldn't you use the same model for the website version of your project? Kind of a no-brainer. Then the question becomes whether you want to render server-side pages only for SEO benefits and the convenience of shareable/bookmarkable URLs.
I am a pragmatist, so I will try to look at this in terms of costs and benefits.
Note that for any disadvantage I give, I recognize that they are solvable. That's why I don't look at anything as black and white, but rather, costs and benefits.
Advantages
Easier state tracking - no need to use cookies, form submission, local storage, session storage, etc. to remember state between 2 page loads.
Boilerplate content that is on every page (header, footer, logo, copyright banner, etc.) only loads once per typical browser session.
No overhead latency on switching "pages".
Disadvantages
Performance monitoring - hands tied: Most browser-level performance monitoring solutions I have seen focus exclusively on page load time only, like time to first byte, time to build DOM, network round trip for the HTML, onload event, etc. Updating the page post-load via AJAX would not be measured. There are solutions which let you instrument your code to record explicit measures, like when clicking a link, start a timer, then end a timer after rendering the AJAX results, and send that feedback. New Relic, for example, supports this functionality. By using a SPA, you have tied yourself to only a few possible tools.
Security / penetration testing - hands tied: Automated security scans can have difficulty discovering links when your entire page is built dynamically by a SPA framework. There are probably solutions to this, but again, you've limited yourself.
Bundling: It is easy to get into a situation where you are downloading all of the code needed for the entire web site on the initial page load, which can perform terribly on low-bandwidth connections. You can bundle your JavaScript and CSS files to try to load in more natural chunks as you go, but now you need to maintain that mapping and watch for unintended files getting pulled in via unrealized dependencies (just happened to me). Again, solvable, but with a cost (see the sketch after this list).
Big bang refactoring: If you want to make a major architectural change, like say, switch from one framework to another, to minimize risk, it's desirable to make incremental changes. That is, start using the new, migrate on some basis, like per-page, per-feature, etc., then drop the old after. With traditional multi-page app, you could switch one page from Angular to React, then switch another page in the next sprint. With a SPA, it's all or nothing. If you want to change, you have to change the entire application in one go.
Complexity of navigation: Tooling exists to help maintain navigational context in SPAs, like history.js or Angular 2's router, most of which rely on either the URL fragment (#) or the newer history API. If every page were a separate page, you wouldn't need any of that.
Complexity of figuring out code: We naturally think of web sites as pages. A multi-page app usually partitions code by page, which aids maintainability.
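For the bundling point, here is a minimal sketch of loading a chunk on demand with a dynamic import(), which bundlers such as webpack split into a separate file automatically; the './reports/page' module and its render() export are hypothetical.

    document.querySelector('#open-reports').addEventListener('click', function () {
      // The './reports/page' chunk is only downloaded when the user asks for it.
      import('./reports/page')
        .then(function (reports) { reports.render(); })   // hypothetical exported render()
        .catch(function (err) { console.error('Chunk failed to load', err); });
    });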
Again, I recognize that every one of these problems is solvable, at some cost.
But there comes a point where you are spending all your time solving problems which you could have just avoided in the first place. It comes back to the benefits and how important they are to you.
Disadvantages
1. Client must enable javascript. Yes, this is a clear disadvantage of SPA. In my case I know that I can expect my users to have JavaScript enabled. If you can't then you can't do a SPA, period. That's like trying to deploy a .NET app to a machine without the .NET Framework installed.
2. Only one entry point to the site. I solve this problem using SammyJS. 2-3 days of work to get your routing properly set up, and people will be able to create deep-link bookmarks into your app that work correctly. Your server will only need to expose one endpoint - the "give me the HTML + CSS + JS for this app" endpoint (think of it as a download/update location for a precompiled application) - and the client-side JavaScript you write will handle the actual entry into the application (see the sketch after this list).
3. Security. This issue is not unique to SPAs, you have to deal with security in exactly the same way when you have an "old-school" client-server app (the HATEOAS model of using Hypertext to link between pages). It's just that the user is making the requests rather than your JavaScript, and that the results are in HTML rather than JSON or some data format. In a non-SPA app you have to secure the individual pages on the server, whereas in a SPA app you have to secure the data endpoints. (And, if you don't want your client to have access to all the code, then you have to split apart the downloadable JavaScript into separate areas as well. I simply tie that into my SammyJS-based routing system so the browser only requests things that the client knows it should have access to, based on an initial load of the user's roles, and then that becomes a non-issue.)
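For point 2, a framework-agnostic sketch of the deep-linking idea (SammyJS wraps the same pattern); showView is a hypothetical helper that renders whichever client-side view matches the path.

    function showView(path) {
      // Hypothetical: look up the matching view and render it into the page.
      console.log('rendering view for', path);
    }

    function route() {
      // '#/invoices/42' -> '/invoices/42'; default to the home view.
      showView(window.location.hash.replace(/^#/, '') || '/');
    }

    window.addEventListener('hashchange', route);
    route(); // render the view for a bookmarked / deep-linked URL on first load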
Advantages
A major architectural advantage of a SPA (that rarely gets mentioned) in many cases is the huge reduction in the "chattiness" of your app. If you design it properly to handle most processing on the client (the whole point, after all), then the number of requests to the server (read "possibilities for 503 errors that wreck your user experience") is dramatically reduced. In fact, a SPA makes it possible to do entirely offline processing, which is huge in some situations.
Performance is certainly better with client-side rendering if you do it right, but this is not the most compelling reason to build a SPA. (Network speeds are improving, after all.) Don't make the case for SPA on this basis alone.
Flexibility in your UI design is perhaps the other major advantage that I have found. Once I defined my API (with an SDK in JavaScript), I was able to completely rewrite my front-end with zero impact on the server aside from some static resource files. Try doing that with a traditional MVC app! :) (This becomes valuable when you have live deployments and version consistency of your API to worry about.)
So, bottom line: If you need offline processing (or at least want your clients to be able to survive occasional server outages) - dramatically reducing your own hardware costs - and you can assume JavaScript & modern browsers, then you need a SPA. In other cases it's more of a tradeoff.
One major disadvantage of SPAs: SEO. Only recently have Google and Bing started indexing Ajax-based pages by executing JavaScript during crawling, and in many cases pages are still indexed incorrectly.
While developing a SPA, you will be forced to handle SEO issues, probably by pre-rendering your whole site and creating static HTML snapshots for the crawlers' use. This will require a solid investment in proper infrastructure.
Update 19.06.16:
Since writing this answer a while ago, I have gained much more experience with Single Page Apps (namely, AngularJS 1.x), so I have more info to share.
In my opinion, the main disadvantage of SPA applications is SEO, making them limited to "dashboard"-type apps only. In addition, you are going to have a much harder time with caching compared to classic solutions. For example, in ASP.NET caching is extremely easy - just turn on OutputCaching and you are good: the whole HTML page will be cached according to the URL (or any other parameters). However, in a SPA you will need to handle caching yourself (by using solutions like a second-level cache, template caching, etc.).
I would like to make the case for SPAs being best for data-driven applications. GMail, of course, is all about data and is thus a good candidate for a SPA.
But if your page is mostly for display, for example, a terms of service page, then a SPA is completely overkill.
I think the sweet spot is having a site with a mixture of both SPA and static/MVC style pages, depending on the particular page.
For example, on one site I am building, the user lands on a standard MVC index page. But then when they go to the actual application, then it calls up the SPA. Another advantage to this is that the load-time of the SPA is not on the home page, but on the app page. The load time being on the home page could be a distraction to first time site users.
This scenario is a little bit like using Flash. After a few years of experience, the number of Flash only sites dropped to near zero due to the load factor. But as a page component, it is still in use.
For companies such as Google, Amazon etc., whose servers are running at max capacity in 24/7 mode, reducing traffic means real money - less hardware, less energy, less maintenance. Shifting CPU usage from the server to the client pays off, and SPAs shine. The advantages outweigh the disadvantages by far.
So, SPA or not SPA depends much on the use case.
Just to mention another, probably not so obvious (for web developers), use case for SPAs:
I'm currently looking for a way to implement GUIs in embedded systems, and a browser-based architecture seems appealing to me. Traditionally there were not many possibilities for UIs in embedded systems - Java, Qt, wx, etc. or proprietary commercial frameworks. Some years ago Adobe tried to enter the market with Flash, but it doesn't seem to have been very successful.
Nowadays, as "embedded systems" are as powerful as mainframes were some years ago, a browser-based UI connected to the control unit via REST is a possible solution. The advantage is the huge palette of UI tools at no cost (e.g. Qt requires $20-30 per sold unit in royalty fees plus $3000-4000 per developer).
For such an architecture, a SPA offers many advantages - e.g. a more familiar development approach for desktop-app developers, and reduced server access (often in the car industry the UI and system modules are separate hardware, where the system part runs an RT OS).
As the only client is the built-in browser, the disadvantages mentioned above, like JS availability, server-side logging and security, don't count any more.
2. With SPA we don't need to use extra queries to the server to download pages.
I still have to learn a lot, but since I started learning about SPAs, I love them.
This particular point may make a huge difference.
In many web apps that are not SPAs, you will see that they still retrieve and add content to the pages by making ajax requests. So I think that a SPA simply goes further by considering: what if the content that is retrieved and displayed using ajax were the whole page, and not just a small portion of a page?
Let me present a scenario. Consider that you have 2 pages:
a page with list of products
a page to view the details of a specific product
Consider that you are at the list page. Then you click on a product to view the details. The client side app will trigger 2 ajax requests:
a request to get a json object with the product details
a request to get an html template where the product details will be inserted
Then, the client side app will insert the data into the html template and display it.
Then you go back to the list (no request is made for this!) and you open another product. This time, there will be only one ajax request, to get the details of the product. The HTML template is going to be the same, so you don't need to download it again.
You may say that in a non-SPA, when you open the product details, you make only 1 request, while in this scenario we made 2. Yes, but you gain from an overall perspective: when you navigate across many pages, the number of requests is going to be lower. And the data transferred between the client side and the server is going to be lower too, because the HTML templates are reused. Also, you don't need to download on every request all those CSS, image and JavaScript files that are present on all the pages.
Also, let's say your server side language is Java. If you analyze the 2 requests I mentioned, one downloads data (you don't need to load any view file or call the view rendering engine) and the other downloads a static HTML template, so a plain HTTP web server can serve it directly without even calling the Java application server - no computation is done!
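A minimal sketch of the two-request flow just described, with hypothetical endpoints and a trivial {{field}} placeholder syntax; the template is fetched once and reused for every product.

    let detailTemplate = null; // fetched once, then reused for every product

    async function showProduct(id) {
      if (!detailTemplate) {
        // Static file: a plain web server or CDN can serve it, no app server needed.
        const res = await fetch('/templates/product-detail.html');
        detailTemplate = await res.text();
      }
      const product = await (await fetch('/api/products/' + id)).json();
      document.querySelector('#view').innerHTML = detailTemplate
        .replace('{{name}}', product.name)
        .replace('{{price}}', product.price);
    }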
Finally, the big companies are using SPAs: Facebook, GMail, Amazon. They don't play around; they have great engineers studying all this. So if you don't see the advantages, you can initially trust them and hope to discover the advantages down the road.
But it is important to use good SPA design patterns. You may use a framework like AngularJS. Don't try to implement a SPA without good design patterns, because you may end up with a mess.
Disadvantages:
Technically, the design and initial development of a SPA are complex and can be avoided. Other reasons for not using a SPA can be:
a) Security: a Single Page Application is less secure compared to traditional pages due to cross-site scripting (XSS).
b) Memory leaks: a memory leak in JavaScript can cause even a powerful computer to slow down. Traditional websites encourage navigating between pages, so any memory leaked by the previous page is mostly cleaned up, leaving less residue behind.
c) The client must enable JavaScript to run a SPA, whereas in a multi-page application JavaScript can be completely avoided.
d) A SPA can grow beyond its optimal size, causing long waiting times, e.g. working with Gmail on a slower connection.
Apart from the above, other architectural limitations are navigational data loss, no log of navigational history in the browser, and difficulty in automated functional testing with Selenium.
This link explains a Single Page Application's advantages and disadvantages.
Try not to consider using a SPA without first defining how you will address security and API stability on the server side. Then you will see some of the true advantages of using a SPA. Specifically, if you use a RESTful server that implements OAuth 2.0 for security, you will achieve two fundamental separations of concerns that can lower your development and maintenance costs.
This will move the session (and its security) onto the SPA and relieve your server of all of that overhead.
Your APIs become both stable and easily extensible.
Hinted at earlier, but not made explicit: if your goal is to deploy Android and Apple applications, writing a JavaScript SPA that is wrapped by a native call to host the screen in a browser (Android or Apple) eliminates the need to maintain both an Apple code base and an Android code base.
I understand this is an older question, but I would like to add another disadvantage of Single Page Applications:
If you build an API that returns results in a data language (such as XML or JSON) rather than a formatting language (like HTML), you are enabling greater application interoperability, for example, in business-to-business (B2B) applications. Such interoperability has great benefits but does allow people to write software to "mine" (or steal) your data. This particular disadvantage is common to all APIs that use a data language and is not specific to SPAs (indeed, a SPA that asks the server for pre-rendered HTML avoids this, but at the expense of poor model/view separation). The risk exposed by this disadvantage can be mitigated by various means, such as request limiting, connection blocking, etc.
In my development I found two distinct advantages to using a SPA. That is not to say that the following cannot be achieved in a traditional web app, just that I see an incremental benefit without introducing additional disadvantages.
Potential for fewer server requests, as rendering new content isn't always (or even ever) an HTTP request for a new HTML page. I say potential because new content could easily require an Ajax call to pull in data, but that data can be considerably lighter than the data plus markup, providing a net benefit.
The ability to maintain "state". In its simplest terms, set a variable on entry to the app and it will be available to other components throughout the user's experience without passing it around or resorting to a local storage pattern. Intelligently managing this ability, however, is key to keeping the top-level scope uncluttered.
Other than requiring JS (which is not a crazy thing to require of web apps), the other noted disadvantages are, in my opinion, either not specific to SPAs or can be mitigated through good habits and development patterns.

What is server side rendering of javascript?

Some JavaScript frameworks like Dust.js claim that they also support server-side rendering (in addition to client-side rendering). Can someone explain how this works? My understanding is that JS is always executed in the browser runtime.
JavaScript can be run on servers using systems like Node.js.
With regard to Dust.js, a templating engine, it can generate hypertext and HTML on the server and send that content directly to the client's browser. This is typically used to avoid a flash of temporarily empty templates, caused by the browser requiring a split second to populate a view's templates via a framework like Dust.js. The downside is that the page will take slightly longer to load because more work has to be done on the server before sending data to the client.
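A minimal sketch of what that looks like with dustjs-linkedin in Node.js; the template source is inline here just for illustration.

    const dust = require('dustjs-linkedin');

    // Compile and register the template once...
    dust.loadSource(dust.compile('Hello {name}!', 'greeting'));

    // ...then render it to plain HTML whenever a request comes in.
    dust.render('greeting', { name: 'World' }, (err, html) => {
      if (err) throw err;
      console.log(html); // "Hello World!" - ready to send to the browser
    });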
Check out this question for the pros and cons of server-side rendering. One must choose between slow post-processing (requiring the user's browser to do the work) or slow pre-processing (making the server do the work before the user sees anything).
Server side rendering means running your JavaScript on the server to produce static HTML and CSS that is sent to the browser.
Earlier, JS tended to be loaded last to optimise website performance.
But the problem was that this hurt SEO.
So server side rendering became the solution to this issue.
The 3 main rendering methods are:
Client Side Rendering (CSR)
This is the default rendering process of modern frontend frameworks like ReactJS: the server just returns an almost empty index.html file, and then the user's browser loads and builds everything using JavaScript.
Server Side Rendering (SSR):
Refers to the process of generating the HTML content on the server, on demand for each request.
Usually, you’ll accomplish this by setting up a server running something like Express or Next.js (which use NodeJS) that renders your React app with each request — just as with a more traditional PHP or Rails based website.
Static Site Generator (SSG):
Both Server Side Rendering and Static Site Generator involve generating HTML for each of your site’s URLs. The difference is that Static rendering happens once at build time, while server rendering happens on-demand, as the user requests each file.
With static rendering, you need to generate responses for every possible request ahead of time. But what if you’re building something where you can’t predict all possible requests, like a search page? Or what if you have a lot of user generated content, and the response changes on every request? In that case, you’ll need Server Side Rendering.
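To make the SSR/SSG split concrete, here is a minimal Next.js sketch (a hypothetical pages/products.js file and API URL): export getServerSideProps to render per request, or swap it for getStaticProps to render once at build time.

    // pages/products.js (hypothetical)
    export async function getServerSideProps() {
      const res = await fetch('https://example.com/api/products'); // hypothetical API
      const products = await res.json();
      return { props: { products } }; // rendered to HTML on the server for each request
    }

    export default function Products({ products }) {
      return (
        <ul>
          {products.map((p) => (
            <li key={p.id}>{p.name}</li>
          ))}
        </ul>
      );
    }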
*** So which should you use? ***
It depends.
if SEO is irrelevant — e.g. an app that lives behind a login screen — then CSR is fine and you just need something like ReactJS
If you need a good SEO:
a) If you can predict the content (to generate it in build time) then you need SSG and should choose something like NextJS
b) If you can’t predict the content/possible request, the server will need to generate the pages on demand, so you need SSR and should choose something like NextJS.
Note: NextJS allows you to selectively mix the 3 main rendering forms in the same project. There are also other good solutions on the market, like Gatsby.
Server-Side Rendering (SSR) is executing JavaScript [SPA application] on the server-side.
Opposite to Client-Side Rendering when JavaScript is executed on the client-side.
Server-Side Rendering is a relatively new technology/term that allows SPA frameworks like React or Angular to become more SEO-friendly.
The technology is rather inefficient because of the additional work put on the server.
Instead of creating a SPA (single-page application) and then executing it on the server, one should consider creating a multi-page application in the first place.
The SSR term is somewhat ambiguous and confusing, as some people use it to refer to traditional multi-page websites like those built on WordPress, for example, or any PHP or Java backend where HTML pages are generated by the server, while others use it only for SPAs rendered on the server.
The latter, IMHO, is "more correct" as it produces less confusion.

Why exactly is server side HTML rendering faster than client side?

I am working on a large web site, and we're moving a lot of functionality to the client side (Require.js, Backbone and Handlebars stack). There are even discussions about possibly moving all rendering to the client side.
But reading some articles, especially ones about Twitter moving away from client side rendering, which mention that server side is faster / more reliable, I begin to have questions. I don't understand how rendering fairly simple HTML widgets in JS from JSON and templates in a contemporary browser on a dual core CPU with 4-8 GB RAM is any slower than making dozens of includes in your server side app. Are there any actual real life benchmarking figures regarding this?
Also, it seems like parsing HTML templates with server side templating engines can't be any faster than rendering the same HTML from a Handlebars template, especially if it is a precompiled JS function?
There are many reasons:
JavaScript is an interpreted language and is slower than server side code (usually written in a compiled language).
DOM manipulation is slow, and if you are manipulating the DOM from JS it results in poor performance. There are ways to overcome this, like preparing your rendering as text and then evaluating it in one go, which can in fact get you close to server side rendering (see the sketch below this list).
Some browsers are just too slow, especially old IE.
Performance of a compiled language versus interpreted JavaScript.
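A minimal sketch of the "prepare the rendering as text, then touch the DOM once" technique mentioned in the list above; the #list element is hypothetical.

    var rows = [];
    for (var i = 0; i < 1000; i++) {
      rows.push('<li>Item ' + i + '</li>');
    }

    // One innerHTML assignment instead of 1000 individual appendChild calls.
    document.querySelector('#list').innerHTML = rows.join('');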
Caching, i.e. serving up the exact same page another user has already requested; this removes the need for each client to render it. Great for sites with huge traffic, e.g. news sites. Micro-caching can even provide near real-time updates, yet serve significant traffic from the cache. No need to wait for client rendering.
Less reliance on users with old computers or slow / crippled browsers
Only need to worry about rendering, less reliance on how different browsers manage DOM (reliability)
But for a complex UI, client side rendering of interactions will provide a snappier user experience.
It really depends on what performance you're trying to optimise, and for how many users.
To run code on the client side it first has to be loaded. Server side code is just loaded when the server starts, whereas the client code must potentially be loaded every time the page is. In any case the code must be interpreted when loading the page, even if the file is already cached. You might also have caching of JS parse trees in the browser, but I think those are not persisted, so they won't live long.
This means that no matter how fast JavaScript is (and it is quite fast), work has to be performed while the user waits. Many studies have shown that page loading time greatly affects the user's perception of the site's quality and relevance.
Bottom line is that you have 500ms at the most to get your page rendered from a clean cache on your typical developer environment. Slower devices and networks will make that lag just barely acceptable to most users.
So you probably have 50-100ms, grand total, to do things in JavaScript during page load, which means that rendering a complex page is, well, not easy.
