Some JavaScript frameworks like Dust.js claim that they ALSO support server-side rendering (in addition to client-side rendering). Can someone explain how this works? My understanding is that the JS is always executed in the browser runtime.
JavaScript can be run on servers using systems like Node.js.
With regard to Dust.js, a templating engine: it can generate the HTML on the server and send that content directly to the client's browser. This is typically used to avoid a flash of temporarily empty templates, caused by the browser needing a split second to populate a view's templates via a framework like Dust.js. The downside is that the page will take slightly longer to load, because more work has to be done on the server before sending data to the client.
Check out this question for the pros and cons of server-side rendering. One must choose between slow post-processing (requiring the user's browser to do the work) or slow pre-processing (making the server do the work before the user sees anything).
Server-side rendering is running the JavaScript on the server and sending the resulting static HTML and CSS to the browser.
Earlier, JS tended to be loaded last to optimise website performance.
But the problem was that this hurt SEO, since crawlers saw little or no content.
So server-side rendering became the solution to this issue.
The 3 main rendering methods are:
Client Side Rendering (CSR):
This is the default rendering process of modern frontend frameworks like ReactJS: the server just returns an almost empty index.html file, and the user's browser then loads the JavaScript and builds everything itself.
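To make that concrete, here is a minimal sketch of the kind of "almost empty" index.html a CSR app ships (the bundle path and the App component are made-up names):

<!DOCTYPE html>
<html>
  <body>
    <!-- empty mount point: nothing is visible until the JavaScript bundle runs -->
    <div id="root"></div>
    <script src="/static/bundle.js"></script>
  </body>
</html>

Inside bundle.js, the browser then builds the whole page itself, e.g. with React:

ReactDOM.render(React.createElement(App), document.getElementById('root'));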
Server Side Rendering (SSR):
This refers to the process of generating the HTML content on the server, on demand for each request.
Usually, you’ll accomplish this by setting up a server running something like Express or Next.js (which use NodeJS) that renders your React app with each request — just as with a more traditional PHP or Rails based website.
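As a rough sketch of that setup (assuming Express and a React component called App; not a complete configuration), the per-request rendering looks something like this:

const express = require('express');
const React = require('react');
const { renderToString } = require('react-dom/server');
const App = require('./App'); // hypothetical React component

const app = express();
app.get('*', (req, res) => {
  // render the component tree to an HTML string on every request
  const html = renderToString(React.createElement(App, { url: req.url }));
  res.send('<!DOCTYPE html><html><body><div id="root">' + html + '</div>' +
    '<script src="/bundle.js"></script></body></html>');
});
app.listen(3000);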
Static Site Generator (SSG):
Both Server Side Rendering and Static Site Generator involve generating HTML for each of your site’s URLs. The difference is that Static rendering happens once at build time, while server rendering happens on-demand, as the user requests each file.
With static rendering, you need to generate responses for every possible request ahead of time. But what if you’re building something where you can’t predict all possible requests, like a search page? Or what if you have a lot of user generated content, and the response changes on every request? In that case, you’ll need Server Side Rendering.
*** So which should you use? ***
It depends.
If SEO is irrelevant — e.g. an app that lives behind a login screen — then CSR is fine and you just need something like ReactJS.
If you need good SEO:
a) If you can predict the content (to generate it at build time), then you need SSG and should choose something like NextJS.
b) If you can't predict the content/possible requests, the server will need to generate the pages on demand, so you need SSR and should choose something like NextJS.
Note: NextJS allows you to selectively mix the 3 main rendering forms in the same project. There are also other good solutions on the market, like Gatsby.
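For example, in Next.js that per-page mixing boils down to which data-fetching function a page exports; a sketch (the API URLs are hypothetical):

// pages/blog.js: pre-rendered once at build time (SSG)
export async function getStaticProps() {
  const posts = await fetch('https://example.com/api/posts').then(r => r.json());
  return { props: { posts } };
}
export default function Blog({ posts }) {
  return <ul>{posts.map(p => <li key={p.id}>{p.title}</li>)}</ul>;
}

// pages/search.js: rendered on the server for every request (SSR)
export async function getServerSideProps({ query }) {
  const results = await fetch('https://example.com/api/search?q=' + query.q).then(r => r.json());
  return { props: { results } };
}
export default function Search({ results }) {
  return <ul>{results.map(r => <li key={r.id}>{r.title}</li>)}</ul>;
}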
Server-Side Rendering (SSR) means executing the JavaScript of an SPA on the server side.
This is the opposite of Client-Side Rendering, where the JavaScript is executed on the client side.
Server-Side Rendering is a relatively new technology/term that allows SPA frameworks like React or Angular to become more SEO-friendly.
The technique is rather inefficient because of the additional work put on the server.
Instead of creating an SPA (single-page application) and then executing it on the server, one should consider creating a multi-page application in the first place.
The SSR term is somewhat ambiguous and confusing: some people use it to refer to traditional multi-page websites, like those built on Wordpress or any PHP or Java backend where HTML pages are generated by the server, while others use it only for SPAs rendered on the server.
The latter, IMHO, is "more correct" as it produces less confusion.
I have just learned react recently and intend to use it for my next project. I have come across react server side rendering a few times, but wonder why we still need it in the "modern age".
In this article, it is argued that with server side rendering, the user does not have to wait for the js to load from a CDN or elsewhere before seeing the initial static page, and the page will regain full functionality when the js arrives.
But after building with the webpack production configuration and gzip, the whole bundle (with react, my code and a lot of other stuff) only takes 40kb, and I have an aws CDN for it. I don't quite see the reason to use server side rendering in my situation.
So the question is why people still use server side rendering if the resulting javascript bundle is so tiny after gzip?
Load Times
A rendered view of the application can be delivered in the response of the initial HTTP request. In a traditional single page web app, the first request would come back, the browser would parse the HTML, then make subsequent requests for the scripts — which would eventually render the page. Those requests will still happen, but they won't ever get in the way of the user seeing the initial data.
This doesn't make much difference on quick internet connections, but for users on mobiles in low network coverage areas, this initial rendering of data can make apps render 20-30 seconds faster.
Views with static data can also be cached at a network level, meaning that a rendered view of a React application can be served with very little computational overhead.
SEO
When a search engine crawler arrives at a web page, the HTML is served and the static content is inspected and indexed. In a purely client side Javascript application, there is no static content. It is all created and injected dynamically once the appropriate scripts load and run.
Unlike React, most frameworks have no way of serialising their component graph to HTML and then reinflating it. They have to use a more convoluted approach, which often involves rendering their page in a headless browser at the server, then serving up that version whenever a crawler requests it.
React can just render the component tree to an HTML string from a server side JS environment such as Node.js, then serve that straight away. No need for headless browsers or extra complications.
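On the client, the same bundle then "reinflates" the server-rendered markup by attaching event handlers to it; with React that is essentially one call (a sketch assuming an App component and a #root mount point):

// client entry point: attach React to the HTML the server already rendered
ReactDOM.hydrate(React.createElement(App), document.getElementById('root'));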
Noscript
It also allows you to write applications which gracefully degrade and ultimately, can be used as thin clients. This means that the processing goes on at the server and the application can be used in a browser with Javascript disabled. Whether or not that's an important market to consider is a debate for another time.
All-or-none rendering
This is a UX concern: some designers decide they don't like incremental rendering. The designer wants the page to show up complete and perfect, without a loading spinner, insertion of fetched text here and there, or displacement of the layout.
Scroll bar restoration is a difficult issue for client side rendering. See Keep scroll position when navigating back (When navigating forward and then back, the scroll position is lost). Server side rendering does not suffer from this issue.
I think if you're chasing SEO, it's better to render on the server, so all content can be read by the SEO bots.
In addition to the SEO benefits mentioned above, with SSR the browser can present the page right away, even before all the JavaScript files are loaded. I have a tutorial explaining this.
I am building a universal/isomorphic javascript app (Express/Redux/React). I am contemplating routing on the client with React Router and/or routing on the server with Express.
I know that client side routing has become popular with single page apps because they make user interaction more seamless.
However, I am trying to get a better understanding of the client vs server side routing. What are potential downsides to client side routing that someone may encounter when building any application (single page or not)? And when is it best to consider routing on the server? Do large scale applications route exclusively on one side (client/server) or do they often blend the two?
Thank you!
I see no good reason to stay away from client-side routing. If you're using something like react-router, then this is both client and server routing, and there is nothing difficult about it. Some specific areas that people might tell you will be difficult:
SEO. That comes for free: whatever URL you hit will be rendered on the server correctly and sent to the client, so Googlebot will see the page correctly. There is absolutely no truth to the suggestion that SEO is harder with client side routing, provided that you are server-side rendering.
Analytics. Easy, just put ga('send', 'pageview', path) wherever you're handling navigation on the client side (right before you trigger the router to change path).
Resources. If your whole site is quite big, you don't want to send the entire thing to the client when the page first loads. This will require a little more complexity (for example, defining multiple entry points with webpack). If you have a site with hundreds of pages then client-side routing is going to give you less benefit anyway.
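A sketch of what those multiple entry points can look like in a webpack config (the file names are hypothetical):

// webpack.config.js: one bundle per section, so each page only loads what it needs
module.exports = {
  entry: {
    home: './src/pages/home.js',
    admin: './src/pages/admin.js'
  },
  output: {
    filename: '[name].bundle.js',
    path: __dirname + '/dist'
  }
};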
This site (my own) uses client-side routing. You'll note that it works just fine with JavaScript turned off (the best way to be certain that Google sees it correctly). The source is here if you want to see how any particular piece was done.
Some of the downsides of client-side routing can be:
While server-side routing is a tried and true technique with many library options available, client-side routing will probably be less robust and harder to manage.
Monitoring. While server-side routed pages could be verified through any basic web scraper software, client-side routed pages would need to be monitored through a more advanced tool that actually renders HTML and fires client-side scripts.
Difficulties with serving SEO content. Though it's possible, it's a lot more difficult.
Resources. Depending on how you build your application, server-side routing might be more resource-effective, since there'll be less overhead to load client-side for each page.
Compatibility. Based on which browsers you're targeting, your preferred method of client-side routing might not be supported.
You can still use client-side routing for applications where the routed pages don't need to or shouldn't be indexed by search engines.
For pages that are critical to SEO and don't need to be a SPA (for example they just serve informational content), there's little reason not to go server side.
I was wondering if it's a good idea to intercept all internal links and load the target page with ajax.
The new History API from HTML5 makes it possible to also change the URL in the address bar.
Are there any disadvantages to this approach compared with the old traditional way that lets the browser load a new page (besides the increased complexity of the code)?
Many frameworks use the HTML5 History API to have client side routing. I don't think you mean that you should load the target page with ajax, but rather change the DOM without requesting the page from the server. There's a whole debate going on (and has been for the past several years) about which architecture is better and honestly it's all down to what you're trying to achieve.
Angular and Ember are client side frameworks that help build rich javascript applications (rather than the traditional website). Since JavaScript has gotten faster and more powerful in all the browsers, it has been possible to build more and more complex applications in the browser (as opposed to desktop applications written in C++ or .NET/Java). The advantages of using this way of routing are that you get nice clean URLs, and you don't waste time going to the server for each request. However, you lose authentication (so if you want to protect content you'll need to go to the server anyway), and not all browsers are up and running with the History API yet (look at IE7, 8 and 9). Consider your target audience and ask yourself if they will use these browsers or not. The frameworks I mentioned use fallback methods and implement the hashbang system instead. This has arguable problems with SEO, however.
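To make the mechanics concrete, here is a bare-bones, framework-free sketch of intercepting internal links with the History API (loadAndRender is a hypothetical function that fetches content and updates the DOM):

document.addEventListener('click', function (e) {
  var link = e.target.closest('a');
  if (!link || link.origin !== location.origin) return; // only intercept internal links
  e.preventDefault();
  history.pushState(null, '', link.href); // change the address bar without a reload
  loadAndRender(link.pathname);           // hypothetical: fetch content and update the DOM
});

window.addEventListener('popstate', function () {
  loadAndRender(location.pathname);       // handle the back/forward buttons
});

// older browsers (e.g. IE9 and below) lack pushState, so a hashbang fallback would be needed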
On the other side you have your server side frameworks like Rails (for Ruby) and Express (for NodeJS) that will serve the pages to the client using clean URLs. If we go further back you get into the realms of ASP.NET, PHP and plain old HTML that use the 'unclean' URL way of routing. The advantages of using the server to give you your content should be obvious - if you have protected content and the user should be authenticated then you can easily check this.
One final thing to note is the question of JavaScript. Ask yourself if your users are going to have JavaScript enabled and what will happen to your application if they have it disabled. Does this matter? How complicated is your application going to be? Do you need to use a big framework for your app, or will simply using a modern technology that gives you clean URLs be enough? Loading every bit of JavaScript takes time, and that time can be eliminated if the server is only giving the client the bare minimum of what it needs.
This is a question about the best way to structure an app that has both server-side and client-side needs. Forgive the length -- I am trying to be as clear as possible with my vague question.
For a standalone non-web-connected art project, I'm creating a simple browser-based app. It could best be compared to a showy semi-complicated calculator.
I want the app to take advantage of the browser presentation abilities and run in a single non-reloading page. While I have lots of experience writing server-side apps in perl, PHP, and Python, I am newer to client-side programming, and a neophyte at JavaScript.
The app will be doing a fair bit of math, a fair bit of I/O control on the Raspberry Pi, and lots of display control.
My original thought (and comfort zone) was to write it in Python with some JS hooks, but I might need to rethink that. I'd prefer to separate the logic layer from the presentation layer, but given the whole thing happens on a single non reloading html page, it seems like JavaScript is my most reasonable choice.
I'll be running this on a Raspberry Pi and I need to access the GPIO ports for both input and output. I understand that JavaScript will not be able to do I/O directly, and so I need to turn to something that will be doing AJAX-ish type calls to receive and send I/O, something like nodejs or socket.io.
My principal question is this -- Is there a clear best practice in choosing between these two approaches:
Writing the main logic of the app in client-side JavaScript and using server-side scripting to do I/O, or
Writing the logic of the app in a server-side language such as Python with calls to client-side Javascript to manage the presentation layer?
Both approaches require an intermediary between the client-side and server-side scripting. What would be the simplest platform or library to do this that will serve without being either total overkill or totally overwhelming for a learner?
I have never developed for the Raspberry Pi or had to access GPIO ports. But I have developed stand-alone web apps that acted like showy semi-complicated calculators.
One rather direct approach for your consideration:
Create the app as a single page HTML5 stand-alone web app that uses AJAX to access the GPIO ports via Node.JS or Python. Some thoughts on this approach based on my experience:
jQuery is a wonderful tool for keeping DOM access and manipulation readable and manageable. It streamlines JavaScript for working with the HTML page elements.
Keep your state in the browser local storage - using JavaScript objects and JSON makes this process amazingly simple and powerful. (One line of code can write your whole global state object to the local storage as a JSON string.) Always transfer any persistent application state changes from local variables to local storage - and have a page init routine that pulls the local storage into local variables upon any browser refresh or system reboot. Test by constantly refreshing your app as you develop to make sure state is managed the way you desire. This trick will keep things stable as you progress.
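A minimal sketch of that local-storage pattern (the appState key and its fields are just examples):

// persist the whole global state object as one JSON string
var appState = { lastResult: 0, theme: 'dark' };
localStorage.setItem('appState', JSON.stringify(appState));

// page init routine: restore state after a refresh or reboot
var saved = localStorage.getItem('appState');
if (saved) { appState = JSON.parse(saved); }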
Using AJAX via jQuery for any I/O is very readable and reliable. Its asynchronous approach also keeps the app responsive as you perform any I/O. Error trapping and time-out handling are also easily accomplished.
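For example, an I/O call with a timeout and error trap might look like this (the /gpio/read endpoint and the helper functions are placeholders):

$.ajax({
  url: '/gpio/read',
  dataType: 'json',
  timeout: 2000, // fail fast if the back end is unresponsive
  success: function (data) {
    updateDisplay(data); // hypothetical UI update
  },
  error: function (xhr, status) {
    showError('I/O request failed: ' + status); // hypothetical error handler
  }
});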
For a back end, if the platform supports it, do consider Node.JS. It looks like there is at least one module for your specific I/O needs: https://github.com/EnotionZ/GpiO
I have found node to be very well supported and very easy to get started with. Also, it will keep you using JavaScript on both the front and back ends. Where this becomes most powerful is when you rely on JavaScript object literals and JSON - the two become almost interchangeable and allow you to pass complicated data structures to/from the back end via a few (or even just one!) object variables.
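A small sketch of that round trip (assumes an Express app is already set up; readGpioInputs is a made-up placeholder, not the actual API of the GpiO module linked above):

// server (Node.js/Express): send the whole state as one JSON object
app.get('/api/state', function (req, res) {
  res.json({ inputs: readGpioInputs(), updatedAt: Date.now() }); // readGpioInputs is hypothetical
});

// browser (jQuery): the same structure arrives as a plain JavaScript object
$.getJSON('/api/state', function (state) {
  console.log(state.inputs, state.updatedAt);
});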
You can also keep your options open now on where you want to execute your math functions - since you can execute the exact same JavaScript functions in the browser or in the node back end.
If you do go the route of JavaScript and an HTML5 approach - do invest time in using the browser "developer tools" that offer very powerful debugging tools and dashboards to see exactly what is going on. You can even browse all the local storage key/value pairs with ease. It's quite a nice development platform.
After some consideration, I see the following options for your situation:
Disable browser security and communicate with GPIO directly (no standard libraries?).
Use a JavaScript server environment with GPIO access and AJAX (complexity introduced by AJAX).
Use the familiar Python with an embedded web browser (easy, if libraries are around).
Don't add too much complexity if you're not familiar with the tooling and language.
Oh what a nice question! I'm thinking about it right now. My approach is a little different:
With the old MVC fashion, you consider the V(iew) layer to be the rendered HTML page with JavaScript, CSS and many other things, while the M and C run on the server. Then one day I met Mr. AngularJS, and I realized: wow, some basic things may change:
AngularJS considers that the View (or the thing I believed was the view) is not actually just a view. AngularJS gave me Controllers, Data resources and even View templates inside that "View"; in other words: the client side itself can be a real Application. So now my approach is:
The server does the "server job": reading and writing data, sending data to the client, receiving data from the client, etc.
And the client does the "client job": interacting with the user and processing the data BEFORE IT IS SENT, such as validating or formatting the information collected from the user, etc.
Maybe you can rethink your approach: ask yourself what logic should run on the client and what should run on the server. The client, with JavaScript, does its I/O; the server, with server-side script, does its I/O. The server provides the needed resources for the client, and the JavaScript uses those resources as the M(odel) of its MVC. Hope you understand; sorry for my bad English :D
Well... it sounds like you've mostly settled on:
Python Server. (Python must manage the GPIO.)
HTML/JavaScript client, to create a beautiful UI. (HTML must present the UI.)
That seems great!
You're just wondering how much work to do on each side of the client/server divide... either way should be functionally equivalent.
In short: Do most of the work in whichever language you are more productive in.
Other notes come to mind:
Writing the entire server as standalone Python is pretty straightforward.
You don't have to, but it's nice and self-contained if you serve the page content itself from it.
If you keep most of the state on the server/Python side, you can make the whole app a little more robust against page reloads (even though I know you mentioned that should never happen).
This is a pretty generic question, but I come from a few years with Flex, and I am not so much experienced with pure web development.
My question is: if you need to build an AJAX app, which one of these approaches would you prefer:
Classical server-side MVC, where controllers return views supplied with model data. Views can be full-blown or partial. Basically, there will be only a small number of full-blown views, which work as the containers, and javascript will help fill in the gaps with partial HTML views asynchronously. This approach is one step beyond traditional web development, as javascript is used only for maintaining the overall control and user interactions.
A full-blown js app, such as the ones built with Cappuccino, Sproutcore, or Backbone.js, where the client side is thick and implements a client-side MVC that handles the model as well as the controller logic and view interactions. The server side in this case plays the role of a set of JSON/XML services with which the client exchanges data. The disadvantage in this case is that view templates have to be loaded at the beginning, when the initial application is bootstrapped, so that javascript can lay out the markup based on the data. The advantages are the reduced weight of the server response, as well as better control within the client, which allows stuff like view-model binding to be applied.
A somewhat mixed approach between those two.
I am favoring the second one, which is normal, since I come from a similar environment, but with that one I am mostly concerned about issues such as url routing (or deeplinking, as we call it in Flash), state management, modularity, and view layout (when do the view markup templates get loaded? Should there be specific server endpoints that provide those templates upon being called, so that the template data does not get loaded in the beginning?).
Please comment.
I prefer #2 myself, but I dig javascript :)
Unfortunately, I have never even seen what flex code looks like. My experience is with rails, so I will talk in those terms, but hopefully the concepts are universal enough that the answer will make sense.
As for client side templates, the best is when your server side platform of choice has a story for it (like the rails 3.1 asset pipeline or the jammit plugin for pre-3.1). If you are using rails I can give more info, but if you aren't, the first thing I would do is look into finding an asset management system that handles this out of the box.
My fallback is generally to just embed templates into my server side templates inside of a script tag like
<script type='text/html' id='foo-template'></script>
To retrieve the string later, you can do something like this (jquery syntax)
var template = $('#foo-template').html();
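You can then populate that string and insert it into the page however you like, e.g. (the {{name}} placeholder, user object and #container element are purely illustrative):

var html = template.replace('{{name}}', user.name); // assumes the template contains a {{name}} placeholder
$('#container').append(html);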
In my server side templates, I will pull those script tags into their own files as partials, so I still get the file separation (rails syntax)
<%= render :partial => 'templates/foo.html.erb' %>
I much prefer just using jammit, and having my client side templates in separate files ending in .jst, but the second approach will work anywhere, and you still get most of the same benefits.
I would recommend the second approach. The second (thick client, thin server) approach, which you are already familiar with, is preferred by an increasing number of modern developers because the rendering and management of widgets is done on the client, which saves computational and bandwidth overhead on the server. Plus, if you have a case of complex widget management, using server-side code for widgets can become increasingly complicated and unmanageable.
The disadvantage pointed out by you:
view templates have to be loaded at the beginning, when the initial application is bootstrapped, so that javascript can lay out the markup based on the data
is not correct. You can very well load static templates on the fly as required through ajax and then render them using javascript into full blown widgets.
For example, if you have an image gallery with an image editor component, then you need not load the files required for the image editor (including images, templates and widget rendering code) until the user actually chooses to edit an image.
Using scriptloaders (eg. requirejs, labjs) you can initially load only a small to medium sized bootstrapping script and then load the rest of the scripts dynamically depending on the requirements.
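A rough sketch of loading a template only when it is actually needed (the URL, button id and renderImageEditor function are hypothetical):

$('#edit-button').on('click', function () {
  // fetch the editor template only when the user opens the editor
  $.get('/templates/image-editor.html', function (templateHtml) {
    renderImageEditor(templateHtml); // hypothetical widget-rendering code
  });
});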
Also, powerful and mature server-side widget libraries are only available for java backends (eg vaadin). If you are working on a php, python or ruby backend then writing your own server-side ui framework can be serious overkill. It is much more convenient to use client-side, server-agnostic javascript ui frameworks, eg. dojo, qooxdoo etc.
You seem to have an inclination towards client-side mvc frameworks. This is a nice approach, but the dual mvc architecture (on the server as well as the client) often has a tendency to lead to code duplication and confusion. For this reason I won't recommend the mixed approach.
You can have a proper mvc framework in the frontend and only a server side model layer that interacts with the application through a restful api (or rpc if you are so inclined).
Since you are coming from a flex background, I would recommend you check out the Ajax.org ui platform http://ui.ajax.org . Their user interface framework is tag based like flex, and though the project is new they have a powerful set of widgets and very impressive charting and data-binding solutions. Dojo and Ample SDK also adopt a tag-based widget layout system.
Qooxdoo and extjs advocate doing everything, from layout to rendering, through javascript, which might be inconvenient for you.
I am an architect of one mobile Web application that has 100,000 users with 20,000 of them online at the same time.
For that kind of application (e.g. limited bandwidth) #2 is the only option I think.
So server side is just a set of data services and client uses pure AJAX RPC.
In our case we use single static index.htm file that contains everything. Plus we use HTML5 manifest to reduce roundtrips to the server for scripts/styles/images on startup. Plus use of localStorage for app state persistence and caching.
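For reference, a minimal sketch of such an HTML5 cache manifest (the file names are hypothetical):

CACHE MANIFEST
# cache the application shell so startup needs no server round trips
index.htm
app.js
app.css

NETWORK:
*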
As of MVC: there are many template engines out there so you can use anything that is most convenient for you. Templates by themselves are pretty compact as they do not contain any data so it is OK (in our case) to include them all.
Yes, the architecture of such an application needs to be well thought out upfront. Option #1 is not so critical - the entry level is lower.
I don't know what platform you are targeting but as I said #2 is probably the only option for mobile.