I have heard SSG generates static sites.
At first I thought SSG generated pure HTML with no React in it, but now I suspect that isn't true.
I think:
SSG generates a normal React app, plus pre-rendered HTML for the initial load.
Because it is still a normal React app, if I click a button and trigger a side effect, client-side rendering kicks in and the page is updated.
When routing via the router is triggered, the next page's JS file and the data obtained at build time are downloaded, and then client-side rendering takes over.
The next page's pre-rendered initialization HTML isn't used in that case.
Is that correct?
Static Site Generators (SSGs) like Gatsby and Next.js create output HTML from your React code at build time. This doesn't mean the site is "static" in terms of interaction; it means the page you are requesting has already been created, so you avoid response and compilation time on the server.
Summarizing: take a "traditional"/"old-fashioned" PHP site. When you request the homepage, for example, your request goes to the server, the server executes the PHP to produce HTML (which the browser can parse and render), and then you get the page. That processing time is avoided in Gatsby/Next because the HTML has already been created.
When you build your site in Gatsby/Next, data is retrieved from your sources (using GraphQL over Markdown files, CMSs, APIs, JSON, etc.) and the output is created (that's why a /public folder is generated). All your JavaScript and React code is bundled with the output HTML, so your website remains "dynamic" in terms of user interactivity: React is part of the shipped bundle, so your side effects (triggered by the useEffect hook, for example) and your state (the useState hook, for example) keep working after the page is rehydrated.
In other words: when you navigate to another page, you are requesting a page that is already built and generated; that's why it's so blazingly fast.
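To make the hydration point concrete, here's a minimal sketch of a statically generated Next.js page, assuming a hypothetical API URL and data shape: the HTML is produced once at build time from getStaticProps, and after hydration the useState counter behaves like in any client-rendered React app.

```jsx
// pages/index.js - hedged sketch, not an official example
import { useState } from "react";

// Runs once at build time; the resulting HTML is reused for every request.
export async function getStaticProps() {
  const res = await fetch("https://example.com/api/posts"); // hypothetical API
  const posts = await res.json();
  return { props: { posts } };
}

export default function Home({ posts }) {
  // After hydration this state works exactly as in a client-rendered app.
  const [count, setCount] = useState(0);
  return (
    <main>
      <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>
      <ul>
        {posts.map((post) => (
          <li key={post.id}>{post.title}</li>
        ))}
      </ul>
    </main>
  );
}
```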
I answered this question a few weeks ago in the Nuxt discussions: https://github.com/nuxt/nuxt.js/discussions/9493#discussioncomment-948643
Let's say that SSG brings several things:
SEO
speed
ecology
[probably some other things]
There are several ways of doing SSG, and all of them have their pros/cons and use cases. For the most part, and if you're using Nuxt.js, you will probably go the target: static, ssr: true route (a config sketch follows below).
This will:
generate fully static pages at build time, which you'll be able to host on Netlify, Vercel, or the like
hydrate the static content with some JS after you have fetched the static files
have the Vue behavior afterwards, as a classic SPA (hence managing the routing without further server calls)
This behavior is called Isomorphic or Universal; more info in the linked discussion.
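For reference, a minimal sketch of that configuration in Nuxt 2 (assuming a default project layout):

```js
// nuxt.config.js - hedged sketch for Nuxt 2
export default {
  target: 'static', // pre-render every page at build time (via nuxt generate)
  ssr: true,        // render pages to HTML first, then hydrate on the client
}
```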
Gatsby and Next.js work in somewhat similar ways. There are some minor differences, but the general idea is broadly the same across those three, AFAIK.
SvelteKit and Astro handle this a bit differently. It may be interesting to give them a look!
So I'm finding it difficult to see the benefits of doing SSR for dynamic paths in Next.js when I can just pre-render a few static paths and use fallback: true to cover my bases on most pages.
Say I have an eCommerce site with 1 million product detail pages, but I only want to pre-render the featured products on the home page (the most clicked). If I set fallback to true in getStaticPaths, then the getStaticProps function runs whenever a non-featured product page is requested.
So what's the advantage of using SSR when I can just have a fallback that queries the database every time a non-pre-rendered page is called?
Note: I saw a similar question on Stack Overflow, and the answer was that web crawlers see only the fallback state of the React component you set for non-pre-rendered pages (so the source code would only read <p>Loading...</p> or something like that, vs. the SSR page, which would load all your product data directly into the source code). But this doesn't seem to be true in my app.
Thanks for any help.
TLDR: [In Next.js...] Why can't I just use SSG for dynamic paths, with fallback: true in getStaticPaths, instead of SSR?
THANKS ALL
I tried reading the Next.js docs and couldn't find an explanation of the cons of using fallback: true in getStaticPaths.
From the Next.js docs:
By default, Next.js pre-renders every page. This means that Next.js generates HTML for each page in advance, instead of having it all done by client-side JavaScript.
Two Forms of Pre-rendering
Next.js has two forms of pre-rendering: Static Generation and Server-side Rendering. The difference is in when it generates the HTML for a page.
Static Generation is the pre-rendering method that generates the HTML at build time. The pre-rendered HTML is then reused on each request.
Server-side Rendering is the pre-rendering method that generates the HTML on each request.
I've put those definitions there to clarify the Next.js terms. I believe your question is about fallback: true versus generating the HTML on each request (i.e., building the page at runtime versus at build time). I think the note you shared is not correct:
Note: I saw a similar question on Stack Overflow, and the answer was that web crawlers see only the fallback state of the React component you set for non-pre-rendered pages (so the source code would only read Loading... or something like that, vs. the SSR page, which would load all your product data directly into the source code). But this doesn't seem to be true in my app.
In each case the populated page is seen by the crawlers.
Using getStaticPaths in your e-commerce example is essentially caching. The pages for popular products are already built; if you build your app locally, you can see them inside the .next build output. In large applications those static assets are stored on a CDN, so whenever a request comes in, the response arrives almost instantly. The customer gets a better user experience, which eventually affects the profit of the e-commerce site.
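As a hedged sketch of that setup (the product-fetching helpers and data shape are assumptions): featured pages are built ahead of time, and any other product page is generated on its first request thanks to fallback: true, then cached like the rest.

```jsx
// pages/products/[id].js - hedged sketch
import { useRouter } from "next/router";

export async function getStaticPaths() {
  const featuredIds = await fetchFeaturedProductIds(); // hypothetical helper
  return {
    paths: featuredIds.map((id) => ({ params: { id } })),
    fallback: true, // non-featured pages are built on first request, then cached
  };
}

export async function getStaticProps({ params }) {
  const product = await fetchProduct(params.id); // hypothetical helper
  return { props: { product } };
}

export default function ProductPage({ product }) {
  const router = useRouter();
  // Shown only while a not-yet-generated page is being built on the server.
  if (router.isFallback) return <p>Loading...</p>;
  return <h1>{product.name}</h1>;
}
```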
I think the clearest example is a blogging website like Medium. The most popular blogs will be pre-generated, since the content of a blog post doesn't change that often. Medium uses CDNs in different parts of the world, so users all around the world get faster access to the blogs.
Hitting the database is a very expensive operation: the more load you put on the database, the harder it is to maintain the availability, scalability, and reliability of your application.
Also, you might have a fast internet connection and a high-end device, so you can access any data quickly, but you have to think about all the people around the world who try to access that data on low-quality devices or internet connections.
I built a small site that uses Gatsby for static content, but for some content that needs to be rendered on the client side, I'm using client-only routes in Gatsby.
I am not sure I fully understand how this works, though. Say I have a Header, Footer, and a font that I am using on my static site. On my client-only routes, I am using the same Header, Footer, and font. Will I benefit at all from having already used these elements in my statically rendered components? Is the font being loaded anew, for example?
Basically, I would like to know which Gatsby features my client-side content is now losing out on, and what I should attend to a bit more myself, since Gatsby won't be handling this for me anymore. Especially in terms of page speed.
Yes, you should benefit from having already used those components and fonts.
The React components being re-used will already be in a JS bundle you have shipped to the user and shouldn't need to be fetched again. Likewise with the font files, though those are served as asset files rather than inside a JS bundle.
The best way to see what is being fetched is to test it out in a browser:
Load a static page
Open the Network tab in dev tools
Navigate to a client-only page and check for network activity
While those assets shouldn't be fetched twice, I can imagine some setups where an incorrect configuration would fetch them twice, so it's best to double-check.
Will I benefit at all from having already used these elements in my statically rendered components?
The answer is yes. Gatsby uses @reach/router under the hood, so you get all of its benefits whether or not you use client-only routes.
In other words, the trickiest part of using client-only routes is the internal routing. In that scenario, Gatsby handles the routing internally since it builds on @reach/router, so the shared components (header, footer, etc.) are only rendered on demand and are shared across your site, no matter whether it's a client-only route or a static page.
I would like to know which Gatsby features my client-side content is now losing out on, and what I should attend to a bit more myself, since Gatsby won't be handling this for me anymore. Especially in terms of page speed.
Summarizing a lot: when a page loads, @reach/router looks at the path prop of each component nested under <Router />, and renders the one that best matches window.location. So you only render the needed code, on demand.
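As a hedged sketch of that pattern (the component paths are assumptions), a client-only section nests its views under <Router /> inside a regular page component, reusing the same shared layout as the static pages:

```jsx
// src/pages/app.js - hedged sketch of a Gatsby client-only section
import React from "react";
import { Router } from "@reach/router";
import Layout from "../components/layout";     // hypothetical shared Header/Footer wrapper
import Profile from "../components/profile";   // hypothetical client-only views
import Settings from "../components/settings";

export default function App() {
  return (
    <Layout>
      <Router basepath="/app">
        {/* @reach/router renders the child whose path best matches window.location */}
        <Profile path="/profile" />
        <Settings path="/settings" />
      </Router>
    </Layout>
  );
}
```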
In terms of page speed, your site won't be affected, because the site remains "static" and is pre-built once the build is done. The only "negative" part of using client-only routes (if you want to call it that) is SEO, since those pages won't be crawled by Google. But that's exactly why you use client-only routes: in most cases you don't want those pages indexed.
I want to build a website with Next.js and am trying to better understand its Automatic Static Optimization and the different ways you can use it.
So to start with, there's Gatsby.js, which is a static site generator. When you run Gatsby's build command, you get a /public folder which is completely static and can be deployed without any need for some kind of back-end. If I understand correctly, this means the entire static folder is sent to the client on the first request, and from then on everything, including routing, happens client-side.
With Next.js, on the other hand, you have static generation, which means that all pages are pre-rendered on the server at build time (like Gatsby), but the application still depends on a back-end (either a full-blown server or a serverless function) for routing. I.e., the pages are pre-rendered but, unlike Gatsby, they're sent to the client per request, i.e., on navigation. (I found an answer which says there's only the initial request with Next, but then what's the difference from Gatsby?)
What I find confusing in all this are things like Next's docs for Static HTML Export. They start by stating:
next export allows you to export your app to static HTML, which can be run standalone without the need of a Node.js server.
So, sounds like this option gives us the ability to use Next just like Gatsby, i.e as a completely static folder.
But then they go on to remark:
If your pages don't have getInitialProps you may not need next export at all; next build is already enough thanks to Automatic Static Optimization.
But Automatic Static Optimization refers only to server-side static pre-rendering, and next build does not produce a Gatsby-like static folder that can be deployed standalone.
So what am I missing here? What's the difference between Gatsby.js and Next.js? Can Gatsby do something Next can't? Can I build a completely static site with Next without using the export command?
Most importantly, can I build and deploy a Next.js application with some pages completely static (like Gatsby), some pages only pre-rendered (getStaticProps and getStaticPaths), and some pages rendered server-side (getServerSideProps)?
Thanks a lot in advance!
The first request is for <url>/index.html, so no, the entire public folder is not sent to the client.
Gatsby optimizes the loading process to ensure critical resources (HTML, CSS, JS) are loaded first, ensuring the best possible user experience. From there it loads the remaining resources required to render the full page and also prefetches pages linked from the main page. Of course, if you have requested a route to a different page, the client initially fetches the HTML for that page instead, but the process followed is similar.
Gatsby is still better at this than Next.js (SSG is a very new feature for Next, and it is the core of what Gatsby does) - see https://dev.to/notsidney/gatsby-won-against-next-js-in-this-head-to-head-37ka.
In answer to your questions: yes, you can do full SSG, partial SSR/SSG, and full SSR with Next. You need next export if you want full SSG; otherwise, for the other modes, you are in standard Next territory, and Next will take care of both SSG and SSR, given you have a traditional web server running that can serve static content and perform dynamic SSR.
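To illustrate the per-page mixing (the routes and data helpers below are assumptions): in one and the same Next.js app, a page is statically generated or server-rendered simply according to which data function it exports.

```jsx
// Hedged sketch: two pages in the same Next.js app, each picking its own mode.
// Each section below is a separate file under pages/.

// --- pages/blog.js: pre-rendered once at build time ---
export async function getStaticProps() {
  const posts = await fetchPosts(); // hypothetical helper
  return { props: { posts } };
}
export default function Blog({ posts }) {
  return <ul>{posts.map((p) => <li key={p.id}>{p.title}</li>)}</ul>;
}

// --- pages/dashboard.js: rendered on every request ---
export async function getServerSideProps({ req }) {
  const user = await fetchCurrentUser(req); // hypothetical helper
  return { props: { user } };
}
export default function Dashboard({ user }) {
  return <p>Hello, {user.name}</p>;
}
```

A page that exports neither function (and has no getInitialProps) is automatically optimized into plain static HTML at build time.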
I am developing an involved web app with ASP.NET Core.
I am developing React components, writing all of my components with ES and JSX syntax.
I run webpack to transpile all of my code (so now I have pre-transpiled files ready to be served)
When a request comes in, I just serve my pre-transpiled bundles.
I wanted to have a way of only bundling and sending user-specific components (based on a list of features they have access to) to the client.
The only way I could figure out to do this is "on-the-fly, permission-controlled component bundling combined with on-the-fly JSX compilation" to serve my components.
I gather that webpack shouldn't be used as an on-the-fly bundler like this, so that is out of the picture...
Partial scrappy solution I came up with:
Using no import or export mechanism in my JS, I use Razor to cycle through the feature list, adding the appropriate (mostly modular) components to the page in what I call "dependency-first order", and at the end of each component's code I write:

class ComponentA extends React.Component {
  // component code here
}
window.ComponentA = ComponentA;
So all components are global and can be rendered.
This way, I am able to select which components get sent to the client with Razor.
NOW, remember when I said "mostly modular"? Well, if I render a component inside another component that the user doesn't have access to, this partial solution leaves the render call embedded in the main component without the sub-component's code actually being there. Since this is a dirty partial solution, I would just suppress the error when the component is missing and move on.
Bottom line: I am having a really difficult time making my React components 100% modular and controlling the granularity of my "component dependencies" so that no code a user shouldn't have access to ends up on the client.
Ridiculous solution someone offered me:
It is certainly out of the question to generate a set of bundles for every user and, whenever an admin changes what a user has access to, re-build that user's bundle with webpack (especially since I am dealing with thousands of users here).
As I write all of this, I feel more and more like I am just being a perfectionist and should simply go with the above paragraph.
The solution I should probably go with:
There is the ideology out there of just sending all of your JS to the browser and then selectively rendering components based on the user's permissions. Any security loopholes here would be handled by server-side access control, locking down endpoints in case a specific user did try to forge requests to parts of the application they don't have access to (which would be implemented regardless).
I am under the gun here and feel like I am overthinking most of this. I would be greatly appreciative of any feedback. Thank you.
It is possible to ship a permission-based JS bundle to the client. You can leverage webpack's dynamic import logic to load only the JS bundles for the required features.
You need to create a directory structure based on features and load them according to user permissions. Basically, webpack creates a separate bundle for each feature and loads it via dynamic import when requested.
Solution here 👇
Note: You might not see the lazy bundles in the codesandbox.io network panel, but you can download the project and run the server locally to see the bundles being lazy-loaded.
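In case a concrete sketch helps (feature names and paths here are hypothetical), the usual pattern combines React.lazy with import(), which webpack splits into one chunk per feature:

```jsx
// Hedged sketch: load only the feature bundles a user is allowed to see.
import React, { Suspense, lazy } from "react";

// webpack turns each import() into a separate lazily loaded chunk;
// the imported modules are assumed to default-export a React component.
const featureModules = {
  reports: lazy(() => import("./features/reports")), // hypothetical paths
  billing: lazy(() => import("./features/billing")),
};

export default function App({ allowedFeatures }) {
  return (
    <Suspense fallback={<p>Loading...</p>}>
      {allowedFeatures.map((name) => {
        const Feature = featureModules[name];
        return Feature ? <Feature key={name} /> : null;
      })}
    </Suspense>
  );
}
```

Note that this only avoids downloading bundles the user doesn't need; as mentioned above, real access control still has to happen server-side.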
When we use client-side rendering, I know this reduces the amount of time spent talking to the server. For example, if we use React for that (with create-react-app), React will create one JS file containing all of our application except the data we receive from the API (usually JSON). But that means all the DOM logic ends up in that one JS file the user downloads when loading the page for the first time. For small apps I don't see a problem. But in large applications, with lots of pages, components, and sub-pages using routing libraries like react-router, will all of that code be in that one file? Wouldn't that make it too big to send at once?
There is no doubt that these techniques improve a website's performance and interactivity, but my concern is the first load of the site and how to make it as fast as possible for relatively large applications.
Thank you all - the solution is to use "lazy loading" and "code splitting" techniques. Here is a good article about this:
Lazy loading routes in react
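As a hedged sketch of that technique (the page paths are assumptions), route-based code splitting with React.lazy and react-router v6 looks roughly like this: each page becomes its own chunk, fetched only when its route is visited.

```jsx
// Hedged sketch: route-based code splitting with React.lazy + react-router v6
import React, { Suspense, lazy } from "react";
import { BrowserRouter, Routes, Route } from "react-router-dom";

// Each import() becomes a separate chunk; the pages are hypothetical and
// assumed to default-export a React component.
const Home = lazy(() => import("./pages/Home"));
const Products = lazy(() => import("./pages/Products"));

export default function App() {
  return (
    <BrowserRouter>
      <Suspense fallback={<p>Loading...</p>}>
        <Routes>
          <Route path="/" element={<Home />} />
          <Route path="/products" element={<Products />} />
        </Routes>
      </Suspense>
    </BrowserRouter>
  );
}
```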