I'm using a headless CMS hosted in a region far from my own, and each request to it takes at least 1.5s to return a payload (which adds up to a ridiculously long wait when you're making multiple requests).
Now, I could use getStaticProps on every page that needs CMS content. But since I also have global components that need CMS data (such as the navigation and footer), I'm forced to fetch in getInitialProps in _app.js, which destroys Next's ability to render each page statically.
Now, before you say:
Just create a request to the CMS on every page using getStaticProps.
That would stop my website from behaving as a proper single-page application and cause layout elements that shouldn't be re-mounted to re-render while navigating. Being able to navigate without re-mounting is vital, since the navigation has functionality that relies on it.
Here is an example of what my _App.getInitialProps looks like:
const appProps = await App.getInitialProps(context);

try {
  // Retrieve data from the CMS.
  // Yes, I do have to make two requests to the CMS to retrieve a full payload,
  // as that is how their API is set up, unfortunately.
  const dataOne = await fetch(...);
  const dataTwo = await fetch(...);

  // Yes, I'm aware this isn't actually how it would work.
  return { ...dataOne, ...dataTwo, ...appProps };
} catch (error) {
  console.log("Error in _App component:", error);
}

return appProps;
The ideal workaround would be to use getStaticProps in each component that requires it, but unfortunately that isn't a feature Next.js offers.
I have read the Next.js documentation many times, but I still have no idea what the difference is between getStaticProps with fallback: true and getServerSideProps.
As far as I understand:
getStaticProps
A page with getStaticProps is rendered at build time, and every request is served the resulting static HTML file. It is used for pages that are not updated often, for example an About Us page.
export async function getStaticPaths() {
  return {
    paths: [{ params: { id: '1' } }, { params: { id: '2' } }]
  }
}
But if we add fallback: true to the object returned from getStaticPaths, and a request comes in for a page that was not generated at build time, Next.js will generate that page as a static page on the first request, and subsequent requests for it will be served statically.
export async function getStaticPaths() {
  return {
    paths: [{ params: { id: '1' } }, { params: { id: '2' } }],
    fallback: true,
  }
}
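For completeness, here is a minimal getStaticProps sketch that could sit next to that getStaticPaths in the same dynamic page file; the API URL and prop shape are illustrative assumptions, not from the question:

export async function getStaticProps({ params }) {
  // Hypothetical data source; this runs at build time
  // (and on the first request for fallback pages).
  const res = await fetch(`https://example.com/api/items/${params.id}`);
  const item = await res.json();

  return {
    props: { item },
  };
}

export default function ItemPage({ item }) {
  return <h1>{item.name}</h1>;
}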
So the concept behind getStaticProps works great for me; I think it can cover most scenarios. But if getStaticProps works so well, why do we need getServerSideProps?
I understand that Next.js will pre-render the page on each request if we use getServerSideProps. But why do we need that when we can use getStaticProps to get the newest data, which I think is better for TTFB?
Can you guys please explain it to me or guide me to something that can help me totally understand this?
Consider a web crawler that does not run JavaScript. When it visits a page that was not built at build time and fallback is set to true, Next.js will serve the fallback version of the page that you defined (see Fallback pages). The crawler may see something like <div>Loading</div>.
Meanwhile, if the crawler visits a page that uses getServerSideProps, Next.js will always respond with all the data ready, so the crawler will always get the completed version of the page.
From a user standpoint, the difference is not significant. In fact, the page with getStaticProps and fallback: true may even result in better perceived performance.
Also, as more and more crawlers execute JavaScript before indexing pages, I would expect there to be fewer reasons to use getServerSideProps in the future.
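For reference, this is roughly what a fallback page looks like in code: a minimal sketch (component and prop names are illustrative) of a dynamic route that renders a loading state while Next.js generates the page in the background.

import { useRouter } from 'next/router';

export default function Post({ post }) {
  const router = useRouter();

  // This is the markup a non-JavaScript crawler would see in the
  // pre-rendered fallback HTML.
  if (router.isFallback) {
    return <div>Loading</div>;
  }

  return <h1>{post.title}</h1>;
}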
fallback: true is set inside getStaticPaths, which is used together with getStaticProps to pre-generate dynamic pages.
getStaticPaths queries the database to determine how many pages have to be pre-generated and what the dynamic part is, which is usually a "slug" for blogs or an "id" for products. Imagine your site has thousands of products; pre-generating every page would take a long time. If a user manually types the URL "yourDomain/products/100" and that page is not ready yet, you return a fallback component instead.
However, getServerSideProps is executed each time a request hits the route. With getStaticPaths and getStaticProps you have no access to the request object (if you are revalidating, you have access to the request); for example, you might need to extract a cookie and make an API call with that cookie.
Otherwise, imagine you have a dynamic page "users/[id]" where you show a user's secret data after authentication. If you pre-generated these pages, every user would be able to see every other user's page.
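As a minimal sketch of that cookie scenario (the API URL, prop names and the notFound handling are illustrative assumptions, not part of the original answer):

// pages/users/[id].js — hypothetical protected dynamic route.
export async function getServerSideProps({ req, params }) {
  // The raw Cookie header is available only because this runs on every request.
  const cookie = req.headers.cookie || '';

  // Hypothetical API call that forwards the user's cookie for authentication.
  const res = await fetch(`https://example.com/api/users/${params.id}`, {
    headers: { cookie },
  });

  if (!res.ok) {
    return { notFound: true };
  }

  const user = await res.json();
  return { props: { user } };
}

export default function UserPage({ user }) {
  return <h1>{user.name}</h1>;
}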
This is a fairly new topic: React Server Components have recently been released. Compared to SSR with Next.js, how do they affect SEO?
Since the components are rendered on the server dynamically when they are requested, the result is not really as static as SSR with Next.js. Will search engines fail to index those components if I use them?
A demo can be found here.
We can see the following in api.server.js:
async function renderReactTree(res, props) {
  await waitForWebpack();
  const manifest = readFileSync(
    path.resolve(__dirname, '../build/react-client-manifest.json'),
    'utf8'
  );
  const moduleMap = JSON.parse(manifest);
  pipeToNodeWritable(React.createElement(ReactApp, props), res, moduleMap);
}
function sendResponse(req, res, redirectToId) {
  const location = JSON.parse(req.query.location);
  if (redirectToId) {
    location.selectedId = redirectToId;
  }
  res.set('X-Location', JSON.stringify(location));
  renderReactTree(res, {
    selectedId: location.selectedId,
    isEditing: location.isEditing,
    searchText: location.searchText,
  });
}
I understand this can help reduce the workload on the client's device, since the components are rendered on the server and sent to the client, and that a component can be rendered with a secret stored on the server, as we can just pass it in as a prop rather than sending the secret to the client.
But if SEO matters, is SSR preferred over React Server Component?
Server Components are complementary to rendering into HTML, not an alternative. The plan is to have both.
Server Components were not released. What was released is an early tech preview in the spirit of sharing our research. This preview doesn’t include an HTML renderer. The api.server.js file from the demo you mentioned contains a comment about this:
const html = readFileSync(
  path.resolve(__dirname, '../build/index.html'),
  'utf8'
);
// Note: this is sending an empty HTML shell, like a client-side-only app.
// However, the intended solution (which isn't built out yet) is to read
// from the Server endpoint and turn its response into an HTML stream.
res.send(html);
By the time Server Components are officially released, there will be a streaming HTML renderer for the first render.
It’s not built yet.
From an SEO point of view, it should be the same as an SPA.
What happens with a classic React SPA is that it loads the React components (which are essentially JS functions) as part of the JS bundle, and then starts requesting data from the backend in JSON format. After the JSON is fetched, it is rendered via the React component functions and inserted into the DOM. Modern crawlers use the V8 engine (or maybe something else if it's Bing :D); they wait until the page is fully loaded, all JSON data is fetched and all components are actually rendered, and then they crawl the resulting DOM.
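As a rough illustration of that flow (the endpoint and component are made up), a classic client-side-only component looks something like this:

import { useEffect, useState } from 'react';

function Products() {
  const [products, setProducts] = useState(null);

  useEffect(() => {
    // JSON is fetched only after the JS bundle has loaded and the
    // component has mounted — the crawler has to wait for this.
    fetch('/api/products')
      .then((res) => res.json())
      .then(setProducts);
  }, []);

  if (!products) return <div>Loading...</div>;

  return (
    <ul>
      {products.map((p) => (
        <li key={p.id}>{p.name}</li>
      ))}
    </ul>
  );
}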
GoogleBot has been crawling SPAs this way for at least 3 years now, probably more, so if you were thinking that SSR is essential for SEO: no, it is not. There have been plenty of investigations into this; a random example: https://medium.com/@l.mugnaini/spa-and-seo-is-googlebot-able-to-render-a-single-page-application-1f74e706ab11
So essentially, for the crawler it doesn't really matter how exactly a React component is rendered. In the case of React Server Components, the component function resides on the server and is never transferred to the client as part of the JS bundle. So instead of requesting JSON data, the application requests the rendered component in some intermediate format (not HTML, by the way). The result is transferred to the client and rendered into the DOM. The end result is still the same: some DOM elements that the bot can crawl.
I am working on a website that uses Next.js and React. We are using a mix of static generation and SSR. One of the components we're using, a header, is shared by every page on the site. The issue is that this header requires some very large queries in order to get all the information it needs to render properly. Ideally, we want to "statically render" this component so that it can be built once at build time and used by all pages, even pages that are not statically rendered. As it stands, rendering this header every time a user visits our website significantly increases load times and puts an unnecessary load on our DB.
Is there any way to go about doing something like "statically rendering" a single component that's shared by every page? Part of the code for the header looks something like
const { loading, error, data } = useQuery(QUERY);
const { loading: productLoading, data: productData } = useQuery(QUERY_PRODUCT);
const { data: otherData } = useQuery(OTHER_DATA);
const { data: otherOtherData } = useQuery(OTHER_OTHER_DATA);
Ideally we don't want to run these queries every time a user visits the website.
If you are going to use the same data on every page, you may consider using a Redux store or keeping the data in a context provider.
You can use a code sample similar to https://stackoverflow.com/a/64924281/2822041, where you create a context and provider that are accessible in all your pages.
The other method, ideally, would be to cache the data using Redis so that you do not need to run the queries over and over again.
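A very rough sketch of the Redis approach (assuming the ioredis package and a hypothetical fetchHeaderData function standing in for the expensive header queries):

import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL);
const CACHE_KEY = 'header-data';
const CACHE_TTL_SECONDS = 60 * 10; // tune to how often the header changes

// Hypothetical helper: returns cached header data if present, otherwise
// runs the expensive queries once and caches the result.
export async function getHeaderData() {
  const cached = await redis.get(CACHE_KEY);
  if (cached) return JSON.parse(cached);

  const data = await fetchHeaderData(); // placeholder for your expensive DB/GraphQL queries
  await redis.set(CACHE_KEY, JSON.stringify(data), 'EX', CACHE_TTL_SECONDS);
  return data;
}

Each page's getStaticProps or getServerSideProps could then call getHeaderData and only hit the database when the cache has expired.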
I am using Next.js to develop a web app. I have the following page:
<page>
  <component_1>
    Fetch the data from the remote server using `getInitialProps`
    (I could use `getStaticProps` or `getServerSideProps` too).
    Show the data to the user, get the user's input, and pass it on to component_2.
  </component_1>
  <component_2>
    Based on the user's selection, fetch remote data and show it on the page.
  </component_2>
</page>
During user interaction, either component_1 or component_2 is visible.
getInitialProps can only be used in page-level components (loaded via a URL) and is not called in sub-components, so I am not able to fetch remote data from component_2. Do I have to handle this scenario with different URLs, or is there a better way to handle it within the same page?
Thanks for the help in advance!!!
getStaticProps, getInitialProps and getServerSideProps are functions that fetch data for pre-rendered pages.
Your component needs to fetch data based on user actions, so you need to fetch that data on the client side. You can fetch data on the client side in any React component.
Here is an example using the SWR hook; however, you can use any data-fetching method you prefer.
import useSWR from 'swr'

// A fetcher that parses the JSON response; passing fetch directly would
// give you a raw Response object instead of the parsed data.
const fetcher = (url) => fetch(url).then((res) => res.json())

function Profile() {
  const { data, error } = useSWR('/api/user', fetcher)

  if (error) return <div>failed to load</div>
  if (!data) return <div>loading...</div>
  return <div>hello {data.name}!</div>
}
Suggested reading: Next.js fetching data on the client side
Do you know if it's possible to re-execute Gatsby page queries (normal queries) manually?
Note: this should happen in dev mode, while gatsby develop is running.
Background info: I'm trying to set up a draft environment with Gatsby and a headless CMS (Craft CMS in my case). I want gatsby develop to run on, say, Heroku. The CMS requests a Gatsby page, passing a specific draft token as a URL param, and then the page queries should be re-executed, using the token to re-fetch the draft content from the CMS rather than the published content.
I'm hooking into the token-request via a middleware defined in gatsby-config.js. This is all based on https://gist.github.com/monachilada/af7e92a86e0d27ba47a8597ac4e4b105
I tried
createSchemaCustomization({ refresh: true }).then(() => {
  sourceNodes()
})
but this completely re-creates all pages. I really only want the page queries to be extracted/executed.
Probably you are looking for this. Basically, you need to set an environment variable (ENABLE_GATSBY_REFRESH_ENDPOINT) which opens and exposes a /__refresh webhook that is able to receive POST requests to refresh the sourced content. This exposed webhook can be triggered whenever remote data changes, which means you can update your data without re-launching the development server.
You can also trigger it manually using: curl -X POST http://localhost:8000/__refresh
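If you want to trigger the same refresh from your draft-token middleware rather than from the command line, here is a minimal sketch (assuming a Node version with a global fetch; otherwise use any HTTP client):

// Hypothetical helper called from the draft middleware: asks the running
// `gatsby develop` instance to re-source content from the CMS.
async function refreshGatsbyContent() {
  await fetch("http://localhost:8000/__refresh", { method: "POST" });
}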
If you need a detailed explanation of how to set .env variables in Gatsby, just tell me and I will provide one. Essentially, you just need to create a .env file with your variables (ENABLE_GATSBY_REFRESH_ENDPOINT=true) and place this snippet in your gatsby-config.js:
require("dotenv").config({
path: `.env.${activeEnv}`,
})
Of course, it will only work under the development environment but in this case, it fits your requirements.
A rebuild of everything is needed, for example, when you have index/listing pages.
It looks like you need some logic to either conditionally call createPage (with all data refetched) or conditionally fetch data for selected pages only.
If the number of pages is relatively small, I would fetch all the data just to get the page update times, and then conditionally call createPage in a loop (for example, only for pages updated within the last few minutes, so no parameter needs to be passed).
If develop doesn't call createPage on /__refresh, dive deeper into the Gatsby code to find the logic and a way to mark the touched nodes in its internal Redux store.
...or search for other optimization techniques you can use for this scenario (caching the queried data into JSON files?).
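As a very rough sketch of the "queried data cached into JSON files" idea (the file path and the fetchAllPages helper are made-up placeholders):

const fs = require('fs');
const path = require('path');

const CACHE_FILE = path.join(__dirname, '.cache-pages.json');
const MAX_AGE_MS = 5 * 60 * 1000; // reuse data fetched within the last 5 minutes

// Hypothetical helper: reuse previously fetched page data if it is fresh
// enough, otherwise hit the CMS again and persist the result to disk.
async function getPagesData() {
  if (fs.existsSync(CACHE_FILE)) {
    const { savedAt, pages } = JSON.parse(fs.readFileSync(CACHE_FILE, 'utf8'));
    if (Date.now() - savedAt < MAX_AGE_MS) return pages;
  }

  const pages = await fetchAllPages(); // placeholder for your CMS query
  fs.writeFileSync(CACHE_FILE, JSON.stringify({ savedAt: Date.now(), pages }));
  return pages;
}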