When we use client-side rendering, I know this reduces the number of round trips to the server. For example, if we use React for that (via create-react-app), the build produces one JS file containing all of our application code except the data we receive from the API (which will most often be JSON). But that means all the DOM-related code ends up in that one JS file the user downloads when they load the page for the first time. For small apps I don't see a problem, but in large applications with many pages, components, and sub-pages using routing libraries like react-router, does all of that code end up in that single file? Wouldn't that make it too big to send at once?
There is no doubt that these techniques improve the performance and interactivity of a website, but my concern is the first load of the site and how to make it as fast as possible for relatively large applications.
Thank you all. The solution is to use "lazy loading" and "code splitting" techniques. This is a good article about it:
Lazy loading routes in react
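As a minimal sketch of what that looks like with React.lazy and react-router v6 (the page components and paths are just placeholders, not part of any real app):

    import React, { Suspense, lazy } from 'react';
    import { BrowserRouter, Routes, Route } from 'react-router-dom';

    // Each lazy() import is split into its own chunk by the bundler,
    // so it is only downloaded when the user first visits that route.
    const Home = lazy(() => import('./pages/Home'));
    const Products = lazy(() => import('./pages/Products'));

    export default function App() {
      return (
        <BrowserRouter>
          {/* Shown while a route's chunk is still being fetched */}
          <Suspense fallback={<p>Loading...</p>}>
            <Routes>
              <Route path="/" element={<Home />} />
              <Route path="/products" element={<Products />} />
            </Routes>
          </Suspense>
        </BrowserRouter>
      );
    }

With this setup, the initial bundle only contains the shell of the app, and each route's code is fetched on demand.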
So I'm finding it difficult to see the benefit of doing SSR for dynamic paths in Next.js when I can just pre-render a few static paths and use fallback=true to cover my bases on most pages.
Say I have an eCommerce site with 1 million product detail pages, but I only want to pre-render the featured products on the home page (the most clicked). If I set fallback to true in getStaticPaths, then the getStaticProps function runs every time a non-featured product page is requested.
So what's the advantage of using SSR when I can just have a fallback that queries the database every time a non pre-rendered page is called?
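For reference, here's roughly the setup I mean, a minimal sketch (the file path and the getFeaturedProducts/getProductById helpers are made up):

    // pages/products/[id].js -- hypothetical path and data helpers
    export async function getStaticPaths() {
      const featured = await getFeaturedProducts(); // only the most-clicked products
      return {
        paths: featured.map((p) => ({ params: { id: String(p.id) } })),
        fallback: true, // every other product page is generated on its first request
      };
    }

    export async function getStaticProps({ params }) {
      const product = await getProductById(params.id); // queries the database
      if (!product) {
        return { notFound: true };
      }
      return { props: { product } };
    }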
Note: I saw a similar question on Stack Overflow, and the answer was that web crawlers see only the fallback state of the React component you set for non-pre-rendered pages (so the source code would only read <p>Loading...</p> or something like that, whereas the SSR page would have all your product data directly in the source code). But this doesn't seem to be true in my app.
Thanks for any help.
TL;DR: [In Next.js] Why can't I just use SSG for dynamic paths, with fallback=true in getStaticPaths, instead of SSR?
I tried reading the Next.js docs and couldn't find an explanation of the downsides of using fallback=true in getStaticPaths.
From the Next.js docs:
By default, Next.js pre-renders every page. This means that Next.js generates HTML for each page in advance, instead of having it all done by client-side JavaScript.

Two Forms of Pre-rendering

Next.js has two forms of pre-rendering: Static Generation and Server-side Rendering. The difference is in when it generates the HTML for a page.

Static Generation is the pre-rendering method that generates the HTML at build time. The pre-rendered HTML is then reused on each request.

Server-side Rendering is the pre-rendering method that generates the HTML on each request.
I put those definitions there to clarify the Next.js terminology. I believe your question is about fallback: true versus generating HTML on each request (that is, building the page at runtime vs. at build time). I think the note you shared is not correct:
Note: I saw a similar question on Stack Overflow, and the answer was that web crawlers see only the fallback state of the React component you set for non-pre-rendered pages (so the source code would only read Loading... or something like that, whereas the SSR page would have all your product data directly in the source code). But this doesn't seem to be true in my app.
In both cases the populated page is what the crawlers see.
Using getStaticPaths in your e-commerce example is essentially caching. The pages for popular products are already built; if you build your app locally, you can see them inside the build output folder. In large applications those static assets are stored on a CDN, so whenever the server gets a request, the response arrives almost instantly. Customers get a better user experience, which eventually affects the profit of the e-commerce site.
I think the clearest example is a blogging site like Medium. The most popular blog posts will be pre-generated, since their content doesn't change very often. Medium serves them from CDNs in different parts of the world, so users everywhere get faster access to the blogs.
Hitting the database is a very expensive operation. The more load you put on the database, the harder it is to maintain the availability, scalability, and reliability of your application.
Also, you might have a fast internet connection and a high-end device, so you can access the data quickly, but you have to think about all the people around the world trying to access it on low-quality devices or connections.
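For completeness, with fallback: true the page component itself handles the not-yet-generated state; a rough sketch (the ProductPage component and its props are just an illustration):

    import { useRouter } from 'next/router';

    // Hypothetical page component for pages/products/[id].js
    export default function ProductPage({ product }) {
      const router = useRouter();

      // Rendered only while the HTML for a non-pre-rendered product is being
      // generated on its first request (fallback: true); later requests get
      // the cached, fully populated page.
      if (router.isFallback) {
        return <p>Loading...</p>;
      }

      return <h1>{product.name}</h1>;
    }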
I have heard SSG generates static sites.
So I thought SSG generated pure HTML that didn't include React, but now I think that may not be true.
I think:
SSG generates a normal React app plus pre-rendered HTML for the initial load.
Since it is a normal React app, if I click a button and trigger a side effect, client-side rendering kicks in and the page updates.
When routing through the router is triggered, the next page's JS file and the data fetched at build time are downloaded, and then client-side rendering is triggered.
The next page's pre-rendered HTML isn't used in that case.
Is it true?
SSGs (Static Site Generators) like Gatsby and Next generate output HTML from React code. This doesn't mean the site is "static" in terms of interaction; it means the page you are requesting has already been created, so you avoid response and compilation time on the server.
To summarize, take a "traditional"/"old-fashioned" PHP site. When you request the homepage, for example, the request goes to the server, the server executes the PHP and produces HTML (which the browser can parse and render), and then you get the page. That processing time is skipped in Gatsby/Next because the HTML has already been created.
When you build your site in Gatsby/Next, the data is retrieved from your sources (using GraphQL over markdown files, CMSs, APIs, JSON, etc.) and the output is created (that's why a /public folder is generated). All your JavaScript and React is bundled into the output, so your website is still "dynamic" in terms of user interactivity: React is part of the ecosystem, so your side effects (triggered by the useEffect hook, for example) and your rehydration (the useState hook, for example) are part of your site.
In other words:
When you navigate to another page, you are requesting a page that has already been built and generated; that's why it is so blazingly fast.
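As a tiny illustration of that rehydration (the component is made up): its markup is already in the pre-built HTML, and once the bundled JavaScript loads, React attaches the handlers so it behaves like any SPA component.

    import React, { useEffect, useState } from 'react';

    // Illustrative component: its markup ships in the pre-rendered HTML,
    // and it becomes interactive once the bundled JavaScript hydrates the page.
    export default function LikeButton() {
      const [likes, setLikes] = useState(0);

      // Runs only in the browser, after hydration
      useEffect(() => {
        console.log('hydrated and interactive');
      }, []);

      return (
        <button onClick={() => setLikes(likes + 1)}>
          Likes: {likes}
        </button>
      );
    }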
I answered this question a few weeks ago in the Nuxt discussions: https://github.com/nuxt/nuxt.js/discussions/9493#discussioncomment-948643
Let's say that SSG brings several things:
SEO
speed
ecology
[probably some other things]
There are several ways of doing SSG, and all of them have their pros/cons and their use cases. For the most part, if you're using Nuxt.js, you will probably go the target: static, ssr: true route (sketched below).
This will:
generate fully static pages at build time, which you can host on Netlify, Vercel, or the like
hydrate the static content with some JS after the static files have been fetched
give you the usual Vue behavior afterwards, as a classic SPA (hence handling routing without further server calls)
This behavior is called Isomorphic or Universal, more info in the linked discussion.
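In a Nuxt 2 nuxt.config.js, that setup is roughly (a minimal sketch, not a full config):

    // nuxt.config.js (Nuxt 2) -- minimal sketch of the target: static, ssr: true setup
    export default {
      target: 'static', // pages are pre-rendered to HTML at build time (nuxt generate)
      ssr: true         // the static HTML is then hydrated in the browser
    };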
Gatsby and Next.js work in somewhat similar ways. There are some minor differences, but the general approach is roughly the same across those three, AFAIK.
SvelteKit and Astro handle this a bit differently. It may be interesting to give them a look!
I am creating a one page web app with ExtJS.
Isn't the best way to decrease the load time of a web app to inline the JS, CSS, and HTML in the initial HTML file sent to the browser, instead of just including script and CSS tags that load the files from the server one at a time? That would reduce multiple HTTP requests to a single one.
You may like the concept of httpcombiner.ashx.
http://archive.msdn.microsoft.com/HttpCombiner
This tool can also compress and cache your js and css
If you want to cut down on initial load time, one of the best ways is to take advantage of the browser cache. I suggest you look at using a hosted ExtJS library, such as the one on the Google Ajax APIs. There is a good chance a prospective visitor will already have it cached.
This is just one tip of many.
This page outlines some best practices for lowering perceived page load time:
http://developer.yahoo.com/performance/rules.html
In addition to using the combiner pavan suggested, you can use Google's Closure Compiler to minify your JavaScript files.
http://closure-compiler.appspot.com/home
Well, there is a big difference between load time and perceived load time. One of the best ways to reduce load time is to use server-side compression. However, progressive loading feels faster to the user.
Therefore the initial response should contain only a minimal set of stylesheets (so the browser can render later-arriving content already styled) and the layout. Then you can use an onload callback to an AJAX loader that loads the additional components.
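A bare-bones sketch of that deferred loading (the file name and the init logic are placeholders):

    // After the initial render, pull in the non-critical components
    window.onload = function () {
        var script = document.createElement('script');
        script.src = 'extra-components.js'; // placeholder for the deferred bundle
        script.onload = function () {
            // initialize the additional ExtJS components here
        };
        document.body.appendChild(script);
    };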
Most importantly, do not forget to size your image containers. One of the most annoying things is mis-clicking a link just because an image started loading and shifted the layout.
I'm pretty new to Sencha Touch and am trying to make a simple application that has a login form, makes calls, and fetches the results into lists.
My question is: how should I structure the application? Should it all be inside one .html file, or should I have different pages for each list and the login page? If so, how can I change views from one page to another and get transition effects?
There is actually a generator you can use to create the canonical app structure. From the Sencha download, go to the jsbuilder directory, then run a command similar to this:
./sencha.sh generate app MyApp path/to/myapp
Also, this slide set demonstrates the structure, though you may have to dig a bit for the philosophy of why things are where they are:
http://www.sencha.com/conference/sessions/session.php?sid=322
And here's the example app talked about in the discussion:
http://cl.ly/1d1S282O1Y2c3N1v1j1i
It's fine to use a single HTML file to get started, but in the long run it's worth making the application structure consistent with best practices so that others can look at, and understand, your code later.
The Sencha Touch generators (coming in v1.1) place the launch logic in a file called app.js and then have a file for each model, view, and controller (in their respective directories).
While you may not be building a fully fledged MVC application from the start, you should probably still follow these conventions. Take a look at the Twitter and Kiva apps in the SDK (and at http://dev.sencha.com/deploy/touch/examples/ ) for good examples.
The index.html file can link to each file individually, but for production you are advised to look at the JSBuilder tool to package and minify them all, so that the device can fetch them in a single HTTP request.
I would break it up by major function (i.e. purpose). For mobile apps, you want to avoid unnecessary postbacks and loading multiple pages and views if you can help it.
If your mobile app has one purpose, I would keep it on one HTML page and only break up the JavaScript files as needed to keep things organized.
If it has two purposes (e.g. 1. to enter a bunch of information, and 2. to display reports on your data), then I would break it up into two HTML files.
For example, if you have a mobile app that takes the user through a series of wizard steps to perform data entry (i.e. a single purpose), I would house the whole wizard inside one Ext.Panel (on a single page) and swap out each content Ext.Panel "step" as the user progresses through the wizard.
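Roughly like this, sketched against the Sencha Touch 1.x card layout (the step contents are placeholders):

    // One outer panel with a card layout; each wizard step is a child panel
    var wizard = new Ext.Panel({
        fullscreen: true,
        layout: 'card',
        items: [
            { html: 'Step 1: enter your details' },
            { html: 'Step 2: confirm' },
            { html: 'Step 3: done' }
        ]
    });

    // Move to the next step with a slide transition
    wizard.setActiveItem(1, { type: 'slide' });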
Start with the simplest thing and refactor later. I've just done an app in Sencha Touch, and it gets quite bewildering looking at the example files. I found the easiest way to learn was to create a single HTML file and, as soon as something became unwieldy or obviously needed a refactor, start creating additional files.
I've just started developing an ExtJS application that I plan to back with a very lightweight JSON PHP service. Other than that, it will be standalone. My question is: what is the best way to organize the files and classes that will inevitably come into existence? Does anyone have experience with large ExtJS projects (several thousand lines)?
I would start here http://blog.extjs.eu/know-how/writing-a-big-application-in-ext/
This site gives a good introductory overview of how to structure your application.
We are currently using these ideas in two of our ASP.NET MVC / ExtJS applications.
While developing your application, your file and folder structure shouldn't really matter, since you'll probably want to minify the release code and stick it in a single JS file when you're done. An automated handler or build script is probably the best bet for this (see http://extjs.com/forum/showthread.php?t=44158).
That said, I've read somewhere on the ExtJS forums that a single file per class is advisable, and I can attest to that from my own experience.
In my experience, users are willing to wait for an application to load, so we typically load all of the JS during initial app startup. Loading and eval'ing JS files on demand is unnecessary, especially when all JS will be minified before deployment to production.
I suggest namespaces, one class per file, and a well-defined, well-documented class hierarchy.
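For example, each file holds exactly one class registered under the application namespace (ExtJS 3 style; the names are illustrative):

    // file: js/myapp/view/UserPanel.js -- one class per file, under the app namespace
    Ext.ns('MyApp.view');

    MyApp.view.UserPanel = Ext.extend(Ext.Panel, {
        title: 'Users',
        initComponent: function () {
            this.items = [
                { xtype: 'label', text: 'User list goes here' }
            ];
            MyApp.view.UserPanel.superclass.initComponent.call(this);
        }
    });

    // Register an xtype so the class can be created lazily from config
    Ext.reg('myapp-userpanel', MyApp.view.UserPanel);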
When starting a new big project, I decided to make it modular. Usually, in big projects, not all modules are used by a particular user, so I load them on demand. For example, if a project has 50+ modules, there is a good chance a given user works with only 10 or fewer.
Such an architecture lets you keep the initial code relatively small.
Modules are stored on the server and loaded by an AJAX call, eval'uating the responseText in the AJAX callback. The only issue is that you must keep track of module dependencies, which can be stored inside the modules as well. I have a class called Module, and I check every new module instance for existence within the task; if it doesn't exist yet, I load it from the server.
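Stripped down, such a loader might look like this (the URL scheme and the MyApp.modules registry are my own assumptions, not part of the original code):

    // Hypothetical on-demand loader: fetch a module's source and eval it
    Ext.ns('MyApp');
    MyApp.modules = MyApp.modules || {};

    MyApp.loadModule = function (name, callback) {
        if (MyApp.modules[name]) {          // already loaded, reuse it
            callback(MyApp.modules[name]);
            return;
        }
        Ext.Ajax.request({
            url: 'modules/' + name + '.js', // made-up URL scheme
            success: function (response) {
                eval(response.responseText); // the module registers itself in MyApp.modules
                callback(MyApp.modules[name]);
            }
        });
    };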