I am new to the Nuxt development platform, having worked mostly in Vue (v2).
Having delved into the docs and experimented locally for a bit, there are a few things I still need clarity on:
In static generation mode, does each page get its own Vue app instance? That is, does every pre-rendered page requested from the server behave as an SPA on the client?
If #1 is true, does every page (and, in effect, every app) run in isolation from all other pages and apps? No shared state?
When using Vuex in SSG mode, does each page get its own Vuex store that is inflated with an initial state while rendering the page on the server? And, does this state get passed down to the client?
This store is destroyed on navigating to a different page (or even refreshing the current page), to be replaced by a new one right?
In SSG, every page is rendered ahead of time but, in the end, it will be hydrated and hence become an SPA. Nuxt is basically Vue on steroids, but it's still Vue.
Not sure what you mean by isolated here; you can totally pass props, use emit, etc. Meanwhile yeah, if you do have multiple pages, they are not all loaded at the same time: there is code splitting + lazy loading + prefetching. Some info can be found here.
You can have namespaced modules out of the box when using Vuex with Nuxt. You do have nuxtServerInit to pre-populate the store, yes. You could give the Nuxt lifecycle a read, btw. But you don't get a Vuex module per page out of the box; there is no real use for this IMO. And yeah, you'll get your Vuex store on the client of course.
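As a rough illustration, a minimal store/index.js using nuxtServerInit might look like this (a sketch assuming the @nuxtjs/axios module; the settings state and the /api/settings endpoint are made up for the example):

// store/index.js (Nuxt 2) — sketch of pre-populating the store on the server
export const state = () => ({
  settings: null
})

export const mutations = {
  setSettings (state, settings) {
    state.settings = settings
  }
}

export const actions = {
  // Runs once on the server before rendering; the resulting state is
  // serialized into the page and re-used by the client during hydration.
  async nuxtServerInit ({ commit }, { $axios }) {
    const settings = await $axios.$get('/api/settings')
    commit('setSettings', settings)
  }
}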
If you navigate to a different path with an <a></a> tag, you will nuke your SPA, yeah. If you're using <nuxt-link></nuxt-link>, you'll stay in the SPA since the Vue router will handle the "navigation". But yeah, if you type a path directly into the URL bar and press Enter, or refresh the page with F5, your entire SPA will be nuked and everything will be rebuilt from the ground up.
I spoke at the latest Nuxt Nation; the quality is not amazing (sorry for that), but you could maybe get some interesting things out of it.
Related
I have heard SSG generates static sites.
Then I thought SSG generates pure HTML that doesn't include React, but now I think that may not be true.
I think:
SSG generates a normal React app plus pre-rendered HTML for initialization.
As it is a normal React app, if I click a button and trigger a side effect, client-side rendering will be triggered and the page will be updated.
When a route change is triggered via the router, the next page's JS file and the data obtained at build time will be downloaded, and then client-side rendering will be triggered.
The next page's rendered HTML for initialization isn't used here.
Is that correct?
Static Site Generators (SSGs) like Gatsby and Next create output HTML from your React code. This doesn't mean that the site is "static" in terms of interaction; it means that the page you are requesting has already been created, so you avoid response and compilation time on the server.
Summarizing: with a "traditional"/"old-fashioned" PHP site, when you request the homepage, for example, the request goes to the server, the server processes the PHP into HTML (what the browser can parse and paint), and then you get the page. That processing time is avoided in Gatsby/Next because the HTML is already created.
When you build your site in Gatsby/Next, the data is retrieved from your sources (via GraphQL from Markdown files, CMSs, APIs, JSON, etc.) and the output is created (that's why a /public folder is generated). All your JavaScript and React is bundled alongside the output HTML, so your website remains "dynamic" in terms of user interactivity: React is still part of the ecosystem, so your side effects (triggered by the useEffect hook, for example) and your rehydration process (the useState hook, for example) will still be part of your site.
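For illustration, a tiny sketch of a component that is pre-rendered to HTML at build time but becomes interactive after hydration (the component and its markup are made up):

import React, { useState, useEffect } from "react";

// Pre-rendered to static HTML at build time; after hydration the click
// handler and the effect run in the browser like in any React app.
export default function LikeButton() {
  const [likes, setLikes] = useState(0);

  useEffect(() => {
    // Runs only on the client, after hydration.
    document.title = `${likes} likes`;
  }, [likes]);

  return (
    <button onClick={() => setLikes(likes + 1)}>
      Likes: {likes}
    </button>
  );
}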
Regarding navigation: when you navigate to another page, you are requesting a page that has already been built and generated, which is why it's so blazingly fast.
I answered this question a few weeks ago on the Nuxt discussions: https://github.com/nuxt/nuxt.js/discussions/9493#discussioncomment-948643
Let's say that SSG brings several things:
SEO
speed
ecology
[probably some other things]
There are several ways of doing SSG, and all of them have their pros/cons and their use cases. For the most part, and if you're using Nuxt.js, you will probably go the target: static, ssr: true route (a minimal config sketch follows the list below).
This will:
generate fully static pages at build time that you'll be able to host on Netlify, Vercel, or the like
hydrate the static content with some JS after you have fetched the static files
have the Vue behavior afterwards, as a classic SPA (hence managing the routing without further server calls)
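A minimal nuxt.config.js sketch for that route (Nuxt 2 syntax; adjust to your project):

// nuxt.config.js — fully static, universal build (Nuxt 2)
export default {
  target: 'static', // pre-render every page at build time (nuxt generate)
  ssr: true         // render on the server/at build time, then hydrate on the client
}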
This behavior is called Isomorphic or Universal; more info in the linked discussion.
Gatsby and Next.js work in somewhat similar ways. There are some minor differences, but the general idea is globally the same across those three AFAIK.
SvelteKit and Astro handle this a bit differently. It may be interesting to give them a look!
In my Ember app (actually mine is an engine within a host/parent app), I want to set the page title.
Currently I do
document.title = "Page title I want"
However, it gets overwritten by the host app, i.e. by what is set in index.html.
Where should I put the above code to set the page title? I already tried adding it in the beforeModel and didTransition hooks, but that does not work.
I'd highly recommend standardizing across your app and all engines on ember-page-title, which will allow you to add {{page-title "Blog"}} to a template. It has some additional features that are really nice when working in a larger app, but by delegating all of your title-setting needs to a single addon you get a standard API across your app and all engines for doing this task.
There are other addons that also serve this purpose, but this RFC has movement towards making ember-page-title the default for new Ember apps.
I built a small site that uses Gatsby for static content, but for some content that needs to be rendered on the client side, I'm using client-only routes in Gatsby.
I am not sure I fully understand how this works though. Say I have a Header, Footer & a font that I am using on my static site. On my client-only routes, I am using the same Header, Footer & font. Will I benefit at all from having used these elements in my static components previously? Is the font being loaded anew, for example?
Basically, I would like to know what Gatsby features my client-side content is losing out on now, and what I should maybe attend to a bit more, since Gatsby won't be handling this for me anymore. Especially in terms of page speed.
Yes, you should benefit from having used those components and fonts already.
The React components that are being re-used will already be in a JS bundle that you have shipped to the user and shouldn't need to be fetched again. Likewise with the font files - but these will be asset files - not in a JS bundle.
The best way to see what is being fetched will be to test it out in a browser.
Load a static page
Open the Network tab in dev tools
Navigate to a client-only page and check for network activity
While those assets shouldn't be fetched twice, I can imagine some instances where an incorrect setup would fetch them twice - so it's best to just double-check.
Will I benefit at all from having used these elements in my static components previously?
The answer is yes. Gatsby works with @reach/router under the hood, so you will get all of its benefits whether or not you use client-only routes.
In other words, the trickiest part of using client-only routes is the internal routing. For your site, in that scenario, Gatsby will handle the routing internally since it builds on @reach/router, so the shared components (header, footer, etc.) will only be rendered on demand and will be shared across your site, no matter whether it's a client-only route or a static page.
I would like to know what Gatsby features my client-side content is losing out on now, and what I should maybe attend to a bit more, since Gatsby won't be handling this for me anymore. Especially in terms of page speed.
Summarizing a lot: when a page loads, @reach/router looks at the path prop of each component nested under <Router />, and chooses the one that best matches window.location to render. So you will only render the needed code on demand.
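For reference, a minimal client-only route sketch (the /app base path and the component names are made up; the matching onCreatePage/matchPath configuration in gatsby-node.js is omitted):

// src/pages/app.js — everything under /app/* is rendered only on the client
import React from "react"
import { Router } from "@reach/router"
import Header from "../components/header" // shared with the static pages
import Footer from "../components/footer"

const Profile = () => <h1>Profile</h1>
const Settings = () => <h1>Settings</h1>

export default function App() {
  return (
    <>
      <Header />
      <Router basepath="/app">
        <Profile path="/profile" />
        <Settings path="/settings" />
      </Router>
      <Footer />
    </>
  )
}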
In terms of page speed, your site won't be affected because the site remains "static" and pre-built once the build is done. The only "negative" part of using client-only routes (if you want to call it that) is SEO, since those pages won't be crawled by Google; but that's usually the reason you use client-only paths in the first place: in most cases you don't want those pages indexed.
I am developing an involved web app with asp.net core.
I am developing React components, writing all of my components with ES and JSX syntax.
I run webpack to transpile all of my code (so now I have pre-transpiled files ready to be served)
When a request comes in, I just serve my pre-transpiled bundles.
I wanted to have a way of only bundling and sending user-specific components (based on a list of features they have access to) to the client.
The only way I could figure doing this is to do "on-the-fly permission-controlled component bundling combined with on-the-fly jsx compilation" to serve my components.
I gather that webpack shouldn't be used as an on-the-fly bundler like this, so that is out of the picture...
Partial scrappy solution I came up with:
Using no import or export mechanism in my JS, I use Razor to cycle through my feature list, adding the appropriate (mostly modular) components to the page in what I call "dependency-first order", and at the end of each component's code I write:
class ComponentA extends React.Component {
  // Component code here
}
window.ComponentA = ComponentA;
So all components are global and can be rendered.
This way, I am able to select what Components get sent to the client with Razor.
NOW, remember when I said "mostly modular"? Well, if I am rendering a component within another component that the user doesn't have access to, this partial solution would leave the render statement embedded in the main component, without the sub-component's code it's supposed to render actually being there. Since this is a dirty partial solution, I would just suppress the error if the component was non-existent and move on.
Bottom line is I am having a really difficult time making my React components 100% modular and being able to control the granularity of my 'component dependencies' so that no code a user shouldn't have access to ends up on the client.
Ridiculous solution someone offered me:
It is also certainly out of the question to generate a set of bundles for every user and, whenever an admin changes what a user has access to, rebuild that user's bundle with webpack (especially since I am dealing with thousands of users here).
As I am writing all of this, the more and more I feel like I am just being a perfectionist and should just go with the above paragraph.
The solution I should probably go with:
There is a school of thought that says to just send all of your JS to the browser and then selectively render components based on the user's permissions. Any security loopholes here would be handled by server-side access control locking down endpoints, in case a specific user did try to forge requests to parts of the application they don't have access to (which would be implemented regardless).
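A minimal sketch of that idea (the permission names and panel components are hypothetical):

import React from "react";

const OrdersPanel = () => <section>Orders</section>;
const ReportsPanel = () => <section>Reports</section>;

// Everything ships in the bundle; rendering is gated by the user's
// permission list, and the API endpoints enforce the same rules server-side.
export default function Dashboard({ permissions }) {
  return (
    <div>
      {permissions.includes("orders") && <OrdersPanel />}
      {permissions.includes("reports") && <ReportsPanel />}
    </div>
  );
}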
I am under the gun here and feel like I am overthinking most of this. I would be greatly appreciative of any feedback. Thank you.
It is possible to ship permission-based JS bundles to the client. You can leverage webpack's dynamic import logic to load only the required features' JS bundles.
You need to create a directory structure based on features and load them according to user permissions. Basically, what webpack does is create a separate bundle for each feature and load it via dynamic import when requested.
Solution here 👇
Note: you might not see the lazy bundles in the codesandbox.io network panel, but you can download the project and run the server locally to see the bundles being lazy-loaded.
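To illustrate the idea (this is a rough sketch, not the linked sandbox itself; the feature names, directory layout, and getUserPermissions helper are made up):

// features/index.js — each feature lives in its own directory and gets its own chunk
const featureLoaders = {
  orders: () => import(/* webpackChunkName: "orders" */ "./orders"),
  reports: () => import(/* webpackChunkName: "reports" */ "./reports")
};

// Load only the feature bundles the current user is allowed to see.
async function loadFeaturesFor(permissions) {
  const modules = await Promise.all(
    permissions
      .filter((name) => featureLoaders[name])
      .map((name) => featureLoaders[name]())
  );
  return modules.map((m) => m.default);
}

// Usage (getUserPermissions would come from your auth layer):
// getUserPermissions().then(loadFeaturesFor).then(renderFeatures);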
By default, Angular fetches the HTML templates from the server when the user navigates to a route. With that in mind, imagine this scenario:
User loads the Angular app. The main view has a subpage called "Order".
While the user is studying the main view, a new version of the app is rolled out in production. The new version has a complete rewrite of the Order page with new JavaScript and HTML.
The user navigates to the Order page. The JavaScript was already loaded by the browser in step 1, so the user is on the old version until the app is reloaded. But the new template gets fetched from the server on navigation. So now the JavaScript and template are out of sync!
Is my assumption correct that the JavaScript/HTML can get out of sync?
If so, are there any best practices related to this issue?
I guess one solution is to make Angular fetch all the templates on app initialization. But this could be a performance penalty if the app has hundreds of HTML views.
I've never run into that issue myself. One possible idea would be to reuse the pattern known as asset versioning, where, upon a new release, you rename all your assets.
For instance, instead of login.html you'd use login-xyz.html as the name of a template. xyz could be a random value or a checksum of the file. A checksum might be a slightly better option: if the new release is small (i.e. you fixed just a small bug in one file) and the user loads any page but the fixed one, they will not be bothered with a reload - all other files will have the same checksums and will keep working without interruption.
This way, when an outdated Angular app tries to fetch a template, it'd get an HTTP 404 error. In addition, you could write a simple $http interceptor which would detect a 404 response and reload the page automatically (or offer the user the option of doing so).
There are modules which are capable of renaming assets, such as gulp-rev - but I never heard of using that for Angular templates. You might implement something like that on your own, though.
Of course you might want to keep both the new and old versions of files to allow users to work without interrupting them with a refresh. Depends on what your requirements are. I assume you're trying to avoid that, though.
Sample 404 interceptor (CoffeeScript, as I have it handy now):
m.factory 'notFoundInterceptor', ($q) ->
  responseError: (response) ->
    if response?.status == 404
      # Reload so the user gets matching JS and templates (or warn them first)
      window.location.reload()
      # Return a promise that never resolves; the reload takes over
      return $q.defer().promise
    # Not a 404, so let other handlers deal with it
    $q.reject response

m.config ($httpProvider) ->
  $httpProvider.interceptors.push 'notFoundInterceptor'
Thanks for the good answers.
It turned out that this problem solved itself for us. Every time we roll out a new release, all user sessions get deleted and users are sent to the login page. This triggers a full page load, and fresh JavaScript/HTML gets loaded.
I read about this issue a long time ago, and one option is to version the changed pages and the application.js file.
For example, in version 1 of your application you can use something like this in your HTML file:
<script src="js/angular_app_v1.js"></script>
Inside your routes, also version the templateUrl:
templateUrl: 'templates/view_product_v1.html'
So when you roll out a new version, you won't be overwriting templates; users already working will keep the old version until they reload the browser, but they won't run into version inconsistencies.
Versioning the assets using file names would become unmaintainable for even a medium-sized app.
Although it is a heavyweight approach for web assets, you could look into content negotiation. This is where the call for a resource (generally a REST API) returns the version of the resource, e.g. Content-Type: application/vnd.contentful.delivery.v1+json. On the client you can check that the version matches what it expects. So if the client only knows how to handle v1.1 and the resource responds with v1.2, the UI knows it cannot process that and should reload the page.
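A rough sketch of that check as an AngularJS interceptor; the EXPECTED_VERSION constant and the versioned media type are illustrative assumptions, not an existing API:

// Assumes resources are served with a versioned media type such as
// Content-Type: application/vnd.myapp.v2+html (hypothetical).
var EXPECTED_VERSION = 'v2';

angular.module('app').factory('versionCheckInterceptor', function ($q, $window) {
  return {
    response: function (response) {
      var contentType = response.headers('Content-Type') || '';
      var match = contentType.match(/\.v(\d+)\+/);
      if (match && 'v' + match[1] !== EXPECTED_VERSION) {
        // Server is on a newer version than this client: reload to resync.
        $window.location.reload();
        return $q.defer().promise; // never resolves; the reload takes over
      }
      return response;
    }
  };
});

angular.module('app').config(function ($httpProvider) {
  $httpProvider.interceptors.push('versionCheckInterceptor');
});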
Another possibility is to load all templates up front in the UI. There are build processes in Grunt you can run such as https://github.com/ericclemmons/grunt-angular-templates that will combine all of your templates into a single file for delivery and then load them into $templateCache so no requests ever get to the server.
If you have some sort of server-side language you can build a filter in (.NET, Rails, Java or whatever), and pass along a version number with your template requests. If the version requested by the client is older than what's deployed, you'd send an error to the client. Your client would watch for that error ($http interceptor) and force a page-refresh to pull down the newer javascript code. (Maybe show an alert to the user first so they know what's going on).
You can preload all your templates into $templateCache and serve them as one templates.js file. There is a gulp task for this: https://www.npmjs.com/package/gulp-angular-templatecache. Then your application will load all templates in a single request together with the application scripts on startup, so they will be in sync. Read http://www.johnpapa.net/angular-and-gulp/ for more info.
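The generated templates.js essentially boils down to something like this (template names and markup here are illustrative; the exact output depends on the task's configuration):

// templates.js — generated at build time and shipped with the app scripts,
// so templateUrl lookups hit the cache instead of the server.
angular.module('app').run(['$templateCache', function ($templateCache) {
  $templateCache.put('templates/view_product.html',
    '<div class="product">{{ product.name }}</div>');
  $templateCache.put('templates/order.html',
    '<div class="order">{{ order.id }}</div>');
}]);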
It always makes sense to have a version number and use it when syncing resources. It's not only good practice for the use case you described, but also for other situations, such as rolling back to a specific version or having two versions live and usable (for example, in order to let some users preview the next version).