I am working on a server-side rendering project, following the Angular Universal guide. Everything is working fine, except that when I navigate to routes other than the first page, I still see the first page's source when hitting "view page source" in the browser.
I have gone through this issue, but in my case the routes are not behind authorization.
Any idea why I cannot see the page source of other routes?
That's normal behaviour. When you make the first request to the server, the page content is rendered server side, which means you'll be able to see that content if you view the page's source.
After that, when you navigate through your app's links, all content is rendered by JavaScript, with data fetched from the server using Ajax. As you are not changing pages (it's a Single Page App), the view source in the browser is never updated. If you type the URL of another route directly into the browser, you should get the corresponding content when you check the source.
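If you want to confirm what the server actually returns for a given route, you can request it outside the browser; a minimal sketch (Node 18+, using the global fetch), where the host, port and route are assumptions about your local Universal setup:

// Request a route directly and dump the raw HTML the server returns.
const url = 'http://localhost:4000/some-route'; // hypothetical route

fetch(url)
  .then((res) => res.text())
  .then((html) => {
    // If Universal rendered the route, its content is present in this string,
    // even though "view source" after in-app navigation still shows the first page.
    console.log(html.slice(0, 1000));
  })
  .catch((err) => console.error('request failed:', err));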
I solved this problem. In my project I had used the hash location strategy; removing HashLocationStrategy allows me to view the source code of other pages.
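For reference, a minimal sketch of what that change might look like, assuming the routes are registered with RouterModule.forRoot; with the hash strategy enabled every route lives under /#/..., so the server only ever renders the root document:

// Hypothetical routing setup: dropping { useHash: true } switches from
// HashLocationStrategy (URLs like /#/products) back to the default
// PathLocationStrategy (/products), so each route is a real URL that the
// Universal server can render and "view source" can show.
import { RouterModule } from '@angular/router';
import { routes } from './app.routes'; // assumed route definitions

export const appRouting = RouterModule.forRoot(routes, {
  // useHash: true  // removed: this was forcing HashLocationStrategy
});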
I have a NextJS application, and here is how the pages are laid out:
App
--pages
----_app.js
----index.js
----company.js
----users
------[userID].js
So, I have a dynamic page, [userID].js, that gets the userID through the router to show information for different users. I am using next/head to generate meta tags dynamically. When I load the pages and inspect the elements, I can see the meta tags there. But when I do "view source" for the dynamic page [userID].js, I don't see the meta tags. I can see the meta tags for all the other pages when I do "view source".
Does anyone know why that is and what I can do to fix it?
Thanks.
The Next.js router is a client-side router. If you're creating meta tags based on the user information, which depends on the Next.js router, they get added on the client side.
So you see them in the DevTools Elements panel, because the client-side JS has executed, but you won't see them in the page source, as that is just the plain server response without any JS execution.
It doesn't sound like an issue that needs to be fixed. But you could render the page on the server side using getServerSideProps if these meta tags must be present at first render. However, you would lose static optimization for this page.
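A minimal sketch of what that could look like in pages/users/[userID].js; the API endpoint and the user fields are assumptions:

// pages/users/[userID].js: with getServerSideProps the meta tags are part of
// the HTML the server returns, so they show up in "view source".
import Head from 'next/head';

export async function getServerSideProps({ params }) {
  // Hypothetical endpoint: replace with however the user data is actually fetched.
  const res = await fetch(`https://api.example.com/users/${params.userID}`);
  const user = await res.json();
  return { props: { user } };
}

export default function UserPage({ user }) {
  return (
    <>
      <Head>
        <title>{user.name}</title>
        <meta name="description" content={user.bio} />
      </Head>
      <h1>{user.name}</h1>
    </>
  );
}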
The application I'm working on right now contains lots of ng-include directives, and I hate reloading the whole application just to see an HTML update.
I've tried replaying the XHR manually from the Network panel, and it gets back the updated HTML view, but it doesn't get reflected in the DOM.
What I am searching for is a way to have all the HTML views fetched again without me hitting the reload button.
It can be a browser extension, a code snippet (which I'll turn into a browser extension for others to use), or any other sane way.
Check the "Disable cache" checkbox on the Network panel and then try replaying the XHR. I can't see why you don't want to reload the whole page, but whatever.
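A different approach worth sketching (not what the answer above suggests): AngularJS keeps templates fetched by ng-include in $templateCache, which is why replaying the XHR never reaches the DOM. Clearing the cached entry and re-attaching the include forces a fresh fetch; the module, controller and template names below are hypothetical:

// Drop the cached template and re-attach the ng-include so it re-fetches the
// updated HTML without a full page reload.
angular.module('app').controller('DevReloadCtrl', function ($scope, $templateCache, $timeout) {
  $scope.viewUrl = 'views/partial.html'; // bound as <div ng-include="viewUrl"></div>

  $scope.refreshView = function () {
    $templateCache.remove($scope.viewUrl);           // discard the cached copy
    var url = $scope.viewUrl;
    $scope.viewUrl = null;                           // detach the include...
    $timeout(function () { $scope.viewUrl = url; }); // ...then re-attach to re-fetch
  };
});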
I'm having a hard time understanding how this page is being loaded.
When I inspect the page I can see various divs, but when I click "view source" I don't see any divs or elements.
https://outlook.live.com/owa/
view-source:https://outlook.live.com/owa/
Can you please explain how they are using JS and iframes to load it?
The page is rendered on the client side. Have a look at this: https://medium.freecodecamp.org/what-exactly-is-client-side-rendering-and-hows-it-different-from-server-side-rendering-bd5c786b340d
This is called a single-page application.
Just open the developer tools in your browser (F12 for Chrome) and go to the Network tab. You should see the real data traffic between the page and the server there. The page you see is a single page that loads its data on the client side.
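As a rough illustration of why "view source" shows almost nothing: a client-rendered page ships an essentially empty document and builds the DOM with JavaScript after load. The endpoint below is made up:

// The server sends little more than <div id="root"></div> plus a script.
// Everything you see in the Elements panel is created here, after load,
// from data fetched over the network; none of it is in the page source.
document.addEventListener('DOMContentLoaded', async () => {
  const root = document.getElementById('root');
  const res = await fetch('/api/messages'); // hypothetical data endpoint
  const messages = await res.json();

  for (const msg of messages) {
    const div = document.createElement('div');
    div.textContent = msg.subject;
    root.appendChild(div); // these DOM nodes exist only in memory, not in the source
  }
});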
I am making a one-page web app for my school project. I am using the history API in a very simple way, just to change the URL so the user thinks they are on another page (but I am only hiding and displaying different elements on the page).
It looks like this:
www.mypage.com/main
www.mypage.com/slideshow
When I use the application with the back/forward history buttons it works fine, but when I reload the page, the browser tries to load that fake URL, which of course fails. How can I always stay on index.html no matter what URL is displayed to the user?
I tried to manage this with .htaccess, but I wasn't successful.
It seems you are not using a back end, which is the only way to achieve your desired result (if my assumption is correct). The browser gives an error (it cannot load /slideshow after a refresh) because it is trying to fetch that file (from your local machine), but it does not exist. There is an SO answer explaining this well.
So in your example you should instruct the back end to serve the same view for all routes (using a wildcard), and do the displaying on the front end based on the given URL.
You do not have to use React Router; instead, create a route-handling function which runs on each refresh (that is, when your JavaScript is loaded) and tells your page what to render based on the route (or URL, call it what you like).
(You will know that the JavaScript will run for every URL, because the back end already handles routing with the wildcard, *.)
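A minimal sketch of that idea, assuming an Express-style back end; the route names, element ids and file layout are placeholders:

// Back end: serve index.html for every route (the wildcard), so a refresh on
// /slideshow still loads the app. (Express is an assumption here.)
const express = require('express');
const path = require('path');
const app = express();

app.use(express.static('public')); // CSS/JS/assets are served first
app.get('*', (req, res) => res.sendFile(path.join(__dirname, 'public', 'index.html')));
app.listen(3000);

// Front end: decide what to show based on the current URL, both on a fresh
// load (refresh) and when the back/forward buttons are used.
function renderRoute() {
  const route = window.location.pathname; // e.g. '/main' or '/slideshow'
  document.getElementById('main').hidden = route !== '/main';
  document.getElementById('slideshow').hidden = route !== '/slideshow';
}

window.addEventListener('popstate', renderRoute);
document.addEventListener('DOMContentLoaded', renderRoute);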
If you want to rewrite all requests to only one URL, you can do so with just
RewriteRule ^ /index.html [L]
I have this URL:
domain/?budget=0-&pn=1
Now I have a button which opens a special view on the same page. I have done it like this:
domain/?budget=0-&pn=1#special
The problem is that I am implementing the History API, and the change in hash causes popstate to be triggered, which is not good.
What should I use instead of a hash in such a situation with the HTML5 History API?
Use a Single Page Application (SPA; see https://en.wikipedia.org/wiki/Single-page_application), where a single page acts as the host page for other content loaded on demand into the page. In this case, the use of page templates ensures that every page, at one time or another, acts as the content host as well as a content page.
Each HTML page is structured the same way: it has the required structure around the page along with a content container whose inner HTML is replaced with a fragment of HTML from a content page. The identifier in the URL represents the unique identifier of the page content that is loaded into the page.
What makes the history object's new members so valuable is that even though you can programmatically change the browser location without posting back to the server, the updated location is nothing more than a regular URL, which can be shared, copied or bookmarked. This means you need to make sure your application works just as well upon the initial request of the page as it does when JavaScript is used to fetch the content.
When you navigate from Page 1 to Page 2 using the history.pushState function, the overall structure of the page is preserved, and after the response of an Ajax call is received, only the page fragment is injected into the content container. Notice how the page title remains the same, but the URL and the content in the container reflect Page 2.
Consider, though, what happens if you refresh the page after navigating to Page 2 using pushState. The refreshed page keeps the correct URL and preserves the page content, but the page title reflects that it was served directly from Page 2.
This behavior is achieved by all pages having the same layout structure but including identifiers in the markup to specify the content fragment in the page. This is the same spirit in which a normal client/server web application would serve full HTML pages upon a standard GET request, but then use a service in conjunction with an Ajax call to update only specific parts of the page.
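To sidestep the hash entirely in the situation above, the special view can be pushed as its own history entry with a state object and a query parameter instead of #special; a rough sketch, where the view parameter and the show* functions are made up:

// Open the special view without touching location.hash. Calling pushState does
// not fire popstate (unlike a fragment navigation), and the state object lets
// the existing popstate handler tell this entry apart from the others.
function openSpecialView() {
  history.pushState({ view: 'special' }, '', '?budget=0-&pn=1&view=special');
  showSpecialView(); // assumed function that reveals the special markup
}

window.addEventListener('popstate', (event) => {
  if (event.state && event.state.view === 'special') {
    showSpecialView();
  } else {
    showDefaultView(); // assumed function that restores the normal view
  }
});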
Remember
As with most areas of new HTML5 capabilities, the functionality found in the new history object is available via a polyfill library that can fill in the gaps for older browsers or those that are yet to implement the standard.