I'm creating a PWA for my website.
It was a multi-page website built using jQuery. Now I have created an app shell which consists of a common header for the site. My site has a good SEO ranking and usually ranks in the top 3 results.
Now, when I go from page A to page B, I want the header to be preloaded and a loader to be displayed until the data for page B is received from the server.
I am still using jQuery, as most of the modules are already built with it and I don't want to rebuild them.
The only solutions I can think of to achieve this are either using an AJAX call or using routes on the frontend. I have a few queries and assumptions about these solutions and want to know if I am right.
1) Using Ajax -
When calling page B, the HTML in the response will have only the header, the loader, and a JS file which will have the AJAX code to load the data onto the page.
Once cached, the HTML containing the header and loader will be fetched from the cache, and an AJAX call will be made to fetch the data.
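Something like the following is what I imagine; the /partials/page-b.html URL and the element IDs are just placeholders:

$(function () {
  // Show the loader that ships with the cached shell page.
  $('#loader').show();

  // Fetch page B's body and swap it in once it arrives.
  $.get('/partials/page-b.html')
    .done(function (html) {
      $('#content').html(html);
    })
    .always(function () {
      $('#loader').hide();
    });
});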
Please correct me if I am wrong.
Now, the problem I see with this is that if I have a third page, C, and want the same functionality in it, then I'll be storing the same header in the cache again for that page.
2) Using frontend routes - I have heard that frontend routes are not SEO friendly, or require extra effort to make them SEO friendly, so I don't want my ranking to be affected by this. Moreover, I don't know of a good routing library for jQuery or plain JavaScript that handles everything for me.
Any help will be highly appreciated. I am quite confused at the moment, so any guidance or pointer in the right direction would be of help.
There's some guidance about possible options in this blog post.
Given your current setup, and your stated desire not to overhaul your architecture too much, I'd recommend trying to add in a service worker implementation that caches your header, footer, and dynamic content separately. The service worker could then concatenate the header (from the cache) + content body (retrieved via fetch() or potentially from the cache) + footer, and return the complete HTML document as the response.
You would need to start exposing your partial HTML header + footer as resources with unique URLs that could be precached, which you're probably not doing right now, and you'd also have to expose just the content body for a given page via a unique URL, and ensure that they're all in a format that could just be concatenated without requiring any complex templating logic.
So it is some extra work, but it does not involve rewriting anything for clients that don't support service workers, and your site would continue to look and behave the same for those clients.
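Here is a rough sketch of what that fetch handler could look like; the /partials/... URLs are just placeholders for whatever URLs you end up exposing for the header, footer, and content bodies:

// sw.js - a minimal sketch of the stitch-it-together approach
const PRECACHE = 'shell-v1';
const SHELL_PARTIALS = ['/partials/header.html', '/partials/footer.html'];

self.addEventListener('install', event => {
  // Precache the header and footer partials.
  event.waitUntil(
    caches.open(PRECACHE).then(cache => cache.addAll(SHELL_PARTIALS))
  );
});

self.addEventListener('fetch', event => {
  // Only intercept top-level page navigations.
  if (event.request.mode !== 'navigate') return;

  event.respondWith((async () => {
    const cache = await caches.open(PRECACHE);
    const header = await cache.match('/partials/header.html');
    const footer = await cache.match('/partials/footer.html');

    // The content body comes from the network here, but it could be cached too.
    const contentUrl = '/partials/content' + new URL(event.request.url).pathname;
    const content = await fetch(contentUrl);

    const html = (await header.text()) + (await content.text()) + (await footer.text());
    return new Response(html, { headers: { 'Content-Type': 'text/html' } });
  })());
});

The header and footer are stored once and reused for every page, which also addresses the concern about caching the same header repeatedly.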
Note: I am kind of new to web development so please help me through any misconceptions I might have.
I am trying to learn the MERN stack. As an example, I am trying to make a two-page site with a homepage and an about page. I made a ./public folder and added an index.html and an about.html to that folder. Then I started by learning some basic Express, where I have this line which lets the server serve static files from the ./public folder:
// set static folder
app.use(express.static(path.join(__dirname, 'public')));
After I felt good about this, I wanted to add React to my existing project. This is where I ran into some confusion. It seems like React doesn't allow multiple pages in my website; I saw this SO post. So, React is good for building single page applications. I also saw this YouTube video where the first three minutes explain the difference between traditional multiple-page applications and React SPAs. He stated that all the pages will be sent at one time and the react-router will intercept any page requests and re-render the browser with the pages it already received on the first page load.
So, finally we get to my question. Going back to my example, suppose my about page is VERY large, with a lot of text and images. Isn't that potentially wasteful if I load the homepage, but the server has to send the entire website, including the large about page, even though I might never even click on the about page? Once it is all loaded, I understand that the client has a smooth experience going from one page to the next without having to contact the server. But doesn't that mean the client experiences long initial wait times? And isn't this wasting server resources by sending pages the user might never click on? And the problem only seems to get worse the more unique pages the website has.
"He stated that all the pages will be sent at one time and the react-router will intercept any page requests and re-render the browser with the pages it already received on the first page load."
This is an oversimplification that is at the root of your misunderstanding.
Your react app is going to be a small amount of HTML that acts as the document root, and then a fair amount of CSS and JS. This JS React app will execute to generate your pages based on how you have configured your views and wired them to any data model. Generally with a SPA like this, you will load one view at a time, and it will, if applicable, make a request to the server for any data it needs to render, which will generally be returned as JSON. Once the JSON is received, it will get parsed in the browser into the data model and the UI will update to reflect the new state of the data model. Critically, each view (if wired correctly) will only fetch data from the server when the view loads; furthermore, any images or other assets in the UI will only be loaded when the UI renders them, so there will be no prefetching that wastes resources.
Because the react components can basically act as templates, you can actually save bandwidth in this way. Say you had 100 product pages on your site that were identical except for the product information. If you were to serve a new HTML document for each page there would be duplicated bandwidth in the markup sent each time. However, in React you can define a single <ProductPage /> component that will fetch only the product information and load it into the markup template each time, removing the issue of sending duplicate HTML.
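As a rough illustration of that <ProductPage /> idea (the /api/products/:id endpoint and the response fields here are made up for the example):

import React, { useEffect, useState } from 'react';

// Hypothetical product page: the markup is a template, only the JSON differs per product.
function ProductPage({ productId }) {
  const [product, setProduct] = useState(null);

  useEffect(() => {
    // Fetch just this product's data when the view mounts.
    fetch(`/api/products/${productId}`)
      .then(res => res.json())
      .then(setProduct);
  }, [productId]);

  if (!product) return <p>Loading…</p>;

  return (
    <article>
      <h1>{product.name}</h1>
      <img src={product.imageUrl} alt={product.name} />
      <p>{product.description}</p>
    </article>
  );
}

export default ProductPage;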
There are also additional ways to split up your react app using tricks like lazy loading, to only fetch the salient JS when it is about to be used.
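For example, here is a minimal sketch of route-based lazy loading, assuming react-router-dom v6 and that Home and About live in their own files:

import React, { Suspense, lazy } from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

// Each page's bundle is only downloaded when its route is first visited.
const Home = lazy(() => import('./Home'));
const About = lazy(() => import('./About'));

export default function App() {
  return (
    <BrowserRouter>
      <Suspense fallback={<p>Loading…</p>}>
        <Routes>
          <Route path="/" element={<Home />} />
          <Route path="/about" element={<About />} />
        </Routes>
      </Suspense>
    </BrowserRouter>
  );
}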
So, no, using a React app does not mean that it fetches all the HTML pages and assets for the site all at once. While one could write a React app in a wasteful manner, most properly structured React apps should be smaller than the rendered totality of the site.
So, what the title says. Can I check a website's server/database (or whatever it is called, sorry, I'm new to coding) for changes made to the website, so I can trigger a reload and then run my code?
In your site's JavaScript you can make small requests back to your web server without reloading the whole page, then take some action; e.g. change the content on part of your page, or maybe navigate somewhere else. You can choose a specific URL to request data from. The common way to structure things is using REST and JSON.
The regular part of your site would live at www.example.com/myaccount, etc., and your REST API would live at www.example.com/api/account/posts?dateafter=blah. You can use client-side JS libraries like jQuery to send requests to your REST API, deserialize the resulting JSON into JS objects, then take action.
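For instance, a minimal polling sketch with jQuery; the endpoint, the lastSeen bookkeeping, and the 30-second interval are all assumptions for illustration:

// Ask the (hypothetical) REST API whether anything changed since we last looked.
function checkForChanges() {
  $.getJSON('/api/account/posts', { dateafter: localStorage.getItem('lastSeen') || '' })
    .done(function (posts) {
      if (posts.length > 0) {
        localStorage.setItem('lastSeen', new Date().toISOString());
        // Something changed: reload the page (or update just part of it).
        location.reload();
      }
    });
}

// Poll every 30 seconds.
setInterval(checkForChanges, 30000);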
On the server side, various frameworks will help you build APIs, with appropriate routing. It depends what kind of hosting, language expertise, etc. that you have.
I want to load HTML content dynamically, such as updating a Bootstrap modal dialog's contents via an AJAX call (because refreshing the page and showing the modal again is weird), but before I do that, I want to know what risks I need to be concerned about when doing so, and possible solutions.
The main reason to do this is that I'm developing a portlet for Liferay, and I wish to update the content of my portlet dynamically without refreshing the whole page.
Of course I could return data in JSON from my server to the client, but then I would have to write complex client-side logic to update the DOM, logic which is probably done more easily in, say, JSP.
Assume the webapp is HTTPS only, not sure if this will help with anything.
Based on the assumption that the webapp is HTTPS only, it would be very good to let all AJAX calls also use HTTPS. This avoids mixing insecure and secure connections, and the warning dialogs which browsers give for mixed content.
The main risk is cross-site scripting (XSS), if parts of the HTML are generated elsewhere or if parts of it are based on unvalidated user input.
The solution for that is to always validate and sanitize the input from other sources. More information about this can be found here: https://www.owasp.org/index.php/Data_Validation
Perhaps you have chosen the wrong framework for the job and should consider something like client-side rendering, where you bind JSON data to the view (e.g. Angular, Ember, Backbone or Knockout).
Consider using Element.innerText or $(elm).text() instead of Element.innerHTML or $(elm).html() when possible (see the sketch below).
It may also be a good idea to encode everything that a user can change, either before you save it to the database or when you render the view.
However, if you do allow some HTML, you will need a sanitizer plugin with a whitelist approach to restrict the allowed tags and attributes.
The only difference between HTTP and HTTPS is that HTTPS makes it much harder for a man-in-the-middle attack to read, intercept or change the content of the request and the response.
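To illustrate the .text() vs .html() point above (the #comment element and the sample input are made up):

// Untrusted input that contains markup with a script payload.
var userInput = '<img src=x onerror="alert(\'xss\')">';

// Safe: treated as plain text and rendered literally.
$('#comment').text(userInput);

// Risky: parsed as HTML, so the onerror handler would execute.
// $('#comment').html(userInput);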
I'm currently working on a social app using Angular. I'd be keen to have public, search engine indexable pages to accompany the app, such as an indexable homepage, about page and contact page.
What would be the best way to go about this? I'm undecided what my backend infrastructure is going to be, but it is going to be one of the following:
nginx/apache server to vend all content with an external pub/sub service for realtime.
Separate service for both front end and backend - nginx server vending front end content. Separate node server for backend socket stuff.
Any advice on this would be great. I'm keen to figure out whether Angular handles all of the routing, or whether I handle the static routes separately. This is my first time playing with Angular.
Cheers.
I might be wrong but I think your problem is not an AngularJS one, it's more fundamental than that.
Your issue is one where you are loading HTML content via AJAX. And how does Google etc. crawl AJAX-loaded content if it can't execute JavaScript?
This might help:
https://developers.google.com/webmasters/ajax-crawling/
This might help too, but it's geared towards .NET: http://gregorypratt.github.com/Ajax-Crawling/
If you provide a solution where you let AngularJS do the routing, but you are still able to serve static content from the server when ?_escaped_fragment_= is present in the URL, then you're good to go. You get single-page app benefits whilst still being crawlable.
The following is an example site using AngularJS routing and static content being served for Google et al.
http://artelier.co.uk/#!/about
http://artelier.co.uk/?_escaped_fragment_=/about (turn off JS to see it work)
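On the AngularJS side, the hashbang routing that scheme relies on looks roughly like this (the module name and template paths are just placeholders, and ngRoute must be included on the page):

angular.module('app', ['ngRoute'])
  .config(function ($routeProvider, $locationProvider) {
    // Use #! URLs so crawlers can map them to ?_escaped_fragment_= requests.
    $locationProvider.hashPrefix('!');

    $routeProvider
      .when('/about', { templateUrl: 'partials/about.html' })
      .otherwise({ redirectTo: '/' });
  });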
I'm going to be creating a widget style tool that will work as follows:
It can be included in any page by putting a script/button combo on that page.
The button, when clicked, will load a form from my site, which can be filled out and submitted.
Depending on the submitted information, there may be another form displayed - for example, a second page with a captcha, or a confirmation page with some action required, or maybe just have the form close, or some other action I add later.
If I were to open a popup window, this would be easy: just build it as a regular page. However, including this in the calling page presents a few issues.
If I actually load the form content into a local div on the calling page, it becomes part of the page content. I know I can manipulate that content to some extent using the script (it will load a script from my site for that particular version of the widget), repost forms and load them into the same div, hide the div when I'm done with it, etc. But it seems like it would complicate things.
My other option is to make it all happen in an iframe, so it's not really part of the calling page. This way I sort of feel like I'm losing some functionality I may need later. This is probably the way I will go anyway.
However, before I start down that road, I thought I would ask if anyone has any tips on the best way to include a complex series of pages in another page?
Whatever I do, I want to be sure I don't close doors on my ability to change the way the widget functions, I will definitely be adding functionality later - so keeping my options open for change is important.
Thanks!
Edit
Regarding JotaBe's question, on the server side, I will be using PHP.
However, I'm pretty sure that doesn't matter, as the PHP processing will (as always) happen on the server. All the client (the requesting page) will know is that it requested, and then received, JS/HTML/CSS/JSON/etc. assets via HTTP. It won't know, or care, how the server generated those resources.
If it helps you to picture it though, this is the basic idea of how the client and server will (if I don't think of a better way) work together:
The client page will (at load) request a .js file from my site. This file will be generated dynamically based on the requested URL. For example, account #74 would request http://mysite.com/widget/js/74.js and receive JavaScript assets customized for that account.
When the "widget button" is clicked, it will use the JavaScript to request an HTML form from my site, and include it in the page, one way or another (which way is best is the whole question). The HTML form will be generated dynamically and customized for account # 74 based on the request (ie: http://mysite.com/widget/form/74.php).
Once filled by the user, the form will be submitted to my site. The post will be processed and stuff will happen, again based on the requesting account (ie: action=http://mysite.com/widget/post/74.php). At this point, I may need to reject the form and request a re-post for failed validation, captcha verification, etc., or I may need to serve a confirmation page, or simply close the form, or some other action.
However you do this (language / platform / etc.), the best practice is to use MVC, or HMVC, to keep the various logical components of the application separate. You can create popups by having some view templates which don't render their containing tags, designed for JavaScript / jQuery .load() statements. You could then design a series of components designed to be loaded with arguments passed over the URI.
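As a rough sketch of the .load() approach, reusing the example URLs from your question; the element IDs are placeholders, and cross-origin requests from the host page would need CORS enabled on your server:

$('#widget-button').on('click', function () {
  // Pull in the form partial (HTML without <html>/<body> wrappers) and inject it.
  $('#widget-container').load('http://mysite.com/widget/form/74.php', function () {
    // Intercept the submit so the response replaces the widget content
    // instead of navigating the host page away.
    $('#widget-container form').on('submit', function (e) {
      e.preventDefault();
      $.post($(this).attr('action'), $(this).serialize(), function (html) {
        // The server decides what comes back: a re-posted form, a captcha,
        // a confirmation, or an instruction to close the widget.
        $('#widget-container').html(html);
      });
    });
  });
});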
Here is an article that might help explain one possible design pattern for what you are going for:
http://www.phpied.com/ajax-mvc/