I am currently building a project: a survey creator. I created a REST API on the backend with pure Node.js, and am also working on the frontend.
Obviously, pages need to be rendered differently depending on whether a user is logged in, the current survey, etc. Normally, I use Express.js and integrate a template engine like Pug. However, this project was designed to be as dependency-less as possible, so no template engines. Instead, I simply send "static" HTML files to the client when the user sends a request. Then, on the frontend, I use template strings to "fill in" the HTML like this:
document.querySelector('.cta').insertAdjacentHTML('beforeend', `<div class="login" style="${isLoggedIn ? 'display: none;' : ''}"></div>`); // etc.
This made me wonder if I am really building a dynamic website. Technically, I am "dynamically" generating the HTML?
But there seem to be conflicting messages from the Wikipedia definition and a Udemy course, both of which seem to say that dynamic websites are generated on the server side, like this:
When the user sends a request:
Backend builds template --> compiled to html --> file served to user
The way I do it looks like this:
Html file served --> JavaScript generates html
The terminology is very important here - is my website dynamic or static?
TL;DR: it is a hybrid page. If you do not care about SEO, it may be pointless to worry about such things; just do it in whatever way is convenient for you.
So, your way of thinking is valid: if you provide clients with a page whose contents never change on the client's side, it is a static page. You may show/hide existing pre-rendered elements (for example, by toggling a style attribute of display: none;). Any manipulation of the shadow DOM, or attaching HTML elements at runtime, promotes the page from "static" to a dynamic or hybrid page.
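A minimal sketch of the distinction (the selectors and variables are illustrative, not from your code):

// Static-leaning: toggle an element that was already in the served HTML.
document.querySelector('.login').style.display = isLoggedIn ? 'none' : '';

// Dynamic/hybrid: create markup at runtime that was never in the served file.
document.querySelector('.cta').insertAdjacentHTML(
  'beforeend',
  '<div class="greeting">Welcome back, ' + userName + '!</div>'
);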
Next, if you navigate to a new page on your website and see the browser fetch a new .html file to display, that is a mark of a static page. But if the contents of the fetched page are then changed by your website's script on the client side, the page cannot be called "static" anymore; it is more like a hybrid or dynamic page. Re-rendering the same page is the domain of single-page applications, where all pages are purely dynamic.
The main reason we care whether a page is static, dynamic, or hybrid is SEO. Web crawlers analyze your page contents to detect what your page is about, so they can show it later in Google, Bing, etc. The crawlers may (and mostly will) analyze your dynamic content in ways you do not expect, so some of your target audience risk never seeing your page. Thus, if you need the crawlers to recognize your page as an "internet toy shop", you should fetch all the promotional and descriptive content from the server and never change it afterwards. If you are making something like a user's personal dashboard, you can skip worrying about such things and just generate the content on the client side.
As long as it works without JS, I would say it is a static website with progressive enhancement: the pages are not generated by the server on each request, and JS (if enabled) is used to provide additional, non-essential features or functionality.
A static website contains Web pages with fixed content. Each page is coded in HTML and displays the same information to every visitor. Static sites are the most basic type of website and are the easiest to create. Unlike dynamic websites, they do not require any Web programming or database design. A static site can be built by simply creating a few HTML pages and publishing them to a Web server.
Source
So in my opinion, plain HTML without any frontend or backend generation is a static website; what you are describing is a dynamic website.
A static site is something very basic; think of it like a generated PDF: you cannot change it at runtime, and to modify it you would need to open the file, edit it, save it, and then publish it to end users.
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>My static website</title>
  </head>
  <body>
    <p>Static means no process</p>
  </body>
</html>
Related
I am currently working on a custom URL shortener and am trying to figure out how to "inject" my own social preview metadata dynamically for each page (e.g. for Twitter Cards). I had originally planned on doing this in much the same way as I am with the actual redirect, fetching the data using the JavaScript fetch API. After reading a little more, though, it does not appear that this approach will work, since the Twitter (and other social media) web crawlers do not seem to run JS when looking for the metadata.
Is this correct?
If so, is there a way I can load the metadata from a dynamic source instead of having to create a new HTML file for every redirect?
It looks like I can probably do something, at least for the image, based on a test of this link (using https://source.unsplash.com/random for the image) through the Twitter Card validator. But what would be the best approach to doing something similar? Everything I can think of would use JS.
I have similar pages in production.
You'll need to use a server-side language (like PHP or Node.js) to set the meta tags for your Twitter Cards, and use JavaScript to redirect the page.
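A minimal sketch of that idea in plain Node.js; the lookup table, short codes, and URLs are all hypothetical. Crawlers read the server-rendered meta tags without running JS, while human visitors are redirected by the inline script:

// Hypothetical pure-Node.js handler for a URL shortener with Twitter Cards.
const http = require('http');

const links = {
  abc123: {
    url: 'https://example.com/some-page',
    title: 'Example page',
    image: 'https://example.com/preview.png',
  },
};

http.createServer((req, res) => {
  const link = links[req.url.slice(1)]; // short code taken from the path
  if (!link) {
    res.writeHead(404);
    return res.end('Not found');
  }
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(`<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="twitter:card" content="summary_large_image">
  <meta name="twitter:title" content="${link.title}">
  <meta name="twitter:image" content="${link.image}">
  <title>${link.title}</title>
</head>
<body>
  <script>window.location.replace(${JSON.stringify(link.url)});</script>
</body>
</html>`);
}).listen(3000);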
Between the following setups, which one would deliver the fastest page load for a front-end user? I am only interested in speed for front-end users, not the maintenance burden for back-end developers.
A website that only uses static .html files, no JavaScript, no PHP, no server side programming language to render the html. Basically the origins of the internet, where each click on an internal link loads a static .html file. Each page is a pre-created physical .html file on the server.
A website with a physical pre-created .html file, however the main content (article) on each page is fetched via Javascript from a noSQL server (Google Cloud Firestore or Fauna DB). Each click on an internal link only replaces the main content of the page via database call. The rest of the website (menu, logo, sidebar, footer) is all static and never needs to reload.
A website with a physical pre-created .html file, but the main content (article) on each page is fetched via JavaScript from a local JSON file: no database, just a regular .json file in the same directory as the .html file on the same server. Each click on an internal link only replaces the main content of the page using JavaScript (probably vanilla JavaScript using fetch, unless React is somehow faster, which I doubt). The rest of the website (menu, logo, sidebar, footer) is all static and never needs to reload.
Of course, server performance and user location always play a role in speed tests, but for argument's sake let's assume it's the same user visiting the same web server. Additionally, in regards to NoSQL, let's say it's a fast, reliable third-party service such as Google Cloud Firestore.
Which one of these setups would be the fastest? Has anyone tested this? I heard some people argue that basic static .html files are always fastest, while others argue that a static html file where the content is loaded via JavaScript is faster when navigating internal links once the initial page load is done. Both arguments make sense.
Any major pros or cons for one of the mentioned setups, or past benchmarks?
The speed of the webpage has two big components:
A. How fast the server responds/the size of the response
B. How fast the browser can render whatever it fetched
So, static files without JS will be the fastest: there is no delay on the server side, and the browser is very efficient at rendering static assets.
The third option is still fast, but slightly slower than the first, as there is some extra work for the browser (transforming the JSON to HTML via JS).
The second option will be the slowest, as it is the only one where the server does not respond instantly with a file, but needs to connect to a DB, fetch the results, transform them, and only then send them back.
All of this holds only if we are talking about exactly the same content, just in different forms.
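For the third setup, the JSON-to-HTML step might look roughly like this (the file name, element ID, and data shape are illustrative):

// Fetch a local JSON file and swap only the main content area.
async function loadArticle(slug) {
  const res = await fetch('articles.json'); // static file, same directory
  const articles = await res.json();
  const article = articles[slug];
  // Replace only the main content; header, menu, and footer stay put.
  document.querySelector('#main').innerHTML =
    '<h1>' + article.title + '</h1><p>' + article.body + '</p>';
}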
The question is slightly flawed, but to answer it:
Static content is fastest: the browser renders the content and caches it.
Getting content from a database adds overhead to the call and retrieval. The main page will be downloaded once and cached on the client side, but the calls for content cannot be cached, because the browser needs to make the call to see what the content is. The upside is that each call returns only the content that needs to be displayed, and DB lookups are pretty quick from the big cloud service providers.
Option 3 is probably slower than option 2, because the whole JSON file must be downloaded for the JavaScript to pick out one article's content from everything else.
I would suggest option 2 is best from a maintainability-vs-speed point of view, as it sends only the required data across the network and the rest is cached.
If you like option 3, have a look at the browser Cache API (https://web.dev/cache-api-quick-guide/) to cache your JSON file; this way, the user will only need to download a new version when you change the content.
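A minimal sketch of that suggestion (the cache name and file path are assumptions):

// Serve articles.json from the Cache API, falling back to the network
// and storing the response for next time.
async function getArticles() {
  const cache = await caches.open('content-v1');
  const cached = await cache.match('articles.json');
  if (cached) return cached.json(); // serve from the cache when possible
  const res = await fetch('articles.json');
  await cache.put('articles.json', res.clone()); // store a copy for next time
  return res.json();
}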
I am relatively new to the world of HTML snapshots and JavaScript, so I apologize if this is a basic question.
The app we make at our company uses JavaScript to dynamically load image and text content onto a webpage. As you know, JS-rendered content doesn't get indexed by search engines. However, I have learned of an option called HTML snapshots, where you can feed Google and other search engines the fully rendered HTML of the page, and they will consume it as long as you follow their guidelines.
My question: since my script is a third-party script that can be embedded on any number of pages, can I still somehow leverage HTML snapshots, or will my clients need to do that?
Although I have not worked with this technology yet, I believe it depends on your application and on who creates the data (your client's server or your library).
If most of the content is generated on the server side, the server should create the snapshot.
If most of the content is generated or manipulated on the client, the client could create an HTML snapshot, for example using HtmlUnit.
More info on this page:
https://developers.google.com/webmasters/ajax-crawling/docs/html-snapshot
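For illustration, a rough sketch of producing a client-side snapshot with a headless browser; Puppeteer is used here as a Node.js stand-in for the HtmlUnit approach the answer mentions, and the URL is made up:

// Render the page with a headless browser, then save the resulting HTML.
const fs = require('fs');
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Wait until the page's own JavaScript has finished rendering.
  await page.goto('https://example.com/app', { waitUntil: 'networkidle0' });
  fs.writeFileSync('snapshot.html', await page.content()); // rendered HTML
  await browser.close();
})();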
I am quite new to web application development, and I need to know how I would make other sites use it.
My webapp basically gets a username and returns some data from my DB. This should be visible from other websites.
My options are:
iframe. The website owners embed an iframe and pass the user ID in the query string. I render a webpage with the data, and it is shown inside the iframe.
pros: easy to do, working already.
cons: the websites won't know what data was returned, and they may want to know it.
JavaScript & div. They paste a div and some JavaScript code into their websites, and the div's content is updated with the data retrieved by the small script.
pros: the website would be able to get the data.
cons: I could mess up their website, and I don't know how I would run the JavaScript code apart from triggering it on document ready; also, I wouldn't like to add jQuery libraries to their sites.
There must be better ways to integrate web applications than what I'm thinking. Could someone give me some advice?
Thanks
Iframes cannot communicate with pages that are on a different domain. If you want to inject content into someone else's page and still be able to interact with that page, you need to include (or append) a JavaScript tag (one that points to your code) in the hosting page, then use JavaScript to write your content into the hosting page.
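A rough sketch of that script-tag approach (the endpoint, attribute name, and response field are all assumptions, and the fetch call presumes the API sends CORS headers; older pages would use XHR or JSONP instead):

// widget.js — hosted on your domain; clients embed it with:
// <script src="https://yourapp.example/widget.js" data-userid="42"></script>
(function () {
  const script = document.currentScript;
  const container = document.createElement('div');
  script.parentNode.insertBefore(container, script); // render next to the tag
  fetch('https://yourapp.example/api/users/' +
        encodeURIComponent(script.dataset.userid))
    .then((res) => res.json())
    .then((data) => { container.textContent = data.displayName; });
})();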
Context Framework contains embedded-mode support, where page components can be injected into other pages via JavaScript. It does depend on jQuery, but jQuery can always be used in noConflict mode. In the current release, the embedded pages must be on the same domain so that the same-origin policy is not violated.
In the next release, embedded mode can be extended to use JSONP, which enables embedding pages anywhere.
If what you really want is to expose the data, but not the visual content, then I'd consider exposing your data via JSONP. There are caveats to this approach, but it could work for you. There was an answer here a couple of days ago about using a web service, but that won't work directly from the client because of the browser's same-origin policy. It's a shame that the poster of that answer deleted it rather than leave it here, as he inadvertently highlighted some of the misconceptions about how browsers access remote content.
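In case it helps, a bare-bones illustration of JSONP (the endpoint, element ID, and callback name are hypothetical):

// Client side: inject a script tag; the server wraps its JSON in a call
// to the named callback, sidestepping the same-origin policy.
function handleUserData(data) {
  document.getElementById('user-widget').textContent = data.name;
}
var s = document.createElement('script');
s.src = 'https://yourapp.example/api/user?id=42&callback=handleUserData';
document.head.appendChild(s);

// The server's response body would be plain JavaScript, e.g.:
// handleUserData({"name": "Alice"});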
I'm working on a big website at the moment, and the designers are talking about making a Facebook-like content area. By this they mean they want to keep the header loaded at all times and only reload the content area when a link is clicked. Still, we want the URL to change so the site keeps working when a page is accessed directly.
I'm not sure how to explain this any further: check out Facebook and take a close look at the header whenever you navigate to another page.
Thanks.
I'm not sure if you're even asking a question, but here's my response.
Facebook, like most other major websites, uses frameworks (custom-built or not) to separate a template into components, separate code logic from design, and more.
The reason the URL and the header do not change is that one of the designated areas of the body acts as a container. When links are clicked, the data is retrieved via remote procedure calls to their Facebook API. The returned content is then loaded into that container via JavaScript.
Keywords: AJAX, RPC, REST API, JavaScript, MVC, framework.
All of those things are important to that style of web development.
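A condensed sketch of that pattern using today's History API (the selectors, data attribute, and partial-response header are assumptions):

// Intercept internal links: fetch only the content fragment, swap it into
// the container, and update the URL so direct links still work.
document.addEventListener('click', async (e) => {
  const link = e.target.closest('a[data-internal]');
  if (!link) return;
  e.preventDefault();
  const res = await fetch(link.href, { headers: { 'X-Partial': '1' } });
  document.querySelector('#content').innerHTML = await res.text();
  history.pushState({}, '', link.href); // keep the address bar in sync
});

// Re-render the container when the user navigates back/forward.
window.addEventListener('popstate', async () => {
  const res = await fetch(location.href, { headers: { 'X-Partial': '1' } });
  document.querySelector('#content').innerHTML = await res.text();
});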