embedded vs linked JS / CSS - javascript

I am familiar with the benefits of linked CSS vs embedded and inline for maintainability and modularity. I have, however, read that in certain mobile applications of web dev, it can be beneficial (faster performance) to embed or even inline your CSS.
I would avoid any inline JS, and to a lesser extent CSS, but I have noticed that on many sites, including plenty of Google pages, JS is embedded right in the header of the page.
A colleague of mine insists on always linking to external JS files. I think it makes more sense to embed JS if the function is specific to one page or varies slightly per page, to save the processing overhead of a linked script.

The one thing the other answers didn't touch on is developer efficiency. If it's easier to put it inline, and there's no immediate performance requirement/concern, then do that. There is real business value to "easy", and it trumps eventual or non-existent performance concerns. Don't prematurely optimize.

The advantage of linked JS files is that they can be cached by the browser and loaded from local disk or memory on subsequent pages.
The advantage of inline JS is that you might have fewer network requests per page.
The best compromise is usually a small number of linked JS files (one or two) that consist of a minified combination of all your JS, so it is combined into as few files as possible and made as small as possible.
The benefits of local caching far exceed the cost of parsing a little extra JS that might not be used on some pages.
Embedded JS that does make sense (even when most of your JS is in linked files) is the setting of a few JS variables that contain state specific to your page. That would typically be embedded into the <head> section of the page as it's generated dynamically at your server, different for every page and usually not cacheable. But this data should typically be small and page-specific.
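A minimal sketch of that pattern (the App.init call and the variable names are illustrative assumptions, not anything this answer prescribes):

    <!-- shared, cacheable logic in a linked file -->
    <script src="/js/app.min.js"></script>

    <!-- small, server-generated, page-specific state: different per page, not cacheable -->
    <script>
      var pageState = {
        articleId: 1234,           // rendered by the server for this particular page
        locale: "en-US",
        csrfToken: "PLACEHOLDER"   // per-user/session value
      };
      App.init(pageState);         // App is assumed to be defined in app.min.js
    </script>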

Linking a script incurs a small penalty in the form of an extra request to the server. If you keep it inline this request is not made and depending on the situation you may get a faster loading page. It makes sense to inline your code if:
it is very small
it is dynamically generated, since then you won't get the benefits of caching anyway
In the case of Google and Facebook you're most likely seeing inline JavaScript because it's being generated by server-side code.

Other answers have already mentioned the advantages of caching with external JS files. I would almost always go that way for any library or framework type functionality that is likely to be used by at least two pages. Avoid duplication when you can.
I thought I would comment on "inline" vs "embedded".
To me, "inline" means mixing the JS in with the HTML, perhaps with several separate <script> blocks that may or may not refer to each other, almost certainly with a number of event handlers set directly with HTML element properties like <div onclick="..."/>. I would discourage this in most circumstances, but I wouldn't get too hung up about it for occasional uses. Sometimes it's simply less hassle and pretending otherwise just wastes time that you could spend on more important issues.
I would define "embedded" as having (preferably) a single <script> block in the head or at the end of the body, with event handlers assigned within that block using document ready or onload function(s). I see nothing wrong with this for functions specific to one page, in fact I tend to prefer this over an external file for that purpose if it's only a small amount of script and I don't care about caching it client-side. Also if the page is generated dynamically and something in the JavaScript needs to be generated on the server it is generally much easier to do it if the script is on the same page.
One last note on external files: during development watch out for IE's tendency to "over cache". Sometimes while testing I've made some small changes to an external file or two and pulled my hair out wondering why it didn't work only to eventually realise that IE was still using an old cached version. (On the one hand of course this is my fault, but on the other I know a lot of people who have fallen victim to this from time to time.)
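One common workaround for that (a sketch, not something this answer mandates) is to append a version or timestamp query string so the browser treats the changed file as new:

    <!-- bump the version whenever scripts.js changes -->
    <script src="/js/scripts.js?v=20240115"></script>

    <!-- or, during development only, force a fresh copy on every load -->
    <script>
      document.write('<script src="/js/scripts.js?ts=' + Date.now() + '"><\/script>');
    </script>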

All the answers above are very good. But I would like to add my own, based on 15 years of web dev experience.
ALWAYS use linked resources for both CSS and ECMAScript rather than inline code, as linked content is cached by the browser in most cases and reused across potentially thousands of pages over hours and days of a user's interaction with a given domain. The advantages are as follows:
The bandwidth savings from linked scripts are HUGE, as you simply deliver less script over the wire across the whole user experience, which might draw on the cache for weeks.
There's also a better CSS cascade, as embedded and inline styles override linked styles by weight, causing confusion for designers.
You avoid duplicate scripts, which happens a lot with inline scripts.
You reuse the same libraries over many pages, with cache control on the client now possible using versioning and date-based query strings (see the sketch after this list).
Linked resources tell the browser to preload all resources BEFORE initializing scripts or styles in the DOM. I've seen issues related to this where users pressed buttons before scripts had pre-processed date-times in timesheet apps, causing major problems.
You have more control over the script and CSS libraries used by all developers in a project, avoiding polluting your app with hundreds of poorly vetted custom scripts in pages.
It's very easy to update libraries for your users, as well as to version linked libraries.
External script libraries from Google or others are now accessible. You can even reuse your own linked libraries and CSS via linked resources.
Best of all, there are processing speed gains from cached client-side resources. Cached resources outperform on-demand resources every time!
Linked scripts also enforce style and layout consistency instead of custom layout shifts per page. If you use HTML layouts consistently, you can simulate flash-free page views, because cached CSS is used by the DOM across web pages to render them faster.
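A sketch of the versioning idea mentioned above (the file names, versions, and date-based query string are placeholder assumptions):

    <!-- long-lived, cacheable libraries; the query string busts the cache when you upgrade -->
    <link rel="stylesheet" href="/css/site.css?v=3.2.0">
    <script src="/js/lib/jquery.min.js?v=3.6.0"></script>
    <script src="/js/site.js?v=2024-01-15"></script>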
Once you pull in linked resources on the first domain request/response, the user's experience is fast, and server-side page delivery means the DOM and HTML layouts will not shift or refresh despite numerous page views or links to pages across the site. We often then added limited custom page-level embedded style and script resources on top of the cached linked stack of libraries, if needed for a narrow range of customizations. Global variables and custom CSS can then override linked values. This allows you to maintain websites much more easily, without page-by-page guesswork as to which libraries are missing or already used. I've added custom linked jQuery or other libraries in sub-sections to gain more speed this way, which means you can use linked scripts to manage custom subgroups of website sections as well.
GOOGLE ANGULAR
What you are seeing in Google's web presence is often an implementation of Angular's very complex ES5/ES6 modular script caching system, which relies on fully JavaScripted DOM manipulation of single-page applications, with the scripts and embedded CSS in page views now managed exclusively in Angular (2+). They use elaborate modules to load components on demand and lazy-load modules, with HTML/CSS templates preloaded into memory and pulled from the server behind the scenes to speed delivery of news and other pages they manage.
The FLAW in all that is that they demand browsers stream HUGE megabytes of ECMAScript, with HTML and CSS embedded into these web pages behind the scenes, as the user interacts with these cached pages and modules. The problem is they have HUGE stacks of the same CSS and scripts that get injected into multiple modules and then into parts of the DOM, which is sloppy and wasteful. They argue there is no need now for server-side delivery or caching when they can easily manage all that via inline style and script content downloaded via hidden XMLHttpRequest web API calls to and from the server. Why download all that and constantly rebuild and store inline pages in memory when a much smaller file linked from the page would suffice?
Honestly, this is the sloppiest approach to cache management of styles, content, and CSS I have seen yet in web dev frameworks, as it still demands huge megabytes of scripts just to parse a simple news page with a few lines of text. Someone at Google didn't really think that framework through. It's wasteful of bandwidth and processing in the browser, if you ask me, and overkill. It's typical of over-engineering at these bloated vendors.
That is why I always argue for linked CSS and scripts. Less code and more content is why these frameworks were invented. Linked and cached code means the SIMPLER, OLDER models have worked better: fast delivery of smaller markup pages that cache tiny kilobytes of linked ECMAScript and CSS libraries. Less code is used to display content. The browser's relationship with the server is so fast and efficient today, compared to years ago, that the initial caching of these smaller linked files directly from the server (rather than giant inline pages of duplicate scripts yanked down via web API in Angular on every page view) means linked resources are delivered much faster over the initial visit to a typical web domain.
It's only recently that the 'script kiddies' have forgotten all this and started going backwards to a failed way of using local embedded and inline styles and scripts, which we stopped using 20 years ago for a reason. It is a very poor choice and shows inexperience with the web and its markup and content model on the part of many new developers today.
Stokely

Related

Is it faster to load if all webpage resources are compiled into a single HTML file?

What if I had a compilation step for my website that turned all external scripts and styles into a single HTML file with embedded <script> and <style> tags? Would this improve page load times due to not having to send extra GETs for the external files? If so, why isn't this done more often?
Impossible to say in general, because it is very situational.
If you're pulling resources from many different servers, these requests can slow your page loading down (especially with some bad DNS on the visiting side).
Requesting many different files may also slow down page load even if they're from the same origin/server.
Keep in mind not everyone has gigabit internet (or even megabit-level speeds). So putting everything directly into your HTML file (inlining or using data URIs) will definitely reduce network overhead at first (fewer requests, fewer headers, etc.).
In addition (and making the previous point even worse) this will also break many other features often used to reduce page loading times. For example, resources can't be cached - neither locally nor on some proxy - and are always transferred. This might be costly for both the visitor as well as the hosting party.
So often the best way to approach this is going the middle ground, if loading times are an issue to you:
If you're using third party scripts, e.g. jQuery, grab these from a public hosted CDN that's used by other pages as well. If you're lucky, your visitor's browser will have a cached copy and won't do the request.
Your own scripts should be condensed and potentially minified into a single script (with tools such as browserify, webpack, etc.); see the sketch after this list. This must not include frequently changing parts, as these would force you to transfer even more data more often.
If you've got any scripts or resources that are really only part of your current visitor's experience (like logged in status, colors picked in user preferences, etc.), it's okay to put these directly into the parent HTML file, if that file is customized anyway and delivering them as separate files wouldn't work or would cause more overhead. A perfect example for this would be CSRF tokens. Don't do this if you're able to deliver some static HTML file that's filled/updated by Javascript though.
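A minimal sketch of that bundling step (assuming a webpack 5-style config; the file names and paths are placeholders):

    // webpack.config.js
    const path = require('path');

    module.exports = {
      mode: 'production',                      // enables minification out of the box
      entry: './src/index.js',                 // your own, relatively stable code
      output: {
        filename: 'bundle.[contenthash].js',   // content hash busts the cache only when the code changes
        path: path.resolve(__dirname, 'dist'),
      },
    };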
Yes, it will improve page load time, but this method is still not often used, for these reasons:
Debugging becomes more difficult.
If you want to update later, that also won't be so easy.
Separate .css and .js files remove these issues.
And for faster page load, you can use a BUILD SYSTEM like GRUNT, GULP, BRUNCH, etc. for better performance.
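A minimal sketch of such a build step (assuming Gulp 4 with the gulp-concat and gulp-uglify plugins; the paths are placeholders):

    // gulpfile.js
    const { src, dest } = require('gulp');
    const concat = require('gulp-concat');
    const uglify = require('gulp-uglify');

    // concatenate and minify all page scripts into one small, cacheable file
    function scripts() {
      return src('src/js/**/*.js')
        .pipe(concat('app.min.js'))
        .pipe(uglify())
        .pipe(dest('dist/js'));
    }

    exports.scripts = scripts;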

What are the best practices for including CSS and JavaScript in a page? [closed]

I would like to know if it is advisable to embed (inline) all CSS and JavaScript required by a webpage into <style> and <script> tags, instead of letting the browser download these files. Is such a practice advisable?
My "application" is a SPA and I have managed to compile everything, even images and font-icons (as base64) into a single index.html, but I am not sure if this is a common practice.
Thanks in advance.
You are ignoring some crucial things:
Browsers can fetch separate resources in parallel, thus reducing load time compared to the "pack it all together" approach.
Browsers can apply different caching policies to different types of resources, which can also be used for some clever time- and/or bandwidth-reducing tuning.
People can get some useful data even if not all resources are loaded.
Not all functionality in SPA is heavily used, so sometimes it makes sense to load some entities lazily, on demand.
This is a very basic and simplified overview; there are a lot of things to consider here. Moreover, bundling into bigger chunks is actually something used in practice - pretty often all JS resources are bundled, for example. But definitely trying to get rid of every additional HTTP request will make your architecture less flexible, less cacheable and so on. So, it's overkill.
Best practice would be to split resources (scripts, CSS, images, etc.) into separate files. This allows the browser to download and cache each resource for future reuse (even on other pages). But browsers have a limit of six (at the time of writing) parallel connections per origin. That is why a lot of external resources on a page cause bad page loading performance and a bad waterfall.
There are a lot of techniques to improve performance, such as bundling, domain sharding, image sprites, etc. Also, for some critical resources you can use the inlining technique. It allows browsers to use those resources instantly, without additional requests. For example, you can embed all the resources (image, CSS, scripts) required for a loading indicator, and the browser will render it without additional requests.
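A minimal sketch of that "inline the critical bit" idea (the class names, keyframe, and file path are made-up placeholders):

    <head>
      <!-- tiny critical CSS inlined so the loading indicator renders with zero extra requests -->
      <style>
        .spinner { width: 40px; height: 40px; border: 4px solid #ccc;
                   border-top-color: #333; border-radius: 50%;
                   animation: spin 1s linear infinite; }
        @keyframes spin { to { transform: rotate(360deg); } }
      </style>
      <!-- the full stylesheet still comes from a separate, cacheable file -->
      <link rel="stylesheet" href="/css/main.css">
    </head>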
For the best development style, do not embed resources; use separate files. If you care about performance, you should investigate the waterfall of your page (e.g. in the network tab of any browser's developer tools) and apply some of these techniques to improve it. If you are interested in this field I recommend you read the books below:
High Performance Web Sites by Steve Souders
Even Faster Web Sites by Steve Souders
High Performance Browser Networking by Ilya Grigorik
Note that these techniques are relevant only for HTTP/1.1. For HTTP/2.0 they can even be harmful, because the new version is designed to improve performance on its own.
No, always avoid inline styling and scripting, to reduce the size of the HTML file. Also, separating your HTML, CSS and JS keeps your code clean, semantic and reusable for other external pages or applications that may require a common CSS property or script.
It's all about where in the pipeline you need the CSS as I see it.
inline css
Pros: Great for quick fixes/prototyping and simple tests without having to swap back and forth between the .css document and the actual HTML file.
Pros: Many email clients do NOT allow the use of external .css referencing because of possible spam/abuse. Embedding might help.
Cons: Fills up HTML space/takes bandwidth, not reusable across pages - not even iframes.
embedded css
Pros: Same as above regarding prototype, but easier to cut out of the final prototype and put into an external file when templates are done.
Cons: Some email clients do not allow styles in the <head>, as the head tags are removed by most webmail clients.
external css
Pros: Easy to maintain and reuse across websites with more than 1 page.
Pros: Cacheable = less bandwidth = faster page rendering after second page load
Pros: External files, including .css, can be hosted on CDNs, thereby making fewer requests to the firewall/webserver hosting the HTML pages (if on different hosts).
Pros: Compilable, you could automatically remove all of the unused space from the final build, just as jQuery has a developer version and a compressed version = faster download = faster user experience + less bandwidth use = faster internet! (we like!!!)
Cons: Normally removed from HTML mails = messy HTML layout.
Cons: Makes an extra HTTP request per file = more resources used in the Firewalls/routers.
Keeping all the HTML, CSS, and JavaScript code in one file can make it difficult to work with. Stylesheet and JavaScript files must contain the <style> and <script> tags respectively, because they are HTML snippets and not pure .css or .js files.
Many web developers recommend loading JavaScript code at the bottom of the page to increase responsiveness, and this is even more important with the HTML service. In the NATIVE sandbox mode, all scripts you load are scanned and sanitized client-side, which may take a couple of seconds. Moving your <script> tags to the end of your page will let HTML content render before the JavaScript is processed, allowing you to present a spinner or other message to the user.
When you're hard at work on your markup, sometimes it can be tempting to take the easy route and sneak in a bit of styling.
<p style="color: red">I'm going to make this text red so that it really stands out and makes people take notice!</p>
Sure -- it looks harmless enough. However, this points to an error in your coding practices.
Instead, finish your markup, and then reference that P tag from your external stylesheet.
someElement > p { color: red;}
Remember -- the primary goal is to make the page load as quickly as possible for the user. When loading a script, the browser can't continue on until the entire file has been loaded. Thus, the user will have to wait longer before noticing any progress.
If you have JS files whose only purpose is to add functionality -- for example, after a button is clicked -- go ahead and place those files at the bottom, just before the closing body tag. This is absolutely a best practice.
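A sketch of that placement (the file name is a placeholder):

    <body>
      <!-- visible content first, so the user sees progress immediately -->
      <h1>Page heading</h1>
      <p>Page content...</p>

      <!-- enhancement-only scripts last, just before the closing body tag -->
      <script src="click-handlers.js"></script>
    </body>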

what is better? using iframe or something like jquery to load an html file in external website

I want my customers to create their own HTML in my web application and then copy and paste my code into their website, showing the result in the position they want, with a customized size and other options on the page. The output HTML of my web application contains HTML tags and JavaScript code (for example, a web chart created with JavaScript).
I found two ways to do this: one using an iframe and the other using jQuery .load().
What is better and safer? Is there any other way?
iframe is better - if you are running Javascript then that script shouldn't execute in the same context as your user's sites: you are asking for a level of trust here that the user shouldn't need to accede to, and your code is all nicely sandboxed so you don't have to worry about the parent document's styles and scripts.
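A sketch of what the pasted embed snippet could look like under that approach (the URL, dimensions, and sandbox flags are illustrative assumptions):

    <!-- the customer pastes this into their own page -->
    <iframe src="https://example.com/widget/chart?id=12345"
            width="600" height="400"
            style="border: 0;"
            sandbox="allow-scripts allow-same-origin"
            title="Embedded chart">
    </iframe>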
As a front-end web developer and webmaster I've often taken the decision myself to sandbox third-party code in iframes. Below are some of the reasons I've done so:
Script would play with the DOM of the document. Once a third-party widget took it upon itself to introduce buggy and performance-intensive PNG fix hacks for IE across every PNG used in img tags and CSS across our site.
Many scripts overwrite the global onload event, robbing other scripts of their initialisation trigger.
Reading local session info and sending it back to their own repositories.
Loading any number of resources and performing CPU-intensive processes, interrupting and weighing down my site's core experience.
The above are all examples of short-sightedness or malice on the part of third parties, behaviour you may consider yourself above, but the point is that as one of your service's users I shouldn't need to take a gamble. If I put your code in an iframe, I know it can happily do its own thing and not screw with my site or its users. I can also choose to delay loading and execution to a moment of my choosing (by dynamically loading the iframe at a moment of choice).
To argue the point in terms of your convenience rather than the users':
You don't have to worry about any of the trust issues associated with XSS. You can honestly tell your users they're not exposing themselves to any unnecessary worry by running your tool.
You don't have to make the extra effort to circumvent the effects of CSS and JS on your users' sites.

How to optimally serve and load JavaScript files?

I'm hoping someone with more experience with global-scale web applications could clarify some questions, assumptions and possible misunderstandings I have.
Let's take a hypothetical site (heavy amount of client-side / dynamic components) which has hundreds of thousands of users globally and the sources are being served from one location (let's say central Europe).
If the application depends on popular JavaScript libraries, would it be better to take it from the Google CDN and compile it into one single minified JS file (along with all application-specific JavaScript) or load it separately from the Google CDN?
Assetic VS headjs: Does it make more sense to load one single JS file or load all the scripts in parallel (executing in order of dependencies)?
My assumptions (please correct me):
Compiling all application-specific/local JS code into one file, using CDNs like Google's for popular libraries, etc. but loading all of these via headjs in parallel seems optimal, but I'm not sure. Server-side compiling of third party JS and application-specific JS into one file seems to almost defeat the purpose of using the CDN since the library is probably cached somewhere along the line for the user anyway.
Besides caching, it's probably faster to download a third party library from Google's CDN than the central server hosting the application anyway.
If a new version of a popular JS library is released with a big performance boost, is tested with the application and then implemented:
If all JS is compiled into one file then every user will have to re-download this file even though the application code hasn't changed.
If third party scripts are loaded from CDNs then the user only has to download the new version from the CDN (or from a cache somewhere).
Are any of the following legitimate worries in a situation like the one described?
Some users (or browsers) can only have a certain number of connections to one hostname at once, so retrieving some scripts from a third party CDN would result in overall faster loading times.
Some users may be using the application in a restricted environment, therefore the domain of the application may be white-listed but not the CDNs's domains. (If it's possible this is realistic concern, is it at all possible to try to load from the CDN and load from the central server on failure?)
Compiling all application-specific/local JS code into one file
Since some of our key goals are to reduce the number of HTTP requests and minimize request overhead, this is a very widely adopted best practice.
The main case where we might consider not doing this is in situations where there is a high chance of frequent cache invalidation, i.e. when we make changes to our code. There will always be tradeoffs here: serving a single file is very likely to increase the rate of cache invalidation, while serving many separate files will probably cause a slower start for users with an empty cache.
For this reason, inlining the occasional bit of page-specific JavaScript isn't as evil as some say. In general though, concatenating and minifying your JS into one file is a great first step.
using CDNs like Google's for popular libraries, etc.
If we're talking about libraries where the code we're using is fairly immutable, i.e. unlikely to be subject to cache invalidation, I might be slightly more in favour of saving HTTP requests by wrapping them into your monolithic local JS file. This would be particularly true for a large code base heavily based on, for example, a particular jQuery version. In cases like this bumping the library version is almost certain to involve significant changes to your client app code too, negating the advantage of keeping them separate.
Still, mixing request domains is an important win, since we don't want to be throttled excessively by the maximum connections per domain cap. Of course, a subdomain can serve just as well for this, but Google's domain has the advantage of being cookieless, and is probably already in the client's DNS cache.
but loading all of these via headjs in parallel seems optimal
While there are advantages to the emerging host of JavaScript "loaders", we should keep in mind that using them does negatively impact page start, since the browser needs to go and fetch our loader before the loader can request the rest of our assets. Put another way, for a user with an empty cache a full round-trip to the server is required before any real loading can begin. Again, a "compile" step can come to the rescue - see require.js for a great hybrid implementation.
The best way of ensuring that your scripts do not block UI painting remains to place them at the end of your HTML. If you'd rather place them elsewhere, the async or defer attributes now offer you that flexibility. All modern browsers request assets in parallel, so unless you need to support particular flavours of legacy client this shouldn't be a major consideration. The Browserscope network table is a great reference for this kind of thing. IE8 is predictably the main offender, still blocking image and iFrame requests until scripts are loaded. Even back at 3.6 Firefox was fully parallelising everything but iFrames.
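For instance (a sketch; the file names are placeholders):

    <!-- fetched in parallel, executed as soon as it arrives (order not guaranteed) -->
    <script async src="/js/analytics.js"></script>

    <!-- fetched in parallel, executed in document order after parsing finishes -->
    <script defer src="/js/app.js"></script>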
Some users may be using the application in a restricted environment, therefore the domain of the application may be white-listed but not the CDNs's domains. (If it's possible this is realistic concern, is it at all possible to try to load from the CDN and load from the central server on failure?)
Working out if the client machine can access a remote host is always going to incur serious performance penalties, since we have to wait for it to fail to connect before we can load our reserve copy. I would be much more inclined to host these assets locally.
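For reference, the CDN-with-local-fallback pattern the question alludes to commonly looks roughly like this (the CDN URL, version, and local path are illustrative):

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
    <script>
      // if the CDN request failed, window.jQuery is undefined, so load the local copy instead
      window.jQuery || document.write('<script src="/js/jquery-3.6.0.min.js"><\/script>');
    </script>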
Many small JS files are better than a few large ones, for many reasons, including changes/dependencies/requirements.
JavaScript/CSS/HTML and any other static content are handled very efficiently by any of the current web servers (Apache/IIS and many others); most of the time a single web server is more than capable of serving hundreds or thousands of requests per second, and in any case this static content is likely to be cached somewhere between the client and your server(s).
Using any external (not controlled by you) repositories for code that you want to use in a production environment is a NO-NO (for me and many others): you don't want a sudden, catastrophic and irrecoverable failure of your whole site's JavaScript functionality just because somebody somewhere pressed commit without thinking or checking.
Compiling all application-specific/local JS code into one file, using
CDNs like Google's for popular libraries, etc. but loading all of
these via headjs in parallel seems optimal...
I'd say this is basically right. Do not combine multiple external libraries into one file, since—as it seems you're aware—this will negate the majority case of users' browsers having cached the (individual) resources already.
For your own application-specific JS code, one consideration you might want to make is how often this will be updated. For instance if there is a core of functionality that will change infrequently but some smaller components that might change regularly, it might make sense to only compile (by which I assume you mean minify/compress) the core into one file while continuing to serve the smaller parts piecemeal.
Your decision should also account for the size of your JS assets. If—and this is unlikely, but possible—you are serving a very large amount of JavaScript, concatenating it all into one file could be counterproductive as some clients (such as mobile devices) have very tight restrictions on what they will cache. In which case you would be better off serving a handful of smaller assets.
These are just random tidbits for you to be aware of. The main point I wanted to make was that your first instinct (quoted above) is likely the right approach.

Why doesn't Facebook combine its CSS/JS files?

I am curious as to why the Facebook developers have chosen to not combine their scripts and stylesheets into single files. Instead they are loaded on demand via their CDN.
Facebook is obviously a very complex application and I can understand how such modularity might make Facebook easier to maintain, but wouldn't the usual optimisation advice still apply (especially given its high level of usage)?
Or, does the fact that they are using a CDN avoid the usual performance impact of having lots of small scripts / styles?
In a word: BigPipe. They divide the page up into 'pagelets'; each is processed separately on their servers and sent to the browser in parallel. Essentially almost everything (CSS, JS, images, content) is lazy-loaded, so it comes down in a bunch of small files.
They might be running into the case where the savings of being able to serve different combinations of JS files to the browser at different times (for different pages or different application configurations for different users) represents a larger savings than the reduced HTTP request overhead of combining all of the files into one.
If a browser is only ever executing a small percent of the total JS code base at any given time, then this would make sense. Because they have so many different users and different parts of different applications running in different configurations for those users, it is arguable that this is the case.
Second, those files only need to be downloaded once; the browser won't ask for them again until they have changed or the cache has expired, so only the first visit really benefits from the all-in-one style. And yes, having an advanced CDN with many edge locations around the world definitely helps.
Maybe they think it's more likely that you visit Facebook more often than you clear your browser cache.
