How to load 3rd-party scripts into a site using web workers - javascript

I have seen a lot about offloading large scripts from the main thread to help improve website performance and Google's Core Web Vitals. I am curious if there is a way to use web workers to load a third-party script, like Google Analytics or the Facebook pixel (or really any third party script), so that these processes don't bog down the main thread.
If it is possible, could this also be done, in theory, for external CSS stylesheets or CSS libraries?

Since this answer appears on the first page when googling how to offload 3rd-party scripts to a web worker, I thought it would be nice to update it:
There is a library called Partytown which does exactly this, and much more.
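For context, here is a minimal sketch of how Partytown is wired up (the /~partytown/ path is its default output location, and the analytics URL/ID are placeholders):

    <!-- load the Partytown runtime (copied to /~partytown/ by its setup step) -->
    <script src="/~partytown/partytown.js" async defer></script>

    <!-- type="text/partytown" tells Partytown to run this script in a web worker -->
    <script type="text/partytown" src="https://www.googletagmanager.com/gtag/js?id=GA_MEASUREMENT_ID"></script>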

I am curious if there is a way to use web workers to load a third-party script, like Google Analytics or the Facebook pixel (or really any third party script), so that these processes don't bog down the main thread.
No. Those scripts are designed to interact with the DOM (which isn't available inside a worker). They aren't CPU-intensive anyway, so you wouldn't see any performance benefit.
If it is possible, could this also be done, in theory, for external CSS stylesheets or CSS libraries?
No. Workers run JS, not CSS (and again, CSS interacts with the DOM).
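To illustrate the first point, a tiny sketch (file names hypothetical): a dedicated worker has no document or window, so an analytics script pulled in with importScripts() would throw the moment it touched the DOM.

    // worker.js
    console.log(typeof document); // "undefined": there is no DOM in a worker
    console.log(typeof window);   // "undefined": no window either

    // importScripts('https://www.google-analytics.com/analytics.js');
    // would fail as soon as the script tried to touch document

    // main.js
    // const worker = new Worker('worker.js');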

Related

Should web worker be used to handle style rendering in css-in-js?

I am writing a css-in-js library. I came across web workers recently and learned how they can help me run code in parallel.
As the main thread is treated as the UI thread, should I push some of the CSS-generating work from my library to a web worker? Would this be considered an anti-pattern?
There is no generic rule. You move something to a web worker to keep the page operable during the calculation. In the case of css-in-js, the page usually has to wait for the CSS before it can do anything, so the usefulness of workers looks limited there. The only case I see is a web worker preparing a skin update in the background without blocking the page. Generally speaking, you need a real project to check whether it is a good idea.
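A hypothetical sketch of that one case (all file and property names invented): the worker builds the CSS string in the background, and the main thread only does the cheap injection.

    // theme-worker.js: builds a CSS string off the main thread
    self.onmessage = (e) => {
      const { primary, spacing } = e.data;
      self.postMessage(`.btn { color: ${primary}; padding: ${spacing}px; }`);
    };

    // main.js: the page stays responsive while the CSS is generated
    const worker = new Worker('theme-worker.js');
    worker.onmessage = (e) => {
      const style = document.createElement('style');
      style.textContent = e.data; // the generated CSS arrives as a string
      document.head.appendChild(style);
    };
    worker.postMessage({ primary: '#336699', spacing: 8 });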

Load libraries, scripts, etc for the website from external or internal sources?

I just started my adventure with frontend, most likely with web design. I've been struggling to answer one technical question and haven't yet found a reasonable answer.
There are so many libraries you can load or download to make your web development faster. Hence my question.
Is it better to link to these libraries (e.g. Bootstrap, jQuery, Angular, fonts from Google and so on) externally from the official source, or to download them, upload them to your server, and link to the files there (internal source)?
My intuition tells me that downloading them and uploading them to my server, then linking to them there, would make the whole website load quicker. Is that good thinking?
Pros of hosting and linking to external resources (be it JS libraries, images or whatever):
Spreading the load: your server doesn't have to serve all content; it can concentrate on its main functionality and serve the application itself.
Spreading the HTTP connections: browsers cap the number of parallel HTTP connections per site/subdomain, and with more and more asynchronously working applications it is a good thing to max out that parallelism by delivering application data from your server and loading all necessary additional resources from other servers.
As Rafael mentioned above, CDNs scale very well and seldom go offline.
Cons
Even with fast internet connections, there is a high chance that resources will be served faster when they are located on the same intranet. That's why some companies have their own "micro-CDNs" inside their local networks to combine the advantages of multiple servers and local availability.
External dependency: as soon as the internet connection becomes unavailable or a proxy server goes down, all external resources become unavailable, leaving the application in a broken state.
Sometimes it may actually be faster to link from an external source. That's because the browser caches recently accessed data, and many sites use Bootstrap, jQuery and the like, so there's a fair chance the library is already in the cache. This happens less often with less popular libraries.
Keep in mind, though, that since you're downloading from external sources, you're at the mercy of their servers. If for one reason or another they go offline, your page won't work correctly. CDNs are not supposed to go offline for that very reason, but it's good to be aware of the risk. Also, if you're offline while working on your page, you won't be able to fetch those resources during development.
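A common way to hedge against that risk is the classic CDN-with-local-fallback pattern; a sketch (the local path is hypothetical):

    <script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
    <script>
      // if the CDN request failed, window.jQuery is undefined: fall back to our own copy
      window.jQuery || document.write('<script src="/js/jquery-3.6.0.min.js"><\/script>');
    </script>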
For security, it is generally better to download these files and host them locally when developing an application, so that you do not have to depend on the third-party server hosting the CDN.
Performance-wise, using a CDN might be beneficial because the libraries you require might already be cached in the user's browser, saving the time needed to fetch the file. If the file is only available from your own server, loading it will definitely take time and bandwidth on the first visit.
https://halfelf.org/2015/cdn-vs-local/
https://www.sitepoint.com/7-reasons-not-to-use-a-cdn/
I agree with Rafael's answer above, but wanted to note a few benefits of serving up these libraries locally that he or she omitted.
It is still considered best practice (until HTTP/2 becomes widespread) to minimize the number of downloads your site makes by concatenating many files into a single file. So, if you are using three JavaScript libraries/frameworks (for instance, Angular, jQuery and Moment.js) from a CDN, that is three separate script elements pulling down three separate .js files. However, if you host them locally, your build process can include a step that bundles the three libraries into a single file called "vendor.js" or something along those lines. This has the added bonus of simplifying dependency loading to some degree, as you can concatenate them in a particular order should the need arise.
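A minimal sketch of such a build step (file names and paths are hypothetical; a real project would usually reach for an established bundler):

    // build.js: naive concatenation into a single vendor file
    const fs = require('fs');

    const libs = ['lib/jquery.js', 'lib/angular.js', 'lib/moment.js'];
    const bundle = libs.map((f) => fs.readFileSync(f, 'utf8')).join(';\n');
    fs.writeFileSync('dist/vendor.js', bundle);
    console.log('wrote dist/vendor.js');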
Finally, although it is a little advanced if you are just getting started, if you are considering hosting your library files with your project it would definitely be worth looking into Bower (https://bower.io/docs/api/). It is a Node-based package manager that lets you define which packages your project needs and install them with a single command, which is particularly useful for keeping unnecessary library files out of your version control. Good luck!
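For illustration, a minimal bower.json sketch (package names and versions are invented); running bower install then pulls everything into bower_components/, which you can exclude from version control:

    {
      "name": "my-app",
      "dependencies": {
        "jquery": "^3.3.0",
        "angular": "^1.7.0"
      }
    }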

Speed up slow loading website on mobile due to 3rd party analytics scripts

Speed tests report that my site loads in about 8 seconds on mobile and 2 on desktop.
When looking at the "waterfall" of the asset/script loading it seems to be mostly due to 3rd party analytic scripts like zendesk, hubspot, and google analytics.
Is there any way to optimize a site for mobile when using these types of scripts?
As far as the files go on my site I've optimized them nearly as much as I could. I've even used a cron (probably not a great idea) to cache the google analytics script locally and fetch a newer version every few days.
I've looked into using Tag Manager or something like Segment to optimize the script loading of these files, but I'm not sure if that will actually improve performance or if those services are mostly just for convenience.
I've also looked at service workers for mobile app caching, but I'm not sure if that will help either and don't want to dive into learning how to use them if it won't actually make a difference.
To sum up: is there a way to speed up mobile with multiple 3rd-party analytics scripts, or am I just going to have to forgo some of them, or perhaps add a mobile or AMP version that leaves some or all of them out?
Since you've tagged AMP in your question, I'll answer from the AMP perspective:
AMP solves this problem by using a mediator that connects directly to the endpoints of various analytics providers, replacing their clunky and power-hungry JS scripts.
Of course, AMP comes with its own set of constraints and is really only targeted at powering static content, but if you're willing to go that route, you can learn more about AMP Analytics here.
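For a flavor of what that looks like, here is a minimal amp-analytics sketch (the account value is a placeholder):

    <script async custom-element="amp-analytics"
        src="https://cdn.ampproject.org/v0/amp-analytics-0.1.js"></script>

    <amp-analytics type="googleanalytics">
      <script type="application/json">
      {
        "vars": { "account": "UA-XXXXXX-Y" },
        "triggers": {
          "trackPageview": { "on": "visible", "request": "pageview" }
        }
      }
      </script>
    </amp-analytics>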

embedded vs linked JS / CSS

I am familiar with the benefits of linked CSS vs embedded and inline CSS for maintainability and modularity. I have, however, read that in certain mobile applications of web dev, it can be beneficial (faster performance) to embed or even inline your CSS.
I would avoid any inline JS, and to a lesser extent CSS, but I have noticed that on many sites, including plenty of Google pages, JS is embedded right in the header of pages.
A colleague of mine insists on always linking to external js files. I think it makes more sense to embed js if the function is specific to one page or varies slightly per page, to save the processing overhead of a linked script.
The one thing the other answers didn't touch on is developer efficiency. If it's easier to put it inline, and there's no immediate performance requirement/concern, then do that. There is real business value to "easy", and it trumps eventual or non-existent performance concerns. Don't prematurely optimize.
The advantage of linked JS files is that they can be cached by the browser and loaded from local disk or memory on subsequent pages.
The advantage of inline JS is that you might have fewer network requests per page.
The best compromise is usually a small number of linked JS files (one or two) containing a minified combination of all your JS, so that it is delivered in as few files as possible and at as small a size as possible.
The benefits of local caching far exceed the cost of parsing a little extra JS that might not be used on some pages.
Embedded JS that makes sense (even when most of your JS is in linked files) is setting a few JS variables that contain state specific to your page. That would typically be embedded into the head section of the page, as it's generated dynamically at your server, is different for every page, and is usually not cacheable. But this data should typically be small and page-specific.
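A sketch of that pattern (names hypothetical): the tiny page-specific state sits inline, while the bulk of the JS stays in a cached linked file.

    <head>
      <script>
        // generated server-side; different for every page, so not cacheable
        window.PAGE_STATE = { articleId: 42, locale: "en" };
      </script>
      <!-- the bulk of the JS lives in a cached, linked file -->
      <script src="/js/app.min.js"></script>
    </head>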
Linking a script incurs a small penalty in the form of an extra request to the server. If you keep it inline, this request is not made and, depending on the situation, you may get a faster-loading page. It makes sense to inline your code if:
it is very small
it is dynamically generated, since then you won't get the benefits of caching anyway
In the case of Google and Facebook, you're most likely seeing inline JavaScript because it's being generated by server-side code.
Other answers have already mentioned the advantages of caching with external JS files. I would almost always go that way for any library or framework type functionality that is likely to be used by at least two pages. Avoid duplication when you can.
I thought I would comment on "inline" vs "embedded".
To me, "inline" means mixing the JS in with the HTML, perhaps with several separate <script> blocks that may or may not refer to each other, almost certainly with a number of event handlers set directly with HTML element properties like <div onclick="..."/>. I would discourage this in most circumstances, but I wouldn't get too hung up about it for occasional uses. Sometimes it's simply less hassle and pretending otherwise just wastes time that you could spend on more important issues.
I would define "embedded" as having (preferably) a single <script> block in the head or at the end of the body, with event handlers assigned within that block using document ready or onload function(s). I see nothing wrong with this for functions specific to one page, in fact I tend to prefer this over an external file for that purpose if it's only a small amount of script and I don't care about caching it client-side. Also if the page is generated dynamically and something in the JavaScript needs to be generated on the server it is generally much easier to do it if the script is on the same page.
One last note on external files: during development, watch out for IE's tendency to "over-cache". Sometimes while testing I've made some small changes to an external file or two and pulled my hair out wondering why it didn't work, only to eventually realise that IE was still using an old cached version. (On the one hand of course this is my fault, but on the other I know a lot of people who have fallen victim to this from time to time.)
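One common guard against that is a version querystring on the linked file, so that changing the value forces browsers to fetch a fresh copy; a sketch with hypothetical paths:

    <!-- bump the v value on each deploy (or each edit during development) -->
    <script src="/js/app.js?v=20240115"></script>
    <link rel="stylesheet" href="/css/site.css?v=20240115">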
All the answers above are very good. But I would like to add my own, based on 15 years of web dev experience.
ALWAYS use linked resources for both CSS and ECMAScript rather than inline code, as linked content is cached by browsers in most cases and reused across potentially thousands of pages over hours and days of a user's interaction with a given domain. The advantages are as follows:
The bandwidth savings from linked scripts are HUGE, as you simply deliver less script over the wire across a user experience that might draw on the cache for weeks.
There's also a better CSS cascade, as embedded and inline styles override linked styles by weight, causing confusion for designers.
You avoid duplicate scripts, which happens a lot with inline scripts.
You reuse the same libraries over many pages, with cache control on the client now possible using versioning and date-based querystrings.
Linked resources tell the browser to load all resources BEFORE initializing scripts or styles in the DOM. I've seen issues related to this where users pressed buttons before scripts had pre-processed date-times in time sheet apps, causing major problems.
You have more control over the script and CSS libraries used by all developers in a project, avoiding polluting your app with hundreds of poorly vetted custom scripts in pages.
It's very easy to update libraries for your users, as well as to version linked libraries.
External script libraries from Google and others are now accessible. You can even reuse your own linked libraries and CSS as linked resources.
Best of all, there are processing speed gains from cached client-side resources. Cached resources outperform on-demand resources any time!
Linked scripts also enforce style and layout consistency instead of custom layout shifts per page. If you use HTML layouts consistently, you can achieve flash-free page views, because cached CSS is reused by the DOM across web pages to render them faster.
Once you pull in linked resources on the first request/response for a domain, the user's experience is fast, and server-side page delivery means the DOM and HTML layouts will not shift or refresh despite numerous page views or links to pages across the site. We then often added limited custom page-level embedded styles and scripts on top of the cached linked stack of libraries where a narrow range of customizations was needed. Global variables and custom CSS can then override linked values. This makes websites much easier to maintain, without page-by-page guesswork as to which libraries are missing or already used. I've added custom linked jQuery or other libraries in sub-sections to gain more speed this way, which means you can use linked scripts to manage custom subgroups of website sections as well.
GOOGLE ANGULAR
What you are seeing in Google's web presence is often an implementation of Angular's very complex ES5/ES6 modular script-caching system, where single-page applications rely on fully JavaScripted DOM manipulation, with the scripts and embedded CSS of page views now managed exclusively in Angular (2+). They use elaborate modules to load on demand and lazy-load components and modules, with HTML/CSS templates pre-loaded into memory and fetched from the server behind the scenes to speed delivery of news and other pages they manage.
The FLAW in all that is that they demand browsers stream HUGE megabytes of ECMAScript, with HTML and CSS embedded, into these webpages behind the scenes as the user interacts with the cached pages and modules. The problem is they have HUGE stacks of the same CSS and scripts being injected into multiple modules and then into parts of the DOM, which is sloppy and wasteful. They argue there is no need now for server-side delivery or caching when they can easily manage all of that via inline style and script content downloaded through hidden XMLHttpRequest Web API calls to and from the server. Why download all that and constantly rebuild and store inline pages in memory when a much smaller file linked from the page would suffice?
Honestly, this is the sloppiest approach to cache management of styles, content, and CSS I have seen yet in web dev frameworks, as it still demands huge megabytes of scripts just to parse a simple news page with a few lines of text. Someone at Google didn't really think that framework through, lol. It's wasteful of bandwidth and processing in the browser, if you ask me, and overkill. It's typical of the over-engineering at these bloated vendors.
That is why I always argue for linked CSS and scripts. Less code and more content is why these frameworks were invented. Linked and cached code means the SIMPLER, OLDER models have worked better, using the fast delivery of smaller markup pages that cache tiny kilobytes of linked ECMAScript and CSS libraries. It means less code is used to display content. The browser's relationship with the server is so fast and efficient today compared to years ago that the initial caching of these smaller linked files directly from the server (rather than giant inline pages of duplicate scripts yanked down via Web API in Angular on every page view) means linked resources are delivered much faster over the initial visit to a typical web domain.
It's only recently that the 'script kiddies' have forgotten all this and started going backwards to a failed way of using local embedded and inline styles and scripts, which we stopped using 20 years ago for a reason. It is a very poor choice, and it shows the inexperience of many new developers today with the web and its markup and content model.
Stokely

What are advantages of using google.load('jQuery', ...) vs direct inclusion of hosted script URL?

Google hosts some popular JavaScript libraries at:
http://code.google.com/apis/ajaxlibs/
According to google:
The most powerful way to load the libraries is by using google.load() ...
What are the real advantages of using
google.load("jquery", "1.2.6")
vs.
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>
?
Aside from the benefit of Google being able to bundle multiple files together on the request, there is no perk to using google.load. In fact, if you know all libraries that you want to use (say just jQuery 1.2.6), you're possibly making the user's browser perform one unneeded HTTP connection. Since the whole point of using Google's hosting is to reduce bandwidth consumption and response time, the best decision - if you're just using 1 library - is to call that library directly.
Also, if your site will be using any SSL certificates, you want to plan for this by calling the script via Google's HTTPS connection. There's no downside to calling an https script from an http page, but calling an http script from an https page will cause more obscure debugging problems than you would want to think about.
It allows you to dynamically load the libraries in your code, wherever you want (see the sketch after this list).
Because it lets you switch directly to a new version of the library in the JavaScript, without forcing you to rebuild/change templates all across your site.
It would let Google change the URL (though they can't now, since the direct-URL method is already established).
In theory, if you do several google.load()s, Google can bundle them into one file, but I don't think that is implemented.
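A sketch of that dynamic-loading point (the jsapi loader itself must be included first; I'm recalling the callback option from Google's old loader docs, so treat the exact signature as an assumption):

    <script src="https://www.google.com/jsapi"></script>
    <script>
      // load jQuery on demand, anywhere in your code
      google.load("jquery", "1.2.6", {
        callback: function () {
          // jQuery is only guaranteed to exist from here on
          jQuery(function ($) { /* ... */ });
        }
      });
    </script>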
I find it's very useful for testing different libraries and different methods, particularly if you're not used to them and want to see their differences side by side, without having to download them. It appears that one of the primary reasons to do it is that it is asynchronous, versus the synchronous script call. You also get some neat stuff that is directly included in the Google loader, like client location. You can get the user's latitude and longitude from it. Not necessarily useful, but it may be helpful if you're planning to have targeted advertising or something of the like.
Not to mention that dynamic loading is always useful. Particularly to smooth out the initial site load. Keeping the initial "site load time" down to as little as possible is something every web designer is fighting an uphill battle on.
You might want to load a library only under special conditions.
Additionally, the google.load method can speed up the initial page display: if you include plain script tags in your HTML, page rendering will freeze until the file has been loaded.
Personally, I'm interested in whether there's a caching benefit for browsers that have already loaded that library as well. Seems like if someone browses to Google and loads the right jQuery lib, and then browses to my site, which loads the same jQuery lib... both might well use the same cached jQuery. That's just a speculative possibility, though.
Edit: Yep, at the very least when using direct script tags to that location, the JavaScript library will be cached if someone has already requested it from Google (e.g. if it was included by another site somewhere).
If you were to write a boatload of JavaScript that only used a library when a particular event happens, you could wait until the event happens to download the library, which avoids unnecessary HTTP requests for users who don't actually end up triggering the event. However, in the case of libraries like Prototype + Scriptaculous, which together download over 300 KB of JavaScript code, this isn't practical.
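A sketch of that event-gated approach (the element id, file path and initEditor are all hypothetical):

    // inject a script tag only when the user actually needs the library
    function loadScript(src, onload) {
      var s = document.createElement('script');
      s.src = src;
      s.onload = onload;
      document.head.appendChild(s);
    }

    document.getElementById('open-editor').addEventListener('click', function () {
      loadScript('/js/heavy-editor-lib.js', function () {
        initEditor(); // the library is available only now
      });
    });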
