I'm building a blogging plugin that enables commenting on specific pages. The way it works at the moment is that you include a js script in your html page, which then triggers at load time and adds a comment box to the page.
It's all running properly; however, when running Lighthouse reports I still get a "Remove unused JavaScript" warning.
I assume this is happening because the code isn't immediately used, but rather initiated once the page has fully loaded?
Any idea how I could fix this?
"Remove Unused JavaScript" isn't telling you that the script isn't used, it is telling you that within that script you have JavaScript that isn't needed for page load.
What it is encouraging you to do is "code splitting", which means you serve JS essential for the initial page content (known as above the fold content) first and then later send any other JS that is needed for ongoing usage on the site.
If you look at the report and open the entry you will see it has a value that shows how much of the code base is not needed initially.
If the script is small (less than 5kb), which I would imagine it is if all it does is create a comment box, then simply ignore this point. If it is larger (and you can't easily make it smaller) - split it into "initialisation" code and "usage" code / the rest of the code and serve the "initialisation" code early and the rest after all essential items have loaded or on intent.
It does not contribute to your score directly and is there purely to indicate best practices and highlight potential things that are slowing your site down.
Additional
From the comments the author has indicated the library is rather large at 70kb, due to the fact that it includes a WYSIWYG editor, etc.
If you are trying to load a large library but also be conscious about performance the trick is to load the library either "on intent" or after critical files have loaded.
There are several ways to achieve this.
On intent
If you have a complex item to appear "above the fold" and you want high performance scores you need to make a trade-off.
One way to do this is, instead of having the component initialised immediately, to defer the initialisation of the library / component until someone actually needs to use it.
For example you could have a button on the page that says "leave a comment" (or whatever is appropriate) that is linked to a function that only loads the library once it is clicked.
Obviously the trade-off here is that you will have a slight delay loading and initialising the library (but a simple load spinner would suffice as even a 70kb library should only take a second or so to load even on a slow connection).
You can even fetch the script once someone's mouse cursor hovers over the button; those extra few milliseconds can add up to a perceivable difference.
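A rough sketch of the idea, assuming a hypothetical button id, script URL and init function (swap in whatever your plugin actually uses):

```js
// Hypothetical ids, URL and init call - placeholders, not real plugin APIs.
const button = document.querySelector('#leave-a-comment');
let scriptPromise = null;

function loadCommentScript() {
  if (!scriptPromise) {
    scriptPromise = new Promise((resolve, reject) => {
      const s = document.createElement('script');
      s.src = '/js/comment-widget.js'; // placeholder URL
      s.onload = resolve;
      s.onerror = reject;
      document.head.appendChild(s);
    });
  }
  return scriptPromise;
}

// Prefetch on hover (intent), initialise on click.
button.addEventListener('mouseover', () => loadCommentScript(), { once: true });
button.addEventListener('click', () => {
  // A simple spinner could be shown here while the library loads.
  loadCommentScript().then(() => window.initCommentWidget()); // hypothetical init
});
```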
Deferred
We obviously have the async and defer attributes.
The problem is that both mean you are downloading the library alongside critical assets; the difference is when they are executed.
What you can do instead is use a function that listens for the page load event (or listens for all critical assets being loaded if you have a way of identifying them) and then dynamically adds the script to the page.
This way it does not take up valuable network resources at a time when other items that are essential to page load need that bandwidth / network request slot.
Obviously the trade-off here is that the library will be ready slightly later than the main "above the fold" content. Yet again, a simple loading spinner should be sufficient to fix this problem.
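A minimal sketch of that approach, again with a placeholder script URL:

```js
// Add the comment script only once the load event has fired,
// so it doesn't compete with critical assets for bandwidth.
window.addEventListener('load', () => {
  const s = document.createElement('script');
  s.src = '/js/comment-widget.js'; // placeholder URL
  document.head.appendChild(s);
});
```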
Related
I'm starting to optimize and I have this problem with the Facebook tracking pixel killing my load times:
Waterfall Report
My page finishes around 1.1 seconds but the pixel doesn't finish until almost a full second later.
My pixel script is in the head, per the docs. Is there any way of speeding this up?
The script doesn’t necessarily need to be in the head, even if Facebook recommends that as the default. As long as you don’t explicitly call the fbq function anywhere before the code snippet has been embedded, it should work fine.
The question, though, is how much of an improvement that actually brings - the embed code is already written in a way that the actual SDK gets loaded asynchronously.
The <script> block that embeds the SDK might be parser-blocking - but you can’t put async or defer on inline scripts; that would have no effect. Putting that code itself into an external file, and then embedding that with async or defer, might help in that regard though.
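For illustration, the embed could then look something like this (the file name is made up):

```html
<!-- The pixel bootstrap snippet moved into its own external file -->
<script src="/js/fb-pixel-init.js" defer></script>
```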
Another option would be to not use the script at all, but track all events you need to track using the image alternative.
https://developers.facebook.com/docs/facebook-pixel/advanced#installing-the-pixel-using-an-img-tag:
If you need to install the pixel using a lightweight implementation, you can install it with an <img> tag.
“Install” is a bit misleading here though - this will only track the specific event that is specified in the image URL parameters. Plus, you’d have to do all your tracking yourself, there will be no automatism any more. The simple page view default event you could track by embedding an image directly into your HTML code; if you need to track any events dynamically, then you can use JavaScript to create a new Image object and assign the appropriate URL, so that the browser will fetch it in the background.
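A small sketch of how a dynamic event could be tracked that way (the pixel id is a placeholder; check the linked docs for the exact URL parameters):

```js
// Fire a tracking event by fetching the pixel image in the background.
function trackWithPixelImage(eventName) {
  const img = new Image();
  img.src = 'https://www.facebook.com/tr?id=YOUR_PIXEL_ID' +
            '&ev=' + encodeURIComponent(eventName) + '&noscript=1';
}

trackWithPixelImage('Purchase'); // hypothetical usage
```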
There are a few additional limitations mentioned there in the docs, check if you can live with those, if you want to try this route.
If you needed to take into account additional factors like GDPR compliance - then you would have to handle that completely yourself as well, if you use the images for tracking. (The SDK has methods to suspend tracking based on a consent cookie, https://developers.facebook.com/docs/facebook-pixel/implementation/gdpr)
A third option might be to try and modify the code that embeds the SDK yourself.
The code snippet creates the fbq function if it does not already exist. Any subsequent calls to it will put the event to track on a “stack” that will then get processed once the SDK file has loaded. So in theory this could be rewritten, for example, in such a way that it doesn’t insert the script node to load the SDK immediately, but delays that using a timeout. That should still work the same way (in theory, I have not explicitly tested it) - as long as the events got pushed onto the mentioned stack, it should not matter when or how the SDK loads. (Too big of a timeout could lead to users moving on to other pages before any tracking happens though.)
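A simplified, untested sketch of that idea - a queueing stub plus a delayed script insert. This is not the official snippet, just an illustration of the principle:

```js
// Minimal stand-in for the fbq stub: queue calls until the SDK has loaded.
if (!window.fbq) {
  window.fbq = function () {
    (window.fbq.queue = window.fbq.queue || []).push(arguments);
  };
}
fbq('init', 'YOUR_PIXEL_ID'); // placeholder id
fbq('track', 'PageView');

// Delay inserting the SDK script instead of doing it immediately.
setTimeout(() => {
  const s = document.createElement('script');
  s.async = true;
  s.src = 'https://connect.facebook.net/en_US/fbevents.js';
  document.head.appendChild(s);
}, 3000); // arbitrary delay; too long and users may leave before tracking fires
```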
Last but not least, what I would call the “good karma” option - removing the tracking altogether, and all the sniffing, profiling and general privacy violation that comes with it :-) That is likely not an option for everyone, and if you are running ad campaigns on Facebook or similar, it might not be one at all. But in cases where this is for pure “I want to get an idea what users are doing on my site” purposes, a local Matomo installation or similar might serve that purpose just as well, if not better.
I've found that loading the pixel using Google Tag Manager works for my WordPress website.
Before Pixel, my speed was 788ms
After adding the pixel it was 2.12s
Then, adding it through GTM it was 970ms
Source and more details: https://www.8webdesign.com.au/facebook-pixel-making-website-slower/
Check this article: https://www.shaytoder.com/improving-page-speed-when-using-facebook-pixel/
I would put the script in the footer, so it will not affect the “First Paint”, but more importantly, I would wrap the “script” part of the pixel code with the highlighted lines shown in that article.
I know you might think it has no relation, but have you tried enabling lazy loading on your website? Adding it in the footer or using the Facebook for WordPress plugin also helps.
I'm developing a Node.js application. My client-side JavaScript is about 1,000 LoC, plus 3 libraries. I minified and concatenated my JavaScript files into a single file of 150kb total (50kb gzipped).
I wonder at which point of file size it would be reasonable to think about using AMD, require.js or similar.
Short answer:
~100-150kb is a very big warning sign, but file size is not the only indicator. It is also important, for making a good decision, to check whether its loading blocks responsiveness and UI rendering, and by how much (take a look at the network panel in Chrome's devtools).
Long answer:
The reason for using asynchronous loading is less about file size and more about usage patterns of parts of the code, and timing. The idea is to load the minimum required in the initial load, so the page loads as fast as it can and the user gets the most responsive experience, and then to load, in the background, resources that will be needed in the future.
A 150kb file is a lot (as an example, yours is 50kb gzipped, so less scary), but if loading the JS file doesn't block the UI rendering then file size is less of a problem. I mean, if the user doesn't notice a page loading delay then there isn't a file size problem. Also, if this is the case you might want to consider using the async script attribute. This of course doesn't hold for applications built on client-side frameworks like Angular or ExtJS, as the JS is rendering the UI.
Ask yourself how much of your code is used by how many of your users, and how often (be honest with yourself). A rule of thumb is that if any UI elements appear later in the application flow, they can be loaded later. But remember that, as with all rules of thumb, there are exceptions, so pay attention. Good analytics data is better than an arbitrary file size; you can really see what parts of the UI (and code) are used and when. Maybe some need to load very late in the flow.
If all code is needed upfront (This is rarely the case), then AMD is probably not for you.
If your application has, let's say, a statistics chart dialog that uses a lot of unique code, but only 20% of users click the button that opens it, and that button is located on the settings page, then loading all of its code along with everything else on every other page is a waste. It is better to load it when the user is on the settings page, or, if there isn't a settings page, when the button is clicked (or even when the mouse is over the button) - see the sketch below.
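With RequireJS, for example, such an on-demand load could look roughly like this (the module and element names are made up):

```js
// Load the chart module only when the user actually opens the dialog.
document.querySelector('#show-stats-button').addEventListener('click', () => {
  require(['charts/statsDialog'], (statsDialog) => {
    statsDialog.open(); // hypothetical module API
  });
});
```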
Understanding usage patterns and timing is a key criteria.
These two articles are very interesting and might help in making some deployment decisions:
https://alexsexton.com/blog/2013/03/deploying-javascript-applications/
http://tomdale.net/2012/01/amd-is-not-the-answer/
And if you still want to load everything in one file, RequireJS also has its optimizer tool, which is worth a look:
http://requirejs.org/docs/optimization.html
I want my customers to create their own HTML in my web application and then copy and paste my code into their own websites, to show the result in the position they want, with a customized size and other options. The HTML output of my web application contains HTML tags and JavaScript code (for example, a chart created with JavaScript).
I found two ways to do this: one using an iframe and the other using jQuery's .load().
What is better and safer? Is there any other way?
iframe is better - if you are running Javascript then that script shouldn't execute in the same context as your user's sites: you are asking for a level of trust here that the user shouldn't need to accede to, and your code is all nicely sandboxed so you don't have to worry about the parent document's styles and scripts.
As a front-end web developer and webmaster I've often taken the decision myself to sandbox third-party code in iframes. Below are some of the reasons I've done so:
Scripts would play with the DOM of the document. Once, a third-party widget took it upon itself to introduce buggy and performance-intensive PNG fix hacks for IE across every PNG used in img tags and CSS across our site.
Many scripts overwrite the global onload event, robbing other scripts of their initialisation trigger.
Reading local session info and sending it back to their own repositories.
Loading any number of resources and performing CPU-intensive processes, interrupting and weighing down my site's core experience.
The above are all examples of short-sightedness or malice on the part of third parties that you may consider yourself above, but the point is that, as one of your service's users, I shouldn't need to take that gamble. If I put your code in an iframe, I know it can happily do its own thing and not screw with my site or its users. I can also choose to delay its load and execution to a moment of my choosing (by dynamically loading the iframe at a moment of choice).
To argue the point in terms of your convenience rather than the users':
You don't have to worry about any of the trust issues associated with XSS. You can honestly tell your users they're not exposing themselves to any unnecessary worry by running your tool.
You don't have to make the extra effort to circumvent the effects of CSS and JS on your users' sites.
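As an illustration, the copy-paste snippet you hand to your customers could then be as simple as an iframe pointing at your application (the URL, size and options here are placeholders):

```html
<!-- Hypothetical embed snippet; URL, dimensions and options are up to your app -->
<iframe src="https://example.com/widget/chart?id=12345&theme=light"
        width="600" height="400" style="border:0;"
        loading="lazy"></iframe>
```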
I came across a problem where, due to internet connection issues, some of the required JavaScript files do not load. The body onload event gets fired, but the classes required for the page's logic are not present.
One more thing: the problem I want to fix is not in a website, it is in a web application that does not have any image or CSS files. Just imagine JavaScript code running in an iframe. Thus, I have problems only with scripts.
Here are my ideas for how to fix this; please comment/correct me if I'm wrong:
1. Obfuscate and combine files into one when pushing to live, so the overall size of the files is decreased and the task comes down to reliably loading 1 file.
2. Enable gzip compression on the server, so the resulting file size is much smaller.
3. Put proper cache headers on that file, so once loaded it will be cached in the browser/proxy server.
4. Finally, even with all of this, there could be a case where the file does not load. In that case I plan to load the file dynamically from JavaScript once the page has loaded, with "retry failed load" logic and a maximum of, for example, 5 attempts (a rough sketch of this idea follows below).
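A rough sketch of idea 4 (the bundle URL and the global it defines are hypothetical):

```js
// Retry loading a script a limited number of times if it fails.
function loadScriptWithRetry(src, attemptsLeft) {
  const s = document.createElement('script');
  s.src = src;
  s.onerror = () => {
    s.remove();
    if (attemptsLeft > 0) {
      loadScriptWithRetry(src, attemptsLeft - 1);
    }
  };
  document.head.appendChild(s);
}

window.addEventListener('load', () => {
  // Only retry if the normal <script> tag failed to define the bundle's global.
  if (typeof window.MyApp === 'undefined') { // hypothetical global
    loadScriptWithRetry('/js/app.bundle.js', 5); // placeholder URL
  }
});
```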
Any other ideas?
If the "retry" script fails to grab the required dependencies, redirect to a "no script" version of the site, if one is available. Or try to have a gracefully degrading page, so even if all steps fail, the site is still usable.
1 - Correct, but double check that JavaScript functions from different files don't conflict with each other.
2 - Correct - this should always be on.
3 - Correct, but the browser will still try to get an HTTP 304: Not Modified response from the server.
4 - Correct; consider falling back to a noscript version of the website after 1 or 2 failed attempts (5 is too many).
I don't personally think it's worth it to try to redo the logic that the browser already has. What if the images in your page don't load? What if the main page doesn't load? If the user has internet connection problems, they need to fix those problems. Lots of things will not work reliably until they do.
So, are you going to check that every image your page displays loads properly and if it didn't load, are you going to manually try to reload those too?
In my opinion, it might be worth it to put some inline JS to detect whether an external JS file didn't load (all you have to do is check for the existence of one global variable or top level function in the external JS file) and then just tell the user that they are having internet connection problems and they should fix those problems and then reload the site.
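For instance, as an inline snippet near the end of the page (the global variable name is hypothetical):

```html
<script>
  // Inline, so it runs even if the external files failed to load.
  window.addEventListener('load', () => {
    if (typeof window.MyApp === 'undefined') { // defined by the external file
      alert('Some scripts failed to load. Please check your internet connection and reload the page.');
    }
  });
</script>
```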
Your points are valid for script loading, but you must also consider the website usage.
If the scripts are not loading for whatever reason, the site must still be completely usable and navigable. The user experience comes first, before everything else.
The scripts should be loaded after the website interface has been loaded and visualized by the browsers, and should contain code to enhance user experience, not something you must absolutely rely on.
This way even when the connection is really slow, I will still be able to read content and choose to change page or go somewhere else, instead of having a blank page or a page with only the header displayed.
This to me is the most important point.
Also, are you sure about a retry approach? It causes more requests to the server. If the connection is slow or laggy then it may be best to not run the script at all, especially considering users may spend little time on the page and only need to quickly read the content. Also, if the connection is slow, how much time would you set for a timeout? What if the script is being downloaded when your timeout fires and you retry again? How can you effectively determine that amount of time, and the "slowness" of the connection?
EDIT
Have you tried out head.js? It is a plugin aimed at the fastest possible script loading; maybe it will help.
Yahoo's best practices state that putting JavaScript files at the bottom might make your pages load faster. What is the experience with this? What are the side effects, if any?
It has a couple of advantages:
Rendering begins sooner. The browser cannot layout elements it hasn't received yet. This avoids the "blank white screen" problem.
Defers connection limits. Usually your browser won't try to make more than a couple of connections to the same server at a time. If you have a whole queue of scripts waiting to be downloaded from a slow server, it can really wreck the user experience as your page appears to grind to a halt.
If you get a copy of Microsoft's Visual Round Trip Analyzer, you can profile exactly what is happening.
What I have seen more often than not is that most browsers STOP PIPELINING requests when they encounter a JavaScript file, and dedicate their entire connection to downloading that single file, rather than downloading in parallel.
The reason the pipelining is stopped, is to allow immediate inclusion of the script into the page. Historically, a lot of sites used to use document.write to add content, and downloading the script immediately allowed for a slightly more seamless experience.
This is most certainly the behavior that Yahoo is optimizing against. (I have seen the very same recommendation in MSDN magazine with the above explanation.)
It is important to note that IE 7+ and FF 3+ are less likely to exhibit the bad behavior. (Mostly since using document.write has fallen out of practice, and there are now better methods to pre-render content.)
As far as I can tell, it simply lets the browser begin rendering sooner.
One side effect would be that some scripts don't work at all if you put them at the end of the page. If there is a script running while the page is rendered (quite common for ad scripts, for example) that relies on a function in another script, that other script has to be loaded first.
Also, saying that the page loads faster is not the exact truth. What it really does is load the visual elements of the page earlier, so that it seems like your page is loading faster. The total time to load all components of the page is still the same.
Putting them at the bottom is a close equivalent to using the defer attribute. This is similar to how a browser cannot continue with page layout unless IMG tags have width and height information -- if the included JavaScript generates content, then the browser can't continue with layout until it knows what is there, and how big everything is.
So long as your javascript doesn't need to run before the onload event happens, you should be able to either place the script tags at the end, or use the defer attribute.
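For illustration, the two roughly equivalent options (with a placeholder script path):

```html
<!-- Option 1: in the head, with defer - downloads in parallel, executes after parsing -->
<head>
  <script src="/js/app.js" defer></script>
</head>

<!-- Option 2: at the end of the document, just before the closing body tag -->
<body>
  ... page content ...
  <script src="/js/app.js"></script>
</body>
```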
Your page should actually load faster. Browsers will open more than one connection to download, say, three images in parallel. On the other hand, <script> tags in most browsers cause the browser to block on that script executing. If it's a <script> tag with a src attribute, the browser will block to both download and execute. If you put your <script> tags at the end, you avoid this problem.
At the same time, this means that those pages don't have any JS functionality until they're done loading. A good exercise in accessibility is to ensure your site runs well enough to be usable until the JS loads. This ensures that the page will (a) work well for people with slow connections (b) work well for people who are impaired or use text-only browsers.
If you are developing for Firefox/Safari, you can always check with Firebug/the developer console, as they show the loading sequence of files.
There are a few points:
The page loads faster, since the JavaScript (internal or external) is at the bottom.
If you have not used the window onload event in JavaScript, a script will start executing as soon as it is parsed. Putting the script at the bottom ensures that it executes after the page content has loaded.
If the script is external (a separate file), it will be fetched after the HTML, images and other visual objects that form the page view.
If you are using Firefox, there is a plugin to check the performance.
Please check the Firefox add-ons site for this plugin.