Facebook Pixel slows down page load time by almost a full second - javascript

I'm starting to optimize and I have this problem with the Facebook tracking pixel killing my load times:
Waterfall Report
My page finishes around 1.1 seconds but the pixel doesn't finish until almost a full second later.
My pixel script is in the head, per the docs. Is there any way of speeding this up?

The script doesn't necessarily need to be in the head, even if Facebook recommends that as the default. As long as you don't explicitly call the fbq function anywhere before the code snippet is embedded, it should work fine.
The question, though, is how much of an improvement that actually brings - the embed code is already written in a way that the actual SDK gets loaded asynchronously.
The <script> block that embeds the SDK might be parser-blocking - and you can't fix that with async or defer, because those attributes have no effect on inline scripts. Putting that code into an external file, and then embedding that with async or defer, might help in that regard though.
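For example (a sketch, assuming nothing on the page calls fbq before the file has loaded; the file name and the pixel ID 123456789012345 are placeholders):

    <!-- in the HTML, instead of the inline snippet -->
    <script src="/js/fb-pixel.js" defer></script>

    // /js/fb-pixel.js - the unmodified embed snippet, plus the usual init calls
    !function(f,b,e,v,n,t,s){if(f.fbq)return;n=f.fbq=function(){n.callMethod?
    n.callMethod.apply(n,arguments):n.queue.push(arguments)};if(!f._fbq)f._fbq=n;
    n.push=n;n.loaded=!0;n.version='2.0';n.queue=[];t=b.createElement(e);t.async=!0;
    t.src=v;s=b.getElementsByTagName(e)[0];s.parentNode.insertBefore(t,s)}(window,
    document,'script','https://connect.facebook.net/en_US/fbevents.js');
    fbq('init', '123456789012345'); // placeholder pixel ID
    fbq('track', 'PageView');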
Another option would be to not use the script at all, but track all events you need to track using the image alternative.
https://developers.facebook.com/docs/facebook-pixel/advanced#installing-the-pixel-using-an-img-tag:
If you need to install the pixel using a lightweight implementation, you can install it with an <img> tag.
“Install” is a bit misleading here though - this will only track the specific event that is specified in the image URL parameters. Plus, you'd have to do all your tracking yourself; there is no automatic tracking any more. You could track the simple default page view event by embedding an image directly into your HTML code; if you need to track any events dynamically, you can use JavaScript to create a new Image object and assign it the appropriate URL, so that the browser fetches it in the background.
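For example (a sketch; 123456789012345 stands in for a real pixel ID):

    <!-- default page view, embedded directly in the HTML -->
    <img height="1" width="1" style="display:none" alt=""
      src="https://www.facebook.com/tr?id=123456789012345&ev=PageView&noscript=1"/>

    // dynamic events, fired from JavaScript when needed
    function trackPixelEvent(eventName) {
      var img = new Image(1, 1);
      img.src = 'https://www.facebook.com/tr?id=123456789012345' +
                '&ev=' + encodeURIComponent(eventName);
      // nothing is rendered; the request itself is the tracking call
    }
    trackPixelEvent('Lead'); // "Lead" is one of the standard events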
There are a few additional limitations mentioned in the docs; check whether you can live with those if you want to try this route.
If you need to take additional factors like GDPR compliance into account, you would have to handle that completely yourself as well if you use images for tracking. (The SDK has methods to suspend tracking based on a consent cookie: https://developers.facebook.com/docs/facebook-pixel/implementation/gdpr)
A third option might be to try and modify the code that embeds the SDK yourself.
The code snippet creates the fbq function if it does not already exist. Any subsequent calls to it push the event to track onto a “stack” that gets processed once the SDK file has loaded. So in theory, this could be rewritten in such a way that it doesn't insert the script node that loads the SDK immediately, but delays that using a timeout. That should still work the same way (in theory - I have not explicitly tested it): as long as the events get pushed onto the mentioned stack, it should not matter when or how the SDK loads. (Too big a timeout could mean users move on to other pages before any tracking happens though.)
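A sketch of that rewrite (untested, as said above; the stub is the same one the official snippet creates, only the script insertion is delayed):

    // 1. create the fbq stub so that calls get queued, as in the official snippet
    !function(f){var n;if(f.fbq)return;n=f.fbq=function(){n.callMethod?
    n.callMethod.apply(n,arguments):n.queue.push(arguments)};if(!f._fbq)f._fbq=n;
    n.push=n;n.loaded=!0;n.version='2.0';n.queue=[];}(window);

    fbq('init', '123456789012345'); // placeholder pixel ID - queued, not sent yet
    fbq('track', 'PageView');

    // 2. load the actual SDK later; it flushes the queue once it arrives
    setTimeout(function () {
      var t = document.createElement('script');
      t.async = true;
      t.src = 'https://connect.facebook.net/en_US/fbevents.js';
      document.head.appendChild(t);
    }, 1500); // too long a delay and users may leave before anything is tracked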
Last but not least, what I would call the “good karma” option - removing the tracking altogether, along with all the sniffing, profiling and general privacy violation that comes with it :-) That is likely not an option for everyone, and if you are running ad campaigns on Facebook or similar, it might not be one at all. But in cases where this is purely for “I want to get an idea of what users are doing on my site” purposes, a local Matomo installation or similar might serve just as well, if not better.

I've found that loading the pixel using Google Tag Manager works for my WordPress website.
Before Pixel, my speed was 788ms
After adding the pixel it was 2.12s
Then, adding it through GTM it was 970ms
Source and more details: https://www.8webdesign.com.au/facebook-pixel-making-website-slower/

Check this article: https://www.shaytoder.com/improving-page-speed-when-using-facebook-pixel/
I will put the script in the footer, so it will not affect “First Paint”, but more importantly, I will add the wrapper lines (marked in yellow in the article) around the “script” part of the pixel code.
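The wrapper is essentially a delayed initialisation; a sketch of the pattern (the article's exact lines and delay value may differ):

    window.addEventListener('load', function () {
      setTimeout(function () {
        // ... the original "script" part of the pixel code goes here ...
      }, 3000); // illustrative delay
    });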

I know you might think it has no relation, but have you tried enabling lazy loading on your website? Adding it in the footer or using the Facebook for WordPress plugin also helps.

Related

Lighthouse: Remove unused Javascript, but the Javascript is used

I'm building a blogging plugin that enables commenting on specific pages. The way it works at the moment is that you include a js script in your html page, which then triggers at load time and adds a comment box to the page.
It's all running properly, however when running Lighthouse reports I still get a "Remove unused Javascript".
I assume this is happening because the code isn't immediately used, but rather initiated once the page has fully loaded?
Any idea how I could fix this?
"Remove Unused JavaScript" isn't telling you that the script isn't used, it is telling you that within that script you have JavaScript that isn't needed for page load.
What it is encouraging you to do is "code splitting", which means you serve JS essential for the initial page content (known as above the fold content) first and then later send any other JS that is needed for ongoing usage on the site.
If you look at the report and open the entry you will see it has a value that shows how much of the code base is not needed initially.
If the script is small (less than 5kb), which I would imagine it is if all it does is create a comment box, then simply ignore this point. If it is larger (and you can't easily make it smaller), split it into "initialisation" code and "usage" code (the rest of the code), and serve the initialisation code early and the rest after all essential items have loaded, or on intent.
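A sketch of that split (the file names and the mountEditor export are hypothetical):

    // comments-init.js - tiny, loaded early; only wires up the entry point
    document.querySelectorAll('.comment-box-placeholder').forEach(function (el) {
      el.addEventListener('click', function () {
        // the heavy editor bundle is fetched only on demand
        import('./comments-editor.js').then(function (mod) {
          mod.mountEditor(el); // hypothetical export
        });
      }, { once: true });
    });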
It does not contribute to your score directly and is there purely to indicate best practices and highlight potential things that are slowing your site down.
Additional
From the comments, the author has indicated the library is rather large at 70kb, due to the fact it includes a WYSIWYG editor etc.
If you are trying to load a large library but also be conscious about performance the trick is to load the library either "on intent" or after critical files have loaded.
There are several ways to achieve this.
On intent
If you have a complex item to appear "above the fold" and you want high performance scores you need to make a trade-off.
One way to do this is, instead of having the component initialised immediately, to defer the initialisation of the library / component until someone needs to use it.
For example you could have a button on the page that says "leave a comment" (or whatever is appropriate) that is linked to a function that only loads the library once it is clicked.
Obviously the trade-off here is that you will have a slight delay loading and initialising the library (but a simple load spinner would suffice as even a 70kb library should only take a second or so to load even on a slow connection).
You can even fetch the script once someone's mouse cursor hovers over the button; those few milliseconds of head start can add up to a perceptible difference.
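A sketch of both ideas (the button id, the script URL and initCommentBox are placeholders):

    function loadScript(src) {
      return new Promise(function (resolve, reject) {
        var s = document.createElement('script');
        s.src = src;
        s.onload = resolve;
        s.onerror = reject;
        document.head.appendChild(s);
      });
    }

    var btn = document.getElementById('leave-a-comment');
    var pending = null;
    function loadCommentLibrary() {
      pending = pending || loadScript('/js/comment-widget.js'); // fetch only once
      return pending;
    }

    // start fetching on hover, so the library is often ready by click time
    btn.addEventListener('mouseenter', loadCommentLibrary, { once: true });
    btn.addEventListener('click', function () {
      // show a loading spinner here while the library finishes downloading
      loadCommentLibrary().then(function () {
        window.initCommentBox(); // hypothetical init function of the library
      });
    });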
Deferred
We obviously have the async and defer attributes.
The problem is that both mean you are downloading the library alongside critical assets; the difference is only in when the scripts are executed.
What you can do instead is use a function that listens for the page load event (or listens for all critical assets being loaded if you have a way of identifying them) and then dynamically adds the script to the page.
This way it does not take up valuable network resources at a time when other items that are essential to page load need that bandwidth / network request slot.
Obviously the trade-off here is that the library will be ready slightly later than the main "above the fold" content. Yet again, a simple loading spinner should be sufficient to fix this problem.
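A minimal sketch of that (the script URL is a placeholder):

    window.addEventListener('load', function () {
      // runs after all critical assets have loaded, so no bandwidth competition
      var s = document.createElement('script');
      s.src = '/js/comment-widget.js';
      s.async = true;
      document.body.appendChild(s);
    });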

Google Tag Manager Script reducing page performance

When I place the Google Tag Manager script in the head of my website, as strongly recommended by Google, the script delays the onload event, increasing the page's Time To Interactive.
I worked around this issue by injecting the script after 3000 ms using the setTimeout() function.
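Roughly like this (GTM-XXXXXX stands in for my real container ID; the inner function is the standard GTM snippet):

    setTimeout(function () {
      (function (w, d, s, l, i) {
        w[l] = w[l] || [];
        w[l].push({ 'gtm.start': new Date().getTime(), event: 'gtm.js' });
        var f = d.getElementsByTagName(s)[0],
            j = d.createElement(s);
        j.async = true;
        j.src = 'https://www.googletagmanager.com/gtm.js?id=' + i;
        f.parentNode.insertBefore(j, f);
      })(window, document, 'script', 'dataLayer', 'GTM-XXXXXX');
    }, 3000);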
This has improved the performance of the website drastically.
But this is not recommended by Google. Since I know the problem lies with the GTM script, is there a workaround?
I'm surprised it's impacting your site performance that drastically, since it should be async (assuming you used the recommended script tag).
Are you using a dataLayer? If not, this should help reduce redundancies and improve performance - you can use custom event triggers that wait for page load before sending the layer.
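For example (a sketch; the event name is whatever your Custom Event trigger in GTM is set to match):

    window.dataLayer = window.dataLayer || [];
    window.addEventListener('load', function () {
      // tags with a matching Custom Event trigger will only fire after load
      window.dataLayer.push({ event: 'page_fully_loaded' }); // illustrative name
    });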
It could also be one or more of your tags causing the delay, you may want to pause individual tags and see if you can find the culprit. Keep in mind that if you are testing in preview mode, page load will be slower.
I would recommend using a caching plugin like WP Rocket; there you can set how scripts are loaded in order to optimize your page speed :)

Executing code immediately after navigation: Browser plugin?

I am building a tool which uses (dynamically inserted) JavaScript to modify webpages. Any webpage.
The idea is to allow a user to record a series of changes to an existing webpage like google.com (for the sake of example, suppose a change is to apply a 10-pixel solid black border to all <img> tags - this change can obviously be encoded as a short and sweet snippet of jQuery), and the tool generates a link (or identifier) that contains this metadata and the URL representing the "starting point", if you will (in this case google.com).
Now the problem I've run into is the entire Same-Origin security policy, whose purpose is to expressly deny the exact kind of thing that it seems like I need to do.
What I need to do is essentially navigate to a particular site, and then execute javascript in the context of that site. Neither I (the author of the tool) nor the user with whom I share my script necessarily have control over the site, so in theory the security model if implemented properly should prevent this concept from working.
Because of this I cannot have a single clickable link that kicks off the process of running my code on some site. It totally makes sense, too: it would make it trivial for an attacker site to send a disguised clickable link that runs code acting as me on any site they want.
But, the way to get around it is to tell the recipient to do a single additional step. First they open the URL of the site just like normal, then they paste a bit of javascript:(function(){.....})(); into the URL/omni bar. That is (AFAICT) completely legitimate and should be permissible because the user understands that this script is being executed. Whether or not it should be allowed to run JS so easily at this point is more or less irrelevant, as it basically just works everywhere now.
This isn't too bad, but I think the user experience suffers unnecessarily. It looks like a native app would be necessary to do any better than pasting the JS into the URL bar on an iOS device, for example, but on a plugin-accepting full browser it seems like a plugin can achieve what I want.
Which is: a navigation to an arbitrary URL followed by code execution (this code originating from an authorized source) with one click.
But I'm not sure where to start. What API could provide me this ability? I am hoping I can get away with Greasemonkey-type scripting (as Greasemonkey compatible plugins are available for pretty much all the good browsers), but I can't tell if there is enough power available.
I am actually still a little unsure about security related problems with this. I used to have a huge paragraph here but it all boils down to "social engineering".
This kind of thing is generally done with bookmarklets.
On your website featuring your script, create a link that has href="javascript:(function(){/* ... */})()". Then a user could simply drag and drop that link into his favourites (bookmark it). And use it as button in a favourites bar.
Your bookmarklet could contain your script directly, or a simple loader that injects a <script src="http://mywebsite.com/script.js"> tag into the document; this way you can update your script and "distribute" it directly to all users.
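A sketch of such a loader (the URL is a placeholder; serve it over https to avoid mixed-content blocking):

    javascript:(function () {
      var s = document.createElement('script');
      s.src = 'https://mywebsite.com/script.js'; // placeholder URL
      document.body.appendChild(s);
    })();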
Security is always about knowledge. Or to put it the other way around: Not knowing something makes you feel insecure.
There is no secure way to do what you want, which is why web browsers forbid it by default. There are workarounds (like pasting the snippet into the URL bar, as you explained above), but all of them are only secure as long as the user knows what she is doing.
Those are the social implications. Now for the technical solutions:
You can try a bookmarklet
You can use a browser plugin like Greasemonkey
Both allow running arbitrary JavaScript. The former needs explicit permission from the user each time; the latter does it automatically.
Of course, if you move the core of the functionality to a remote place, it would be hard for even knowledgeable users like me to understand and trust what is going on.
That is, when the meat of the functionality isn't in the bookmarklet or the Greasemonkey script and you instead add a <script> tag with a remote URL, it becomes harder to make sure your script doesn't do something "odd". For example, you could return a different script when I try to download the JavaScript without using your bookmarklet.

What is better: using an iframe or something like jQuery to load an HTML file from an external website?

I want my customers to create their own HTML on my web application and then copy and paste my code into their websites, showing the result in the position they want, with customized size and other options. The output HTML of my web application contains HTML tags and JavaScript code (for example, a web chart created with JavaScript).
I found two ways to do this: one using an iframe, and the other using jQuery's .load().
What is better and safer? Is there any other way?
iframe is better - if you are running JavaScript, then that script shouldn't execute in the same context as your users' sites: you are asking for a level of trust here that the user shouldn't need to accede to, and your code is all nicely sandboxed so you don't have to worry about the parent document's styles and scripts.
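For example, the snippet you hand to your customers could be as simple as this (the domain, path and parameters are placeholders):

    <iframe src="https://example.com/widget/chart?id=abc123"
            width="600" height="400" style="border:0"
            loading="lazy"></iframe>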
As a front-end web developer and webmaster I've often taken the decision myself to sandbox third-party code in iframes. Below are some of the reasons I've done so:
Scripts would play with the DOM of the document. Once, a third-party widget took it upon itself to introduce buggy and performance-intensive PNG-fix hacks for IE across every PNG used in img tags and CSS across our site.
Many scripts overwrite the global onload event, robbing other scripts of their initialisation trigger.
Reading local session info and sending it back to their own repositories.
Loading any number of resources and performing CPU-intensive processes, interrupting and weighing down my site's core experience.
The above are all examples of short-sightedness or malice on the part of third parties - you may see yourself as above that - but the point is that, as one of your service's users, I shouldn't need to take a gamble. If I put your code in an iframe, I know it can happily do its own thing and not screw with my site or its users. I can also choose to delay loading and execution to a moment of my choosing (by dynamically loading the iframe at that moment).
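That delayed load is only a few lines (a sketch; the src and the container id are placeholders):

    window.addEventListener('load', function () {
      var f = document.createElement('iframe');
      f.src = 'https://example.com/widget/chart?id=abc123';
      f.width = '600';
      f.height = '400';
      f.style.border = '0';
      document.getElementById('chart-slot').appendChild(f); // hypothetical slot
    });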
To argue the point in terms of your convenience rather than the users':
You don't have to worry about any of the trust issues associated with XSS. You can honestly tell your users they're not exposing themselves to any unnecessary worry by running your tool.
You don't have to make the extra effort to circumvent the effects of CSS and JS on your users' sites.

Recording web page events / ajax calls/results and so on

I'm mostly looking for directions here.
I'm looking to record events that happen within a web page. Somewhat similar to your average "Macro recorder", with the difference that I couldn't care less about exact cursor movement or keyboard input. The kinds of events I would like to record are modification of input fields, hovers, following links, submitting forms, scripts that are launched, ajax calls, ajax results and so on.
I've been thinking of using jQuery to build a little app for this and inserting it on whichever pages I would like to test it on (or, more likely, loading the pages into an iframe or something). However, I cannot modify the scripts on these pages to work with this, so it has to work regardless of the content.
So I guess my first question is: Can this be done? Especially in regards to ajax calls and various script execution.
If it can, how would I go about the ajax/script part of it? If it can't, what language should I look into for this task?
Also: maybe there's something out there that can already do what I'm looking for?
Thanks in advance
Two ways I can think of are:
Use an add-on (Firefox) or an extension (Chrome) to inject a script tag that loads jQuery and your jQuery app
Set up a proxy (you can use node.js or some other proxy server) and inject the script tags in your proxy; be sure to adjust the Content-Length header (tricky on https sites).
A much simpler and faster option, where you don't need to capture onload, is to write a JavaScript snippet that loads jQuery and your app by injecting script tags; make that a bookmarklet and, after the page loads, hit the bookmarklet.
Came across this post when looking for a proxy for tag injection.
Yes, it's quite possible to trap (nearly) all the function and method calls made by a browser via code in JavaScript loaded in the page - but usually a JavaScript debugger (Firebug?) or HTTP debugger (TamperData / Fiddler) will give you most of what you require with a lot less effort.
OTOH, if you really want to do this with bulk data / arbitrary sites, then (based on what I've seen so far) you could use a Squid proxy with an ICAP server / eCAP module (not trivial - it will involve a significant amount of programming) or implement the JavaScript via Greasemonkey as a browser extension.
Just to clarify: so far I've worked out how to catch function and method calls (including constructor calls) and proxy them within my own code, but not yet how to deal with processing triggered by directly setting a property (e.g. img.src='http://hackers-r-us.com'), nor how to handle ActiveX neatly.
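To illustrate that kind of trapping for the ajax part specifically, wrapping the XMLHttpRequest methods is representative (a sketch; the console logging is a stand-in for whatever recording you do):

    (function () {
      var origOpen = XMLHttpRequest.prototype.open;
      var origSend = XMLHttpRequest.prototype.send;
      XMLHttpRequest.prototype.open = function (method, url) {
        this._recorded = { method: method, url: url }; // stash for send()
        return origOpen.apply(this, arguments);
      };
      XMLHttpRequest.prototype.send = function () {
        var info = this._recorded || {};
        this.addEventListener('load', function () {
          // record the ajax call and its result however you like
          console.log('ajax', info.method, info.url, this.status);
        });
        return origSend.apply(this, arguments);
      };
    })();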
