So I'm injecting some HTML into the page (an extension written with Angular.js), and on pages with Google Maps frames, like Airbnb, the loading slows down. I have no idea how I'm affecting that. Any ideas?
When you load JavaScript from a third party you should do it asynchronously. You might want to load your own scripts asynchronously too, but for this article let's focus on third parties.
There are two reasons for this:
If the third-party goes down or is slow, your page won't be held up
trying to load that resource.
It can speed up page loads.
At Wufoo, we just switched over to an asynchronous embed snippet. Users who build forms with Wufoo and want to embed them on their site are now recommended to use it. We did it for exactly those reasons above. It's the responsible thing to do for a web service that asks people to link to resources on that service's site.
There is a little terminology involved here that will help us understand the umbrella "asynchronous" term.
"Parser blocking" - The browser reads your HTML and when it comes to a <script src="script.js"></script> it downloads that entire resource before moving on with the parsing. This definitely slows down page loads, especially if the script is in the head or above any other visual elements. This is true in older browsers as well as modern browsers if you don't use the async attribute (more on that later). From the MDN docs: "In older browsers that don't support the async attribute, parser-inserted scripts block the parser..."
To prevent problematic parser blocking, scripts can be "script inserted" (i.e. insert another script with JavaScript) which then forces them to execute asynchronously (except in Opera or pre 4.0 Firefox).
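For example, a minimal script-inserted loader looks something like this (the third-party URL is a placeholder):

(function () {
  // Because this script element is created by JavaScript instead of the HTML
  // parser, it is downloaded and executed asynchronously and does not block parsing.
  var s = document.createElement('script');
  s.src = 'https://third-party.example.com/widget.js'; // hypothetical third-party script
  s.async = true; // script-inserted scripts are async by default in most browsers; being explicit doesn't hurt
  var first = document.getElementsByTagName('script')[0];
  first.parentNode.insertBefore(s, first);
})();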
"Resource blocking" - While a script is being downloaded, it can prevent other resources from downloading at the same time as it. IE 6 and 7 do this, only allowing one script to be downloaded at a time and nothing else. IE 8 and Safari 4 allow multiple scripts to download in parallel, but block any other resources (reference).
Ideally we fight against both of these problems and speed up page loading (both actual and perceived).
https://css-tricks.com/thinking-async/
First some backstory:
We have a website that includes a Google Map in the usual way:
<script src="https://maps.googleapis.com/maps/api/js?v=...."></script>
Then there is some of our JavaScript code that initializes the map. Suddenly, yesterday, pages started to load but then froze up entirely. In Chrome this resulted in having to force quit and restart the browser. Firefox was smarter and allowed the user to stop script execution.
Now after some debugging, I found out that a previous developer had included the experimental version of the Google Maps API:
https://maps.googleapis.com/maps/api/js?v=3.exp
So it's likely that something has changed on Google's servers (which is completely understandable). This has uncovered a bug on our end, and both in combination caused the script to hang and freeze up the website and the browser.
Now OK, the bug is found and fixed, no big harm done.
And now the actual question:
Is it possible to somehow sandbox these external script references so that they cannot crash my main site? I am loading a decent amount of external JavaScript files (tracking, analytics, maps, social) from their own servers.
However, such code could change at any time and could have bugs that freeze my site. How can I protect my site? Is there a way to maybe define a maximum allowable execution time?
I'm open to all kinds of suggestions.
It actually doesn't matter where the scripts are coming from - whether an external source or your own server. Either way, they are run in the client's browser. And that makes it quite difficult to achieve your desired sandbox behavior.
You can get a sandbox inside your DOM by using an iframe with the sandbox attribute. That way the content of the iframe is independent from the DOM of your actual website, and you can include scripts independently as well. But this is beneficial mainly for security. I am not sure how it would hold up in terms of overall stability when one script has a bug like an endless loop or similar. But imho it is worth a try.
For further explanation see: https://www.html5rocks.com/en/tutorials/security/sandboxed-iframes/
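As a rough sketch (the URL and file name are made up), isolating a third-party widget could look like this:

// Create a sandboxed iframe that hosts the external script. allow-scripts lets it
// run JavaScript; omitting allow-same-origin keeps its DOM, cookies and storage
// isolated from the host page.
var frame = document.createElement('iframe');
frame.src = 'https://example.com/widget.html'; // hypothetical page that loads the external script
frame.setAttribute('sandbox', 'allow-scripts');
document.body.appendChild(frame);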
I am aware that both these APIs are used to inject JavaScript into the webpage. Is there any difference between loadSubScript and loadFrameScript in Firefox extension development? In which situation would you use them?
I assume that you are asking about mozIJSSubscriptLoader.loadSubScript() and nsIChromeFrameMessageManager.loadFrameScript(). These are two entirely different mechanisms with the only similarity being that both can load and execute code.
mozIJSSubscriptLoader isn't meant to load code into web pages - its primary goal is to load parts of your extension dynamically. This is a very old mechanism that even predates JavaScript code modules.
The goal of loadFrameScript(), however, is to load content scripts; it was originally introduced to support multi-process setups (the e10s project). It loads scripts that run with the privileges of the web page, in the context of the web page. No direct interaction with the code that loaded them is possible, only messaging.
Most extensions don't have any reason to use loadFrameScript(); its target is remote debugging.
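For reference, in a classic (pre-WebExtensions) extension the two calls look roughly like this; the chrome:// URLs and the message name are placeholders:

// loadSubScript: pull another part of your own extension into the current scope.
Components.utils.import("resource://gre/modules/Services.jsm");
Services.scriptloader.loadSubScript("chrome://myextension/content/helpers.js", this);

// loadFrameScript: run a script in the content context. "browser" is a <xul:browser>
// element; the second argument asks for the script to also be loaded into frames
// created later.
browser.messageManager.loadFrameScript("chrome://myextension/content/frame-script.js", true);

// Inside frame-script.js there is no direct access to the code that loaded it,
// only messaging, e.g.:
// sendAsyncMessage("myextension:hello", { url: content.location.href });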
I'm currently doing some optimization work on a large web project. I'm already doing JavaScript file combining, minification and compression. But I'm confused on one point.
For a number of non-technical reasons, my users are about 50% each IE7 and IE8. After doing some research, I'm getting the impression that IE7 loads the JavaScript files sequentially and IE8 loads them in parallel. I understand that going forward that this will not be an issue with more modern browsers (IE9+, FF, Chrome, etc).
Is this an accurate statement? If yes, then what is best practice for loading the files?
That statement is correct, but you should remember that even modern browsers will make only a limited number of connections to the same server. So when your page, scripts, CSS and images are all on the same server, the browser may load only 2 or 4 of those at a time. Therefore it may be a good idea to add a subdomain, or a different domain, for scripts to trick the browser and make it load the scripts alongside the images.
An even simpler solution is to merge all scripts into one script. You can do this 'on the fly' or cache the result. You can even minify the scripts (which means comments and whitespace are stripped and variable names are shortened). You shouldn't minify and combine the original source files themselves, but you can cache the combined/minified output so it won't need to be regenerated with each request.
If you do this, you reduce traffic, and your browser will only need one request for the file, eliminating the overhead of multiple sequential requests.
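As a sketch of the 'on the fly' approach (assuming a Node.js server; the file names are made up), you could concatenate once and cache the result:

var fs = require('fs');
var http = require('http');

var files = ['jquery.js', 'plugins.js', 'app.js']; // hypothetical source scripts
var cached = null;

http.createServer(function (req, res) {
  if (req.url === '/combined.js') {
    if (!cached) {
      // Concatenate the scripts once; later requests get the cached result.
      cached = files.map(function (f) { return fs.readFileSync(f, 'utf8'); }).join(';\n');
    }
    res.writeHead(200, { 'Content-Type': 'application/javascript' });
    res.end(cached);
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);

In practice you would plug an existing minifier or build tool into that step rather than hand-rolling it, but the idea is the same: build once, serve many times.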
See this MSDN blog article which shows some other tricks for script loading.
Also, I know CSS and image files can be downloaded in parallel, but can JavaScript files be downloaded in parallel? Thanks.
When the HTML parser sees a script tag, barring your using a couple of special attributes, it comes to a screeching halt, downloads the JavaScript file, and then hands the contents off to the JavaScript interpreter. Consequently, script files cannot be downloaded in parallel either with each other or with the main HTML...unless you use the defer or async attributes and the browser supports them. Details in this other answer here on StackOverflow.
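Roughly (the file names are placeholders), the two attributes behave like this:

<!-- async: downloads in parallel with parsing and executes as soon as it arrives;
     execution order between async scripts is not guaranteed -->
<script async src="analytics.js"></script>

<!-- defer: downloads in parallel with parsing, executes in document order after parsing finishes -->
<script defer src="app.js"></script>

<!-- neither attribute: the parser stops here until the script is downloaded and executed -->
<script src="blocking.js"></script>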
Note that even if you have multiple resources that can be downloaded in parallel, there's no guarantee that they will be. Browsers (and servers) put limits on the number of simultaneous connections between the same two endpoints. (With modern browsers the limit is usually at least four — up from two — but browsers may dial things down on dial-up connections and on mobile devices; and of course, they're entirely free to only use a single connection, it's implementation-specific).
Scripts are loaded in parallel in modern browsers and IE8. Firefox and Safari will download as many as six scripts at once. IE8 however seems to not have a limit set - my tests showed as many as 18 scripts load at once.
However, whether it's one script or six, everything else is blocked. So the browser will not load one script and one style sheet at the same time.
You can get a good idea of the flow by looking at Firebug's Net tab. The bars you see on the right form a waterfall chart, which gives you a reasonably accurate representation of when things load and how long they take.
You can learn more about page loading in this presentation: Optimize Your Website to Load Faster The stuff on resource loading starts on slide 33.
I'm told that document.write should be avoided in web pages since it hurts page performance. But what is the exact reason?
document.write() itself doesn't seem to be very harmful to page performance in most browsers. In fact, I ran some tests at DHTML Kitchen and found that in Firefox, Opera and Chrome, document.write() was actually faster on the first load, and comparable in speed to standard HTML on subsequent refreshes. Internet Explorer 8 was the exception, but it was actually faster than the other browsers at rendering the HTML (surprisingly).
As Guffa's answer points out, and what I was building up to, the actual performance issues come from the inline scripts themselves. Content rendering can only continue when an inline script has finished executing, so if you have a complex routine inside an inline script you can noticeably halt your page's loading for the end user. That's why waiting for onload/DOMReady and using DOM manipulation is preferred.
It's especially unwise to use document.write() after the document has finished loading. In most browsers, using document.write() after document load also implies document.open(), which will wipe the current HTML off the screen and create a new document.
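A quick way to see that behavior (the text is just an example):

<p>Some existing content.</p>
<script>
  // While the page is still being parsed, this writes in place:
  document.write('<p>Written during parsing.</p>');

  window.onload = function () {
    // After the document has loaded, this implies document.open(),
    // which clears the existing page before writing:
    document.write('<p>Everything above is gone now.</p>');
  };
</script>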
That doesn't mean that document.write() doesn't have its uses, it's just that most developers use it for the wrong reasons. Real problems with document.write() include:
You can't use it in documents served as XHTML (for browsers that correctly parse XHTML as XHTML).
Overwrites the entire page when used after DOM parsing has completed.
Adds content to the page that isn't accessible to browsers with JavaScript disabled (although <noscript> is sometimes a valid workaround here).
More difficult to maintain than static HTML.
If you have scripts that run in the middle of the page, the browser has to wait for the script to finish before it can continue to parse the rest of the page.
To make your page appear fast, you want the browser to parse the page as soon as possible so that it can be displayed to the user, and after that you can apply the extra functionality that your scripts add.
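For example, instead of writing content from an inline script, you can wait until the DOM is ready and insert it with DOM methods (the text is a placeholder; in older IE you would fall back to window.onload instead of DOMContentLoaded):

document.addEventListener('DOMContentLoaded', function () {
  // Runs after parsing has finished, so the page can render first.
  var p = document.createElement('p');
  p.textContent = 'mystuff';
  document.body.appendChild(p);
});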
I think there are several reasons why it should be avoided.
But what you mean is: if you have somewhere in your HTML code a
<script>
document.write('mystuff')
</script>
one of the problems is that before the browser can display your website, it has to load the JavaScript interpreter and execute that script.
If you start your JavaScript only from body.onload, then the browser can display the whole website to the user, and run your scripts afterwards...
Therefore the perceived loading time is faster :-)