Page Specific JavaScript using Content Security Policy (CSP) [duplicate]

This question already has answers here:
Best way to execute js only on specific page
(5 answers)
Closed 5 years ago.
I want to use Content Security Policy (CSP) across my entire site. This requires all JavaScript to be in separate files. I have shared JavaScript used by all pages but there is also page specific JavaScript that I only want to run for a specific page. What is the best way to handle page specific JavaScript for best performance?
Two ways I can think of to work around this problem are to use page-specific JavaScript bundles, or a single JavaScript bundle with a switch statement that executes page-specific content.

There are lots of ways to execute page-specific JavaScript.
Option 1 (check via class)
Set a class on the body tag
<body class="PageClass">
and then check via jQuery
$(function() {
    if ($('body').hasClass('PageClass')) {
        // your code
    }
});
Option 2 (check via switch case)
var windowLoc = window.location.pathname; // no need to wrap location in jQuery for this
switch (windowLoc) {
    case "/info.php":
        // code here
        break;
    case "/alert.php":
        // code here
        break;
}
Option 3 (check via function)
Wrap all the page-specific script in a function
function homepage() {
alert('homepage code executed');
}
and then call the function on the specific page
homepage();

Sorry, I know this ended up being a long read, but it'll be worth it, as you'll be able to make the choice that's right for your site. For a tl;dr, read the first sentence of each paragraph.
First of all, no matter which route you choose, you should put all of the JS common to every page in the same file to take maximum advantage of caching. That's just common sense. In all cases, I also assume you're using a competent minifier, since that will make a bigger difference than anything else; packagers exist too. Google is your friend if you need either of these.
For the page specific JS, you should decide whether it's most important to have your first page load (the user's first contact with your site) be 'fast', or if it's most important to have the following page loads (the user's first contact with any given page) be 'fast'. Modern browser caching is quite good now, so you can rely on the browser loading from cache whenever it can. In general, if it's most important for the first page load to be fast, then create separate JS files (this way, the user isn't stuck downloading 10 MB of data before they even get to your site). If not, then put all the JS in the same file, keeping in mind that if one page has significantly more JS than others, it will adversely affect the load time of every page on your site. Note that this extra load time can be mitigated with the use of async or defer tags, more on that later.
Consider the case where page A has 5 KB of JS and page B has 5 MB of JS. If you put both scripts in the same file, page A will load more slowly (since it needs to load ~5 MB of JS) but page B will load much faster due to the JS file being cached already. If you keep them separate, page A will load much faster than page B, but there will be an average speed decrease compared to the first case. If one page doesn't have significantly more JS than another, use separate files. You'll encounter much better average load time since the "savings" of loading the big file ahead of time will be greatly diminished (you'll also avoid the issue mentioned below).
Another consideration is whether one of the JS files will change often, as this will invalidate the cached version and require the browser to redownload it. If you put all your JS together and only one of the files is volatile (especially if it's a page not often visited, such as a registration page), the end user will face a higher average load time than if you keep them separate. Stack Overflow themselves took an interesting approach to this. It appears they have a function to invalidate the cache of JS unrelated to the page and load it (if necessary) when the JS on the page loads from the cache to save loading time later.
One more thing! Beyond all this, you should also decide whether or not you should use async or defer in your script tags since you're migrating to fully "external" JS.
async allows the page to load and display to the user before the JS is finished downloading. This is a great way to hide the download of a big JS file if you decide to go the "one file to rule them all" route. However, you might also find the JS needs to be downloaded and executed in order for the page to display properly (as is the case when not using async or defer).
As a result, it might be a good idea to use a hybrid of the two suggestions: split out the JS each page needs in order to display correctly into individual files (one per page), and put all the JS that isn't render-critical into one big file loaded through an async or defer tag. defer lets the browser load it in the background after the page is displayed to the user.
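For illustration, a rough sketch of what those tags look like (the file names here are made up):
<!-- JS this page needs in order to display correctly: included normally, so it runs before render -->
<script src="/js/page-home.js"></script>
<!-- the one big shared file: defer downloads it in parallel and runs it only after parsing -->
<script src="/js/common.js" defer></script>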
Ultimately, only you can make the decisions that are right for your app. There's no one magic option that will work in all cases, but that's the reality of software design/engineering. I hope I've made the process clearer for you so you can arrive at the right choice more easily, though.

Related

Javascript parsing time vs http request

I'm working on a large project which is extensible with modules. Every module can have its own JavaScript file, which may be needed on only one page, on multiple pages that use the module, or even on all pages if it is a global extension.
Right now I'm combining all .js files into one file whenever they get updated or a new module gets installed. The client only has to load one "big" .js file, but must parse it on every page. Let's assume someone has installed a lot of modules and the .js file grows to 1 MB-2 MB. Does it make sense to continue down this route, or should I include each .js file only when it is needed?
That would mean maybe 10-15 more HTTP requests per page. At the same time, the parsing time for the .js files would be reduced, since only a small portion has to be loaded for each page, and the browser wouldn't try to execute JS code that isn't even required (or even executable) on the current page.
Comparing both scenarios is rather difficult for me since I would have to rewrite a lot of code. Before I continue, I would like to know if someone has encountered a similar problem and how they solved it. My biggest concern is that the parsing time of the JS files grows too much. Usually network latency is the biggest concern, but I've never had to deal with so many possible modules/extensions -> JS files.
If these 2 conditions are true, then it doesn't really matter which path you take as long as you do the Requirement (below).
Condition 1:
The JavaScript files are being run inside of a standard browser, meaning they are not going to be run inside of an Apple iOS UIWebView app (HTML5 iPhone/iPad app)
Condition 2:
The initial page load time does not matter so much. In other words, this is more of a web application than a web page. So users log in each day, stay logged in for a long time, do lots of stuff... log out... come back the next day...
Requirement:
Put the javascript file(s), css files and all images under a /cache directory on the web server. Tell the web server to send the max-age of 1 year in the header (only for this directory and sub-dirs). Then once the browser downloads the file, it will never again waste a round trip to the web server asking if it has the most recent version.
Then you will need to implement javascript versioning, usually this is done by adding "?jsver=1" in the js include line. Then increment the version with each change.
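As a sketch of that setup (assuming a Node/Express server; the paths and version number are placeholders):
// serve everything under /cache with a one-year max-age so the browser
// never wastes a round trip re-validating it
var express = require('express');
var app = express();
app.use('/cache', express.static('public/cache', { maxAge: '365d' }));
app.listen(8080);
The include line then looks like <script src="/cache/site.js?jsver=2"></script>, with jsver bumped on every change so the new URL bypasses the stale cached copy.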
Use the Chrome inspector and make sure this is set up correctly. After the first request, the browser never sends an ETag or asks the web server for the file again for 1 year. (Hard reloads will download the file again... so test using links and a standard navigation path a user would normally take.) Also watch the web server log to see which requests are being served.
Good browsers will compile the JavaScript to machine code and the compiled code will sit in the browser's cache waiting for execution. That's why Condition #1 is important. Today, the only browser which will not JIT-compile JS code is Apple's Safari inside of UIWebView, which only happens if you are running HTML/JS inside of an Apple app (and the app is downloaded from the App Store).
Hope this makes sense. I've done these things and have reduced network round trips considerably. Read up on ETags and how browsers make round trips to determine whether they are using the current version of JS/CSS/images.
On the other hand, if you're building a web site and you want to optimize for the first time visitor, then less is better. Only have the browser download what is absolutely needed for the first page view.
You really REALLY should be using on-demand JavaScript. Only load what 90% of users will use. For things most people won't use, keep them separate and load them on demand. Also, you should seriously reconsider what you're doing if you've got upwards of two megabytes of JavaScript after compression.
function ondemand(url, f, exe) {
    // if the function already exists, the script is loaded -- just run it
    if (typeof window[f] === 'function') {
        if (exe == 1) { window[f](); }
        return;
    }
    // otherwise inject the script tag and poll until the function appears
    var h = document.getElementsByTagName('head')[0];
    var js = document.createElement('script');
    js.setAttribute('defer', 'defer');
    js.setAttribute('src', 'scripts/' + url + '.js');
    h.appendChild(js);
    ondemand_poll(f, 0, exe);
}
function ondemand_poll(f, i, exe) {
    // check every 50 ms; give up after 200 tries (about 10 seconds)
    if (i < 200) {
        setTimeout(function() {
            if (typeof window[f] === 'function') {
                if (exe == 1) { window[f](); }
            } else {
                ondemand_poll(f, i + 1, exe);
            }
        }, 50);
    } else {
        alert('Error: could not load \'' + f + '\', certain features on this page may not work as intended.\n\nReloading the page may correct the problem, if it does not check your internet connection.');
    }
}
Example usage: load example.js (first parameter), poll for the function example_init1() (second parameter), and pass 1 (third parameter) to execute that function once the polling finds it...
function example() {ondemand('example','example_init1',1);}

How to make page loading feel faster?

I want to decrease the time taken for my pages to load and be displayed, assuming I start with an empty browser cache, and that the pages may or may not have inline CSS and JavaScript in the HTML file. Does changing the order in which files are sent to the browser decrease the display time, and thus make pages seem to load faster?
For example, if a page has some .css, .js, .png files and so on, would loading the CSS first display things faster?
And is there a standard/specific order to load file types?
Here are a few steps that could optimize the performance of your web pages (a small markup sketch follows the list):
Put CSS at the top.
Put JavaScript at the bottom.
Cache everything.
Set a far-future Expires header.
Return 304 when appropriate.
Use unique URLs for CSS and JS so changes propagate.
Apart from that, use AJAX wherever required.
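A minimal sketch of the CSS-at-top, JS-at-bottom and unique-URL points (the file names and v=42 version are placeholders):
<head>
    <!-- stylesheet first, so rendering isn't blocked by scripts -->
    <link rel="stylesheet" href="/css/site.css?v=42">
</head>
<body>
    ...
    <!-- script last, so the page can display before it downloads; bump v= to propagate changes -->
    <script src="/js/site.js?v=42"></script>
</body>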
Beware of too many HTTP connections. It takes time to establish an HTTP connection and it can easily eat up loading time if you have many elements linked in your HTML file.
If you have many small icons, glyphs, etc. combine them into a sprite so only one image is loaded. Facebook for instance makes use of the sprite technique - you can see that if you inspect the images it loads.
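As a rough sketch of the sprite idea (the class names and offsets are invented; real offsets depend on how your sprite image is laid out):
.icon { background-image: url('/img/sprite.png'); width: 16px; height: 16px; display: inline-block; }
.icon-mail { background-position: 0 0; }      /* first 16x16 tile of the sprite */
.icon-user { background-position: -16px 0; }  /* second tile, shifted left */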
You can also consolidate your CSS files into one file - same with Javascript files.
Also, if you have JavaScript that affects the content of your page when it loads, then make sure to use the event that notifies you when the DOM is ready, instead of waiting for the body load event, which doesn't trigger until all resources, such as images, CSS files, JavaScript etc., are loaded.
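For example (a minimal sketch; the element ID is made up):
// fires as soon as the DOM is parsed, before images and stylesheets finish
document.addEventListener('DOMContentLoaded', function () {
    document.getElementById('greeting').textContent = 'Ready!';
});
// by contrast, this waits for every image, stylesheet and script to load
window.addEventListener('load', function () {
    console.log('everything is loaded');
});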
JS files block page loading until they're executed. When possible, include them just before the closing body tag.
First, make sure that your web host doesn't have slow servers. This can happen with very cheap shared hosting. Then you should check that you've removed all unnecessary stuff from your HTML output. Then you could check whether your content is dynamic or static. If it is dynamic, try to convert it to static content.
Under some conditions you can simply activate the caching functions of a CMS, which should also help send the website content faster. On slow connections in particular, it can be better to use gzip to compress the output stream. But this costs time: both the server and the client have to compress/decompress. You have to test that too.
If you use JavaScript whose execution can be delayed, trigger it from the DOM-ready event, which fires once the HTML document is loaded (and not all the images and so on), rather than from the later load event.
You can reduce your page load time with a few tricks: use CSS image sprites rather than calling a single image for every single purpose (this will minimize your website's HTTP requests), and remove unnecessary div tags and other dead code from your HTML markup and CSS.
Where we can get good results through CSS alone, we should not use JavaScript.
Always produce clean HTML markup without any irrelevant code.
Combined files are a way to reduce the number of HTTP requests by combining all scripts into a single script, and similarly combining all CSS into a single stylesheet. Combining files is more challenging when the scripts and stylesheets vary from page to page, but making this part of your release process improves response times.
The solution turns out to be simple: combine all the different files into a single large file and compress that file (e.g. with gzip). Unfortunately, if you do this manually you are going to run into maintenance problems. The single compressed file is no longer editable, so after editing one of the original source files you will have to re-combine it with the other files and re-compress it.
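That re-combine step is easy to script as part of a release process. A minimal Node.js sketch (the file paths are placeholders, and a real build would minify as well):
// build.js -- concatenate the source files, then gzip the result
var fs = require('fs');
var zlib = require('zlib');

var sources = ['src/shared.js', 'src/page-one.js', 'src/page-two.js'];
var combined = sources.map(function (f) {
    return fs.readFileSync(f, 'utf8');
}).join(';\n'); // the ';' guards against files that end without one

fs.writeFileSync('dist/site.js', combined);
fs.writeFileSync('dist/site.js.gz', zlib.gzipSync(combined));
console.log('built dist/site.js (' + combined.length + ' bytes)');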

Page-level execution of JavaScript when serving concatenated files

Scenario:
A web site with x number of pages is being served with a single, concatenated JavaScript file. Some of the individual JavaScript files pertain to a page, others to plugins/extensions etc.
When a page is served, the entire set of JavaScript is executed (as execution is performed when loaded). Unfortunately, only a sub-section of the JavaScript pertains directly to the page. The rest is relevant to other pages on the site, and may have potential side-effects on the current page if written poorly.
Question:
What is the best strategy to only execute JavaScript that relates directly to the page, while maintaining a single concatenated file?
Current solution that doesn't feel right:
JavaScript related to a specific page is wrapped in a "namespaced" init function for that page. Each page is rendered with an inline script calling the init function for that page. It works hunky-dory, but I would rather not have any inline scripts.
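For reference, a rough sketch of that pattern (all names invented):
// in the concatenated file: one namespaced init function per page
var MyApp = MyApp || {};
MyApp.pages = {
    contact: function () { /* contact-page setup */ },
    products: function () { /* products-page setup */ }
};
<!-- rendered inline at the bottom of contact.html -->
<script>MyApp.pages.contact();</script>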
Does anyone have any clever suggestions? Should I just use an inline script and be done with it? I'm surprised this isn't more of an issue for most developers out there.
Just use an inline script. If it's one or two lines to initialize the JavaScript you need that's fine. It's actually a good design practice because then it allows re-use of your JavaScript across multiple pages.
The advantages of a single (or at least few) concatenated js files are clear (less connections in the page mean lower loading time, you can minify it all at once, ...).
We use such a solution, but with a twist: we allow different pages to get different sets of concatenated files, though I'm sure different patterns exist.
In our case we have split the JavaScript files into a few groups by functionality; each page can specify which ones it needs. The framework will then deliver the concatenated file with consistent naming and versioning, so that caching works very well on the browser level.
We use Django and a home-baked solution, but that's just because we started a few years ago, when only django-compress was available, and django-compress isn't available any more. Its successor django-pipeline seems good, but you can find alternatives on djangopackages/asset-managers.
On different frameworks of course you'll find some equivalent packages. Without a framework, this solution is probably unachievable ;-)
By the way, using these patterns you can also compress your js files (statically, or even dynamically if you have a good caching policy)
I don't think your solution is that bad, although it is a good thing that you distrust inline scripts. You have to find out what page you are on somehow, so calling the appropriate init function on each page makes sense. You can also pick the init function based on some other factor:
The page URL
The page title
A class set in the document body
A parameter appended to your script URL and parsed by the global document ready function.
I simply call a bunch of init functions when the document is ready. Each checks to see if it's needed on the page; if not, it simply RETURNs.
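For instance (a minimal sketch; the class and selector names are made up):
function initGallery() {
    // bail out unless this page actually contains a gallery
    if (!$('.photo-gallery').length) return;
    // ... gallery-specific setup ...
}
function initCheckout() {
    // bail out unless the body is tagged as the checkout page
    if (!$('body').hasClass('page-checkout')) return;
    // ... checkout-specific setup ...
}
$(function () {
    initGallery();
    initCheckout();
});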
You could do something as simple as:
var locationPath = window.location.pathname;
var locationPage = locationPath.substring(locationPath.lastIndexOf('/') + 1);
switch (locationPage) {
    case 'index.html':
        // do stuff
        break;
    case 'contact.html':
        // do stuff
        break;
}
I'm really confused as to why it doesn't feel right to call JavaScript from the page. There is a connection between the page and the JavaScript, and making that explicit should make your code easier to understand, debug, and keep organized. I'm sure you could try to use some auto-wiring convention, but I don't think it would really help you solve the problem. Just call the namespaced function from your page and be done with it.

jquery and script speed?

Quick question: I have some scripts that only need to run on some pages, and some only on a certain page. Would it be best to include the script at the bottom of the actual page with script tags, or to do something like the following in my JS include?
var pageURL = window.location.href;
if (pageURL == 'http://example.com') {
    // run code
}
Which would be better and faster?
The best approach is to include the script only on pages that need it. In terms of maintenance, the script also stays more independent of the pages that use it. Putting those ifs in your script makes it tightly coupled to the structure of your site; if you decide to rename some page, it will no longer work.
I can recommend using an asynchronous resource loader, LAB.js for example. Then you could build a dependencies list, for instance:
var MYAPP = MYAPP || {};
/*
 * Bunches of scripts
 * to load together
 */
MYAPP.bunches = {
    defaults: ["libs/jquery-1.6.2.min.js"],
    cart: ["plugins/jquery.tmpl.min.js",
           "libs/knockout-1.2.1.min.js",
           "scripts/shopping-cart.js"],
    signup: ["libs/knockout-1.2.1.min.js",
             "scripts/validator.js"]
    /*
    ... etc
    */
};
/*
 * Loading default libraries
 */
$LAB.script(MYAPP.bunches.defaults);
if (typeof MYAPP.require !== 'undefined') {
    $LAB.script(MYAPP.bunches[MYAPP.require]);
}
and at the end of your page you could write:
<script type="text/javascript">
var MYAPP = MYAPP || {};
MYAPP.require = "cart";
</script>
<script type="text/javascript" src='js/libs/LAB.min.js'></script>
<script type="text/javascript" src='js/dependencies.js'></script>
By the way, a question to everyone, is it a good idea to do so?
Insofar as possible, only include the scripts on the pages that require them. That said, if you're delivering content via AJAX that can be hard to do, since the script might already be loaded and reloading it could cause problems. Of course, you can deliver code in a script block (as opposed to referencing an external JS file) inside content delivered via AJAX.
In cases where you need to load scripts (say via a master page) for all pages, but that only apply to certain pages, take advantage of the fact that jQuery understands and deals well with selectors that don't match any elements. You can also use live handlers along with very specific selectors to allow scripts loaded at page load time to work with elements added dynamically later.
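For example (a sketch; the selectors are made up, and .live() reflects the jQuery 1.x era of this discussion):
// harmless on pages without a signup form: the empty jQuery set just no-ops
$('.signup-form .email').val('');
// .live() binds through the document, so it also catches matching elements added later via AJAX
$('.signup-form .submit').live('click', function () {
    // validate and submit...
    return false;
});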
Note: if you use scripts loaded via content distribution network, you'll find that they are often cached locally in the browser anyway and don't really hurt your page load time. The same is true with scripts on your own site, if they've already been loaded once.
You have two competing things to optimize for, page load time over the network and page initialization time.
You can minimize your page load time over the network by taking maximum advantage of browser caching, so that JS files don't have to be loaded over the network. To do this, you want as much of the JavaScript code for your site as possible in one or two larger, fully minified JS files, which means putting JS for multiple different pages in one common JS file. It will vary from site to site whether the JS for all pages should be in one or two larger JS files, or whether you group it into a small number of common JS files each targeted at part of your site. But the general idea is that you want to combine the JS code from different pages into a common JS file that can be cached most effectively.
You can minimize your page initialization time by only calling initialization code that actually needs to execute on the particular page that is being displayed. There are several different ways to approach this. I agree with the other posters that you do not want to be looking at URLs to decide which code to execute, because that ties your code to the URL structure, which is better avoided. If your site has a manageable number of different page types, then I'd recommend identifying each of those page types with a unique class name on the body tag. Your initialization code can then look for the appropriate class on the body tag and branch to the appropriate initialization code based on that. I've even seen it done where you find a class name with a particular common prefix, parse out the non-common part of the name, and call an initialization function by that name. This allows you to give a page a specific set of behaviors just by adding a class name to the body tag; the code remains entirely separate from the actual page.
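A rough sketch of that prefix idea (the init- prefix and page names are invented):
// <body class="init-home init-search"> would run pageInit.home() and pageInit.search()
var pageInit = {
    home: function () { /* home-page setup */ },
    search: function () { /* search-page setup */ }
};
$(function () {
    $.each(document.body.className.split(/\s+/), function (i, cls) {
        if (cls.indexOf('init-') === 0) {
            var name = cls.slice(5); // strip the 'init-' prefix
            if (typeof pageInit[name] === 'function') pageInit[name]();
        }
    });
});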
The less general-purpose way of doing this is to keep all the code in the one or two common JS files, but to add the appropriate initialization call to each specific page's HTML. The JS code that does the initialization lives in the common JS files and is thus maximally cached, but the call to the appropriate initialization code is embedded inline in each specific page. This minimizes the execution time of the initialization but still lets you use maximal caching. It's slightly less generic than the class name technique mentioned earlier, but some may prefer the more direct calling technique.
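In its simplest form (a sketch; initCheckoutPage is a made-up function living in the cached common bundle):
<!-- at the bottom of the checkout page's HTML -->
<script>initCheckoutPage();</script>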
Include scripts at the bottom of only the pages that need them.
The YSlow add-on is the best way to find out why your website is slow.
There are many issues which could be the reason for slowness.
Combining many jQuery scripts into one could help increase your performance.
Also, you can put the scripts at the bottom of your page and the CSS at the top.
It's basically up to you, and it depends on what the code is.
Generally, with small things, I will slip it into the bottom of the page. (I'm talking minor UI things that relate only to that page.)
If you're doing the location ref testing for more than a couple pages it probably means you're doing something wrong.
You might want to take a look at one of these:
http://en.wikipedia.org/wiki/Unobtrusive_JavaScript
http://2tbsp.com/node/91
And as for which is faster: the difference is negligible, so pick whatever is easier for you to maintain.

load different js files for different pages or load together?

I have a site with 3 pages.
Page 1: 19 kb of JS
Page 2: 26 kb of JS
Page 3: 10 kb of JS
Total : 55 kb of JS
These JavaScript files are non-repeating, meaning the JS needed on page 1 is not needed on page 2, and I have expiry headers set to 1 month.
Still, I would like to know the best way to load these files: should I load a separate file for each page, or load them all together?
you should probably load them separately...
But, in order to speed things up, you could do a trick: if you think that the user is staying for a bit (I dunno, at least 5 sec) on your page, you could load just the script for that particular page and add the other ones remotely after the page loads. This way you force the client's browser to make a cached copy of your other, not-needed-at-the-moment JS files, and because they're loaded after the DOM has been built, it doesn't slow your page rendering.
You will have to add an "addScript" function and make "addScript" calls when the document has finished loading.
For the first JS file (on the first page) it should be something like:
function addScript(jsUrl) {
    var s = document.createElement('script');
    s.setAttribute('type', 'text/javascript');
    s.setAttribute('src', jsUrl);
    document.getElementsByTagName('body')[0].appendChild(s);
}
window.onload = function() {
    // prefetch the other pages' scripts once this page is done, to warm the cache
    addScript('mySecondScript.js');
    addScript('myThirdScript.js');
};
The beauty is that when you load one of the other pages, the corresponding js file is loaded instantly because it is retrieved from the browser's cache.
I'd mash them all together in one JS file, minify that with Google's Closure Compiler set on "advanced optimizations", and load it once with a bunch of cache directives. If you do use the Closure Compiler set on advanced, you'll have to make sure there is no inline JavaScript in your HTML files (which is generally not a good practice anyway), because the function and variable names (even the global functions and variable names) won't be the same afterward.
55 kb isn't all that much. However, it may be harder to update page 2 when you have to deal with page 1 and 3 code as well. Also, have you tried minifying the code? Those sound like rather large JavaScript files.
The way you described it, you should load them separately.
If the user is guaranteed to navigate to all 3 pages then yes, combine them. If they usually navigate to one page, then loading all three is a waste.
That said, 55k is not THAT large; still, consider the impact on a mobile browser or even dial-up.
Minify and use compression to reduce the JS as much as possible, but don't load a file unless it is required.
