Improving Javascript Load Times - Concatenation vs Many + Cache - javascript

I'm wondering which of the following is going to result in better performance for a page which loads a large amount of javascript (jQuery + jQuery UI + various other javascript files). I have gone through most of the YSlow and Google Page Speed stuff, but am left wondering about a particular detail.
A key thing for me here is that the site I'm working on is not on the public net; it's a business to business platform where almost all users are repeat visitors (and therefore with caches of the data, which is something that YSlow assumes will not be the case for a large number of visitors).
First up, the standard approach recommended by tools such as YSlow is to concatenate it, compress it, and serve it up in a single file loaded at the end of your page. This approach sounds reasonably effective, but I think that a key part of the reasoning here is to improve performance for users without cached data.
The system I currently have is something like this
All javascript files are compressed and loaded at the bottom of the page
All javascript files have far future cache expiration dates, so will remain (for most users) in the cache for a long time
Pages only load the javascript files that they require, rather than loading one monolithic file, most of which will not be required
Now, my understanding is that, if the cache expiration date for a javascript file has not been reached, then the cached version is used immediately; there is no HTTP request sent to the server at all. If this is correct, I would assume that having multiple script tags is not causing any performance penalty, as I'm still not incurring any additional requests on most pages (recalling from above that almost all users have populated caches).
In addition to this, not loading the JS means that the browser doesn't have to interpret or execute all this additional code which it isn't going to need; as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine.
Another benefit is that, when code changes, only the affected files need to be fetched again, rather than the whole set (granted, it would only need to be fetched once, so this is not so much of a benefit).
I'm also looking at using LabJS to allow for parallel loading of the JS when it's not cached.
Specific questions
If there are many script tags, but all files are being loaded from the local cache, and less javascript is being loaded overall, is this going to be faster than one script tag which is also being loaded from the cache, but contains all the javascript needed anywhere on the site, rather than an appropriate subset?
Are there any other reasons to prefer one over the other?
Does similar thinking apply to CSS? (I'm currently using a much more monolithic approach to CSS)

2021 Edit:
As this answer has had some recent upvotes, do notice that with HTTP/2 things have changed a lot. You no longer pay a per-request penalty, as requests are multiplexed over a single TCP connection. You also get server push. While most of the answer is still valid, do take it as a description of how things were previously done.
I would say that the most important thing to focus on is the perception of speed.
First thing to take into consideration: there is no one-size-fits-all formula, but there is a threshold at which a javascript file grows to such a size that it could (and should) be split.
GWT uses this and they call it DFN (Dead-for-now) code. There isn't much magic here. You just have to manually define when you'll need a new piece of code and, should the user need it, just call that file.
How, when, where will you need it?
Benchmark. Chrome has a great benchmarking tool. Use it extensively. See if having just a small javascript file will greatly improve the loading of that particular page. If it does, by all means start DFNing your code.
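A minimal sketch of what DFN-style loading can look like in plain JavaScript; the chunk name js/reports.js, the button id and the initReports function are hypothetical placeholders, and old-IE onload quirks are ignored:
// Load a "dead-for-now" chunk only when the user actually asks for the feature.
// "js/reports.js", "show-reports" and window.initReports are made-up names.
function loadDeadForNow(src, onLoaded) {
    var script = document.createElement('script');
    script.src = src;
    script.onload = onLoaded;
    document.getElementsByTagName('head')[0].appendChild(script);
}
document.getElementById('show-reports').onclick = function () {
    loadDeadForNow('js/reports.js', function () {
        window.initReports(); // defined inside the deferred chunk
    });
};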
Apart from that it's all about the perception.
Don't let the content jump!
If your page has images, set up their widths and heights up front. As the page will load with the elements positioned right where they are supposed to be, there will be no content fitting and adjusting the user's perception of speed will increase.
Defer javascript!
All major libraries can wait for the page to load before executing javascript. Use it. jQuery's goes like this: $(document).ready(function(){ ... }). It doesn't defer parsing of the code, but it makes the parsed code fire exactly when it should: after the DOM has loaded, before images have finished loading.
Important things to take into consideration:
Make sure js files are cached by the client (everything else matters little compared to this one)
Compile your code with Closure Compiler
Deflate your code; it's faster than gzipping it (on both ends)
Apache example of caching:
# Set up caching on media files for 1 month
<FilesMatch "\.(gif|jpg|jpeg|png|swf|js|css)$">
    ExpiresActive On
    ExpiresDefault A2629744
    Header append Cache-Control "public, proxy-revalidate"
    Header append Vary "Accept-Encoding"
</FilesMatch>
Apache example of deflating:
# compresses all files for faster transfer
LoadModule deflate_module modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/html text/plain text/xml font/opentype font/truetype font/woff
<FilesMatch "\.(js|css|html|htm|php|xml)$">
SetOutputFilter DEFLATE
</FilesMatch>
And last, and probably least, serve your Javascript from a cookie-less domain.
And to keep your question in focus, remember that when you have DFN code, you'll have several smaller javascript files that, precisely because they are split, won't get the level of compression Closure can give you with a single file. The sum of the parts isn't equal to the whole in this scenario.
Hope it helps!

I really think you need to do some measurement to figure out if one solution is better than the other. You can use JavaScript and log data to get a clear idea of what your users are seeing.
First, analyze your logs to see if your cache rate is really as good as you would expect for your userbase. For example, if each html page includes jquery.js, look over the logs for a day--how many requests were there for html pages? How many for jquery.js? If the cache rate is good, you should see far fewer requests for jquery.js than for html pages. You probably want to do this for a day right after an update, and also a day a few weeks after an update, to see how that affects the cache rate.
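As a rough illustration of that log check, here is a sketch assuming Node.js and a combined-format access log at the hypothetical path access.log; adjust the patterns to your own URLs:
// Rough cache-rate estimate: compare hits for HTML pages vs. jquery.js.
// Assumes a combined-format Apache/nginx access log; paths are hypothetical.
var fs = require('fs');
var lines = fs.readFileSync('access.log', 'utf8').split('\n');
var htmlHits = 0, jqueryHits = 0;
lines.forEach(function (line) {
    if (/GET \/[^ ]*\.html? /.test(line)) htmlHits++;
    if (/GET \/[^ ]*jquery\.js/.test(line)) jqueryHits++;
});
// If caching is working well, jqueryHits should be far lower than htmlHits.
console.log('HTML requests:', htmlHits, 'jquery.js requests:', jqueryHits);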
Next, add some simple measurements to your page in JavaScript. You said the script tags are at the bottom, so I assume it looks something like this?
<html>
<!-- all your HTML content... -->
<script src="jquery.js"></script>
<script src="jquery-ui.js"></script>
<script src="mycode.js"></script>
In that case, you can time how long it takes to load the JS and ping the server like this:
<html>
<!-- all your HTML content... -->
<script>var startTime = new Date().getTime();</script>
<script src="jquery.js"></script>
<script src="jquery-ui.js"></script>
<script src="mycode.js"></script>
<script>
var endTime = new Date().getTime();
var totalTime = endTime - startTime; // In milliseconds
new Image().src = "/time_tracker?script_load=" + totalTime;
</script>
Then you can look through the logs for /time_tracker (or whatever you want to call it) and see how long it's taking people to load the scripts.
If your cache rate isn't good, and/or you're dissatisfied with how long it takes to load the scripts, then try moving all the scripts to a concatenated/minified file on one of your pages, and measure how long that takes to load in the same way. If the results look promising, do the rest of the site.

I would definitely go with the non-monolithic approach. Not only in your case, but in general it gives you more flexibility when you need something changed or reconfigured.
If you make a change to one of these files, then you will have to merge, compress and deliver again. If you are doing this in an automated way, then you are OK.
As far as the browser question goes ("if the cache expiration date for a javascript file has not been reached, then the cached version is used immediately; there is no HTTP request sent to the server at all"), I think that an HTTP request may still be made, but answered with a "304 Not Modified" response. To be sure, you should check all the requests made to the web server (using one of the tools available). After that response is given, the browser uses the unmodified resource - the js file, image, or whatever it is.
Good luck with your B2B.

Even though you are dealing with repeat-visitors, there are many reasons why their cache may have been cleared, including privacy and performance tools that delete temporary cache files to "speed up your computer".
Merging and minifying your scripts doesn't have to be an onerous process. I write my JavaScript in separate files, nicely spaced out to be readable to me so it is easier to maintain. However, I serve it via a script page that combines all of the scripts into a single script and minifies it all, so one compressed script gets sent to the browser containing all my scripts. This is the best of both worlds, as I work on a collection of JavaScript files that are all readable, and the visitor gets one compressed JavaScript file, which is the recommendation for reducing HTTP requests (and therefore the queue time).
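As a sketch of the combining step only (minification would then be a separate pass through Closure Compiler, UglifyJS, or whatever you prefer), assuming a Node.js build script and hypothetical file names:
// Build-time sketch: concatenate readable source files into one deliverable.
// File names are hypothetical; pipe the result through your minifier of choice.
var fs = require('fs');
var sources = ['js/jquery.js', 'js/jquery-ui.js', 'js/app.js']; // hypothetical list
var combined = sources
    .map(function (file) { return fs.readFileSync(file, 'utf8'); })
    .join(';\n'); // the semicolon guards against files that end mid-expression
fs.writeFileSync('js/all.js', combined);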

Did you try Google Closure? From what I've read about it, it seems quite promising.
http://code.google.com/closure/
http://googlecode.blogspot.com/2009/11/introducing-closure-tools.html - blog post
http://axod.blogspot.com/2010/01/google-closure-compiler-advanced-mode.html - performance of the Closure Compiler
http://www.sitepoint.com/google-closure-how-not-to-write-javascript/ - a few tips for javascript

Generally it's better to have fewer, larger requests than many small requests, since browsers limit the number of parallel requests to a particular domain (historically two under HTTP/1.1, around six in more recent browsers).
So whilst you say that most users are repeat visitors, when the cache expires there will be many round-trips for the many files, rather than one for a monolithic file.
If you take this to an extreme and have potentially thousands of files with individual functions in them, it would become obvious that this would lead to a huge number of requests when the cache expires.
Another reason to have a monolithic file is that when various parts of the site have different chunks of javascript associated with them, you get all of it into the cache when you hit the first page, saving later requests and round-trips.
If you're worried about the initial hit loading a "large" javascript file you can try loading it asynchronously, using the method described here : http://www.webmaster-source.com/2010/06/07/loading-javascript-asynchronously/
Whichever way you go in the end, remember that since you're setting a far-future expiry date, you'll need to change the name of the javascript (and CSS) files when changes are made in them, otherwise clients won't pick up the changes until their cache expires anyway.
PS : Profile it on the different browsers with the differing methods and write it up, as it will prove useful to those who are also stuck on slow JS engines like IE6 :)

I've used the following for both CSS and Javascript -- most of my pages score 94-96/100 in Google Page Speed and they load very fast (always within a second, even if there are 100 KB of Javascript).
1. I have a PHP function to request files -- it is backed by a class that stores all the unique files that are asked for. My call looks something like:
javascript( 'jquery', 'jquery.ui', 'home-page' );
2. I spit out a url-encoded version of these strings combined together to call a dynamic PHP page:
<script type="text/javascript" src="/js/?files=eNptkFsSgjAMRffCP4zlTVmDi4iQkVwibbEUHzju3UYEHMffc5r05gJnEX8IvisHnnHPQN9cMHZeKThzJOVeex7R3AmEDhQLCEZBLHLMLVhgpaXUikRMXCJbhdTjgNcG59UJyWSVPSh_0lqSSp0KN6XNEZSYwAqt_KoBY-lRRvNblBZrYeHQYdAOpHPS-VeoTpteVFwnNGSLX6ss3uwe1fi-mopg8aqt7P0LzIWwz-T_UCycC2sQavrp-QIsrnKh"></script>
3. That dynamic PHP page decodes the string and creates an array of the files that need to be loaded. A cache-file path is created:
$compressed_js_file_path = $_SERVER['DOCUMENT_ROOT'] . '/cache/js/' . md5( implode( '|', $js_files ) ) . '.js';
4. It checks to see if that file path already exists in the cache; if so, it just reads the file:
if( file_exists( $compressed_js_file_path ) ) {
echo file_get_contents( $compressed_js_file_path );
} else {
5. If it doesn't exist, it compresses all the javascript into one "monolith" file, but note that it contains ONLY the necessary javascript for that page, not for the entire site.
if( $fh = @fopen( $compressed_js_file_path, 'w' ) ) {
fwrite( $fh, $js );
fclose( $fh );
}
// Echo the compressed Javascript
echo $js;
I've given you excerpts of the code. The program you use to compress the javascript is completely up to you. I use this with both CSS and Javascript, so that all those files require one HTTP request, ever; the result is cached on the server (simply delete that file if you change something), and it contains only the necessary Javascript & CSS for that page.

Related

How can I prevent theft of javascript code [duplicate]

I know it's impossible to hide source code, but, for example, if I have to link a JavaScript file from my CDN to a web page and I don't want people to know the location and/or content of this script, is this possible?
For example, to link a script from a website, we use:
<script type="text/javascript" src="http://somedomain.example/scriptxyz.js">
</script>
Now, is it possible to hide from the user where the script comes from, or to hide the script content and still use it on a web page?
For example, by saving it in my private CDN that needs password to access files, would that work? If not, what would work to get what I want?
Good question with a simple answer: you can't!
JavaScript is a client-side programming language, therefore it works on the client's machine, so you can't actually hide anything from the client.
Obfuscating your code is a good solution, but it's not enough, because, although it is hard, someone could decipher your code and "steal" your script.
There are a few ways of making your code harder to steal, but as I said, nothing is bullet-proof.
Off the top of my head, one idea is to restrict access to your external js files from outside the page you embed your code in. In that case, if you have
<script type="text/javascript" src="myJs.js"></script>
and someone tries to access the myJs.js file in the browser, they shouldn't be granted any access to the script source.
For example, if your page is written in PHP, you can include the script via the include function and let the script decide if it's "safe" to return its source.
In this example, you'll need the external "js" (written in PHP) file myJs.php:
<?php
$URL = $_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI'];
if ($URL != "my-domain.example/my-page.php")
die("/\*sry, no acces rights\*/");
?>
// your obfuscated script goes here
that would be included in your main page my-page.php:
<script type="text/javascript">
<?php include "myJs.php"; ?>;
</script>
This way, only the browser could see the js file contents.
Another interesting idea is that at the end of your script, you delete the contents of your dom script element, so that after the browser evaluates your code, the code disappears:
<script id="erasable" type="text/javascript">
//your code goes here
document.getElementById('erasable').innerHTML = "";
</script>
These are all just simple hacks that cannot, and I can't stress this enough: cannot, fully protect your js code, but they can sure piss off someone who is trying to "steal" your code.
Update:
I recently came across a very interesting article written by Patrick Weid on how to hide your js code, and he reveals a different approach: you can encode your source code into an image! Sure, that's not bullet proof either, but it's another fence that you could build around your code.
The idea behind this approach is that most browsers can use the canvas element to do pixel manipulation on images. And since each canvas pixel is represented by 4 values (rgba), each of those values can be in the range 0-255. That means that you can store a character (actually its ASCII code) in every pixel. The rest of the encoding/decoding is trivial.
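A rough, illustrative sketch of the pixel-encoding idea (ASCII only, encode and read back in memory); a real implementation would export the data as an actual image, so treat this as an assumption-laden toy rather than the article's technique:
// Stuff character codes into canvas pixel data and read them back.
function encodeToPixels(source) {
    var ctx = document.createElement('canvas').getContext('2d');
    var image = ctx.createImageData(source.length, 1); // one pixel per character
    for (var i = 0; i < source.length; i++) {
        image.data[i * 4] = source.charCodeAt(i); // red channel holds the char code
        image.data[i * 4 + 3] = 255;              // opaque alpha
    }
    return image;
}
function decodeFromPixels(image) {
    var out = '';
    for (var i = 0; i < image.data.length; i += 4) {
        out += String.fromCharCode(image.data[i]);
    }
    return out;
}
// eval(decodeFromPixels(encodeToPixels('alert("hi")'))) would run the "hidden" code.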
The only thing you can do is obfuscate your code to make it more difficult to read. No matter what you do, if you want the javascript to execute in their browser they'll have to have the code.
Just off the top of my head, you could do something like this (if you can create server-side scripts, which it sounds like you can):
Instead of loading the script like normal, send an AJAX request to a PHP page (it could be anything; I just use it myself). Have the PHP locate the file (maybe on a non-public part of the server), open it with file_get_contents, and return (read: echo) the contents as a string.
When this string returns to the JavaScript, have it create a new script tag, populate its innerHTML with the code you just received, and attach the tag to the page. (You might have trouble with this; innerHTML may not be what you need, but you can experiment.)
If you do this a lot, you might even want to set up a PHP page that accepts a GET variable with the script's name, so that you can dynamically grab different scripts using the same PHP. (Maybe you could use POST instead, to make it just a little harder for other people to see what you're doing. I don't know.)
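A sketch of that flow using jQuery (since the thread already assumes it); the endpoint get_script.php and its name parameter are made up for illustration:
// Fetch the script body from a server-side endpoint and inject it at runtime.
$.get('get_script.php', { name: 'secret-module' }, function (source) {
    var tag = document.createElement('script');
    tag.text = source; // the code runs when the tag is attached to the document
    document.getElementsByTagName('head')[0].appendChild(tag);
});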
EDIT: I thought you were only trying to hide the location of the script. This obviously wouldn't help much if you're trying to hide the script itself.
Google Closure Compiler, YUI Compressor, Minify, /packer/... etc., are options for compressing/obfuscating your JS code. But none of them can hide your code from the users.
Anyone with decent knowledge can easily decode/de-obfuscate your code using tools like JS Beautifier. You name it.
So the answer is: you can always make your code harder to read/decode, but there is certainly no way to hide it.
Forget it, this is not doable.
No matter what you try it will not work. All a user needs to do to discover your code and its location is to look in the net tab in Firebug or use Fiddler to see what requests are being made.
From my knowledge, this is not possible.
Your browser has to have access to the JS files to be able to execute them. If the browser has access, then the browser's user also has access.
If you password protect your JS files, then the browser won't be able to access them, defeating the purpose of having JS in the first place.
I think the only way is to put the required data on the server and allow only logged-in users to access the data as required (you can also do some calculations server-side). This won't protect your javascript code, but it will make it inoperable without the server-side code.
I agree with everyone else here: With JS on the client, the cat is out of the bag and there is nothing completely foolproof that can be done.
Having said that, in some cases I do this to put some hurdles in the way of those who want to take a look at the code. This is how the algorithm works (roughly):
The server creates 3 hashed and salted values: one for the current timestamp, and the other two for each of the next 2 seconds. These values are sent to the client via Ajax as a comma-delimited string from my PHP module. In some cases, I think you can hard-bake these values into a script section of the HTML when the page is formed, and delete that script tag once the hashes have been used. The server is CORS protected and does all the usual SERVER_NAME etc. checks (which is not much protection, but at least provides some modicum of resistance to script kiddies).
It would also be nice if the server checked that it was indeed an authenticated user's client doing this.
The client then sends the same 3 hashed values back to the server through an Ajax call to fetch the actual JS that I need. The server checks the hashes against the current timestamp there... the three values ensure that the data is being sent within the 3-second window, to account for latency between the browser and the server.
The server needs to be convinced that one of the hashes is matched correctly; and if so, it sends the crucial JS back to the client. This is a simple, crude "one-time-use password" without the need for any database at the back end.
This means that any hacker has only the 3-second window since the generation of the first set of hashes to get to the actual JS code.
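A minimal sketch of that time-window check, assuming Node.js on the server and a hypothetical shared salt; it is a simplification of the scheme described above, not the author's code:
// Issue hashes for the current second and the next two, then accept a returned
// hash only while one of those one-second values is still inside the 3s span.
var crypto = require('crypto');
var SALT = 'per-user-salt-goes-here'; // hypothetical; could be derived from login credentials

function hashForSecond(second) {
    return crypto.createHash('sha256').update(SALT + second).digest('hex');
}
function issueHashes() { // sent to the client via Ajax
    var now = Math.floor(Date.now() / 1000);
    return [hashForSecond(now), hashForSecond(now + 1), hashForSecond(now + 2)].join(',');
}
function isValid(returnedHash) { // checked when the client asks for the real JS
    var now = Math.floor(Date.now() / 1000);
    for (var s = now - 2; s <= now; s++) {
        if (hashForSecond(s) === returnedHash) return true;
    }
    return false;
}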
The entire client code can be inside an IIFE, so some of the variables inside the client are even harder to read from the inspector console.
This is not any deep solution: A determined hacker can register, get an account and then ask the server to generate the first three hashes; by doing tricks to go around Ajax and CORS; and then make the client perform the second call to get to the actual code -- but it is a reasonable amount of work.
Moreover, if the salt used by the server is based on the login credentials, the server may be able to detect which user tried to retrieve the sensitive JS. (The server needs to do some additional work regarding the behaviour of the user AFTER the sensitive JS was retrieved, and block the person if, for example, they did not go on to do some other activity that was expected.)
An old, crude version of this was done for a hackathon here: http://planwithin.com/demo/tadr.html (it will not work if the server detects too much latency and the request falls outside the 3-second window).
As I said in the comment I left on gion_13's answer before (please read), you really can't. Not with javascript.
If you don't want the code to be available client-side (= stealable without great efforts),
my suggestion would be to make use of PHP (or ASP, Python, Perl, Ruby, JSP + Java Servlets), which is processed server-side so that only the results of the computation/code execution are served to the user. Or, if you prefer, even Flash or a Java applet, which allow client-side computation/code execution but are compiled and thus harder to reverse-engineer (though not impossible).
Just my 2 cents.
You can also set up a mime type for application/JavaScript to run as PHP, .NET, Java, or whatever language you're using. I've done this for dynamic CSS files in the past.
I know that this is a late answer to this question, but I just thought of something.
I know it might be stressful, but at least it might still work.
The trick is to create a number of server-side encoding scripts; they have to be decodable (for example, a script that replaces all vowels with numbers and adds the letter 'a' to every consonant, so that the word 'bat' becomes 'ba1ta'). Then create a script that picks one of the encoding scripts at random and sets a cookie with the name of the encoding script being used (quick tip: don't use the actual name of the encoding script as the cookie value; for example, if our cookie is named 'encoding_script_being_used' and the randomizing script chooses an encoding script named MD10, use a value like 'encoding_script4567656' rather than MD10, just to prevent guessing). After the cookie has been created, another script checks for the cookie named 'encoding_script_being_used', gets its value, and determines which encoding script is being used.
The reason for randomizing between the encoding scripts is that the server-side language will randomize which script is used to encode your javascript.js and then create a session or cookie recording which encoding script was used.
The server-side language will then also encode your javascript.js and put it in a cookie.
So, to summarize with an example: PHP randomizes between a list of encoding scripts and encrypts javascript.js, then it creates a cookie telling the client side which encoding script was used; the client-side code then decodes the javascript.js cookie (which is obviously encoded), so people can't steal your code.
But I would not advise this, because:
it is a long process
it is too stressful
Use NW.js; I think it can help. It can compile your code to a binary, which you can then use to make Windows, Mac and Linux applications.
This method partially works if you do not want to expose the most sensitive part of your algorithm.
Create WebAssembly modules (.wasm), import them, and expose only your JS glue code and workflow. In this way the algorithm is protected, since it is extremely difficult to revert the compiled assembly into a more human-readable format.
After having produced the wasm module and imported it correctly, you can use your code as you normally do:
<body id="wasm-example">
<script type="module">
import init from "./pkg/glue_code.js";
init().then(() => {
console.log("WASM Loaded");
});
</script>
</body>

Is there a way to clear/remove specific resources from the browser cache rather than clearing the entire cache?

For instance, in Chrome: I'm working on a webapp (which is heavy; it takes ~5 seconds) that has a lot of static resources (JS files and CSS) to load the first time. To see changes to one JS file, I need to reload the webpage with the cache emptied ("Empty Cache and Hard Reload").
If there were a way to remove only specific resources (JS files) from the cache (so as to force a refetch from the server), my testing time could be reduced to a great extent.
A technique is to add a random parameter to the url of assets you don't want cached.
Depending on what your server-side language is, you might be able to do something like the following:
<script src="my.js?_=<%= encode(new Date().toString()) %>"></script>
If you have a PHP backend, you could tack on a random number to the JS file URL:
<script type="text/javascript" src="/script.js?<?php echo time(); ?>"></script>
This will cause the browser to re-fetch it every time from server as the URL will differ.
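The same idea can be applied from the client side when a script tag is generated in JavaScript; a small sketch of that variant (the path is hypothetical):
// Append a changing query string so the browser treats the URL as a new
// resource every time and refetches it instead of using the cache.
var tag = document.createElement('script');
tag.src = '/script.js?_=' + Date.now();
document.getElementsByTagName('head')[0].appendChild(tag);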
Specific resources can be reloaded individually if you change the date and time of your files on the server. "Clearing cache" is not as easy as it should be. Instead of clearing the cache in my browsers, I realized that "touching" the cached server files will change the date and time of the source file on the server (tested on Edge, Chrome and Firefox), and most browsers will then automatically download the most current, fresh copy of what's on your server (code, graphics, any multimedia too). I suggest you just copy the most current scripts to the server and "do the touch thing" before your program runs, so it will change the date of all your problem files to the most current date and time; the browser then downloads a fresh copy:
<?php
touch('/www/sample/file1.css');
touch('/www/sample/file2.css');
touch('/www/sample/file2.css') ?>
then ... the rest of your program...
It took me some time to resolve this issue (as many browsers act differently to different commands, but they all check the time of the files and compare it to the copy downloaded in your browser; if the date and time differ, they will do the refresh). If you can't go the supposedly right way, there is always another usable and better solution. Best regards and happy camping. By the way, touch() or alternatives exist in many programming languages, including JavaScript, bash, sh and PHP, and you can include or call them from HTML.

Javascript parsing time vs http request

I'm working on a large project which is extensible with modules. Every module can have its own javascript file which may only be needed on one page, multiple pages that use this module, or even all pages if it is a global extension.
Right now I'm combining all .js files into one file whenever they get updated or a new module gets installed. The client only has to load one "big" .js file, but has to parse it on every page. Let's assume someone has installed a lot of modules and the .js file grows to 1MB-2MB. Does it make sense to continue down this route, or should I include each .js file only when it is needed?
This would result in maybe 10-15 http requests more for every page. At the same time the parsing time for the .js file would be reduced since I only need to load a small portion for every page. At the same time the browser wouldn't try to execute js code that isn't even required for the current page or even possible to execute.
Comparing both scenarios is rather difficult for me since I would have to rewrite a lot of code. Before I continue I would like to know if someone has encountered a similar problem and how he/she solved it. My biggest concern is that the parsing time of the js files grows too much. Usually network latency is the biggest concern but I've never had to deal with so many possible modules/extensions -> js files.
If these 2 conditions are true, then it doesn't really matter which path you take as long as you do the Requirement (below).
Condition 1:
The javascript files are being run inside of a standard browser, meaning they are not going to be run inside of an apple ios uiWebView app (html5 iphone/ipad app)
Condition 2:
The initial page load time does not matter so much. In other words, this is more of a web application than a web page. So users log in each day, stay logged in for a long time, do lots of stuff... log out... come back the next day...
Requirement:
Put the javascript file(s), css files and all images under a /cache directory on the web server. Tell the web server to send the max-age of 1 year in the header (only for this directory and sub-dirs). Then once the browser downloads the file, it will never again waste a round trip to the web server asking if it has the most recent version.
Then you will need to implement javascript versioning, usually this is done by adding "?jsver=1" in the js include line. Then increment the version with each change.
Use the Chrome inspector and make sure this is set up correctly. After the first request, the browser never sends an ETag or asks the web server for the file again for 1 year. (Hard reloads will download the file again... so test using links and a standard navigation path a user would normally take. Also watch the web server log to see which requests are being served.)
Good browsers will compile the javascript to machine code and the compiled code will sit in browser's cache waiting for execution. That's why Condition #1 is important. Today, the only browser which will not JIT compile js code is Apple's Safari inside of uiWebView which only happens if you are running html/js inside of an apple app (and the app is downloaded from the app store).
Hope this makes sense. I've done these things and have reduced network round trips considerably. Read up on ETags and how browsers make round trips to determine whether they are using the current version of js/css/images.
On the other hand, if you're building a web site and you want to optimize for the first time visitor, then less is better. Only have the browser download what is absolutely needed for the first page view.
You really REALLY should be using on-demand JavaScript. Only load what 90% of users will use. For things most people won't use, keep them separate and load them on demand. Also, you should seriously reconsider what you're doing if you've got upwards of two megabytes of JavaScript after compression.
// Load scripts/<url>.js on demand; f is the name of a function defined in that
// file, and exe=1 means "call f() as soon as the script has loaded".
function ondemand(url, f, exe)
{
    if (eval('typeof ' + f) == 'function') {eval(f + '();');}
    else
    {
        var h = document.getElementsByTagName('head')[0];
        var js = document.createElement('script');
        js.setAttribute('defer', 'defer');
        js.setAttribute('src', 'scripts/' + url + '.js');
        js.setAttribute('type', document.getElementsByTagName('script')[0].getAttribute('type'));
        h.appendChild(js);
        ondemand_poll(f, 0, exe);
        h.appendChild(document.createTextNode('\n'));
    }
}
// Poll every 50ms (up to 200 times, i.e. roughly 10 seconds) until f exists,
// then optionally execute it; give up with an error message after that.
function ondemand_poll(f, i, exe)
{
    if (i < 200) {
        setTimeout(function() {
            if (eval('typeof ' + f) == 'function') {if (exe == 1) {eval(f + '();');}}
            else {i++; ondemand_poll(f, i, exe);}
        }, 50);
    }
    else {
        alert('Error: could not load \'' + f + '\', certain features on this page may not work as intended.\n\nReloading the page may correct the problem; if it does not, check your internet connection.');
    }
}
Example usage: load example.js (first parameter), poll for the function example_init1() (second parameter) and 1 (third parameter) means execute that function once the polling finds it...
function example() {ondemand('example','example_init1',1);}

automate http expires

I have a webapp written in PHP, and I generate the headers with the header() function.
The problem is that when I make changes to the javascript code of my app, the old javascript will still be executed on the client side because it is cached in the clients' browsers.
How can I automate the process of header expiration? I assume there has to be a better way than modifying that function each time I modify the javascript code.
The only bullet-proof solution is to change filenames of server-side resources:
From: Yahoo's Best Practices for Speeding Up Your Web Site:
Keep in mind, if you use a far future Expires header you have to change the component's filename whenever the component changes. At Yahoo! we often make this step part of the build process: a version number is embedded in the component's filename[...]
Of course this process must be automated. We append a hash of the JavaScript file's contents to the file name.
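As a sketch of that build step (Node.js assumed; the paths and hash length are arbitrary choices for illustration, not the answerer's actual setup):
// Deploy-time sketch: name the bundle after a hash of its contents so the URL
// changes exactly when the code does, keeping far-future Expires headers safe.
var fs = require('fs');
var crypto = require('crypto');

var source = fs.readFileSync('build/app.js');                // hypothetical bundle
var hash = crypto.createHash('md5').update(source).digest('hex').slice(0, 8);
var versioned = 'build/app.' + hash + '.js';                 // e.g. app.3f2a9c1d.js

fs.writeFileSync(versioned, source);
// Templates then emit: <script src="/js/app.3f2a9c1d.js"></script>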
Change the URI to the script with each release.
This can be done by adding a query string. You can automate this by, for example, taking the revision number from your version control system and inserting it into your template.
This will allow you to have long expiry times (for optimal caching) and still get fresh JavaScript each time a new release is published (so long as the HTML document isn't loaded from the cache (but they tend to have short cache times compared to JS)).
The best way to version javascript files is to include a version number in their filename. When you rev the code, you bump the version number and then you rev any web pages that include the JS file to refer to the new filename. You then only need to expire the web pages themselves and they will automatically refer to the new JS files. The JS files can have very long expiration (months or years) so you get maximum caching benefit for them.
This also ensures that you get a consistent set of JS files.
This is how jQuery does it with versioning.
Since you don't provide much detail, only a general pointer:
Usually you can configure the Expires and other header params in the webserver - either globally and/or per "folder" etc.
You can make the JS file expire for example after 1 hour... this way you would know that 1 hour after a change all clients will be using the new JS file...
If you need the change to take effect immediately, even for clients who are currently active, the header won't help much - you would have to do some AJAX magic...

Put javascript in one .js file or break it out into multiple .js files?

My web application uses jQuery and some jQuery plugins (e.g. validation, autocomplete). I was wondering if I should stick them into one .js file so that it could be cached more easily, or break them out into separate files and only include the ones I need for a given page.
I should also mention that my concern is not only the time it takes to download the .js files but also how much the page slows down based on the contents of the .js file loaded. For example, adding the autocomplete plugin tends to slow down the response time by 100ms or so from my basic testing even when cached. My guess is that it has to scan through the elements in the DOM which causes this delay.
I think it depends how often they change. Let's take this example:
JQuery: change once a year
3rd party plugins: change every 6 months
your custom code: change every week
If your custom code represents only 10% of the total code, you don't want users to download the other 90% every week. You would split it into at least 2 js files: jQuery + plugins, and your custom code. Now, if your custom code represents 90% of the full size, it makes more sense to put everything in one file.
When choosing how to combine JS files (and same for CSS), I balance:
relative size of the file
number of updates expected
Common but relevant answer:
It depends on the project.
If you have a fairly limited website where most of the functionality is re-used across multiple sections of the site, it makes sense to put all your script into one file.
In several large web projects I've worked on, however, it has made more sense to put the common site-wide functionality into a single file and put the more section-specific functionality into their own files. (We're talking large script files here, for the behavior of several distinct web apps, all served under the same domain.)
The benefit to splitting up the script into separate files, is that you don't have to serve users unnecessary content and bandwidth that they aren't using. (For example, if they never visit "App A" on the website, they will never need the 100K of script for the "App A" section. But they would need the common site-wide functionality.)
The benefit to keeping the script under one file is simplicity. Fewer hits on the server. Fewer downloads for the user.
As usual, though, YMMV. There's no hard-and-fast rule. Do what makes most sense for your users based on their usage, and based on your project's structure.
If people are going to visit more than one page in your site, it's probably best to put them all in one file so they can be cached. They'll take one hit up front, but that'll be it for the whole time they spend on your site.
At the end of the day it's up to you.
However, the less information that each web page contains, the quicker it will be downloaded by the end-viewer.
If you only include the js files required for each page, it seems more likely that your web site will be more efficient and streamlined
If the files are needed on every page, put them in a single file. This will reduce the number of HTTP requests and will improve the response time (for lots of visits).
See Yahoo's best practices for other tips.
I would pretty much concur with what bigmattyh said, it does depend.
As a general rule, I try to aggregate the script files as much as possible, but if you have some scripts that are only used on a few areas of the site, especially ones that perform large DOM traversals on load, it would make sense to leave those in separate file(s).
e.g. if you only use validation on your contact page, why load it on your home page?
As an aside, you can sometimes sneak these files into interstitial pages, where not much else is going on, so when a user lands on an otherwise quite heavy page that needs it, it should already be cached - use with caution - but can be a handy trick when you have someone benchmarking you.
So, as few script files as possible, within reason.
If you are sending a 100K monolith, but only using 20K of it for 80% of the pages, consider splitting it up.
It depends pretty heavily on the way that users interact with your site.
Some questions for you to consider:
How important is it that your first page load be very fast?
Do users typically spend most of their time in distinct sections of the site with subsets of functionality?
Do you need all of the scripts ready the moment that the page is ready, or can you load some in after the page is loaded by inserting <script> elements into the page?
Having a good idea of how users use your site, and what you want to optimize for is a good idea if you're really looking to push for performance.
However, my default method is to just concatenate and minify all of my javascript into one file. jQuery and jQuery.ui are small and have very low overhead. If the plugins you're using are having a 100ms effect on page load time, then something might be wrong.
A few things to check:
Is gzipping enabled on your HTTP server?
Are you generating static files with unique names as part of your deployment?
Are you serving static files with never ending cache expirations?
Are you including your CSS at the top of your page, and your scripts at the bottom?
Is there a better (smaller, faster) jQuery plugin that does the same thing?
I've basically gotten to the point where I reduce an entire web application to 3 files.
vendor.js
app.js
app.css
Vendor is neat because it has all the styles in it too; i.e. I convert all my vendor CSS into minified CSS, then I convert that to javascript and include it in the vendor.js file. That's after it's been Sass-transformed too.
My vendor stuff does not update often; once it is in production, changes are pretty rare. When it does update, I just rename it to something like vendor_1.0.0.js.
Also there are minified versions of those files. In dev I load the unminified versions and in production I load the minified versions.
I use gulp to handle all of this. The main plugins that make this possible are listed below (a small sketch of how a few of them fit together follows the list):
gulp-include
gulp-css2js
gulp-concat
gulp-csso
gulp-html-to-js
gulp-mode
gulp-rename
gulp-uglify
node-sass-tilde-importer
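As a rough sketch of how a few of those plugins fit together (gulp-concat, gulp-uglify and gulp-rename only; the task name, paths and file list are hypothetical, and the real setup described above involves more steps than this):
// gulpfile.js sketch: bundle vendor scripts into one file, plus a minified copy.
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var rename = require('gulp-rename');

gulp.task('vendor', function () {
    return gulp.src(['vendor/jquery.js', 'vendor/knockout.js'])
        .pipe(concat('vendor.js'))        // dev build: readable, unminified
        .pipe(gulp.dest('dst'))
        .pipe(uglify())                   // production build: minified copy
        .pipe(rename({ suffix: '.min' }))
        .pipe(gulp.dest('dst'));
});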
Now this also includes my images because I use sass and I have a sass function that will compile images into data-urls in my css sheet.
function sassFunctions(options) {
options = options || {};
options.base = options.base || process.cwd();
var fs = require('fs');
var path = require('path');
var types = require('node-sass').types;
var funcs = {};
funcs['inline-image($file)'] = function (file, done) {
var file = path.resolve(options.base, file.getValue());
var ext = file.split('.').pop();
fs.readFile(file, function (err, data) {
if (err) return done(err);
data = new Buffer(data);
data = data.toString('base64');
data = 'url(data:image/' + ext + ';base64,' + data + ')';
data = types.String(data);
done(data);
});
};
return funcs;
}
So my app.css will have all of my application's images in the css, and I can add the images to any chunk of styles I want. Typically I create unique classes for the images, and I just give that class to anything I want to carry that image. I avoid using img tags completely.
Additionally, using the html-to-js plugin, I compile all of my html into the js file as a template object keyed by the path to the html files, i.e. 'html\templates\header.html', and then using something like Knockout I can data-bind that html to an element, or multiple elements.
The end result is I can end up with an entire web application that spins up off one "index.html" that doesn't have anything in it but this:
<html>
<head>
<script src="dst/vendor.js"></script>
<link rel="stylesheet" href="dst/app.css">
<script src="dst/app.js"></script>
</head>
<body id="body">
<xyz-app params="//xyz.com/api/v1"></xyz-app>
<script>
ko.applyBindings(document.getElementById("body"));
</script>
</body>
</html>
This will kick off my component "xyz-app", which is the entire application, and it doesn't have any server-side events. It's not running on PHP, .NET Core MVC, MVC in general, or any of that stuff. It's just basic html managed with a build system like gulp, and everything it needs data-wise comes from rest apis.
Authentication -> Rest Api
Products -> Rest Api
Search -> Google Compute Engine (python apis built to index content coming back from rest apis).
So I never have any html coming back from a server (just static files, which are crazy fast). And there are only 3 files to cache other than index.html itself. Webservers support default documents (index.html) so you'll just see "blah.com" in the url and any query strings or hash fragments used to maintain state (routing etc for bookmarking urls).
Crazy quick, all depending on the JS engine running it.
Search optimization is trickier. It's just a different way of thinking about things: you have google crawl your apis, not your physical website, and you tell google how to get to your website in each result.
So say you have a product page for ABC Thing with a product ID of 129. Google will crawl your products api to walk through all of your products and index them. In there, your api returns a url in the result that tells google how to get to that product on the website, i.e. "http://blah#products/129".
So when users search for "ABC thing" they see the listing and clicking on it takes them to "http://blah#products/129".
I think search engines need to start getting smart like this, it's the future imo.
I love building websites like this because it gets rid of all the back-end complexity. You don't need Razor, or PHP, or Java, or ASPX web forms, or whatever; you get rid of those entire stacks... All you need is a way to write rest apis (WebApi2, Java Spring, or whatever).
This separates web design into UI Engineering, Backend Engineering, and Design and creates a clean separation between them. You can have a UX team building the entire application and an Architecture team doing all the rest api work, no need for full stack devs this way.
Security isn't a concern either, because you can pass credentials on ajax requests and if your stuff is all on the same domain you can just make your authentication cookie on the root domain and presto (automatic, seamless SSO with all your rest apis).
Not to mention how much simpler server farm setup is. Load balance needs are a lot less. Traffic capabilities a lot higher. It's way easier to cluster rest api servers on a load balancer than entire websites.
Just set up 1 nginx reverse proxy server to serve up your index.html and also direct api requests to one of 4 rest api servers.
Api Server 1
Api Server 2
Api Server 3
Api Server 4
And your sql boxes (replicated) just get load balanced from the 4 rest api servers (all using SSD's if possible)
Sql Box 1
Sql Box 2
All of your servers can be on internal network with no public ips and just make the reverse proxy server public with all requests coming in to it.
You can load balance reverse proxy servers on round robin DNS.
This means you only need 1 SSL cert, since it's one public domain.
If you're using Google Compute Engine for search and seo, that's out in the cloud so nothing to worry about there, just $.
If you like the code in separate files for development you can always write a quick script to concatenate them into a single file before minification.
One big file is better for reducing HTTP requests as other posters have indicated.
I also think you should go the one-file route, as the others have suggested. However, to your point on plugins eating up cycles by merely being included in your large js file:
Before you execute an expensive operation, use some checks to make sure you're even on a page that needs the operations. Perhaps you can detect the presence (or absence) of a dom node before you run the autocomplete plugin, and only initialize the plugin when necessary. There's no need to waste the overhead of dom traversal on pages or sections that will never need certain functionality.
A simple conditional before an expensive code chunk will give you the benefits of both the approaches you are deciding on.
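For example, a hedged sketch of that guard (the selector and the source list are made up; jQuery UI's autocomplete is assumed since the question mentions it):
// Only pay for the autocomplete plugin on pages that actually contain the field.
$(document).ready(function () {
    var $field = $('#city-search');          // hypothetical element
    if ($field.length) {
        $field.autocomplete({ source: ['London', 'Paris', 'Tokyo'] });
    }
});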
I tried breaking my JS into multiple files and ran into a problem. I had a login form, the code for which (AJAX submission, etc.) I put in its own file. When the login was successful, the AJAX callback then called functions to display other page elements. Since these elements were not part of the login process, I put their JS code in a separate file. The problem is that JS in one file can't call functions in a second file unless the second file is loaded first (see Stack Overflow Q. 25962958), and so, in my case, the called functions couldn't display the other page elements. There are ways around this loading-sequence problem (see Stack Overflow Q. 8996852), but I found it simpler to put all the code in one larger file and clearly separate and comment sections of code that fall into the same functional group, e.g. keep the login code separate and clearly commented as the login code.
