In my app I have around 10 different pages, and most of them use some sort of JavaScript. Currently, I have one client-side app.js file that is included in all pages. In order to figure out what event listeners to attach on a page I basically check url location and go from there:
window.onload = function () {
  let urlLocation = window.location.pathname.split('/')[1]

  // Global functionality for all pages
  UIctrl.toggleActiveNavbar(urlLocation)

  // createTopic route
  if (urlLocation === 'createTopic' || urlLocation === 'edit') {
    UIctrl.createTopicTagsBasedOnCategory()
    UIctrl.handleCreateOrEditTopicClick()
  }
  if (urlLocation === 'editPost' || urlLocation === 'createPost') {
    UIctrl.handleCreateOrEditPostClick()
  }
  // ... and so on
}
Even though it works, I don't think it's a good way to do it. If a project becomes big enough, it's very hard to manage it.
I was not able to find an answer on how to do this properly. My questions are:
Should I have separate js files for each page? My problem with this is that I have to duplicate common code that is used on all pages.
Do you use some sort of bundler (webpack/parcel) in your express apps that solves this issue? Maybe you could point me to a repository that shows how to set it up correctly.
How is this done in real-world production environments?
Thank you.
Your intuition that comparing URLs to decide what initialization to run is a bad way to code is correct. That is not a good pattern: it will quickly get out of control as you add more and more pages, and maintenance over time can become a real pain.
Instead, you can put common code in a shared JS file that every page loads so those functions are available as needed. Then, use an inline <script> tag inside each individual page to do the page-specific initialization that sets up event listeners that are particular to that page and calls code in the shared JS file.
If for some pages, you have lots of page-specific initialization code, you could put just that page-specific code in a page-specific JS file, but in general you don't want to have an external JS file for every one of your pages if you can avoid it. Try putting most of the code in the common JS file and then just using a small bit of inline code in each specific page to do the right initialization. This puts most of your code in a common, shared JS file and keeps page-specific initialization logic in the specific page.
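As a rough sketch of that split (the file names, and the UIctrl-style helpers borrowed from the question, are purely illustrative):
// shared.js: loaded on every page; all the real logic lives here
window.UIctrl = {
  toggleActiveNavbar: function (path) { /* ... */ },
  createTopicTagsBasedOnCategory: function () { /* ... */ },
  handleCreateOrEditTopicClick: function () { /* ... */ }
};

<!-- createTopic page: a small inline script does only this page's wiring -->
<script src="/js/shared.js"></script>
<script>
  UIctrl.toggleActiveNavbar('createTopic');
  UIctrl.createTopicTagsBasedOnCategory();
  UIctrl.handleCreateOrEditTopicClick();
</script>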
Should I have separate js files for each page? My problem with this is that I have to duplicate common code that is used on all pages.
No. You don't want to duplicate lots of code in separate JS files, as that is a maintenance nightmare and defeats effective browser caching.
I have a few EXTERNAL scripts that need to be loaded on various pages, such as Google Places Autocomplete, Facebook APIs, etc.
Obviously it does not make sense to load them on every route; however, the documentation does not address this rather common scenario.
Furthermore, the Vue instance mounts onto a tag within the body, since "the mounted element will be replaced with Vue-generated DOM in all cases. It is therefore not recommended to mount the root instance to <html> or <body>."
How are real world applications currently dealing with this situation?
I recommend using https://www.npmjs.com/package/vue-head; it is designed exactly to inject the data you want from your component into the document's head.
Perfect for SEO title and meta tags.
To be used like so:
export default {
  data: () => ({
    title: 'My Title'
  }),
  head: {
    // creates a title tag in the document head
    title () {
      return {
        inner: this.title
      }
    },
    meta: [
      // creates a meta description tag in the document head
      { name: 'description', content: 'My description' }
    ]
  }
}
This isn't addressed in documentation because it's not Vue's job. Vue is meant for creating, among other things, single page applications (SPA). In a single page application you typically load all your vendor scripts (Google, Facebook, etc.) and your own scripts and styles the first time the site loads.
A lot of real world applications just bundle all their vendor scripts into a single file (example: vendor.js) using Webpack, Gulp, Grunt or some other bundling tool. The rationale is that you can pack all those scripts into a single file, minify them, serve them with gzip compression and then it's only a single request.
Smarter bundlers like Webpack and Browserify can walk the dependency tree if you're using require() or import to import your modules. This allows you to split your dependencies into several logical chunks, so components and libraries only load with their dependencies if they themselves are loaded.
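As a hedged illustration of that vendor-bundling idea (not a drop-in config; paths and names are made up), a webpack 4+ setup can push everything pulled from node_modules into its own chunk:
// webpack.config.js
module.exports = {
  entry: './src/main.js',
  output: { filename: '[name].js', path: __dirname + '/dist' },
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/,  // anything imported from node_modules...
          name: 'vendor',                  // ...ends up in dist/vendor.js
          chunks: 'all'
        }
      }
    }
  }
};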
We had this issue as well. We load JavaScript files by other means: we created a library that does this for us (quick and dirty, observing browser events and adding the script tags). This library also implements a mediator pattern (https://addyosmani.com/largescalejavascript/#mediatorpattern) and fires a "page:load" event (custom for us) at the very end, once all libraries have been loaded.
Our VueJS components are executed only when that event fires. This also allowed us to put Vue components in the head tag instead of the body, as the browser will load the script but not execute the function until the event is fired.
var loadVueComponents = function () {
  var myComponents = new Vue({ /* ... */ });
};

Mediator.subscribe("page:load", loadVueComponents);
Before I get downvotes for not using Webpack or any of those other tools: this was an old application with a lot of JavaScript files (some of which cannot be minified or even concatenated/bundled with other files), and we needed to add some new components (now using Vue) while disrupting existing pages as little as possible (the mediator pattern was already implemented and was loading dynamic libraries based on page attributes).
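For readers unfamiliar with the pattern, here is a stripped-down sketch of what such a mediator can look like; our internal library is more involved, so the names and details below are illustrative only:
var Mediator = (function () {
  var channels = {};
  return {
    subscribe: function (channel, fn) {
      (channels[channel] = channels[channel] || []).push(fn);
    },
    publish: function (channel) {
      var args = Array.prototype.slice.call(arguments, 1);
      (channels[channel] || []).forEach(function (fn) { fn.apply(null, args); });
    }
  };
})();

// the script loader publishes this once every external file has loaded:
// Mediator.publish("page:load");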
I think you are talking about some external JS that is not part of node_modules and that you want to retrieve from an external source (http://your-external-script). In that case you can dynamically create the script tag. Put this code somewhere like the first landing screen of your SPA, in a before-transition event.
var script = document.createElement('script');
script.src = "https://your-external-script.js";
document.head.appendChild(script); // or something of the like
This will make your external file available in global scope and then you can use it anywhere.
Note: this scenario is for when you don't have a node module for the library, or you don't want to load it as a module.
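If you need to know when the external file has finished loading, a slightly more defensive variant of the same idea wraps it in a Promise (the URL is still just a placeholder):
function loadScript(src) {
  return new Promise(function (resolve, reject) {
    var script = document.createElement('script');
    script.src = src;
    script.onload = function () { resolve(src); };
    script.onerror = function () { reject(new Error('Failed to load ' + src)); };
    document.head.appendChild(script);
  });
}

// usage:
loadScript('https://your-external-script.js').then(function () {
  // the library is now available on the global scope
});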
I have an MVC page that has a number of JavaScript dependencies; let's call them depend-A and depend-B, where depend-B depends on depend-A. These are included in different MVC bundles that are rendered on the page. After running this through Google's PageSpeed tool, it suggested that we should be including the JS asynchronously to prevent render blocking.
Because of the dependencies, they need to load in particular order so I have looked into utilising LABJS to load them asynchronously in the correct order to prevent the render blocking.
This works by including the bundle's URL, but I lose the ability to have the debug versions of the JS files locally while developing.
Can anyone suggest a way around this, so that we can load the JS files asynchronously but in order and maintain the debug versions locally?
Here is what I am currently using.
<script src="~/Scripts/LAB.min.js"></script>
<script>
$LAB
.script("#Scripts.Url("~/bundles/jquery")").wait()
.script("/scripts/fileone.js").wait()
.script("/scripts/filetwo.js").wait(function() {
FunctionInFileTwo();
});
</script>
The page source with the above code is as follows.
<script src="/Scripts/LAB.min.js"></script>
<script>
$LAB
.script("/bundles/jquery?v=GnU3whLS74nHNYUsUJjcWJKdXvKBNbFqBrkQVKSNlKc1").wait()
.script("/scripts/scripts/fileone.js").wait()
.script("/scripts/scripts/filetwo.js").wait(function() {
FunctionInFileTwo();
});
</script>
It doesn't look like there's any clean API available for this.
Scripts.Render("~/bundles/yourbundle") returns an IHtmlString of the necessary script tags - you could make a method that scavenges the script srcs from that string and generates the correct $LAB invocations.
Scripts.Manager.RenderExplicit(tagFormat, paths) comes close, where you'd just pass a better format string as the first argument, but reading the code it looks like it could start including differently formatted script tags verbatim in the middle of the list.
I have the following:
Modernizr.load([
  {
    load: '//ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js',
    complete: function () {
      if (!window.jQuery) {
        Modernizr.load('/js/jquery-1.6.2.min.js');
      }
    }
  },
  {
    load: ["/js/someplugin.js", "/js/anotherplugin.js"],
    complete: function () {
      // do some stuff
    }
  },
  {
    load: ('https:' == location.protocol ? '//ssl' : '//www') + '.google-analytics.com/ga.js'
  }
]);
I read that Modernizr loads scripts asynchronously. But in the above example, which ones are loaded async?
Do all of the following get loaded asynchronously?
jquery.min.js
someplugin.js
anotherplugin.js
ga.js
Or is it a combination of async and ordered loading like this:
jquery.min.js is loaded first
Then someplugin.js and anotherplugin.js are loaded async
finally, ga.js is loaded
I'm having a hard time testing which case it is.
You've selected a fairly complex example to dissect. So let's do it in steps.
The three parameter sets {...},{...},{...} will execute sequentially.
Inside the first parameter set you will load jQuery from Google's CDN, then when complete you test whether jQuery actually loaded. If not (perhaps you are developing offline and the Google CDN is not reachable), you load a local copy of jQuery. So this one is "sequential", but really only one of them will load.
In the second parameter set you load someplugin.js and anotherplugin.js simultaneously and asynchronously, so they will be downloaded in parallel. It's great when you can parallelize 2 items at a time, as that is the "weakest link" for browsers today (yup, only IE; every other browser will parallelize 6-8 files at a time).
In the third parameter set you load the google analytics script.
Remember modernizr is a collection of tools. The included loader is actually just a repackaged yepnope. So you can google for more about yepnope.
The idea with the sequential loads is to be able to ensure that dependencies load in order (for example, your jQuery plugins must load after the jQuery framework). The purpose of the parallel download syntax in parameter set two is to speed up performance for multiple files that are not co-dependent (for example, you could load multiple jQuery plugins in parallel once jQuery is loaded, as long as they don't depend on each other).
So to answer your question, your hypothesis #2 is correct. If you'd like to explore more using Firebug, add some console.log statements in each parameter set's complete function. You should see the 3 groups complete sequentially every time. Now switch Firebug onto the "Net" tab to watch the file requests go out. The files someplugin.js and anotherplugin.js won't necessarily load in the same order every time. The requests will go out in order, but their timing bars should overlap (showing them as parallel). Testing this locally will be hard because it'll be so fast. Put your two test files on a slow server somewhere, or bias them the opposite of what you are expecting (make the first file 1MB and the second <1KB) so you can see the overlapping downloads in the network monitor tab of Firebug (this is just an artificial scenario for testing purposes; you could fill a file with repeated JS comments just to make an artificially slow file for testing).
EDIT: To clarify a little more accurately, I'd like to add a quote from the yepnopejs.com homepage. yepnopejs is the resource loader that is included (and aliased) inside modernizr:
In short, whatever order you put them in, that's the order that we execute them in. The 'load' and 'both' sets of files are executed after your 'yep' or 'nope' sets, but the order that you specify within those sets is also preserved. This doesn't mean that the files always load in this order, but we guarantee that they execute in this order. Since this is an asynchronous loader, we load everything all at the same time, and we just delay running it (or injecting it) until the time is just right.
So yes, you could put jquery, followed by some plugins in the same Modernizr.load statement and they will be downloaded in parallel and injected into the DOM in the same order as specified in the arguments. The only thing you are giving up here is the ability to test if jQuery loaded successfully and perhaps grab a backup non-CDN version of jQuery if necessary. So it's your choice how critical fallback support is for you. (I don't have any sources, but I seem to recall that the google CDN has only gone down once in its entire lifetime)
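A minimal sketch of that single-statement variant (file names reused from the question): the files download in parallel, but they execute in the order listed.
Modernizr.load({
  load: [
    '//ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js',
    '/js/someplugin.js',
    '/js/anotherplugin.js'
  ],
  complete: function () {
    // by the time this runs, all three have executed, in array order
  }
});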
My web application uses jQuery and some jQuery plugins (e.g. validation, autocomplete). I was wondering if I should stick them into one .js file so that it could be cached more easily, or break them out into separate files and only include the ones I need for a given page.
I should also mention that my concern is not only the time it takes to download the .js files but also how much the page slows down based on the contents of the .js file loaded. For example, adding the autocomplete plugin tends to slow down the response time by 100ms or so from my basic testing even when cached. My guess is that it has to scan through the elements in the DOM which causes this delay.
I think it depends how often they change. Let's take this example:
JQuery: change once a year
3rd party plugins: change every 6 months
your custom code: change every week
If your custom code represents only 10% of the total code, you don't want the users to download the other 90% every week. You would split it into at least two JS files: the jQuery + plugins, and your custom code. Now, if your custom code represents 90% of the full size, it makes more sense to put everything in one file.
When choosing how to combine JS files (and same for CSS), I balance:
relative size of the file
number of updates expected
Common but relevant answer:
It depends on the project.
If you have a fairly limited website where most of the functionality is re-used across multiple sections of the site, it makes sense to put all your script into one file.
In several large web projects I've worked on, however, it has made more sense to put the common site-wide functionality into a single file and put the more section-specific functionality into their own files. (We're talking large script files here, for the behavior of several distinct web apps, all served under the same domain.)
The benefit to splitting up the script into separate files, is that you don't have to serve users unnecessary content and bandwidth that they aren't using. (For example, if they never visit "App A" on the website, they will never need the 100K of script for the "App A" section. But they would need the common site-wide functionality.)
The benefit to keeping the script under one file is simplicity. Fewer hits on the server. Fewer downloads for the user.
As usual, though, YMMV. There's no hard-and-fast rule. Do what makes most sense for your users based on their usage, and based on your project's structure.
If people are going to visit more than one page in your site, it's probably best to put them all in one file so they can be cached. They'll take one hit up front, but that'll be it for the whole time they spend on your site.
At the end of the day it's up to you.
However, the less information each web page contains, the quicker it will be downloaded by the end user.
If you only include the JS files required for each page, it seems more likely that your web site will be more efficient and streamlined.
If the files are needed on every page, put them in a single file. This will reduce the number of HTTP requests and will improve the response time (for lots of visits).
See Yahoo's best practices for other tips.
I would pretty much concur with what bigmattyh said: it does depend.
As a general rule, I try to aggregate the script files as much as possible, but if you have some scripts that are only used on a few areas of the site, especially ones that perform large DOM traversals on load, it would make sense to leave those in separate file(s).
e.g. if you only use validation on your contact page, why load it on your home page?
As an aside, you can sometimes sneak these files into interstitial pages, where not much else is going on, so when a user lands on an otherwise quite heavy page that needs it, it should already be cached - use with caution - but can be a handy trick when you have someone benchmarking you.
So, as few script files as possible, within reason.
If you are sending a 100K monolith, but only using 20K of it for 80% of the pages, consider splitting it up.
It depends pretty heavily on the way that users interact with your site.
Some questions for you to consider:
How important is it that your first page load be very fast?
Do users typically spend most of their time in distinct sections of the site with subsets of functionality?
Do you need all of the scripts ready the moment that the page is ready, or can you load some in after the page is loaded by inserting <script> elements into the page?
Having a good idea of how users use your site, and what you want to optimize for is a good idea if you're really looking to push for performance.
However, my default method is to just concatenate and minify all of my javascript into one file. jQuery and jQuery.ui are small and have very low overhead. If the plugins you're using are having a 100ms effect on page load time, then something might be wrong.
A few things to check:
Is gzipping enabled on your HTTP server?
Are you generating static files with unique names as part of your deployment?
Are you serving static files with never ending cache expirations?
Are you including your CSS at the top of your page, and your scripts at the bottom?
Is there a better (smaller, faster) jQuery plugin that does the same thing?
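For what it's worth, a minimal Express-style sketch of the gzip and caching points above, assuming the compression middleware and fingerprinted (unique-per-deploy) static file names:
var express = require('express');
var compression = require('compression');   // gzip middleware

var app = express();
app.use(compression());                      // answers "is gzipping enabled?"
app.use(express.static('public', {
  maxAge: '365d'                             // long expirations are safe because file names change per deploy
}));

app.listen(3000);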
I've basically gotten to the point where I reduce an entire web application to 3 files.
vendor.js
app.js
app.css
Vendor is neat because it has all the vendor styles in it too, i.e. I convert all my vendor CSS into minified CSS, then I convert that to JavaScript and include it in the vendor.js file. That's after it's been Sass-transformed, too.
Because my vendor stuff does not update often, once in production it's pretty rare. When it does update I just rename it to something like vendor_1.0.0.js.
Also there are minified versions of those files. In dev I load the unminified versions and in production I load the minified versions.
I use gulp to handle all of this. The main plugins that make this possible are listed below; a rough sketch of how they fit together follows the list.
gulp-include
gulp-css2js
gulp-concat
gulp-csso
gulp-html-to-js
gulp-mode
gulp-rename
gulp-uglify
node-sass-tilde-importer
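Roughly, and only as a hypothetical sketch (this is not my actual gulpfile; task and path names are made up), the vendor.js pipeline looks something like this:
var gulp   = require('gulp');
var concat = require('gulp-concat');
var css2js = require('gulp-css2js');
var csso   = require('gulp-csso');
var uglify = require('gulp-uglify');

// 1. Turn minified vendor CSS into JS that injects it at runtime.
gulp.task('vendor-css', function () {
  return gulp.src('vendor/**/*.css')
    .pipe(csso())                        // minify the CSS
    .pipe(css2js())                      // wrap it in JS that appends a <style> tag
    .pipe(concat('vendor-styles.js'))
    .pipe(gulp.dest('tmp'));
});

// 2. Concatenate vendor JS plus the converted styles into one vendor.js.
gulp.task('vendor', ['vendor-css'], function () {
  return gulp.src(['vendor/**/*.js', 'tmp/vendor-styles.js'])
    .pipe(concat('vendor.js'))
    .pipe(uglify())
    .pipe(gulp.dest('dst'));
});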
Now this also includes my images because I use sass and I have a sass function that will compile images into data-urls in my css sheet.
function sassFunctions(options) {
  options = options || {};
  options.base = options.base || process.cwd();
  var fs = require('fs');
  var path = require('path');
  var types = require('node-sass').types;
  var funcs = {};

  // Exposes inline-image($file) to Sass: reads the file and returns a
  // base64 data URL so the image is embedded directly in the CSS.
  funcs['inline-image($file)'] = function (file, done) {
    var filePath = path.resolve(options.base, file.getValue());
    var ext = filePath.split('.').pop();
    fs.readFile(filePath, function (err, data) {
      if (err) return done(err);
      data = data.toString('base64');
      data = 'url(data:image/' + ext + ';base64,' + data + ')';
      done(types.String(data));
    });
  };
  return funcs;
}
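To actually use it, the returned object gets passed to node-sass via its functions option; a hypothetical wiring, with paths made up:
var sass = require('node-sass');

sass.render({
  file: 'src/app.scss',
  outputStyle: 'compressed',
  functions: sassFunctions({ base: 'src' })   // registers inline-image($file)
}, function (err, result) {
  if (err) throw err;
  require('fs').writeFileSync('dst/app.css', result.css);
});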
So my app.css will have all of my application's images in the CSS, and I can add the images to any chunk of styles I want. Typically I create unique classes for the images, and I just apply the right class to anything I want to have that image. I avoid using img tags completely.
Additionally, using the html-to-js plugin I compile all of my HTML into the JS file as a template object keyed by the path to the HTML files, e.g. 'html\templates\header.html', and then using something like Knockout I can data-bind that HTML to an element, or multiple elements.
The end result is I can end up with an entire web application that spins up off one "index.html" that doesn't have anything in it but this:
<html>
  <head>
    <script src="dst/vendor.js"></script>
    <link rel="stylesheet" href="dst/app.css">
    <script src="dst/app.js"></script>
  </head>
  <body id="body">
    <xyz-app params="//xyz.com/api/v1"></xyz-app>
    <script>
      ko.applyBindings({}, document.getElementById("body"));
    </script>
  </body>
</html>
This will kick off my component "xyz-app", which is the entire application, and it doesn't have any server-side events. It's not running on PHP, DotNet Core MVC, MVC in general or any of that stuff. It's just basic HTML managed with a build system like gulp, and everything it needs data-wise comes from REST APIs.
Authentication -> Rest Api
Products -> Rest Api
Search -> Google Compute Engine (python apis built to index content coming back from rest apis).
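As for how the "xyz-app" component itself gets wired up, a hedged sketch using Knockout's component API; it assumes the compiled HTML templates end up on a global templates object keyed by their original paths, and every name here is illustrative:
ko.components.register('xyz-app', {
  viewModel: function (params) {
    // whatever was passed through the <xyz-app params="..."> attribute
    this.apiRoot = params;
  },
  // the key format depends on how the html-to-js step is configured
  template: templates['html/templates/xyz-app.html']
});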
So I never have any html coming back from a server (just static files, which are crazy fast). And there are only 3 files to cache other than index.html itself. Webservers support default documents (index.html) so you'll just see "blah.com" in the url and any query strings or hash fragments used to maintain state (routing etc for bookmarking urls).
Crazy quick, all depending on the JS engine running it.
Search optimization is trickier. It's just a different way of thinking about things. I.e. you have google crawl your apis, not your physical website and you tell google how to get to your website on each result.
So say you have a product page for ABC Thing with a product ID of 129. Google will crawl your products API to walk through all of your products and index them. In there, your API returns a URL in the result that tells Google how to get to that product on a website, i.e. "http://blah#products/129".
So when users search for "ABC thing" they see the listing and clicking on it takes them to "http://blah#products/129".
I think search engines need to start getting smart like this, it's the future imo.
I love building websites like this because it gets rid of all the back-end complexity. You don't need Razor, or PHP, or Java, or ASPX web forms, or w/e; you get rid of those entire stacks.... All you need is a way to write REST APIs (WebApi2, Java Spring, or w/e etc etc).
This separates web design into UI Engineering, Backend Engineering, and Design and creates a clean separation between them. You can have a UX team building the entire application and an Architecture team doing all the rest api work, no need for full stack devs this way.
Security isn't a concern either, because you can pass credentials on ajax requests and if your stuff is all on the same domain you can just make your authentication cookie on the root domain and presto (automatic, seamless SSO with all your rest apis).
Not to mention how much simpler server farm setup is. Load balance needs are a lot less. Traffic capabilities a lot higher. It's way easier to cluster rest api servers on a load balancer than entire websites.
Just set up 1 nginx reverse proxy server to serve up your index.html and also direct API requests to one of 4 REST API servers.
Api Server 1
Api Server 2
Api Server 3
Api Server 4
And your sql boxes (replicated) just get load balanced from the 4 rest api servers (all using SSD's if possible)
Sql Box 1
Sql Box 2
All of your servers can be on internal network with no public ips and just make the reverse proxy server public with all requests coming in to it.
You can load balance reverse proxy servers on round robin DNS.
This means you only need one SSL cert, since it's one public domain.
If you're using Google Compute Engine for search and seo, that's out in the cloud so nothing to worry about there, just $.
If you like the code in separate files for development you can always write a quick script to concatenate them into a single file before minification.
One big file is better for reducing HTTP requests as other posters have indicated.
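For example, a quick hypothetical Node script along those lines (the file list and output path are made up):
var fs = require('fs');

var files = ['src/common.js', 'src/validation.js', 'src/autocomplete-init.js'];
var bundle = files.map(function (f) {
  return fs.readFileSync(f, 'utf8');
}).join('\n;\n');   // the stray semicolon guards against files missing their own

fs.writeFileSync('dist/site.js', bundle);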
I also think you should go the one-file route, as the others have suggested. However, to your point on plugins eating up cycles by merely being included in your large js file:
Before you execute an expensive operation, use some checks to make sure you're even on a page that needs the operations. Perhaps you can detect the presence (or absence) of a dom node before you run the autocomplete plugin, and only initialize the plugin when necessary. There's no need to waste the overhead of dom traversal on pages or sections that will never need certain functionality.
A simple conditional before an expensive code chunk will give you the benefits of both the approaches you are deciding on.
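A minimal sketch of such a guard, assuming a jQuery autocomplete plugin and an element with id "search" (both illustrative):
$(function () {
  var $search = $('#search');
  if ($search.length) {   // only pay the plugin's cost on pages that use it
    $search.autocomplete({ source: '/api/suggestions' });
  }
});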
I tried breaking my JS into multiple files and ran into a problem. I had a login form, the code for which (AJAX submission, etc.) I put in its own file. When the login was successful, the AJAX callback then called functions to display other page elements. Since these elements were not part of the login process, I put their JS code in a separate file. The problem is that JS in one file can't call functions in a second file unless the second file is loaded first (see Stack Overflow Q. 25962958), and so, in my case, the called functions couldn't display the other page elements. There are ways around this loading-sequence problem (see Stack Overflow Q. 8996852), but I found it simpler to put all the code in one larger file and clearly separate and comment sections of code that fall into the same functional group, e.g. keep the login code separate and clearly commented as the login code.