LESS (dynamic style sheet language) and Resource Loaders - javascript

I am currently looking at switching from CSS to LESS in my current project. A few things to mention before I get to the question:
1) The project is purely client-side (HTML/JS/CSS), so there is no server-side component for the website (although it calls a web service via CORS).
2) I load almost everything via resource-loading frameworks; currently I am using yepnope.
Given the above, I need the LESS styles to be processed client-side. As I am using a resource loader, more CSS/LESS could be loaded after the initial page load has happened, so I was wondering:
1) Does LESS work with resource loaders when using client-side processing? The documentation says:
Client-side usage
Link your .less stylesheets with the rel set to “stylesheet/less”:
<link rel="stylesheet/less" type="text/css" href="styles.less">
Then download less.js from the top of the page, and include it in the <head> element of your page, like so:
<script src="less.js" type="text/javascript"></script>
Make sure you include your stylesheets before the script.
I think I may be able to tell yepnope how to handle .less files and give them the required element attributes. If I can, and provided the LESS resources are brought in before less.js, will it be OK?
2) Is there a manual way to tell it, from JavaScript, what to process?
This would cover the case where everything has been loaded for the current page, and then the user clicks a button that dynamically loads a new template into the current page. That template may require new LESS resources to be loaded, but less.js has already been included.
Hopefully the above gives you some context as to what I am trying to do and what the two questions are.

Yes you can.
Reading this post Load less.js rules dynamically and adjusting it a bit:
// Add the new less file to the head of your document
var newLessStylesheet = $("<link />")
    .attr("id", "new-style-1")
    .attr("rel", "stylesheet/less")
    .attr("type", "text/css")
    .attr("href", "/stylesheets/style.less");
$("head").append(newLessStylesheet);
// Register the new sheet with less.js, then have it re-process the stylesheets
less.sheets.push(document.getElementById('new-style-1'));
less.refresh(true);
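To cover the second question (compiling new .less files after less.js is already on the page), the same steps can be wrapped in a small helper and called from your loader's completion callback whenever a dynamically loaded template needs new styles. This is a minimal sketch using only the documented less.sheets array and less.refresh(); the function name and path are illustrative:
// Hypothetical helper: register a new .less file with the already-loaded less.js
// and have it compiled immediately.
function loadLess(href) {
    var link = document.createElement('link');
    link.rel = 'stylesheet/less';
    link.type = 'text/css';
    link.href = href;
    document.getElementsByTagName('head')[0].appendChild(link);
    // Tell less.js about the new sheet, then recompile everything it knows about.
    less.sheets.push(link);
    less.refresh(true);
}
// e.g. from a yepnope complete callback, after a new template has been injected:
// loadLess('/stylesheets/new-template.less');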
You could also compile all the CSS in your development environment and ship it as one file.
There are lots of options. The easiest way would be to use an application, for example http://incident57.com/less/ on a Mac. You can even compile server-side: search for something like "lessphp".


How to make sure browsers load the most recent version of a file after updating? [duplicate]

When rolling out a new website change or web application change, sometimes browsers load old javascript or image files when you navigate to the site. Oftentimes it takes a manual refresh of the page for the browser to load in the newly updated files.
How can I make sure that after an update, users receive the most up-to-date files the first time they load the page, rather than having to manually refresh to clear out any cached files? Is there a reliable way to do this by sending Expires or Last-Modified headers?
I assume you have a build script or a set of task scripts to help you with the repetitive process of updating the website/application. Since you tagged your question with the javascript tag, I will offer a JavaScript-based solution.
You could use (or may already be using) a task runner like Grunt or Gulp, and then run a cache-busting task that updates your URLs from this:
<script src="testing.js"></src>
<link href="testing.css" rel="stylesheet">
<img src="testing.png">
to this :
<script src="testing.js?v=123456"></src>
<link href="testing.css?v=123456" rel="stylesheet">
<img src="testing.png?v=123456">
This will prevent the browser from reusing stale cached copies of your assets.
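As a minimal, tool-agnostic sketch of such a cache-busting step (the file name and version source are assumptions; a real setup would normally use a Grunt or Gulp plugin for this), a small Node script can rewrite the asset URLs in your HTML:
// cache-bust.js - hypothetical build step: append a version query string to
// local .js/.css/.png references in index.html before deployment.
var fs = require('fs');
var version = Date.now(); // or a build number / content hash
var html = fs.readFileSync('index.html', 'utf8');
// Rewrite src="..." and href="..." attributes that point at local assets
// and do not already carry a query string.
var busted = html.replace(
    /(src|href)="([^"?]+\.(?:js|css|png))"/g,
    '$1="$2?v=' + version + '"'
);
fs.writeFileSync('index.html', busted);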
I know of only one way to do that: add a parameter to the end of the file URL.
E.g. you have an image picture.png, and instead of writing it in HTML like this:
<img src="path/to/picture.png">
you have to write it like this:
<img src="path/to/picture.png?specific_parameter_123">
Changing the parameter after the '?' will force the browser to load your picture (or any other asset) again, because from the browser's point of view the exact path has changed.
You can do this manually by changing the parameter (or even generate it randomly every time with JS), or use something like Grunt and grunt-cache-breaker, which generates a unique file URL based on an md5 hash. That way, once the file changes, its URL changes too.
It is also possible to do the same on the server side; e.g. if you are using PHP, see the question "hash css and js files to break cache. Is it slow?".
More about query strings here.
I have used a hash in some projects. That hash (md5 or something fast) is computed from the file contents of the LESS/SASS files or JS modules in use. Each time something changes, e.g. in the LESS source, the compiled CSS file gets a new filename.
You should enable client-side caching with a long caching time. The browser will store the CSS files locally. It only loads a new CSS file after a live-deploy.
Check what data your build system makes available to drive this.
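A minimal sketch of that approach in plain Node (file and function names are illustrative): compute a short content hash and build the published filename from it.
// hash-name.js - hypothetical build helper: derive a fingerprinted filename
// from the contents of a compiled asset, e.g. app.css -> app.3f2a9c1b.css
var crypto = require('crypto');
var fs = require('fs');
function hashedName(file) {
    var contents = fs.readFileSync(file);
    var hash = crypto.createHash('md5').update(contents).digest('hex').slice(0, 8);
    return file.replace(/(\.\w+)$/, '.' + hash + '$1');
}
console.log(hashedName('app.css')); // e.g. app.3f2a9c1b.css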

What is the purpose of this type of CSS? [duplicate]

I was wondering what the purpose of this type of CSS and JS reference is. Below is an example:
<link rel="stylesheet" type="text/css" media="all" href="http://example.com.pk/lib/css/fancybox.style.css?v=1.4">
What is css?v=1.4? Sometimes I also find js?v=1.3. Why are these parameters given?
It is used for JS/CSS versioning, to update the browser cache when files are changed.
They are used in order to retrieve a specific version of the given resource (e.g., a CSS file).
This is a link element that loads a CSS file.
The href attribute gives its location (a URL in this case).
The ?v=1.4 is a query string, and probably indicates the version of this specific CSS file.
Mostly this is added to the query string to defeat caching (the browser caches CSS/JS files): if you change the URL, the browser doesn't find the file in its cache and fetches it again.
It's a file version indicator. Most likely it's used in the link path to control caching on the file.
With this variable, the site administrator can change the URL for the stylesheet as specified in the document head, without actually changing the file-name.
Routers, browsers, etc. that cache resources will see a new url, and make a full request for the document back to the server, returning the updated file instead of a cached version.
Well, first of all, if the file in the link ends with .css it does not necessarily have to be a direct link to that file. The server can internally rewrite the link to some server-side script (ASP, PHP, ...) which then, based on the query parameter (v=1.4), decides which file to serve.
Google has also used such parameters to decide which version of its API libraries to load, which basically comes back to the paragraph above.
Furthermore, this can also be used to make sure the browser loads a new version of the script. For example, if you always load style.css, browsers tend to cache it for faster loading (unless specifically told otherwise), which can later interfere with changes. You might change something in your .css file and not see it in the browser, because the browser served the cached version instead of the live one. Thus, you add ?v=X, where X is incremented from the last version (or something entirely new), to make sure browsers do not load from cache.
It really depends on how the query parameter is handled on the server side. It is also possible that it does nothing, and serves merely as a reference in the HTML for the developer.

jQuery and script speed?

Quick question: I have some scripts that only need to run on some pages, and some only on one specific page. Would it be best to include the script at the bottom of the actual page with script tags, or to do something like this in my JS include:
var pageURL = window.location.href;
if (pageURL == 'http://example.com') {
// run code
}
Which would be better and faster?
The best option is to include the script only on pages that need it. In terms of maintenance, your script also stays more independent of the pages that use it. Putting those ifs in your script couples it tightly to the URL structure of your site, and if you decide to rename a page it will no longer work.
I can recommend using an asynchronous resource loader, LAB.js for example. Then you could build a dependency list, for instance:
var MYAPP = MYAPP || {};

/*
 * Bunches of scripts
 * to load together
 */
MYAPP.bunches = {
    defaults: ["libs/jquery-1.6.2.min.js"],
    cart: ["plugins/jquery.tmpl.min.js",
           "libs/knockout-1.2.1.min.js",
           "scripts/shopping-cart.js"],
    signup: ["libs/knockout-1.2.1.min.js",
             "scripts/validator.js"]
    /* ... etc */
};

/*
 * Loading default libraries
 */
$LAB.script(MYAPP.bunches.defaults);
if (typeof MYAPP.require !== 'undefined') {
    $LAB.script(MYAPP.bunches[MYAPP.require]);
}
and at the end of your page you could write:
<script type="text/javascript">
var MYAPP = MYAPP || {};
MYAPP.require = "cart";
</script>
<script type="text/javascript" src='js/libs/LAB.min.js'></script>
<script type="text/javascript" src='js/dependencies.js'></script>
By the way, a question to everyone, is it a good idea to do so?
Insofar as possible, only include the scripts on the pages that require them. That said, if you're delivering content via AJAX that can be hard to do, since the script might already be loaded and reloading could cause problems. Of course you can deliver code in a script block (as opposed to referencing an external JS file) inside content delivered via AJAX.
In cases where you need to load scripts (say via a master page) for all pages, but that only apply to certain pages, take advantage of the fact that jQuery understands and deals well with selectors that don't match any elements. You can also use live handlers along with very specific selectors to allow scripts loaded at page load time to work with elements added dynamically later.
Note: if you use scripts loaded via content distribution network, you'll find that they are often cached locally in the browser anyway and don't really hurt your page load time. The same is true with scripts on your own site, if they've already been loaded once.
You have two competing things to optimize for, page load time over the network and page initialization time.
You can minimize your page load time over the network by taking maximum advantage of browser caching so that JS files don't have to be loaded over the network. To do this, you want as much of your site's JavaScript as possible in one or two larger, fully minimized JS files, which means putting the JS for multiple different pages in one common JS file. It will vary from site to site whether the JS for all pages should be in one or two larger JS files, or whether you group it into a small number of common JS files that are each targeted at part of your site. But the general idea is that you want to combine the JS code from different pages into a common JS file that can be cached most effectively.
You can minimize your page initialization time by only calling initialization code that actually needs to execute on the particular page being displayed. There are several different ways to approach this. I agree with the other posters that you do not want to be looking at URLs to decide which code to execute, because this ties your code to the URL structure, which is better avoided. If your site has a manageable number of different page types, I'd recommend identifying each page type with a unique class name on the body tag. Your initialization code can then look for the appropriate class on the body tag and branch to the appropriate initialization code based on that. I've even seen it done where you find a class name with a particular common prefix, parse out the non-common part of the name, and call an initialization function of that name. This allows you to give a page a specific set of behaviors just by adding a class name to the body tag. The code remains very separate from the actual page.
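As a small illustration of that body-class technique (the class prefix and initializer names here are hypothetical), the common JS file could dispatch like this:
// Shared init code: pick the page-specific initializer from a class on <body>,
// e.g. <body class="page-cart"> runs initializers.cart().
var initializers = {
    cart: function () { /* set up the shopping cart widgets */ },
    signup: function () { /* set up signup form validation */ }
};
document.addEventListener('DOMContentLoaded', function () {
    document.body.className.split(/\s+/).forEach(function (cls) {
        var match = /^page-(\w+)$/.exec(cls);
        if (match && typeof initializers[match[1]] === 'function') {
            initializers[match[1]]();
        }
    });
});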
The less general-purpose way of doing this is to keep all the code in the one or two common JS files, but to add the appropriate initialization call to each specific page's HTML. The JS code that does the initialization lives in the common JS files and thus is maximally cached, but the call to the appropriate initialization code for that page is embedded inline in each specific page. This minimizes the execution time of the initialization but still lets you use maximal caching. It's slightly less generic than the class-name technique mentioned earlier, but some may prefer the more direct calling technique.
Include scripts at the bottom of the pages that need them, and only there.
The YSlow add-on is the best way to find out why your website is slow.
There are many issues that could be the cause of slowness.
Combining many jQuery scripts into one could help improve your performance.
You can also put the scripts at the bottom of your page and the CSS at the top.
It's basically up to you, and depends on what the code is.
Generally with small things I will slip it into the bottom of the page. (I'm talking minor ui things that relate only to that page).
If you're doing the location ref testing for more than a couple pages it probably means you're doing something wrong.
You might want to take a look at one of these:
http://en.wikipedia.org/wiki/Unobtrusive_JavaScript
http://2tbsp.com/node/91
As for which is faster, the difference is negligible; pick whatever is easier for you to maintain.

Add ASP.NET server script to mostly-static .JS / .CSS files without losing IntelliSense?

Using VS2008 and ASP.NET 3.5 (or VS 2010 / .NET 4.0?), how can I include a bit of dynamic ASP.NET server-side code in mostly-static JavaScript and CSS files?
I want to do this to avoid cloning entire JS or CSS files just to vary a small part of them across multi-tenant sites. Later, I want to extend the solution to handle localization inside JavaScript/CSS, dynamic debugging/tracing support, and other cool things you can get by injecting stuff dynamically into JavaScript and CSS.
The hard part is that I don't want to lose all the cool things you get with static files, for example:
JS/CSS code coloring and intellisense
CSS-class "go to definition" support in the IDE
automatic HTTP caching headers based on date of underlying file
automatic compression by IIS
The server-side goodness of static files (e.g. headers/compression) can be faked via an HttpHandler, but retaining IDE goodness (intellisense/coloring/etc) has me stumped.
An ideal solution would meet the following requirements:
VS IDE provides JS/CSS intellisense and code coloring. Giving up server-code intellisense is OK since server code is usually simple in these files.
"go to defintion" still works for CSS classes (just like in static CSS files)
send HTTP caching headers, varying by modified date of the underlying file.
support HTTP compression like other static files
support <%= %> and <script runat=server> code blocks
URL paths (at least the ones that HTTP clients see) end with .JS or .CSS (not .ASPX). Optionally, I can use querystring or PathInfo to parameterize (e.g. choosing a locale), although in most cases I'll use vdirs for this. Caching should vary for different querystrings.
So far the best (hacky) solution I've come up with is this:
Switch the underlying CSS or JS files to be .ASPX files (e.g. foo.css.aspx or foo.js.aspx). Embed the underlying static content in a STYLE element (for CSS) or a SCRIPT element (for JS). This enables JS/CSS intellisense as well as allowing inline or runat=server code blocks.
Write an HttpHandler which:
looks at the URL and adds .aspx to know the right underlying ASPX to call
uses System.Net.HttpWebRequest to call that URL
strips out the containing STYLE or SCRIPT tags, leaving only the CSS or JS
adds the appropriate headers (caching, content type, etc.)
compresses the response if the client supports compression
Map *.CSS and *.JS to my handler.
(if IIS6) Ensure .JS and .CSS file extensions are mapped to ASP.NET
I'm already using a modified version of Darick_c's HttpCompression Module which handles almost all of above for me, so modifying it to support the solution above won't be too hard.
But my solution is hacky. I was wondering if anyone has a more lightweight approach for this problem which doesn't lose Visual Studio's static-file goodness.
I know I can also hack up a client-side-only solution where I split all JS and CSS into "vary" and "won't vary" files, but there's a performance and maintenance overhead to this kind of solution that I'd like to avoid. I really want a server-side solution here so I can maintain one file on the server, not N+1 files.
I've not tried VS10/.NET 4.0 yet, but I'm open to a Dev10/.net4 solution if that's the best way to make this scenario work.
Thanks!
I have handled a similar problem by having a master page output a dynamically generated JSON object in the footer of each page.
I needed my JS popup login dialog box to support localization. Using JSON.NET for serialization, I created a public key/value collection property on the master page that pages could put key/values into, such as phrase-key/localized-phrase pairs. The master page then renders a dynamic JSON object that holds these values so that static JS files can reference them.
For the JS login box I have the master page set the localized values. This made sense because the master page also includes the login.js file.
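A minimal sketch of that pattern (the variable and key names are hypothetical): the master page emits a small JSON object, and the static login.js reads values from it instead of hard-coding strings.
// What the master page renders into the footer (serialized server-side with
// JSON.NET; the global name is an assumption):
//   <script>
//     window.localizedPhrases = { "login.title": "Sign in",
//                                 "login.failed": "Wrong user name or password" };
//   </script>
// The static login.js then stays completely static and just looks phrases up:
function t(key) {
    return (window.localizedPhrases && window.localizedPhrases[key]) || key;
}
function showLoginDialog() {
    // Hypothetical dialog code using the localized strings.
    alert(t('login.title'));
}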
I do commend you on your concern over the number of http requests being made from the client and the payload being returned. Too many people I know and work with overlook those easy optimizations. However, any time I run into the same issue you're having (which is actually quite often), I have found I've usually either taken a wrong turn somewhere or am trying to solve the problem the wrong way.
As far as your JS question goes, I think Frank Schwieterman in the comments above is correct. I'd be looking at ways to expose the dynamic parts of your JS through setters. A really basic example would be if you want to display a customized welcome message to users on login. In your JS file, you can have a setMessage(message) method exposed. That method would then be called by the page including the script. As a result, you'd have something like:
<body onLoad="setMessage('Welcome ' + '<%= user.FirstName %>');">
This can obviously be expanded by passing objects or methods into the static JS file to allow you the functionality you desire.
In response to the CSS question, I think you can gain a lot from the approach Shawn Steward suggests in the comments. You can define the static parts of your CSS in the base file and then redefine the parts you want to change in other files. You can then dictate the look of your website through which files you include. Also, if you don't want to take the hit of extra HTTP requests (keep in mind that if you set those files to be cached for a week, a month, etc., it's a one-time request), you can do something like combining the CSS files into a single file at compile time or runtime.
Something like the following links may be helpful in pointing you in the right direction:
http://geekswithblogs.net/rashid/archive/2007/07/25/Combine-Multiple-JavaScript-and-CSS-Files-and-Remove-Overheads.aspx
http://www.asp.net/learn/3.5-SP1/video-296.aspx?wwwaspnetrdirset=1
http://dimebrain.com/2008/04/resourceful-asp.html
By combining at run time or compile time you get the best of both worlds: you can logically separate CSS and JS files, yet still gain the reduction in payload and requests that comes with compressing and combining files.

Put javascript in one .js file or break it out into multiple .js files?

My web application uses jQuery and some jQuery plugins (e.g. validation, autocomplete). I was wondering if I should stick them into one .js file so that it could be cached more easily, or break them out into separate files and only include the ones I need for a given page.
I should also mention that my concern is not only the time it takes to download the .js files but also how much the page slows down based on the contents of the .js file loaded. For example, adding the autocomplete plugin tends to slow down the response time by 100ms or so from my basic testing even when cached. My guess is that it has to scan through the elements in the DOM which causes this delay.
I think it depends on how often they change. Let's take this example:
JQuery: change once a year
3rd party plugins: change every 6 months
your custom code: change every week
If your custom code represents only 10% of the total code, you don't want users to download the other 90% every week. You would split into at least two JS files: jQuery + plugins, and your custom code. If, on the other hand, your custom code represents 90% of the full size, it makes more sense to put everything in one file.
When choosing how to combine JS files (and same for CSS), I balance:
relative size of the file
number of updates expected
Common but relevant answer:
It depends on the project.
If you have a fairly limited website where most of the functionality is re-used across multiple sections of the site, it makes sense to put all your script into one file.
In several large web projects I've worked on, however, it has made more sense to put the common site-wide functionality into a single file and put the more section-specific functionality into their own files. (We're talking large script files here, for the behavior of several distinct web apps, all served under the same domain.)
The benefit to splitting up the script into separate files, is that you don't have to serve users unnecessary content and bandwidth that they aren't using. (For example, if they never visit "App A" on the website, they will never need the 100K of script for the "App A" section. But they would need the common site-wide functionality.)
The benefit to keeping the script under one file is simplicity. Fewer hits on the server. Fewer downloads for the user.
As usual, though, YMMV. There's no hard-and-fast rule. Do what makes most sense for your users based on their usage, and based on your project's structure.
If people are going to visit more than one page in your site, it's probably best to put them all in one file so they can be cached. They'll take one hit up front, but that'll be it for the whole time they spend on your site.
At the end of the day it's up to you.
However, the less information each web page contains, the quicker it will be downloaded by the end user.
If you only include the JS files required for each page, your website is likely to be more efficient and streamlined.
If the files are needed on every page, put them in a single file. This will reduce the number of HTTP requests and will improve the response time (over many visits).
See the Yahoo best practices for other tips.
I would pretty much concur with what bigmattyh said: it does depend.
As a general rule, I try to aggregate the script files as much as possible, but if you have some scripts that are only used on a few areas of the site, especially ones that perform large DOM traversals on load, it would make sense to leave those in separate file(s).
e.g. if you only use validation on your contact page, why load it on your home page?
As an aside, you can sometimes sneak these files into interstitial pages, where not much else is going on, so when a user lands on an otherwise quite heavy page that needs it, it should already be cached - use with caution - but can be a handy trick when you have someone benchmarking you.
So, as few script files as possible, within reason.
If you are sending a 100K monolith, but only using 20K of it for 80% of the pages, consider splitting it up.
It depends pretty heavily on the way that users interact with your site.
Some questions for you to consider:
How important is it that your first page load be very fast?
Do users typically spend most of their time in distinct sections of the site with subsets of functionality?
Do you need all of the scripts ready the moment that the page is ready, or can you load some in after the page is loaded by inserting <script> elements into the page?
Having a clear idea of how users use your site, and what you want to optimize for, is important if you're really looking to push for performance.
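To illustrate the third question above, here is a minimal sketch of loading a script after the page is ready by inserting a <script> element (the file name, element id, and callback are illustrative):
// Hypothetical lazy loader: fetch a script only when it is actually needed,
// e.g. the first time the user focuses a field that requires it.
function loadScript(src, onLoad) {
    var script = document.createElement('script');
    script.src = src;
    script.async = true;
    if (onLoad) {
        script.onload = onLoad;
    }
    document.getElementsByTagName('head')[0].appendChild(script);
}
// Usage: only pull in the autocomplete plugin when the search box is focused.
// document.getElementById('search').addEventListener('focus', function () {
//     loadScript('/js/jquery.autocomplete.min.js', initAutocomplete);
// }, { once: true });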
However, my default method is to just concatenate and minify all of my javascript into one file. jQuery and jQuery.ui are small and have very low overhead. If the plugins you're using are having a 100ms effect on page load time, then something might be wrong.
A few things to check:
Is gzipping enabled on your HTTP server?
Are you generating static files with unique names as part of your deployment?
Are you serving static files with never ending cache expirations?
Are you including your CSS at the top of your page, and your scripts at the bottom?
Is there a better (smaller, faster) jQuery plugin that does the same thing?
I've basically gotten to the point where I reduce an entire web application to 3 files.
vendor.js
app.js
app.css
Vendor is neat because it has all the vendor styles in it too. I.e. I convert all my vendor CSS into minified CSS, then convert that to JavaScript and include it in the vendor.js file. That's after it has been through the Sass transform, too.
My vendor stuff does not update often; once in production it's pretty rare. When it does update, I just rename it to something like vendor_1.0.0.js.
Also there are minified versions of those files. In dev I load the unminified versions and in production I load the minified versions.
I use gulp to handle doing all of this. The main plugins that make this possible are....
gulp-include
gulp-css2js
gulp-concat
gulp-csso
gulp-html-to-js
gulp-mode
gulp-rename
gulp-uglify
node-sass-tilde-importer
This also includes my images, because I use Sass and have a Sass function that compiles images into data URLs in my CSS sheet.
// Custom node-sass functions: inline-image($file) reads an image file and
// returns it as a base64 data URL for embedding directly in the CSS.
function sassFunctions(options) {
    options = options || {};
    options.base = options.base || process.cwd();
    var fs = require('fs');
    var path = require('path');
    var types = require('node-sass').types;
    var funcs = {};
    funcs['inline-image($file)'] = function (file, done) {
        var filePath = path.resolve(options.base, file.getValue());
        var ext = filePath.split('.').pop();
        fs.readFile(filePath, function (err, data) {
            if (err) return done(err);
            // fs.readFile gives us a Buffer, so encode it directly as base64.
            var encoded = 'url(data:image/' + ext + ';base64,' + data.toString('base64') + ')';
            done(types.String(encoded));
        });
    };
    return funcs;
}
So my app.css has all of my application's images embedded in the CSS, and I can add the images to any chunk of styles I want. Typically I create a unique class for each image and just tag an element with that class when I want it to have that image. I avoid using img tags completely.
Additionally, using the html-to-js plugin I compile all of my HTML into the JS file as a template object keyed by the path to the HTML files, i.e. 'html\templates\header.html', and then using something like Knockout I can data-bind that HTML to an element, or to multiple elements.
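A rough sketch of how those compiled templates might be consumed with Knockout (the templates hash, key format, and component name are assumptions about this particular build setup):
// Assume the html-to-js step exposes a hash of path -> markup, e.g.:
// var templates = { 'html/templates/app.html': '<div>...</div>', ... };
// Register a Knockout component whose template comes straight from that hash.
ko.components.register('xyz-app', {
    viewModel: function (params) {
        this.apiRoot = params; // whatever the params attribute supplies
    },
    template: templates['html/templates/app.html']
});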
The end result is an entire web application that spins up off one "index.html" that has nothing in it but this:
<html>
  <head>
    <script src="dst/vendor.js"></script>
    <link rel="stylesheet" href="dst/app.css">
    <script src="dst/app.js"></script>
  </head>
  <body id="body">
    <xyz-app params="//xyz.com/api/v1"></xyz-app>
    <script>
      ko.applyBindings({}, document.getElementById("body"));
    </script>
  </body>
</html>
This kicks off my component "xyz-app", which is the entire application, and it doesn't have any server-side events. It's not running on PHP, .NET Core MVC, MVC in general, or any of that stuff. It's just basic HTML managed with a build system like Gulp, and everything it needs data-wise comes from REST APIs.
Authentication -> Rest Api
Products -> Rest Api
Search -> Google Compute Engine (python apis built to index content coming back from rest apis).
So I never have any HTML coming back from a server (just static files, which are crazy fast), and there are only three files to cache other than index.html itself. Web servers support default documents (index.html), so you'll just see "blah.com" in the URL, plus any query strings or hash fragments used to maintain state (routing, etc., for bookmarkable URLs).
Crazy quick, all depending on the JS engine running it.
Search optimization is trickier. It's just a different way of thinking about things, i.e. you have Google crawl your APIs, not your physical website, and you tell Google how to get to your website in each result.
So say you have a product page for ABC Thing with a product ID of 129. Google will crawl your products API, walk through all of your products, and index them. In each result your API returns a URL that tells Google how to get to that product on the website, i.e. "http://blah#products/129".
So when users search for "ABC thing" they see the listing, and clicking on it takes them to "http://blah#products/129".
I think search engines need to start getting smart like this, it's the future imo.
I love building websites like this because it gets rid of all the back-end complexity. You don't need Razor, or PHP, or Java, or ASPX web forms, or whatever; you get rid of those entire stacks. All you need is a way to write REST APIs (WebApi2, Java Spring, or whatever).
This splits web development into UI engineering, back-end engineering, and design, and creates a clean separation between them. You can have a UX team building the entire application and an architecture team doing all the REST API work; no need for full-stack devs this way.
Security isn't a concern either, because you can pass credentials on AJAX requests, and if everything is on the same domain you can just set your authentication cookie on the root domain and presto: automatic, seamless SSO with all your REST APIs.
Not to mention how much simpler server farm setup is. Load-balancing needs are much smaller and traffic capacity much higher; it's far easier to cluster REST API servers behind a load balancer than entire websites.
Just set up one nginx reverse proxy server to serve up your index.html and direct API requests to one of four REST API servers.
Api Server 1
Api Server 2
Api Server 3
Api Server 4
And your SQL boxes (replicated) just get load balanced from the four REST API servers (all using SSDs if possible):
Sql Box 1
Sql Box 2
All of your servers can be on an internal network with no public IPs; just make the reverse proxy server public, with all requests coming in to it.
You can load balance the reverse proxy servers with round-robin DNS.
This also means you only need one SSL cert, since it's one public domain.
If you're using Google Compute Engine for search and SEO, that's out in the cloud, so nothing to worry about there, just $.
If you like the code in separate files for development you can always write a quick script to concatenate them into a single file before minification.
One big file is better for reducing HTTP requests as other posters have indicated.
I also think you should go the one-file route, as the others have suggested. However, to your point on plugins eating up cycles by merely being included in your large js file:
Before you execute an expensive operation, use some checks to make sure you're even on a page that needs the operations. Perhaps you can detect the presence (or absence) of a dom node before you run the autocomplete plugin, and only initialize the plugin when necessary. There's no need to waste the overhead of dom traversal on pages or sections that will never need certain functionality.
A simple conditional before an expensive code chunk will give you the benefits of both the approaches you are deciding on.
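For example, a minimal sketch of that kind of guard (the selector and plugin options are illustrative):
// Only initialize the autocomplete plugin when the page actually contains
// a field that needs it; skip the DOM work everywhere else.
$(function () {
    var $searchBox = $('#search-box');
    if ($searchBox.length) {
        $searchBox.autocomplete({ source: '/api/suggestions' });
    }
});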
I tried breaking my JS into multiple files and ran into a problem. I had a login form, and I put its code (AJAX submission, etc.) in its own file. When the login was successful, the AJAX callback then called functions to display other page elements. Since these elements were not part of the login process, I put their JS code in a separate file. The problem is that JS in one file can't call functions in a second file unless the second file is loaded first (see Stack Overflow Q. 25962958), and so, in my case, the called functions couldn't display the other page elements. There are ways around this loading-sequence problem (see Stack Overflow Q. 8996852), but I found it simpler to put all the code in one larger file and clearly separate and comment sections of code that fall into the same functional group, e.g. keeping the login code separate and clearly commented as the login code.
