I'm fairly new to CodeIgniter and am developing a couple of sites at the moment. I'm sure people may have different opinions on this, but a general consensus, and an explanation as to why, would be useful.
When developing without frameworks such as CI, you traditionally put the various JS operations (listeners, jQuery calls, etc.) in the footer of the page.
With CI, the norm is to include a common footer and header, but if we then put all the JS for every page in this footer, we end up loading things that aren't needed on each page.
What's the best way to get around this / set this up?
I don't have any experience with it, but for my next project, I'm probably going to try Carabiner.
From the GitHub project:
Carabiner manages javascript and CSS assets. It will react differently depending on whether it is in a production or development environment. In a production environment, it will combine, minify, and cache assets. (As files are changed, new cache files will be generated.) In a development environment, it will simply include references to the original assets.
Carabiner allows you to do something like
// add a js file
$this->carabiner->js('scripts.js');
// add a css file
$this->carabiner->css('reset.css');
in your controller to add css or js files and then you output them in your header or footer with
// display css
$this->carabiner->display('css');
//display js
$this->carabiner->display('js');
I did implement something similar in the past, but this library looks promising, with a lot less effort than building my own. I would definitely check it out if I were you.
Using an application framework is no excuse to compromise your front-end code. Whoever told you that "With CI, the norm is to include a common footer & header" is a lazy developer.
What I've done in the past is something like this:
MY_html_helper.php
<?php
function js($js)
{
    // Accept either a single filename or an array of filenames.
    foreach ((array) $js as $script_src)
    {
        // Only prefix local files with the site's js/ path;
        // leave absolute http(s) URLs untouched.
        $js_base_path = '';
        if (strpos($script_src, 'http://') === false && strpos($script_src, 'https://') === false)
        {
            $js_base_path = base_url() . 'js/';
        }
        echo "<script src=\"{$js_base_path}{$script_src}\"></script>\n";
    }
}
?>
CONTROLLER
// many scripts
$this->load->view('_footer', array('js'=>array('whatever.js', 'plugin.js')));
// or a single script
$this->load->view('_footer', array('js'=>'jquery.cycle.lite.js'));
VIEW
<?php
if(isset($js))
{
    $this->load->helper('html');
    js($js);
}
?>
<script src="/js/script.js"></script>
Goes without saying, but feel free to customize it to suit your needs.
Many of the websites that I build are not that busy and the priority is ease of maintenance. For these cases, I just load all the scripts that I need, usually jQuery core and related plug-ins for image rotators and so on.
My view is that once the home page is loaded, the scripts get cached so the extra weight does not really affect the site performance all that much (especially for lower traffic sites).
However, a bigger problem has to do with jQuery actions that vary from page to page. In CodeIgniter, I use nested views in a template-style approach to rendering pages. I often find myself putting page-specific jQuery actions in a <script> tag after my page content section, but before my footer section. Not quite best practice, but at least my jQuery actions are close to the relevant page elements (usually forms). Ideally, I would create a JavaScript file specific to each page and load it from the relevant CodeIgniter controller. I usually do this as part of refactoring from version 1 to 2, once the website/application has stabilized.
As an aside, I am much more concerned about keeping my CSS files organized and modular. My CSS tends towards chaos a lot more quickly than my JS files.
Take-away: If you have a busy site and performance is critical, build page-by-page script loading into your controllers and maintain a JS file for each page in addition to the global JS file (and similarly for CSS).
If your site is less busy, just load all the scripts and CSS files in the primary page template (or the common header and footer files) as needed.
Having said that, I would rather spend a few minutes coding things right than have a major mess to deal with two years down the road when a major site upgrade is requested.
As professionals, we strive to improve our skill sets and deployment strategies, but not every website project will be perfectly executed, since all budgets are finite and time is limited. At the end of the day, we get paid when the work is done, even if it's not 100% perfect.
I would say there are a few things you could do. You could have a number of different footers linking to different scripts and change the footer link (footer-home.php or footer-about.php for example) in your controller. Then just keep general script links in your header.
You could also write scripts for specific pages and just include them in the body part of the page (home.php or whatever you are squeezing in between the header and footer).
I suppose it would depend on how heavy the scripts are and how often they are being used.
I have a multiple-page website using RequireJS, which loads a bootstrap file (boot.js), which then requires app.js.
app.js handles all the logic, and all other module initialization happens through app.initModule() (which is just a require() call wrapper).
I also have an app.loadPageJS() to load page-specific JS files (based on window.location.pathname; for example, www.domain.com/path/to/file.html would auto-load /_assets/js/pages/path/to/file.js).
This feature can be turned on/off, and overridden by adding a class of "no-auto-load" or "auto-load" to the body, respectively.
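The poster's implementation isn't shown, but a minimal sketch of how such a loadPageJS could work (the names and the autoLoadEnabled flag are illustrative, not the actual code) might be:
// Illustrative only - one way to auto-load a page-specific module with RequireJS.
app.loadPageJS = function () {
    var body = document.body;

    // 'no-auto-load' / 'auto-load' classes on <body> override the global setting.
    if (body.classList.contains('no-auto-load')) return;
    if (!app.autoLoadEnabled && !body.classList.contains('auto-load')) return; // hypothetical flag

    // Map /path/to/file.html to /_assets/js/pages/path/to/file.js
    var path = window.location.pathname.replace(/\.html?$/, '');
    if (path.slice(-1) === '/') path += 'index';
    require(['/_assets/js/pages' + path + '.js']);
};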
Now, my approach isn't robust enough. For one, URL rewriting would break the mechanism, and for another, if loadPageJS is turned off, I can't include a page-specific JS file unless I have access to the body tag (in the case of sites using templating systems, adding a class to the body tag isn't always an option).
What are other ways to include page specific code? I'd rather avoid the following:
adding page specific code to a global.js file and doing if checks and only running certain code sets
using a pageName variable (which would essentially be similar to the above)
Thanks in advance.
If you have different modules on the page sectioned by unique IDs (a newsletter module wrapped within a div with an ID of 'newsletter', etc.), you could test for the existence of the module element in the DOM and conditionally load the JS file necessary to run that module. So rather than being page-specific, it is module-specific.
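A rough sketch of that idea, using jQuery's $.getScript (the element ID and file path are just examples):
// Load the newsletter module's script only when its markup is actually on the page.
$(function () {
    if (document.getElementById('newsletter')) {
        $.getScript('/js/modules/newsletter.js');   // example path
    }
});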
I am building a learning application where there are a bunch of different page types that a learner will go through and do activities. It will be a SCORM-compliant learning object.
This is the structure I have so far...
application/
models/
scorm.js
sequence.js
session.js
pagetypes/
multichoice.js
truefalse.js
basic.js
utilities/
jquery.js
api.js
My pagetypes do the viewing and the controlling; should I separate these out? The reason I have combined them is so that when I build a new page type, I can just drop it into that folder and it will get recognised straight away by the code.
What do you guys think? amidoinrite?
I'm guessing you're separating out methods based on type of page interactions.
I don't see any reason not to do it your way. So long as everything the SCO needs is in the manifest, you can subdivide your scripts however you want. It might save a bit of load time to split out the page types, but only if you are loading just what each HTML page needs and you are actually navigating between pages within a SCO session. If you're loading all the script into a single HTML page and then dynamically changing the content of page divs, then your scripts are all loaded once anyway, and you may as well have one minified file for all page type scripts.
I would probably go with the latter, and tie interactions to classes or IDs in the markup. One file, less work to minify, and I can reuse it in other packages without having to make sure that I have every page type I need.
With JavaScript it can be tricky to separate it out since it lives so closely to the view. As long as the data is separated from the actual view (which it looks like it is in your example) it will be a good design. I would argue that the pagetypes are more controllers and the HTML is the view. The most important part is to keep the model separated from the view. Unless you're trying to build reusable JavaScript/HTML components it's ok for pagetypes to blur the role of controller and view.
When I download new plugins, e.g. jQuery plugins, I put them in a js folder, and the same for css and img,
so all my different applications share them. But where do I put the js/img and css for a specific application/website? In every website?
And where should I put my ajax-call .php files?
EDIT: Are there any guides that could give me a clean and neat file structure?
I normally keep a file structure for javascripts as follows:
- js
- jQuery
- flot
- chilli
- processing
- closure
- typical_library
- js
- css
- img
By keeping separate folders for each library/plugin (including the relevant CSS and images if need be), the pain of maintenance during upgrades is lessened. There is one more advantage: predictable folder structures can help with auto-discovery of JavaScript base directories.
For ajax-call files (since I mostly use an MVC pattern), I keep them in the controller files. (I mostly use CodeIgniter.) Some people would keep them in views; however, if the ajax call involves any business logic, it is best to stick it in the controller files.
In general, minimize the number of files that live outside of these folders.
It's entirely up to you. But what I do is put common resource files that get used by lots of pages in central locations, e.g. /js is where the javascript libraries go. My arrows go in /arrows.
But if a given resource is specific to only one page, e.g. foo_pic.png is only ever used by foo.php, then I keep the files together and name them so they list together alphabetically.
So, as you see, I don't prefer structuring only according to file type. But that's just me.
Outside of the DocumentRoot, I put my php include files under one directory and they are all suffixed .inc.php. HTML templates are organized under another dir.
My web application uses jQuery and some jQuery plugins (e.g. validation, autocomplete). I was wondering if I should stick them into one .js file so that it could be cached more easily, or break them out into separate files and only include the ones I need for a given page.
I should also mention that my concern is not only the time it takes to download the .js files but also how much the page slows down based on the contents of the .js file loaded. For example, adding the autocomplete plugin tends to slow down the response time by 100ms or so from my basic testing even when cached. My guess is that it has to scan through the elements in the DOM which causes this delay.
I think it depends how often they change. Let's take this example:
JQuery: change once a year
3rd party plugins: change every 6 months
your custom code: change every week
If your custom code represents only 10% of the total code, you don't want users to download the other 90% every week. You would split it into at least two JS files: the jQuery + plugins, and your custom code. Now, if your custom code represents 90% of the full size, it makes more sense to put everything in one file.
When choosing how to combine JS files (and same for CSS), I balance:
relative size of the file
number of updates expected
Common but relevant answer:
It depends on the project.
If you have a fairly limited website where most of the functionality is re-used across multiple sections of the site, it makes sense to put all your script into one file.
In several large web projects I've worked on, however, it has made more sense to put the common site-wide functionality into a single file and put the more section-specific functionality into their own files. (We're talking large script files here, for the behavior of several distinct web apps, all served under the same domain.)
The benefit to splitting up the script into separate files, is that you don't have to serve users unnecessary content and bandwidth that they aren't using. (For example, if they never visit "App A" on the website, they will never need the 100K of script for the "App A" section. But they would need the common site-wide functionality.)
The benefit to keeping the script under one file is simplicity. Fewer hits on the server. Fewer downloads for the user.
As usual, though, YMMV. There's no hard-and-fast rule. Do what makes most sense for your users based on their usage, and based on your project's structure.
If people are going to visit more than one page in your site, it's probably best to put them all in one file so they can be cached. They'll take one hit up front, but that'll be it for the whole time they spend on your site.
At the end of the day it's up to you.
However, the less each web page has to load, the quicker it will be downloaded by the end user.
If you only include the JS files required for each page, your website is likely to be more efficient and streamlined.
If the files are needed on every page, put them in a single file. This will reduce the number of HTTP requests and will improve response times (for sites with lots of visits).
See the Yahoo performance best practices for other tips.
I would pretty much concur with what bigmattyh said, it does depend.
As a general rule, I try to aggregate the script files as much as possible, but if you have some scripts that are only used on a few areas of the site, especially ones that perform large DOM traversals on load, it would make sense to leave those in separate file(s).
e.g. if you only use validation on your contact page, why load it on your home page?
As an aside, you can sometimes sneak these files into interstitial pages, where not much else is going on, so when a user lands on an otherwise quite heavy page that needs it, it should already be cached - use with caution - but can be a handy trick when you have someone benchmarking you.
So, as few script files as possible, within reason.
If you are sending a 100K monolith, but only using 20K of it for 80% of the pages, consider splitting it up.
It depends pretty heavily on the way that users interact with your site.
Some questions for you to consider:
How important is it that your first page load be very fast?
Do users typically spend most of their time in distinct sections of the site with subsets of functionality?
Do you need all of the scripts ready the moment that the page is ready, or can you load some in after the page is loaded by inserting <script> elements into the page?
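On that last point, injecting a non-critical script after the load event is straightforward (the file name here is just an example):
// Defer a non-critical script until the page has finished loading.
window.addEventListener('load', function () {
    var s = document.createElement('script');
    s.src = '/js/non-critical-bundle.js';   // example file name
    s.async = true;
    document.body.appendChild(s);
});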
Having a good idea of how users use your site, and what you want to optimize for is a good idea if you're really looking to push for performance.
However, my default method is to just concatenate and minify all of my javascript into one file. jQuery and jQuery.ui are small and have very low overhead. If the plugins you're using are having a 100ms effect on page load time, then something might be wrong.
A few things to check:
Is gzipping enabled on your HTTP server?
Are you generating static files with unique names as part of your deployment?
Are you serving static files with never ending cache expirations?
Are you including your CSS at the top of your page, and your scripts at the bottom?
Is there a better (smaller, faster) jQuery plugin that does the same thing?
I've basically gotten to the point where I reduce an entire web application to 3 files.
vendor.js
app.js
app.css
Vendor is neat because it has all the vendor styles in it too: I convert all my vendor CSS into minified CSS, then I convert that to JavaScript and include it in the vendor.js file (after it has been run through Sass as well).
Because my vendor stuff does not update often, changes in production are pretty rare. When it does update, I just rename it to something like vendor_1.0.0.js.
Also there are minified versions of those files. In dev I load the unminified versions and in production I load the minified versions.
I use gulp to handle doing all of this. The main plugins that make this possible are:
gulp-include
gulp-css2js
gulp-concat
gulp-csso
gulp-html-to-js
gulp-mode
gulp-rename
gulp-uglify
node-sass-tilde-importer
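For illustration, a stripped-down gulp task along these lines (the task name and file paths are made up, not the poster's actual gulpfile) might bundle and minify the app scripts like this:
// gulpfile.js - hypothetical sketch only
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var rename = require('gulp-rename');

// Concatenate all app scripts into app.js, then write a minified copy alongside it.
gulp.task('app-js', function () {
    return gulp.src('src/js/**/*.js')            // source path is illustrative
        .pipe(concat('app.js'))
        .pipe(gulp.dest('dst'))                  // unminified build for development
        .pipe(uglify())
        .pipe(rename({ suffix: '.min' }))
        .pipe(gulp.dest('dst'));                 // app.min.js for production
});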
Now this also includes my images, because I use Sass and I have a Sass function that compiles images into data URLs in my CSS sheet.
function sassFunctions(options) {
    options = options || {};
    options.base = options.base || process.cwd();

    var fs = require('fs');
    var path = require('path');
    var types = require('node-sass').types;
    var funcs = {};

    // Custom Sass function: inline-image($file) reads the image off disk and
    // returns it as a base64 data URL so it can be embedded directly in the CSS.
    funcs['inline-image($file)'] = function (file, done) {
        var filePath = path.resolve(options.base, file.getValue());
        var ext = filePath.split('.').pop();
        fs.readFile(filePath, function (err, data) {
            if (err) return done(err);
            data = Buffer.from(data).toString('base64');
            data = 'url(data:image/' + ext + ';base64,' + data + ')';
            done(types.String(data));
        });
    };

    return funcs;
}
So my app.css will have all of my application's images embedded in the CSS, and I can attach an image to any chunk of styles I want. Typically I create a unique class for each image and just put that class on whatever element should show that image. I avoid using image tags completely.
Additionally, using the html-to-js plugin, I compile all of my HTML into the JS file as a template object hashed by the path to the HTML files, i.e. 'html\templates\header.html', and then using something like Knockout I can data-bind that HTML to an element, or multiple elements.
The end result is I can end up with an entire web application that spins up off one "index.html" that doesn't have anything in it but this:
<html>
<head>
    <script src="dst/vendor.js"></script>
    <link rel="stylesheet" href="dst/app.css">
    <script src="dst/app.js"></script>
</head>
<body id="body">
    <xyz-app params="//xyz.com/api/v1"></xyz-app>
    <script>
        ko.applyBindings({}, document.getElementById("body"));
    </script>
</body>
</html>
This will kick off my component "xyz-app", which is the entire application, and it doesn't have any server-side events. It's not running on PHP, .NET Core MVC, MVC in general, or any of that stuff. It's just static HTML managed with a build system like Gulp, and everything it needs data-wise comes from REST APIs.
Authentication -> REST API
Products -> REST API
Search -> Google Compute Engine (Python APIs built to index content coming back from the REST APIs)
So I never have any HTML coming back from a server (just static files, which are crazy fast), and there are only 3 files to cache other than index.html itself. Web servers support default documents (index.html), so you'll just see "blah.com" in the URL, plus any query strings or hash fragments used to maintain state (routing for bookmarkable URLs, etc.).
Crazy quick, all depending on the JS engine running it.
Search optimization is trickier. It's just a different way of thinking about things: you have Google crawl your APIs, not your physical website, and you tell Google how to get to your website in each result.
So say you have a product page for ABC Thing with a product ID of 129. Google will crawl your products API to walk through all of your products and index them. In there, your API returns a URL in each result that tells Google how to get to that product on the website, i.e. "http://blah#products/129".
So when users search for "ABC Thing" they see the listing, and clicking on it takes them to "http://blah#products/129".
I think search engines need to start getting smart like this; it's the future, IMO.
I love building websites like this because it gets rid of all the back-end complexity. You don't need Razor, or PHP, or Java, or ASPX web forms, or whatever; you get rid of those entire stacks. All you need is a way to write REST APIs (WebApi2, Java Spring, etc.).
This separates web development into UI engineering, back-end engineering, and design, and creates a clean separation between them. You can have a UX team building the entire application and an architecture team doing all the REST API work; no need for full-stack devs this way.
Security isn't a concern either, because you can pass credentials on AJAX requests, and if your stuff is all on the same domain you can just set your authentication cookie on the root domain and presto (automatic, seamless SSO with all your REST APIs).
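As a sketch of that idea (the endpoint is made up, and the root-domain cookie is assumed to already be set; the API must also allow credentials via CORS):
// Send the shared root-domain auth cookie along with a cross-subdomain API call.
$.ajax({
    url: '//api.xyz.com/v1/products',          // hypothetical endpoint
    xhrFields: { withCredentials: true },      // include cookies on the cross-origin request
    success: function (products) {
        // render products...
    }
});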
Not to mention how much simpler server farm setup is. Load-balancing needs are a lot lower and traffic capacity a lot higher. It's way easier to cluster REST API servers behind a load balancer than entire websites.
Just set up one nginx reverse proxy server to serve up your index.html and also direct API requests to one of four REST API servers.
Api Server 1
Api Server 2
Api Server 3
Api Server 4
And your SQL boxes (replicated) just get load balanced from the four REST API servers (all using SSDs if possible)
Sql Box 1
Sql Box 2
All of your servers can be on an internal network with no public IPs; just make the reverse proxy server public, with all requests coming in to it.
You can load balance reverse proxy servers on round robin DNS.
This means you only need one SSL cert, too, since it's one public domain.
If you're using Google Compute Engine for search and SEO, that's out in the cloud, so there's nothing to worry about there; it just costs money.
If you like the code in separate files for development you can always write a quick script to concatenate them into a single file before minification.
One big file is better for reducing HTTP requests as other posters have indicated.
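For example, the quick concatenation script mentioned above could be as simple as this Node sketch (file names are placeholders):
// concat.js - hypothetical helper, run before your minifier
var fs = require('fs');

var files = ['js/jquery.plugin-a.js', 'js/jquery.plugin-b.js', 'js/site.js']; // placeholder list
var bundle = files.map(function (f) { return fs.readFileSync(f, 'utf8'); }).join(';\n');
fs.writeFileSync('js/all.js', bundle);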
I also think you should go the one-file route, as the others have suggested. However, to your point on plugins eating up cycles by merely being included in your large js file:
Before you execute an expensive operation, use some checks to make sure you're even on a page that needs the operations. Perhaps you can detect the presence (or absence) of a dom node before you run the autocomplete plugin, and only initialize the plugin when necessary. There's no need to waste the overhead of dom traversal on pages or sections that will never need certain functionality.
A simple conditional before an expensive code chunk will give you the benefits of both the approaches you are deciding on.
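For example (the selector and options below are made up for illustration; jQuery UI autocomplete is assumed):
// Only initialize the autocomplete plugin on pages that actually contain the field.
$(function () {
    var $search = $('#search-input');                        // hypothetical selector
    if ($search.length) {
        $search.autocomplete({ source: '/search/suggest' }); // hypothetical endpoint
    }
});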
I tried breaking my JS into multiple files and ran into a problem. I had a login form, the code for which (AJAX submission, etc.) I put in its own file. When the login was successful, the AJAX callback then called functions to display other page elements. Since these elements were not part of the login process, I put their JS code in a separate file. The problem is that JS in one file can't call functions in a second file unless the second file is loaded first (see Stack Overflow Q. 25962958), and so, in my case, the called functions couldn't display the other page elements. There are ways around this loading-sequence problem (see Stack Overflow Q. 8996852), but I found it simpler to put all the code in one larger file and clearly separate and comment the sections of code that fall into the same functional group, e.g. keep the login code together and clearly commented as the login code.