I'm currently working on a 100,000-page website. The current design has been in place for over 5 years, and successive updates have resulted in four 12,000+ line CSS files.
Obviously the CSS has become unwieldy: many of the styles are duplicated, and it's nearly impossible to know how many of them are actually being used.
We can minify, but this isn't really tackling the problem, just delaying the inevitable re-work.
So, three questions: are there any tools out there for...
de-duplicating large CSS files?
scanning the site and logging CSS class and ID use?
could such scanning be achieved with a script of some kind, Greasemonkey maybe?
http://unused-css.com/ does some of what you ask, and they have this to say:
Latish Sehgal has written a Windows application to find and remove unused CSS classes. I haven't tested it, but from the description you have to provide the path of your HTML files and one CSS file. The program will then list the unused CSS selectors. From the screenshot, it looks like there is no way to export this list or download a new clean CSS file. It also looks like the tool is limited to one CSS file; if you have multiple files you want to clean, you have to clean them one by one.
Dust-Me Selectors is a Firefox extension (for v1.5 or later) that finds unused CSS selectors. It extracts all the selectors from all the stylesheets on the page you're viewing, then analyzes that page to see which of those selectors are not used. The data is then stored so that when testing subsequent pages, selectors can be crossed off the list as they're encountered. This tool is supposed to be able to spider a whole website, but I unfortunately could not make it work. Also, I don't believe you can configure and download the CSS file with the styles removed.
Liquidcity CSS cleaner is a PHP script that uses regular expressions to check the styles of one page. It will tell you which classes aren't present in the HTML code. I haven't tested this solution.
Deadweight is a CSS coverage tool. Given a set of stylesheets and a set of URLs, it determines which selectors are actually used and lists which can be "safely" deleted. This tool is a Ruby module and will only work with Rails websites. The unused selectors have to be removed from the CSS file manually.
Helium CSS is a JavaScript tool for discovering unused CSS across many pages on a website. You first have to include the JavaScript file on the pages you want to test. Then you have to call a Helium function to start the cleaning.
UnusedCSS.com is a web application with an easy-to-use interface. Type the URL of a site and you will get a list of CSS selectors. For each selector, a number indicates how many times it is used. This service has a few limitations: the @import statement is not supported, and you can't configure and download the new clean CSS file.
CSSESS is a bookmarklet that helps you find unused CSS selectors on any site. This tool is pretty easy to use, but it won't let you configure and download clean CSS files; it will only list the unused selectors.
If you are using Visual Studio, this extension helps to auto-merge CSS classes.
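To the third part of the question (a Greasemonkey-type script): yes, something along these lines is possible. The sketch below can be pasted into the browser console (or wrapped in a user script) to log the selectors in the current page's stylesheets that match nothing on that page. Treat it purely as a starting point: state-dependent selectors such as :hover will be reported as unused, and the results from many pages would have to be merged before deleting anything.

var unused = [];
for (var i = 0; i < document.styleSheets.length; i++) {
  var rules;
  try {
    rules = document.styleSheets[i].cssRules || [];
  } catch (e) {
    continue; // cross-origin stylesheets can't be inspected
  }
  for (var j = 0; j < rules.length; j++) {
    var selector = rules[j].selectorText;
    if (!selector) { continue; } // skip @media, @font-face and other non-style rules
    try {
      if (document.querySelector(selector) === null) {
        unused.push(selector);
      }
    } catch (e) {
      // some selectors (e.g. vendor-prefixed ones) can't be queried; ignore them
    }
  }
}
console.log(unused.join('\n'));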
For my about page, I'm just using a template from http://html5up.net/license for two reasons. One, there won't be many people checking my about page anyway. Two, I would rather spend my time on other things and use a free template.
Because I'm using code I didn't write for this, I want it to be kept separate; I don't want it to interfere with my CSS and JavaScript folders. So I put the long CSS and JavaScript code into the one HTML file. The readability of the code can't get much worse than this, but it's just an about page that I won't change frequently.
My question is: if I add all of them into one HTML page, will it be a problem for my website's functionality?
Some people actually recommend embedding all JavaScript and CSS directly in the page, even if it's repeated across many pages of the website.
The reason is that it reduces the number of HTTP requests to the server and makes your website's load time much faster, especially when a user hits your website from a Google search result (one-time visitors). For web portals it is not recommended, because the user will continually browse through pages, and the JavaScript/CSS is expected to be cached in separate files.
A page with many external JavaScript/CSS references can be noticeably slower to load. If all those files were embedded into the page, the load time would decrease; only after loading many pages on the same site would caching of the repeated code start to pay off.
I would suggest adding your code to an external stylesheet / script and linking that stylesheet and script in your document, e.g.
<link rel="stylesheet" type="text/css" href="about-page.css">
<script src="about-page.js"></script>
This way you avoid messy code, and some common pitfalls that come with inline CSS / JS.
Or you can add to the existing CSS / JS files and add comments organizing your code, indicating what you added, etc...
There shouldn't be any huge problem, apart from it not being easily managed, but you really need to understand the 'cascade' in 'Cascading Style Sheets' to see what potential problems may arise.
It's important to remember that styles declared in the page itself will usually override rules of equal specificity from an external stylesheet, because they tend to come later in the source and the later rule wins in the cascade. So if a {text-decoration: none;} is defined in your external stylesheet, and you define a {text-decoration: underline;} in a <style> block in your HTML document, your links will now be underlined.
To learn more about the 'cascading' aspect of CSS, see the Mozilla Developer Network: https://developer.mozilla.org/en-US/docs/Web/CSS/Cascade
Well, sometimes using JavaScript or other bad practices to get output that is merely "functional" may affect your SEO score, which search engines use when deciding whether to recommend your page to users searching for something you offer.
Apart from this, you will not get the full benefit of techniques like gzip compression and caching of separate files, which greatly affect page load speed.
You might want to check:
Why page insights
gzip compression and caching
Website vs Web application
So, simply put, a website is not only about functionality; it is also about reachability, usability, and being recommended by search engines.
My question is: if I add all of them into one HTML page, will it be a problem for my website's functionality?
No. The effective result would be a Single Page Application.
I am creating a site that uses JavaScript and CSS from jQuery and jQuery Mobile. Right now I am not hosting any of the files, but rather referencing URLs on the jQuery site. This has the disadvantage that I have to load resources from jQuery every time the page loads, and I cannot alter the files myself. I want to switch to hosting this stuff locally and would like to go about it in an organized and scalable fashion. Is there any better way to do this than just copying the code from the links and pasting it into my own local .css and .js files?
Modifying the jQuery source is not ideal, as you would be required to maintain it with every new release. If there is additional functionality you would like to add, it is better to create jQuery plugins, as sketched below. As for managing your project's CSS and JavaScript files, most IDEs will generate a series of folders following the convention of placing JavaScript and CSS files in a scripts and a styles directory, respectively, under your project root. On top of this, it is wise to catalog your changes with some form of source control, such as Git. There is plenty of documentation on the web on how to use this tool, and explaining how to use any form of source control is far too broad for an answer on Stack Overflow. There is a certain level of mental discipline you must maintain, however, especially if you are manually managing the structure of your web project. This will come with time and experience as to what works best for you.
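As a rough illustration of the plugin route (the plugin name, file path and option below are invented for the example, not part of jQuery itself):

// scripts/jquery.highlight.js - a minimal jQuery plugin kept in its own file
// instead of editing the jQuery source
(function ($) {
  $.fn.highlight = function (options) {
    var settings = $.extend({ color: 'yellow' }, options);
    // return 'this' so the plugin stays chainable
    return this.each(function () {
      $(this).css('background-color', settings.color);
    });
  };
}(jQuery));

// Usage on a page, once jQuery and the plugin file have both loaded:
// $('.note').highlight({ color: 'orange' });

Because the plugin lives in its own file under scripts/, upgrading jQuery later is just a matter of swapping the library file.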
I've launched a redesign of our website and I'm using quite a bit of Javascript for the first time.
I've learned that I should be combining all my JavaScript and CSS into one file each (obviously), but while I know I can combine the CSS without problems, I'm not sure about the JavaScript.
I have to load:
jquery.min.js <-- I load the top two from ajax.googleapis.com; is that a good idea?
jquery-ui.min.js
javascript for Facebook
some for google plus button
same for twitter
some for google analytics
then some inline stuff to hide divs which javascript users shouldn't see and that type of thing.
you can see it here: traditionalirishgifts.com
So can I just copy and paste the contents of all these files into one big file, find some way to minify it (haven't looked into that fully yet), load this one file right at the bottom of my page before the closing </body> tag, and bingo?
I'd use this tool: http://jscompress.com/
JSCompress.com is an online javascript compressor that allows you to compress and minify your javascript files. Compressed javascript files are ideal for production environments since they typically reduce the size of the file by 30-90%. Most of the filesize reduction is achieved by removing comments and extra whitespace characters that are not needed by web browsers or visitors.
You should always be able to merge all your external JavaScript files into one. You can use a server-side compressor to cache it and serve it as one file. It does put some constraints on the files, like which file should load first, etc. Also, if there is a syntax error anywhere, the whole combined file can break.
Keep in mind that third-party code, like the code from Google, can't be mixed in. Usually there is some kind of authentication going on (or an API key in the URL), and if you try to cache that code it will stop working after a while. So you do need to keep those separate.
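For merging your own files, here is a bare-bones sketch using plain Node (no extra packages); the file names are placeholders, and the third-party snippets (Analytics, Facebook, Twitter, +1 button) are deliberately left out so they keep loading as separate scripts:

// build.js - concatenates the listed files, in order, into one file
var fs = require('fs');

var files = [
  'js/jquery.min.js',
  'js/jquery-ui.min.js',
  'js/site.js' // your own custom code moved out of the page into a file
];

var combined = files
  .map(function (name) { return fs.readFileSync(name, 'utf8'); })
  .join(';\n'); // the ';' guards against a file that omits its trailing semicolon

fs.writeFileSync('js/all.js', combined);
console.log('Wrote js/all.js (' + combined.length + ' bytes)');

The combined file can then be run through a minifier (such as the JSCompress tool mentioned above) before it goes live, and referenced with a single script tag at the bottom of the page.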
Actually, I am writing some JavaScript for testing purposes.
I want to use multiple JavaScript files in which functions are defined.
Is there any way to achieve this?
I think a makefile is the way, but I don't know how to do that either.
I want to generate a makefile.
Can anybody suggest how this should be done?
Creating a makefile is an interesting solution, but you can also use the require.js library to set the sequence of loaded scripts.
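A minimal require.js sketch, assuming files named js/main.js, js/lib/jquery.min.js and js/app/page.js (all of these names are illustrative, not from the question):

// js/main.js - loaded via <script data-main="js/main" src="js/require.js"></script>
require.config({
  baseUrl: 'js',
  paths: {
    jquery: 'lib/jquery.min' // the .js extension is added automatically
  }
});

// the modules listed here are loaded (in dependency order) before the callback runs
require(['jquery', 'app/page'], function ($, page) {
  page.init();
});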
If you're looking to combine multiple scripts into one, you can use the Boilerplate build script.
Why use it? It's not only about scripts.
Combines and minifies JavaScript (via YUI Compressor)
Inlines stylesheets specified using @import in your CSS
Combines and minifies CSS
Optimizes JPGs and PNGs (with jpegtran & optipng)
Removes development-only code (any remaining console.log statements, profiling, test suite)
Basic to aggressive html minification (via htmlcompressor)
Autogenerates a cache manifest file (and links from the html tag) when you enable a property in the project config file.
Revises the file names of your assets so that you can use heavy caching (1 year expires).
Upgrades the .htaccess to use heavier caching
Updates your HTML to reference these new hyper-optimized CSS + JS files
Updates your HTML to use the minified jQuery instead of the development version
Removes unneeded references from HTML (like a root folder favicon)
Runs your JavaScript through a code quality tool (optional)
If you have several separate files and you want to append them all into one file before using it on your website, then any script or tool is good: Make, Rake, Cake, or your own, in your language of choice. If it goes to the web, it should also be compressed. How to do that is beyond the scope of this question; there are loads of articles on the web about all those topics. You are encouraged to come back when (if) you hit a more detailed problem.
Do you localize your javascript to the page, or have a master "application.js" or similar?
If it's the latter, what is the best practice to make sure your .js isn't executing on the wrong pages?
EDIT: by javascript I mean custom javascript you write as a developer, not js libraries. I can't imagine anyone would copy/paste the jQuery source into their page but you never know.
Putting all your JS in one file can help performance (only one request versus several). And if you're using a content distribution network like Akamai, it improves your cache hit ratio. Also, always put inline JS at the very bottom of the page (just above the closing </body> tag) because it is executed synchronously and can delay your page from rendering.
And yes, if one of the js files you are using is also hosted at google, make sure to use that one.
Here's my "guidelines". Note that none of these are formal, they just seem like the right thing to do.
All shared JS code lives in the SITE/javascripts directory, but it's loaded in 'tiers'
For site-wide stuff (like jquery, or my site wide application.js), the site wide layout (this would be a master page in ASP.net) includes the file. The script tags go at the top of the page.
There's also 'region-wide' stuff (e.g. JS code which is only needed in the admin section of the site). These regions either have a common layout (which can then include the script tags) or will render a common partial, and that partial can include the script tags.
For less-shared stuff (say my library that's only needed in a few places) then I put a script tag in those HTML pages individually. The script tags go at the top of the page.
For stuff that's only relevant to a single page, I just write inline JavaScript. I try to keep it as close to its "target" as possible. For example, if I have some onclick JS for a button, the script tag will go below the button.
For inline JS that doesn't have a target (eg: onload events) it goes at the bottom of the page.
So, how does something get into a localised library, or a site-wide library?
The first time you need it, write it inline
The next time you need it, pull the inline code up to a localised library
If you're referencing some code in a localized library from (approximately) 3 or more places, pull the code up to a region-wide library
If it's needed from more than one region, pull it up to a site-wide library.
A common complaint about a system such as this is that you wind up with 10 or 20 small JS files, where 2 or 3 large JS files would perform better from a networking point of view.
However, both Rails and ASP.NET have features which handle combining and caching multiple JS files into one or more 'super' JS files for production situations.
I'd recommend using features like this rather than compromising the quality/readability of the actual source code.
Yahoo!'s Exceptional Performance Team has some great performance suggestions for JavaScript. Steve Souders used to be on that team (he's now at Google) and he's written some interesting tools that can help you decide where to put JavaScript.
I try to avoid putting javascript functions on the rendered page. In general, I have an application.js (or root.js) that has generic functionality like menu manipulation. If a given page has specific javascript functionality, I'll create a .js file to handle that code and mimic the dir structure on how to get to that file (also using the same name as the rendered file).
In other words, if the rendered page is in public/dir1/dir2/mypage.html, the js file would be in public/js/dir1/dir2/mypage.js. I've found this style works well for me, especially when doing templating on a site. I build the template engine to "autoload" my resources (css and js) by taking the request path and doing some checking for the css and js equivalents in the css and js directories on the root.
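The mapping itself is just string manipulation; here is a small sketch of the idea in JavaScript (in my setup the real check happens server-side in the template engine, so the function name and paths are purely illustrative):

function assetPathsFor(requestPath) {
  // e.g. '/dir1/dir2/mypage.html' -> '/js/dir1/dir2/mypage.js' and '/css/dir1/dir2/mypage.css'
  var base = requestPath.replace(/\.html?$/, '');
  return {
    js: '/js' + base + '.js',
    css: '/css' + base + '.css'
  };
}

// assetPathsFor('/dir1/dir2/mypage.html')
// => { js: '/js/dir1/dir2/mypage.js', css: '/css/dir1/dir2/mypage.css' }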
Personally, I try to include several Javascript files, sorted by module (like YUI does). But once in a while, when I'm writing essentially a one-liner, I'll put it on the page.
Best practice is probably to put it on Google's servers.
(Depends what you mean by "your" javascript though I suppose :)
This is something I've been wrestling with, too. I've ended up using my back-end PHP script to intelligently build a list of required JS files based on the content requested by the user.
By organizing my JS files into a repository that contains multiple files organized by purpose (be they general-use, focused on a single page, a single section, etc.), I can use the chain of events that builds the page on the back-end to selectively choose which JS files get included based on need (see the example below).
This is after implementing my web app without giving this aspect of the code enough thought. Now, I should also add that the javascript I use enhances but does not form the foundation of my site. If you're using something like SproutCore or Ext I imagine the solution would be somewhat different.
Here's an example for a PHP-driven website:
Say your site is divided into sections, and one of those sections is a calendar. The user navigates to "index.php?module=calendar&action=view". If the PHP code is class-based, the routing algorithm instantiates the CalendarModule class, which is based on 'Module' and has a virtual method 'getJavascript'. This returns the JavaScript files that are required to perform the action 'view' on the 'calendar' module. It can also take into account any other special requirements and return JS files for those as well. The rendering code can check that there are no duplicate JS files when the JavaScript include list is built for the final page. So the getJavascript method returns an array like this:
return array('prototype.js','mycalendar.js');
Note that this, or some form of this, is not a new idea. But it took me some time to think it important enough to go to the trouble.
If it's only a few hundred bytes or less, and doesn't need to be used anywhere else, I would probably inline it. The network overhead for another http request will likely outweigh any performance gains that you get by pulling it out of the page.
If it needs to be used in a few places, I would put the function(s) into a common external file, and call it from an inline script as needed.
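For example (the helper name and element ID below are hypothetical):

// shared.js - a small helper kept in a common external file
function toggleSection(id) {
  var el = document.getElementById(id);
  if (el) {
    el.style.display = (el.style.display === 'none') ? '' : 'none';
  }
}

// Then any page that needs it only carries a one-line inline call:
//   <script src="/js/shared.js"></script>
//   <script>toggleSection('promo-banner');</script>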
If you are targeting an iPhone, try to keep anything that you want cached under 25 KB.
There are no hard-and-fast rules really; every approach has pros and cons. I would strongly recommend you check out the articles on Yahoo's developer section so you can make informed decisions on a case-by-case basis.