So I have been playing around with a home project that includes a lot of js. I have been using Script# to write my own library, etc. Personally I wouldn't write a lot of js if I didn't have a tool like Script# or GWT to help maintain it.
So far it includes these external libraries:
– ASP.NET AJAX
– ExtJS
– Google Maps
– Google Visualisations
– My own library to wrap the above libraries and add extra functionality...
So that works out to be a heap of js. It runs fine on my PC. However, I have little faith in js/browsers, and I am concerned that loading too much js will cause the browser to die or perform poorly.
Is this a valid concern?
Does anyone have any experience with loading a lot of js into the browser that has resulted in performance issues? I know there are a lot of variables here, for example browser type (I assume IE is worse than others), the client PC's RAM, etc., but it would be good to get other people's experiences. I would hate to invest a lot of time into js only to find that I am painting myself into a corner.
The more I use Script# the more client classes I have, as I move more processing onto the client. At what point would this start becoming an issue? I'm sure the browser could easily handle 100 MS Ajax classes, but at what point would it become too much for a browser?
NOTE: I am not concerned about the actual js file sizes but more the runtime environment that gets loaded.
There is nothing wrong with having a large number of js files or big js files. The project I am currently working on has more than 60 core framework libraries and 30 modules, each with an average of 5 to 6 js files.
So the only concern is how you design your website to make use of JS best practices and optimization techniques, like:
Minify the JS using YUI Compressor or any other compression library to address the download size issue.
Enable proper caching in your webserver to reduce the file downloads.
Put your javascript at the bottom of the page, or make it a separate file.
Make your AJAX responses cacheable.
And finally, design your page to handle on-demand script loading (a rough sketch follows this list).
- Microsoft Doloto is a good example of this; download it here.
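To illustrate the on-demand loading point, here is a rough sketch of loading a script only when a feature is first used. The URL, element id and initMapFeature function are made up for the example; Doloto automates this kind of splitting, but you can also do it by hand:

    // Minimal on-demand loader sketch: fetch a script the first time it's needed.
    var loadedScripts = {};

    function loadScriptOnce(url, callback) {
        if (loadedScripts[url]) {       // already loaded, just run the callback
            callback();
            return;
        }
        var script = document.createElement('script');
        script.src = url;
        script.onload = function () {   // older IE needs onreadystatechange instead
            loadedScripts[url] = true;
            callback();
        };
        document.getElementsByTagName('head')[0].appendChild(script);
    }

    // Only pull in the mapping code when the user actually opens the map panel.
    document.getElementById('showMapButton').onclick = function () {
        loadScriptOnce('/scripts/mapFeature.js', function () {
            initMapFeature(); // assumed to be defined by mapFeature.js
        });
    };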
And check out High Performance Web Sites and the more recent Even Faster Web Sites by Steve Souders. They are a must-read for web developers and address the common problems web developers face today.
With modern browsers routinely occupying 250 MB of RAM or more, script caching, and optimized javascript engines, keeping the script library resident would probably add negligible load in most reasonable scenarios.
The biggest bottleneck would probably be the initial load time of the scripts (downloading and parsing them), but once that's done, the scripts are cached and the per-page initialization isn't very noticeable.
I highly doubt a browser would ever crash running your JS scripts, but it can become really slow and may not perform the way you want. Most people are more concerned about how fast it runs, not whether it will run!
I agree with jspcal, you should be able to load quite a lot of javascript with no problems. The javascript engines in all the modern browsers are a lot faster than they were a few years ago. The initial load will be the biggest issue. If possible I'd suggest lazy loading scripts that aren't needed for the page to render.
Also, Steve Souders has a lot of great material about improving page load times, such as this article, which gives several techniques for loading scripts without blocking.
http://www.stevesouders.com/blog/2009/04/27/loading-scripts-without-blocking/
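For example, the "Script DOM Element" approach described in that article boils down to something like the following sketch (the file name and callback body are placeholders):

    // Scripts added via a dynamically created element download without blocking
    // the rest of the page from rendering.
    function loadScriptAsync(src, onLoaded) {
        var done = false;
        var script = document.createElement('script');
        script.src = src;
        script.onload = script.onreadystatechange = function () {
            // onload covers most browsers; onreadystatechange covers older IE.
            if (!done && (!this.readyState || this.readyState === 'loaded' || this.readyState === 'complete')) {
                done = true;    // guard so the callback only fires once
                onLoaded();
            }
        };
        document.getElementsByTagName('head')[0].appendChild(script);
    }

    loadScriptAsync('/scripts/extjs-all.js', function () {
        // Safe to use the library from here; before this point it may not be parsed yet.
    });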
If you're really concerned about performance then I would take a look at your target audience. If you think you'll have a relatively high number of IE6 users then test it out in IE6, on an older machine if possible. IE Tester is great for this.
Related
A friend of mine is programming a web page. Instead of writing any HTML in the .html file, he is outputting the entire website with javascript. Excluding SEO, or the fact that it will be a pain to debug, what are the downsides to this? Also, if there is an upside, what would an upside be?
As you are already aware, SEO is a big deal. SEO is usually the elephant in the room (i.e. the big deal) for javascript-based websites: you either forgo SEO or have to design an alternate path that the search engines can index you through, one that doesn't run afoul of their rules about serving different content to search engines.
Some potential downsides:
Accessibility - access via screen readers and other tools that may be geared to the page HTML.
Mobile - Rendering the page can require larger downloads, more javascript code, more data, and more CPU than a simpler server-rendered page. This may force compromises on small devices without a lot of bandwidth or horsepower.
Performance - The initial page display time may be slower as you can't render anything until all data and code has been downloaded and run. Subsequent page times might be faster or slower depending upon how the app is written. Sometimes you save server trips by doing client-side rendering (which could be faster), but sometimes it's slower to do things on the client.
Security - Some things simply can't be secured client-side the way they can server-side.
Code secrecy - Code in browsers is open to the world to see. Code on servers can be kept secret.
Same-origin limitations - Browsers are much more limited in who they can contact than servers are because of the same-origin limitation.
Memory - Lots of code and lots of data can consume a lot more memory for your app than a server-generated HTML page. This probably isn't meaningful on a desktop, but could be on a smaller device.
Some of the upsides:
The content can be dynamically rendered with lots of smarts based on the type of user, the screen size, the capabilities of the device, etc... and this local rendering can generally be done more effectively than trying to do it on the server.
You can often carry out many of the app's functions without ever reloading the page, just fetching data from the server or issuing commands to it with ajax.
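As a rough illustration of that last point (the URL and element id are invented for the example):

    // Fetch data and update part of the page in place, with no full reload.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/orders?status=open', true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            var orders = JSON.parse(xhr.responseText);
            document.getElementById('orderCount').innerHTML = orders.length;
        }
    };
    xhr.send();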
One big downside may be mobile.
Depending on the functionality, a javascript-only page may be slow for those on mobile devices.
The pages may be resource heavy, if you are using one or more libraries and / or accessing lots of information. This could also cost a fair amount for those on mobile devices.
Another downside could be accessibility. I'm not sure how enabling software for low/no-vision users would work with a js-only site.
My opinion is that this kind of coding is more appropriate for members-only areas, which of course are not reachable by search engines.
Provided that you use a good library that can do the layout for you, such as ExtJS, it's an interesting way of coding. You can build web applications that look similar to desktop applications. Browser differences are smoothed out by the library, so expect very few problems, if any.
For public websites, in general the SEO argument is a pretty big one. If nobody can find you...
Ignoring download times, what's the performance impact of making the browser interpret several separate small files as opposed to one big one? In particular, could it make a significant difference to page rendering speed in IE6 and 7?
Browsers typically limit themselves to a certain number of simultaneous requests. This number is dependent on how "server friendly" they are.
How many concurrent AJAX (XmlHttpRequest) requests are allowed in popular browsers?
So, depending on the number of artifacts the browser has to load, it may have to wait for others to complete first. Artifacts include everything the browser has to go back to the server for: images, javascript, css, flash, etc. Even the favicon if you have one.
That aside, rendering speed is normally going to boil down to how the pages are structured, i.e. how many calculations you depend on the browser to make (% width vs. fixed width).
It has to make more round-trip HTTP requests. It may or may not have significant consequences.
Apart from download times, if you have too many javascript and css files, each one is an extra HTTP call from the client to the server.
If page load time is one of your main criteria, you should definitely think about it.
Also read this doc:
http://developer.yahoo.com/performance/rules.html
I work for a gov't organization with a large scale enterprise intranet and when we had around 25+ JS files and 10+ CSS files loading on our intranet portal we did notice a dramatic lag in page load time in IE6 and 7. Newer browsers have faster routines for loading and executing JavaScript. I used YUI Compressor to minify everything including CSS.
If you include minification along with combining files, then dead code often gets removed (depending on the minifier) and some code can be optimized (see YUI Compressor: What are micro optimizations? and Which javascript minification library produces better results?).
I've asked this question a bunch of times when I first started out with web development.
If you have under 10 javascript and 10 css files (css is not so important in my opinion), then I don't think there is much use in minifying and compressing. However, if you are dealing with a bunch of javascript files (more than 10), then YES, it's gonna make a difference.
What you may experience is that even after compressing, minifying and combining your scripts, you may still experience slowness. That's when HTML caching plays a huge role in website optimization, at least that's what I experienced in my web application. Try looking into Memcached and use it to cache your html files. This technique speeds up your web application a WHOLE LOT!!!
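As a sketch of the idea, assuming a Node.js backend and the 'memcached' npm client (other stacks have equivalent clients; the key name and renderPage helper are made up):

    // Cache a rendered HTML fragment in Memcached and reuse it on later requests.
    var Memcached = require('memcached');
    var cache = new Memcached('localhost:11211');

    function getPage(pageId, callback) {
        var key = 'html:' + pageId;
        cache.get(key, function (err, cachedHtml) {
            if (!err && cachedHtml) {
                return callback(cachedHtml);           // cache hit: skip rendering
            }
            var html = renderPage(pageId);             // expensive render (assumed helper)
            cache.set(key, html, 300, function () {}); // keep it for 5 minutes
            callback(html);
        });
    }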
I am assuming your question is related to web optimization and high performance websites.
Just my 2 cents.
I've started work recently at a new company, and they have an existing application with thousands of lines of Javascript code. The baseline contains dozens of JS files with easily over 10,000 custom lines of code, and they also use multiple 3rd party libraries such as jQuery, Livequery, jqTransform and others. One of the major complaints they have been receiving from users is the slowness of the client-side operation of the site. I've been tasked with optimizing and improving the performance of the JS. My first step will obviously be to move to the newest jQuery library and incorporate JSMin into the build process. Other than that, I'm wondering if anyone has some tips on where to begin with optimization on such a huge code base.
You could try installing DynaTrace Ajax Edition (free download here) and see what that tells you. It supports only IE8 I think, but that's probably as good a place to start as any. It's got a much more thorough and understandable profiler interface than do either Firebug or Chrome, in my opinion.
One thing that jumps out at me is "Livequery", which if not used very carefully can cause gigantic performance problems.
Remember this: in a code base that big, developed over time and possibly not with the most "modern" Javascript techniques available, your real problems are going to be bad algorithms in your own code. Newer libraries and minification/optimization methods are good ideas, but the first thing you need to do is find the pages that seem sluggish and then start profiling. In my experience, in a big old codebase like that, you'll find something terrible really quickly. Install a desktop gadget that tracks CPU utilization. That's a great way to see when page code is causing the browser to slow down directly, and not just network lag. Any big spike in browser CPU usage for any significant amount of time should be a big red flag.
Profile that code. Don't optimize something just because you "feel" it could be optimized. Remember the 80/20 rule: 80% of the time is spent in 20% of the code.
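A quick, low-tech way to check where the time actually goes before touching anything is the console timing API supported by Firebug and the newer browser dev tools (renderGrid and data below are placeholders for whatever code path you suspect):

    if (window.console && console.time) {
        console.time('renderGrid');
    }
    renderGrid(data);                      // the suspect code path
    if (window.console && console.timeEnd) {
        console.timeEnd('renderGrid');     // logs the elapsed milliseconds
    }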
Use Google's Closure tools. They can optimize and reduce your JS code, which will at least cause it to load faster on your client's computers.
The way to go is to find the bottlenecks. If you can find the actual situation where the app is slow, you can use Firebug to profile your code and see how much time is spent in every function and how many times each has been called. From this information it's pretty easy to determine what areas need some improvement.
Generally the bottlenecks of a web application are:
Working with the DOM extensively (repaints, reflows; see the sketch after this list)
Heavy network communication (AJAX)
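For the DOM point, a typical fix is to batch updates so the browser reflows once instead of on every iteration. A minimal sketch (items and the list id are placeholders):

    var list = document.getElementById('resultList');
    var fragment = document.createDocumentFragment();

    for (var i = 0; i < items.length; i++) {
        var li = document.createElement('li');
        li.appendChild(document.createTextNode(items[i].name));
        fragment.appendChild(li);          // built off-DOM: no reflow per iteration
    }
    list.appendChild(fragment);            // one insertion, one reflow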
You have a long road ahead of you mate, and I don't envy you.
Here are some Performance Optimization Techniques for Javascript that I wrote down after working in a similar role as yours recently.
They are broken down into 5 broad categories in order of the performance difference they make.
However given what you said about the codebase, I think the second section on Managing and Actively reducing your Dependencies is the most relevant, particularly:
Modifying code to reduce library dependencies, and
Using a post-load dependency manager for your libraries and modules
However all 25 techniques listed there are useful for improving performance.
I hope that you find them useful.
PageSpeed and YSlow suggest combining javascript files to reduce the number of HTTP requests. But this is because (I think) pre-IE8 browsers allow no more than 2 connections per host.
But nowadays, browsers allow 6 connections per host, which means they can download javascript files in parallel. So let's say we have 1 MB of javascript: should we break it down into 6 files of similar size to obtain maximum download speed? Please let me know.
Micahel.S
No, because each HTTP request involves overhead (less if pipelining is used)
The answer to your question is no. However, even assuming you are able to serve your content in a completely isolated environment where only IE8 is used (like a company intranet), the answer is still no.
Since you aren't designing for IE6-7, I assume you are in an isolated environment (otherwise you are making a poor design decision). In this environment, yes, you might see small benefits from breaking down JavaScript files, but I recommend against it.
Why? Since you are optimizing for speed, I assume you are putting JavaScript at the bottom of the body tag in your HTML document, in order to prevent JS from blocking download of DOM. This is a fundamental practice to make the page appear to be loading faster. However, by placing the content in the bottom of the body, your question becomes moot. Since the DOM is no longer being blocked by the script tags, whatever speed benefits you could achieve by using parallel downloading would be lost on the user because they see the page load before the browser even requests the JavaScript files.
tl;dr: There is no practical speed advantage to breaking JS into multiple files for parallel downloading.
Splitting the files won't make too much of a difference really. If you want to see performance gains in terms of download times for your production environment, what I always do is use YUI Compressor (http://developer.yahoo.com/yui/compressor/) to get my JS file size down as small as possible, then serve a gzipped version of the js to browsers that support it.
In a development environment you shouldn't be too worried about it, though. Just split them logically based on their purpose so they're easier to work on, then bring them all together into one file and optimize it for production once you're ready.
Most browsers will cache your JavaScript files anyway, so after the first page load, it won't matter.
But, you should split your JavaScript logically and in a way that would most help you in development, since browsers vary in the number of simultaneous connections they allow.
For speed, you can obfuscate your code (via a minimization routine) and serve it in a way no human would have the patience to read.
I am trying to compare the performance of several Javascript libraries. Although measuring transaction times helps to determine the working performance of a library, it doesn't account for the time required to download and initialize the individual libraries. I'm looking for suggestions for the best method of determining the load time other than using tools such as Firebug, etc. I would like to be able to set up a controlled environment where a page could be loaded n times, while capturing the start and end times. Should the Javascript library contents be included in the page rather than as an include file, or is there a better way?
Reading this article by John Resig on JavaScript Benchmark Quality before you start anything may help you out.
After that, I would suggest requesting the javascript from your server and timing how long the eval(responseJS); takes. That way, you are only timing how long the library takes to initialize, rather than that plus the time it takes to download from the server.
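Something along these lines (a sketch only; the URL is a placeholder for the library under test):

    // Download the library text first, then time only the eval() step.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/js/library-under-test.js', true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            var start = new Date().getTime();
            eval(xhr.responseText);                      // parse + initialize the library
            var elapsed = new Date().getTime() - start;
            alert('init took ' + elapsed + ' ms');       // or record it however you like
        }
    };
    xhr.send();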
The libraries should always be an external file, included via a script tag, either on their own or with the site's scripting also rolled in. Minified and packed files will be smaller to download. Delivery via a CDN is optimal as well, as the CDN will have it cached. Many of the popular frameworks are available over Google's CDN.
You must also account for not only the library, but the application using the library. The quality of the JS in the libraries is (typically) top notch, but what about the quality of the code tapping into those libraries, or even the code of plugins which may not be developed by the library authors. You also have to look at what browser is being used. As much as we hate it, most of these cross browser libraries are optimized for best performance out of Internet Explorer, because it retains 85+% market share.
The performance of any library is really a trade off. Deciding what is acceptable in order to get your application to do whatever it is that you want to do.