Here's an example from Bootstrap's "Getting Started" code:
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.0-rc1/css/bootstrap.min.css">
<script src="//netdna.bootstrapcdn.com/bootstrap/3.0.0-rc1/js/bootstrap.min.js"></script>
Putting CSS and JS onto a CDN seems like pure lunacy. If the CDN goes down, so does your site. I gather that pointing to a CDN gives you, the programmer, the latest updates and all, but what if the maintainers break their own production code? I always download the necessary CSS and JS and park them in my own directory. What's the consensus?
TL;DR: Why leave critical files such as JS and CSS to a CDN?
Reliability is stellar for all three CDNs (Google vs. Microsoft vs. Media Temple CDNs)
First off, all three CDNs proved to have excellent availability. The only CDN with any downtime at all for this entire year so far was Microsoft’s, and that was just a few minutes. All three had, when rounded to two decimals, 100.00% uptime.
This really is what you want from a CDN. You shouldn’t have to worry about it working or not, and the distributed nature of a CDN usually makes it extremely reliable, as this survey shows.
We will focus the rest of this article on performance since that is where the true differences are.
SOURCE
So the question is: can your host do better?
The use of a CDN is to allow for edge caching and improved page-load performance. Usually you define a fallback reference in case the CDN is down, as in the sketch below. Here is another answer that describes this process: How to fallback to local stylesheet (not script) if CDN fails
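A minimal sketch of that fallback pattern, shown for a script (jQuery here; the local path is a placeholder, and the same idea applies to stylesheets as the linked answer describes):
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<script>
  // If the CDN copy failed to load, window.jQuery is undefined;
  // fall back to a copy hosted on our own server.
  window.jQuery || document.write('<script src="/js/jquery-1.10.2.min.js"><\/script>');
</script>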
The purpose in pointing to a CDN is decidedly not to stay up-to-date. Generally, and as it appears in your example, you refer to a particular version of an external script. By doing so, you're protected against buggy releases and breaking changes.
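To make the pinning point concrete: some CDNs also expose floating "latest" aliases, and with those you lose that protection (the alias URL below is illustrative, not a recommendation):
<!-- Pinned to an exact version: the response never changes out from under you. -->
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<!-- Floating major-version alias: upstream releases can change what this serves. -->
<script src="//ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script>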
The reason to link to a CDN is twofold: it takes load off your own servers, so they can spend their time actually rendering dynamic output, and it decreases load times on your site.
See http://developer.yahoo.com/performance/rules.html#cdn
Regarding downtime: A file hosted on a CDN is far more likely to be available than a file hosted on your own network.
I'm looking for simple bullet-point answers, please. I've tried looking all over (Googling, other questions here), but I can never find both advantages and disadvantages for each method.
This is the answer I got from W3Schools pertaining to external JavaScript files.
Pros
It allows separation of concerns, which is not a big deal in simple pages, but as the script grows larger you can end up with a monolithic HTML page. Big files in general are not ideal for maintainability.
It allows caching: when the browser loads a script externally (whether it be from your site or a CDN), it caches the file for future use. That's why CDNs are preferred for commonly used scripts. Using a cached script instead of downloading it again every time the page loads makes the page load faster.
More readable code: this ties into the first bullet point, but it is nevertheless important. The smaller the files we humans are working with, the better. It is easier to catch mistakes, and much easier to pass the torch to the next developer working on the project or learning from it.
Cons
The browser has to make an HTTP request to get the code.
There may be other browser specific reasons as well, but I believe the main reason is the separation of code into different components.
Probably the biggest advantage of using external JavaScript files is browser caching, which gives you a good performance boost.
Imagine you have a site that uses MyJsFile.js (a random 50 KB JavaScript file that adds functionality to your website).
You can:
embed it in every page, adding the 50 KB to every page request (not ideal)
link it in every page (<script src="MyJsFile.js"></script>)
The second option is usually preferred, because most modern browsers will only get the file once and serve it from the browser cache instead of downloading it at every request.
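To make the two options concrete (MyJsFile.js as above; the inlined contents are elided):
<!-- Option 1: embed it; the full 50 KB travels with every single page response. -->
<script>
  /* ...entire contents of MyJsFile.js pasted here... */
</script>
<!-- Option 2: link it; the browser downloads the file once, then reuses its cached copy. -->
<script src="MyJsFile.js"></script>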
Check out similar questions:
Why not embed styles/scripts in HTML instead of linking?
When should I use Inline vs. External Javascript?
Is it better to put the JS code on the html file or in an external file?
Does anyone know why browsers such as Chrome, FF and IE don't just have all the famous scripts embedded in their install? They could have all the versions of jQuery, for example, pre-compiled (i.e. in V8 for Chrome), and the browser would just recognize a reference to a CDN, or simply to a local script, by the name of the script. And really, how much bigger would that make the browser's install if you included all versions of, say, jQuery, Angular, Dojo and Ext? Compiled to native code via V8, these scripts aren't very large at all.
Sure, you could say, 'but then it won't use the modifications I made in jquery-2.1.3.js'. True, but that's just horrible engineering.
It would be faster and save bandwidth.
But there is probably something I'm overlooking. There always tends to be.
Because there is already a whole protocol for delivering resources to browsers, caching them on the client side, and sending headers that tell the browser when it should check for new versions.
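For illustration, these are the kinds of cache-related headers a server might send with a versioned script (the values here are made up):
HTTP/1.1 200 OK
Content-Type: application/javascript
Cache-Control: public, max-age=31536000
ETag: "jquery-2.1.3-build42"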
Also, filename-1.2.3.js doesn't tell the whole story: there can also be a build number after the major, minor and patch numbers. See http://semver.org/
You couldn't expect distinct browser vendors to take on the responsibility of updating their browsers every time any script was updated or rebuilt. It would simply slow down delivery, considering there already is a protocol for it: HTTP.
There are a lot of "buts" in approaches to web development. Recently I've read an article
why not to use Twitter bootstrap. One of the reasons were that it doesn’t follow best practices. Well I don't want to discuss about TB. What I wanna know is how is it with Modernizr - it looks like that has a lot of advantages, but what about dissadvantages? Is that also redeemed by using bad programming practises on web (like these mentioned in twitter bootstrap)?
By good practices I mean ideas which are connected with Html5 and CSS => this is not opinion based question - I'm basicly asking if Modernizr is in contrary with these ideas.
Modernizr itself tries to follow best practices as closely as possible; however, there are a few things it does that aren't necessarily "best practice":
It basically requires being loaded in the <head>. Since one of the main uses of Modernizr is the CSS classes that it adds, you actually want it to block the rendering of the page until after it has run. If you load it at the bottom of the page (which is generally the "best practice" for JavaScript) and rely on the classes it provides, you would see a flicker between the non-support and support versions of your styles. See the sketch after this list.
It can be heavy. There are ongoing discussions on the GitHub issue tracker about how we can improve the execution time of the library, as well as newly proposed updates to the lib that would group tests to increase speed.
Not only that, but it can be used poorly. One of the most common issues is that people deploy their public website with the debug build of Modernizr, the one that includes ALL of the tests. This means running a very large amount of JavaScript that never impacts your site.
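A minimal sketch of the head-loading pattern from the first point (the file path is a placeholder, and the exact feature classes depend on your Modernizr build):
<!DOCTYPE html>
<html class="no-js">
<head>
  <!-- Loading Modernizr here blocks rendering until its feature classes
       (e.g. "flexbox" or "no-flexbox") have been added to <html>,
       so CSS that keys off those classes never flickers. -->
  <script src="/js/modernizr.custom.js"></script>
  <style>
    .box          { float: left; }  /* fallback layout */
    .flexbox .box { float: none; }  /* applied only when the feature test passes */
  </style>
</head>
<body>
  <div class="box">content</div>
</body>
</html>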
Other than that, Modernizr tries very hard to help define best practices, not just follow them. If you ever find any issue whatsoever, I would really encourage you to open an issue on the GitHub repo to help us move the internet forward.
UPDATE: In the end this has become a non-issue. We created a proxy that serves the theme CSS files from the blob through the same domain as the JS files.
I am developing a theme-switcher module that uses the document.styleSheets object to get information about CSS rules in certain stylesheets.
The site is hosted on Azure, and the CSS files are stored in blob storage.
This makes it impossible to access the stylesheets, due to the lack of support for Cross-Origin Resource Sharing (CORS) in blob storage.
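A quick way to see the restriction (a sketch; modern browsers throw a SecurityError when a script touches the rules of a stylesheet served cross-origin without CORS headers):
<script>
  for (var i = 0; i < document.styleSheets.length; i++) {
    var sheet = document.styleSheets[i];
    try {
      // Same-origin sheets expose their rules...
      console.log(sheet.href, sheet.cssRules.length + ' rules');
    } catch (e) {
      // ...cross-origin sheets without CORS throw on access.
      console.log(sheet.href, 'inaccessible: ' + e.name);
    }
  }
</script>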
A workaround is to inject the entire CSS file(s) into the head section of the HTML page, as sketched below.
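A sketch of that workaround; the proxy URL is hypothetical, it just has to be served from the same origin as the page:
<script>
  // Fetch the CSS text through a same-origin URL and inject it inline,
  // so its rules become readable through document.styleSheets.
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/theme-proxy/theme.css');
  xhr.onload = function () {
    var style = document.createElement('style');
    style.textContent = xhr.responseText;
    document.head.appendChild(style);
  };
  xhr.send();
</script>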
Would this significantly affect performance by bloating the page?
EDIT:
As mentioned in the comments below, there are of course two factors: the first is caching of external CSS files, which leads to faster re-load times.
The second is the number of HTTP requests, which decreases if the CSS is in the page; although if the files are already cached, no request is made for them on re-load anyway.
I am trying to understand whether, aside from these factors, making the HTML page much bigger by adding about 3-5 style tags, each with 100-300 CSS rules, in the header section causes scripts to run slower on the page because the HTML string itself is much longer.
If it is in the page, it will take longer to load on a second visit, because the CSS will not already be in the browser's cache.
It depends on different factors. For example: how big is your CSS, and where do your users access your site from? If your target audience is mobile users, every extra KB of data may significantly affect performance and cost you in user experience. On the other hand, each external stylesheet is an HTTP request, which may affect performance even more. So it depends on your particular case.
Has anyone here used KSS?
KSS is an AJAX framework.
KSS has both a client-side Javascript library and server-side support.
The client-side Javascript library needs to be included in your page. It fetches Kinetic style sheets from the server, parses them and binds a set of action to browser events and/or page elements. It is clean Javascript code that can peacefully coexist with other clean Javascript librarys(sic) like JQuery or ExtJS. It is about 100k in production mode. You can integrate your own Javascript code by using its extension mechanism through plugins.
I'm currently working on a project that uses it. Are there any drawbacks and gotchas to be aware of?
What's its cross browser support like?
At first I was really put off by the fact that you don't write the JS by hand, and that KSS actually translates a CSS-like file into JS behavior, but seeing it in action I've got to say that it really works quite well. I haven't done any cross-browser tests yet, though.
Some things that I've found:
it sends HTML from the server, instead of XML and/or JSON rendered client-side, meaning larger messages (understandable)
it has problems with scripts that add iframes dynamically on a KSS widget that you reload
some things are hard to debug, while others are made easy thanks to KSS' integration with Firebug