Getting JavaScript via AJAX [closed]

Closed as opinion-based 8 years ago.
I would like to know what happens if I get all my scripts (not vendor scripts) via AJAX requests, and what's good or bad about it. Would the browser cache the response, or would the server send my scripts with every request as if the user had never visited my site?
Performance-wise, would that be better than placing all scripts in the footer?

Getting scripts via AJAX is a bit of a confusion of terms. You can use JavaScript to create additional script elements that load extra scripts into the page, and this can be done asynchronously so that it does not block the rendering or downloading of the page. Take the following example:
var script = document.createElement('script'); // create a new <script> element
script.src = 'myscript.js';                    // point it at the file to load
document.head.appendChild(script);             // append it; the browser fetches it without blocking
This creates a new script element, assigns it a source, and then appends it to the head section of the document. If this snippet runs at the end of the document, the script is downloaded without having blocked the page load.
This simple idea is the basis for all front-end JS dependency management (that I know of): the requested page downloads only the assets it needs. You might have heard of JS module definitions such as AMD and CommonJS; that's a huge topic, so I'd recommend reading Addy Osmani's "Writing Modular JavaScript With AMD, CommonJS & ES Harmony" article.
As to whether or not they will be cached, the answer is yes, in general, though caching depends on many factors on both the server and the user's browser. The individual requests will still need to be made on each subsequent page load, which can be slower than one big combined file on a flaky connection. The decision really comes down to how users access your site and whether a slight initial speed decrease is worth it versus easier authoring and maintenance. You can always go for the latter and then later move to the former.

In essence, <script> pulls data into your page synchronously by default. That can be an issue when scripts are placed in the head section, since they can block the rendering of your body's content.
Pulling in JavaScript asynchronously via AJAX usually comes in handy when specific, minor conditions are met, i.e. most of your users won't need the script in the first place. Say you have a page where, if you're logged in as an admin, an additional script is pulled in to handle the admin UI. Or something similar.
But generally speaking, there's no real advantage to AJAXing JavaScript, and if you want to avoid broken dependencies (your script being pulled in earlier than your jQuery library, for instance), just stick with the already optimal solution: placing your scripts as the last thing before the closing body tag.
Note also that there's a new async attribute in the HTML5 draft that makes it possible to fetch a script asynchronously (and therefore, in theory, speed up loading) even without any AJAX magic. As always, though, it's only supported by modern browsers, kicking IE9 and older out of the game.
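For illustration, a minimal sketch (the file names are hypothetical). async downloads without blocking and runs as soon as the file arrives; the related defer attribute also downloads without blocking but preserves execution order:
<script async src="analytics.js"></script>
<script defer src="app.js"></script>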
Hope it helps!

Related

Javascript/jQuery best practice for web app: centralization vs. specification [closed]

Closed as opinion-based 7 years ago.
Imagine a web app that has dozens of page groups (pg1, pg2, ...), and some of these page groups need some JavaScript code specific only to them, but not to the entire app. For example, some visual fixes on window.resize() might be relevant only in pg2 but nowhere else.
Here are some possible solutions:
1/ Centralized: having one script file for the entire app that deals with all page groups. It's quite easy to check whether the relevant DOM object is present, so all irrelevant pages simply run a minor extra if().
The biggest advantage is that all JS is loaded once for the entire web app and no modification of the HTML code is needed. The disadvantage is that additional checks are added to irrelevant pages.
2/ Mixed: the centralized script checks for the existence of a specific function on a page and launches it if it exists. For example, we could add:
if (typeof page_specific_resize === 'function') page_specific_resize();
The specific page group in this case will have:
<script>
function page_specific_resize() {
//....
}
</script>
The advantage is that the code exists only on relevant pages and so isn't tested on every page. The disadvantage is the additional HTML size across the entire page group. If there are more than a few lines of code, each page group could instead load an additional script specific to it, but then we're adding an HTTP call there to possibly save a few kilobytes in the centralized script (a sketch of that option follows).
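For concreteness, here is a hedged sketch of that last option (the marker element ID and script path are hypothetical): the centralized script loads a page-group bundle only when that group's marker element is present.
if (document.getElementById('pg2-root')) {
    var s = document.createElement('script');
    s.src = '/js/pg2.js'; // hypothetical page-group bundle
    document.body.appendChild(s);
}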
Which is the best practice? Please comment on these solutions or suggest your own. Adding some resources to support your claims about what's better (consider performance and ease of maintenance) would be great. The most detailed answer will be selected. Thanks.
It's a bit tough to settle on a best solution, since this is a hypothetical scenario and we don't have numbers to crunch: what the most loaded pages are, how many there are, the total script size, etc.
That said, I didn't find the one specific answer, the Best Practice™, but here are general points that people agree on.
1) Caching:
According to this:
https://developer.yahoo.com/performance/rules.html
"Using external files in the real world generally produces faster
pages because the JavaScript and CSS files are cached by the browser"
2) Minification
Still according to the Yahoo link:
"[minification] improves response time performance because the size of the downloaded file is reduced"
3) HTTP Requests
It's best to reduce HTTP calls (based on community answers):
One big javascript file or multiple smaller files?
Best practice to optimize javascript loading
4) Do you need that specific script at all?
According to: https://github.com/stevekwan/best-practices/blob/master/javascript/best-practices.md
"JavaScript should be used to decorate your site with additional functionality, and should not be required for your site to be operational."
It depends on the resources you have to load, on how frequently a specific page group is loaded, and on how frequently you expect it to be requested. Is the web app a single page? What does each specific script do?
If a script loads a form, the user will not need to visit the page more than once, and will need an internet connection to post the data later anyway.
But if it's a script to resize a page and the user has connection hiccups (e.g. visiting your web app on a mobile phone while taking the subway), it may be better to have the code already loaded so the user can navigate freely. According to the GitHub link I posted earlier:
"Events that get fired all the time (for example, resizing/scrolling)"
is one thing that should be optimized, because it's related to performance.
Minifying all the code into one JS file to be cached early will reduce the number of requests made. Also, it may take a few seconds for a connection to be established, but only milliseconds to process a bunch of "if" statements.
However, if you have a heavy JS file for just one feature that is not the core of your app (say, this single file is almost n% of the size of all the other scripts combined), then there is no need to make the user wait for that specific feature.
"If your page is media-heavy, we recommend investigating JavaScript techniques to load media assets only when they are required."
This is the holy grail of JS, and hopefully what modules in ECMAScript 6/7 will solve!
There are module loaders on the client such as JSPM, and there are now hot-swapping JS code compilers in Node.js that can be used on the client with chunks, via webpack.
Last year I experimented successfully with creating a simple application that loaded only the required chunk of code/file as needed at runtime.
It was a simple test project I called praetor.js, on GitHub: github.com/magnumjs/praetor.js
The gh-pages-code branch shows how it works:
https://github.com/magnumjs/praetor.js/tree/gh-pages-code
In the main.js file you can see how this is done with webpackJsonp once compiled:
JS:
module.exports = {
    getPage: function (name, callback) {
        // no better way to do this with webpack for chunk files?
        if (name.indexOf("demo") !== -1) {
            require(["./modules/demo/app.js"], function (require) {
                callback(name, require("./modules/demo/app.js"));
            });
        }
        // ..... (further page groups elided)
    }
};
To see it working live, go to http://magnumjs.github.io/praetor.js/, open the network panel, and watch the chunks being loaded at runtime as you navigate via the select menu on the left.
Hope that helps!
It is generally better to have fewer HTTP requests, but it depends on your web app.
I usually use requirejs to include the right models and views I need in each file.
While in development it saves me considerable time on debugging (because I know which files are present), in production this could be problematic considering the number of requests.
So when it's time for deployment I use r.js, a tool conceived especially for RequireJS, to compile my JS files.
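As a hedged sketch of that workflow (the module names and paths are hypothetical): a module declares its dependencies with define(), RequireJS loads them as separate files in development, and an r.js build file tells the optimizer what to bundle for production.
// main.js -- a module declaring its dependencies
define(['models/user', 'views/home'], function (User, HomeView) {
    return {
        start: function () {
            new HomeView(new User()).render();
        }
    };
});
// build.js -- run with: node r.js -o build.js
({
    baseUrl: 'js',
    name: 'main',
    out: 'dist/main.min.js'
})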
Hope this helps

What are the best practices for including CSS and JavaScript in a page? [closed]

Closed as opinion-based 7 years ago.
I would like to know if it is advisable to embed (inline) all the CSS and JavaScript required by a webpage into <style> and <script> tags instead of letting the browser download these files. Is such a practice advisable?
My "application" is a SPA, and I have managed to compile everything, even images and font icons (as base64), into a single index.html, but I am not sure if this is a common practice.
Thanks in advance.
You are ignoring some crucial things:
The browser can fetch separate resources in parallel, reducing load time compared to the "pack it all together" approach.
The browser can apply different caching policies to different types of resources, which allows some clever time- and/or bandwidth-saving tuning.
People can get some useful content even before all resources are loaded.
Not all functionality in a SPA is heavily used, so sometimes it makes sense to load some parts lazily, on demand.
This is a very basic and simplified overview; there are a lot of things to consider here. Moreover, bundling into bigger chunks is actually used in practice: quite often all JS resources are bundled, for example. But trying to get rid of every additional HTTP request will definitely make your architecture less flexible, less cacheable, and so on. In short, it's overkill.
Best practice is to split resources (scripts, CSS, images, etc.) into separate files, which allows the browser to download and cache each resource for future reuse (even on other pages). But browsers have a limit of six (at the time of writing) parallel connections per origin. That is why a lot of external resources on a page cause bad page-loading performance and a bad waterfall.
There are a lot of techniques to improve performance, such as bundling, domain sharding, image sprites, etc. Also, for some critical resources you can use the inlining technique: it allows browsers to use these resources instantly, without additional requests. For example, you can embed all the resources (image, CSS, scripts) required for a loading indicator, and the browser will render it without additional requests.
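A hedged sketch of that inlining idea (the class name and styles are made up): the spinner's CSS lives in the page itself, so it can render before any external stylesheet arrives.
<style>
    .spinner { width: 40px; height: 40px; border: 4px solid #ccc; border-top-color: #333;
               border-radius: 50%; animation: spin 1s linear infinite; }
    @keyframes spin { to { transform: rotate(360deg); } }
</style>
<div class="spinner"></div>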
For the best development style, do not embed resources; use separate files. If you care about performance, you should investigate your page's waterfall (e.g. here, or in the network tab of any browser's developer tools) and apply techniques to improve it. If you are interested in this field, I recommend the books below:
High Performance Web Sites by Steve Souders
Even Faster Web Sites by Steve Souders
High Performance Browser Networking by Ilya Grigorik
Note that these techniques are relevant only for HTTP/1.1. For HTTP/2 they can even be harmful, because the new version is designed to improve performance on its own.
No; always avoid inline styling and scripting, to reduce the page load of the HTML file. Also, separating your HTML, your CSS, and your JS keeps your code clean, semantic, and reusable by other external pages or applications that may require a common CSS property or script.
It's all about where in the pipeline you need the CSS as I see it.
inline css
Pros: Great for quick fixes/prototyping and simple tests without having to swap back and forth between the .css document and the actual HTML file.
Pros: Many email clients do NOT allow the use of external .css referencing because of possible spam/abuse. Embedding might help.
Cons: Fills up HTML space/takes bandwidth, not reusable across pages - not even IFRAMEs.
embedded css
Pros: Same as above regarding prototype, but easier to cut out of the final prototype and put into an external file when templates are done.
Cons: Some email clients do not allow styles in the [head] as the head-tags are removed by most webmail clients.
external css
Pros: Easy to maintain and reuse across websites with more than 1 page.
Pros: Cacheable = less bandwidth = faster page rendering after second page load
Pros: External files, including .css, can be hosted on CDNs, thereby making fewer requests to the firewall/webserver hosting the HTML pages (if on different hosts).
Pros: Compilable, you could automatically remove all of the unused space from the final build, just as jQuery has a developer version and a compressed version = faster download = faster user experience + less bandwidth use = faster internet! (we like!!!)
Cons: Normally removed from HTML mails = messy HTML layout.
Cons: Makes an extra HTTP request per file = more resources used in the Firewalls/routers.
Source/Reference : here
Keeping all the HTML, CSS, and JavaScript code in one file can make it difficult to work with. Stylesheet and JavaScript files must be wrapped in <style> and <script> tags respectively, because they are HTML snippets and not pure .css or .js files.
Many web developers recommend loading JavaScript code at the bottom of the page to increase responsiveness, and this is even more important with the HTML service. In the NATIVE sandbox mode, all scripts you load are scanned and sanitized client-side, which may take a couple of seconds. Moving your <script> tags to the end of your page will let the HTML content render before the JavaScript is processed, allowing you to present a spinner or other message to the user.
When you're hard at work on your markup, sometimes it can be tempting to take the easy route and sneak in a bit of styling.
I'm going to make this text red so that it really stands out and makes people take notice!
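In markup, that temptation looks something like this (an illustrative snippet; the original post rendered the sentence above in red):
<p style="color: red;">I'm going to make this text red so that it really stands out!</p>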
Sure, it looks harmless enough. However, this points to an error in your coding practices.
Instead, finish your markup, and then reference that P tag from your external stylesheet.
someElement > p { color: red;}
Remember: the primary goal is to make the page load as quickly as possible for the user. When loading a script, the browser can't continue until the entire file has been loaded, so the user will have to wait longer before noticing any progress.
If you have JS files whose only purpose is to add functionality -- for example, after a button is clicked -- go ahead and place those files at the bottom, just before the closing body tag. This is absolutely a best practice.

What is this script doing? [closed]

Closed 9 years ago as off-topic: the question lacks sufficient information to diagnose the problem.
I have an ASP.NET MVC 4 application.
It references a JS file like so:
<script type="text/javascript" src="http://localhost:32967/Scripts/MyScripts/Myscript.js"></script>
There was blocking time seen in the browser while downloading this script.
Googling this, I found a reference here.
When that script is added, the blocking is gone.
In fact, no blocking is seen for any further script references added.
This gave a major speed boost.
Can someone please explain what this script does?
If blocking can be avoided by doing this then why is this not a best practice?
[Update]
I have changed the script name from Reference.js to MyScript.js, as it was causing confusion with the other question about _references.js.
Here are the contents of this file:
var urls = {
    commonUrl: "http://localhost:32944/",
    myappurl: "http://localhost:32967/",
    productUrl: "http://localhost:49880/"
};
That's all there is in that file.
Regards.
References.js is only used by the Visual Studio IDE as a list of files for which to provide IntelliSense. It is not included in any HTML file as a script reference by the basic template, as I interpret your claim in your question.
If anything, adding the script reference should slow down load time. I'm going out on a limb here and claiming your measurements are wrong: adding this script can in no way speed up the loading time of any web page.
Edit: I see you have updated your question now. If this indeed is not _references.js, then you need to provide the content of the file for us to give an answer.
You ask two questions:
What does it do? It fools the browser into thinking there are no script references in this HTML page, only a single inline script. By doing this it circumvents the standard load procedure, which results in the scripts being loaded at a later point than normal, and asynchronously. Why exactly this results in asynchronous loading you must ask the implementors of the different browsers; it may not be true for all of them.
Why is it not best practice? I can think of multiple reasons:
It comes with side effects
It is a more verbose, less readable format
It breaks the HTML standard
It breaks SEO and other machine parsing of the HTML
As for #1: by loading and running asynchronously, you no longer have control over which script executes first. There may be dependencies between the scripts, which will result in runtime errors when they run in the wrong order. If you can solve the problem of running in order while still loading asynchronously (totally doable; a sketch follows the example below), you'll be fine though.
Then again, you should rather push the browser implementors to fix this than implement this rather ugly hack. There are many more viable ways of improving page load time: caching, minifying, and bundling are three that jump to mind.
Lastly, I'm assuming you have added a working implementation of the script referenced on the webdigi blog. The blog itself is good at explaining the concept, but awful on code examples, as it gives no insight into what the poorly named variables n, k, e, g and C represent. Here is a working implementation, for reference and context, which loads jQuery and jQuery UI. Note that since jQuery UI depends on jQuery core being loaded first, this example will only work half the time:
<html>
<head>
    <script>
        var headNode = document.getElementsByTagName("HEAD")[0];
        // jQuery
        var c1 = document.createElement("script");
        c1.type = "text/javascript";
        c1.src = "http://code.jquery.com/jquery-2.1.0.min.js";
        // jQuery UI
        var c2 = document.createElement("script");
        c2.type = "text/javascript";
        c2.src = "http://code.jquery.com/ui/1.10.4/jquery-ui.min.js";
        headNode.appendChild(c1);
        headNode.appendChild(c2);
    </script>
</head>
<body>
    Content goes here
</body>
</html>
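As a hedged sketch of the "run in order while still loading asynchronously" fix mentioned above (this is not the webdigi code), you can chain the second script off the first one's load event; dynamically inserted scripts can also set async = false to preserve execution order in browsers that support it:
var head = document.getElementsByTagName("HEAD")[0];
var c1 = document.createElement("script");
c1.src = "http://code.jquery.com/jquery-2.1.0.min.js";
// Start loading jQuery UI only once jQuery core has executed,
// so the dependency order is guaranteed.
c1.onload = function () {
    var c2 = document.createElement("script");
    c2.src = "http://code.jquery.com/ui/1.10.4/jquery-ui.min.js";
    head.appendChild(c2);
};
head.appendChild(c1);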

What is the acceptable number of javascript external linked files [closed]

Closed as opinion-based 8 years ago.
What is an acceptable number of externally linked JavaScript files within HTML?
Also, can a browser happen to not download an external JS file? If yes, why might that happen?
Thanks in advance.
Take into consideration that every external script takes time to load, and the server that serves it may be offline.
You should include only the scripts you actually use on the current page, not every library in the world for small things.
An acceptable number of external files is 0, in my opinion.
If you want your webpage to run smoothly, you should not rely on loading anything external.
External files are often included for testing purposes, when you don't want to save scripts or CSS on localhost (e.g. jQuery and jQuery UI). But in live production you should host them on your own server; the external server may not be available anymore in the future.
A browser does NOT choose what to download; it downloads what it is asked to. But if a script fails, or actions in that script require an additional library that isn't available, the browser will stop and give errors.
The answer to this question is quite complicated. You have to take into account caching, the number of simultaneous requests, and things like authentication.
The disadvantage of inline scripts is that you can't take good advantage of caching. If you move your scripts to external files, revisiting users may still have your files in cache and the page will load faster for them. How many scripts you should have depends on the number of simultaneous requests a browser will make (typically 4), the size of the scripts, and their execution complexity. Keep in mind that CSS files, and basically any resource on the same domain, count towards this limit as well. You may ignore stylesheets with media="print", as modern browsers will delay loading them.
If you have more than 4 scripts, the 5th script will only start loading when one of the other 4 has finished. If that script contains some DOM-ready event code, it will be delayed. You could consider merging scripts or changing the order in which they are loaded.
Another problem to be well aware of is updates. If you update your scripts and users still have the old ones cached, you're going to run into problems; some users might even get a mix of newer and older scripts. Make sure you have a mechanism in place for this. I've found fingerprinting to be really useful in cache management (a sketch follows).
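A hedged illustration of fingerprinting (the hashes in the file names are made up): a content hash is baked into the file name at build time, so a changed script gets a new URL and stale caches are bypassed, while unchanged files can stay cached indefinitely.
<script src="/js/app.3f9a2c1d.js"></script>
<!-- after the next deploy, the reference becomes e.g. /js/app.8bc04e77.js -->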
You could consider a lazy loading principle where you first only load the most basic scripts for showing what the user absolutely must see. Then load other scripts in the background as they are needed.
Then there are 3rd-party services like Google Maps. You can't really cache these files, because they change over time and may contain authentication steps to prevent abuse and such; you have limited control over these scripts.
Overall it depends on the kind of website you're making. If you're making more of a business application a relatively long load time may be acceptable. If you're making a fancy promotional site, load time is absolutely key and inline scripts may be for you.
This is quite an advanced topic, don't worry about it too much unless you run into actual performance issues. Premature optimization is the root of all evil.

Enterprise JavaScript Tips [closed]

Closed as opinion-based 9 years ago.
I'm currently writing an article covering tips/tricks/best practices when working with JavaScript within an Enterprise environment. "Enterprise" can be a bit ambiguous so, for the purpose of this article we will define it as: supporting multiple web-based applications within a network that is not necessarily connected to the Internet.
Here are just a few of the thoughts I've had, to get your creative juices flowing:
Ensure all libraries are maintained in a central, web-accessible location and that all applications reference those libraries (rather than maintaining independent copies).
Reference libraries by version, guaranteeing new releases won't break your applications (no jquery-latest, use jquery-#.#.# instead).
Proper namespacing of application code
What tips can you provide to help me out?
Test your JavaScript on the largest DOM size possible. IE6/7/8 will hang based on the number of executed VM statements, as opposed to actual run time. Regexes and regex-based jQuery selectors are particularly bad.
Write less. JavaScript in particular becomes very hard to manage and debug beyond a certain size. Breaking functionality up into different external source files can help, but always consider a better method to do what you're doing (example: a jQuery plugin).
If you're writing a common pattern over and over, STOP. Either create a global method, or, if the method acts on a jQuery selector, consider writing your own jQuery plugin instead (a minimal sketch follows this list).
Don't make methods take DOM objects or IDs. Pass in the jQuery object itself, and operate on that. In this manner, you don't force arbitrary DOM constraints on your method (the object passed in might not even be on the DOM yet, or it might not have an ID).
Don't modify prototypes. This breaks libraries/jQuery. Write a plugin or new datatype if you have to.
Don't modify libraries; this breaks upgradability. You can often achieve a similar effect by wrapping the jQuery library with your own plugin and forwarding/intercepting calls, kind of like AOP.
Don't have code execute while the DOM is still loading. This leads to race conditions that you'll only catch on the machines where the breakage occurs, and even then it won't be consistent.
Don't style the page with jQuery. It's tempting, but a FOUC gets worse as the DOM grows. Build .first-child, .last-child etc. into your server pages, as opposed to hacking them in with jQuery.
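As a minimal sketch of the plugin advice above (the plugin name and behavior are made up), here is a jQuery plugin that operates on whatever jQuery object it is called on:
(function ($) {
    // Hypothetical plugin: highlight the selected elements.
    $.fn.highlight = function (color) {
        // Operate on the jQuery object itself; no DOM IDs required,
        // and the elements don't even have to be attached to the DOM yet.
        return this.css('background-color', color || 'yellow');
    };
}(jQuery));
// Usage: pass jQuery objects around, not IDs.
$('.notice').highlight('gold');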
Maybe I'll come back and add more, but for now I have just a few in mind:
1) Caching strategies. Enterprise servers are heavily loaded; to serve HTTP requests well, it is important to know how you can deal with this. E.g. JS can be cached on the client side, but you should know how to 'tell' a client that a new version is available.
2) There are different libraries which minimize the number of requests for JS files by combining them (based on configuration). E.g. for Java there is Jawr (just one of many). It's better to load 1, 2, or 3 scripts (read: 'files') instead of 100 (and such numbers are becoming normal today, in the era of RIAs). One more nice trick Jawr does: it creates zipped bundles, so when a client asks for a script the server does not need to zip it on the fly.
3) Your business logic can be processed by an application server (the likes of JBoss, GlassFish, etc. when we talk about Java), but JavaScript is static, so it can be served by an HTTP server (like Apache, or better, lighttpd or nginx). Again, this way you reduce server load (critical for the enterprise).
4) Libraries like jQuery can be loaded from the Google CDN (or any other reliable source), ideally with a local fallback (see the sketch at the end of this answer).
5) Use YSlow, PageSpeed, or Ajax DynaTrace to check performance, get ideas for improvement, etc.
6) Try mod_pagespeed; it can 'eliminate' Jawr, or make a powerful companion for it.
7) One more technique used today is on-demand JavaScript loading.
8) Offline storage.
Well, although you've specified the topics you are interested in, the area still looks unlimited...
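As a hedged sketch of the CDN-with-fallback idea from point 4 (the local path is hypothetical): load jQuery from the Google CDN, and if that fails, fall back to a copy on your own server.
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<script>
    // If the CDN failed, window.jQuery is undefined; load the local copy instead.
    window.jQuery || document.write('<script src="/js/jquery.min.js"><\/script>');
</script>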
