Enterprise JavaScript Tips [closed] - javascript

I'm currently writing an article covering tips, tricks, and best practices for working with JavaScript in an enterprise environment. "Enterprise" can be a bit ambiguous, so for the purposes of this article we will define it as: supporting multiple web-based applications within a network that is not necessarily connected to the Internet.
Here are just a few of the thoughts I've had, to get your creative juices flowing:
Ensure all libraries are maintained in a central, web-accessible location and that all applications reference those libraries (rather than maintaining independent copies).
Reference libraries by version, guaranteeing new releases won't break your applications (no jquery-latest, use jquery-#.#.# instead).
Proper namespacing of application code (see the sketch below).
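As an illustration, a minimal namespacing sketch (the MyCompany and billing names are hypothetical):

var MyCompany = MyCompany || {};
MyCompany.billing = MyCompany.billing || {};

MyCompany.billing.formatInvoice = function (invoice) {
    // all application code hangs off a single global object, avoiding
    // collisions with libraries and other applications sharing the pages
};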
What tips can you provide to help me out?

Test your JavaScript on the largest DOM size possible. IE6/7/8 will hang based on the number of executed VM statements, as opposed to actual run time. Regexes and regex-based jQuery selectors are particularly bad.
Write less. JavaScript in particular becomes very hard to manage and debug beyond a certain size. Breaking functionality up into different external source files can help, but always consider a better method for what you're doing (for example, a jQuery plugin).
If you're writing a common pattern over and over, STOP. Either create a global method, or if the method acts on a jQuery selector, consider writing your own jQuery plugin instead.
Don't make methods take DOM objects or IDs. Pass in the jQuery object itself, and operate on that. In this manner, you don't force arbitrary DOM constraints on your method (the object passed in might not even be in the DOM yet, or it might not have an ID).
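A minimal jQuery plugin sketch along those lines (the highlight name is hypothetical): it hangs off $.fn and operates on whatever jQuery object it is called on, whether or not that object is in the DOM yet:

$.fn.highlight = function (color) {
    // 'this' is the jQuery object the plugin was called on;
    // returning it keeps the plugin chainable
    return this.css('background-color', color || 'yellow');
};

// usage: works even on detached elements with no ID
$('<div>draft</div>').highlight('#ffc');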
Don't modify prototypes. This breaks libraries/jQuery. Write a plugin or new datatype if you have to.
Don't modify libraries; this breaks upgradability. You can often achieve a similar effect by wrapping the jQuery library with your own plugin and forwarding/intercepting calls, somewhat like AOP.
Don't have code execute while the DOM is still loading. This leads to race conditions that you'll only catch on the machines where breakage occurs, and even then it won't be consistent.
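A small example of deferring execution until the DOM is ready ('#nav' is a hypothetical element):

// jQuery's ready handler runs only after the DOM is fully parsed,
// so selectors here are guaranteed to see the complete document
$(document).ready(function () {
    $('#nav').addClass('enhanced');
});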
Don't style the page with jQuery. It's tempting, but a FOUC (flash of unstyled content) gets worse as the DOM grows. Generate .first-child, .last-child etc. classes in your server-side pages, rather than hacking them in with jQuery.

Maybe I'll come back and add more, but for now I have just a few in mind:
1) Caching strategies. Enterprise servers are heavily loaded, so it is important to know how to deal with serving HTTP requests efficiently. For example, JS can be cached on the client side, but you need a way to 'tell' the client that a new version is available (see the sketch after this list).
2) There are libraries that reduce the number of requests for JS files by concatenating them (based on configuration). For Java, one example is Jawr (just one of many). It's better to load 1-3 scripts (read: files) instead of 100 (a number that is becoming normal today, in the era of RIAs). Another nice trick Jawr does: it creates gzipped bundles ahead of time, so when a client asks for a script the server does not need to compress it on the fly.
3) Your business logic can be processed by an application server (such as JBoss or GlassFish, when we talk about Java), but JavaScript is static, so it can be served by an HTTP server (like Apache, or better, lighttpd or nginx). Again, this way you reduce server load (critical for the enterprise).
4) Libraries like jQuery can be loaded from the Google CDN (or any other reliable source).
5) Use YSlow, PageSpeed, or dynaTrace AJAX Edition to check performance, get ideas for improvements, etc.
6) Try mod_pagespeed; it can replace Jawr, or complement it powerfully.
7) Another technique in use today is on-demand JavaScript loading.
8) Offline storage.
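Regarding point 1, a common cache-busting sketch (the file name and version token are hypothetical): serve scripts with far-future cache headers, and change a version token on each release so clients fetch the new copy:

<script src="/js/app.js?v=2.4.1"></script>

Bumping the query string (or renaming the file) on deployment 'tells' browsers holding a cached copy that a new version exists, while untouched files stay cached.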
Well, although you've specified the topics you are interested in, the area still looks unlimited...

Related

Javascript/jQuery best practice for web app: centralization vs. specification [closed]

Imagine a web app that has dozens of page groups (pg1, pg2, ...), where some of these page groups need some JavaScript code specific only to them, not to the entire app. For example, some visual fixes on window.resize() might be relevant only in pg2 and nowhere else.
Here are some possible solutions:
1/ Centralized: having one script file for the entire app that deals with all page groups. It's quite easy to check whether the relevant DOM object is present, so all irrelevant pages simply run a minor extra if().
The biggest advantage is that all the JS is loaded once for the entire web app and no modification of the HTML is needed. The disadvantage is that additional checks are added to irrelevant pages.
2/ Mixed: the centralized script checks for the existence of a specific function on the page and launches it if it exists. For example, we could add:
if (typeof page_specific_resize === 'function') page_specific_resize();
The specific page group in this case will have:
<script>
function page_specific_resize() {
    // ...
}
</script>
The advantage is that the code exists only for relevant pages, so it isn't tested on every page. The disadvantage is the additional HTML size for every page in the page group. If there is more than a few lines of code, the page group could load an additional script specific to it, but then we're adding an HTTP call just to save a few kilobytes in the centralized script.
Which is the best practice? Please comment on these solutions or suggest your own solution. Adding some resources to support your claims for what's better (consider performance and ease of maintenance) would be great. The most detailed answer will be selected. Thanks.
It's a bit tough to decide on a best solution, since it's a hypothetical scenario and we don't have the numbers to crunch: what the most loaded pages are, how many there are, the total script size, etc.
That said, I didn't find the specific answer, the Best Practice™, but rather general points that people agree on.
1) Caching:
According to this:
https://developer.yahoo.com/performance/rules.html
"Using external files in the real world generally produces faster
pages because the JavaScript and CSS files are cached by the browser"
2) Minification
Still according to the Yahoo link:
"[minification] improves response time performance because the size of the downloaded file is reduced"
3) HTTP Requests
It's best to reduce HTTP calls (based on community answer).
One big javascript file or multiple smaller files?
Best practice to optimize javascript loading
4) Do you need that specific script at all?
According to: https://github.com/stevekwan/best-practices/blob/master/javascript/best-practices.md
"JavaScript should be used to decorate your site with additional functionality, and should not be required for your site to be operational."
It depends on the resources you have to load, and on how frequently a specific page group is loaded or how frequently you expect it to be requested. Is the web app a single page? What does each specific script do?
If the script loads a form, the user will not need to visit the page more than once; the user will need an internet connection to post the data later anyway.
But if it's a script to resize a page and the user has some connection hiccups (e.g. visiting your web app on a mobile device while riding the subway), it may be better to have the code already loaded so the user can navigate freely. According to the GitHub link I posted earlier:
"Events that get fired all the time (for example, resizing/scrolling)"
Is one thing that should be optimized because it's related to performance.
Minifying all the code into one JS file to be cached early will reduce the number of requests made. Also, it may take a few seconds for a connection to be established, but only milliseconds to process a bunch of "if" statements.
However, if you have a heavy JS file for just one feature which is not the core of your app (say, this single file is almost n% of the size of all the other scripts combined), then there is no need to make the user wait for that specific feature.
"If your page is media-heavy, we recommend investigating JavaScript techniques to load media assets only when they are required."
This is the holy grail of JS, and hopefully what modules in ES6/ES7 will solve!
There are module loaders for the client such as JSPM, and there are now JS bundlers for Node.js, such as webpack, that support hot swapping and can serve code to the client in chunks.
I experimented successfully last year with creating a simple application that only loaded the required chunk of code/file as needed at runtime:
It was a simple test project I called praetor.js on github: github.com/magnumjs/praetor.js
The gh-pages-code branch can show you how it works:
https://github.com/magnumjs/praetor.js/tree/gh-pages-code
In the main.js file you can see how this is done with webpackJsonp once compiled:
JS:
module.exports = {
    getPage: function (name, callback) {
        // is there no better way to do this with webpack chunk files?
        if (name.indexOf("demo") !== -1) {
            // the AMD-style require triggers an async chunk load
            require(["./modules/demo/app.js"], function (require) {
                callback(name, require("./modules/demo/app.js"));
            });
        }
        // ...
To see it working live, go to http://magnumjs.github.io/praetor.js/ and open the network panel and view the chunks being loaded at runtime when navigating via the select menu on the left.
Hope that helps!
It is generally better to have fewer HTTP requests, but it depends on your web app.
I usually use requirejs to include the right models and views I need in each file.
While in development it saves me considerable time on debugging (because I know which files are present), in production this could be problematic considering the number of requests.
So I use r.js, which is designed specifically for RequireJS, to compile my JS files when it's time for deployment.
Hope this helps

Is there an easy way to make Javascript apps SEO friendly? [closed]

I've been looking at using a new workflow process for web development. Yeoman, Grunt, and Bower with AngularJS seem like a great solution for front-end development. The only downside is that the SEO is absolutely horrible. That seems like a HUGE component of the business decision driving adoption of these services, yet I can't find any solutions.
What's a solid solution for making SEO-friendly javascript apps?
The current standard practice for making ajax heavy sites/apps SEO friendly is to use snapshots. See the google tutorial on this here: https://developers.google.com/webmasters/ajax-crawling/docs/html-snapshot and here: https://developers.google.com/webmasters/ajax-crawling/docs/specification
To summarize, you add the tag <meta name="fragment" content="!"> to your DOM. The crawler will see this and redirect itself from www.example.com to www.example.com?_escaped_fragment_=, where it will expect the snapshot of the page.
You could manually copy the HTML from your site after all the AJAX has finished and create your snapshot files yourself. However, this can be quite a nuisance. Instead, you can use PhantomJS to automate the process. Personally, I am going to use .htaccess to send the _escaped_fragment_ requests to a single PHP file that serves cached markup created by the content manager when edits were made. This recreates the markup for crawlers to view (but with no functionality for humans).
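As an illustration, a minimal PhantomJS sketch of the snapshot step (the URL, delay, and output file name are assumptions):

// render the page in PhantomJS, wait for AJAX to settle, then save
// the resulting DOM as a static snapshot for crawlers
var page = require('webpage').create();
var fs = require('fs');

page.open('http://www.example.com/#!/some-route', function (status) {
    // crude but common: wait a fixed delay for AJAX to finish
    setTimeout(function () {
        fs.write('snapshot.html', page.content, 'w');
        phantom.exit();
    }, 2000);
});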
Here's a relevant piece of info from Debunking 10 common KnockoutJS myths. I assume it applies more or less equally to Angular.
Graceful degradation in the absence of JavaScript depends on the way your application has been architected. Although KO, being a pure JavaScript library, does not offer any support for graceful degradation in the absence of JavaScript, unlike many of the competing technologies it nevertheless does not hinder graceful degradation.
To create a KO application that degrades gracefully, just ensure that the initial state of the page rendered by the server suffices to convey the information a user should see in the absence of JavaScript. Fallback mechanisms (e.g. simple forms and links) should be available that provide the complete (or partial) application functionality in the absence of JavaScript. Then, when you create your view models, you can instantiate them from the data already available in the DOM, and future data can be loaded via AJAX without refreshing the page.
A good example of this functionality is a grid. The basic HTML page served by the server can contain a simple HTML table with support for traditional links for pagination. Then you can create your view models from the data present in the table (or via AJAX, if a bit of redundant data load does not matter to you) and utilize KO for the interactive bindings.
Since KO does not use special inline markup or custom HTML tags, but rather simple data-bind attributes which are in any case not visible in the absence of JavaScript, it does not hinder graceful degradation.

Getting Javascript via AJAX [closed]

I would like to know what happens if I get all my scripts (not vendor ones) via AJAX requests, and what's good or bad about that. Would the browser cache the responses, or would the server send my scripts with every request as if the user had never been to my site?
Performance-wise, would that be better than placing all the scripts in the footer?
"Getting scripts via AJAX" is a bit of a confusion of terms: you can use JavaScript to create additional script elements that load extra scripts into the page, and this can be done asynchronously so that it does not block the rendering or download of the page. Take a look, for example, at the following:
// create a new script element, point it at the file to load, and
// append it to <head>; the browser fetches and executes it without
// blocking parsing of the rest of the page
var script = document.createElement('script');
script.src = 'myscript.js';
document.head.appendChild(script);
It creates a new script element, assigns a source, and then appends it to the head section of the document. If this is run at the end of the document, the script is downloaded without having blocked the page load.
This simple idea is the basis for all (that I know of) front-end JS dependency management - the idea that the requested page only download the assets it needs. You might have heard of JS module definitions such as AMD and CommonJS and that's a huge topic, so I'd recommend taking a read of Addy Osmani's "Writing Modular JavaScript With AMD, CommonJS & ES Harmony" article.
In answer to whether or not they will be cached, the answer is yes, in general, though caching depends on many factors on both the server and in the user's browser. The individual requests will still need to be made on each subsequent page load, which can be slower than one big file on a flaky connection. The decision really comes down to how users access your site and whether a slight initial speed decrease is worth the gains in authoring and maintenance. You can always go for the latter and later move to the former.
In essence, <script> by default pulls data into your page synchronously. That can be an issue when scripts are placed in the head section, since it blocks the display of your body's content.
Pulling in JavaScript asynchronously via AJAX usually comes in handy when specific, narrow conditions are met, i.e. most users won't need the script in the first place. Say you have a page where, only if you're logged in as an admin, an additional script is pulled in to handle the admin UI. Or some such.
But generally speaking, there's no real advantage in AJAX-ing JavaScript, and should you want to avoid broken dependencies (your script being pulled in earlier than your jQuery library, for instance), just stick with the already optimal solution: placing your scripts as the last thing before the closing body tag.
Note also that there's a new async attribute in the HTML5 draft that makes it possible to fetch a script asynchronously (and therefore speed up loading, in theory) even without any AJAX magic. As always, though, it's only supported by modern browsers, kicking IE9 and older out of the game.
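A brief illustration (the file names are hypothetical; defer is shown alongside for comparison):

<!-- async: downloads without blocking parsing, executes as soon as
     it arrives, in no guaranteed order -->
<script async src="analytics.js"></script>
<!-- defer: downloads without blocking, executes in document order
     after parsing has finished -->
<script defer src="app.js"></script>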
Hope it helps!

How much speed is gained with RequireJS/AMD in JS? [closed]

How much faster is RequireJS, actually, on a large website?
Has anyone done any tests on the speed of large websites that use asynchronous loading vs not?
For instance, using Backbone with a lot of views (> 100), is it better to simply have a views object that gets loaded with all the views at once and is then always available, or should they all be loaded asynchronously as needed?
Also, are there any differences for these considerations for mobile vs desktop? I've heard that you want to limit the number of requests on mobile instead of the size.
I don't believe the intent of require.js is to load all of your scripts asynchronously in production. In development, async loading of each script is convenient because you can make changes to your project and reload without a "compile" step. However, in production you should be combining all of your source files into one or more larger modules using the r.js optimizer. If your large web app can defer loading of a subset of your modules until a later time (e.g. after a particular user action), those modules can be optimized separately and loaded asynchronously in production.
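A minimal r.js build-profile sketch of that setup (the paths and module names are assumptions):

// build.js - run with: node r.js -o build.js
({
    baseUrl: "js",              // where the source modules live
    name: "main",               // entry module to trace and combine
    out: "dist/main-built.js"   // single optimized file for production
    // modules deferred until a later user action can be left out of
    // "main"'s dependency graph and optimized as separate targets
})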
Regarding the speed of loading a single large JS file vs multiple smaller files, generally:
“Reduce HTTP requests” has become a general maxim for speeding up frontend performance, and that is a concern that’s even more relevant in today’s world of mobile browsers (often running on networks that are an order of magnitude slower than broadband connections). [reference]
But there are other considerations such as:
Mobile Caches: iPhones limit the size of files they cache, so big files may need to be downloaded each time, making many small files better.
CDN Usage: If you use a common 3rd party library like jQuery it's probably best not to include it in a big single JS file and instead load it from a CDN since a user of your site may already have it in their cache from another website (reference). For more info, see update below
Lazy Evaluation: AMD modules can be lazily parsed and evaluated allowing download on app load while deferring the cost of parse+eval until the module is needed. See this article and the series of other older articles that it references.
Target Browser: Browsers limit the number of concurrent downloads per hostname. For example, IE 7 will only download two files from a given host concurrently. Others limit to 4, and others to 6. [reference]
Finally, here's a good article by Steve Souders that summarizes a bunch of script loading techniques.
Update: Re CDN usage: Steve Souders posted a detailed analysis of using a CDN for 3rd party libraries (e.g. jQuery) that identifies the many considerations, pros and cons.
This question is a bit old now, but I thought I might add my thoughts.
I completely agree with rharper in using r.js to combine all your code for production, but there is also a case for splitting functionality.
For single page apps I think having everything together makes sense. For large scale more traditional page based websites which have in page interactions this can be quite cumbersome and result in loading a lot of unnecessary code for a lot of users.
The approach I have used a few times is:
define core modules (needed on all pages for them to operate properly); these get combined into a single file,
create a module loader that understands DOM dependencies and the paths to modules,
on doc.ready, loop through the module loader and asynchronously load the modules needed for enhanced functionality on the specific page.
The advantage here is that you keep the initial page weight down, and since the additional scripts are loaded asynchronously after page load, the perceived performance should be faster. That said, all functionality loaded this way should be approached as progressive enhancement (e.g. AJAX on forms) so that in the event of slow loading or errors the basic functionality is still available. A sketch of such a loader follows.
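A hedged sketch of the DOM-driven loader described above (the data-module attribute and module paths are assumptions, and an AMD loader such as RequireJS is presumed present):

// on doc.ready, find every element that declares a module dependency,
// e.g. <div data-module="carousel">, and async-load that module
$(function () {
    $('[data-module]').each(function () {
        var name = $(this).data('module');
        // the page already works without this module; it only adds
        // enhanced behaviour (progressive enhancement)
        require(['modules/' + name], function (mod) {
            mod.init();
        });
    });
});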

What is a good book or resource for writing large ajax applications? [closed]

I am very experienced in engineering large-scale systems, but I am still relatively new to AJAX-based design. I know how to use the APIs, and I am fairly comfortable with jQuery and with JavaScript as a whole, but I often find myself thinking way too hard about the overall architecture.
Right now, my application just has JavaScript files sprinkled all over the place, all in a /js directory. Most of them use jQuery, but some use YUI or a combination of the two because certain features weren't available in jQuery.
Some of my REST server methods accept normal GETs with request parameters, but others need much more complex POSTs to handle the incoming data (lists of lists of objects). As a result of the complexity of the data I'm dealing with, my AJAX handling is a mishmash of different methods.
What I'd really like is to read about how to design an ajax-based system that is very clean and elegant architecturally, and is consistent from the simplest to the most complex of cases. Does such a resource exist?
Also, any suggestions on naming conventions for JavaScript files, and conventions for AJAX endpoint directory/method names?
Also, what about submitting form data: should you use GET or POST for this?
Also, what about validation of form data when all the constraints are already on the server? How can this be made trivial, so you're not redoing it for each form?
What are the best ways to generate new page content when people click things, and how do you set this up so that it's easy to do over and over?
How do you deal with application-specific JavaScript files depending on each other, and manage this nicely?
I am also using Spring and Spring-MVC, but I don't expect this to make much difference. My questions are purely browser related.
There's a TL;DR summary at the end.
I can't really point you to a good resource for this as I haven't found one myself. However, all is not lost. You already have experience in developing large-scale applications and taking this knowledge into the browser-space doesn't require a lot of re-thinking.
First of all, unless your application is really trivial, I wouldn't start refactoring the entire codebase straight away because there are bound to be endless cases you haven't thought of yet.
Design the core architecture of the system you want first. In your case you probably want all your AJAX requests to go through one point. Select the XHR interface from either jQuery or YUI and write a wrapper around it that takes an options hash. All the XHR calls you write for new code go through there. This allows you to swap out the framework performing the XHR calls at any time, for another framework or your own.
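A minimal sketch of such a single-point XHR wrapper (the MyApp namespace and option names are hypothetical; jQuery is used here, but that's the point — it can be swapped in one place):

var MyApp = MyApp || {};

MyApp.ajax = function (options) {
    // every request in the app goes through here, so the underlying
    // framework can be replaced without touching the calling code
    return jQuery.ajax({
        url: options.url,
        type: options.type || 'POST',
        data: JSON.stringify(options.data),
        contentType: 'application/json',
        dataType: 'json',
        success: options.success,
        error: options.error
    });
};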
Next up, harmonize the wire protocol. I'd recommend using JSON and POST requests (POST requests have the additional benefit, for form submissions, of not being cached). Make a list of the different types of requests/responses you need. For each of these responses, make a JS object to encapsulate them. (E.g. the form submission response is returned to the caller as a FormResponse object which has accessor functions for the validation errors, etc.) The JS overhead for this is totally trivial, and it makes it easy to change the JSON protocol itself without going through your widget code to change the access of the raw JSON.
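A hedged sketch of such a response wrapper (FormResponse is the name used above; the JSON field names are assumptions):

function FormResponse(json) {
    this._json = json;
}

FormResponse.prototype.isSuccess = function () {
    return this._json.status === 'ok';
};

FormResponse.prototype.validationErrors = function () {
    // callers never touch the raw JSON, so the wire format can change
    // without touching widget code
    return this._json.errors || [];
};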
If you're dealing with a lot of forms, make sure they all have the same structure so you can use a JS object to serialize them. Most frameworks seem to have various native functions to do this, but I'd recommend rolling your own so you don't have to deal with shortcomings.
The added business value at this point is of course zero because all you have is the start of a sane way of doing things and even more JS code to load into your app.
If you have new code to write, write it on the APIs you've just implemented. That's a good way to see if you're not doing anything really stupid. Keep the other JS as it is for now but once you have to fix a bug or add a feature there, refactor that code to use your new APIs. Over time you'll find that the important code is all running on your APIs and a lot of the other stuff will slowly become obsolete.
Don't go overboard with re-inventing the wheel, though. Keep this new structure limited to data interaction and the HTTP wire and use your primary JS framework for handling anything related to the DOM, browser quirks, etc.
Also set up a global logger object and don't use console directly. Have your logger object use console, a custom DOM logger, or whatever you need in different environments. That makes it easy to build in custom log levels, log filters, etc. Obviously you have to set up your build environment to scrub that code out of production builds (you do have a build process for this, right?).
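A minimal sketch of that logger indirection (the level names are illustrative):

var logger = {
    level: 'warn', // 'debug' | 'info' | 'warn' | 'error'
    _levels: { debug: 0, info: 1, warn: 2, error: 3 },
    log: function (level, msg) {
        if (this._levels[level] >= this._levels[this.level]) {
            // swap this line for a DOM logger or a no-op in other builds
            if (window.console) console.log('[' + level + '] ' + msg);
        }
    }
};

logger.log('debug', 'filtered out at the warn level');
logger.log('error', 'this one is printed');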
My personal favorite for relatively sane JS source-code layout and namespacing is the Dojo framework. Object definitions relative to their namespace have obvious relations to their location on disk, there's a build system in place for custom builds, third-party modules, etc. The Dojo dependency/build system depends on dojo.require and dojo.provide statements in the code. When running on source a dojo.require statement will trigger the blocking load of the resource in question. For production the build system follows these statements and inserts the resource into the final bundle at that location. The documentation is a bit sparse, but it's definitely a good start for inspiration.
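For flavor, here is the legacy (pre-AMD) Dojo module style referenced above, as a sketch (the myapp namespace is hypothetical):

dojo.provide("myapp.widgets.Grid"); // declares this file's namespace

dojo.require("dojo.date.locale");   // blocking load when run from source;
                                    // the build inlines it for production

myapp.widgets.Grid = function () {
    // module body; its location on disk mirrors the dotted namespace
};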
The TL;DR answer is,
Have all XHR calls go through a single interface
Don't pass raw response data back to higher levels
Do gradual refactoring of existing code
Harmonize the wire protocol
Use the dynamic power of JS for building light-weight proxy code
Sane code structure and call graphs look the same in JS as they do in other languages
Ajax Patterns is a pretty awesome book/site: http://ajaxpatterns.org/
Otherwise you may want to check out Advanced Ajax Architecture Best Practices
How you go about designing your site should be based on the features and size of your app. I would keep your focus there, as opposed to looking for an architecture that works for all cases.