I am an intermediate-level JavaScript developer trying to understand how great JavaScript developers write their code, and I decided to start by looking into the Backbone library.
Here are some code snippets from Backbone's initial setup; please help me make sense of them.
code1 -
(function(){
var root = this;
}).call(this);
Is there any specific reason to use the call method over simply using (), or is it just a coding preference? If I had to write the same code, I would do something like this:
(function(root){
})(this);
code2 -
var Backbone;
if (typeof exports !== 'undefined') {
Backbone = exports;
} else {
Backbone = root.Backbone = {};
}
Now, there is no definition of exports in the global scope, nor is it defined anywhere in the local scope, so what is the if block doing here? If I were writing the same code, I would write:
var Backbone = root.Backbone = {};
code 3
var _ = root._;
if (!_ && (typeof require !== 'undefined')) _ = require('underscore')._;
Again, I can't find a definition of require anywhere in the local or global scope.
Code Block 1
This is down to developer preference; you could write that code either way and, indeed, many libraries do prefer your suggested style.
Code Block 2
This block is a take on the AMD boilerplate. AMD libraries provide the hooks required to split your JavaScript code into modules. In this code block's case, the exports object is a global used by the CommonJS module standard. If the exports global is not present, then Backbone is added to the root object directly.
An interesting side note on this is the fact that Backbone does not support exporting to the popular RequireJS AMD library.
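To make the effect of that check concrete, here is a rough sketch of how the same library ends up being consumed in each environment (illustrative only):
// In Node / CommonJS, an `exports` object exists, so the library attaches
// its API to it and consumers pull it in with require():
var Backbone = require('backbone');

// In a plain browser there is no `exports`, so the library falls back to
// creating an object on the root (window) object instead:
console.log(window.Backbone); // the {} created via root.Backbone = {}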
Code Block 3
require is another global introduced by module loaders (CommonJS/Node as well as AMD); see above.
In code1, call(this) passes a reference to the current this to the function. When done in the global scope, this is window. I think it's just preference and doesn't make a difference in that case.
require and exports are provided by NodeJS (or any CommonJS-compliant library) and are part of the CommonJS spec.
I think that code 2 handles the case where someone (or some environment, such as Node) has already defined exports in the global scope: if exports is not undefined, Backbone attaches itself to it; otherwise it falls back to an empty object on root.
@JonnyReeves has pretty much answered all of your questions. Not sure if you've seen the annotated source code of backbone.js before; it would be useful for you:
http://documentcloud.github.com/backbone/docs/backbone.html
I see this in many JS libraries out there, mostly looking through GitHub, currently looking at PeerJS. I'm referring to this:
var util = require('./util');
var EventEmitter = require('eventemitter3');
var Negotiator = require('./negotiator');
var Reliable = require('reliable');
...
window.Socket = require('./socket');
window.MediaConnection = require('./mediaconnection');
window.DataConnection = require('./dataconnection');
window.Peer = require('./peer');
window.RTCPeerConnection = require('./adapter').RTCPeerConnection;
window.RTCSessionDescription = require('./adapter').RTCSessionDescription;
window.RTCIceCandidate = require('./adapter').RTCIceCandidate;
window.Negotiator = require('./negotiator');
window.BinaryPack = require('js-binarypack');
...
Just from intuition, it seems like require() is importing/including whatever is being passed in, e.g. EventEmitter. However, I can't figure out where require() is coming from.
I'm not too familiar with NodeJS, and this seems like a NodeJS thing, but I don't understand how require() fits into a web browser context, where NodeJS doesn't exist.
I've seen RequireJS and Browserify, but those are libraries that would need to be included with the application in order to use the require() function. In the example of PeerJS, I can just include it:
<script type="text/javascript" src="/static/js/peerjs.v0.3.13.js"></script>
... and it uses require() no problem. However, it doesn't look like any 3rd-party library that defines require() is bundled along with the PeerJS source code.
How is it being included? How is it being initialized? How is it fetching whatever is being passed in, i.e., "EventEmitter"?
It's a way to include external JavaScript (or really any other file) in your script through a single point of entry without introducing globals. It's most often used with Asynchronous Module Definition (AMD) and CommonJS modules. At its highest level, require() is an API for using JavaScript modules. NodeJS uses the CommonJS syntax for the most part.
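As a rough illustration of that single point of entry, a browser bundle can ship its own tiny require() alongside the module bodies, which is why a file like peerjs.v0.3.13.js can call require() without any loader on the page. This sketch is only an approximation of what bundlers such as Browserify generate, not their actual output:
(function (modules) {
    var cache = {};
    function require(id) {
        if (cache[id]) return cache[id].exports;      // reuse already-loaded modules
        var module = cache[id] = { exports: {} };
        modules[id](require, module, module.exports); // run the module body once
        return module.exports;
    }
    require('/peer');                                 // kick off the entry module
}({
    '/util': function (require, module, exports) {
        exports.log = function (msg) { console.log('[util] ' + msg); };
    },
    '/peer': function (require, module, exports) {
        var util = require('/util');                  // looks like Node, but stays inside the bundle
        util.log('bundle loaded');
    }
}));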
There's also no definitively "better" one to use. It becomes almost an East Coast/West Coast turf war trying to discuss one vs. the other. Some say that AMD modules are too verbose, since you have to capture all of your imports in a closure and then use them, which means you have to look in two different places for where you are defining your variables and imports:
AMD (from RequireJS, also Dojo)
define('myModule', ['dep1', 'dep2'], function (dep1, dep2) {
return function () {};
});
CommonJS
var foobar = require('./foobar').foobar,
test = new foobar();
test.bar(); // 'Hello bar'
Additionally, some say that the A part of AMD doesn't really benefit you often enough to be worth the overhead.
Where does the "require" part come from?
In the browser, require() comes from whatever library you are loading, whether it is an AMD or CommonJS loader. However, in your examples above, it looks like it is being used in a NodeJS context. NodeJS has require() defined within the environment, just like module.exports. In fact, module.exports is what gets handed to whatever variable you are assigning with require(...).
Contrast module.exports from CommonJS with how AMD gets your module into your variable's hands, which is simply the return value of the factory function you pass to define().
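To make that contrast concrete, here is a minimal sketch; the file names are made up:
// CommonJS: whatever you assign to module.exports is exactly what
// require('./math') hands back to the caller.
// --- math.js ---
module.exports = { add: function (a, b) { return a + b; } };
// --- app.js ---
var math = require('./math');
console.log(math.add(1, 2)); // 3

// AMD: whatever the factory function returns is what the loader passes
// to any module that lists 'math' as a dependency.
define('math', [], function () {
    return { add: function (a, b) { return a + b; } };
});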
Can these be used together?
One other thing to touch on is that they aren't exclusive of one another. They can be coerced into playing together, either by using some kind of wrapper or by following another convention: UMD. UMD tries to let you create modules that work no matter the environment, exposing your module through whatever convention is available, be it module.exports or define().
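A stripped-down UMD wrapper looks roughly like the sketch below; the canonical UMD templates cover more cases, and myModule is just a placeholder name:
(function (root, factory) {
    if (typeof define === 'function' && define.amd) {
        define([], factory);              // AMD loader present
    } else if (typeof module === 'object' && module.exports) {
        module.exports = factory();       // CommonJS / Node
    } else {
        root.myModule = factory();        // plain browser global
    }
}(this, function () {
    return { answer: 42 };
}));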
I was going through some github library wherein I came across the following lines:
//Util.js
(function(exports,global){
global["true"] = exports;
"use strict";
...rest of the code here
})({}, function() {
return this;
}());
The above file was directly included in index.html.
I am aware of the concept of self-executing functions in JS, but what purpose could the above serve?
The script is using the global this to hold a property called 'true' that contains the object passed in as exports, which you probably already figured out. This script appears to be one that initializes some functionality.
My guess as to why they are doing it this way is that they are anticipating that this might not be run in a browser (probably for injecting mocks when unit testing, for example), so it is a way to abstract the global object.
The exports and global parameters are only mentioned as arguments and when setting the property global["true"] in your script. This is why I think it is an initializing script, and the exports variable will be used later on by other related scripts. Those related scripts might also be under test (or whatever), and so their global object might be the same as this one.
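If that reading is right, a later script could pick the shared object back up through the same global property, roughly like this (entirely hypothetical, not taken from the library in question):
// A later, non-strict script can reach the object Util.js stored on the global:
var shared = (function () { return this; }())["true"];
shared.sayHi = function () { console.log('hi from another script'); };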
Looking for requirejs.noConflict(), or some way to remove the global footprint from the library. My use case is that I'm writing a widget to run in a 3rd party page which might already define require and define (with a previous version of requirejs or even another library entirely).
I've tried implementing this myself and it mostly works. For simplicity, let's assume I do not restore original values, only remove the footprint. I do something like this in my data-main:
var mycontext = requirejs.config({context : 'asdf', baseUrl : 'http://foo.com' });
mycontext(['require','foo'], function (require, foo) {
var f = require('foo');
});
// namespace cleanup
window.requirejs = undefined;
window.require = undefined;
window.define = undefined;
The problem here is that the f value from require('foo') is null, but only if I do my namespace cleanup. This smells like a bug in RequireJS, but I'm hoping there's an official no-conflict solution.
Hard to google this one due to all the jQuery related noConflict() questions.
The RequireJS documentation has a section on this titled "HOW CAN I PROVIDE A LIBRARY TO OTHERS THAT DOES NOT DEPEND ON REQUIREJS?"
You can also have a look at the example build file, which shows how wrap can be used.
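For reference, a wrap-based build profile might look something like this sketch; the paths and module names are assumptions, not taken from that example file:
// build.js - run with: node r.js -o build.js
({
    baseUrl: "src",
    // Bundle a tiny AMD shim (e.g. almond) instead of the full require.js,
    // so the built file does not need a loader on the page.
    name: "lib/almond",
    include: ["main"],
    out: "dist/mywidget.js",
    // Wrap the output in an IIFE so no require/define/requirejs globals leak.
    wrap: true
})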
A good example of how this works is the Flight library.
By default the library uses AMD, but there is a standalone version that wraps all the modules in an anonymous function and includes a tiny require/define API at the top (they use their own build tool for this). The source can be found here.
Note that for this to work all the modules will require IDs, but the r.js optimiser will handle that for you.
Using these techniques should allow you to leave any already defined require/define variables alone.
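The standalone-build idea boils down to something like the following sketch: a micro define/require pair plus named modules, all hidden inside one IIFE. The names here are illustrative, not Flight's actual code:
(function () {
    var registry = {};
    function define(id, deps, factory) {
        // Named modules only; dependencies are resolved from the registry.
        registry[id] = factory.apply(null, deps.map(require));
    }
    function require(id) {
        return registry[id];
    }
    // The build tool concatenates the named module definitions here, e.g.:
    define('greeter', [], function () {
        return { hello: function (name) { return 'Hello, ' + name; } };
    });
    define('main', ['greeter'], function (greeter) {
        return greeter.hello('world');
    });
    window.MyWidget = require('main'); // expose a single global, nothing else leaks
}());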
Consider this:
<script src='global.js'></script>
<script src='require.js'></script>
<script>
require(['modular_foo'], function() {
//do stuff
});
</script>
...and inside global.js we have, among other things:
//global.js
$.getScript("modular_bar.js");
where both modular_foo and modular_bar are anonymously defined AMD modules. Using RequireJS, loading something like the above would give you our favourite error: mismatched anonymous define() modules.
It's clear enough why that error occurs (read up on that page if you'd like to know), but the problem is: what if you can't get out of this situation?
I'm working in an established platform which is very gradually migrating to an RJS flow; for now there's no way around using both inline legacy scripts (some of which have AMD checks that trigger define()) and our RequireJS entry point simultaneously.
In some cases I can simply place the inline AMD-compatible scripts above the require.js library, but that doesn't work when you need to load other things (modular_bar.js) asynchronously depending on the DOM content. I could also just comment out all the AMD checks in the files loaded outside of RJS, but that would make them incompatible with ever being loaded in a modular flow.
Anyone out there had a similar experience? How do you blend your flows to overcome these sorts of conflicts?
I do not have experience using this solution in a production environment, but I spent a couple of days on this question figuring out the best way to approach it.
You can use a modified RequireJS library that does not allow anonymous defines to execute when they are being run through eval. You can also disallow any define calls by removing the type check for a string name in the snippet below.
The following snippet is a modification to RequireJS that will ignore anonymous defines if they are called from eval. You can find the fully modified require.js in this GitHub Gist.
The code relies on the parse-stack library. If you can't include that library before RequireJS, I suggest just concatenating it to the top of the file.
Demo
// This is a snippet of RequireJS
// ...
define = function (name, deps, callback) {
var node, context;
// We will allow named modules to be defined by `eval`
if (!(typeof name == 'string' || name instanceof String))
{
var stack = parseStack(new Error());
// If we find any eval in the stack, do not define the module
// This is to avoid the "Mismatched anonymous define() module" error
// Caused by executing an anonymous define without requireJS
for(var i = 0; i < stack.length; i++)
{
if(stack[i].name == "eval")
{
return;
}
}
}
// ...
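With the patched define in place, the behaviour would be roughly as follows; this is illustrative only, and the module name is made up:
// An anonymous define issued through eval is silently skipped, because
// "eval" appears in the parsed stack trace:
eval('define(function () { return 42; });');
// A named define still registers as usual:
define('answer', [], function () { return 42; });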
I understand how things like proper namespacing and the Module Pattern help with issues associated with leaking into the global scope.
I also completely see the value of the resource dependency management provided by the require() protocol outlined in the CommonJS specification.
However, I am befuddled as to the benefit of the AMD define() function's use and purpose.
The CommonJS signature for define is:
define(id?, dependencies?, factory);
Additionally…
I see people wrapping their Module Pattern plug-ins with define() when BOTH create an object at the global scope.
A file containing a normal Module Pattern plug-in can be loaded asynchronously as well.
At first, it "seemed" like yet-another plug-in wrapper...until I began to see folks use it alongside the Module Pattern.
So my questions are:
What does the define() protocol outlined in the CommonJS specification buy me?
Is it somehow more eloquent?
Is it meant to replace the Modular Pattern?
Is it somehow faster?
If so, why?
// main.js
require("foo.js", function(foo) {
console.log(foo === 42); // true
});
//foo.js
/*
define(42);
define({
"foo": "bar"
});
define(["bar.js"], function(bar) {
return bar.foo;
});
*/
define(function() {
return 42;
});
Define is a great way to pass modular objects back without relying on global scope.
The particular API of define varies from library to library though.
Here, the basic idea is that you call define in a file to define what that module is. Then, when you require the file, you get the module. This cuts out the middleman that is the global scope.
It's no faster, though (it's slower than injecting into the global scope).
Using require and define, you only have two global values.
The particular define example above matches the RequireJS API.