I've recently started using RequireJS, which lets me organize my code in a nice way, so define and require became my best friends. But now I see one problem which I don't know how to solve in terms of RequireJS, or in terms of some particular design pattern. Imagine I have a really huge module containing zillions of methods. I define it like so:
define(function(){
    return {
        property_0: "value_0",
        property_1: "value_1",
        ...
        property_zillion: "value_zillion",
        method_0: function(){...},
        ...
        method_zillion: function(){...}
    };
});
Please don't ask why I have such a huge module - it's just an abstraction. The question is: is it possible to import or require not the entire module, but only some of its properties and methods? Let's say I somehow assign my module to a local instance; if I then investigate the internals of that instance, it should contain only certain properties and methods. Is that possible?
One thing you definitely should do is not export anything that is not meant to be part of the public API of your module.
This being said, RequireJS has no notion of importing only part of a module. When you list a module as a dependency, RequireJS loads the module, loads and executes its dependencies, calls its factory function (this is the function passed to define) with the resolved dependencies, and records what the module exported. Then when you use it somewhere else, the module's export value is bound to the corresponding parameter in the callback. So in this code
require(["foo"], function (foo) {...
you get as foo all of what was exported by the module "foo".
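If you only want a few of those exports in scope, nothing stops you from binding them to local names yourself. A minimal sketch, assuming a hypothetical module "foo" that exports bar and baz:

require(["foo"], function (foo) {
    // The whole module is still loaded and executed, but we only
    // bind the members we care about to local names.
    var bar = foo.bar;
    var baz = foo.baz;
    console.log(bar, baz);
});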
If you use ES6 (aka ES2015) and have the ES6 modules converted to AMD modules (by Babel, for instance), then you can have some language-based notion of a partial import. For instance, if foo exports like this: return { bar: 1, baz: 2, bwip: 3 }, then you could import only bar like this:
import { bar } from "foo";
console.log(bar);
This would output 1 to the console. Note, however, that this does not change how the module is loaded and processed by RequireJS. RequireJS reads the whole module and executes all of the factory function. The import above only affects how the code that loads "foo" gets to access the exported values.
Related
This might be basic, but since I am new to this I can't tell whether I am doing it wrong and it's bad practice. Say I have 3 JS files:
a.js
b.js
sdk.js
Say that sdk.js is some SDK, and I would like to call functions inside it from different JS files, such as a.js or b.js.
Is the way to go just to call functions in sdk.js from any other file directly? This seems like no encapsulation to me, but I couldn't find another way without using object-oriented programming.
There are several ways to call functions in another file.
1) Native Solution
In your HTML file, make sure your <script> tag importing sdk.js is loaded first, like so:
<script src="./sdk.js"></script>
<script src="./a.js"></script>
<script src="./b.js"></script>
As long as sdk.js declares the function in the global namespace, any file loaded afterwards should have access to it.
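For example (a minimal sketch; initSdk is a hypothetical function name):

// sdk.js - declares a function in the global namespace
function initSdk() {
    console.log("SDK initialized");
}

// a.js - loaded after sdk.js, so the global function is visible
initSdk();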
2) ES6
If you can use ES6 features (via babel or webpack, for example), then your other scripts can import the file. Take the following example:
sdk.js:
export var foo = function(){
    return "foo";
};
a.js
import { foo } from "./sdk.js";
foo();
3) Node.js way
Node.js supports require, another way of importing files. These files operate under their own scope, and you have to use module.exports to make a function or variable "public".
Worth noting that this only works in Node, not the browser, however.
4) Require.js way
You can use the require.js library to import other files as well.
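A minimal sketch of what that could look like (the module and function names here are assumptions, not part of your SDK):

// sdk.js - an AMD module exposing its public API
define(function () {
    return {
        initSdk: function () { console.log("SDK initialized"); }
    };
});

// a.js - declares its dependency on sdk.js explicitly
require(["sdk"], function (sdk) {
    sdk.initSdk();
});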
Additional Resources
This SO Question has far more in-depth answers than what I've outlined above. Best of luck!
You can use
module.exports = {
    a: () => {
        console.log("This is 'a' function");
    },
    b: () => {
        console.log("This is 'b' function");
    }
};
You can then call those functions from the next file by importing the file above:
const file1 = require('that_file_name');
file1.a();
file1.b();
Hope this helps.
Let's start with some reference code
var express = require('express');
var app = express();
var session = require('express-session');
app.use(session({
    store: require('connect-session-knex')()
}));
I have a couple of questions here that I would like answered, in case you know:
Every time require is called in Node.js, is that dependency injection? Or what is the real meaning of dependency injection?
The reason I am asking is that I've been reading about Node, and I see people talking about the module or module.exports pattern, and I am confused: is a module the same as a dependency?
So, all I need is a clear explanation of dependency injection, and when/where you need to inject a dependency...
Dependency injection is somewhat the opposite of normal module design. In normal module design, a module uses require() to load in all the other modules that it needs with the goal of making it simple for the caller to use your module. The caller can just require() in your module and your module will load all the other things it needs.
With dependency injection, rather than the module loading the things it needs, the caller is required to pass in things (usually objects) that the module needs. This can make certain types of testing easier and it can make mocking certain things for testing purposes easier.
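For example, here is a minimal sketch (all names hypothetical) of how injecting a dependency makes mocking easier in a test:

// sender.js - the transport is injected instead of require()d internally
module.exports = function makeSender(transport) {
    return function send(to, msg) {
        return transport.deliver(to, msg);
    };
};

// sender.test.js - inject a fake transport instead of a real one
var makeSender = require('./sender');
var delivered = [];
var send = makeSender({
    deliver: function (to, msg) { delivered.push([to, msg]); }
});
send('a@example.com', 'hi');
console.log(delivered.length); // 1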
Every time require is called in Node.js, is that dependency injection? Or what is the real meaning of dependency injection?
No. When a module does a require() to load its own dependencies, that is not dependency injection.
The reason I am asking is that I've been reading about Node, and I see people talking about the module or module.exports pattern, and I am confused: is a module the same as a dependency?
A module is not the same as a dependency. Normal module design allows you to require() just the module and get back a series of exports that you can use. The module itself handles the loading of its own dependencies (usually using require() internal to the module).
Here are a couple of articles that discuss some of the pros/cons of using dependency injection. As best I can tell the main advantage is to simplify unit testing by allowing dependent objects (like databases) to be more easily mocked.
When to use Dependency Injection
When is it not appropriate to use dependency injection
Why should we use dependency injection
A classic case for using dependency injection is when a module depends upon a database interface. If the module loads its own database, then that module is essentially hard-wired to that particular database. There is no architecture built into the module that lets the caller specify what type of storage should be used.
If, however, the module is set up so that when a caller loads and initializes the module, it must pass in an object that implements a specific database API, then the caller is free to decide what type of database should be used. Any database that meets the contract of the API can be used. But, the burden is on the caller to pick and load a specific database. There can also be hybrid circumstances where a module has a built-in database that will be used by default, but the caller can supply their own object that will be used instead if it is provided in the module constructor or module initialization.
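A minimal sketch of that hybrid arrangement (all names are hypothetical):

// store.js - uses a built-in in-memory database unless one is injected
var defaultDb = {
    data: {},
    put: function (key, value) { this.data[key] = value; },
    get: function (key) { return this.data[key]; }
};

module.exports = function createStore(db) {
    // any injected object implementing put/get meets the contract
    db = db || defaultDb;
    return {
        save: function (key, value) { db.put(key, value); },
        load: function (key) { return db.get(key); }
    };
};

// caller.js - the caller decides which storage to use
var createStore = require('./store');
var store = createStore(); // no argument: fall back to the in-memory default
store.save('answer', 42);
console.log(store.load('answer')); // 42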
Imagine this code.
var someModule = require('pathToSomeModule');
someModule();
Here, we DEPEND not on the name, but on the path of that file. We are also using the SAME file every time.
Let's look at Angular's way (for the client, I know, bear with me):
app.controller('someCtrl', function($scope) {
    $scope.foo = 'bar';
});
I know client side js doesn't have file imports / exports, but it's the underlying concept that you should look at. Nowhere does this controller specify WHAT the $scope variable ACTUALLY is, it just knows that angular is giving it something CALLED $scope.
This is Inversion of Control
It is like saying, Don't call me, I'll call you
Now let's implement our original code with something like a service container (there are many different solutions to this, containers are not the only option)
// The only require statement
var container = require('pathToContainer');
var someModule = container.resolve('someModule');
someModule();
What did we accomplish here? Now we only have to know ONE thing, the container (or whatever abstraction you choose). We have no idea what someModule actually is, or where its source file is, just that it's what we got from the container.

The benefit of this is that, if we want to use a different implementation of someModule, as long as it conforms to the same API as the original, we can replace it in ONE place in our ENTIRE app: the container. Now every module that asks for someModule will get the new implementation. The idea is that when you make a module, you define the API that you use to interact with it. If different implementations all conform to a single API (or you write an adapter that conforms to it), then you can swap out implementations like dirty underwear and your app will just WORK.
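A container can be as small as a registry of factories. A minimal sketch (file and module names are hypothetical):

// container.js - a bare-bones service container
var registry = {};

module.exports = {
    register: function (name, factory) { registry[name] = factory; },
    resolve: function (name) { return registry[name](); }
};

// setup.js - the ONE place that binds names to concrete implementations
var container = require('./container');
container.register('someModule', function () {
    return function () { console.log('concrete someModule'); };
});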
This approach is not for everyone, and some people hate lugging around the container.
My personal opinion: I would rather code to an interface (a consistent API between implementations) than code to a concrete implementation.
An actual example of Dependency Injection in node.js
// In a file, far, far away
module.exports = function(dependencyA, dependencyB) {
    dependencyA();
    dependencyB();
};
// In another file, the `caller`
// This is where the actual, concrete implementation is stored
var depA = someConcreteImplementation;
var depB = someOtherConcreteImplementation;
var someModule = require('pathToSomeModule');
someModule(depA, depB);
The downside to this, is now the caller needs to know what your dependencies are. Some are comforted by this and like it, others believe it's a hassle.
I prefer this next approach, personally.
If you aren't using Babel, or something else that changes your functions behind the scenes, you can use this approach to get angular-style parameter parsing:
http://krasimirtsonev.com/blog/article/Dependency-injection-in-JavaScript
Then you can parse the function you get from require, and not use a container at all.
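A rough sketch of that parameter-parsing idea (not production code; as noted above, it breaks if anything rewrites or minifies your functions):

// Reads the parameter names out of a function's source text
function getParamNames(fn) {
    var src = fn.toString();
    var args = src.slice(src.indexOf('(') + 1, src.indexOf(')'));
    return args.split(',').map(function (s) {
        return s.trim();
    }).filter(Boolean);
}

// Resolves each parameter by name from a registry and calls fn
function inject(fn, registry) {
    var deps = getParamNames(fn).map(function (name) {
        return registry[name];
    });
    return fn.apply(null, deps);
}

inject(function (depA, depB) { depA(); depB(); }, {
    depA: function () { console.log('A'); },
    depB: function () { console.log('B'); }
});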
I use injection-js, a dependency injection library for JavaScript and TypeScript, with ts-node:
npm i -P injection-js
It's based on the Angular 2 injector, so injection is super simple. Here is the TypeScript version from the README.
import 'reflect-metadata';
import { ReflectiveInjector, Injectable, Injector } from 'injection-js';

class Http {}

@Injectable()
class Service {
  constructor(private http: Http) {}
}

@Injectable()
class Service2 {
  constructor(private injector: Injector) {}

  getService(): void {
    console.log(this.injector.get(Service) instanceof Service);
  }

  createChildInjector(): void {
    const childInjector = ReflectiveInjector.resolveAndCreate([
      Service
    ], this.injector);
  }
}

const injector = ReflectiveInjector.resolveAndCreate([
  Service,
  Http
]);

console.log(injector.get(Service) instanceof Service); // true
Is there a convenient approach to code organization which allows one to create a module pattern, but define its internally exposed functionality in separate files?
So, with the module below, can SomeInternalObject2 be defined in another JS file and still be accessible from the first file (assuming proper ordering of script files)?
var MyModule = (function(){
    function SomeInternalObject1(){..}
    function SomeInternalObject2(){..}
    return {
        publicFunction: function(){..}
    };
})();
EDIT: Perhaps I'm thinking about it in a wrong way. Code organization is a development-time concern. Require.js, module-loading, etc.. is a run-time concern.
Have a look at jTable (JavaScript). The author augments the main object from different files. It is not exactly what you are asking for, but the intention or goal seems similar. I might add code later.
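In the meantime, here is a minimal sketch of that loose-augmentation pattern, split across two script files (file names are hypothetical; MyModule is the name from your question):

// file1.js - creates the namespace object if it doesn't exist yet
var MyModule = MyModule || {};
MyModule.publicFunction = function () { /* ... */ };

// file2.js - augments the same namespace object
var MyModule = MyModule || {};
MyModule.SomeInternalObject2 = function () { /* ... */ };

Note that anything shared across files this way ends up as a property on the namespace object, so it is no longer private to a single file's closure.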
In one of my Rails apps I am trying to use Backbone with the rails-backbone gem, and I have created one model using scaffolding, which is working fine.
But I have another model and I am trying to use a different router for it. When I try to instantiate that router from index.html.erb it fires "Uncaught TypeError: undefined is not a function", which would mean there is no such router. But it is there, and the developer tools even show those JS files. I tried all different ways but it didn't work. Thanks in advance.
I'd guess that you're defining your router like this:
class SomeRouter extends Backbone.Router
  # router code goes here
and then you're trying to create one with:
r = new SomeRouter
But CoffeeScript will wrap your files in a function to prevent scope creep:
Although suppressed within this documentation for clarity, all CoffeeScript output is wrapped in an anonymous function: (function(){ ... })(); This safety wrapper, combined with the automatic generation of the var keyword, make it exceedingly difficult to pollute the global namespace by accident.
If you'd like to create top-level variables for other scripts to use, attach them as properties on window, or on the exports object in CommonJS. The existential operator (covered below), gives you a reliable way to figure out where to add them; if you're targeting both CommonJS and the browser: exports ? this
That wrapper will hide SomeRouter inside a function so there will be no SomeRouter visible outside the file which defines it.
A common solution in Rails/Backbone apps is to manage the namespaces yourself. Set up your own namespace somewhere before any other (Java|Coffee)Script will be pulled in:
# AppName is just a placeholder, you'd use something more
# sensible in real life.
window.AppName =
  Routers: {}
  Views: {}
  Models: {}
  Collections: {}
and then define your router as:
class AppName.Routers.SomeRouter extends Backbone.Router
#...
and later:
r = new AppName.Routers.SomeRouter
similarly with models, collections, and views that need to be globally visible.
I have been reading about how the Dojo 1.7 loader uses an AMD API/framework here and here too, and I came across this quote on one of the slides: "AMD(’s) greatest benefit isn’t being able to load scripts on-demand, as some people may think, the greatest benefit is the increase of the code organization/modularity and also the reduced need for globals/namespacing." But my question is: can't global variables already be avoided by using normal JS functions, and maybe dojo.hitch() if you need to access another function's execution context (and another function's 'private' variables)? Put another way, other than asynchronously loading only what you need, what is the benefit of the AMD framework?
The benefits of AMD are the benefits of having a module system, analogous to a namespace system in other languages. In JavaScript, we often faked this with global variables, but modules give a number of specific benefits:
These modules are offered privacy of their top scope, facility for importing singleton objects from other modules, and exporting their own API.
--- From the CommonJS Modules/1.1.1 spec, which started it all.
Key here is the import and export facilities. Previously everyone was doing this ad-hoc, with globals (like window.jQuery, window._, etc.). To get at jQuery's exported functionality, you had to know the magic name, hope nobody conflicted with it, and be sure that the jQuery script was loaded before your script. There was no way of declaratively specifying your dependency on jQuery, and jQuery had no way of saying "this is what I export" apart from just stuffing them onto a global window.jQuery object.
A module format fixes this: each module exports specific functions, e.g.
// math.js
define(function (require, exports, module) {
    exports.add = function (a, b) { return a + b; };
});
and each module can require specific other modules, e.g.
// perimeter.js
define(function (require, exports, module) {
    var math = require("math");
    exports.square = function (side) {
        return math.add(math.add(side, side), math.add(side, side));
    };
});
On why AMD should be the module system of choice: James Burke, the author of RequireJS (an AMD loader much like the one Dojo has), wrote a blog post detailing why he thinks AMD is the best.