Inheriting variables from parent scope in JavaScript modules - javascript

I'm working with Webpack modules to keep things organized in my projects, but something very annoying is having to explicitly specify all the variables that the module might need, instead of just having it search for them in the calling function's scope. Is there a better way to do it? Example:
main.js
import {logMonth} from "./helpers";
document.addEventListener("DOMContentLoaded", () => {
  let month = "September";
  logMonth();
});
helpers.js
export let logMonth = () => {
  console.log(month);
};
This will produce an error since logMonth() doesn't have access to the month variable.
This is an extremely simplified example, but for functions that need many variables, it can get pretty ugly to pass all the required arguments that the function might need.
My question is: Is there a way to make modules have access to the variables of the calling scope instead of explicitly passing them?

You could, but why would you want to? Modules are designed to prevent exactly this. Always prefer pure functions: they are much easier to debug once the app becomes complicated.
You don't want to be searching through multiple nested scopes from multiple modules for a bug; ideally you only need to look in the module that threw the error, not in every scope it has access to.
So const logMonth = month => console.log(month); called as logMonth('September'); is preferred.
You can use an object if you need to send multiple parameters to a function.
That way you do not have to change the function's call signature everywhere; you just add another (optional) property to the object:
const logMonths = ({ year, month, day }) => { /* ...do stuff... */ }
This works both with logMonths({ month: 'september' }) and with logMonths({ month: 'september', year: '2019' }), so you never have to change calls like logMonths('september') into logMonths(null, 'september') or logMonths(2019, 'september') everywhere you used logMonths() before it had a year parameter.
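As a rough sketch of that pattern (the default values below are just assumptions for illustration), destructuring with defaults keeps the call sites short:
const logMonths = ({ year = "2019", month, day } = {}) => {
  // the defaults above are illustrative; only month is supplied by the callers below
  console.log(`year: ${year}, month: ${month}, day: ${day}`);
};

logMonths({ month: "september" });                // falls back to the default year
logMonths({ month: "september", year: "2020" });  // overrides only what it needs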

I actually discovered a sneaky way to do this. I'll demonstrate with two JavaScript files. The first will be the top level file, and the second will be the module.
top-level.js
import {PipeLine} from './PipeLine.js';
const App = {
  var1: 'asdf',
  var2: 'dfgh',
  method1: function() {
    // do something...
  },
};
const pipeLine = new PipeLine(App);
PipeLine.js
export class PipeLine {
  constructor(app) {
    this.app = app;
    // Now do what you will with all the properties and methods of app.
  }
}


What are the consequences of exporting an object of public functions from a module?

What are the consequences of exporting functions from a module like this:
const foo = () => {
  console.log('foo')
}
const bar = () => {
  console.log('bar')
}
const internalFunc = () => {
  console.log('internal function not exported')
}
export const FooBarService = {
  foo,
  bar
}
The only reason I've found is that it prevents module bundlers from performing tree-shaking of the exports.
However, exporting a module this way provides a few nice benefits like easy unit test mocking:
// No need for jest.mock('./module')
// Easy to mock single function from module
FooBarService.foo = jest.fn().mockReturnValue('mock')
Another benefit is that it gives context about where the module is used (simply "find all references" on FooBarService).
A slightly opinionated benefit is that when reading consumer code, you can instantly see where a function comes from, because it is prefixed with "FooBarService.".
You can get a similar effect by using import * as FooBarService from './module', but then the name of the service is not enforced and could differ among consumers.
So for the sake of argument, let's say I am not too concerned about the lack of tree-shaking. All code is used somewhere in the app anyway and we do not do any code-splitting. Why should I not export my modules this way?
Benefits of using individual named exports are (see the sketch after this list):
they're more concise to declare, and let you quickly discover right at their definition whether something is exported or not (in the form export const … = … or export function …(…) { … }).
they give the consumer of the module the choice to import them individually (for concise usage) or in a namespace. Enforcing the use of a namespace is rare (and could be solved with a linter rule).
they are immutable. You mention the benefit of mutable objects (easier mocking), but at the same time this makes it easier to accidentally (or deliberately) overwrite them, which causes hard-to-find bugs or at least makes reasoning harder.
they are proper aliases, with hoisting across module boundaries. This is useful in certain circular-dependency scenarios (which should be few, but still). It also allows "find all references" on individual exports.
(you already mentioned tree shaking, which relies in part on the above properties)
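For comparison, a small sketch of the named-export style and the two ways it can be consumed (the file names here are made up):
// foo-bar.js (hypothetical file name)
export const foo = () => console.log('foo')
export const bar = () => console.log('bar')

// consumer-a.js: import only what you use
import { foo } from './foo-bar.js'
foo()

// consumer-b.js: import the whole namespace, which reads much like FooBarService
import * as FooBarService from './foo-bar.js'
FooBarService.bar()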
Here are a couple of other benefits to multiple named exports (besides tree-shaking):
Circular dependencies are only possible when exports happen throughout the module. When all exporting happens at the end, you can't have circular dependencies (which should be avoided anyway).
Sometimes it's nice to import only specific values from a module instead of the entire module at once. This is especially true when importing constants, exception classes, etc. Sometimes it can be nice to extract just a couple of functions too, especially when those functions get used often. Multiple named exports make this a little easier to do.
It's a more common way to do things, so if you're making an external API for others to consume, I would just stick with multiple named exports.
There may be other reasons - I can't think of anything else, but other answers might mention some.
With all of that being said, you'll notice that none of these arguments are very strong. So, if you find value in exporting an object literally everywhere, go for it!
One potential issue is that it'll expose everything, even if some functions are intended for use only inside the module, and nowhere else. For example, if you have
// service.js
const foo = () => {
  console.log('foo')
}
const bar = () => {
  console.log('bar')
}
const serviceInternalFn = () => {
  // do stuff
}
export const FooBarService = {
  foo,
  bar,
  serviceInternalFn,
}
then consumers will see that serviceInternalFn is exported, and may well try to use it, even if you aren't intending them to.
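For contrast, a sketch of the same module using individual named exports, where the internal helper simply isn't exported and therefore never shows up for consumers:
// service.js (named-export variant)
export const foo = () => {
  console.log('foo')
}

export const bar = () => {
  console.log('bar')
}

// not exported: stays private to the module
const serviceInternalFn = () => {
  // do stuff
}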
For somewhat similar reasons, when writing a library without modules, you usually don't want to do something like
<script>
  const doSomething = () => {
    // ...
  };
  const doSomethingInternal = () => {
    // ...
  };
  window.myLibrary = {
    doSomething
  };
</script>
because then anything will be able to use doSomethingInternal, which may not be intended by the script-writer and might cause bugs or errors.
Rather, you'd want to deliberately expose only what is intended to be public:
<script>
  window.myLibrary = (() => {
    const doSomething = () => {
      // ...
    };
    const doSomethingInternal = () => {
      // ...
    };
    return {
      doSomething
    };
  })();
</script>

Where does Inversion of Control arise during this example of Dependency Injection?

I am totally confused. There are many sources out there contradicting each other about the definitions of Dependency Injection and Inversion of Control. Here is the gist of my understanding, without the many additional details that in most cases only made things more convoluted for me: Dependency Injection means that instead of my function conjuring up the required dependencies, it is given the dependency as a parameter. Inversion of Control means that, for instance when you use a framework, it is the framework that calls your userland code, and the control is inverted because in the 'default' case your code would be calling specific implementations in a library.
Now, as I understand it, because my function no longer conjures up its dependencies but gets them as arguments, Inversion of Control magically happens when I use dependency injection like below.
So here is a silly example I wrote for myself to wrap my head around the idea:
getTime.js
function getTime(hour) {
  return `${hour} UTC`
}
module.exports.getTime = getTime
welcome.js
function welcomeUser(name, hour) {
  const getTime = require('./getTime').getTime
  const time = getTime(`${hour} pm`)
  return `Hello ${name}, the time is ${time}!`
}
const result = welcomeUser('Joe', '11:00')
console.log(result)
module.exports.welcomeUser = welcomeUser
welcome.test.js
const expect = require('chai').expect
const welcomeUser = require('./welcome').welcomeUser
describe('Welcome', () => {
  it('Should welcome user', () => {
    // But we would want to test how welcomeUser calls the getTime function
    expect(welcomeUser('Joe', '10:00')).to.equal('Hello Joe, the time is 10:00 pm UTC!')
  })
})
The problem now is that the call to the getTime function is hard-wired inside welcome.js, and it cannot be intercepted by a test. What we would like to do is test how the getTime function is called, and we can't do that this way.
The other problem is that the getTime function is pretty much hard-coded, so we can't mock it. Mocking would be useful because we only want to test the welcomeUser function on its own, as that is the point of a unit test (the getTime function could, for instance, still be being implemented at the same time).
So the main problem is that the code is tightly coupled, which makes it harder to test and just wreaks havoc all around the place. Now let's use dependency injection:
getTime.js
function getTime(hour) {
  return `${hour} UTC`
}
module.exports.getTime = getTime
welcome.js
const getTime = require('./getTime').getTime
function welcomeUser(name, hour, dependency) {
  const time = dependency(hour)
  return `Hello ${name}, the time is ${time}!`
}
const result = welcomeUser('Joe', '10:00', getTime)
console.log(result)
module.exports.welcomeUser = welcomeUser
welcome.test.js
const expect = require('chai').expect
const welcomeUser = require('./welcome').welcomeUser
describe('welcomeUser', () => {
  it('should call getTime with the right hour value', () => {
    const fakeGetTime = function(hour) {
      expect(hour).to.equal('10:00')
    }
    // 'Joe' as an argument isn't even necessary, but it's nice to leave it there
    welcomeUser('Joe', '10:00', fakeGetTime)
  })
  it('should log the correct message to the user', () => {
    // Let's stub the getTime function
    const fakeGetTime = function(hour) {
      return `${hour} pm UTC`
    }
    expect(welcomeUser('Joe', '10:00', fakeGetTime)).to.equal('Hello Joe, the time is 10:00 pm UTC!')
  })
})
As I understand it, what I did above was Dependency Injection. Multiple sources claim that Dependency Injection is not possible without Inversion of Control. But where does Inversion of Control come into the picture?
Also, what about the regular JavaScript workflow, where you just import the dependencies globally and use them later in your functions, instead of require-ing them inside the functions or passing them in as parameters?
Check Martin Fowler's article on IoC and DI. https://martinfowler.com/articles/injection.html
IoC: a very generic term. This inversion can happen in many ways.
DI: can be viewed as one branch of the generic idea of IoC.
So when your code specifically implements DI, one would say your code has the general idea of IoC in the flavor of DI. What is really inverted is the default way of obtaining behavior: the default way is writing it within the method, while the inverted way is having the behavior injected from outside.
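A compressed sketch of that inversion, reusing the welcomeUser example from the question (the wiring shown is only illustrative):
// Default: the function itself decides where the behavior comes from.
function welcomeUserDefault(name, hour) {
  const getTime = require('./getTime').getTime  // control stays inside welcomeUser
  return `Hello ${name}, the time is ${getTime(hour)}!`
}

// Inverted: the caller (a test, a composition root, a framework) decides and injects it.
function welcomeUserInjected(name, hour, getTime) {
  return `Hello ${name}, the time is ${getTime(hour)}!`
}

const getTime = require('./getTime').getTime
console.log(welcomeUserInjected('Joe', '10:00', getTime))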

JavaScript: Passing variables from parent function to child function

I want to pass many variables to child functions, and those passed variables should also be passed on to the next child functions. Passing them each time as parameters is a tedious job. I was wondering what would be a better way to do this in JS. I tried using the this keyword, but I'm not sure if that is the correct way to achieve this.
var scriptConfig = {
  availableOptions: {
    version: "1",
    type: "one",
    status: "free",
  },
  availableCategories: {
    navbar: true,
    hasCoupon: true
  }
};
window.addEventListener('load', function() {
  let options = scriptConfig.availableOptions;
  let version = options.version;
  renderDashboard(options, version);
});
function renderDashboard(options, version) {
  createNavbar(options, version);
}
function createNavbar(options, version) {
  console.log(options);
  console.log(version);
}
Using this
var scriptConfig = {
  availableOptions: {
    version: "1",
    type: "one",
    status: "free",
  },
  availableCategories: {
    navbar: true,
    hasCoupon: true
  }
};
window.addEventListener('load', function() {
  this.options = scriptConfig.availableOptions;
  this.version = options.version;
  this.renderDashboard();
});
function renderDashboard() {
  this.createNavbar();
}
function createNavbar() {
  console.log(this.options);
  console.log(this.version);
}
Can anyone suggest which would be the better way? Also, it would be great if anyone could suggest better coding practices for the above lines of code.
Using this in the global scope (outside a class) is not a good idea as it pollutes the Window object. I suggest you create a class and wrap everything like this:
class Dashboard {
  constructor(config) {
    this.options = config.availableOptions;
    this.version = this.options.version;
  }
  renderDashboard() {
    this.createNavbar();
  }
  createNavbar() {
    console.log(this.options, this.version);
  }
}
var scriptConfig = {
  availableOptions: {
    version: "1",
    type: "one",
    status: "free",
  },
  availableCategories: {
    navbar: true,
    hasCoupon: true
  }
};
window.addEventListener('load', function () {
  const dashboard = new Dashboard(scriptConfig);
  dashboard.renderDashboard();
});
There is also the apply() function in JS.
It lets you specify what this is when calling a function and provide the arguments as an array.
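A minimal sketch of apply() using the createNavbar example from the question (the context object here is made up):
function createNavbar() {
  // `this` is whatever object was passed as the first argument to apply()
  console.log(this.options);
  console.log(this.version);
}

// hypothetical context object assembled by the parent function
const context = { options: { type: "one" }, version: "1" };

// first argument sets `this`, the second is an array of positional arguments
createNavbar.apply(context, []);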
Well, if you are constantly passing a variable from one function to its child functions, or in other terms from one UI component to its child UI components, what you are doing is sharing context between these entities (functions, components, or call them whatever you want depending on the type of their output).
Global variables should be avoided; however, in certain use cases they are the only valid solution. One use case for global variables is contexts. Some people differentiate between context and global state, in that a context is read-only while global state is mutable; for managing global state, JavaScript has many libraries that handle this, like Redux.
Assuming that you are using an immutable context, the best option is a global variable. Obviously you should hide that global variable behind a getter function to also make the code more declarative:
// Assuming you are using JavaScript modules; if not, just remove the export keyword.
// Obviously, add the needed properties to the object.
const globalContext = Object.freeze({}) // freeze the object to prevent mutations to it
export const getGlobalContext = () => {
  // Hiding the global variable behind a getter offers two advantages: it better conveys
  // the message that this object should not be mutated by code, and it later enables more
  // customization of context sharing, for example if you decide to nest contexts. If your
  // context gets complicated, though, doing that through a library is the better option.
  return globalContext;
}
Finally, don't use the this keyword outside of an instance/class context; it produces poor code that is hard to read and more prone to errors. It's one of the reasons ES6 added the syntactic sugar class: so that code that uses this only uses it there and not in normal functions (though it is still valid syntax).

Mediate and share data between different modules

I am just trying to get my head around event driven JS, so please bear with me. There are different kinds of modules within my app. Some just encapsulate data, others manage a part of the DOM. Some modules depend on others, sometimes one module depends on the state of multiple other modules, but I don't want them to communicate directly or pass one module to the other just for easy access.
I tried to create the simplest scenario possible to illustrate my problem (the actual modules are much more complex of course):
I have a dataModule that just exposes some data:
var dataModule = { data: 3 };
There is a configModule that exposes modifiers for displaying that data:
var configModule = { factor: 2 };
Finally there is a displayModule that combines and renders the data from the two other modules:
var displayModule = {
  display: function(data, factor) {
    console.log(data * factor);
  }
};
I also have a simple implementation of pub-sub, so I could just mediate between the modules like this:
pubsub.subscribe("init", function() {
displayModule.display(dataModule.data, configModule.factor);
});
pubsub.publish("init"); // output: 6
However this way I seem to end up with a mediator that has to know all of the module-instances explicitly - is there even a way to avoid that? Also I don't know how this would work if there are multiple instances of these modules. What is the best way to avoid global instance-variables? I guess my question is what would be the most flexible way to manage something like that? Am I on the right track, or is this completely wrong? Sorry for not being very precise with my question, I just need someone to push me in the right direction.
You are on the right track, I'll try to give you that extra push you're talking about:
If you want loose coupling, pub-sub is a good way to go.
But, you don't really need that "mediator", each module should ideally be autonomous and encapsulate its own logic.
This is done in the following way: each module depends on the pubsub service, subscribes to all relevant events and acts upon them. Each module also publishes events which might be relevant to others (code samples in a minute, bear with me).
I think the bit you might be missing here is that modules which use events will hardly ever be just plain models. They will have some logic in them and can also hold a model (which they update when receiving events).
So instead of a dataModule you are more likely to have a dataLoaderModule, which will publish the data model (e.g. {data: 3}) once it finishes loading.
Another great requirement you set is sharing data while avoiding global instance variables - this is a very important concept and also a step in the right direction. What you're missing in your solution is Dependency Injection, or at least a module system that allows defining dependencies.
You see, having an event-driven application doesn't necessarily mean that every piece of the code should communicate using events. An application configuration model or a utility service is definitely something I would inject (when using DI, like in Angular), require (when using AMD/CommonJS) or import (when using ES6 modules), rather than communicate with using events.
In your example it's unclear whether configModule is a static app configuration or some knob I can tweak from the UI. If it's a static app config - I would inject it.
Now, let's see some examples:
Assuming the following:
Instead of a dataModule we have a dataLoaderModule
configModule is a static configuration model.
We are using AMD modules (and not ES6 modules, which I prefer), since I see you stuck to using only ES5 features (I see no classes or consts).
We would have:
data-loader.js (aka dataLoaderModule)
define(['pubsub'], function (pubsub) {
  // ... load data using some logic...
  // and publish it
  pubsub.publish('data-loaded', {data: 3});
});
configuration.js (aka configModule)
define([], function () {
  return {factor: 2};
});
display.js (aka displayModule)
define(['configuration', 'pubsub'], function (configuration, pubsub) {
  var displayModule = {
    display: function (data, factor) {
      console.log(data * factor);
    }
  };
  pubsub.subscribe('data-loaded', function (payload) {
    displayModule.display(payload.data, configuration.factor);
  });
});
That's it.
You will notice that we have no global variables here (not even pubsub), instead we are requiring (or injecting) our dependencies.
Here you might be asking: "and what if I meant for my config to change from the UI?", so let's see that too:
In this case, I would rather rename configModule to settingsDisplayModule (following your naming convention).
Also, in a more realistic app, UI modules will usually hold a model, so let's do that too.
And let's also call them "views" instead of "displayModules", so we will have:
data-loader.js (aka dataLoaderModule)
define(['pubsub'], function (pubsub) {
  // ... load data using some logic...
  // and publish it
  pubsub.publish('data-loaded', {data: 3});
});
settings-view.js (aka settingsDisplayModule, aka config)
define(['pubsub'], function (pubsub) {
  var settingsModel = {factor: 2};
  var settingsView = {
    display: function () {
      console.log(settingsModel);
      // and when settings (aka config) changes due to user interaction,
      // we publish the new settings ...
      pubsub.publish('setting-changed', settingsModel);
    }
  };
});
data-view.js (aka displayModule)
define(['pubsub'], function (pubsub) {
  var model = {
    data: null,
    factor: 0
  };
  var view = {
    display: function () {
      if (model.data && model.factor) {
        console.log(model.data * model.factor);
      } else {
        // whatever you do/show when you don't have data
      }
    }
  };
  pubsub.subscribe('data-loaded', function (payload) {
    model.data = payload.data;
    view.display();
  });
  pubsub.subscribe('setting-changed', function (settings) {
    model.factor = settings.factor;
    view.display();
  });
});
And that's it.
Hope it helps :)
If not - comment!
You do not need a mediator. Just import data, config, and display and call display(data, config) where you need to.
// module paths are assumed here; adjust them to wherever data, config and display live
import { data } from './dataModule.js'
import { config } from './configModule.js'
import { display } from './displayModule.js'

function render() {
  display(data, config)
}

Dependency Injection with RequireJS

How much can I stretch RequireJS to provide dependency injection for my app? As an example, let's say I have a model that I want to be a singleton. Not a singleton in a self-enforcing getInstance()-type singleton, but a context-enforced singleton (one instance per "context"). I'd like to do something like...
require(['mymodel'], function(mymodel) {
  // ...
});
And have mymodel be an instance of the MyModel class. If I were to do this in multiple modules, I would want mymodel to be the same, shared instance.
I have successfully made this work by making the mymodel module like this:
define(function() {
  var MyModel = function() {
    this.value = 10;
  }
  return new MyModel();
});
Is this type of usage expected and common or am I abusing RequireJS? Is there a more appropriate way I can perform dependency injection with RequireJS? Thanks for your help. Still trying to grasp this.
This is not actually dependency injection, but instead service location: your other modules request a "class" by a string "key," and get back an instance of it that the "service locator" (in this case RequireJS) has been wired to provide for them.
Dependency injection would involve returning the MyModel constructor, i.e. return MyModel, then in a central composition root injecting an instance of MyModel into other instances. I've put together a sample of how this works here: https://gist.github.com/1274607 (also quoted below)
This way the composition root determines whether to hand out a single instance of MyModel (i.e. make it singleton scoped) or new ones for each class that requires it (instance scoped), or something in between. That logic belongs neither in the definition of MyModel, nor in the classes that ask for an instance of it.
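A rough illustration of that choice inside a composition root (the consumer modules here are hypothetical; MyModel is assumed to export its constructor, as in the gist below):
// composition root: it alone decides the scope of MyModel
define(function (require) {
  var MyModel = require("./MyModel"); // returns the constructor, not an instance
  var ViewA = require("./ViewA");     // hypothetical consumers
  var ViewB = require("./ViewB");

  // singleton scope: one shared instance handed to every consumer
  var shared = new MyModel();
  var viewA = new ViewA(shared);
  var viewB = new ViewB(shared);

  // instance scope: each consumer gets its own fresh MyModel
  var viewC = new ViewB(new MyModel());
});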
(Side note: although I haven't used it, wire.js is a full-fledged dependency injection container for JavaScript that looks pretty cool.)
You are not necessarily abusing RequireJS by using it as you do, although what you are doing seems a bit roundabout, i.e. declaring a class then returning a new instance of it. Why not just do the following?
define(function () {
  var value = 10;
  return {
    doStuff: function () {
      alert(value);
    }
  };
});
The analogy you might be missing is that modules are equivalent to "namespaces" in most other languages, albeit namespaces you can attach functions and values to. (So more like Python than Java or C#.) They are not equivalent to classes, although as you have shown you can make a module's exports equal to those of a given class instance.
So you can create singletons by attaching functions and values directly to the module, but this is kind of like creating a singleton by using a static class: it is highly inflexible and generally not best practice. However, most people do treat their modules as "static classes," because properly architecting a system for dependency injection requires a lot of thought from the outset that is not really the norm in JavaScript.
Here's https://gist.github.com/1274607 inline:
// EntryPoint.js
define(function () {
  return function EntryPoint(model1, model2) {
    // stuff
  };
});

// Model1.js
define(function () {
  return function Model1() {
    // stuff
  };
});

// Model2.js
define(function () {
  return function Model2(helper) {
    // stuff
  };
});

// Helper.js
define(function () {
  return function Helper() {
    // stuff
  };
});

// composition root, probably your main module
define(function (require) {
  var EntryPoint = require("./EntryPoint");
  var Model1 = require("./Model1");
  var Model2 = require("./Model2");
  var Helper = require("./Helper");

  var entryPoint = new EntryPoint(new Model1(), new Model2(new Helper()));
  entryPoint.start();
});
If you're serious about DI / IOC, you might be interested in wire.js: https://github.com/cujojs/wire
We use a combination of service location (like Domenic describes, but using curl.js instead of RequireJS) and DI (using wire.js). Service location comes in very handy when using mock objects in test harnesses. DI seems the best choice for most other use cases.
Not a singleton in a self-enforcing getInstance()-type singleton, but
a context-enforced singleton (one instance per "context").
I would recommend it only for static objects. It's perfectly fine to have a static object as a module that you load using the require/define blocks. You then create a class with only static properties and functions. You then have the equivalent of the Math object, which has constants like PI, E, SQRT2 and functions like round(), random(), max(), min(). Great for creating utility classes that can be injected at any time.
Instead of this:
define(function() {
  var MyModel = function() {
    this.value = 10;
  }
  return new MyModel();
});
Which creates an instance, use the pattern for a static object (one where the values are always the same, as the object never gets instantiated):
define(function() {
  return {
    value: 10
  };
});
or
define(function() {
  var CONSTANT = 10;
  return {
    value: CONSTANT
  };
});
If you want to pass an instance (the result of using a module that does return new MyModel();), then, within an initialize function, pass a variable that captures the current state / context, or pass an object that contains the information about the state / context that your module needs to know about.
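A loose sketch of that initialize-style approach (the module name and the shape of the context object are made up):
// my-model.js: the module still returns a shared instance
define(function () {
  var MyModel = function () {
    this.value = 10;
    this.context = null;
  };
  MyModel.prototype.initialize = function (context) {
    // capture the caller's state / context instead of reaching out for it
    this.context = context;
  };
  return new MyModel();
});

// some consumer module
define(['my-model'], function (myModel) {
  // hypothetical object describing the current state / context
  myModel.initialize({ user: 'Joe', locale: 'en' });
});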
