Apologies if this question is too open-ended.
I have a big "helper" file filled with useful functions. They're all exported functions, so
exports.getFirstNameLastName = nameString => { ... }
This file is getting to be pretty big and has enough functions in it that I feel that it can be divided up into smaller, categorized files (e.g. parsingHelper.js, webHelper.js, etc).
I'm coming from a heavily object oriented software dev background. In something like C# I'd create a static class file for each of these categories (parsing, web, etc), then simply do the one import (using ParserHelpers;), then do ParserHelpers.GetFirstNameLastName(...); instead of importing each and every function I end up using.
My question is, is there a way to organize all my helper functions in a similar manner? I'm trying to reduce the number of individually exported/imported items and trying to split this big file into smaller files.
Would rather not use additional packages if I don't have to (using ES6).
Yes! There is a way to do something like that!
// file users.js
export const users = {
  getFirstName: () => {},
  getLastName: () => {},
  // Add as many as you want
}

// file categories.js
export const categories = {
  getCategoryById: () => {},
  getCategoryName: () => {}
}

// You can use them separately, or create another file that unions them all:
// file helpers.js
import { users } from './users'
import { categories } from './categories'

export const helpers = {
  users,
  categories
}

// This way you can do something like:
import { helpers } from './helpers'
helpers.users.getFirstName()
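A variant worth knowing, assuming your tooling supports ES2020's export * as syntax: a barrel file can re-export each category file as a namespace, so the category files themselves can use plain named exports instead of a wrapper object:

// file users.js: plain named exports instead of one wrapper object
export const getFirstName = () => {}
export const getLastName = () => {}

// file helpers.js: re-export each category file under a namespace (ES2020)
export * as users from './users'
export * as categories from './categories'

// consumer: same call shape as above
import * as helpers from './helpers'
helpers.users.getFirstName()

The call sites look identical, but each category file keeps individual named exports, which bundlers can tree-shake.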
What are the consequences of exporting functions from a module like this:
const foo = () => {
console.log('foo')
}
const bar = () => {
console.log('bar')
}
const internalFunc = () => {
console.log('internal function not exported')
}
export const FooBarService = {
foo,
bar
}
The only reason I've found is that it prevents module bundlers from performing tree-shaking of the exports.
However, exporting a module this way provides a few nice benefits like easy unit test mocking:
// No need for jest.mock('./module')
// Easy to mock single function from module
FooBarService.foo = jest.fn().mockReturnValue('mock')
Another benefit is that it gives context about where the module is used (simply "find all references" on FooBarService).
A slightly opinionated benefit is that when reading consumer code, you can instantly see where the function comes from, because every call is prefixed with FooBarService..
You can get similar effect by using import * as FooBarService from './module', but then the name of the service is not enforced, and could differ among the consumers.
So for the sake of argument, let's say I am not too concerned with the lack of tree-shaking. All code is used somewhere in the app anyway and we do not do any code-splitting. Why should I not export my modules this way?
Benefits of using individual named exports are:
they're more concise to declare, and let you quickly discover right at their definition whether something is exported or not (in the form export const … = … or export function …(…) { … }).
they give the consumer of the module the choice to import them individually (for concise usage) or in a namespace. Enforcing the use of a namespace is rare (and could be solved with a linter rule).
they are immutable. You mention the benefit of mutable objects (easier mocking), but at the same time this makes it easier to accidentally (or deliberately) overwrite them, which causes hard-to-find bugs, or at least makes reasoning harder.
they are proper aliases, with hoisting across module boundaries. This is useful in certain circular-dependency scenarios (which should be few, but still). Also it allows "find all references" on individual exports.
(you already mentioned tree shaking, which kinda relies on the above properties)
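For illustration, a minimal sketch of the named-export style described above, showing both consumption choices (file and function names are made up):

// fooBarService.js: individual named exports
export const foo = () => {
  console.log('foo')
}
export const bar = () => {
  console.log('bar')
}

// consumer A: import individually for concise usage
import { foo } from './fooBarService'
foo()

// consumer B: import as a namespace to keep the prefix
import * as FooBarService from './fooBarService'
FooBarService.foo()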
Here are a couple of other benefits to multiple named exports (besides tree-shaking):
Circular dependencies are only possible when exports happen throughout the module. When all exporting happens at the end, you can't have circular dependencies (which should be avoided anyway).
Sometimes it's nice to only import specific values from a module, instead of the entire module at once. This is especially true when importing constants, exception classes, etc. Sometimes it can be nice to just extract a couple of functions too, especially when those functions get used often. Multiple named exports make this a little easier to do.
It's a more common way to do things, so if you're making an external API for others to consume, I would just stick with multiple named exports.
There may be other reasons - I can't think of anything else, but other answers might mention some.
With all of that being said, you'll notice that none of these arguments are very strong. So, if you find value in exporting an object literal everywhere, go for it!
One potential issue is that it'll expose everything, even if some functions are intended for use only inside the module, and nowhere else. For example, if you have
// service.js
const foo = () => {
console.log('foo')
}
const bar = () => {
console.log('bar')
}
const serviceInternalFn = () => {
// do stuff
}
export const FooBarService = {
foo,
bar,
serviceInternalFn,
}
then consumers will see that serviceInternalFn is exported, and may well try to use it, even if you aren't intending them to.
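The straightforward fix is to leave internal helpers out of the exported object:

// service.js: only expose what is meant to be public
export const FooBarService = {
  foo,
  bar,
  // serviceInternalFn deliberately not included
}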
For somewhat similar reasons, when writing a library without modules, you usually don't want to do something like
<script>
const doSomething = () => {
// ...
};
const doSomethingInternal = () => {
// ...
};
window.myLibrary = {
doSomething
};
</script>
because then anything will be able to use doSomethingInternal, which may not be intended by the script-writer and might cause bugs or errors.
Rather, you'd want to deliberately expose only what is intended to be public:
<script>
window.myLibrary = (() => {
const doSomething = () => {
// ...
};
const doSomethingInternal = () => {
// ...
};
return {
doSomething
};
})();
</script>
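With ES modules, the same encapsulation falls out naturally, since anything you don't export is invisible to consumers:

// myLibrary.js: non-exported bindings stay private to the module
const doSomethingInternal = () => {
  // ...
};

export const doSomething = () => {
  doSomethingInternal();
  // ...
};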
I'm working on a custom i18n module and would love to replace this code (this is an "about-us" page):
const messages = (await import(`./about-us.${locale}.json`))
.default as Messages;
By
const messages = (
await import(`./${__filename.replace('.tsx', `.${locale}.json`)}`)
).default as Messages;
Unfortunately __filename resolves to /index.js (I guess because of Webpack?) - is there any way to achieve what I am trying to do in my example, or would this need to be built into Next.js directly to work?
Refactor this so consumers don't know about the filesystem
Spoiler: I'm not going to tell you how to access __filename with Next.js; I don't know anything about that.
Here's a pattern that is better than what you propose, and which evades the problem entirely.
First, setup: it sounds like you've got a folder filled with these JSON files. I imagine this:
l10n/
about-us.en-US.json
about-us.fr-FR.json
contact-us.en-US.json
contact-us.fr-FR.json
... <package>.<locale>.json
That file organization is nice, but it's a mistake to make every would-be l10n consumer know about it.
What if you change the naming scheme later? Are you going to hand-edit every file that imports localized text? Why would you treat Future-You so poorly?
If a particular locale file doesn't exist, would you prefer the app crash, or just fall back to some other language?1
It's better to create a function that takes packageName and localeCode as arguments, and returns the desired content. That function then becomes the only part of the app that has to know about filenames, fallback logic, etc.
// l10n/index.js
import fs from 'fs'
import path from 'path'

export default function getLang (packageName, localeCode) {
  // assumes CommonJS-style __dirname (or a bundler that provides it)
  const contentPath = path.join(__dirname, `${packageName}.${localeCode}.json`)
  // TODO: fallback logic
  return JSON.parse(fs.readFileSync(contentPath, 'utf8'))
}
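To make that TODO concrete, here is one possible sketch of fallback logic; the candidate order and the defaultLocale are assumptions for illustration, not a prescription:

// l10n/fallback.js: hypothetical locale-fallback resolution
import fs from 'fs'
import path from 'path'

export function resolveContentPath (packageName, localeCode, defaultLocale = 'en-US') {
  const candidates = [
    localeCode,               // exact match, e.g. 'fr-FR'
    localeCode.split('-')[0], // base language, e.g. 'fr', if such a file exists
    defaultLocale             // last resort
  ]
  for (const code of candidates) {
    const contentPath = path.join(__dirname, `${packageName}.${code}.json`)
    if (fs.existsSync(contentPath)) return contentPath
  }
  throw new Error(`No localized content found for ${packageName}`)
}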
It is a complex job to locate and read the desired data while also ensuring that no request ever gets an empty payload and that every text key resolves to a value. Dynamic import plus a sane filesystem layout is a good start (:applause:), but that combination is not nearly robust enough on its own.
At a previous job, we built an entire microservice just to do this one thing. (We also built a separate service for obtaining translations, and a few private npm packages to allow webapps to request and use language packs from our CMS.) You don't have to take it that far, but it hopefully illustrates that the problem space is not tiny.
1 Fallback logic: e.g. en-GB & en-US are usually interchangeable; some clusters of Romance languages might be acceptable in an emergency (Spanish/Portuguese/Brazilian come to mind); also Germanic languages, etc. What works and doesn't depends on the content and context, but no version of fallback will fit into a dynamic import.
You can access __filename in getStaticProps and getServerSideProps if that helps?
I pass __filename to a function that needs it (which has a local API fetch in it), before returning the results to the render.
export async function getStaticProps(context) {
return {
props: {
html: await getData({ page: __filename })
}, // will be passed to the page component as props
};
}
After a long search, the solution was to write a useMessages hook that would be "injected" with the correct strings using a custom Babel plugin.
A Webpack loader didn't seem like the right option as the loader only has access to the content of the file it loads. By using Babel, we have a lot more options to inject code into the final compiled version.
Let's say that I have two files named dbPerson.js and dbCar.js both with their own methods like:
dbPerson.js:
module.exports = {
getAll(callback){}
}
dbCar.js:
module.exports = {
getOne(callback){}
}
And then I have another file named db.js that would import those files:
const dbPerson=require('./dbPerson');
const dbCar=require('./dbCar');
And ultimately I have a main file which imports db.js:
const db=require('./db');
I want to be able to call the methods as db.getAll() and db.getOne(), but this way I would need to use db.dbPerson.getAll() and db.dbCar.getOne(). Is there a way to avoid that?
UPDATE: I found that the answer below helped me, but thanks to everyone who tried to understand what I meant; next time I'll try to be clearer!
There are a few ways to do it. Which path you take should depend on your requirements.
First, let's assume you want to merge these two objects (dbCar and dbPerson). You can do that simply with object spread syntax.
// db.js
const dbPerson = require('./dbPerson');
const dbCar = require('./dbCar');

module.exports = {
  ...dbPerson,
  ...dbCar,
}
Note that this way, conflicting properties of dbPerson will be overridden by the properties of dbCar. And given the naming, it does not make much sense to merge these two objects (what would you call something that is both a Person and a Car?).
Another approach is to use a proxy object to forward the calls.
// db.js
const dbPerson = require('./dbPerson');
const dbCar = require('./dbCar');

module.exports = {
  getAll: dbPerson.getAll,
  getOne: dbCar.getOne,
}
You would need to export those functions from your db.js file too:
//db.js
const dbPerson=require('./dbPerson');
const dbCar=require('./dbCar');
module.exports = {
getAll: dbPerson.getAll,
getOne: dbCar.getOne
}
You say:
I want to be able to call the methods as db.getAll() and db.getOne(), but this way I would need to use db.dbPerson.getAll() and db.dbCar.getOne(). Is there a way to avoid that?
Is there a reason that this would be a problem?
As far as I can see - you are talking about two different things, cars and people.
Doing db.dbCar.getOne() makes sense to me; if I did db.getOne(), I wouldn't be sure whether I'm getting one car or one person.
You could just rename the methods in your export, like:
module.exports = {
  getAllPersons: dbPerson.getAll,
  getOneCar: dbCar.getOne,
}
I want to create a really basic CRUD (sort-of) example app, to see how things work.
I want to store items (items of a shopping-list) in an array, using functions defined in my listService.js such as addItem(item), getAllItems() and so on.
My problem arises when using the same module (listService.js) in different files: each one creates its own copy of the array the data is stored in, and I want it to behave like a single static "global" (but not a global variable) array.
listService.js looks like this:
const items = [];
function addItem (item) {
items.push(item);
}
function getItems () {
return items;
}
module.exports = {addItem, getItems};
and I want to use it in both mainWindowScript.js and addWindowScript.js: in addWindowScript.js to add elements to the array, and in mainWindowScript.js to get the elements and put them in a table. (I will later implement the Observer pattern to handle updating the table when needed.)
addWindowScript.js looks something like this:
const electron = require('electron');
const {ipcRenderer} = electron;
const service = require('../../service/listService.js');
const form = document.querySelector('form');
form.addEventListener('submit', submitForm);
function submitForm(e) {
e.preventDefault();
const item = document.querySelector("#item").value;
service.addItem(item);
console.log(service.getItems());
// This correctly prints all the items I add
// ...
}
and mainWindowScript.js like this:
const electron = require('electron');
const service = require('../../service/listService.js');
const buttonShowAll = document.querySelector("#showAllBtn")
buttonShowAll.addEventListener("click", () => {
console.log(service.getItems());
// This just shows an empty array, after I add the items in the add window
});
In Java or C#, or C++ or whatever, I would just create a class for each of those, and in main I'd create an instance of the service and pass a reference to the windows. How can I do something similar here?
When I first wrote the example (from a YouTube video), I handled this by sending messages through the ipcRenderer to the main module and then forwarding them to the other window, but I don't want to deal with this every time there's a signal from one window to another.
ipcRenderer.send('item:add', item);
and in main
ipcMain.on('item:add', (event, item) => {
mainWindow.webContents.send('item:add', item);
})
So, to sum up, I want to do something like: require the module, use the functions wherever needed, and have only one instance of the object.
require the module, use the functions wherever needed, and have only one instance of the object.
TL;DR: no, that isn't possible.
Long version: Electron is multi-process by nature. Code running in the main process (the Node.js side) and in a renderer (the Chromium browser) runs in separate OS processes. So even if you require the same module file, the object it creates lives in each process's own memory. There is no way to share an object between processes except by synchronizing it via IPC communication. There are a couple of handy synchronization modules out there, or you could write your own module to do that job, like:
// module.js
if (process.type === 'browser') {
  // main process: set up the object;
  // listen for changes from renderers, update the object,
  // then broadcast it back to the renderers
} else {
  // renderer process: send changes to main;
  // listen for changes coming back from main
}
but in either case you can't get away from IPC.
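As a rough sketch of that idea, the canonical array can live in the main process, with every change broadcast back out (channel names are made up, and ipcMain.handle / ipcRenderer.invoke require Electron 7 or later):

// main process: hold the one true items array
const { ipcMain, BrowserWindow } = require('electron')

const items = []

ipcMain.on('item:add', (event, item) => {
  items.push(item)
  // broadcast the new state to every open window
  BrowserWindow.getAllWindows().forEach((win) => {
    win.webContents.send('items:updated', items)
  })
})

ipcMain.handle('items:get', () => items)

// renderer process (e.g. mainWindowScript.js)
const { ipcRenderer } = require('electron')

ipcRenderer.send('item:add', 'bread')
ipcRenderer.on('items:updated', (event, items) => {
  console.log(items) // every window now sees the same list
})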
I'm new to Node.js development, coming from the .NET world.
I'm searching the web for best practices regarding DI / DIP in JavaScript.
In .NET I would declare my dependencies in the constructor, whereas in JavaScript I see a common pattern of declaring dependencies at the module level via a require statement.
To me it looks like using require couples me to a specific file, while using a constructor to receive my dependency is more flexible.
What would you recommend as a best practice in JavaScript? (I'm looking for the architectural pattern, not an IoC technical solution.)
searching the web i came along this blog post (which has some very interesting discussion in the comments):
https://blog.risingstack.com/dependency-injection-in-node-js/
It summarizes my conflict pretty well.
Here's some code from the blog post to show what I'm talking about:
// team.js
var User = require('./user');
function getTeam(teamId) {
return User.find({teamId: teamId});
}
module.exports.getTeam = getTeam;
A simple test would look something like this:
// team.spec.js
var Team = require('./team');
var User = require('./user');
describe('Team', function() {
it('#getTeam', function* () {
var users = [{id: 1}, {id: 2}];
this.sandbox.stub(User, 'find', function() {
return Promise.resolve(users);
});
var team = yield Team.getTeam();
expect(team).to.eql(users);
});
});
VS DI:
// team.js
function Team(options) {
this.options = options;
}
Team.prototype.getTeam = function(teamId) {
return this.options.User.find({teamId: teamId})
}
function create(options) {
  return new Team(options);
}

module.exports.create = create;
test:
// team.spec.js
var Team = require('./team');
describe('Team', function() {
it('#getTeam', function* () {
var users = [{id: 1}, {id: 2}];
var fakeUser = {
find: function() {
return Promise.resolve(users);
}
};
var team = Team.create({
  User: fakeUser
});
var result = yield team.getTeam();
expect(result).to.eql(users);
});
});
Regarding your question: I don't think there is a common practice in the JS community. I've seen both types in the wild: require modifications (like rewire or proxyquire) and constructor injection (often using a dedicated DI container). Personally, however, I think not using a DI container is a better fit for JS, because JS is a dynamic language with functions as first-class citizens. Let me explain:
Using a DI container enforces constructor injection for everything. It creates a huge configuration overhead, for two main reasons:
Providing mocks in unit tests
Creating abstract components that know nothing about their environment
Regarding the first argument: I would not adjust my code just for my unit tests. If it makes your code cleaner, simpler, more versatile and less error-prone, then go for it. But if your only reason is your unit test, I would not take the trade-off. You can get pretty far with require modifications and monkey patching. And if you find yourself writing too many mocks, you should probably not write a unit test at all, but an integration test. Eric Elliott has written a great article about this problem.
Regarding the second argument: This is a valid argument. If you want to create a component that only cares about an interface, but not about the actual implementation, I would opt for a simple constructor injection. However, since JS does not force you to use classes for everything, why not just use functions?
In functional programming, separating stateful IO from actual processing is a common paradigm. For instance, if you're writing code that is supposed to count file types in a folder, one could write this (especially when coming from a language that enforces classes everywhere):
const fs = require("fs");
class FileTypeCounter {
countFileTypes(dirname, callback) {
fs.readdir(dirname, function (err, files) {
if (err) return callback(err);
// recursively walk all folders and count file types
// ...
callback(null, fileTypes);
});
}
}
Now if you want to test that, you need to change your code in order to inject a fake fs module:
class FileTypeCounter {
constructor(fs) {
this.fs = fs;
}
countFileTypes(dirname, callback) {
this.fs.readdir(dirname, function (err) {
// ...
});
}
}
Now, everyone who is using your class needs to inject fs into the constructor. Since this is boring and makes your code more complicated once you have long dependency graphs, developers invented DI containers where they can just configure stuff and the DI container figures out the instantiation.
However, what about just writing pure functions?
function fileTypeCounter(allFiles) {
// count file types
return fileTypes;
}
function getAllFilesInDir(dirname, callback) {
// recursively walk all folders and collect all files
// ...
callback(null, allFiles);
}
// now let's compose both functions
function getAllFileTypesInDir(dirname, callback) {
getAllFilesInDir(dirname, (err, allFiles) => {
callback(err, !err && fileTypeCounter(allFiles));
});
}
Now you have two super-versatile functions out of the box: one doing IO, the other processing data. fileTypeCounter is a pure function and super easy to test. getAllFilesInDir is impure, but such a common task that you'll often find it already on npm, where other people have written integration tests for it. getAllFileTypesInDir just composes your functions with a little bit of control flow. This is a typical case for an integration test where you want to make sure that your whole application is working correctly.
By separating your code between IO and data processing, you won't find the need to inject anything at all. And if you don't need to inject anything, that's a good sign. Pure functions are the easiest thing to test and are still the easiest way to share code between projects.
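To illustrate how cheap that makes testing, a minimal sketch (assuming fileTypeCounter returns a map of extension to count, which the snippet above leaves open):

// fileTypeCounter.spec.js: a pure function needs no mocks or stubs
const files = ['a.txt', 'b.txt', 'c.js']
expect(fileTypeCounter(files)).to.eql({ txt: 2, js: 1 })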
In the past, DI containers as we know them from Java and .NET did not exist in JavaScript. With Node 6 came ES6 Proxies, which opened up the possibility of such containers - Awilix, for example.
So let's rewrite your code to modern ES6.
class Team {
constructor ({ User }) {
this.User = User
}
getTeam (teamId) {
return this.User.find({ teamId: teamId })
}
}
And the test:
import Team from './Team'
describe('Team', function() {
it('#getTeam', async function () {
const users = [{id: 1}, {id: 2}]
const fakeUser = {
find: function() {
return Promise.resolve(users)
}
}
const team = new Team({
  User: fakeUser
})
const result = await team.getTeam()
expect(result).to.eql(users)
})
})
Now, using Awilix, let's write our composition root:
import { createContainer, asClass } from 'awilix'
import Team from './Team'
import User from './User'
const container = createContainer()
.register({
Team: asClass(Team),
User: asClass(User)
})
// Grab an instance of Team
const team = container.resolve('Team')
// Alternatively: const team = container.cradle.Team
// Use it
team.getTeam(123) // calls User.find()
That's as simple as it gets; Awilix can handle object lifetimes as well, just like the .NET / Java containers do. This lets you do cool stuff like injecting the current user into your services, or instantiating your services once per HTTP request, etc.
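For example, lifetimes in Awilix are configured per registration, and a scope per HTTP request is how the current-user trick works (a hedged sketch; currentUser and req are placeholders, not part of the original answer):

import { createContainer, asClass, asValue } from 'awilix'
import Team from './Team'
import User from './User'

const container = createContainer().register({
  Team: asClass(Team).scoped(),    // fresh instance per scope
  User: asClass(User).singleton()  // one instance for the whole app
})

// e.g. inside an HTTP middleware, once per request:
function onRequest (req) {
  const scope = container.createScope()
  scope.register({ currentUser: asValue(req.user) }) // hypothetical
  return scope.resolve('Team') // Team and its deps resolve within this scope
}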