There's a set of JS modules representing similar OOP Classes: think of e.g. different types of backend tasks (SendEmailTask, WriteToDbTask, WriteToDiskTask), or different actions on a drawing canvas (DrawArc, DrawLine, DrawBezier). Those are just examples.
Each of those is a single JS file with its own define() call, and they're all located in a common directory. In a client module that depends on all of them, the dependency list and the argument list have to name each one separately, e.g. something like:
define([
'tasks/sendEmailTask', 'tasks/writeToDbTask', 'tasks/writeToDiskTask', ...
], function (SendEmailTask, WriteToDbTask, WriteToDiskTask, ...) {
/* ... */
/* ... new SendEmailTask(); */
/* ... new WriteToDbTask(); */
/* ... new WriteToDiskTask(); */
});
and both lists must be updated every time a new module is added to the set, e.g. MakeCoffeeTask, which seems to me like a Bad Thing™.
Is there a way to avoid these issues? I was thinking of a couple of possible approaches, but I don't know how to make them work:
Create a kind of namespace module. Each of those sub-modules depends on the namespace one and adds its definition to it. But then, if the client too depends only on the namespace, how do you ensure that the client gets loaded after all the sub-modules? (See the sketch after this list.)
Express both dependencies and arguments with some kind of wildcard, like 'tasks/*' for dependency and < no idea > for arguments.
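For reference, idea 1 could be sketched roughly as follows (the file names and the tasks/namespace module are hypothetical, not part of the existing code); it also shows exactly where the loading-order problem bites:

// tasks/namespace.js -- a shared, initially empty registry
define(function () {
    return {};
});

// tasks/sendEmailTask.js -- each task registers itself on the namespace
define(['tasks/namespace'], function (ns) {
    function SendEmailTask() { /* ... */ }
    ns.SendEmailTask = SendEmailTask;
    return SendEmailTask;
});

// client.js -- the unsolved part: nothing here forces the sub-modules to
// load, so ns may still be empty when this factory runs
define(['tasks/namespace'], function (tasks) {
    /* new tasks.SendEmailTask(); // only safe if sendEmailTask was loaded */
});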
You can make a separate module for each section, for example 'tasks/main'.
In that main module you include all the files you need and assign all the task functions to a single object, so the client only ever calls high-level functions. Take a look:
define(['tasks/main'], function (task) {
    task.add('Project Status meeting');
    task.SendEmailNotification('tAll');
});
and in tasks/main.js:
define(['tasks/sendEmailTask', 'tasks/writeToDbTask', 'tasks/writeToDiskTask'], function (sendEmailTask, writeToDbTask, writeToDiskTask) {
    var task = {
        add: function (taskObj) {
            // write your code using writeToDbTask, writeToDiskTask
        },
        SendEmailNotification: function (whom) {
            // write your code using sendEmailTask, writeToDbTask, writeToDiskTask
        }
    };
    return task;
});
Does that help you?
I was not able to find any other solution. This is what I came up with in the meantime... it is just an improvement over the current situation, and I don't think it solves the posed issues.
// in tasks/taskNamespace.js or in any client code
define(function (require) {
    var _taskArray = [
        require('tasks/sendEmailTask'),
        require('tasks/writeToDbTask'),
        require('tasks/writeToDiskTask'),
        /* ... , */
    ];
    var TaskNamespace = {};
    _taskArray.forEach(function (taskClass) {
        TaskNamespace[taskClass.name] = taskClass;
    });
    return TaskNamespace;
});
It takes advantage of the RequireJS sugar syntax and of the JS Function.name feature proposed for ES6.
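Note that for taskClass.name to work, each task module has to return a named function; a minimal sketch of the assumed shape of one such module:

// tasks/sendEmailTask.js (assumed shape) -- the constructor must be a
// *named* function so that taskClass.name yields "SendEmailTask" above
define(function () {
    function SendEmailTask() { /* ... */ }
    return SendEmailTask;
});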
What's the end result? You still have to manually add each Class to a list, but it's only one list instead of two. WHOA
Related
I am working on supporting a REST API that has literally thousands of functions/objects/stats/etc., and placing all those calls into one file does not strike me as very maintainable. What I want to do is have a 'base' file with the main constructor function and a few utility and very common functions, and then a file for each section of API calls.
The problem: how do you attach functions from other files to the 'base' object, so that referencing the main object gives you access to the subsections you have added to your program?
Let me try and illustrate what I am looking to do:
1) 'base' file has the main constructor:
var IPAddr = "";
var Token = "";
exports.Main = function (opts) {
    IPAddr = opts.IPAddr;
    Token = opts.Token;
};
2) 'file1' has some subfunctions that I want to define:
Main.prototype.Function1 = function (callback) {
    // stuff done here
    callback(error, data);
};

Main.prototype.Function2 = function (callback) {
    // stuff done here
    callback(error, data);
};
3) Program file brings it all together:
var Main = require('./main.js');
var Main?!? = require('./file1.js');

Main.Function1(function (err, out) {
    if (err) {
        // error stuff here
    }
    // main stuff here
});
Is there a way to combine an object from multiple code files? A 120,000-line Node.js file just doesn't seem like the way to go to me... not to mention it takes too long to load! Thanks :)
SOLUTION: For those who may stumble upon this in the future... I took the source code for Object.assign and back-ported it to my v0.12 version of Node and got it working.
I used the code from here: https://github.com/sindresorhus/object-assign/blob/master/index.js and put it in a separate file that I just require('./object-assign.js') without assigning it to a var. Then my code looks something like this:
require('./object-assign.js');
var Main = require('./Main.js');
Object.assign(Main.prototype, require('./file1.js'));
Object.assign(Main.prototype, require('./file2.js'));
And all my functions from the two files show up under the Main() object... too cool :)
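For this to work, each sub-file has to export a plain object of functions rather than assigning to Main.prototype itself; a sketch of what file1.js is assumed to look like (function names taken from the earlier example):

// file1.js (assumed shape) -- exports plain functions, which Object.assign
// then copies onto Main.prototype
module.exports = {
    Function1: function (callback) { /* stuff done here */ },
    Function2: function (callback) { /* stuff done here */ }
};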
First of all, each file runs in its own scope, so local variables are not shared between files.
However, as a hacky approach, you can make Main available everywhere by adding it to the global scope: write global.Main = Main right after you define it, and make sure the main file is the first one you require.
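A minimal sketch of that hack, assuming Main is defined in main.js:

// main.js -- besides exporting Main, also expose it globally
function Main(opts) { /* ... */ }
global.Main = Main; // every file required afterwards can now see Main
module.exports = Main;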
The better (who said?) approach is to extend the prototype of Main afterwards, though in that case you may need to update a lot of code. Just mix additional functionality into the base class:
file1.js
module.exports = {
    x: function () { /*****/ }
};
index.js
var Main = require('./main.js');
Object.assign(Main.prototype, require('./file1.js'));
Sure.
constructor.js
module.exports = function () {
    // whatever
};
prototype.js
module.exports = {
    someMethod() { return "test"; }
};
main.js
const Main = require("./constructor.js");
Object.assign( Main.prototype, require("./prototype.js"));
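Usage would then look something like this (a quick sketch based on the three files above):

// still in main.js, after the Object.assign call -- the instance picks
// someMethod up from the patched prototype
const obj = new Main();
console.log(obj.someMethod()); // "test"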
So, I'm a big fan of creating global namespaces in JavaScript. For example, if my app is named Xyz, I normally have an object XYZ which I fill with properties and nested objects. For example:
XYZ.Resources.ErrorMessage // = "An error while making request, please try again"
XYZ.DAL.City // = { getAll: function() { ... }, getById: function(id) { .. } }
XYZ.ViewModels.City // = { .... }
XYZ.Models.City // = { .... }
I sort of picked this up while working on a project with Knockout, and I really like it because there are no wild references to objects declared god-knows-where. Everything is in one place.
Now, this is OK for the front end. However, I'm currently developing a basic skeleton for a project which will start in a month, and it uses Node.
What I wanted was, instead of all the requires scattered through the .js files, a single object ('XYZ') which would hold all the requires in one place. For example:
Instead of:
// route.js file
var cityModel = require('./models/city');
var cityService = require('./services/city');
app.get('/city', function() { ...........});
I would make an object:
XYZ.Models.City = require('./models/city');
XYZ.DAL.City = require('./services/city');
And use it like:
// route.js file
var cityModel = XYZ.Models.City;
var cityService = XYZ.DAL.City;
app.get('/city', function() { ...........});
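For completeness, the namespace itself would presumably be built once in a module of its own; a minimal sketch (xyz.js is a hypothetical file name):

// xyz.js -- build the whole namespace in one place and share it
module.exports = {
    Models: { City: require('./models/city') },
    DAL:    { City: require('./services/city') }
};

// route.js file
var XYZ = require('./xyz');
var cityModel = XYZ.Models.City;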
I don't really have in-depth knowledge, but all of the requires get cached and, once cached, are served from memory, so re-requiring in multiple files isn't a problem.
Is this an ok workflow, or should I just stick to the standard procedure of referencing dependencies?
edit: I forgot to say, would this sort-of-factory pattern block the main thread or delay the starting of the server? I just need to know the downsides... I don't mind the requires in code, but I just renamed a single folder and had to go through five files to change the paths, which is really inconvenient.
I think that's a bad idea, because you would be pulling in a ton of modules every single time even though you may not always need them, and your namespaced object would get quite monstrous. require will check the module cache first, so I'd use a standard require for each dependency a script actually needs on the server.
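The cache behaviour is easy to verify: requiring the same module twice returns the very same object, so a require per file costs almost nothing. A tiny sketch, reusing the ./models/city path from above:

// both calls resolve to the same cached module object
var a = require('./models/city');
var b = require('./models/city');
console.log(a === b); // true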
I'm building a mid-sized app using Backbone and its friends jQuery and Underscore. My plan is to use QUnit to create unit tests.
I already have a proof of concept of the app, so I basically have a good grasp of what the code should look like, without having tested it. It looks like this:
(function () {
    // prepare some model
    var Query = Backbone.Model.extend({});
    var query = new Query();
    // register some events
    $('body.something').click(function () {
        query.set('key', 'value');
        // ...
    });
    // create some backbone view
    var QueryView = Backbone.View.extend({...});
    // init backbone view
    query.view = new QueryView();
    // add some plumbing here ...
    // and so on...
})();
In other words:
I wrap the module inside a function to avoid polluting the global scope
class declaration and class use are interleaved
event registration is interleaved with the rest of the code
variables global to the module are reused inside the module
Now I need to test that. The problem is, I think, mainly about event registration and plumbing.
My plan is to wrap the code in functions and export every function and object I want to test. The code will look like this:
var app = (function () {
    var app = {};
    // prepare some model
    var Query = Backbone.Model.extend({});
    app.query = new Query();
    // register some events
    app.registerEvent = function () {
        $('body.something').click(function () {
            app.query.set('key', 'value');
            // ...
        });
    };
    app.registerEvent(); // XXX: call immediately
    // create some backbone view
    app.QueryView = Backbone.View.extend({...});
    // init backbone view
    app.query.view = new app.QueryView();
    // add some plumbing here ...
    // wrapped in functions with the correct arguments, called immediately
    // and so on...
    // ...
    return app;
})();
This is the first time I need to write tests in JavaScript for this kind of application, so I'm wondering whether my approach to making the code testable is correct or can be improved. For instance, it seems silly to me to wrap the registration of events in functions without arguments and call them immediately.
Is there a JavaScript way to do this?
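One possible refinement (a sketch, not from the question): pass the collaborators in as arguments instead of closing over module-level state, so tests can hand in stubs:

// registerEvent takes its dependencies as parameters
app.registerEvent = function ($root, query) {
    $root.on('click', 'body.something', function () {
        query.set('key', 'value');
    });
};
app.registerEvent($(document), app.query); // production wiring
// in a test: app.registerEvent(fake$, fakeQuery);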
So I found an excellent way to test private functions while keeping my production code clean. I am not sure if you are using any build system like Grunt or Gulp, but if you are open to it you could do something like this:
// has a dependency on 'test', causing the testing suite to run first
gulp.task('js', ['test'], function () {
    return gulp.src(source)
        .pipe(plumber())
        // create sourcemaps so console errors point to the original file
        .pipe(sourcemaps.init())
        // strip out any code between the test comments
        .pipe(stripCode({
            start_comment: 'start-test',
            end_comment: 'end-test'
        }))
        // combine all separate files into one
        .pipe(concatenate('mimic.min.js'))
        // minify and mangle
        .pipe(uglify())
        .pipe(sourcemaps.write('maps'))
        .pipe(gulp.dest('dist/js'));
});
And the file could look like:
var app = (function () {
    function somePrivateFunction() {}
    function someotherPrivateFunction() {}
    var publicVar;

    var app = {
        publicFunction: function () {},
        publicVar: publicVar
    };

    /* start-test */
    app.test = {
        testableFunction: somePrivateFunction
    };
    /* end-test */

    return app;
}());
Everything between the test comments gets stripped out after the tests are run, so the production code stays clean. Grunt has a version of this, and I assume any automated build system can do the same. You can even set a watch task so that the tests run on every save. Otherwise you'll have to manually remove the exported test object before deployment.
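The watch task can be as simple as this (a sketch against the gulp 3 API used above; source is the same glob the js task reads):

// re-runs 'js' (which depends on 'test') whenever a source file changes
gulp.task('watch', function () {
    gulp.watch(source, ['js']);
});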
In the case of Backbone, just attach a test object to the module and reference that object in the tests. Or, if you really want to separate it, set the object on the global scope, window.testObject = { /* list of objects and methods to test... */ };, and strip that code out before deployment.
Basically what I do is avoid any calls at load time in the library I want to test, so everything is wrapped in a function and exported.
Then I have two other files: main.js, where I do the plumbing and which should be tested using integration tests with Selenium, and tests.js, which does the unit testing.
This is my second weekend playing with Node, so this is a bit of a newbie question.
I have a js file full of common utilities that provide stuff that JavaScript doesn't. Severely clipped, it looks like this:
module.exports = {
    Round: function (num, dec) {
        return Math.round(num * Math.pow(10, dec)) / Math.pow(10, dec);
    }
};
Many other custom code modules - also included via require() statements - need to call the utility functions. They make calls like this:
module.exports = {
    Init: function (pie) {
        // does lots of other stuff, but now needs to round a number
        // using the custom rounding fn provided in the common util code
        console.log(util.Round(pie, 2)); // ReferenceError: util is not defined
    }
};
The node.js file that is actually run is very simple (well, for this example). It just require()'s in the code and kicks off the custom code's Init() fn, like this:
var util = require("./utilities.js");
var customCode = require("./programCode.js");
customCode.Init(Math.PI);
Well, this doesn't work; I get a "ReferenceError: util is not defined" coming from the custom code. I know everything in each required file is "private", and that this is why the error is occurring, but I also know that the variable holding the utility code object has GOT to be stored somewhere, perhaps hanging off of global?
I searched through global but didn't see any reference to utils in there. I was thinking of using something like global.utils.Round in the custom code.
So the question is, given that the utility code could be referred to as anything really (var u, util, or utility), how in heck can I organize this so that other code modules can see these utilities?
There are at least two ways to solve this:
If you need something from another module in a file, just require it. That's the easy one.
Provide something which actually builds the module for you. I will explain this in a second.
However, your current approach won't work, as the Node.js module system doesn't provide globals the way you might expect from other languages. Except for the things exported with module.exports, you get nothing from the required module, and the required module knows nothing of the requiring module's environment.
Just require it
To avoid the gap mentioned above, you need to require the other module beforehand:
// -- file: ./programCode.js
var util = require(...);

module.exports = {
    Init: function (pie) {
        console.log(util.Round(pie, 2));
    }
};
requires are cached, so don't think too much about performance at this point.
Keep it flexible
In this case you don't directly export the contents of your module. Instead, you provide a constructor that will create the actual content. This enables you to give some additional arguments, for example another version of your utility library:
// -- file: ./programCode.js
module.exports = {
    create: function (util) {
        return {
            Init: function (pie) {
                console.log(util.Round(pie, 2));
            }
        };
    }
};

// --- other file
var util = require(...);
var myModule = require('./module').create(util);
As you can see, this will create a new object every time you call create, so it will consume more memory than the first approach. Thus I recommend just require()ing things.
First a bit of history: we have an engine which is made up of many JavaScript files which are essentially modules. These modules return a single class that is assigned to the global scope, although under a specified namespace.
The engine itself is used to display eLearning content, with each different eLearning course requiring slightly different needs, which is where we include javascript files into the page based on the necessary functionality. (There is only one entry page).
I've been trying to weigh up whether it's worth changing to AMD, require.js and r.js, or whether it's better to stay with our current system, which includes everything required on the page and minifies it into one script.
One of my biggest problems with moving to AMD is that it seems harder to extend a class easily. For example, sometimes we have to adjust the behaviour of the original class slightly. So we add another script include on the page that extends the original class: it copies the original prototype, executes the original function being overridden with apply, and then does whatever additional code is required.
Can you extend an AMD module without adapting the original file? Or am I missing the point and we're best staying with what we're doing at the moment?
I recently started a project using RequireJS, and the method I use to extend underscore boils down to something like this:
Relevant Directory Structure:
/scripts
/scripts/underscore.js
/scripts/base/underscore.js
The real underscore library goes to /scripts/base/underscore.js.
My extensions go in /scripts/underscore.js.
The code in /scripts/underscore.js looks like this:
define(['./base/underscore'], function (_) {
    'use strict';
    var exports = {};
    // add new underscore methods to exports
    _.mixin(exports); // underscore's method for adding methods to itself
    return _; // return the same object as returned from the underscore module
});
For a normal extension, it could look more like this:
define(['underscore', './base/SomeClass'], function (_, SomeClass) {
    'use strict';
    _.extend(SomeClass.prototype, {
        someMethod: function (someValue) {
            return this.somethingOrOther(someValue * 5);
        }
    });
    return SomeClass;
});
Note on underscore: elsewhere I used the RequireJS shim config to get underscore to load as an AMD module, but that should have no effect on this process with non-shimmed AMD modules.
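For reference, that shim config could look roughly like this (a sketch assuming the directory layout above, where the plain non-AMD underscore build lives at scripts/base/underscore.js):

require.config({
    baseUrl: 'scripts',
    shim: {
        // the plain underscore build exports the global `_`
        'base/underscore': { exports: '_' }
    }
});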
You can have modules that contain your constructor functions. When these modules get included they are ready for use, and you can create objects from them afterwards.
Example in RequireJS:
// construction.js
define(function () {
    // expose a constructor function
    return function () {
        // this. ... (initialize instance state here)
    };
});

// then in foo.js
define(['construction'], function (Construction) {
    var newObj = new Construction(); // one object using the constructor
});

// then in bar.js
define(['construction'], function (Construction) {
    // play with Construction's prototype here, then use it
    var newObj = new Construction();
});