I am planning to build an application with Electron that should have many different windows. I have already read a book and studied the documentation to get familiar with the Electron ipc module and how it works.
This concept works for me with a manageable number of windows, but the more windows my application has, the more confusing it becomes.
I am also new to JavaScript and prototype-based object-oriented programming.
I am trying to imagine how I could build a large application without losing the overview of main.js.
The problem is that main.js needs specific IPC methods for every single window. I am thinking of a solution where I keep the IPC methods in separate JavaScript files with descriptive names; a sketch of that idea follows the example below.
Right now I need to do, for example, the following in main.js:
ipcMain.on('window1-call1', function (event, args) {
  // do stuff
})
ipcMain.on('window1-call2', function (event, args) {
  // do stuff
})
ipcMain.on('window2-call1', function (event, args) {
  // do stuff
})
and so on...
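What I imagine instead is that each window's handlers live in their own file and main.js only wires them up. A minimal sketch of that idea (the file and channel names are placeholders I made up):

// windows/window1-ipc.js – holds all IPC handlers for window 1
module.exports = function (ipcMain) {
  ipcMain.on('window1-call1', function (event, args) {
    // do stuff for window 1
  })
  ipcMain.on('window1-call2', function (event, args) {
    // do stuff for window 1
  })
}

// main.js – one line per window instead of dozens of inline handlers
require('./windows/window1-ipc')(ipcMain)
require('./windows/window2-ipc')(ipcMain)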
Here is one example from my current main.js, preload.js and renderer.js to show how I build a simple IPC call:
//main.js
const { ipcMain, dialog } = require('electron')

ipcMain.on('open-directory-dialog', function (event) {
  // note: since Electron 6, showOpenDialog returns a Promise instead of taking a callback
  dialog.showOpenDialog(mainWindow, {
    title: 'Select an image...',
    properties: ['openFile'],
    defaultPath: '/home',
    buttonLabel: 'Select...',
    filters: [
      { name: 'Images', extensions: ['jpg', 'png', 'gif'] }
    ]
  }, function (files) {
    // reply to the window that sent the request
    if (files) event.sender.send('selectedItem', files)
  })
})
//preload.js
const { ipcRenderer } = require('electron')

// exposed on window – this assumes contextIsolation is disabled, so the
// preload script and the renderer share the same global object
window.ipcFiledialog = function (channel) {
  ipcRenderer.send(channel) // forward the channel passed in by the renderer
}

// callback: main replies on the 'selectedItem' channel
ipcRenderer.on('selectedItem', function (event, path) {
  setSelectedItem(path) // defined in renderer.js
})
//renderer.js
// assumes a button with the id 'selectDirBtn' exists in the page
const selectDirBtn = document.getElementById('selectDirBtn')

selectDirBtn.addEventListener('click', function (event) {
  window.ipcFiledialog('open-directory-dialog')
})

function setSelectedItem (files) {
  document.getElementById('selectedItem').innerHTML = files
}
With this approach I have to create every window in main.js (which makes main.js grow) and make sure I call the right IPC method for the right window.
The callback also needs to call a specific method for every window.
So I would need many methods, and this construct does not look very flexible.
Is there any best practice to keep main.js from growing towards the “infinite” and to get more flexible IPC methods?
Related
The code I'm trying to test relies on RequireJS loader plugins. Example with requirejs/text:
require(['text!templates/foo'], function (data) {
  // handle loaded data
});
For a specific unit test, I'm trying to mock the response for text!templates/foo and override with one relevant for the test:
it('should load a template', function (done) {
  // TODO: mock 'text!templates/foo' here to return 'mock_data'
  // templateViewer uses the text plugin internally to do the actual loading
  templateViewer.templateFor('foo', function (error, templateData) {
    expect(templateData).toEqual('mock_data');
    done();
  });
});
I've looked at RequireJS dependency mocking solutions, especially Squire.js, but it seems they are all suited to mocking regular dependencies, not plugin responses.
I've also looked at stub libraries like Sinon to maybe replace the actual require call, but that seems problematic.
What's the recommended practice? I prefer not to replace the entire text plugin with a mock one in my requirejs configuration, just override some of its responses in specific tests.
My setup is node+mocha+requirejs
Edit
Please see this example fiddle project to see my issue with Squire:
http://runnable.com/VUBoI0ex6v9Gs-BJ/squirejs-with-plugins-for-node-js-and-hello-world
This will mock what you'd get from requiring text!foo/x.html. Plugins are not special, you just need to mock the entire path, including the plugin name.
var requirejs = require("requirejs");
var assert = require("assert");

requirejs.config({
  baseUrl: __dirname,
  packages: [
    {
      name: "squire",
      location: "node_modules/squirejs",
      main: "src/Squire"
    }
  ]
});

var x;
before(function (done) {
  requirejs(["squire"], function (Squire) {
    var injector = new Squire();
    injector.mock("text!foo/x.html", "foo")
      .require(["text!foo/x.html"], function (_x) {
        x = _x;
        done();
      });
  });
});

it("foo", function () {
  assert.equal(x, "foo");
});
The problem you run into with the example code you added to your question is that you use the global require instead of a require passed to your module by the loader. You should add require as a dependency:
define(['require', ....], function (require, ....) {
The require module is special and reserved by RequireJS; it returns a reference to the require function. You must use it, for instance, when you use RequireJS contexts, so that a module loaded in a specific context gets a require function bound to that context. SquireJS also needs you to do this so that it can trap your calls to require; the global require bypasses SquireJS.
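In other words, any dynamic require inside the module under test should go through that local reference, roughly like this (using the template path from the question):

define(['require'], function (require) {
  // this call goes through the loader-provided require,
  // so Squire can intercept it and substitute the mock
  require(['text!templates/foo'], function (data) {
    // handle loaded (or mocked) data
  });
});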
Using RequireJS I'm building an app which makes extensive use of widgets. For each widget I have at least 3 separate files:
- request.js containing code for setting up request/response handlers to request a widget in another part of my application
- controller.js containing handling between model and view
- view.js containing handling between user and controller
Module definition in request.js:
define(['common/view/widget/entity/term/list/table/controller'],
function(WidgetController) { ... });
Module definition in controller.js:
define(['common/view/widget/entity/term/list/table/view'],
function(WidgetView) { ... });
Module definition of view.js is:
define(['module', 'require'], function (module, require) {
  'use strict';
  var WidgetView = <constructor definition>;
  return WidgetView;
});
I have lots of these little situations in the widgets I have developed. What I dislike is using the full path every time a module requires another module and both are located in the same folder. I'd like to simply specify it as follows (assuming we have a RequireJS plugin which solves this for us):
define(['currentfolder!controller'],
function(WidgetController) { ... });
For this, I have written a small plugin, as I couldn't find one on the web:
define({
  load: function (name, parentRequire, onload, config) {
    var path = parentRequire.toUrl('.').substring(config.baseUrl.length) + '/' + name;
    parentRequire([path], function (value) {
      onload(value);
    });
  }
});
As you might notice, in its basic form it looks like the example from the RequireJS plugin documentation.
Now in some cases the above works fine (e.g. from request.js to controller.js), but in other cases a load timeout occurs (from controller.js to view.js). The paths that are generated are all proper RequireJS paths. Looking at the load timeouts, the following is logged:
Timestamp: 13-09-13 17:27:10
Error: Error: Load timeout for modules: currentfolder!view_unnormalized2,currentfolder!view
http://requirejs.org/docs/errors.html#timeout
Source File: http://localhost/app/vendor/requirejs/require.js?msv15z
Line: 159
The above log is from a test where I only loaded view.js from controller.js, using currentfolder!view in the define statement's list of modules. Since I only requested currentfolder!view once, I'm confused as to why the message shows both currentfolder!view_unnormalized2 and currentfolder!view.
Any idea as to why this might be happening?
My answer may not answer your primary question, but it will help you achieve what you're trying to do with your plugin.
In fact, RequireJS supports relative paths for requiring modules when using the CommonJS style. Like so:
define(function( require, exports, module ) {
  var relativeModule = require("./subfolder/module");

  module.exports = function() {
    console.log( relativeModule );
  };
});
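Relative module IDs also work directly in the dependency array, resolved against the requiring module's own ID, so a plugin may not be needed at all. For the controller.js from the question this would look roughly like:

// controller.js – './view' resolves relative to controller.js's own module ID
define(['./view'], function (WidgetView) {
  // ...
});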
I've written a function which I'd like to use as a Grunt task. I can do this by adding this to the Gruntfile:
grunt.registerTask('foo', function () {
  // code here
});
However, it makes more sense to keep the function code in a separate file. I plan to define a bunch of these custom tasks and I don't want to bloat the Gruntfile.
I'm not sure what the preferred way of registering such tasks is. I have found this to work:
grunt.registerTask('foo', function () {
  require('./path/to/foo.js')(grunt);
});
So I still have an inline function like in the first example, but this time I'm loading an external file and invoking it immediately. In that external file, I of course have to write:
module.exports = function (grunt) {
  // code here
};
This works, but it feels hackish. Is there a more proper way of doing this?
Short answer: the alternative to this
grunt.registerTask('foo', function () {
  require('./path/to/foo.js')(grunt);
});
is http://gruntjs.com/api/grunt#grunt.loadtasks
Long answer:
Normally, when you have tasks in external files, they are served as ordinary Node.js modules. So, if it is something that you will use in several projects, you may want to publish it to the npm registry. Later, inside your Gruntfile.js, you will have:
grunt.loadNpmTasks('your-module-here');
Grunt's documentation says:
Load tasks from the specified Grunt plugin. This plugin must be installed locally via npm, and must be relative to the Gruntfile
However, if you don't want to publish anything to the registry, you should use loadTasks:
grunt.loadTasks('path/to/your/task/directory');
So, once the task is loaded, you may use it in your configuration.
Here is a simple Grunt task placed in an external file:
'use strict';

module.exports = function(grunt) {
  grunt.registerMultiTask('nameoftask', 'description', function() {
    var self = this;
    // this.data here contains your configuration
  });
};
And later in Gruntfile.js
grunt.initConfig({
  nameoftask: {
    task: {
      // parameters here
    }
  }
});
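With this in place the task runs like any other Grunt target: grunt nameoftask executes all of its targets, while grunt nameoftask:task runs only the task target shown above.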
I had a similar problem.
I wanted to modularize my Grunt config and custom tasks by functionality (big UX/UI blocks) rather than by technical features, and I wanted to keep the config files next to the task files (better when working on a large legacy codebase with a varied team of 5 people with varying JS knowledge).
So I externalized my tasks like Krasimir did.
In the Gruntfile, I wrote:
// power of globbing for loading tasks
var tasksLocations = ['./grunt-config/default_tasks.js', './grunt-config/**/tasks.js'];

var taskFiles = grunt.file.expand({
  filter: "isFile"
}, tasksLocations);

taskFiles.forEach(function(path) {
  grunt.log.writeln("=> loading & registering : " + path);
  require(path)(grunt);
});
You will find the whole boilerplate Gruntfile here (external config and tasks loading): https://gist.github.com/0gust1/7683132
I notice in the documentation there is a way to pass custom configuration into a module:
requirejs.config({
  baseUrl: './js',
  paths: {
    jquery: 'libs/jquery-1.9.1',
    jqueryui: 'libs/jquery-ui-1.9.2'
  },
  config: {
    'baz': {
      color: 'blue'
    }
  }
});
Which you can then access from the module:
define(['module'], function (module) {
  var color = module.config().color; // 'blue'
});
But is there also a way to access the top-level paths configuration, something like this?
define(['module', 'require'], function (module, require) {
  console.log( module.paths() );  // no method paths()
  console.log( require.paths() ); // no method paths()
});
FYI, this is not for a production site. I'm trying to wire together some odd debug/config code inside a QUnit test page. I want to enumerate which module names have a custom path defined. This question touched on the issue but only lets me query known modules, not enumerate them.
It is available, but it's an implementation detail that shouldn't be depended on in production code (which you've already said this isn't, but fair warning to others!).
The config for the main context is available at require.s.contexts._.config. Other contexts will also hang off of that contexts property, under whatever name you associated with them.
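For the enumeration use case from the question, a quick sketch built on that internal property could look like this (undocumented, so it may change between RequireJS versions):

// read the merged config of the default ("_") context
var cfg = require.s.contexts._.config;

// list every module name that has a custom path defined
Object.keys(cfg.paths).forEach(function (name) {
  console.log(name + ' -> ' + cfg.paths[name]);
});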
I don't believe require exposes that anywhere; at least I can't find it looking through the (immense) codebase. There are two ways you could achieve this, though. The first and most obvious is to define the config as a global variable. The second, and closer to what you want, is to create a require plugin that overrides the load function to attach the config to the module:
define({
  load: function (name, req, onload, config) {
    req([name], function (value) {
      value.requireConfig = config;
      onload(value);
    });
  }
});
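Assuming the plugin above is mapped to a name such as attach-config in your paths configuration (the name is hypothetical), a module loaded through it would carry the config along:

// 'attach-config' is the hypothetical name mapped to the plugin above
define(['attach-config!some/module'], function (mod) {
  console.log(mod.requireConfig.paths); // the top-level paths configuration
});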
I am currently using RequireJS to manage module js/css dependencies.
I'd like to explore the possibility of having Node do this via a centralized config file.
So instead of manually doing something like
define([
  'jquery',
  'lib/somelib',
  'views/someview'
])
within each module.
I'd have Node inject the dependencies, i.e.
require('moduleA').setDeps('jquery','lib/somelib','views/someview')
Anyway, I'm interested in any projects looking at dependency injection for Node.
Thanks.
I've come up with a solution for dependency injection. It's called injectr, and it uses Node's vm library and replaces the default functionality of require when including a file.
So in your tests, instead of require('libToTest'), use injectr('libToTest', { libToMock: myMock }). I wanted to make the interface as straightforward as possible, with no need to alter the code being tested. I think it works quite well.
It's worth noting that injectr paths are relative to the working directory, unlike require, which is relative to the current file. That shouldn't matter, though, because it's only used in tests.
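A minimal sketch of what a test could look like with injectr (the file and mock names are made up):

var injectr = require('injectr');

// load lib/libToTest.js, substituting a fake for its 'libToMock' dependency
var libToTest = injectr('lib/libToTest.js', {
  libToMock: { doSomething: function () { return 'mocked'; } }
});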
I've previously toyed with the idea of providing an alternate require to make a form of dependency injection available in Node.js.
Module code
For example, suppose you have following statements in code.js:
var fs = require('fs');
console.log(fs.readFileSync('text.txt', 'utf-8'));
If you run this code with node code.js, then it will print out the contents of text.txt.
Injector code
However, suppose you have a test module that wants to abstract away the file system.
Your test file test.js could then look like this:
var origRequire = global.require;
global.require = dependencyLookup;

require('./code.js');

function dependencyLookup (file) {
  switch (file) {
    case 'fs': return { readFileSync: function () { return "test contents"; } };
    default: return origRequire(file);
  }
}
If you now run node test.js, it will print out "test contents", even though it includes code.js.
I've also written a module to accomplish this. It's called rewire. Just run npm install rewire and then:
var rewire = require("rewire"),
    myModule = rewire("./path/to/myModule.js"); // exactly like require()

// Your module will now export a special setter and getter for private variables.
myModule.__set__("myPrivateVar", 123);
myModule.__get__("myPrivateVar"); // = 123

// This allows you to mock almost everything within the module, e.g. the fs module.
// Just pass the variable name as the first parameter and your mock as the second.
myModule.__set__("fs", {
  readFile: function (path, encoding, cb) {
    cb(null, "Success!");
  }
});

myModule.readSomethingFromFileSystem(function (err, data) {
  console.log(data); // = Success!
});
I was inspired by Nathan MacInnes's injectr but used a different approach. I don't use vm to evaluate the test module; in fact, I use Node's own require. This way your module behaves exactly as it does with require() (except for your modifications). Debugging is also fully supported.