I would like to optimize the inter process communication of my Electron application by using arguments based on classes. For example: I would like to tell the renderer process to display a certain image together with additional information that my main process has fetched from some file (or something like that). I would then create an instance of a class such as
class structuredArgument {
    base64image;
    additionalInformation1;
    additionalInformation2;
    // [...]
}
let instanceOfStructuredArgument = new structuredArgument();
I would then send that instance to the renderer process
event.reply('display-image', instanceOfStructuredArgument);
If I were able to share the class definition above between the main and the renderer process, I could create an instance of it in the renderer process like this
let instanceOfStructuredArgument = Object.assign(new structuredArgument(), arg);
This would allow me to check whether the argument is actually valid, to use the class' properties in a comfortable way, etc.
The problem is, however: I can include "classDefinitions.js" in the renderer process with a script tag in my index.html. Since the main process is "node based" I need to "export" my class definitions
module.exports.structuredArgument = structuredArgument; // etc.
Unfortunately, this "module.exports" statement is invalid from the renderer process' perspective ("Uncaught ReferenceError: module is not defined").
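For reference, a common workaround for exactly this error is to guard the export so the same file can be loaded both by require() in the main process and via a script tag in the renderer. A minimal sketch, reusing the class from the question (property values are illustrative):

```javascript
// classDefinitions.js: one file shared by both processes.
// The guard skips module.exports when no CommonJS module system exists,
// i.e. when the file is loaded via a <script> tag in the renderer.
class structuredArgument {
    constructor() {
        this.base64image = null;
        this.additionalInformation1 = null;
        this.additionalInformation2 = null;
    }
}

if (typeof module !== 'undefined' && module.exports) {
    module.exports.structuredArgument = structuredArgument;
}

// Rehydrating a plain IPC payload back into a class instance:
const payload = { base64image: 'abc', additionalInformation1: 42 };
const instance = Object.assign(new structuredArgument(), payload);
```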
Is there a "right" and well working "best practice" way of purposely sharing class definitions by the main and the renderer process of an Electron app to avoid redundancy and ensure that both processes are using the same class definitions?
Thank you very much for your support!
I'm learning JavaScript and using the AWS SDK for JavaScript.
Reading an IAM example from the documentation, I saw the following pattern:
Create a file named iamClient.js where you instantiate an object and export it.
Create another file where you import the client created above to use it.
What is the main benefit of doing this instead of just creating and using the object in the same file?
I know this is a small example and maybe there is no issue doing everything in the same file, but I'm curious whether this is just organization/best practice for when something bigger is built on this sample, or whether there is a technical reason. :)
Well, that's how you'd create a configuration singleton in another language, for instance.
Creating such an object can sometimes be expensive, so you create it once and then just reuse it :)
During testing, if you provide a mock for your iamClient module, you're set for all unit tests (assuming you're using Jest or similar).
It also keeps you from repeating yourself; duplicated setup code is a code smell.
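As a sketch of the singleton-module idea (the class and method names here are illustrative, not the real AWS SDK API): Node caches the result of require(), so exporting an already-constructed instance means every importer shares the same object.

```javascript
// iamClient.js (sketch): construct the client once, at module load time.
// Node's require() cache hands this same instance to every importer.
class IamClient {
    constructor() {
        this.callCount = 0; // stands in for expensive one-time setup
    }
    listUsers() {
        this.callCount += 1;
        return []; // placeholder for a real API call
    }
}

const sharedClient = new IamClient();
module.exports = sharedClient;

// elsewhere: const client = require('./iamClient'); // same instance everywhere
```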
I am trying to write a small app with Electron that needs a database. Currently I'm testing PouchDB, but that shouldn't really matter.
For better code quality I created a class that is going to handle the common database requests - it should be the only way to access the db.
I'm not sure if I understood the main/renderer process concept correctly, but I think the main process should take care of db access. So this is my current configuration:
main.js
import Database from './database'
export const myDB = new Database()
database.js (obviously only a stub)
export default class Database {
    hello = () => {
        console.log("Hello World")
    }
}
Root.js (one of the ui components [using react])
const remote = require('electron').remote
const main = remote.require('./main.js')
...
<button onClick={() => main.myDB.hello()}>Test</button>
My question: Is this a feasible solution for code structuring, or am I getting something completely wrong? My JS experience amounts to some jQuery effects, and I have no Node experience at all. This is just a small hobby project, so I just wanted to start coding ;)
You have the right idea: the database-related code should be executed in the main process.
main.js is what would be the main process, but it seems to be missing the code that creates a browser window (which in turn creates the renderer process). Take a look at the Electron example here; the magic happens in createWindow().
Root.js is executed in the renderer process; it can only communicate with the main process through 'remote' or 'ipcRenderer', the latter being a bit more secure. A bit more information about remote can be found on electron.rocks. You are doing it the right way in terms of code structure.
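To illustrate the ipcRenderer route: Electron's ipcMain.handle / ipcRenderer.invoke pair lets the renderer call into the main process without remote. The sketch below substitutes a tiny in-memory stand-in for Electron's IPC objects so the shape of the code can be shown outside an Electron runtime; the channel name and Database class are illustrative.

```javascript
// In-memory stand-ins for Electron's IPC objects (mocks, for illustration only;
// a real app would use require('electron').ipcMain and ipcRenderer instead).
const handlers = new Map();
const ipcMain = { handle: (channel, fn) => handlers.set(channel, fn) };
const ipcRenderer = { invoke: async (channel, ...args) => handlers.get(channel)(...args) };

// --- main process side: own the database, expose it over a channel ---
class Database {
    hello() { return "Hello World"; }
}
const myDB = new Database();
ipcMain.handle('db-hello', () => myDB.hello());

// --- renderer side: call across the channel instead of touching the db ---
async function hello() {
    return ipcRenderer.invoke('db-hello');
}
```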
The main process is responsible for creating and managing BrowserWindow instances and various application events. It can also do things like register global shortcuts, create native menus and dialogs, respond to auto-update events, and more. Your app’s entry point will point to a JavaScript file that will be executed in the main process. A subset of Electron APIs (see graphic below) are available in the main process, as well as all node.js modules. The docs state: “The basic rule is: if a module is GUI or low-level system related, then it should be only available in the main process.”
I'm using the latest qooxdoo SDK (3.5) and am trying to find a way to dynamically load a module. Each module would implement an "init" function which creates a window in the application and, from that point, is self-contained.
What I would need is the ability to call an arbitrary init function without knowing the module existed beforehand. For example, someone uploads a custom module and tries to run it--I just need to call the module's init function (or error out if the call fails).
Any ideas?
Edit:
Something like:
function loadModule(modName) {
    var mod = new qx.something.loadModule(modName);
    mod.init();
}
I found 3 ways that Qooxdoo has to run dynamic code. The first way is via the built-in parts loader. "Parts" are basically portions of an application that qooxdoo will load "just-in-time" when you actually need them--for example, a class that operates a rarely used form or dialog box. This method is not truly dynamic (in my opinion) in that it requires the code to be included in the build process that Qooxdoo provides. Explaining exactly how it works is out of scope for this answer and, frankly, I'm not yet all that familiar with it myself.
The second way is via the qx.Class.getByName() function call. It works like so:
qx.Class.define("Bacon", {
    extend: qx.core.Object,
    construct: function (foo, bar) {
        this.foo = foo;
        this.bar = bar;
    }
});

var klass = qx.Class.getByName("Bacon");
var obj = new klass("foo", "bar");
this.debug(obj.foo);
This method was found on the Qooxdoo mailing list here. This method will work for code included in the build process and for code introduced dynamically but, in my opinion, is trumped by the third method for the simple reason that if you are introducing a new class dynamically, you'll have to use the third method anyway.
The final method I located was actually revealed to me via studying the source code for the Qooxdoo playground. (The source code is available as part of the desktop download.)
The playground reads the code in from the editor and creates an anonymous function out of it, then executes the function. There is a bunch of setup and tear-down the playground does surrounding the following code, but I've removed it for brevity and clarity. If you are interested in doing something similar yourself, I highly recommend viewing the source code for the playground application. The dynamic code execution is contained within the __updatePlayground function starting on line 810 (Qooxdoo v3.5).
var fun;
try {
    fun = qx.event.GlobalError.observeMethod(new Function(code));
} catch (ex) {
    // do something with the exception
}

try {
    fun.call();
} catch (ex) {
    // do something with the exception
}
The code is straightforward and uses the built-in Javascript function "call" to execute the anonymous function.
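The same technique works in plain JavaScript outside qooxdoo. A minimal sketch (the code string and context object are made up; qx.event.GlobalError.observeMethod, which only adds qooxdoo's global error wrapping, is omitted):

```javascript
// Dynamically supplied source, e.g. the body of an uploaded module.
const code = 'this.result = 2 + 2;';
const context = {};

let fun;
try {
    fun = new Function(code); // compiles the string; does not run it yet
} catch (ex) {
    // syntax errors in the dynamic code surface here
}

try {
    fun.call(context); // inside the dynamic code, `this` is `context`
} catch (ex) {
    // runtime errors in the dynamic code surface here
}
```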
Please define module.
Qooxdoo source code uses the same convention as Java - one class per file. Do you really want to load classes individually and deal with dependencies? If not, what's your definition of a module?
Other than that, qooxdoo has a notion of packages: groups of classes, interfaces and mixins (including the framework itself and contribs) packed by the generator in an optimized way, so that the classes used earlier are loaded earlier. Using qooxdoo's own packaging mechanism requires no more effort than running the build with custom arguments or customizing config.json; all of this is described in detail in the manual.
If your idea of a module is sort of a sub-application, mostly decoupled from everything else in the big application, I'm not sure it's achievable without either significantly modifying the generator code (what ./generate.py calls) or accepting some size overhead.
I won't go into details of modifying the generator - if you go this route you'll need to dig deeply anyway, and you'll learn more than I know about the generator.
What you can do while staying within what qooxdoo allows is to create a separate island application for each module, build your own infrastructure for inter-modules communication via JavaScript attached to the top window, and run the modules inside the main page, with some manually added magic to make the various modules behave like tab panes or qooxdoo windows. The overhead you'll have to take, besides some awkward custom, non-qooxdoo code, is that all modules will re-load the qooxdoo framework code.
I'm in the process of moving a JSF-heavy web application to a REST and mainly JS module application.
I've watched "Scalable JavaScript Application Architecture" by Nicholas Zakas on YUI Theater (excellent video) and implemented much of the talk with good success, but I have some questions:
I found the lecture a little confusing regarding the relationship between modules and sandboxes. On one hand, to my understanding, modules should not be affected by anything happening outside of their sandbox, which is why they publish events via the sandbox (and not via the core, since the core is for hiding the base library). But does each module in the application get a new sandbox? Shouldn't the sandbox limit events to the modules using it, or should events be published across the page? For example: if I have two editable tables and want to contain each one in its own sandbox so that its events affect only the modules inside that sandbox (something like a message box per table, which is a separate module/widget), how can I do that with a sandbox per module? Of course I can prefix the events with the module id, but that creates coupling I want to avoid, and I don't want to package modules together as one module per combination, as I already have 6-7 modules.
While I can hide the base library for small things like an id selector, I would still like to use the base library for module dependencies and resource loading, using something like the YUI loader or dojo.require. So in fact I'm hiding the base library, yet the modules themselves are defined and loaded by it, which seems a little strange to me.
Libraries don't return simple JS objects but usually wrap them, e.g. you can do something like $$('.classname').each(..., which cleans up the code a lot. It makes no sense to wrap the base library and then, inside the module, create a dependency on it by calling .each; but avoiding those features means writing a lot of code that could be left out, and implementing that functionality myself is very bug prone.
Does anyone have experience building a front-end stack of this kind? How easy is it to change the base library and/or mix modules from different libraries, e.g. using the YUI DataTable but doing form validation with dojo?
Somewhat a combination of 2 and 4: if I choose to do something like I said and load dojo form-validation widgets for inputs via the YUI loader, would that mean dojo core is a module and the form module depends on it?
We use this pattern heavily in our applications. Check out the book JavaScript Patterns by Stoyan Stefanov for a detailed look in how to implement the Sandbox pattern. Basically it looks something like this:
(function (global) {
    var Sandbox = function fn (modules, callback) {
        var installedModules = Sandbox.modules,
            i = 0,
            len = modules.length;

        if (!(this instanceof fn)) {
            return new fn(modules, callback);
        }

        // modules is an array in this instance:
        for (; i < len; i++) {
            installedModules[modules[i]](this);
        }

        callback(this);
    };

    Sandbox.modules = {};

    global.Sandbox = Sandbox;
})(this);

// Example module:
// You extend the current sandbox instance with new functions
Sandbox.modules.ajax = function (sandbox) {
    sandbox.ajax = $.ajax;
    sandbox.json = $.getJSON;
};

// Example of running your code in the sandbox on some page:
Sandbox(['ajax'], function (sandbox) {
    sandbox.ajax({
        type: 'post',
        url: '/Sample/Url',
        success: function (response) {
            // success code here. remember this ajax maps back to $.ajax
        }
    });
});
I'm currently in the process of refactoring my webplayer so that we'll be more easily able to run it on our other internet radio stations. Much of the setup between these players will be very similar, however, some will need to have different UI plugins / other plugins.
Currently in the webplayer I do something like this in its init():
_this.ui = new UI();
_this.ui.playlist = new Playlist();
_this.ui.channelDropdown = new ChannelDropdown();
_this.ui.timecode = new Timecode();
etc etc
This works fine, but it locks me into requiring those objects at runtime. What I'd like is to be able to add them based on the station's needs. Basically my question is: do I need to add some kind of "addPlugin()" functionality here? And if I do, do I need to constantly check from my WebPlayer object whether the plugin exists before attempting to use it? Like...
if (_hasPlugin('playlist')) this.plugins.playlist.add(track);
I apologize if some of this might not be clear... really trying to get my head wrapped around all of this. I feel I'm closer but I'm still stuck. Any advice on how I should proceed with this would be greatly appreciated.
Thanks in advance,
Lee
You would need to expose certain functionality within your application that you want others to be able to build on, i.e. making public get/set accessors on major components like your UI and your player. The more functionality you expose, the more plugins will be able to modify important parts of your player.
So let's say you have a UI.header, and header contains properties that define how the header displays the UI. So you expose header.backgroundImage as a public string, header.Text as a public string, and header.height as a public int. Now someone designing a plugin can change your header values and make it look and say what they want.
It's all about how much you want people to be able to alter your player based on what you expose.
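To make the question's addPlugin() idea concrete, here is one possible registry sketch (WebPlayer, Playlist and all method names are illustrative, not from an existing library): optional plugins register by name, and each call site guards on presence exactly once.

```javascript
class WebPlayer {
    constructor() {
        this.plugins = {}; // name -> plugin instance, filled per station
    }
    addPlugin(name, plugin) {
        this.plugins[name] = plugin;
    }
    hasPlugin(name) {
        return Object.prototype.hasOwnProperty.call(this.plugins, name);
    }
    trackChanged(track) {
        // Each optional plugin is guarded at the call site that needs it.
        if (this.hasPlugin('playlist')) this.plugins.playlist.add(track);
    }
}

class Playlist {
    constructor() { this.tracks = []; }
    add(track) { this.tracks.push(track); }
}

const player = new WebPlayer();
player.addPlugin('playlist', new Playlist()); // station-specific wiring
player.trackChanged('song.mp3');
```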
You can define JavaScript classes for your plugins, load them as dependencies of the webplayer, and instantiate them at runtime as needed with the help of RequireJS AMD.
//in your webplayer.js
define(['./ui', './playlist'], function (ui, playlist) {
    var webPlayer = function (stationID) {
        // initializing work
    };
    return webPlayer;
});
At runtime load the webplayer.js file when needed and instantiate the web player.
Have a look at BoilerplateJS, a reference architecture for JavaScript product development. Concerns such as event handling, creating self-contained components, handling interaction between them, and deciding who creates/shows/hides your UI components are taken care of, to quick-start development.