Reload module at runtime - JavaScript

I am considering porting a high-traffic, sockets-based architecture from .NET to Node.js using Socket.IO.
My current system is developed in .NET and uses some scripting languages, loaded at runtime, so I can do hot fixes if needed by issuing a reload command to the server, without having to restart the different server/dispatcher processes.
I originally built it this way so that, as I said, I could do hot fixes if needed and also keep the system available while fixes are applied transparently.
I am new to Node.js, but this is what I want to accomplish:
Load JavaScript files on demand at runtime, store them in variables somewhere, and call the scripts' functions.
What would be the best solution? How do I call a specific function from a JavaScript file loaded at runtime as a string? Can I load a JavaScript file, store it in a variable, and call its functions in the normal way, just as with require?
Thanks!

If I understood your question correctly, you can check out the vm module.
Or, if you want to be able to reload required files, you have to clear the require cache and reload the file, which is something this package can do. Check the code and you'll get the idea.
Modules are cached after the first time they are loaded. This means (among other things) that every call to require('foo') will get exactly the same object returned, if it would resolve to the same file.
Multiple calls to require('foo') may not cause the module code to be executed multiple times. This is an important feature. With it, "partially done" objects can be returned, thus allowing transitive dependencies to be loaded even when they would cause cycles.
More information can be found here.
Delete the cached module:
delete require.cache[require.resolve('./mymodule.js')]
Require it again (for example, with a require call inside a function you can invoke).
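Putting the two steps together, a minimal reload helper could look like this (the module path is just an example):
function reloadModule(modulePath) {
    // drop the cached copy, then require again to pick up the edited file
    delete require.cache[require.resolve(modulePath)];
    return require(modulePath);
}

var mymodule = reloadModule('./mymodule.js');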
Update:
Someone I know is implementing a similar approach. You can find the code here.

You can have a look at my module-invalidate module that allows you to invalidate a required module. The module will then be automatically reloaded on further access.
Example:
module ./myModule.js
module.invalidable = true;
var count = 0;
exports.count = function() {
    return count++;
};
main module ./index.js
require('module-invalidate');
var myModule = require('./myModule.js');
console.log( myModule.count() ); // 0
console.log( myModule.count() ); // 1
mySystem.on('hot-reload-requested', function() {
    module.invalidateByPath('./myModule.js'); // or module.constructor.invalidateByExports(myModule);
    console.log( myModule.count() ); // 0
    console.log( myModule.count() ); // 1
});

Related

How to check if the current file is the main one in javascript (iOS Instruments UI Automation)

In Python there is a neat thing like this:
if __name__ == "__main__":
    runTests()
So this runs runTests() only if the file was run directly and not imported. When the file is imported, it will not run the tests.
So I want to create some kind of modules for my tests in Instruments UI Automation - but I also want to test the module itself.
Is there any global variable that I can check to see if the current file is being run as a main file or as an imported one?
Good question; unfortunately, the answer is no. If you are looking for something like Python's __name__ variable, it does not exist. As a hacky workaround that isn't really a solution to your problem (since it would require you to edit your test script before each run), you could set a global variable before your imports:
var BEING_RUN = false;
#import "path/to/jsFile.js"
with an if statement in each file
//BEING_RUN = true;
if (BEING_RUN) {
    doStuff();
}
and you could uncomment BEING_RUN = true; only on the file(s) being run. Again, this is not really a solution but it may suit your needs.
My recommendation would be to structure your test library in such a way that you don't import things from files that also contain UI tests. Put functions and other things that are used by your tests in files that get imported, and keep only the actual simulation of user interaction in your test files. I'd also suggest checking out tuneup.js (http://www.tuneupjs.org/); it makes iOS UI Automation much more pleasant.

TypeScript: using typescript-require for shared files

I use TypeScript to write my JavaScript with object-oriented programming.
I want to use the node module https://npmjs.org/package/typescript-require to require my .ts files from other files.
I want to share my files between the server and the client side (browser), and that's very important. Note that the folder /shared/ doesn't mean shared between client and server but between the game server and the web server. I use pomelo.js as a framework, that's why.
For the moment I'm not able to use the typescript-require library successfully.
I do it like this:
shared/lib/message.js
var Message = require('./../classes/Message');
module.exports = {
    getNewInstance: function(message, data, status) {
        console.log(requireTs); // global typescript-require instance
        console.log(Message);
        return new Message(message, data, status);
    }
};
This file needs Message.js to create new instances.
shared/classes/Message.ts
class Message {
    // Big stuff
}
try {
    module.exports = Message;
} catch(e) {}
At the end of the file I add this try/catch to assign the class to module.exports if it exists. (It works, but it's not really a good way to do it; I would like to do better.)
If I load the file from the browser, module.exports won't exist.
So, what I did above is working. Now if I try to use the typescript-require module, I'll change some things:
shared/lib/message.js
var Message = requireTs('./../classes/Message.ts');
I use requireTs instead of require; it's a global variable. Note that I'm now pointing at the .ts file.
shared/classes/Message.ts
export class Message {
    // Big stuff
}
// remove the compatibility script at the end
Now, if I try it like this and take a look at the server console, I see that requireTs is an object and Message is undefined in shared/lib/message.js.
I get the same result if I don't use the export keyword in Message.ts. Even if I use my little script at the end, I always get an error.
There is more: I have another class, ValidatorMessage.ts, which extends Message.ts, and it doesn't work if I use the export keyword...
Did I do something wrong? I tried several other things but nothing works; it looks like typescript-require is not able to require .ts files.
Thank you for your help.
Looking at the typescript-require library, I see it hasn't been updated for 9 months. As it includes the lib.d.ts typing central to TypeScript (and the node.d.ts typing), and as these have progressed greatly in the past 9 months (along with needed changes due to language updates), it's probably not compatible with the latest TypeScript releases (just my assumption, I may be wrong).
Sharing modules between Node and the browser is not easy with TypeScript, as they both use very different module systems (CommonJS in Node, and typically something like RequireJS in the browser). TypeScript emits code for one or the other, depending on the --module switch given. (Note: There is a Universal Module Definition (UMD) pattern some folks use, but TypeScript doesn't support this directly).
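For illustration only, a hand-rolled wrapper along the lines of that UMD pattern might look like this (a sketch, not output from typescript-require or the TypeScript compiler):
(function (root, factory) {
    if (typeof module === 'object' && module.exports) {
        module.exports = factory();            // Node / CommonJS
    } else if (typeof define === 'function' && define.amd) {
        define([], factory);                   // AMD loader in the browser
    } else {
        root.Message = factory();              // plain browser global
    }
}(this, function () {
    // the compiled class body would go here
    function Message(message, data, status) {
        this.message = message;
        this.data = data;
        this.status = status;
    }
    return Message;
}));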
What exactly are you trying to achieve? If you can say, I may be able to offer some guidance.
I am doing the same and keep having issues whichever way I try to do things... The main problems for me are:
I write my TypeScript as namespaces and components, so there is no export. With multi-file compilation you have to hack in some _exporter.ts at the end to add the export, so that your library-output.js can be imported as a module; this would require something like:
module.exports.MyRootNamespace = MyRootNamespace
If you do the above it works. However, you then run into an issue when you need to reference classes from other modules (such as MyRootNamespace1.SomeClass being referenced by MyRootNamespace2.SomeOtherClass): you can reference it, but then it will be compiled into your library-output2.js file as well, so you end up with duplicate classes if you are trying to re-use TypeScript across multiple compiled targets (like having one solution in VS and multiple projects which each have their own DLL output).
Assuming you are not happy with hacking the exports and/or duplicating your references, you can just import everything into the global scope, which is a hack but works. However, when you then decide you want to test your code (using whatever Node.js testing framework), you will need to mock certain things out, and since the dependencies for your components may not be included via a require() call (and your module may depend upon node_modules which are not really usable with global-scope hacking), it becomes difficult to satisfy dependencies and mock specific ones; it is an all-or-nothing sort of approach.
Finally, you can try to mitigate all these problems by using a TypeScript framework such as appex, which allows you to run your TypeScript directly rather than compiling it to JS first. While it seems very good up front, it is VERY hard to debug compilation errors. This is currently my preferred way, but I have an issue where my TypeScript compiles fine via tsc yet blows up with a max-stack-size exception in appex, and I am at the mercy of the project maintainer to fix it (I was not able to find the underlying issue). There are not many projects of this sort out there, but they do make the issue of compiling at the module/file level a moot point.
Ultimately I have had nothing but problems trying to wrestle with TypeScript to get it to work in a way which is maintainable and testable. I am also trying to re-use some of the TypeScript components on the client side, but if you go down the npm-hack route to get your modules included, you then have to make sure your client side uses a require-compatible resource/package loader. As much as I would love to just use TypeScript in my client and my server projects, it just does not seem to want to work in a nice way.
Solution here:
Inheritance TypeScript with exported class and modules
Finally I don't use typescript-require but typescript.api instead; it works well. (You have to load lib.d.ts if you use it, or else you'll get some errors in the console.)
I don't have a solution for the browser yet (because of the export keyword I get some errors client side). I think adding an exports global variable would avoid errors like this.
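Something as simple as this, loaded in the browser before the compiled files, is what I mean (just a sketch, untested):
// give the CommonJS-style compiled output a global exports/module to write to in the browser
window.exports = window.exports || {};
window.module = window.module || { exports: window.exports };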
Thank you for your help Bill.

Why do concatenated RequireJS AMD modules need a loader?

We love RequireJS and AMD during development, where we can edit a module, hit reload in our browser, and immediately see the result. But when it comes time to concatenate our modules into a single file for production deployment, there apparently has to be an AMD loader still present, whether that loader is RequireJS itself or its smaller partner “almond” as explained here:
http://requirejs.org/docs/faq-optimization.html#wrap
My confusion is: why is a loader necessary at all? Unless you have very unusual circumstances that make it necessary for you to make require() calls inside of your modules, it would appear that a series of AMD modules could be concatenated without a loader present at all. The simplest possible example would be a pair of modules like the following.
ModA.js:
define([], function() {
    return {a: 1};
});
ModB.js:
define(['ModA'], function(A) {
    return {b: 2};
});
Given these two modules, it seems that a concatenator could simply produce the following text, and not burden the production server or browser with the extra bandwidth or computation required by either RequireJS or Almond.
I imagine a concatenator that produces (and I am using chevron-quotes «,» to show where the snippets from the two modules above have been inserted):
(function() {
    var ModA = «function() {
        return {a: 1};
    }»();
    var ModB = «function(A) {
        return {b: 2};
    }»(ModA);
    return ModB;
})();
This, so far as I can see, would correctly reproduce the semantics of AMD, with a minimum of extraneous glue JavaScript. Is there such a concatenator available? If not, would I be a fool for thinking that I should write one — are there really very few code bases that consist of simple and clean modules written with define() and that never need further require() calls inside that kick off later asynchronous fetches of code?
An AMD optimiser has the scope to optimise more than the number of files to be downloaded, it can also optimise the number of modules loaded in memory.
For example, if you have 10 modules and can optimise them to 1 file, then you have saved yourself 9 downloads.
If Page1 uses all 10 modules then that's great. But what if Page2 only uses 1? An AMD loader can delay the execution of the 'factory function' until a module is require'd. Therefore, Page2 only triggers a single 'factory function' to execute.
If each module consumes 100kb of memory upon being require'd, then an AMD framework that has runtime optimisation will also save us 900kb of memory on Page2.
An example of this could be an 'About Box'-style dialog, where its execution is delayed until the last second because it won't be accessed in 99% of cases. E.g. (in loose jQuery syntax):
aboutBoxBtn.click(function () {
    require(['aboutBox'], function (aboutBox) {
        aboutBox.show();
    });
});
You save the expense of creating the JS objects and DOM associated with the 'About Box' until you are sure it's necessary.
For more info, see Delay executing defines until first require for requirejs's take on this.
The only real benefit is if you use modules across sections of the site, so that there is a benefit to caching modules independently.
I had the same need, so I created a simple AMD "compiler" for that purpose that does just that. You can get it at https://github.com/amitayh/amd-compiler
Please note that it has many features missing, but it does the job (at least for me). Feel free to contribute to the codebase.
In case you compile your code with require.js into a single large file for production, you can use almond.js to completely replace require.
Almond only handles module reference management, not the loading itself, which is no longer needed.
Be careful of the restrictions almond imposes in order to work.
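For reference, the r.js build profile for such a single-file build with almond baked in looks roughly like this (paths are illustrative):
// build.js profile passed to the optimizer, e.g. node r.js -o build.js
({
    baseUrl: ".",
    name: "path/to/almond",    // include almond as the tiny loader shim
    include: ["main"],         // your application entry module
    out: "main-built.js",
    wrap: true                 // wrap in an IIFE so require/define don't leak globally
})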
There is no reason why there couldn't be a build tool such as the one you propose.
The last time* I looked at the optimizer's output, it converted the modules to explicitly named modules, and then concatenated those together. It relied on require itself to make sure that the factory functions were called in the right order, and that the proper module objects were passed around. To build a tool like you want, you would have to explicitly linearize the modules -- not impossible, but a lot more work. That's probably why it hasn't been done.
I believe** that the optimizer has a feature to automatically include require itself (or almond) into the built file, so that you only have to have one download. That would be larger than the output of the build tool you want, but otherwise the same.
If there were a build tool that produced the kind of output you're asking for, it would have to be more careful about synchronous require calls, the use of exports instead of return, and any other CommonJS compatibility features.
*That was a few years ago. 2010, I think.
**But can't seem to find it right now.

Patterns or techniques for ensuring script availability?

As a project I have been working on has grown, so has the frequency of situations where not all scripts on a page are available when other code tries to access them. Though this happens most often after code is updated (i.e. not cached), it has come up more and more in testing, when it never used to happen.
I've addressed this, partially, by using a function that waits for a module to become available (see this question), and this mostly addresses the concern, but I'm not totally thrilled with the implementation and am looking for a more industrial-strength pattern to deal with this. These are the possible solutions I've come up with:
1) Load scripts on demand with something like ensure - not ideal. It requires actual script-name dependency information to be included in each script, not just module/object names. You still have to take some action before using a resource to ensure it's available.
2) Manage script loading order. Even if this would work (I don't think that simply putting script A before script B guarantees it will be available, since they can be loaded concurrently), it would be a pain, since you don't know a dependency until you've loaded the thing that depends on it. It would require a lot of work to set up on a site that has lots of pages using different resources (and I have no intention of loading everything used anywhere on the site on every page).
3) Wait for everything to be loaded on a given page before allowing user interaction. Far from ideal, for obvious reasons. It still doesn't address dependencies that occur in initialization code.
4) Expand upon my current solution. It currently works like this (pseudocode, but it shows the basic logic):
// Depends on Module2
Module1 = (function () {
    var self = {};

    // waitFor waits until a global named 'dependency' is available, then calls the callback
    self.initialized = false;
    self.init = function() {
        waitFor('Module2', function() {
            self.initialized = true;
        });
    };

    // waitForInitialization sets a callback to run once self.initialized === true
    // this function requires the dependency
    self.func = self.waitForInitialization(function() {
        Module2.doStuff();
    });

    // UI-initiated function requires the dependency
    self.uiFunc = function() {
        if (!self.initialized) {
            showPleaseWaitDialog();
            self.waitForInitialization(function() {
                dismissPleaseWaitDialog();
                self.uiFuncImpl();
            });
        } else {
            self.uiFuncImpl();
        }
    };

    self.uiFuncImpl = function() {
        Module2.doStuff();
    };

    return self;
}());
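(For reference, waitFor is essentially a poll-for-a-global helper; a simplified sketch of what I mean:)
function waitFor(name, callback, interval) {
    // call back as soon as window[name] exists, polling until then
    if (window[name] !== undefined) {
        callback();
        return;
    }
    setTimeout(function () {
        waitFor(name, callback, interval);
    }, interval || 50);
}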
I can think of ways to create a prototype that would deal with the dependency issue more transparently than my code above, and fully intend to do that if I have to, but is this truly the best solution? What do others do? What are considered best practices?
2) Script Load Order - scripts will always be executed in the order they are placed in the DOM, so while they might load concurrently, they will execute in an orderly fashion (I faced this same problem on a large project I worked on).
?) If script load order is not an ideal solution for you, you could look into the Promise model.
??) If Promises and load order won't work for you, you could listen for a namespaced event that each module fires when it is initialized; that way, if the object already exists it can be used, and if not, its initialization can be listened for.
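A rough sketch of that event-based approach (the 'module:ready' event name and the global lookup are illustrative, not part of any framework):
// In Module2, once it has finished initializing:
window.Module2 = { doStuff: function () { /* ... */ } };
document.dispatchEvent(new CustomEvent('module:ready', { detail: { name: 'Module2' } }));

// In code that depends on Module2:
function whenModuleReady(name, callback) {
    if (window[name]) {                        // already loaded and initialized
        callback(window[name]);
        return;
    }
    document.addEventListener('module:ready', function handler(e) {
        if (e.detail.name === name) {
            document.removeEventListener('module:ready', handler);
            callback(window[name]);
        }
    });
}

whenModuleReady('Module2', function (Module2) {
    Module2.doStuff();
});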

Is it possible to use requirejs when modules may have to be removed to conserve memory

We develop an application in an embedded environment. It is a high-level computing environment with a complete web browser on top of a BusyBox Linux system. The only catch is that the system has a limited amount of memory.
Our application is built in JavaScript, runs inside a WebKit-based web browser, and consists of a lot of JavaScript modules that are loaded in sequence (which is not very efficient).
Some modules provide common functionality that is used by several other modules. We are in the process of replacing our current JavaScript loader with RequireJS, but there is one specific need we have to address first.
Is it possible to unload a module once it has been loaded using RequireJS? Assume that we dynamically load a module using:
require(["somemodule.js"], function(m) { m.run(); } );
That works well for loading and running 'somemodule', and also resolves all of its dependencies; the RequireJS framework will store a reference to 'somemodule' for future requests.
If we at some point need to reclaim memory, e.g. to be able to load and run an unbounded number of modules, we have to start removing some of them after a while. Is that possible with RequireJS without altering its internal implementation?
Has anyone dealt with this kind of problem before? Most single-page JS apps run in a web browser on a desktop PC, where memory usage usually is not a major concern.
RequireJS does not have a built-in unload feature, but it could be added perhaps as an additional part you could build into it. If you would like to have that feature, feel free to propose it in the mailing list or as a GitHub issue.
If you wanted to experiment to see if it helps your situation, what you need to do is the following:
1) Remove the defined module from the RequireJS module cache. If you are not using the multiversion support, you can do something like:
var context = require.s.contexts['_'];
delete context.defined[moduleName];
delete context.specified[moduleName];
delete context.loaded[moduleName];
2) Then you can try removing the script tag to see if that helps:
var scripts = document.getElementsByTagName('script');
for (var i = scripts.length - 1; i >= 0; i--) {
    var script = scripts[i];
    if (script.getAttribute('data-requiremodule') === moduleName) {
        script.parentNode.removeChild(script);
        break;
    }
}
Note that the module may not be garbage collected if another module holds on to it via the closure function(){} that defines that other module. That other module would need to be removed too.
You can try to limit that impact by not passing the module in as a function argument, but instead calling require("somemodule") inside the function definition whenever you want to get hold of a dependent module, and not holding on to that require return value for too long.
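A sketch of that pattern (module and function names are just examples; note that the single-argument form of require works synchronously only if the module has already been loaded):
define(["require"], function (require) {
    return {
        run: function () {
            // fetch the dependency on demand instead of capturing it in the closure;
            // the local reference goes away when run() returns
            var some = require("somemodule");
            some.doWork();
        }
    };
});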
Also, in your example above, for modules that use require.def to define themselves, it should look like this (without the .js suffix):
require(["somemodule"], function(m) { m.run(); } );
Try this: require.undef(moduleName)
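require.undef is the built-in way in newer RequireJS releases; a minimal usage sketch:
// forget the module, then require it again to get a freshly loaded copy
require.undef('somemodule');
require(['somemodule'], function (m) {
    m.run();
});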
