I am writing some code in JavaScript + Flow and would like to keep it as pure as possible, which also means skipping globals such as window or document and passing them in as function arguments instead. But it's quite easy to forget a stray document reference here or there. Is it possible to ban those globals somehow, allowing them only in the top-level file? So far I am doing this at the top of most of my files:
const window = undefined
const document = undefined
This way only instances passed in as arguments work:
// This works
function foo(document: Document) {
    document.doThisOrThat();
}

// This triggers a typecheck error, as intended
function bar() {
    document.doThisOrThat();
}
Are there other solutions? (I would love a whitelist approach, disallowing all globals except those whitelisted.)
You can set Flow to ignore the flowlibs by adding no_flowlib=true in the [options] section of the flowconfig.
From there, you can make your own libs folder and only include the library definitions you want. To make them globally available, add the path to your libs folder in the [libs] section of your flowconfig.
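For example, the relevant parts of the .flowconfig might look roughly like this (flow-libs/ is just a folder name chosen for illustration):

[options]
no_flowlib=true

[libs]
flow-libs/

Whatever declaration files you then put in flow-libs/ (for instance a trimmed-down copy of Flow's dom.js with the global document and window declarations removed) become the only ambient globals Flow knows about.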
I'm writing a bootstrapped Firefox extension, using Firefox Developer Edition v36.2a.
I have two problems that seem related, which I don't understand. The first one is a mere annoyance:
my bootstrap.js Components.utils.imports a js file which loads my application (it creates a namespace object and imports all other js files as properties on the namespace). All other files include this same namespace file and thus have access to each other. They even store state, and everything works much like how node caches modules, e.g. it seems there is only one object of every class, which is shared throughout the whole application. Now, this happens:
my namespace file does:
namespace.console = Components.utils.import( 'you know the one/console.jsm' )
namespace.Services = Components.utils.import( 'you know the one/Services.jsm')
namespace.Cu = Components.utils
Now in another file that imports this namespace file, I get the namespace object, but namespace.Cu is undefined. console.log( namespace ) is fine, however, and shows me Cu, which I can expand to see all its properties and so on. All other things (console, Services, my own classes) are fine, but trying Cc, Ci etc. from Components -> undefined.
In another place in my app I have a function (in file A) which returns an array of nsIDOMWindows. A function in file B calls it, and when the value arrives, it's a similar story: in the console everything looks fine, an Array with ChromeWindows that I can inspect. But it's no longer actually an array: it is of type object and array[ 0 ] is undefined. If I put both classes in the same file, they're fine.
For further confusion, I think I have already used this method in another file and all was fine:
// A.js
//
function A()
{
    let b = new B()
    let c = new C()

    let windows = b.windowList()   // works fine, yields an actual valid array
                                   // with valid windows inside

    c.doSomething()                // broken, see below
    c.doSomething( windows )       // passing it as a parameter
                                   // doesn't help, still broken
}
// B.js
//
function B()
{
    this.windowList = function windowList()
    {
        let result = []

        // get open windows from nsIWindowMediator
        // foreach...
        //
        result.push( nsIDOMWindow )

        console.log( result )   // a proper Array -> it's completely valid here
        return result
    }
}
// C.js
//
function C()
{
    this.b = new B()

    this.doSomething = function doSomething( windows )
    {
        if( ! windows )
            windows = this.b.windowList()

        console.log( windows )                  // in the console shows:
                                                // Array[ ChromeWindow -> about:home ]
                                                // I can inspect all its properties, looks ok
        console.log( typeof windows )           // object
        console.log( windows.constructor.name ) // undefined
        console.log( windows[ 0 ] )             // undefined

        // looping over the object properties shows that it does
        // have 1 property called '0' which points at undefined
        // note: there was only one open window in the array.
    }
}
// Note: the order in which I use Components.utils.import on these is B, C, A
I also wondered whether this had to do with security measures in Gecko, but no wrapper objects are to be seen anywhere, and as far as I know they should only shield content code from chrome code (this is all chrome code).
It's the kind of bug that frustrates me, because I can't think of one sensible reason why the return value of a function shouldn't be the same thing on both sides of the call.
Luckily there is a workaround: I will have my makefile concatenate all my js files and be done with Components.utils once and for all. It seems the less of the Mozilla API I have to use, the happier and more productive I'll be.
I'm seeing errors from Mozilla code as well. I just had an error saying "shouldLog is not a function" in Console.jsm. That function is clearly defined higher up in that file. The line that throws the error is in an anonymous function that is being returned, which should inherit the scope without a doubt, but it doesn't. It might be that something is mangling return values and that this is related.
I filed a bug on bugzilla with a sample addon attached which demonstrates the first of the problems mentioned above.
Update:
I'm sorry, I confused two things. I just figured out what happened in the first case of Cu not being available: I loaded the file in which the problem happens before adding Cu to my namespace, and since I did const { console, Services, Cu } = namespace in the global scope, that destructuring was evaluated before the property was created. The confusing part is that the console keeps references, not copies, of the objects it shows you (an unfortunate design choice if you ask me), so logging the namespace before the code in question gives you a view of its state later on, which I failed to take into consideration.
That still doesn't explain how one function can return a perfectly sound value to a receiving function that receives something else, or how a function declared in Console.jsm doesn't seem to exist any more in a function that inherits its scope.
All in all, the first problem hasn't got anything to do with the others. I can't, however, create a small reproducible addon to illustrate the other two. If I think of a way, I'll upload it.
Btw: the console pitfall can easily be seen (put the following in a JavaScript scratchpad):
let a = { some: "value" }
console.log( a )
delete a.some
// output: Object { }
For debugging purposes (usually its main purpose) the console has its limits, and it is usually better to set a breakpoint in a debugger or use JSON.stringify.
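For example (a small sketch), stringifying at log time captures a snapshot instead of a live reference:

let a = { some: "value" }
console.log( JSON.stringify( a ) ) // {"some":"value"} -- printed by value, unaffected by the delete below
delete a.some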
I think I've narrowed all the aforementioned problems down to unloading incorrectly. I had event listeners that I didn't remove, and when they fired, objects or scopes could be invalidated.
Using the Extension Auto-Installer made these problems somewhat more severe, because reloading the addon when it hasn't properly unloaded leaves the addon code in an unreliable state.
The "shouldLog is not a function" error was because I thought I had to unload every module I had loaded using Components.utils.unload, so I unloaded Console.jsm.
After fixing that, plus wrapping all the entry points into my software in try-catch blocks, things now run smoother. I now rarely have to uninstall the addon and restart Firefox after updating the code.
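The entry-point wrapping amounts to something like this (a sketch; onEntryPoint and handleCommand are made-up names):

function onEntryPoint( event )
{
    try
    {
        handleCommand( event )                  // hypothetical application code
    }
    catch( error )
    {
        Components.utils.reportError( error )   // surface the error instead of failing silently
    }
}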
There's nothing like a few fun days of debugging...
I didn't read it all, but:
I think the first issue is that you are importing it wrong.
If you want to bring the exported var from Cu.import into a certain namespace, you should do it like this:
Cu.import('resource://gre/modules/ctypes.jsm', namespace);
No need for the var blah = Cu.import
Console.jsm and Services.jsm each export a var named after the module.
You can even just do Cu.import('resource://gre/modules/ctypes.jsm') and start using it like ctypes.blah.
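A small sketch of both forms (chrome-privileged code assumed):

var namespace = {};
Components.utils.import( 'resource://gre/modules/Services.jsm', namespace );   // exported symbols are copied onto namespace
namespace.Services.console.logStringMessage( 'imported onto a namespace object' );

Components.utils.import( 'resource://gre/modules/Services.jsm' );              // exported symbols land in the current scope
Services.console.logStringMessage( 'imported into the current scope' );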
Also to get access to Cu do this:
var {Cu: utils, Cr: results} = Components
Next, I think that if you change the module in one scope, it will not change in other scopes until you do Cu.import again in that scope.
I have one js file. I load it from another javascript file using the eval() function. I have seen that eval is slow and has some other limitations. Since I need to store my JS file object in a cache and use it any time I need it after the application starts, I don't want to do eval() every time.
Is there any way to do this in a simple way?
var evalObj;
if (evalObj) {
    console.log('eval object already evaluated');
    _myfunctionInJSFile_(layouts.FormatDate(startTime), threadName, level, categoryName, message);
}
else {
    evalObj = eval(fs.readFileSync('./myJSFile', 'utf8'));
    console.log('re evaluating object ..');
    _myfunctionInJSFile_(layouts.FormatDate(startTime), threadName, level, message);
}
myJSFile
var _sigmaAlarmHandler_ = function(args)
{
    var args = Array.prototype.slice.call(arguments);
    args.unshift();
    console.log('Alarm : ', args);
}
The conditional eval is not working either.
In node.js you can simply require your js-file:
var obj = require('./myJSFile');
obj.foo();
./myJSFile.js:
exports.foo = function() {
console.log('foo');
}
This file becomes a module with the exported functions that you need.
It loads once; then every require reuses the already loaded module.
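For example, using the myJSFile module above:

var obj1 = require('./myJSFile');
var obj2 = require('./myJSFile');
console.log(obj1 === obj2); // true -- the file was evaluated once; the second require comes from the cache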
If it is not commonjs-compliant (i.e. using module.exports will not work), then you can run it in its own vm:
var vm = require('vm');
vm.runInNewContext(jscode,{/*globalvars*/});
where the second parameter is an object with global vars made available in the context in which the jscode is run. So if the second param is, say, {a:1,b:"foo"} then your jscode will run with the global variable a set to 1 and the global variable b set to "foo".
The jscode itself is a string that you load from a file or elsewhere.
Think of vm.runInNewContext() as "practice safe eval". Well, relatively safe; you can still do some dangerous stuff if you pass in particular vars, like process or file objects, etc.
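A tiny sketch of how the sandbox behaves (the value of the last expression is returned):

var vm = require('vm');
var result = vm.runInNewContext('a + b', { a: 1, b: "foo" });
console.log(result); // "1foo" -- a and b existed as globals inside the new context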
I used this for the declarative part of cansecurity http://github.com/deitch/cansecurity for nodejs
You can view the sample in the file lib/declarative.js
Here is the API for vm http://nodejs.org/api/vm.html
There are options to run in the same context, etc. But that is very risky.
When you actually run the code, using your example above:
_myfunctionInJSFile_(layouts.FormatDate(startTime), threadName, level,message);
you are looking to pass in 4 params (startTime, threadName, level, message) and execute the function. The issue is that you cannot run the function in the current context; you need the function to be defined and run in the file. So you should have something like:
vm.runInNewContext(jscode,{startTime:layouts.FormatDate(startTime),threadName:threadName,level:level,message:message});
And then the jscode should look like:
function _myfunctionInJSFile_(startTime, threadName, level, message) {
    // do whatever you need to do
}
// EXECUTE IT - the above vars are set by the global context provided in vm.runInNewContext
_myfunctionInJSFile_(startTime, threadName, level, message);
If you prefer to define the function and have it loaded and run in this context, then just use the commonjs format.
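In that case the file would look roughly like this (a sketch, reusing the hypothetical function name from the question):

// myJSFile.js
exports._myfunctionInJSFile_ = function (startTime, threadName, level, message) {
    // do whatever you need to do
};

// in the caller
var myJSFile = require('./myJSFile');
myJSFile._myfunctionInJSFile_(layouts.FormatDate(startTime), threadName, level, message);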
I think I have found the answer for this.
Since my application runs in node.js, which uses the v8 engine, when the application starts v8 caches all the code/configuration and it can be used at any time.
Similarly, in my code I will pre-load the JS code using eval, and I will do it only once; on every subsequent call I will return only the already loaded JS code. I just need to modify my code to load it once.
The main point is that if anybody has a similar requirement in the future, they can cache their JS code using eval (thanks to the v8 engine) and use it for as long as the application is running.
I previously ran into the problem of data hiding under modularization in JavaScript. Please see the links below:
Module pattern- How to split the code for one module into different js files?
JavaScript - extract out function while keeping it private
To illustrate the problem, see the example below. My goal is to split my long js file into 2 files, but some functions need to access some private variables:
first.js:
(function(context) {
    var parentPrivate = 'parentPrivate';
})(window.myGlobalNamespace);
second.js:
(function(context) {
    this.childFunction = console.log('trying to access parent private field: ' + parentPrivate);
})(window.myGlobalNamespace.subNamspace);
Now this wouldn't work because child doesn't have access to parent. One solution is to make parentPrivate publicly visible, but that is unacceptable in my case.
Quoting @Louis, who gave an answer to one of the previous questions:
"We can't have a field that's accessible by child but not to outside
public (i.e. protected). Is there any way to achieve that?"
If you want modularization (i.e. you want the child to be coded
separately from the parent), I do not believe this is possible in
JavaScript. It would be possible to have child and parent operate in
the same closure but then this would not be modular. This is true with
or without RequireJS.
The problem is that the parent and the child are not inside the same closure. Therefore I'm thinking, does it make sense to create a library that puts files into the same closure?
Something like:
concatenator.putIntoOneClosure(["public/js/first.js", "public/js/second.js"]);
Of course we can take in more arguments to specify namespaces etc. Note that it is not the same functionality we get from RequireJS. RequireJS achieves modularization while this concatenator focuses on data hiding under the condition of modularization.
So does any of the above make sense? Or am I missing out some important points? Any thoughts are welcomed.
If you need things available in two separate files, then you can't have true privacy... however, something similar to this may work for you:
first.js:
(function(context) {
    var sharedProperties = {
        sharedProp1: "This is shared"
    };
    function alertSharedProp1() {
        alert(sharedProperties.sharedProp1);
    }
    window[context] = {
        sharedProperties: sharedProperties,
        alertSharedProp1: alertSharedProp1
    };
})("myGlobalNamespace");
second.js:
(function(parent, context) {
    // CHANGED: `this` doesn't do what you think it does here.
    var childFunction = function() {
        console.log('trying to access parent private field: ' + window.myGlobalNamespace.sharedProperties.sharedProp1);
    };
    window[parent][context] = {
        childFunction: childFunction
    };
})("myGlobalNamespace", "subNamspace");
window.myGlobalNamespace.subNamspace.childFunction();
Edit: detailed answer based on comments
What I did was to set up a source file that looked like this:
master.js
(function() {
##include: file1.js##
##include: file2.js##
}());
Then I wrote a script (in Windows scripting, in my case) that read in master.js and went through it line by line looking for the ##include: filename.js## lines. When it found such a line, it read in the include file and just dumped its contents out in place.
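A rough Node.js sketch of the same idea (my actual script was Windows scripting; the file names here come from the example above):

var fs = require('fs');

var master = fs.readFileSync('master.js', 'utf8');
var bundled = master.replace(/^\s*##include: (.+?)##\s*$/gm, function (marker, fileName) {
    return fs.readFileSync(fileName, 'utf8');   // dump the included file in place of the marker line
});
fs.writeFileSync('bundle.js', bundled);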
My particular needs were special since I was writing a browser plugin that needed to work in three different browsers and had to be wrapped up separately, yet for my own sanity I wanted separate files to work with.
I would like to break my javascript code into several .js files. Each of those .js files has code that needs to be inside $(document).ready(..), so in each file a new $(document).ready(..) block will start.
How could I call from filea.js functions declared in fileb.js (both inside a $(document).ready block) ?
If this is not possible, can you propose an alternative?
Thank you.
Edit: I would like to clarify that I want to avoid using the global scope. I was hoping for something along the lines of using named functions as handlers for the event, but I can't really see how to do it.
You can make a local variable global with
window.globalname = localname;
Remember that functions are values and can be assigned just like any other variable.
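For example (a sketch; the file and function names are made up):

// fileb.js
$(document).ready(function () {
    function formatName(name) { return name.trim(); }
    window.formatName = formatName;              // expose it for other files
});

// filea.js -- its ready handler runs after fileb.js's, as long as fileb.js is included first
$(document).ready(function () {
    console.log(window.formatName('  Ada  '));   // "Ada"
});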
You really can't get away from declaring a global, but creating a single global isn't so bad: you can then namespace all your functions under it.
Put this in something like a main.js file, so you can keep your shared functions here:
// name this something unique to your page/site/app
var MYAPP = {};
// now we can attach functions to it
MYAPP.funcA = function() { /* ... */ };
MYAPP.funcB = function() { /* ... */ };
Then, in each of your anonymous functions you can access MYAPP.funcA(), MYAPP.funcB(), etc. You can also modify MYAPP on the fly to add functions, properties, etc.
In the end you have a single global (darn it!), but if you've named it properly you are creating a global namespace where your app code can safely reside.
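For example, inside each file's own ready block (a sketch, assuming main.js is loaded first):

// filea.js
$(document).ready(function () {
    MYAPP.funcA();                   // defined in main.js
    MYAPP.funcC = function () {};    // adding to the namespace on the fly
});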
As long as the files are loaded in order (i.e. the functions in fileb.js are defined before filea.js calls them), you should be fine.
In order to make sure files load their dependencies first, you could consider require.js or head.js
I've had luck with the latter: http://headjs.com/
I started to read several tutorials about RequireJS. In none of them was the "define" keyword explained satisfactorily for me. Could someone help me with the following :
define(
["Models/Person", "Utils/random", "jquery"],
function (Person, randomUtility, $) {..}
)
What is "define"? Is define a function with an array and an anonymous function inside of it? Or is it something else? Can someone give me more information about this kind of definitions?
Addition: Thank you nnnnnn and pradeek for your answers. Here in Europe it was 2:30 in the night when I posted the question; maybe that's why I didn't recognize that it is a simple function call.
define is not specific to RequireJS; it is part of the AMD specification. Burke will note that RequireJS doesn't implement AMD exactly as specified, since AMD didn't really keep browsers in mind.
define doesn't have an anonymous function in it as such. define is a method made available to AMD-based JavaScript files for declaring their modules. Libraries like RequireJS make it available to you. The specific implementation probably isn't valuable to you, so I'll go over the form you provided, as it's the most common way to declare a module.
define( [array], object );
Array is a list of modules that this module depends on. There is normally a one-to-one relationship between modules and files: with this form you cannot have multiple modules in a file, nor spread one module across multiple files.
Object is the module you are defining. This can be anything, a struct, or a function that returns a struct. Read the docs on RequireJS for more details.
If object is a function, the arguments passed to that function are the modules listed as dependencies in the first define argument. It is also important to note that when you pass a function as object, it will only run one time. The methods or properties created by that one instantiation can, however, be accessed at any time by other modules that list this module as a dependency.
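A small sketch using the module paths from the question (the module bodies are made up):

// Utils/random.js -- the factory runs once; whatever it returns is the module
define([], function () {
    return {
        between: function (min, max) {
            return min + Math.floor(Math.random() * (max - min + 1));
        }
    };
});

// Models/Person.js -- receives the already-created Utils/random module as an argument
define(["Utils/random"], function (randomUtility) {
    function Person(name) {
        this.name = name;
        this.luckyNumber = randomUtility.between(1, 10);
    }
    return Person;
});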
Good luck, I recommend playing around with this and reading the docs when things don't make sense. RequireJS docs are great as a quick start on how AMD modules work.
I found define defined near the bottom of require.js (I too was wondering what kind of a thing this define word is, and this is the answer I was looking for):
/**
 * The function that handles definitions of modules. Differs from
 * require() in that a string for the module should be the first argument,
 * and the function to execute after dependencies are loaded should
 * return a value to define the module corresponding to the first argument's
 * name.
 */
define = function (name, deps, callback) {
    var node, context;
    //Allow for anonymous modules
    if (typeof name !== 'string') {
        //Adjust args appropriately
        callback = deps;
        deps = name;
        name = null;
    }
    //This module may not have dependencies
    if (!isArray(deps)) {
        callback = deps;
        deps = null;
    }
    //If no name, and callback is a function, then figure out if it a
    //CommonJS thing with dependencies.
    if (!deps && isFunction(callback)) {
        deps = [];
        //Remove comments from the callback string,
        //look for require calls, and pull them into the dependencies,
        //but only if there are function args.
        if (callback.length) {
            callback
                .toString()
                .replace(commentRegExp, '')
                .replace(cjsRequireRegExp, function (match, dep) {
                    deps.push(dep);
                });
            //May be a CommonJS thing even without require calls, but still
            //could use exports, and module. Avoid doing exports and module
            //work though if it just needs require.
            //REQUIRES the function to expect the CommonJS variables in the
            //order listed below.
            deps = (callback.length === 1 ? ['require'] : ['require', 'exports', 'module']).concat(deps);
        }
    }
    //If in IE 6-8 and hit an anonymous define() call, do the interactive
    //work.
    if (useInteractive) {
        node = currentlyAddingScript || getInteractiveScript();
        if (node) {
            if (!name) {
                name = node.getAttribute('data-requiremodule');
            }
            context = contexts[node.getAttribute('data-requirecontext')];
        }
    }
    //Always save off evaluating the def call until the script onload handler.
    //This allows multiple modules to be in a file without prematurely
    //tracing dependencies, and allows for anonymous module support,
    //where the module name is not known until the script onload event
    //occurs. If no context, use the global queue, and get it processed
    //in the onscript load callback.
    (context ? context.defQueue : globalDefQueue).push([name, deps, callback]);
};
I found the page Why AMD? very helpful. To summarize: the AMD specification helps overcome the "write a bunch of script tags with implicit dependencies that you have to manually order" problem. It loads the dependencies before executing the required functions, similar to import in other programming languages like Python. AMD also prevents global namespace pollution. Check the section "It is an improvement over the web's current 'globals and script tags' because".
I think the RequireJs API specification sums it up pretty well:
If the module has dependencies, the first argument should be an array of dependency names, and the second argument should be a definition function. The function will be called to define the module once all dependencies have loaded. The function should return an object that defines the module.
They list examples of all the various syntactic forms of define.