I have a simple example:
Javascript:
function testBut(b){
alert("!");
}
HTML:
<button onclick="testBut(this)">Test</button>
Now I run my .js script through the Google Closure Compiler (from the command line), and I want to keep the testBut function intact.
I read in many places that I have to use the --externs option and define another file with the names of the exported functions; in this case it would hold just:
function testBut(b){}
And additionally I need to add this odd line to my JS code:
window['testBut']=testBut;
So now, my questions:
Is Closure's system really so clumsy, with two bug-prone steps required just to keep a desired function?
Is there no "@..." annotation that would simply serve the same purpose? I tried @export, but it requires the --generate_exports option, and it still generates a similarly ugly and useless goog.a("testBut", testBut); in the target code (I tried the same code, and those goog.a(...) calls seem simply useless), and it still requires the externs file.
Ideally I'm looking for a simple annotation or command-line switch that says "don't remove/rename this function": as simple as possible, no added code, no extra file.
Thanks
Don't confuse externs and exports.
Externs - provide type information and symbol names when using other code that will NOT be compiled along with your source.
Exports - Make your symbols, properties or functions available to OTHER code that will not be compiled.
So in your simple example, you need:
function testBut(b){
alert("!");
}
window["testBut"] = testBut;
However, this can be simplified even further if testBut is only used by external calls:
window["testBut"] = function(b) {
alert("!");
};
Why not always use the second notation? Because internal usage (calls within the compiled code) would then have to use the full quoted syntax which blocks type checking and reduces compression.
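A runnable sketch of the mechanism (hedged: `window` is stubbed with a plain object here so the snippet runs outside a browser): the string literal in the bracket access is left verbatim by the compiler, while the plain internal call stays renamable and type-checked.

```javascript
// Stand-in for the browser global so this sketch runs anywhere:
var window = {};

function testBut(b) {
  return "!";
}

// Export: string literals are never rewritten by the compiler, so the
// name "testBut" survives renaming and onclick="testBut(this)" still works.
window['testBut'] = testBut;

// Internal (compiled) code keeps calling the plain, renamable name:
testBut(null);
```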
Why not use a JSDoc Annotation for Exports?
This question comes up a lot.
For one, there isn't global consensus on how exports should be done. There are different ways to accomplish an export.
Also, exporting symbols and functions by definition blocks dead-code elimination. Take the case of a library author. The author wishes to compile his library exporting all public symbols. However, doing so means that when other users include his library in a compilation, no dead-code elimination occurs. This negates one of the primary advantages of ADVANCED_OPTIMIZATIONS.
Library authors are encouraged to provide their exports at the bottom of the file or in a separate file so that they can be excluded by other users.
It has been suggested before to provide a command-line argument to control exporting based on a namespace, i.e. something like --export mynamespace.*. However, no author has yet tackled that issue, and it is not a trivial change.
I'm trying to find a good, proper pattern to handle a circular module dependency in Python. Usually the solution is to remove it (through refactoring); however, in this particular case we would really like to keep the functionality that requires the circular import.
EDIT: According to answers below, the usual angle of attack for this kind of issue would be a refactor. However, for the sake of this question, assume that is not an option (for whatever reason).
The problem:
The logging module requires the configuration module for some of its configuration data. However, for some of the configuration functions I would really like to use the custom logging functions that are defined in the logging module. Obviously, importing the logging module in configuration raises an error.
The possible solutions we can think of:
Don't do it. As I said before, this is not a good option, unless all the other possibilities are worse.
Monkey-patch the module. This doesn't sound too bad: load the logging module dynamically into configuration after the initial import, and before any of its functions are actually used. This implies defining global, per-module variables, though.
Dependency injection. I've read about and run into dependency-injection alternatives (particularly in the Java Enterprise space), and they remove some of this headache; however, they may be too complicated to use and manage, which is something we'd like to avoid. I'm not aware of what the landscape for this looks like in Python, though.
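The monkey-patch/dynamic-load option above boils down to a lazy import that runs on first call instead of at import time, which is what breaks the cycle. A minimal runnable sketch, using the stdlib json module as a stand-in for the circularly dependent module:

```python
# Lazy import: the dependency is resolved on first use, long after both
# modules have finished initializing, so the import cycle never triggers.
_json = None

def dumps(obj):
    global _json
    if _json is None:
        import json  # stand-in for the circular dependency
        _json = json
    return _json.dumps(obj)

print(dumps({"a": 1}))
```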
What is a good way to enable this functionality?
Thanks very much!
As already said, there's probably some refactoring needed. Going by the names, it might be fine for a logging module to use configuration; but when you think about what belongs in a configuration module (configuration parameters), the question arises: why is that configuration code logging at all?
Chances are that the parts of the code under configuration that use logging do not belong in the configuration module: it seems to be doing some kind of processing and logging either results or errors.
Without inside knowledge, and using only common sense, a "configuration" module should be something simple without much processing, and it should be a leaf in the import tree.
Hope it helps!
Will this work for you?
# MODULE a (file a.py)
import b
HELLO = "Hello"
# MODULE b (file b.py)
try:
    import a
    # All the code for b goes here, for example:
    print("b done", a.HELLO)
except:
    if hasattr(a, 'HELLO'):
        raise
    else:
        pass
Now I can do an import b. When the circular import (caused by the import b statement in a) throws an exception, it gets caught and discarded. Of course, your entire module b will have to be indented one extra block level, and you have to have inside knowledge of where the variable HELLO is declared in a.
If you don't want to modify b.py by inserting the try:except: logic, you can move the whole b source to a new file, call it c.py, and make a simple file b.py like this:
# new Module b.py
try:
    from c import *
    print("b done", a.HELLO)
except:
    if hasattr(a, "HELLO"):
        raise
    else:
        pass
# The c.py file is now a copy of the original b.py:
import a
# All the code from the original b, for example:
print("b done", a.HELLO)
This will import the entire namespace from c to b, and paper over the circular import as well.
I realize this is gross, so don't tell anyone about it.
A cyclic module dependency is usually a code smell.
It indicates that part of the code should be refactored so that it is external to both modules.
So if I'm reading your use case right, logging accesses configuration to get configuration data. However, configuration has some functions that, when called, require that stuff from logging be imported in configuration.
If that is the case (that is, configuration doesn't really need logging until you start calling functions), the answer is simple: in configuration, place all the imports from logging at the bottom of the file, after all the class, function and constant definitions.
Python reads files from top to bottom: when it comes across an import statement in configuration, it runs it. At that point, configuration already exists as a module that can be imported, even though it's not fully initialized yet: it only has the attributes that were defined before the import statement ran.
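That behavior can be observed directly by hand-building a half-initialized module (a sketch: the module name configuration is hypothetical here, and the module is registered manually via sys.modules rather than by running a real file):

```python
import sys
import types

# Build a module that is "mid-initialization": registered in sys.modules,
# but carrying only the attributes defined so far.
partial = types.ModuleType("configuration")
partial.CONFIG = {"level": "DEBUG"}  # defined above the import statement
sys.modules["configuration"] = partial

# A circular `import configuration` at this point succeeds and returns the
# partial module; attributes defined later would still be missing.
import configuration
print(configuration.CONFIG["level"])
```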
I do agree with the others though, that circular imports are usually a code smell.
I'm trying to apply new documentation to legacy code that is
Undocumented
Minified (We don't have the source code)
Included in the rest of the code via window.thing = thing in the minified file, instead of using modules and exports.
Used literally everywhere. It's the base framework for the entire web application.
My main intent is to get VS Code to display some IntelliSense for this module when I create a new page/file, rather than having to copy and paste code from other pages/files before it will tell me how to use the module's methods; and even then, it doesn't tell me how they work. Although after a few months I can now write this code somewhat reliably without looking, I've had to unravel the minified file to find all the available methods and properties. I should be copying and pasting imports, not the whole damn file. We're also bringing in new developers soon, and (although my coworkers disagree) I want to have something more for them to look at than just 'follow the pattern on the other pages'.
We are using webpack and have the ability to use modules. I prefer the module pattern, but it is clear that those who came before us did not: they used window.thing to share the module between files. My coworkers prefer to 'follow the pattern already there' rather than trying to fix old code. I don't wholeheartedly disagree, but still, I need to make this as unobtrusive as possible.
webapp/documentation/main.js
import './thing.js';
/** @type {thing} */
var thing; // This does not work. `thing` is marked as any.
// So I changed the type to thingModule.
/** @type {thingModule} */
var thing; // This works.
/** @type {thingModule} */
window.thing; // This does not work.
/** @type {thingModule} */
var thing = window.thing; // Works for thing
// But does not change window.thing
However, none of those above propagate into the next file.
webapp/view/someFile.js
import '../../documentation/main.js';
/** @type {thingModule} */ var thing = window.thing;
// Cannot change below this line //
thing.addView(/*blah blah blah*/);
thing.doStuff();
This allows me to look up the properties of thing, but it changes the code slightly. Not much, but just enough that it would be frowned upon if left in committed code. Plus, if I find other modules that need similar documentation, I don't want a growing import statement that exists only for documentation.
I need to be able to include it in a single line that only provides documentation.
import '../../documentation/main.js';
// Cannot change below this line //
thing.addView(/*blah blah blah*/);
thing.doStuff();
In this case, thing is shown as ':any' instead of ':thingModule' like it needs to be.
tl;dr:
I need to assign the @typedef to window.thing and make the JSDoc definition propagate anywhere the documentation is imported, and I need to not change the actual declaration of window.thing.
In the tutorial mentioned here, the namespace provided by the module is:
goog.provide('tutorial.notepad.Note');
But I am wondering why not this:
goog.provide('tutorial.notepad');
Since, according to the rule mentioned below:
tutorial = tutorial || {};
tutorial.notepad = tutorial.notepad || {};
tutorial.notepad.Note = tutorial.notepad.Note || {};
If we just provided:
goog.provide('tutorial.notepad');
then we would already have:
tutorial = tutorial || {};
tutorial.notepad = tutorial.notepad || {};
to which we could have added property Note
tutorial.notepad.Note = function() {};
Hence, my question is:
Why not just declare goog.provide('tutorial.notepad') and then use that namespace to hold the top-level classes? Instead, it's recommended to use goog.provide('tutorial.notepad.Note') for each class, which feels redundant to me.
Having goog.provide('tutorial.notepad'); creates an entry in the "tree of dependencies" for that namespace, but it does not create an entry for the class tutorial.notepad.Note. If you manually create tutorial.notepad.Note as in your example code then you don't activate closure-compiler mechanisms to include the class tutorial.notepad.Note into the tree of namespace dependencies that closure-compiler uses.
The reason is that goog.provide is used by closure compiler to set up the tree of dependencies used to figure out what namespaces to load and in what order.
By not using goog.provide, but mimicking its effects with the code you show, the compiler doesn't learn about the class Note and how it fits into the tree of namespaces and classes and their dependencies.
There are two ways to run closure-compiler-based code: compiled and uncompiled. Each of these builds and uses the tree of namespace dependencies differently:
UNCOMPILED One of the great things about closure-compiler is that you can run all your code uncompiled. A necessary step in that process is to use depswriter.py, a Python program which reads all your source files (looking for goog.provide and goog.require calls) and produces a file deps.js. That deps.js file is the embodiment of the namespace dependency tree. Here is one sample line (of 333) from my project's deps.js file:
goog.addDependency('../../../src/lab/app/ViewPanner.js',
['myphysicslab.lab.app.ViewPanner'], ['myphysicslab.lab.util.DoubleRect',
'myphysicslab.lab.util.UtilityCore', 'myphysicslab.lab.util.Vector',
'myphysicslab.lab.view.CoordMap', 'myphysicslab.lab.view.LabView'], false);
When I run my code in the uncompiled state, there is a <script> tag that runs that deps.js script. Doing that causes an in-memory version of the namespace dependency tree to be created which is accessed by goog.require at run-time to load whatever other files are needed for that particular class.
COMPILED The compiler (a Java program) does much the same thing as described above as part of the compilation process. The difference is that the resulting tree of namespace dependencies is only used during compilation, to figure out what order to define classes in, what is needed, etc. The tree of namespace dependencies is discarded when compilation is finished.
References:
https://github.com/google/closure-compiler/wiki/Managing-Dependencies
https://github.com/google/closure-compiler/wiki/Debugging-Uncompiled-Source-Code
Responding to your comment:
Why not just declare goog.provide('tutorial.notepad') and then use that namespace to hold the top-level classes? Instead, it's recommended to use goog.provide('tutorial.notepad.Note') for each class, which feels redundant to me.
I think this gets into issues about the goals and design of closure-compiler. As @Technetium points out, using closure-compiler "is extremely verbose": it requires annotating your JavaScript code with comments stating the input and output types of every method (function) and the type of each property of an object (class).
(I'm no compiler expert, but) I think doing what you suggest would require the compiler to "understand" your code and guess what you regard as a class, and what you regard as its constructor, methods, or other properties. That would be a much harder problem than the one the closure-compiler designers settled on, especially because JavaScript is such a "loose" language that allows almost anything you can think of.
In practice I find goog.provide to be not at all troublesome; I usually define only one class per file. What I find much more of a bother is all the goog.require statements: I can often have 20 or 30 of them in a file, and the list is often repeated in a similar class. I have 3870 occurrences of goog.require in my code.
Even this would be OK, but what makes it worse is that closure-compiler has a goog.scope mechanism that lets you use shorter names, so I can say new Vector(...) instead of new myphysicslab.lab.util.Vector(...). That's very nice, but for each class you've already goog.required, you then have to make a short variable within the goog.scope with a line like this:
var Vector = myphysicslab.lab.util.Vector;
Anyway, my point is: yes, closure-compiler requires a lot more code than raw JavaScript. But the goog.provide is the least of the issues in that regard.
One more thing: user @Technetium states
The real reason to use it is to run your Google Closure code through the javascript-to-javascript Closure Compiler that removes dead/unused code while minimizing and obfuscating the pieces you do use.
While that's an incredibly useful feature, there is another hugely important reason to use closure-compiler: type checking. If you take the time to add the annotations to your functions, then the compiler will "have your back" by catching errors. This is a big help on any project, but becomes critical when you have multiple developers working on a project and is one of the main reasons that Google developed closure compiler.
A couple things at play here:
You can only invoke goog.provide() once per namespace.
You may currently have your "class" defined in a single file, say Note.js, with goog.provide('tutorial.notepad'); right now. However, if you add another file, say Tab.js, containing the "class" tutorial.notepad.Tab, you will run into this error when Tab.js also calls goog.provide('tutorial.notepad').
Calling goog.provide('tutorial.notepad') does not tell the Closure Compiler about the "class" tutorial.notepad.Note
Google Closure code is extremely verbose in its raw library form. The real reason to use it is to run your Google Closure code through the JavaScript-to-JavaScript Closure Compiler, which removes dead/unused code while minifying and obfuscating the pieces you do use. Your example works in debug mode, since that does not leverage the Closure Compiler; but once the Closure Compiler is run and tries to build a dependency map, it will fail to find the tutorial.notepad.Note class when something references it via goog.require('tutorial.notepad.Note'). If you want to learn more about how this dependency map works, owler's answer is a very good starting place.
As an aside, note that I put "class" in quotes, quite intentionally. While Google Closure gives the look and feel of object-oriented programming in many ways with its @constructor annotation, and a rough analog of package/import via the goog.provide/goog.require syntax, it is still JavaScript at the end of the day.
A sample of my code:
function doit(){
var num=num1+num2
}
After publishing, I want it in an obfuscated format, like:
function a(){var b=c+d}; //should not be easily readable
Yes. All these minifiers do is rename functions and variables to obfuscated names so they have no meaning to anyone reading them; then they remove all indentation, line returns, and unnecessary spaces. I personally use jsMin and occasionally this online tool: http://javascript-minifier.com/
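To make the renaming concrete, here is the same function in readable and hand-minified form (the short names are chosen by hand here; a real tool picks its own):

```javascript
// Readable original:
function addNumbers(firstNumber, secondNumber) {
  var sum = firstNumber + secondNumber;
  return sum;
}

// Hand-minified equivalent: identical behavior, meaningless names,
// indentation and spaces stripped:
function a(b,c){var d=b+c;return d}
```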
I use TypeScript to write my JavaScript files with object-oriented programming.
I want to use the node module https://npmjs.org/package/typescript-require to require my .ts files from other files.
I want to share my files between the server and the client side (browser), and that's very important. Note that the /shared/ folder doesn't mean shared between client and server, but between the game server and the web server; I use pomelo.js as a framework, which is why.
For the moment I'm not using (successfully) the typescript-require library.
I do like that:
shared/lib/message.js
var Message = require('./../classes/Message');
module.exports = {
    getNewInstance: function(message, data, status){
        console.log(requireTs); // global typescript-require instance
        console.log(Message);
        return new Message(message, data, status);
    }
};
This file needs Message.js to create new instances.
shared/classes/Message.ts
class Message{
// Big stuff
}
try{
module.exports = Message;
}catch(e){}
At the end of the file I add this try/catch to attach the class to module.exports if it exists. (It works, but it's not really a good way to do it; I would like to do better.)
If I load the file in the browser, module.exports won't exist.
So, what I did above is working. Now if I try to use the typescript-require module, I'll change some things:
shared/lib/message.js
var Message = requireTs('./../classes/Message.ts');
I use requireTs instead of require; it's a global var. Note that I'm requiring the .ts file.
shared/classes/Message.ts
export class Message{
// Big stuff
}
// remove the compatibility script at the end
Now, if I try it like this and look at the server console, I see that requireTs is an object and Message is undefined in shared/lib/message.js.
I get the same result if I don't use the export keyword in Message.ts. Even if I use my little script at the end, I always get an error.
But there is more: I have another class named ValidatorMessage.ts which extends Message.ts, and it doesn't work if I use the export keyword...
Did I do something wrong? I tried several other things, but nothing works; it looks like typescript-require is not able to require .ts files.
Thank you for your help.
Looking at the typescript-require library, I see it hasn't been updated in 9 months. As it bundles the lib.d.ts typings central to TypeScript (and the node.d.ts typings), and as these have progressed greatly in the past 9 months (along with needed changes due to language updates), it's probably not compatible with the latest TypeScript releases (just my assumption; I may be wrong).
Sharing modules between Node and the browser is not easy with TypeScript, as they both use very different module systems (CommonJS in Node, and typically something like RequireJS in the browser). TypeScript emits code for one or the other, depending on the --module switch given. (Note: There is a Universal Module Definition (UMD) pattern some folks use, but TypeScript doesn't support this directly).
What goals exactly are you trying to achieve? I may be able to offer some guidance.
I am doing the same and keep having issues whichever way I try to do things... The main problems for me are:
I write my TypeScript as namespaces and components, so there is no export module. With multiple-file compilation you have to resort to a hack, adding some _exporter.ts at the end, so that your library-output.js can be imported as a module; this requires something like:
module.exports.MyRootNamespace = MyRootNamespace
If you do the above, it works; however, you then hit the issue of referencing classes from other modules (such as MyRootNamespace1.SomeClass being referenced by MyRootNamespace2.SomeOtherClass). You can reference it, but it will then be compiled into your library-output2.js file, so you end up with duplicate classes if you try to re-use TypeScript across multiple compiled targets (like having one solution in VS and multiple projects with their own DLL outputs).
Assuming you are not happy with hacking the exports and/or duplicating your references, you can just import them into the global scope, which is a hack but works. However, when you decide to test your code (with whatever Node.js testing framework), you will need to mock certain things out. Since the dependencies of your components may not be included via a require() call (and your module may depend on node_modules that are not really usable with global-scope hacking), it becomes difficult to satisfy dependencies and mock specific ones; it's an all-or-nothing sort of approach.
Finally, you can try to mitigate all these problems by using a TypeScript framework such as appex, which lets you run your TypeScript directly rather than compiling it to JS first. While it seems very good up front, it is VERY hard to debug compilation errors. This is currently my preferred approach, but I have an issue where my TypeScript compiles fine via tsc yet blows up with a max-stack-size exception in appex, and I am at the mercy of the project maintainer to fix it (I was not able to find the underlying issue). There are not many projects of this sort out there, but they make the issue of compiling at the module/file level moot.
Ultimately, I have had nothing but problems trying to wrestle TypeScript into working in a way that is maintainable and testable. I am also trying to re-use some of the TypeScript components on the client side, but if you go down the npm-hack route to get your modules included, you then have to make sure your client side uses a require-compatible resource/package loader. As much as I would love to just use TypeScript in my client and server projects, it just does not seem to want to work in a nice way.
Solution here:
Inheritance TypeScript with exported class and modules
In the end I don't use require-typescript but typescript.api instead; it works well. (You have to load lib.d.ts if you use it, or else you'll get some errors in the console.)
I don't have a solution for running the script in the browser yet. (Because of the export keyword I get some errors client side.) I think I'll add an exports global var to avoid errors like this.
Thank you for your help Bill.