Debugging Closure Compiler compiled JavaScript

I have a complex dojo app that works correctly uncompiled, but after compiling with Google's Closure Compiler, I get subtle differences in some behaviours.
As it is, it's exceedingly difficult to debug, and I've been unable to find any information about possible functional differences between compiled and uncompiled Javascript with Google Closure.
Can anyone point me in the direction of known differences, or share any similar experiences and some ideas of where to start looking?

General Closure Compiler Debugging Tips
Use the VERBOSE warning level. This turns on all of the checks.
Use the debug flag. It makes renamed symbols ridiculously long, but they are named in such a way you can find the original. If code works with the debug flag but not without it, it is almost certainly a renaming issue.
Definitely use formatting=PRETTY_PRINT. Debugging compacted code is painful without it.
Use source maps
Disable the type based optimizations with --use_types_for_optimization false. Incorrect type annotations can cause the compiler to make invalid assumptions.
UPDATE: As of the 20150315 compiler release, the type based optimizations are enabled by default.

With the help of Chad's answer, I found a bug where my working code, which looked like this:
a = [b, b = a][0]; // swap variable values
was compiled to:
a = b;
It might be fixed in later versions, because tests with the online Closure Compiler app don't demonstrate the same bug. I fixed it by not trying to be clever, and using a third variable to hold the old value while swapping.
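The workaround was just a plain temporary-variable swap; a minimal sketch (variable names as in the snippet above):
var temp = a; // keep the old value of a
a = b;
b = temp;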

A couple problems that I've seen with dojo 1.3 (pre-closure):
If you have a class property named class, it needs to be quoted. So {class: "css"} needs to be written as {"class": "css"}; this includes any widget fields.
Make sure you remove any debugger statements.

Related

Validate JavaScript code without enforcing a specific style

I would like to validate some JavaScript code for syntactic correctness, but without enforcing a specific coding-style to the user.
My first approach was to use esvalidate that comes included with esprima. This does the job partially, as it detects unexpected tokens, such as:
constx foo = {};
What it does not detect is among other things usage of variables that have never been declared, such as:
const foox = {
bar() {}
};
foo.bar();
A tool such as eslint would detect this, but it's very difficult to configure ESLint in a way that no specific style is enforced to the user (I don't say it's not possible, I just say it's a huge amount of work since you need to check every single rule and decide whether to enable or disable it, and with hundreds of rules this is… yes, well, a huge amount of work).
What other options do I have? How could I validate the code without this effort?
By the way: What is it that I want to validate here? It's not the syntax (I mean, syntactically, everything is fine, it just does not make sense), but it's also not the semantics. What would be the correct term for this type of checks?
Years ago, linters were only there to check the style of your code. Nowadays they do more, a lot more, even static analysis. ESLint is such a powerful tool and IMHO it is exactly what you're looking for.
You may think the initial configuration is costly, but as the ESLint page says:
No rules are enabled by default.
Besides, having a consistent coding style across your team is very beneficial, and if you and your team find a common baseline you can share the .eslintrc between projects.
Enabling the no-undef rule and using an ESLint integration for your IDE should solve your issue: static code analysis at development time :)
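As a rough sketch, a minimal .eslintrc.js that enables only correctness checks might look like this (the parserOptions and env values are assumptions; adjust them to your project):
module.exports = {
  parserOptions: { ecmaVersion: 2018, sourceType: "module" },
  env: { browser: true, node: true },
  rules: {
    "no-undef": "error",      // flag usage of undeclared variables
    "no-unused-vars": "warn"  // flag declared-but-unused variables
  }
};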
To answer your last question, I think that what you want is static analysis. This will be harder with JavaScript in general because of its dynamic nature, lack of types, and relative lack of maturity in tooling. Decades of work have gone into writing static analyzers for C, for example, and that work won't immediately carry over into other languages.
Something like jshint may be what you want because its goal is to help developers "write complex programs without worrying about typos and language gotchas".
Hope this helps!
ESLint's recommended set: the recommended rules (marked with a checkmark in the rules list) do not include any stylistic rules. You can use that to verify the validity of the script without enforcing any specific style. At the very least it's a good starting point, and you can always turn off the rules you don't like.
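A configuration that just extends the recommended set can be as small as this sketch:
module.exports = {
  extends: "eslint:recommended" // correctness rules only, no stylistic rules
};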

How to hide "Is this intentional?" warning in Ext JS 6?

I'm currently developing an Ext JS 6 application with some levels of inheritance on components, where some alias mappings get overridden. Ext JS debug is so friendly to let me know each time when I do this.
[W] Overriding existing mapping: 'controller.my-controller' From 'MyBase.app.Controller' to 'MySub.app.Controller'. Is this intentional?
In my case, it is intentional. The warnings are stacking up, and it is starting to get hard to see the forest for the trees.
Can I turn these statements off somehow just for those classes that are intentional?
Is renaming all my aliases a better solution?
I would recommend renaming your aliases.
Having two classes with the same alias might lead to problems when trying to instantiate one of them. There is already a great answer to this question for Ext JS 5 with a more detailed explanation. The core concepts are the same for Ext JS 6.
As an alias should always be unique (in its namespace), it's probably a good idea to keep the warnings, just in case you ever miss something ... ;-)
If you really want to suppress these warnings anyway you could overwrite the Ext.log.warn function with an empty one. As you can see in ext/packages/core/src/class/Inventory.js there seems to be no other way around it.
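A minimal sketch of that approach (note it silences all Ext.log.warn output, not just the alias-override warning):
Ext.log.warn = Ext.emptyFn; // replace the warning logger with Ext's built-in no-op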
Hope this answers your questions!
You can easily override the warning by forcing the "update" parameter to be set to true when addAlias calls addMapping.
Ext.Inventory.prototype.addAlias = function (className, alias, update) {
    // Ignore the passed-in "update" flag and always pass true, so addMapping skips the warning.
    return this.addMapping(className, alias, this.aliasToName, this.nameToAliases, true);
};
https://fiddle.sencha.com/#fiddle/1dec

jQuery Full & Minified version major functionality differences?

I wonder what the functionality differences are between the full jQuery version (jquery*.js) and the minified version (jquery*.min.js).
http://code.jquery.com/jquery-latest.js
http://code.jquery.com/jquery-latest.min.js
I know there is a difference in size, but are there any functionality differences?
Cheers.
Wikipedia says this:
Minification (also minimisation or minimization), in computer
programming languages and especially JavaScript, is the process of
removing all unnecessary characters from source code without changing
its functionality.
http://en.wikipedia.org/wiki/Minify
So, none.
One reason you have the choice of uncompressed is so you can examine the source code to track down a bug if you need to. In theory, anyway.
The only difference is the size of the code.
Any functional difference is a bug, and should be reported to the minification tool.
jQuery is minified by UglifyJS.
There are no functional differences.
The minified version just has all of the line breaks, space characters, and anything else that isn't necessary for JavaScript to work removed.
Other than that, they are functionally identical.
EDIT: As noted by SLaks, it also changes names where safe, "safe" meaning the name is not publicly exposed.
This means that it could change an internal variable from register to a. Similarly, it could change a function name from perform() to b().
Please note that those are just examples, and are most likely not in the code itself.
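As a purely illustrative sketch (not taken from jQuery's source), a minifier may shorten internal names like this without changing behaviour:
// Before minification:
function sum(firstNumber, secondNumber) {
    var total = firstNumber + secondNumber;
    return total;
}
// After minification (conceptually):
function sum(a,b){var c=a+b;return c}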

How can I prove that my JavaScript files are in the scope of a specific JS or ECMA version?

Let's say you would get a bunch of .js files and now it is your job to sort them into groups like:
requires at least JavaScript 1.8.5
requires at least E4X (ECMAScript for XML)
requires at least ECMAScript 5
or something like this.
I am interested in any solution, but especially in those which work using JavaScript or PHP. This is used for creation of automated specifications, but it shouldn't matter - this is a nice task which should be easy to solve - however, I have no idea how and it is not easy for me. So, if this is easy to you, please share any hints.
I would expect something like this - http://kangax.github.com/es5-compat-table/# - just not for browsers, rather for a given file to be checked against different implementations of JavaScript.
My guess is, that each version must have some specifics, which can be tested for. However, all I can find is stuff about "what version does this browser support".
PS: Don't take "now it is your job" literally, I used it to demonstrate the task, not to imply that I expect work done for me; while in the progress of solving this, it would be just nice to have some help or direction.
EDIT: I took the easy way out, by requiring ECMAScript 5 to be supported at least as well as by the current Firefox for my project to work as intended and expected.
However, I am still interested in any solution attempts, or at least a definite answer of "is possible (with XY)" or "is not possible, because ..."; XY can be just some keyword, like FrameworkXY or DesignPatternXY or whatever, or a more detailed solution of course.
Essentially you are looking to find the minimum requirements for some JavaScript file. I'd say that isn't possible until run time. JavaScript is a dynamic language, so you don't have compile-time errors. As a result, you can't tell until you are within some closure that something doesn't work, and even then it would be misleading: your dependencies could in fact fix many compatibility issues.
Example:
JS File A uses some ES5 feature
JS File B provides a shim for ES5 deficient browsers or at least mimics it in some way.
JS File A and B are always loaded together, but independently A looks like it won't work.
Example2:
Object.create is what you want to test
Some guy named Crockford adds a create shim to Object.
Object.create now works in less compatible browsers, and nothing is broken.
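For context, the classic Object.create shim looks roughly like this, which is why seeing Object.create in a file does not by itself imply a full ES5 requirement:
if (typeof Object.create !== "function") {
    Object.create = function (proto) {
        // Minimal shim: return a new object whose prototype is `proto`
        function F() {}
        F.prototype = proto;
        return new F();
    };
}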
Solution 1:
Build or find a dependency map. You definitely already have a dependency map, either explicitly or you could generate it by iterating over your HTML files.
Run all relevant code paths in environments with decreasing functionality (eg: ES5, then E4X, then JS 1.x, and so forth).
Once a bundle of JS files fail for some code path you know their minimum requirement.
Perhaps you could iterate over the public functions in your objects and use dependency injection to fill in constructors and methods. This sounds really hard though.
Solution 2:
Use webdriver to visit your pages in various environments.
Map window.onerror to a function that tells you if your current page broke while performing some actions (see the sketch after this list).
On error you will know that there is a problem with the bundle on the current page so save that data.
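A minimal sketch of that error hook (console.log is just a stand-in for whatever reporting you use):
window.onerror = function (message, source, line, column, error) {
    // Record which page (and therefore which JS bundle) failed in this environment.
    console.log("Failure on " + window.location.href + ": " + message +
        " (" + source + ":" + line + ")");
};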
Both of these solutions assume that you always write perfect JS that never has errors, which is something you should strive for but isn't realistic. This might, however, provide you with some basic "smoke testing".
This is not possible in an exact way, and it also is not a great way of looking at things for this type of issue.
Why it's not possible
JavaScript doesn't have static typing, and properties are resolved through the prototype chain. This means that for any piece of code you would have to infer the type of an object and check along the prototype chain before determining what function would be called for a function call.
You would, for instance, have to be able to tell that $(x).bind() or $(x).map() are not making calls to the ECMAScript 5 map or bind functions, but the jQuery ones. This means that you would really have to parse out the whole code and make inferences on type. If you didn't have the whole code base this would be impossible. If you had a function that took an object and you called bind, you would have no idea if that was supposed to be Function.prototype.bind or jQuery.bind, because that's not decided until runtime. In fact it's possible (though not good coding practice) that it could be both, and that what is run depends on the input to a function, or even on user input. So you might be able to make a guess about this, but you couldn't do it exactly.
Making all of this even more impossible, the eval function combined with the ability to get user input or ajax data means that you don't even know what types some objects are or could be, even leaving aside the issue that eval could attempt to run code that meets any specification.
Here's an example of a piece of code that you couldn't analyze statically:
var userInput = $("#input").val();
var objectThatCouldBeAnything = eval(userInput);
objectThatCouldBeAnything.map(function (x) {
    return !!x;
});
There's no way to tell if this code is producing a jQuery object in the eval and running jQuery.map, or producing an array and running Array.prototype.map. And that's the strength and weakness of a dynamically typed language like JavaScript: it provides tremendous flexibility, but limits what you can tell about the code before run time.
Why it's not a good strategy
ECMAScript specifications are a standard, but in practice they are never implemented perfectly or consistently. Different environments implement different parts of the standard. Having an "ECMAScript 5" piece of code does not guarantee that any particular browser will implement all of its properties perfectly. You really have to determine that on a property-by-property basis.
What you're much better off doing is finding a list of functions or properties that are used by the code. You can then compare that against the supported properties for a particular environment.
This is still a difficult-to-impossible problem for the reasons mentioned above, but it's at least a useful one. And you could gain value doing this even using a loose approximation (assuming that bind actually is ECMAScript 5 unless it's on a $() wrap; that's not going to be perfect, but still might be useful).
Trying to figure out which standard is implemented just isn't practical in terms of helping you decide whether to use the code in a particular environment. It's much better to know what functions or properties it's using, so that you can compare that to the environment and add polyfills if necessary.
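A rough sketch of that property-by-property approach (the feature list here is only an illustrative assumption):
var missing = [];
if (typeof Object.create !== "function") { missing.push("Object.create"); }
if (typeof Function.prototype.bind !== "function") { missing.push("Function.prototype.bind"); }
if (typeof Array.prototype.map !== "function") { missing.push("Array.prototype.map"); }
if (missing.length > 0) {
    // These are the properties the current environment lacks; add polyfills as needed.
    console.log("Consider polyfills for: " + missing.join(", "));
}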

Is there a tool to remove unused methods in javascript?

I've got a collection of javascript files from a 3rd party, and I'd like to remove all the unused methods to get size down to a more reasonable level.
Does anyone know of a tool that does this for JavaScript, or at the very least gives a list of unused/used methods so I could do the trimming manually? This would be in addition to running something like the YUI JavaScript compressor tool...
Otherwise my thought is to write a perl script to attempt to help me do this.
No. Because you can "use" methods in insanely dynamic ways like this.
obj[prompt("Gimme a method name.")]();
Check out JSCoverage. It generates code coverage statistics that show which lines of a program have been executed (and which have been missed).
I'd like to remove all the unused methods to get size down to a more reasonable level.
There are a couple of tools available:
npm install -g fixmyjs
fixmyjs <filename or folder>
A configurable module that uses JSHint (GitHub, docs) to flag functions that are unused and performs cleanup as well.
I'm not sure whether it removes unused functions as opposed to just flagging them. Though it is a great tool for cleanup, it appears to lack compatibility with later versions of ECMAScript (more info below).
There is also the Google Closure Compiler, which claims to remove dead JS, but this is more of a build tool.
Updated
If you are using something like Babel, consider adding ESLint to your text editor, which can trigger a warning on unused methods and even variables and has a --fix CLI option for autofixing some errors and style issues.
I like ESLint because it contains multiple plugins for alternate libs (like React warnings if you're missing a prop), allowing you to catch bugs in advance. They have a solid ecosystem.
As an example: on my NodeJS projects, the config I use is based off of the Airbnb Style Guide.
You'll have to write a perl script. Take no notice of the nay-sayers above.
Such a tool could work with libraries that are designed to only make function calls explicitly. That means no delegates or pointers to functions would be allowed, the use of which in any case only results in unreadable "spaghetti code" and is not best practice. Even if it removes some of these hidden functions you'll discover most if not all of them in testing. The ones you don't discover will be so infrequently used that they will not be worth your time fixing. Don't obsess with perfection. People go mad doing that.
So applying this one restriction to JavaScript (and libraries) will result in incredible reductions in page size and therefore load times, not to mention readability and maintainability. This is already the case for tools that remove unused CSS such as grunt_CSS and unCSS (see http://addyosmani.com/blog/removing-unused-css/) and which report typical reductions down to one tenth the original size.
It's a win/win situation.
It's noteworthy that all interpreters must address this issue of how to manage self-modifying code. For the life of me I don't understand why people want to persist with unrestrained freedom. As noted by Triptych above, JavaScript functions can be called in ways that are literally "insane". This insane flexibility corrupts the fundamental doctrine of separation of code and data, enables real-time code injection, and invalidates any attempt to maintain code integrity. The result is always unreadable code that is impossible to debug, and the side effect to JavaScript - removing the ability to run automatic code pre-optimisation and validation - is much, much worse than any possible benefit.
AND - you'd have to feel pretty insecure about your work to want to deliberately obfuscate it from both your colleagues and yourself. Browser clients that do work extremely well take the "less is more" approach, and the best example I've seen to date is Microsoft Office's combination of Access Web Forms paired with SharePoint Access Services. The productivity of having a ubiquitous, heavy, tightly managed runtime interpreter client and its server-side clone is absolutely phenomenal.
The future of JavaScript self-modifying code technologies therefore is bringing them back into line to respect the...
KISS principle of code and data: Keep It Separate, Stupid.
Unless the library author kept track of dependencies and provided a way to download the minimal code [e.g. the MooTools Core download], it will be hard to identify 'unused' functions.
The problem is that JS is a dynamic language and there are several ways to call a function.
E.g. you may have a method like
function test()
{
//
}
You can call it like
test();
var i = 10;
var hello = i > 1 ? 'test' : 'xyz';
window[hello]();
I know this is an old question, but UglifyJS2 supports removing unused code, which may be what you are looking for.
Also worth noting that ESLint supports an option called no-unused-vars, which actually does some basic handling of detecting whether functions are being used or not. It definitely detects it if you make the function anonymous and store it as a variable (but just be aware that as a variable, the function declaration doesn't get hoisted immediately).
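A small sketch of that distinction (the names are placeholders):
// Stored in a variable: no-unused-vars can flag `helper` if it is never referenced,
// and, unlike a declaration, it is not hoisted, so it cannot be called before this line.
var helper = function () {
    return 42;
};
// A function declaration is hoisted, so it can be called before it appears in the file.
hoisted();
function hoisted() {
    return 1;
}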
In the context of detecting unused functions, while extreme, you can consider breaking up a majority of your functions into separate modules, because there are packages and tools to help detect unused modules. There is a little segment of Sindre Sorhus's thoughts on tiny modules which might be relevant to that philosophy, but that may be extreme for your use case.
The following would help:
If you have fully covered test cases, running a code coverage tool like istanbul (https://github.com/gotwarlost/istanbul) or nyc (https://github.com/istanbuljs/nyc) would give a hint of untouched functions.
At the very least, the above will help find the covered functions that you may have thought were unused.
