If I have a function:
export function createWeeklyStats(activities, offset, length) {
...
}
And I call the function like:
createWeeklyStats(myListOfActivities, 0)
JSHint does not complain that I'm missing the length argument. I could not find a matching enforcing option in the JSHint Options Reference.
Does one exist?
I'm updating an existing method to include a new required parameter, and while I'm a self-proclaimed adult, I'll throw a fit right here if my best way out is a full-text search.
The commenters are correct that there is no way to ask js[hl]int to find this.
Here are some options:
Explicitly check the argument count
You can check the argument count at the top of the function:
function createWeeklyStats(activities, offset, length) {
    // assert() is assumed to throw when given a falsy value
    assert(arguments.length === 3);
    // ...
}
Granted, that will raise the error at run-time, rather than compile-time.
Perhaps you want to write a little higher-order function which helps you out here:
function check_arg_count(f) {
    return function() {
        // f.length is the number of parameters declared by f
        assert(arguments.length === f.length);
        return f.apply(this, arguments);
    };
}
Now instead of calling createWeeklyStats, you call check_arg_count(createWeeklyStats).
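To make that concrete, here is a minimal usage sketch (it assumes the assert helper above throws on a falsy value, and reuses the names from the question):
var checkedCreateWeeklyStats = check_arg_count(createWeeklyStats);

checkedCreateWeeklyStats(myListOfActivities, 0);    // throws: 2 !== 3
checkedCreateWeeklyStats(myListOfActivities, 0, 7); // passes the arity check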
I don't see anything particularly wrong with this approach. It will be a little tricky dealing with optional arguments. At some point, you might want to throw in the towel and make the move to a more strongly typed language or TypeScript.
Use a refactoring tool
In this case you are refactoring, so how about using a refactoring tool? Many editors/IDEs have such features; check your editor's documentation.
There are other tools to help with refactoring. For instance, take a look at grasp. Actually, the grasp tutorial describes a similar case involving refactoring a function's argument list:
$ grasp -r -e 'createWeeklyStats($a, $o)' -R 'createWeeklyStats($a, $o, 0)' .
This will add the third argument to all calls. Changing the new third parameter is something you will obviously have to take care of yourself. grasp provides other features to narrow the scope of the change.
Customize the linter
You can write custom rules for eslint (but apparently not jshint; see this question). This could get you started: https://gist.github.com/jareware/7179093. Also http://eslint.org/docs/developer-guide/working-with-rules.html.
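As a rough illustration only (the rule name, the expectedArity table, and its single entry are made up for this sketch), a custom ESLint rule that flags calls with the wrong number of arguments could look something like this:
// enforce-arity.js -- hypothetical custom ESLint rule
// Reports any call to a listed function whose argument count differs from the expected arity.
var expectedArity = {
    createWeeklyStats: 3 // maintained by hand for this sketch
};

module.exports = {
    create: function (context) {
        return {
            CallExpression: function (node) {
                var callee = node.callee;
                if (callee.type === "Identifier" &&
                        expectedArity.hasOwnProperty(callee.name) &&
                        node.arguments.length !== expectedArity[callee.name]) {
                    context.report({
                        node: node,
                        message: "Expected " + expectedArity[callee.name] +
                                 " arguments but got " + node.arguments.length + "."
                    });
                }
            }
        };
    }
};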
Related
A colleague advised me to add "use strict"; to the top of my JS code to highlight any gaps in my definitions and potential reference errors, etc. I am very happy with it because it has identified several pieces of code which might have been a problem down the line.
However, another colleague advised me that when calling functions which take multiple arguments, it can be helpful to name the arguments as they are specified, especially if it's something like a bunch of booleans. To illustrate, here's a couple of function calls:
logData(data, target, preserveLog=true, changeClass=false, wrapLine=false);
...is a whole lot clearer than:
logData(data, target, true, false, false);
But "use strict"; hates this. everywhere I've done this, I get a reference error in the console. It still runs fine, as would be expected, but the console is now cluttered with all these apparently non-defined references.
Does anyone know if there's a way around this so that I can keep my coding conventions which my colleagues appreciate, or am I going to have to either stop using "use strict"; or go through all my code and remove the names of arguments?
Thanks.
However, another colleague advised me that when calling functions which take multiple arguments, it can be helpful to name the arguments as they are specified, especially if it's something like a bunch of booleans.
This is terrible advice!
Javascript doesn't actually support passing arguments by name this way. Each of the arguments you pass "by name" is actually being treated as an assignment to a global variable with that name, and "use strict" is correctly identifying this as an error.
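To see exactly what strict mode is objecting to, here is a minimal sketch (the logData signature is the hypothetical one from the question):
"use strict";

function logData(data, target, preserveLog, changeClass, wrapLine) { /* ... */ }

// These are not named parameters: each one is an assignment expression that
// tries to write to an undeclared variable, which strict mode turns into a ReferenceError.
logData("some data", "#log", preserveLog = true, changeClass = false, wrapLine = false);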
If you want to be more clear about what values you're passing, assign the values to real local variables and pass those, e.g.
var preserveLog = true;
var changeClass = false;
var wrapLine = false;
logData(data, target, preserveLog, changeClass, wrapLine);
If you really wanted to keep using your original pattern, you could even assign to those variables in the function call, so long as you declare them as local variables first:
var preserveLog, changeClass, wrapLine;
logData(data, target, preserveLog=true, changeClass=false, wrapLine=false);
(With a hat-tip to dav_i for this answer, which I based my recommendation on.)
Duskwuff has already provided an excellent answer and I won't add anything to that, other than to say I fully agree with it; however, he didn't mention the conventions that arose with ES6.
In ES6 you still don't have named parameters, but you have the next best thing: object destructuring.
This allows us to pass what appear to be named parameters, but which are really just properties of an object that is destructured on arrival and never used directly.
In the context of the example you provided, it would look something like this:
logData({data, target, preserveLog:true, changeClass:false, wrapLine:false});
Where the function is defined as:
function logData({data, target, preserveLog, changeClass, wrapLine}) { ... }
I've seen a lot of libraries prefer this calling convention where ES6 is available, and it's very convenient because the order of the parameters no longer matters.
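As a small extension of that pattern (a sketch; logData and its flags are the hypothetical names from the question), ES6 also lets you give each destructured property a default, so callers can omit the flags they don't care about:
function logData({data, target, preserveLog = true, changeClass = false, wrapLine = false} = {}) {
    // ...
}

// someData and someTarget stand in for whatever you are logging;
// the flags that are not passed fall back to their defaults.
logData({data: someData, target: someTarget, wrapLine: true});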
What seems to make many Java peeps anxious about JS is its "cool dad" nature; it doesn't care if you smoke pot or hang out with your friends until 2am. Without that structure, it's impossible to check for type safety at "compile time"... or is it?
Of course, JavaScript has types, but it is dynamically rather than statically typed. That being said, a human reading the following excerpt will notice that it is going to throw a runtime exception:
function f(anArray) {
    "use strict";
    anArray.push("hi");
}
f(5); // runtime exception for sure
We can see this as programmers because, even though types aren't explicitly declared (e.g. int c;), we can gather various other characteristics to deduce its type (it's a number without quotes). It seems like there's an algorithm (such as a decision tree) that could easily infer the type of a given object.
The essence is that in a dynamically typed language, types exist, but their use and conversions are implicit. My question, then, is:
Is it plausible that linters could use implicit conventions to determine what the intended type of a method is, and warn about a potential runtime error at "lint time"?
Thank you in advance.
Code inspectors like linters or type checkers can only go so far in analysing code to spot type incompatibilities.
Consider for instance this code:
function f(a) {
    return a % 2 ? [a] : false;
}

var x = [];
for (var i = 1; i < 10; i += 2) {
    x = f(i).concat(x);
}
document.write(x);
This will not be a problem, but it would have been if i had started at 2 instead of 1. In general, the value passed to f could be the result of a complex algorithm, and a code inspector would have to actually run the code to know the result. That, of course, is not the point of such a tool, so in practice it can only find trivial cases of type incompatibility.
I would advise that you take a look at Tern. You can install it into your text editor (or just run it as an executable), and it will attempt to determine the type of a variable in a certain scope, offering tools such as code completion, method suggestions (based on the determined type), function argument hints, etc. It's not perfect, but it works very well given JavaScript's limitations.
One of the things that really draws me to TDD is the clear development of your spec alongside implementation.
I am seeking to implement a constructor that accepts a configuration object
function MyConstructor(conf) {}
conf is currently spec'd to have two keys: a and b, where a is a RegExp and b is a Function, and as part of my TDD spec elucidation ambitions, I am writing tests that spec out this object as such:
- MyConstructor throws an Error if either a is not a RegExp or b is not a Function.
- MyConstructor throws an Error if either a or b is missing from the configuration.
Now, I know that I could encapsulate this behavior in some other constructor, say a Configuration constructor that creates "configuration" objects. But the way I am seeing this now, regardless of where this behavior ends up, this behavior has to be encapsulated somewhere for this spec to be elaborated via TDD.
The problem is: it seems to me that as the number of keys on the conf object grows, so does the number of tests, exponentially! This is especially due to the second bullet above.
For example, say I have 4 keys: a, b, c and d, and I need to make sure that an Error is thrown if any are missing. It seems that this requires writing a ton of identical, banal tests covering all the possibilities (combinations!) of missing keys. That doesn't sound right! Yet I can't think of a good way to explicitly or inductively test that all scenarios are covered. Any thoughts?
Objects without a class definition or interface are hard to test. If your objects are ducks, you'll need to use duck typing to check them.
You can also wonder about how useful it is to completely test certain functions. You can test the boundaries, but you can never test all values.
If your function looks like this:
function sum(a, b) {
    if (a === 42) {
        throw new Error("All glory to the hypnotoad");
    }
    return a + b;
}
how are you expected to find this bug?
I would suggest you use Duck Typing to enforce the types. Essentially, what you'll do is use the objects passed in by your keys as you'd expect them to, and let the JS runtime complain if, say, a doesn't behave like a RegEx or you can't call b like a function.
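For example, here is a minimal hand-rolled sketch of that idea (the looksLikeConfig helper is made up; it simply probes for the members the spec requires):
// Hypothetical helper: does conf "quack like" the configuration we expect?
function looksLikeConfig(conf) {
    return conf != null &&
           conf.a instanceof RegExp &&
           typeof conf.b === "function";
}

function MyConstructor(conf) {
    if (!looksLikeConfig(conf)) {
        throw new Error("conf must have a RegExp 'a' and a Function 'b'");
    }
    // ...
}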
Edit: I found this interesting library which looks like it can do exactly what I was describing at the bottom: https://github.com/philbooth/check-types.js
Looks like you can do it by calling check.quacksLike.
I'm fairly new to using javascript and I'm loving the amount of power it offers, but sometimes it is too flexible for my sanity to handle. I would like an easy way to enforce that some argument honors a specific interface.
Here's a simple example method that highlights my problem:
var execute = function(args)
{
    executor.execute(args);
};
Let's say that the executor expects args to have a property called cmd. If it is not defined, an error might be caught at another level when the program tries to reference cmd but it is undefined. Such an error would be more annoying to debug than explicitly enforcing cmd's existence in this method. The executor might even expect that args has a function called getExecutionContext() which gets passed around a bit. I could imagine much more complex scenarios where debugging would quickly become a nightmare of tracing through function calls to see where an argument was first passed in.
Nor do I want to do something along the lines of:
var execute = function(args)
{
    if (args.cmd === undefined || args.getExecutionContext === undefined ||
        typeof args.getExecutionContext !== 'function')
        throw new Error("args not setup correctly");
    executor.execute(args);
};
This would entail a significant amount of maintenance for every function that has arguments, especially for complex arguments. I would much rather be able to specify an interface and somehow enforce a contract that tells javascript that I expect input matching this interface.
Maybe something like:
var baseCommand =
{
    cmd: '',
    getExecutionContext: function(){}
};

var execute = function(args)
{
    enforce(args, baseCommand); // throws an error if args does not honor
                                // baseCommand's properties
    executor.execute(args);
};
I could then reuse these interfaces amongst my different functions and define objects that extend them to be passed into my functions without worrying about misspelling property names or passing in the wrong argument. Any ideas on how to implement this, or where I could utilize an existing implementation?
I don't see any other way to enforce this. It's one of the side effects of the dynamic nature of JavaScript. It's essentially a free-for-all, and with that freedom comes responsibility :-)
If you're in need of type checking you could have a look at TypeScript (it's not JavaScript) or Google's Closure Compiler (JavaScript with comments).
Closure Compiler uses comments to figure out what type is expected when you compile it. It looks like a lot of trouble but can be helpful in big projects.
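For instance, here is a minimal sketch of such annotations on the execute function from the question (the record type shown is just an assumption about what args should contain):
/**
 * @param {{cmd: string, getExecutionContext: function(): Object}} args
 */
var execute = function(args)
{
    executor.execute(args);
};
// Compiling with --warning_level VERBOSE makes the compiler report call sites
// whose argument does not match the annotated record type.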
There are other benefits that come with Closure Compiler, as you will be forced to produce comments that can be used by an IDE like NetBeans; it also minifies your code, removes unused code and flattens namespaces. So code organized in namespaces like myApp.myModule.myObject.myFunction will be flattened to minimize object lookups.
The cons are that you need to use externs when you use libraries that are not compiler-compatible, like jQuery.
The way that this kind of thing is typically dealt with in JavaScript is to use defaults. Most of the time you simply want to provide a guarantee that certain members exist, to prevent things like reference errors, but I think you could use the principle to get what you want.
By using something like jQuery's extend method, we can guarantee that a parameter implements a set of defined defaults.
var defaults = {
    prop1: 'exists',
    prop2: function() { return 'foo'; }
};

function someCall(args) {
    var options = $.extend({}, defaults, args);
    // Do work with options... It is now guaranteed to have members prop1 and prop2,
    // defined by the caller if they exist, falling back to the defaults if not.
}
If you really want to throw errors at run time if a specific member wasn't provided, you could perhaps define a function that throws an error, and include it in your defaults. Thus, if a member was provided by the caller, it would overwrite the default, but if it was missed, it could either take on some default functionality or throw an error as you wish.
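For instance, here is a sketch of that idea, reusing the defaults/someCall names from above (the onComplete member is made up for this example):
var defaults = {
    prop1: 'exists',
    prop2: function() { return 'foo'; },
    // Required member: the "default" just throws if the caller forgot to supply it.
    onComplete: function() { throw new Error("someCall: options.onComplete is required"); }
};

function someCall(args) {
    var options = $.extend({}, defaults, args);
    // options.onComplete() now throws unless the caller provided their own implementation.
}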
I have recently learned that you can use a neat switch statement with fallthrough to set default argument values in Javascript:
function myFunc(arg1, arg2, arg3) {
    // Replace unpassed arguments with their defaults
    // (each case intentionally falls through to the ones below it):
    switch (arguments.length) {
        case 0: arg1 = "default1";
        case 1: arg2 = "default2";
        case 2: arg3 = "default3";
    }
}
I have grown to like it a lot, since not only is it very short but it also works based on parameters actually passed, without relying on having a special class of values (null, falsy, etc) serve as placeholders as in the more traditional versions:
function myFunc(arg1, arg2, arg3) {
    // Replace falsy arguments with their defaults:
    arg1 = arg1 || "default1";
    arg2 = arg2 || "default2";
    arg3 = arg3 || "default3";
}
My initial thought after seeing the version using the switch was that I should consider using it "by default" over the || version.
The switch fallthrough makes it not much longer, and it has the advantage of being much more "robust" in that it does not care about the types of the parameters. In the general case, it sounds like a good idea not to have to worry about what would happen with all the falsy values ('', 0, null, false, ...) whenever I write a function with default parameters.
I would then reserve the arg = arg || x for the actual cases where I want to check for truthyness instead of repurposing it as the general rule for parameter defaulting.
However, I found very few examples of this pattern when I did a code search for it so I had to put on my skeptic hat. Why didn't I find more examples of this idiom?
Is it just not very well known?
Did I not search well enough? Did I get confused by the large number of false positives?
Is there something that makes it inferior to the alternatives?
Some reasons that I (and some of the comments) could think of for avoiding switch(arguments.length):
Using named parameters passed via an object literal is very flexible and extensible. Perhaps places where more arguments can be optional are using this instead?
Perhaps most of the time we do want to check for truthiness? Using a category of values as placeholders also allows default parameters to appear in the middle instead of only at the end: myFunc('arg1', null, 'arg3')
Perhaps most people just prefer the very short arg = arg || "default" and most of the time we just don't care about falsy values?
Perhaps accessing arguments is evil/bad for performance?
Perhaps this kind of switch case fallthrough has a bad part I didn't think about?
Are these cons enough to avoid using switch(arguments.length) as a staple default argument pattern or is it a neat trick I should keep and use in my code?
Since the question has been updated, it's really a matter of opinion. There are a number of javascript features that many people suggest avoiding, such as switch and ternary. This is why there is not a lot of information on some of those features.
The reason that suggestion is made is that many people misuse those features and create problems in their code. The bugs are sometimes difficult to detect, and it can be difficult for others to understand what your code is doing (particularly those unfamiliar with JavaScript or new programmers).
So if you like to do it that way, and you're not worried about the opinions (or skill level) of anyone else working on your code, then by all means, your approach will work. I have used the switch statement myself on occasion, and while I don't think it's really "good" or "bad", it's hard to find a situation that requires it.
You asked how I might go about this without an if-else chain:
function myFunc(args) {
    var allArgs = {
        arg1: "default1",
        arg2: "default2",
        arg3: "default3"
    };
    for (var key in args) {
        allArgs[key] = args[key];
    }
}
myFunc({arg1:null, arg3:'test'})
Just a guess, but Doug Crockford discourages the use of switch statements in 'JavaScript: the Good Parts.' His logic is that switch statements are a common source of bugs because it's difficult to locate them when using 'fall through' logic. It's easy to see when a case is triggered, but it's often difficult to determine if every case in a result set has been covered, especially if it's not your code.