Is it a bad practice to declare an Array in arguments? - javascript

validationError([elem1,elem2],type,shiftNo);
or
var arr = [elem1,elem2];
validationError(arr,type,shiftNo);
What I mean to ask is: is approach 1 of calling the function considered bad (and does it have any performance ramifications)? And, for that matter, is it a bad approach to declare strings, objects and functions inside arguments?

Performance is not an issue, not in a language like JS, Ruby or whatnot, so all we can really think about is code readability. And this question is not strongly tied to JS, so neither are my examples.
move = ["E2", "E4"];
if chessboard.valid(move, player) {
...
}
This clearly states: "if the move (E2 E4) is valid for this chessboard, then...", and you don't even need to look at the docs to know that. If we write it without giving our array a name, the result looks a little cryptic (still easy to guess, but harder than it should be for such a tiny example):
if chessboard.valid(["E2", "E4"], player) {
...
}
What is this supposed to mean? What does valid stand for here? Maybe it's asking whether these cells contain valid pieces of the player's? This is a symptom of a design flaw, more precisely of bad naming: it makes bold assumptions about how the chessboard code will be used. We can make it obvious that this array represents a move by renaming the chessboard's method:
if chessboard.valid_move(["E2", "E4"], player) {
...
}
This is better, but you may not have an API that allows your code to stay so readable without some additional naming.
So, I suggest a rule of thumb:
1. If the array will be used more than once, name it.
2. If the meaning of the array is not obvious from where it goes (the function name), name it.
3. Don't name it unless points 1 or 2 apply.
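Applying that rule to the call from the question, it might look like this (failedFields and markInvalid are hypothetical names, made up for illustration):
// Meaning not obvious from the call site alone: name the array.
var failedFields = [elem1, elem2];
validationError(failedFields, type, shiftNo);
// Meaning already clear from the function name: inline is fine.
markInvalid([elem1, elem2]);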

It doesn't make any difference, really. Either way you create a JavaScript Array, which basically is an Object, and get a reference in return (which you pass to your method). If you don't need to access that array (or other data) later in your code, passing it inline is completely fine.
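A minimal sketch of that point (logValues is a hypothetical function, used only for illustration):
function logValues(arr) {
    // arr holds a reference to the same Array object the caller
    // created, no matter how the literal was written.
    console.log(arr[0], arr[1]);
}
logValues([1, 2]);   // inline literal
var values = [1, 2]; // named first
logValues(values);   // same behaviour: a reference is passed either way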

Are the contents of arr ever going to get used again? If so, then option 2 is definitely the way to go. If not... something as simple as this is probably just personal opinion.
Personally, I'd say option 2 is the better practice, even though I'm sometimes guilty of using option 1. Option 2 is easier to read and follow, and it's less likely that someone will have to re-read it because they became temporarily confused or lost their train of thought while going through your code (especially newer programmers). For those reasons it's easier to maintain; you, and potentially future developers working with your code, will likely save time working with it.
The only negatives I can see are an absolutely minuscule amount of overhead, and two lines of code instead of one. But I think that's irrelevant: the tiny benefits of option 2 outweigh the tiny negatives of option 1.

It is subjective, but in my opinion it is better to use the second approach.
As @jAndy said, there is no difference in code execution or performance, but the second approach is easier to debug and easier to read and understand.

Related

variable assignment vs passing directly to a function

What are the advantages/disadvantages in terms of memory management of the following?
Assigning to a variable then passing it to a function
const a = {foo: 'bar'}; // won't be reused anywhere else, for readability
myFunc(a);
Passing directly to a function
myFunc({foo: 'bar'});
The first and second snippets make absolutely no difference in the way the value is passed (unless you also need to use a later in your code).
There are only two cases in which the first might be preferred over the second:
1. You need to use the variable elsewhere.
2. The variable declaration is too long and you want to split it over two lines, or you are implementing a complex algorithm and want to give a name to each step for readability.
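As a sketch of the second case (items, isActive, toId and computeBaz are made-up names for illustration):
// Hard to scan as one expression:
myFunc({foo: 'bar', baz: computeBaz(input), ids: items.filter(isActive).map(toId)});
// Naming the intermediate step documents the algorithm:
const activeIds = items.filter(isActive).map(toId);
const payload = {foo: 'bar', baz: computeBaz(input), ids: activeIds};
myFunc(payload);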
It depends on the implementation of the JavaScript engine. One engine might allocate memory for the variable in the first example and not in the directly-passed example, while another might be smart enough to compile the first example so that no memory is allocated for the variable at all, making it behave exactly like the directly-passed example.
I don't know enough about the specific engines to tell you what each one does specifically. You'd have to take a look at each JS engine (or ask authors of each) to get a more conclusive answer.

Performance difference between lodash "get" and "if else" clauses

Let's say you have a TypeScript object where any property can be undefined. If you want to access a heavily nested property, you have to do a lot of comparisons against undefined.
I wanted to compare two ways of doing this in terms of performance: regular if-else comparisons and the lodash function get.
I have found this beautiful tool called jsben where you can benchmark different pieces of JS code. However, I am failing to interpret the results correctly.
In this test, lodash get seems to be slightly faster. However, if I define my variable in the Setup block (as opposed to the Boilerplate code), the if-else code is faster by a wide margin.
What is the proper way of benchmarking all this?
How should I interpret the results?
Is get so much slower that you can make an argument in favour of if-else clauses, in spite of their very poor readability?
I think you're asking the wrong question.
First of all, if you're going to do performance micro-optimization (as opposed to, say, algorithmic optimization), you should really know whether the code in question is a bottleneck in your system. Fix the worst bottlenecks until your performance is fine, then stop worrying overmuch about it. I'd be quite surprised if variation between these ever amounted to more than a rounding error in a serious application. But I've been surprised before; hence the need to test.
Then, when it comes to the actual optimization, the two implementations are only slightly different in speed, in either configuration. But if you want to test the deep access to your object, it looks as though the second one is the correct way to think about it. It doesn't seem as though it should make much difference in relative speeds, but the first one puts the initialization code where it will be "executed before every block and is part of the benchmark." The second one puts it where "it will be run before every test, and is not part of the benchmark." Since you want to compare data access and not data initialization, this seems more appropriate.
Given this, there seems to be a very slight performance advantage to the families && families.Trump && families.Trump.members && ... technique. (Note: no ifs or elses in sight here!)
But is it worth it? I would say not. The chained code is much, much uglier. I would not add a library such as lodash (or my favorite, Ramda) just to use a function as simple as this, but if I were already using lodash I wouldn't hesitate to use the simpler code here. Otherwise I might import just that one function from lodash or Ramda, or simply write my own, as it's fairly simple code.
That native code is going to be faster than more generic library code shouldn't be a surprise. It doesn't always happen, as sometimes libraries get to take shortcuts that the native engine cannot, but it's likely the norm. The reason to use these libraries rarely has to do with performance, but with writing more expressive code. Here the lodash version wins, hands-down.
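To make the comparison concrete, here is a sketch of both styles over the families.Trump.members path used in the benchmark (the data shape is assumed from the test; _ is lodash):
const _ = require('lodash');
const families = { Trump: { members: ['Donald'] } };
// Native guard chain: fast, but verbose and repetitive.
const viaChain = families && families.Trump && families.Trump.members && families.Trump.members[0];
// lodash: one expressive call, with an optional default value.
const viaGet = _.get(families, 'Trump.members[0]', 'nobody');
console.log(viaChain, viaGet); // 'Donald' 'Donald'
(On engines that support it, optional chaining, written families?.Trump?.members?.[0], gives the native style the same brevity.)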
What is the proper way of benchmarking all this?
Only benchmark the actual code you are comparing, and move as much as possible outside of the tested block. Run each of the two pieces a few (hundred) thousand times to average out the influence of other parts.
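A minimal hand-rolled version of that idea (a sketch; it assumes lodash is loaded as _ and that performance.now is available, as it is in browsers and modern Node):
// Setup: built once, outside the timed region.
const families = { Trump: { members: ['Donald'] } };
function bench(label, fn, iterations) {
    const start = performance.now();
    let sink; // keep the result so the engine can't drop the work entirely
    for (let i = 0; i < iterations; i++) sink = fn();
    console.log(label, (performance.now() - start).toFixed(1) + 'ms', sink);
}
bench('chain', () => families && families.Trump && families.Trump.members, 1e6);
bench('_.get', () => _.get(families, 'Trump.members'), 1e6);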
How should I interpret the results?
1) Check whether the results are valid:
Do the results fit your expectation?
If not, could there be a cause for that?
Does the test case replicate your actual use case?
2) Check whether the result is relevant:
How does the time it takes compare to the actual time in your use case? If your code takes 200ms to load and both tests run in under ~1ms, your result doesn't matter. If, however, you are trying to optimize code that runs 60 times per second, 1ms is already a lot (a frame at 60fps leaves a budget of only about 16.7ms).
3) Check whether the result is worth the work:
Often you have to do a lot of refactoring, or a lot of typing; does the performance gain outweigh the time you invest?
Is get so much slower that you can make an argument in favour of if-else clauses, in spite of their very poor readability?
I'd say no. Use _.get (unless you are planning to run it a few hundred times per second).

Any linter to warn about side-effects in JavaScript?

With the flexibility of JavaScript, we can write code full of side effects, or purely functional code.
I have been interested in functional JavaScript and want to start a project in this paradigm, and a linter for it would surely help me gather good practices. Is there any linter that enforces a pure, side-effect-free functional style?
Purity analysis is equivalent to solving the Halting Problem, so a static analysis that determines whether arbitrary code is pure or impure is impossible in the general case. There will always be infinitely many programs for which purity is undecidable; some of those programs will be pure, some impure.
Now, you deliberately used the term "linter" instead of "static analyzer" (although of course a linter is just a static analyzer), which seems to imply that you are fine with an approximate, heuristic result. You can have a linter that will sometimes tell you that your code is pure, sometimes tell you that your code is impure, and most of the time tell you that it cannot decide. And you can have a whitelist of operations that are known to be pure (e.g. adding two Numbers using the + operator) and a blacklist of operations that are known to be impure (e.g. anything that can throw an exception, any sort of loop, if statements, Array.prototype.forEach), and do a heuristic scan for those.
But in the end, the results will be too unreliable to do anything serious with them.
I haven't used this myself but I found this plugin for ESLint: https://github.com/jfmengels/eslint-plugin-fp
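If you try it, a configuration could look something like this (a sketch based on the plugin's README; verify the rule names against the current docs before relying on them):
// .eslintrc.js
module.exports = {
    plugins: ['fp'],
    extends: ['plugin:fp/recommended'],
    rules: {
        // or cherry-pick individual rules instead of the preset:
        'fp/no-mutation': 'error',
        'fp/no-loops': 'error',
        'fp/no-this': 'error',
    },
};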
You cannot use JS completely without side effects. Every DOM access is a side effect, and one could argue whether the whole global namespace falls under that definition as well.
The best you can do is stay reasonable. I split this logically into two groups:
the work horses (utilities): their purpose is to take some data and process it somehow. These are (mostly) side-effect free. Mostly, because sometimes these functions need some state, like a counter or a cache, which could be argued to be a side effect; but since that state is isolated/enclosed within these functions, I don't really care. These are the functions you pass to Array#map(), to a promise's then(), and to similar places.
the management: these functions rarely do data processing on their own; they mostly orchestrate the data flow, from wherever the data is created, through whatever processing (utilities) it has to run, to wherever it ends up, like modifying the DOM or mutating an object.
var theOnesINeed = compose(...);       // composed predicate, definition elided
var theOtherOnesINeed = compose(...);  // composed predicate, definition elided
var intoADifferentFormat = function(value){ return ... }
// handleEvent is a made-up name; the original snippet left the handler anonymous
function handleEvent(event){
    var a = someList.filter(theOnesINeed).map(intoADifferentFormat);
    var b = someOtherList.filter(theOtherOnesINeed);
    var rows = a.concat(b).map(wrap('<li>', '</li>')); // wrap returns a formatting function
    document.querySelector('#youYesYou').innerHTML = rows.join('\n');
}
so that all functions stay as short and simple as possible. And don't be afraid of descriptive names (unlike these way-too-general ones :))

Javascript single use variable declaration

This may be an obvious question, but I was wondering if there is an efficiency difference between declaring something as a one-time-use variable and simply executing the code once (rather than storing the value and then using it). For example:
var rowId = 3;
updateStuff(rowId);
VS
updateStuff(3);
I think that worrying about performance in this instance is largely irrelevant. These sorts of micro-optimizations don't really buy you all that much. In my opinion, the readability of the code here is more important, since it's better to identify the 3 as rowId than to leave it as a magic number.
If you are really concerned about optimization, minifying your code with something like the Google Closure Compiler would help; it has many tools that will help you make your code more efficient.
If you were using the Google Closure Compiler and annotated the var as a constant (as it appears in your example), I suspect the compiler would remove the constant and replace it with the literal.
My advice is to pick the readable option, and if you really need the micro-optimizations and reduction in source size, use the Google Closure Compiler or a minifier; a tool is more likely to accumulate enough micro-optimizations to make something noticeably faster, although I'd guess most of the saved milliseconds would come from the reduced network time after minification.
Also, var declarations do take time, and if you add that var to the global namespace, it becomes one more thing to be searched during name resolution.
With a modern browser that compiles your code, the generated code will be exactly the same unless the variable is referenced somewhere else as well.
Even with an old browser, if there is a difference it won't be something to worry about.
EDIT:
delete does not apply to non-objects, therefore my initial response was wrong. Furthermore, since the variable rowId is declared with var, it will be cleaned up by garbage collection. Had it, on the other hand, been defined without var, it would live for the duration of the page / application.
source: http://lostechies.com/derickbailey/2012/03/19/backbone-js-and-javascript-garbage-collection/
The variable rowId will be kept in memory and will not be released by the garbage collector while it is still referenced. Unless you release it afterwards, as below, it will be there until the end of the program's life. Also remember that it takes time to create the variable (minimal, but you asked):
var rowId = 3;
updateStuff(rowId);
delete rowId; // has no effect on var-declared variables; see the EDIT above
With efficiency in mind, then yes, there is a difference: your second example is the quickest and doesn't spend any extra resources.
N.B. Some languages do optimize such code and simply remove the assignment, but I strongly doubt that JavaScript does so.

Is giving arguments wrapped inside an object to a function a bad idea?

Usually, as my code base grows, functions receive more and more arguments, and it becomes a little sloppy to maintain, so I usually just default to something like this:
var f = function(args){ return args.v + 1; };
f({v: 2});
It looks way cleaner, but is it OK to do? Why isn't everyone doing this?
Usually if a function is taking too many parameters it is probably doing too much. You should look into refactoring it into smaller functions/objects.
Uncle Bob suggests functions with very few parameters in his book Clean Code, and I agree. It makes code difficult to read when you have many parameters or pass in an object or array: without reading the body of the function, I can't guess what it is going to do with all the parameters, or what it expects as parameters if you're passing in an object.
"it looks way more cleaner, but is it OK to do?"
Yes.
"Why isn't everyone doing this?"
Aren't they, though? This is standard practice for the most common "many parameter" situation in JavaScript: passing options. Since creating a one-off object in JavaScript costs next to nothing, it's common practice when you have more than a couple of parameters.
It is fine - as long as you keep a good level of control over the kinds of objects you pass around and validate their values as necessary (thinking of false / null / undefined type issues), this is good practice. It's also a lot easier to maintain.
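A sketch of that kind of defensive options handling (createWidget and its option names are made up for illustration):
function createWidget(options) {
    // Merge caller-supplied options over explicit defaults, so a missing
    // key never turns into an undefined deeper down.
    var opts = Object.assign({ v: 0, label: '' }, options);
    if (typeof opts.v !== 'number') {
        throw new TypeError('options.v must be a number');
    }
    return opts.label + ': ' + (opts.v + 1);
}
createWidget({ v: 2, label: 'count' }); // 'count: 3'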
