Any linter to warn about side-effects in JavaScript?

With the flexibility of JavaScript, we can write code full of side effects, or purely functional code.
I have been interested in functional JavaScript and want to start a project in this paradigm, and a linter for it could surely help me gather good practices. Is there any linter that enforces a pure, side-effect-free functional style?

Purity Analysis is equivalent to Solving the Halting Problem, so any kind of static analysis that can determine whether code is pure or impure is impossible in the general case. There will always be infinitely many programs for which it is undecidable whether or not they are pure; some of those programs will be pure, some impure.
Now, you deliberately used the term "linter" instead of static analyzer (although of course a linter is just a static analyzer), which seems to imply that you are fine with an approximate heuristic result. You can have a linter that will sometimes tell you that your code is pure, sometimes tell you that your code is impure, and most times tell you that it cannot decide whether your code is pure or impure. And you can have a whitelist of operations that are known to be pure (e.g. adding two Numbers using the + operator), and a blacklist of operations that are known to be impure (e.g. anything that can throw an exception, any sort of loops, if statements, Array.prototype.forEach) and do a heuristic scan for those.
But in the end, the results will be too unreliable to do anything serious with them.
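To make the heuristic idea concrete, here is a toy sketch of such a check (entirely illustrative: the walk helper is an assumed AST visitor, and the node types come from the ESTree format that tools like ESLint use):
var IMPURE_NODE_TYPES = new Set([
    'ForStatement', 'WhileStatement', 'ThrowStatement',
    'AssignmentExpression', 'UpdateExpression'
]);
function roughPurityCheck(ast, walk) {
    var verdict = 'unknown';
    walk(ast, function (node) {
        if (IMPURE_NODE_TYPES.has(node.type)) verdict = 'impure';
    });
    return verdict; // 'impure' or 'unknown' - it can never guarantee 'pure'
}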

I haven't used this myself but I found this plugin for ESLint: https://github.com/jfmengels/eslint-plugin-fp
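For illustration, a plausible configuration sketch (check the rule names against the plugin's README; this assumes it exposes rules such as fp/no-mutation):
{
    "plugins": ["fp"],
    "rules": {
        "fp/no-mutation": "error",
        "fp/no-let": "error",
        "fp/no-loops": "error",
        "fp/no-this": "error",
        "fp/no-throw": "error"
    }
}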

You cannot use JS completely without side effects. Every DOM access is a side effect, and we could argue whether the whole global namespace also falls under that definition.
The best you can do is stay reasonable. I'm splitting this logically into two groups:
the work horses (utilities): their purpose is to take some data and process it somehow. These are (mostly) side-effect free. Mostly, because sometimes these functions need some state, like a counter or a cache, which could be argued to be a side effect, but since this is isolated/enclosed in these functions I don't really care. These are like the functions you pass to Array#map() or to a promise's then(), and similar places.
and the management: these functions rarely do data processing on their own; they mostly orchestrate the data flow, from wherever it is created, through whatever processing (utilities) it has to run, up to where it ends, like modifying the DOM or mutating an object.
var theOnesINeed = compose(...);
var theOtherOnesINeed = compose(...);
var intoADifferentFormat = function (value) { return ... };

function onSomeEvent(event) { // e.g. an event handler
    var a = someList.filter(theOnesINeed).map(intoADifferentFormat);
    var b = someOtherList.filter(theOtherOnesINeed);
    var rows = a.concat(b).map(wrap('<li>', '</li>'));
    document.querySelector('#youYesYou').innerHTML = rows.join('\n');
}
so that all functions stay as short and simple as possible. And don't be afraid of descriptive names (not like these way-too-general ones :))

Why use of eval() is not recommended [duplicate]

The eval function is a powerful and easy way to dynamically generate code, so what are the caveats?
1. Improper use of eval opens up your code for injection attacks.
2. Debugging can be more challenging (no line numbers, etc.).
3. eval'd code executes more slowly (no opportunity to compile/cache eval'd code).
Edit: As #Jeff Walden points out in comments, #3 is less true today than it was in 2008. However, while some caching of compiled scripts may happen, this will be limited to scripts that are eval'd repeatedly with no modification. A more likely scenario is that you are eval'ing scripts that have undergone slight modifications each time and as such cannot be cached. Let's just say that SOME eval'd code executes more slowly.
eval isn't always evil. There are times where it's perfectly appropriate.
However, eval is currently and historically massively over-used by people who don't know what they're doing. That includes people writing JavaScript tutorials, unfortunately, and in some cases this can indeed have security consequences - or, more often, simple bugs. So the more we can do to throw a question mark over eval, the better. Any time you use eval you need to sanity-check what you're doing, because chances are you could be doing it a better, safer, cleaner way.
To give an all-too-typical example, to set the colour of an element with an id stored in the variable 'potato':
eval('document.' + potato + '.style.color = "red"');
If the authors of the kind of code above had a clue about the basics of how JavaScript objects work, they'd have realised that square brackets can be used instead of literal dot-names, obviating the need for eval:
document[potato].style.color = 'red';
...which is much easier to read as well as less potentially buggy.
(But then, someone who /really/ knew what they were doing would say:
document.getElementById(potato).style.color = 'red';
which is more reliable than the dodgy old trick of accessing DOM elements straight out of the document object.)
I believe it's because it can execute any JavaScript function from a string. Using it makes it easier for people to inject rogue code into the application.
It's generally only an issue if you're passing eval user input.
Two points come to mind:
Security (but as long as you generate the string to be evaluated yourself, this might be a non-issue)
Performance: as long as the code to be executed is unknown, it cannot be optimized. (On JavaScript and performance, see Steve Yegge's presentation.)
Passing user input to eval() is a security risk, but also each invocation of eval() creates a new instance of the JavaScript interpreter. This can be a resource hog.
Mainly, it's a lot harder to maintain and debug. It's like a goto. You can use it, but it makes it harder to find problems and harder on the people who may need to make changes later.
One thing to keep in mind is that you can often use eval() to execute code in an otherwise restricted environment - social networking sites that block specific JavaScript functions can sometimes be fooled by breaking them up in an eval block -
eval('al' + 'er' + 't(\'' + 'hi there!' + '\')');
So if you're looking to run some JavaScript code where it might not otherwise be allowed (Myspace, I'm looking at you...) then eval() can be a useful trick.
However, for all the reasons mentioned above, you shouldn't use it for your own code, where you have complete control - it's just not necessary, and better-off relegated to the 'tricky JavaScript hacks' shelf.
Unless you let eval() evaluate dynamic content (through CGI or input), it is as safe and solid as any other JavaScript in your page.
Along with the rest of the answers, I don't think eval'd statements can benefit from advanced minification.
It is a possible security risk, it has a different scope of execution, and it is quite inefficient, as it creates an entirely new scripting environment for the execution of the code. See the MDN documentation for eval for some more info.
It is quite useful, though, and used with moderation can add a lot of good functionality.
Unless you are 100% sure that the code being evaluated is from a trusted source (usually your own application) then it's a surefire way of exposing your system to a cross-site scripting attack.
It's not necessarily that bad provided you know what context you're using it in.
If your application is using eval() to create an object from some JSON which has come back from an XMLHttpRequest to your own site, created by your trusted server-side code, it's probably not a problem.
Untrusted client-side JavaScript code can't do that much anyway. Provided the thing you're executing eval() on has come from a reasonable source, you're fine.
It greatly reduces your level of confidence about security.
If you want the user to input some logical functions and evaluate AND and OR combinations, then the JavaScript eval function is perfect. I can accept two strings and eval(uate) string1 === string2, etc.
If you spot the use of eval() in your code, remember the mantra “eval() is evil.”
This function takes an arbitrary string and executes it as JavaScript code. When the code in question is known beforehand (not determined at runtime), there's no reason to use eval(). If the code is dynamically generated at runtime, there's often a better way to achieve the goal without eval(). For example, just using square bracket notation to access dynamic properties is better and simpler:
// antipattern
var property = "name";
alert(eval("obj." + property));
// preferred
var property = "name";
alert(obj[property]);
Using eval() also has security implications, because you might be executing code (for example coming from the network) that has been tampered with. This is a common antipattern when dealing with a JSON response from an Ajax request. In those cases it's better to use the browsers' built-in methods to parse the JSON response to make sure it's safe and valid. For browsers that don't support JSON.parse() natively, you can use a library from JSON.org.
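A minimal sketch of that antipattern and its fix (responseText stands in for the raw Ajax response):
// antipattern: executing the response as code
var data = eval('(' + responseText + ')');
// preferred: parse it as data
var data = JSON.parse(responseText);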
It’s also important to remember that passing strings to setInterval(), setTimeout(), and the Function() constructor is, for the most part, similar to using eval() and therefore should be avoided. Behind the scenes, JavaScript still has to evaluate and execute the string you pass as programming code:
// antipatterns
setTimeout("myFunc()", 1000);
setTimeout("myFunc(1, 2, 3)", 1000);
// preferred
setTimeout(myFunc, 1000);
setTimeout(function () {
    myFunc(1, 2, 3);
}, 1000);
Using the new Function() constructor is similar to eval() and should be approached with care. It could be a powerful construct but is often misused. If you absolutely must use eval(), you can consider using new Function() instead. There is a small potential benefit because the code evaluated in new Function() will be running in a local function scope, so any variables defined with var in the code being evaluated will not become globals automatically. Another way to prevent automatic globals is to wrap the eval() call into an immediate function.
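A quick sketch of that scoping difference (assumes a non-strict, top-level script; the variable names are made up):
eval("var leaked = 1;");              // direct eval runs in the calling scope,
console.log(typeof leaked);           // so at top level this logs "number"
new Function("var contained = 1;")(); // runs inside its own function scope,
console.log(typeof contained);        // so nothing leaks: logs "undefined"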
EDIT: As Benjie's comment suggests, this no longer seems to be the case as of Chrome v108; it would seem that Chrome can now handle garbage collection of eval'd scripts. The original answer follows.
Garbage collection
The browser's garbage collector has no idea whether the code that's eval'd can be removed from memory, so it just keeps it stored until the page is reloaded.
Not too bad if your users are only on your page briefly, but it can be a problem for web apps.
Here's a script to demo the problem
https://jsfiddle.net/CynderRnAsh/qux1osnw/
document.getElementById("evalLeak").onclick = (e) => {
    for (let x = 0; x < 100; x++) {
        eval(x.toString());
    }
};
Something as simple as the above code causes a small amount of memory to be stored until the app dies.
This is worse when the eval'd script is a giant function that is called on an interval.
Besides the possible security issues if you are executing user-submitted code, most of the time there's a better way that doesn't involve re-parsing the code every time it's executed. Anonymous functions or object properties can replace most uses of eval and are much safer and faster.
This may become more of an issue as the next generation of browsers come out with some flavor of a JavaScript compiler. Code executed via Eval may not perform as well as the rest of your JavaScript against these newer browsers. Someone should do some profiling.
This is one of the good articles talking about eval() and how it is not evil:
http://www.nczonline.net/blog/2013/06/25/eval-isnt-evil-just-misunderstood/
I’m not saying you should go run out and start using eval()
everywhere. In fact, there are very few good use cases for running
eval() at all. There are definitely concerns with code clarity,
debugability, and certainly performance that should not be overlooked.
But you shouldn’t be afraid to use it when you have a case where
eval() makes sense. Try not using it first, but don’t let anyone scare
you into thinking your code is more fragile or less secure when eval()
is used appropriately.
eval() is very powerful and can be used to execute a JS statement or evaluate an expression. But the question isn't about the uses of eval(); let's just say the string you run with eval() is somehow affected by a malicious party. At the end you will be running malicious code. With power comes great responsibility, so use it wisely if you are using it.
This isn't related much to the eval() function, but this article has pretty good information:
http://blogs.popart.com/2009/07/javascript-injection-attacks/
If you are looking for the basics of eval() look here:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval
The JavaScript Engine has a number of performance optimizations that it performs during the compilation phase. Some of these boil down to being able to essentially statically analyze the code as it lexes, and pre-determine where all the variable and function declarations are, so that it takes less effort to resolve identifiers during execution.
But if the Engine finds an eval(..) in the code, it essentially has to assume that all its awareness of identifier location may be invalid, because it cannot know at lexing time exactly what code you may pass to eval(..) to modify the lexical scope, or the contents of the object you may pass to with to create a new lexical scope to be consulted.
In other words, in the pessimistic sense, most of those optimizations it would make are pointless if eval(..) is present, so it simply doesn't perform the optimizations at all.
This explains it all.
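A minimal sketch of why (adapted from the YDKJS chapter linked below; the variable names are illustrative):
function foo(str, a) {
    eval(str);         // can modify foo's lexical scope at runtime
    console.log(a, b); // `b` only exists if the eval'd string created it
}
foo("var b = 3;", 1);  // logs: 1 3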
References:
https://github.com/getify/You-Dont-Know-JS/blob/master/scope%20&%20closures/ch2.md#eval
https://github.com/getify/You-Dont-Know-JS/blob/master/scope%20&%20closures/ch2.md#performance
It's not always a bad idea. Take, for example, code generation. I recently wrote a library called Hyperbars which bridges the gap between virtual-dom and handlebars. It does this by parsing a handlebars template and converting it to hyperscript, which is subsequently used by virtual-dom. The hyperscript is generated as a string first, and before returning it, eval() is used to turn it into executable code. I have found eval() in this particular situation to be the exact opposite of evil.
Basically from
<div>
    {{#each names}}
        <span>{{this}}</span>
    {{/each}}
</div>
To this
(function (state) {
    var Runtime = Hyperbars.Runtime;
    var context = state;
    return h('div', {}, [Runtime.each(context['names'], context, function (context, parent, options) {
        return [h('span', {}, [options['#index'], context])]
    })])
}.bind({}))
The performance of eval() isn't an issue in a situation like this because you only need to interpret the generated string once and then reuse the executable output many times over.
You can see how the code generation was achieved here, if you're curious.
I would go as far as to say that it doesn't really matter if you use eval() in JavaScript that is run in browsers* (caveat below).
All modern browsers have a developer console where you can execute arbitrary javascript anyway and any semi-smart developer can look at your JS source and put whatever bits of it they need to into the dev console to do what they wish.
*As long as your server endpoints have the correct validation & sanitisation of user supplied values, it should not matter what gets parsed and eval'd in your client side javascript.
If you were to ask if it's suitable to use eval() in PHP however, the answer is NO, unless you whitelist any values which may be passed to your eval statement.
I won't attempt to refute anything said heretofore, but I will offer this use of eval() that (as far as I know) can't be done any other way. There are probably other ways to code this, and probably ways to optimize it, but this is done longhand and without any bells and whistles for clarity's sake, to illustrate a use of eval that really doesn't have any other alternatives. That is: dynamically (or more accurately, programmatically) created object names (as opposed to values).
//Place this in a common/global JS lib:
var NS = function (namespace) {
    var namespaceParts = String(namespace).split(".");
    var namespaceToTest = "";
    for (var i = 0; i < namespaceParts.length; i++) {
        if (i === 0) {
            namespaceToTest = namespaceParts[i];
        } else {
            namespaceToTest = namespaceToTest + "." + namespaceParts[i];
        }
        if (eval('typeof ' + namespaceToTest) === "undefined") {
            eval(namespaceToTest + ' = {}');
        }
    }
    return eval(namespace);
};
//Then, use this in your class definition libs:
NS('Root.Namespace').Class = function(settings){
//Class constructor code here
}
//some generic method:
Root.Namespace.Class.prototype.Method = function(args){
//Code goes here
//this.MyOtherMethod("foo")); // => "foo"
return true;
}
//Then, in your applications, use this to instantiate an instance of your class:
var anInstanceOfClass = new Root.Namespace.Class(settings);
EDIT: By the way, I wouldn't suggest (for all the security reasons pointed out heretofore) that you base your object names on user input. I can't imagine any good reason you'd want to do that, though. Still, I thought I'd point out that it wouldn't be a good idea :)

Adding methods to a Prototype vs existing Prototype Javascript [duplicate]

In JavaScript, we have two ways of making a "class" and giving it public functions.
Method 1:
function MyClass() {
    var privateInstanceVariable = 'foo';
    this.myFunc = function() { alert(privateInstanceVariable); }
}
Method 2:
function MyClass() { }
MyClass.prototype.myFunc = function() {
    alert("I can't use private instance variables. :(");
}
I've read numerous times people saying that using Method 2 is more efficient as all instances share the same copy of the function rather than each getting their own. Defining functions via the prototype has a huge disadvantage though - it makes it impossible to have private instance variables.
Even though, in theory, using Method 1 gives each instance of an object its own copy of the function (and thus uses way more memory, not to mention the time required for allocations) - is that what actually happens in practice? It seems like an optimization web browsers could easily make is to recognize this extremely common pattern, and actually have all instances of the object reference the same copy of functions defined via these "constructor functions". Then it could only give an instance its own copy of the function if it is explicitly changed later on.
Any insight - or, even better, real world experience - about performance differences between the two, would be extremely helpful.
See http://jsperf.com/prototype-vs-this
Declaring your methods via the prototype is faster, but whether or not this is relevant is debatable.
If you have a performance bottleneck in your app it is unlikely to be this, unless you happen to be instantiating 10000+ objects on every step of some arbitrary animation, for example.
If performance is a serious concern, and you'd like to micro-optimise, then I would suggest declaring via prototype. Otherwise, just use the pattern that makes most sense to you.
I'll add that, in JavaScript, there is a convention of prefixing properties that are intended to be seen as private with an underscore (e.g. _process()). Most developers will understand and avoid these properties, unless they're willing to forgo the social contract, but in that case you might as well not cater to them. What I mean to say is that: you probably don't really need true private variables...
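A small sketch of that convention (the names here are made up):
function Account(owner) {
    this.owner = owner;   // intended as public
    this._balance = 0;    // underscore: "treat as private", by convention only
}
Account.prototype._applyFee = function (fee) {
    this._balance -= fee; // still technically accessible, but flagged internal
};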
In the new version of Chrome, this.method is about 20% faster than prototype.method, but creating new objects is still slower.
If you can reuse the object instead of always creating a new one, this can be 50% - 90% faster than creating new objects. Plus there's the benefit of no garbage collection, which is huge:
http://jsperf.com/prototype-vs-this/59
It only makes a difference when you're creating lots of instances. Otherwise, the performance of calling the member function is exactly the same in both cases.
I've created a test case on jsperf to demonstrate this:
http://jsperf.com/prototype-vs-this/10
You might not have considered this, but putting the method directly on the object is actually better in one way:
Method invocations are very slightly faster (jsperf) since the prototype chain does not have to be consulted to resolve the method.
However, the speed difference is almost negligible. On top of that, putting a method on a prototype is better in two more impactful ways:
Faster to create instances (jsperf)
Uses less memory
Like James said, this difference can be important if you are instantiating thousands of instances of a class.
That said, I can certainly imagine a JavaScript engine that recognizes that the function you are attaching to each object does not change across instances and thus only keeps one copy of the function in memory, with all instance methods pointing to the shared function. In fact, it seems that Firefox is doing some special optimization like this but Chrome is not.
ASIDE:
You are right that it is impossible to access private instance variables from inside methods on prototypes. So I guess the question you must ask yourself is: do you value being able to make instance variables truly private over utilizing inheritance and prototyping? I personally think that making variables truly private is not that important and would just use the underscore prefix (e.g., "this._myVar") to signify that although the variable is public, it should be considered private. That said, in ES6, there is apparently a way to have the best of both worlds!
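One commonly cited ES6-era option (a sketch; this may or may not be what the answer had in mind) is to keep per-instance private state in a WeakMap while methods stay shared on the prototype:
var privates = new WeakMap();
function MyClass() {
    privates.set(this, { secret: 'foo' }); // per-instance private state
}
MyClass.prototype.myFunc = function () {
    alert(privates.get(this).secret);      // shared method, private data
};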
You may use this approach and it will allow you to use prototype and access instance variables.
var Person = (function () {
    function Person(age, name) {
        this.age = age;
        this.name = name;
    }
    Person.prototype.showDetails = function () {
        alert('Age: ' + this.age + ' Name: ' + this.name);
    };
    return Person; // This is not referencing `var Person` but the Person function
}()); // See Note1 below
Note1:
The parentheses will call the function (a self-invoking function) and assign the result to var Person.
Usage
var p1 = new Person(40, 'George');
var p2 = new Person(55, 'Jerry');
p1.showDetails();
p2.showDetails();
In short, use method 2 for creating properties/methods that all instances will share. Those will be "global" and any change to it will be reflected across all instances. Use method 1 for creating instance specific properties/methods.
I wish I had a better reference but for now take a look at this. You can see how I used both methods in the same project for different purposes.
Hope this helps. :)
This answer should be considered an expansion of the rest of the answers filling in missing points. Both personal experience and benchmarks are incorporated.
As far as my experience goes, I religiously use constructors to literally construct my objects, whether methods are private or not. The main reason is that when I started, that was the easiest immediate approach for me, so it's not a special preference. It might have been as simple as liking visible encapsulation, while prototypes feel a bit disembodied. My private methods are assigned as variables in the scope as well. Although this is my habit and keeps things nicely self-contained, it's not always the best habit, and I do sometimes hit walls. Apart from wacky scenarios with highly dynamic self-assembly according to configuration objects and code layout, it tends to be the weaker approach in my opinion, particularly if performance is a concern. Knowing that the internals are private is useful, but you can achieve that via other means with the right discipline. Unless performance is a serious consideration, use whatever works best for the task at hand.
Using prototype inheritance and a convention to mark items as private does make debugging easier, as you can then traverse the object graph easily from the console or debugger. On the other hand, such a convention makes obfuscation somewhat harder and makes it easier for others to bolt their own scripts onto your site. This is one of the reasons the private-scope approach gained popularity. It's not true security; it just adds resistance. Unfortunately a lot of people still think it's a genuine way to program secure JavaScript. Since debuggers have gotten really good, code obfuscation takes its place. If you're looking for security flaws where too much is on the client, it's a design pattern you might want to look out for.
A convention allows you to have protected properties with little fuss. That can be a blessing and a curse. It does ease some inheritance issues as it is less restrictive. You still do have the risk of collision or increased cognitive load in considering where else a property might be accessed. Self assembling objects let you do some strange things where you can get around a number of inheritance problems but they can be unconventional. My modules tend to have a rich inner structure where things don't get pulled out until the functionality is needed elsewhere (shared) or exposed unless needed externally. The constructor pattern tends to lead to creating self contained sophisticated modules more so than simply piecemeal objects. If you want that then it's fine. Otherwise if you want a more traditional OOP structure and layout then I would probably suggest regulating access by convention. In my usage scenarios complex OOP isn't often justified and modules do the trick.
All of the tests here are minimal. In real-world usage it is likely that modules will be more complex, making the hit a lot greater than the tests here will indicate. It's quite common to have a private variable with multiple methods working on it, and each of those methods will add more overhead on initialisation that you won't get with prototype inheritance. In most cases it doesn't matter, because only a few instances of such objects float around, although cumulatively it might add up.
There is an assumption that prototype methods are slower to call because of the prototype lookup. It's not an unfair assumption; I made the same one myself until I tested it. In reality it's complex, and some tests suggest that aspect is trivial. Between prototype.m = f, this.m = f, and this.m = function..., the latter performs significantly better than the first two, which perform around the same. If the prototype lookup alone were a significant issue, then the last two would instead outperform the first significantly. Instead, something else strange is going on, at least where Canary is concerned. It's possible functions are optimised according to what they are members of. A multitude of performance considerations come into play. You also have differences for parameter access and variable access.
Memory Capacity. It's not well discussed here. An assumption you can make up front that's likely to be true is that prototype inheritance will usually be far more memory efficient and according to my tests it is in general. When you build up your object in your constructor you can assume that each object will probably have its own instance of each function rather than shared, a larger property map for its own personal properties and likely some overhead to keep the constructor scope open as well. Functions that operate on the private scope are extremely and disproportionately demanding of memory. I find that in a lot of scenarios the proportionate difference in memory will be much more significant than the proportionate difference in CPU cycles.
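A quick way to see the sharing difference this paragraph describes:
function A() { this.m = function () {}; } // one closure per instance
function B() {}
B.prototype.m = function () {};           // one function shared by all
console.log(new A().m === new A().m);     // false - two function objects
console.log(new B().m === new B().m);     // true  - the same shared function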
Memory Graph. You can also jam up the engine, making GC more expensive. Profilers do tend to show time spent in GC these days. It's not only a problem when it comes to allocating and freeing more. You'll also create a larger object graph to traverse and things like that, so the GC consumes more cycles. If you create a million objects and then hardly touch them, depending on the engine it might turn out to have more of an ambient performance impact than you expected. I have proven that this does at least make the GC run for longer when objects are disposed of. That is, there tends to be a correlation between the memory used and the time it takes to GC. However, there are cases where the time is the same regardless of the memory. This indicates that the graph makeup (layers of indirection, item count, etc.) has more impact. That's not something that is always easy to predict.
Not many people use chained prototypes extensively, myself included, I have to admit. Prototype chains can be expensive in theory. Someone will have measured the cost, but I've not. If you instead build your objects entirely in the constructor, and then have a chain of inheritance where each constructor calls a parent constructor upon itself, in theory method access should be much faster. On the other hand, you can accomplish the equivalent if it matters (such as flattening the prototypes down the ancestor chain), and you don't mind breaking things like hasOwnProperty, and perhaps instanceof, if you really need it. In either case, things start to get complex once you go down this road when it comes to performance hacks. You'll probably end up doing things you shouldn't be doing.
Many people don't directly use either approach you've presented. Instead they make their own things using anonymous objects allowing method sharing any which way (mixins for example). There are a number of frameworks as well that implement their own strategies for organising modules and objects. These are heavily convention based custom approaches. For most people and for you your first challenge should be organisation rather than performance. This is often complicated in that Javascript gives many ways of achieving things versus languages or platforms with more explicit OOP/namespace/module support. When it comes to performance I would say instead to avoid major pitfalls first and foremost.
There's a new Symbol type that's supposed to work for private variables and methods. There are a number of ways to use this and it raises a host of questions related to performance and access. In my tests the performance of Symbols wasn't great compared to everything else but I never tested them thoroughly.
Disclaimers:
There are lots of discussions about performance and there isn't always a permanently correct answer for this as usage scenarios and engines change. Always profile but also always measure in more than one way as profiles aren't always accurate or reliable. Avoid significant effort into optimisation unless there's definitely a demonstrable problem.
It's probably better instead to include performance checks for sensitive areas in automated testing and to run when browsers update.
Remember that sometimes battery life matters as well as perceptible performance. The slowest solution might turn out faster after running an optimising compiler on it (i.e., a compiler might have a better idea of when restricted-scope variables are accessed than properties marked as private by convention). Consider backends such as node.js. These can require better latency and throughput than you would often find on the browser. Most people won't need to worry about these things with something like validation for a registration form, but the number of diverse scenarios where such things might matter is growing.
You have to be careful with memory-allocation tracking tools when you want to persist the result. In some cases where I didn't return and persist the data, it was optimised out entirely, or the sample rate was not sufficient between instantiated/unreferenced, leaving me scratching my head as to how an array initialised and filled to a million registered as 3.4KiB in the allocation profile.
In the real world, in most cases the only way to really optimise an application is to write it in the first place so you can measure it. There are dozens to hundreds of factors that can come into play, if not thousands, in any given scenario. Engines also do things that can lead to asymmetric or non-linear performance characteristics. If you define functions in a constructor, they might be arrow functions or traditional ones; each behaves differently in certain situations, and I have no idea about the other function types. Classes also don't behave the same in terms of performance as prototyped constructors that should be equivalent. You need to be really careful with benchmarks as well. Prototyped classes can have deferred initialisation in various ways, especially if you prototyped your properties as well (advice: don't). This means that you can understate initialisation cost and overstate access/property-mutation cost. I have also seen indications of progressive optimisation. In these cases, I have filled a large array with instances of identical objects, and as the number of instances increases, the objects appear to be incrementally optimised for memory up to a point where the remainder is the same. It is also possible that those optimisations can significantly impact CPU performance too. These things depend heavily not merely on the code you write but on what happens at runtime, such as the number of objects, the variance between objects, etc.

Why is using exceptions more acceptable in Python than JavaScript?

You can replace "javascript" with other languages here. Basically, what I've found from reading is that Python actively encourages the use of exceptions, rather than a series of if tests, to manage code. Readability is often cited, as well as cleaner-looking code when 'duck-typing'.
However, often when working in javascript or some other languages, it seems that best practices suggest trying to 'code defensively' and cover as much as you can in if statements and return types in order to avoid using exceptions. The reason most often cited is because exceptions are a very expensive operation.
Here's an example:
https://stackoverflow.com/a/8987401/2668545
Does Python face the same exception cost as JavaScript, and are the best practices such because there's more emphasis on readability/debuggability than on performance?
Does python have a different way of handling exceptions than javascript or other languages that don't recommend using exceptions?
Am I misinterpreting the advice?
Or is it something else?
My take on this would be that though Python is a dynamically typed language, it is strongly typed at the same time, see the explanation here. This means that if something goes wrong deep down the call hierarchy (like trying to convert an empty string to an integer, division by zero, etc.), the interpreter raises an exception that bubbles up the call graph.
Javascript and many other interpreted languages tend to gloss such things over and continue silently computing (rubbish) as long as possible. Essentially, the programmer has to defend against Javascript itself.
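A few illustrative expressions where JavaScript silently keeps computing while Python would raise an error (e.g. ValueError or ZeroDivisionError):
Number("")    // 0 - where Python's int("") raises ValueError
"five" - 1    // NaN - the rubbish then propagates through later math
1 / 0         // Infinity - where Python raises ZeroDivisionError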
It is thus consistent when a user-defined Python module behaves in the same way as the standard library modules and the interpreter itself: achieve the expected result or raise an exception.
The advantages are:
Improved readability: the expected sequence of actions is not mixed with error handling.
Extra polymorphism. It can be possible to pass any object as a function argument, and things will work out if the object has the properties/members used by the function. The programmer writing the function does not have to know in advance the exact type of the argument (dynamic typing). If something goes wrong, there will be a call trace to investigate. Defensive checks are likely to be too restrictive and not 100% bullet-proof at the same time.
The considerations of readability and extensibility are probably more important than the ones of performance.

Any way to check whether a function has external side-effects (or enforce that it won't)?

I am interested in Javascript frameworks like Rivertrail, which provides a parallel programming abstraction, but only provides well defined semantics if the functions you pass in to do the work have no external side effects.
I'm wondering if there is any way to either write a check that a passed-in function is side-effect free, or to hide everything in the environment from a function (so that it has nothing to modify), or to temporarily declare everything in the environment "const" (but just for the scope of the definition of the function). (Or any other crazy idea that would give me the equivalent of simply declaring __attribute__((pure)) in gcc.)
Parallel programming is one of those places where "pure" functional programming can really make a difference. Without pure functions, defining reasonable semantics for parallel programs gets really hard (check out the semantics of C++, where they had to define what a data race is so that they could declare that any program that has one is undefined).
I can imagine other places where someone might find this useful, if it were possible. It might be useful to allow users of a web-page to pass in guaranteed pure no-side-effect Javascript functions that check for certain conditions during a query, for example.
write a check that a passed in function is side-effect free?
This is "effect analysis" - a form of type checking.
Since Javascript doesn't have a type and effect system, any approach to detecting side effects will necessarily be approximate and rely on heuristics. There's no way to do this in general for Javascript.

JavaScript - What Level of Code Optimization can one expect?

So, I am fairly new to JavaScript coding, though not new to coding in general. When writing source code I generally have in mind the environment my code will run in (e.g. a virtual machine of some sort) - and with it the level of code optimization one can expect. (1)
In Java for example, I might write something like this,
Foo foo = FooFactory.getFoo(Bar.someStaticStuff("qux","gak",42));
blub.doSomethingImportantWithAFooObject(foo);
even if the foo object is only used at this very location (thus introducing a needless variable declaration). Firstly, it is my opinion that the code above is far more readable than the inlined version
blub.doSomethingImportantWithAFooObject(FooFactory.getFoo(Bar.someStaticStuff("qux","gak",42)));
and secondly I know that Java compiler code optimization will take care of this anyway, i.e. the actual Java VM code will end up being inlined - so performance-wise, there is no difference between the two. (2)
Now to my actual Question:
What Level of Code Optimization can I expect in JavaScript in general?
I assume this depends on the JavaScript engine - but as my code will end up running in many different browsers lets just assume the worst and look at the worst case. Can I expect a moderate level of code optimization? What are some cases I still have to worry about?
(1) I do realize that finding good/the best algorithms and writing well organized code is more important and has a bigger impact on performance than a bit of code optimization. But that would be a different question.
(2) Now, I realize that the actual difference, were there no optimization, is small. But that is beside the point. There are surely features which are optimized quite efficiently; I was just too lazy to write one down. Just imagine the above snippet inside a for loop which is called 100,000 times.
Don't expect much from the optimization; there won't be
tail-recursion optimization,
loop unrolling,
function inlining,
etc.
As client-side JavaScript is not designed to do heavy CPU work, the optimization won't make a huge difference.
There are some guidelines for writing high-performance JavaScript code; most are minor technical points, like:
Don't use certain features like eval(), arguments.callee, etc., which will prevent the JS engine from generating high-performance code.
Use native features over hand-written ones: don't write your own containers, JSON parser, etc.
Use local variables instead of global ones (see the sketch below).
Never use a for-each loop over an array.
Use parseInt() rather than Math.floor.
AND stay away from jQuery.
All these techniques are more like experience things, and may have some reasonable explanations behind them. So you will have to spend some time searching around, or try jsPerf, to help you decide which approach is better.
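As a sketch of the local-variable guideline above (the names are illustrative):
function sumFloors(values) {
    var floor = Math.floor;  // cache the global-object lookup in a local
    var total = 0;
    for (var i = 0, len = values.length; i < len; i++) {
        total += floor(values[i]);
    }
    return total;
}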
When you release the code, use the Closure Compiler to take care of dead branches and unnecessary variables; this will not boost your performance a lot, but it will make your code smaller.
Generally speaking, the final performance depends highly on how well your code is organized and how carefully your algorithms are designed, rather than on how the optimizer performs.
Take your example above (assuming FooFactory.getFoo() and Bar.someStaticStuff("qux","gak",42) always return the same result, and that Bar and FooFactory are stateless, i.e. someStaticStuff() and getFoo() won't change anything):
for (int i = 0; i < 10000000; i++)
    blub.doSomethingImportantWithAFooObject(
        FooFactory.getFoo(Bar.someStaticStuff("qux","gak",42)));
Even g++ with the -O3 flag can't make that code faster, for the compiler can't tell whether Bar and FooFactory are stateless or not. So this kind of code should be avoided in any language.
You are right, the level of optimization differs from JS VM to VM. But! There is a way of working around that. There are several tools that will optimize/minimize your code for you. One of the most popular ones is by Google: it's called the Closure Compiler. You can try out the web version, and there is a cmd-line version for build scripts etc. Besides that, there is not much I would try in the way of optimization, because after all JavaScript is sort of fast enough.
In general, I would posit that unless you're playing really dirty with your code (leaving all your vars at global scope, creating a lot of DOM objects, making expensive AJAX calls to non-optimal datasources, etc.), the real trick with optimizing performance will be in managing all the other things you're loading in at run-time.
Loading dozens on dozens of images, or animating huge background images, and pulling in large numbers of scripts and css files can all have much greater impact on performance than even moderately-complex Javascript that is written well.
That said, a quick Google search turns up several sources on Javascript performance optimization:
http://www.developer.nokia.com/Community/Wiki/JavaScript_Performance_Best_Practices
http://www.nczonline.net/blog/2009/02/03/speed-up-your-javascript-part-4/
http://mir.aculo.us/2010/08/17/when-does-javascript-trigger-reflows-and-rendering/
As two of those links point out, the most expensive operations in a browser are reflows (where the browser has to redraw the interface due to DOM manipulation), so that's where you're going to want to be the most cautious in terms of performance. Some of that can be alleviated by being smart about what you're modifying on the fly (for example, it's less expensive to apply a class than to modify inline styles ad hoc), so separating your concerns (style from data) will be really important.
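For example, a sketch of the class-vs-inline-styles point (the element and class name are made up):
var el = document.querySelector('#status');
// preferred: one class toggle, with the styling defined in the stylesheet
el.classList.add('is-highlighted');
// more expensive: several ad hoc inline-style writes
el.style.color = 'red';
el.style.fontWeight = 'bold';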
Making only the modifications you have to in order to get the job done (i.e., rather than doing the "HULK SMASH (DOM)!" method of replacing entire chunks of pages with AJAX calls to screen-scraping remote sources, instead calling for JSON data to update only the minimum number of elements needed), and other common-sense approaches, will get you a lot farther than hours of minor tweaking of a for-loop (though, again, common sense will get you pretty far there, too).
Good luck!
