Why were namespaces removed from ECMAScript consideration?

Namespaces were once a consideration for ECMAScript (the old ECMAScript 4) but were taken out. As Brendan Eich says in this message:
One of the use-cases for namespaces in ES4 was early binding (use namespace intrinsic), both for performance and for programmer comprehension -- no chance of runtime name binding disagreeing with any earlier binding. But early binding in any dynamic code loading scenario like the web requires a prioritization or reservation mechanism to avoid early versus late binding conflicts.
Plus, as some JS implementors have noted with concern, multiple open namespaces impose runtime cost unless an implementation works significantly harder.
For these reasons, namespaces and early binding (like packages before them, this past April) must go.
But I'm not sure I understand all of that. What exactly is a prioritization or reservation mechanism and why would either of those be needed? Also, must early binding and namespaces go hand-in-hand? For some reason I can't wrap my head around the issues involved. Can anyone attempt a more fleshed out explanation?
Also, why would namespaces impose runtime costs? In my mind I can't help but see little difference in concept between a namespace and a function using closures. For instance, Yahoo and Google both have YAHOO and google objects that "act like" namespaces in that they contain all of their public and private variables, functions, and objects within a single access point. So why, then, would a namespace be so significantly different in implementation? Maybe I just have a misconception as to what a namespace is exactly.
For the sake of the bounty I'd like to know two things:
Does a namespace need early binding?
How does a namespace implementation differ from an object with private members?

If you declare a variable within a scope after defining a function that refers to that variable, the function still uses the scoped variable when it is eventually called (declarations are hoisted):
function ShowMe() {
    alert(myVar);
}

var myVar = "cool";
ShowMe(); // alerts "cool"
This process would get another layer more complicated with namespacing.
Beyond this, there are numerous namespace helpers, along with extend/applyIf etc., that can do a lot of the same things -- namespace() in ExtJS or $.extend in jQuery, for example. So it may be nice to have, but it isn't an absolute need within the constructs of the language. I think that formalizing some of the extensions for Array, and support for ISO-8601 dates in Date, are far more important myself -- over having to simply check each namespace layer for definition...
window.localization = $.extend(window.localization || {}, {
    ...
});
window.localization.en = $.extend(window.localization.en || {}, {
    ...
});

OK, first some of the terminology:
Early binding -- check/validate when the line of code is parsed.
Late binding -- check/validate when the line of code is executed.
Prioritisation/reservation -- I think they mean that if you do early binding and check the namespaces as the code is parsed, there must be a process to validate a variable name when the variable is added dynamically later on, and a different set of rules for what is valid, as there may be several scopes involved at that point.
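To make the distinction concrete, here is a tiny sketch in plain JavaScript (the names report and stats are made up): the name is only looked up when the line actually runs, which is late binding.
function report() {
    // The name `stats` is not checked when this function is parsed;
    // it is only looked up when the line below actually executes.
    console.log(stats.total);
}

var stats = { total: 42 };
report(); // 42 -- the lookup happens at call time (late binding)

stats = { total: 7 };
report(); // 7 -- same line of code, different binding at execution time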
I am frankly surprised that only these quite technical reasons were given. Take the common case:
onclick='var mymark = "donethat";'
What namespace should "mymark" belong to? There doesn't seem to be any easy answer to that question, especially given that the snippet
window.forms.myform.mybutton.onClick = 'var mymark = "donethat";'
should put the variable in the same namespace.


Why use of eval() is not recommended [duplicate]

The eval function is a powerful and easy way to dynamically generate code, so what are the caveats?
Improper use of eval opens up your code for injection attacks.
Debugging can be more challenging (no line numbers, etc.).
eval'd code executes slower (no opportunity to compile/cache eval'd code).
Edit: As @Jeff Walden points out in comments, #3 is less true today than it was in 2008. However, while some caching of compiled scripts may happen, this will only be limited to scripts that are eval'd repeatedly with no modification. A more likely scenario is that you are eval'ing scripts that have undergone slight modification each time and as such cannot be cached. Let's just say that SOME eval'd code executes more slowly.
eval isn't always evil. There are times where it's perfectly appropriate.
However, eval is currently and historically massively over-used by people who don't know what they're doing. That includes people writing JavaScript tutorials, unfortunately, and in some cases this can indeed have security consequences - or, more often, simple bugs. So the more we can do to throw a question mark over eval, the better. Any time you use eval you need to sanity-check what you're doing, because chances are you could be doing it a better, safer, cleaner way.
To give an all-too-typical example, to set the colour of an element with an id stored in the variable 'potato':
eval('document.' + potato + '.style.color = "red"');
If the authors of the kind of code above had a clue about the basics of how JavaScript objects work, they'd have realised that square brackets can be used instead of literal dot-names, obviating the need for eval:
document[potato].style.color = 'red';
...which is much easier to read as well as less potentially buggy.
(But then, someone who /really/ knew what they were doing would say:
document.getElementById(potato).style.color = 'red';
which is more reliable than the dodgy old trick of accessing DOM elements straight out of the document object.)
I believe it's because it can execute any JavaScript passed to it as a string. Using it makes it easier for people to inject rogue code into the application.
It's generally only an issue if you're passing user input to eval.
Two points come to mind:
Security (but as long as you generate the string to be evaluated yourself, this might be a non-issue)
Performance: as long as the code to be executed is unknown, it cannot be optimized. (On JavaScript and performance, see Steve Yegge's presentation.)
Passing user input to eval() is a security risk, but also each invocation of eval() creates a new instance of the JavaScript interpreter. This can be a resource hog.
Mainly, it's a lot harder to maintain and debug. It's like a goto. You can use it, but it makes it harder to find problems and harder on the people who may need to make changes later.
One thing to keep in mind is that you can often use eval() to execute code in an otherwise restricted environment - social networking sites that block specific JavaScript functions can sometimes be fooled by breaking them up in an eval block -
eval('al' + 'er' + 't(\'' + 'hi there!' + '\')');
So if you're looking to run some JavaScript code where it might not otherwise be allowed (Myspace, I'm looking at you...) then eval() can be a useful trick.
However, for all the reasons mentioned above, you shouldn't use it for your own code, where you have complete control - it's just not necessary, and better-off relegated to the 'tricky JavaScript hacks' shelf.
Unless you let eval() evaluate dynamic content (through CGI or user input), it is as safe and solid as all the other JavaScript on your page.
Along with the rest of the answers, I don't think eval'd statements can benefit from advanced minification.
It is a possible security risk, it has a different scope of execution, and is quite inefficient, as it creates an entirely new scripting environment for the execution of the code. See here for some more info: eval.
It is quite useful, though, and used with moderation can add a lot of good functionality.
Unless you are 100% sure that the code being evaluated is from a trusted source (usually your own application) then it's a surefire way of exposing your system to a cross-site scripting attack.
It's not necessarily that bad provided you know what context you're using it in.
If your application is using eval() to create an object from some JSON which has come back from an XMLHttpRequest to your own site, created by your trusted server-side code, it's probably not a problem.
Untrusted client-side JavaScript code can't do that much anyway. Provided the thing you're executing eval() on has come from a reasonable source, you're fine.
It greatly reduces your level of confidence about security.
If you want the user to input some logical expressions and evaluate them for AND and OR, then the JavaScript eval function is perfect. I can accept two strings and eval(uate) string1 === string2, etc.
If you spot the use of eval() in your code, remember the mantra “eval() is evil.” This function takes an arbitrary string and executes it as JavaScript code. When the code in question is known beforehand (not determined at runtime), there’s no reason to use eval(). If the code is dynamically generated at runtime, there’s often a better way to achieve the goal without eval(). For example, just using square bracket notation to access dynamic properties is better and simpler:
// antipattern
var property = "name";
alert(eval("obj." + property));
// preferred
var property = "name";
alert(obj[property]);
Using eval() also has security implications, because you might be executing code (for example, coming from the network) that has been tampered with. This is a common antipattern when dealing with a JSON response from an Ajax request. In those cases it’s better to use the browsers’ built-in methods to parse the JSON response to make sure it’s safe and valid. For browsers that don’t support JSON.parse() natively, you can use a library from JSON.org.
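For instance, a minimal sketch of the preferred approach (the response string here is made up, standing in for xhr.responseText):
// Antipattern: executes whatever the server happened to send.
// var data = eval("(" + responseText + ")");

// Preferred: JSON.parse() only accepts valid JSON and throws otherwise.
var responseText = '{"user": "bob", "admin": false}';
var data = JSON.parse(responseText);
alert(data.user); // "bob"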
It’s also important to remember that passing strings to setInterval(), setTimeout(), and the Function() constructor is, for the most part, similar to using eval() and therefore should be avoided. Behind the scenes, JavaScript still has to evaluate and execute the string you pass as programming code:
// antipatterns
setTimeout("myFunc()", 1000);
setTimeout("myFunc(1, 2, 3)", 1000);
// preferred
setTimeout(myFunc, 1000);
setTimeout(function () {
    myFunc(1, 2, 3);
}, 1000);
Using the new Function() constructor is similar to eval() and should be approached with care. It could be a powerful construct but is often misused. If you absolutely must use eval(), you can consider using new Function() instead. There is a small potential benefit because the code evaluated in new Function() will be running in a local function scope, so any variables defined with var in the code being evaluated will not become globals automatically. Another way to prevent automatic globals is to wrap the eval() call in an immediate function.
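A small sketch of that difference (run as non-strict code; the variable names are arbitrary):
// eval() runs in the calling scope, so this leaks a variable into it:
eval("var leaked = 1;");
alert(typeof leaked); // "number"

// new Function() compiles the string into its own function scope,
// so nothing escapes:
new Function("var contained = 2;")();
alert(typeof contained); // "undefined"

// Wrapping eval in an immediate function has a similar containing effect:
(function () {
    eval("var alsoContained = 3;");
}());
alert(typeof alsoContained); // "undefined"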
EDIT: As Benjie's comment suggests, this no longer seems to be the case in chrome v108, it would seem that chrome can now handle garbage collection of evaled scripts.
Garbage collection
The browser's garbage collection has no idea whether the code that's been eval'd can be removed from memory, so it just keeps it stored until the page is reloaded.
Not too bad if your users are only on your page briefly, but it can be a problem for web apps.
Here's a script to demo the problem
https://jsfiddle.net/CynderRnAsh/qux1osnw/
document.getElementById("evalLeak").onclick = (e) => {
for(let x = 0; x < 100; x++) {
eval(x.toString());
}
};
Something as simple as the above code causes a small amount of memory to be stored until the app dies.
This is worse when the eval'd script is a giant function called on an interval.
Besides the possible security issues if you are executing user-submitted code, most of the time there's a better way that doesn't involve re-parsing the code every time it's executed. Anonymous functions or object properties can replace most uses of eval and are much safer and faster.
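For instance, a minimal sketch of replacing eval with an object property lookup (the handler names are made up):
// Antipattern: building a function call out of a string.
// eval("handle" + actionName + "()");

// Preferred: keep the handlers in an object and look them up by key.
var handlers = {
    save: function () { alert("saving"); },
    cancel: function () { alert("cancelling"); }
};

var actionName = "save"; // e.g. from configuration or user input
if (typeof handlers[actionName] === "function") {
    handlers[actionName](); // no re-parsing, no injection risk
}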
This may become more of an issue as the next generation of browsers come out with some flavor of a JavaScript compiler. Code executed via Eval may not perform as well as the rest of your JavaScript against these newer browsers. Someone should do some profiling.
This is one of the good articles talking about eval and how it is not evil:
http://www.nczonline.net/blog/2013/06/25/eval-isnt-evil-just-misunderstood/
I’m not saying you should go run out and start using eval()
everywhere. In fact, there are very few good use cases for running
eval() at all. There are definitely concerns with code clarity,
debugability, and certainly performance that should not be overlooked.
But you shouldn’t be afraid to use it when you have a case where
eval() makes sense. Try not using it first, but don’t let anyone scare
you into thinking your code is more fragile or less secure when eval()
is used appropriately.
eval() is very powerful and can be used to execute a JS statement or evaluate an expression. But the question isn't about the uses of eval(); let's just say that somehow the string you run with eval() is affected by a malicious party. In the end you will be running malicious code. With power comes great responsibility, so use it wisely if you are using it.
This isn't related much to the eval() function, but this article has pretty good information:
http://blogs.popart.com/2009/07/javascript-injection-attacks/
If you are looking for the basics of eval() look here:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval
The JavaScript Engine has a number of performance optimizations that it performs during the compilation phase. Some of these boil down to being able to essentially statically analyze the code as it lexes, and pre-determine where all the variable and function declarations are, so that it takes less effort to resolve identifiers during execution.
But if the Engine finds an eval(..) in the code, it essentially has to assume that all its awareness of identifier location may be invalid, because it cannot know at lexing time exactly what code you may pass to eval(..) to modify the lexical scope, or the contents of the object you may pass to with to create a new lexical scope to be consulted.
In other words, in the pessimistic sense, most of those optimizations it would make are pointless if eval(..) is present, so it simply doesn't perform the optimizations at all.
This explains it all.
Reference :
https://github.com/getify/You-Dont-Know-JS/blob/master/scope%20&%20closures/ch2.md#eval
https://github.com/getify/You-Dont-Know-JS/blob/master/scope%20&%20closures/ch2.md#performance
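To make the scope point concrete, here is a small sketch mirroring the example from that chapter (non-strict code; the names foo, str, a, and b come from the chapter):
function foo(str, a) {
    // At lexing time the engine cannot know that this eval will
    // declare a new `b` inside foo's scope...
    eval(str); // cheating!
    // ...so it cannot safely pre-resolve where `b` lives on this line.
    console.log(a, b);
}

var b = 2;
foo("var b = 3;", 1); // 1 3 -- the eval'd var shadows the outer b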
It's not always a bad idea. Take for example, code generation. I recently wrote a library called Hyperbars which bridges the gap between virtual-dom and handlebars. It does this by parsing a handlebars template and converting it to hyperscript which is subsequently used by virtual-dom. The hyperscript is generated as a string first and before returning it, eval() it to turn it into executable code. I have found eval() in this particular situation the exact opposite of evil.
Basically from
<div>
    {{#each names}}
        <span>{{this}}</span>
    {{/each}}
</div>
To this
(function (state) {
    var Runtime = Hyperbars.Runtime;
    var context = state;
    return h('div', {}, [Runtime.each(context['names'], context, function (context, parent, options) {
        return [h('span', {}, [options['#index'], context])]
    })])
}.bind({}))
The performance of eval() isn't an issue in a situation like this because you only need to interpret the generated string once and then reuse the executable output many times over.
You can see how the code generation was achieved if you're curious here.
I would go as far as to say that it doesn't really matter if you use eval() in javascript which is run in browsers.*(caveat)
All modern browsers have a developer console where you can execute arbitrary javascript anyway and any semi-smart developer can look at your JS source and put whatever bits of it they need to into the dev console to do what they wish.
*As long as your server endpoints have the correct validation & sanitisation of user supplied values, it should not matter what gets parsed and eval'd in your client side javascript.
If you were to ask if it's suitable to use eval() in PHP however, the answer is NO, unless you whitelist any values which may be passed to your eval statement.
I won't attempt to refute anything said heretofore, but I will offer this use of eval() that (as far as I know) can't be done any other way. There are probably other ways to code this, and probably ways to optimize it, but this is done longhand and without any bells and whistles for clarity's sake, to illustrate a use of eval that really doesn't have any other alternatives. That is: dynamically (or, more accurately, programmatically) created object names (as opposed to values).
//Place this in a common/global JS lib:
var NS = function(namespace){
    var namespaceParts = String(namespace).split(".");
    var namespaceToTest = "";
    for(var i = 0; i < namespaceParts.length; i++){
        if(i === 0){
            namespaceToTest = namespaceParts[i];
        }
        else{
            namespaceToTest = namespaceToTest + "." + namespaceParts[i];
        }
        if(eval('typeof ' + namespaceToTest) === "undefined"){
            eval(namespaceToTest + ' = {}');
        }
    }
    return eval(namespace);
}
//Then, use this in your class definition libs:
NS('Root.Namespace').Class = function(settings){
    //Class constructor code here
}

//some generic method:
Root.Namespace.Class.prototype.Method = function(args){
    //Code goes here
    //this.MyOtherMethod("foo"); // => "foo"
    return true;
}

//Then, in your applications, use this to instantiate an instance of your class:
var anInstanceOfClass = new Root.Namespace.Class(settings);
EDIT: By the way, I wouldn't suggest (for all the security reasons pointed out heretofore) that you base your object names on user input. I can't imagine any good reason you'd want to do that, though. Still, I thought I'd point out that it wouldn't be a good idea :)
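For comparison (and since the answer above allows that there are probably other ways to code this), here is a minimal sketch of the same namespace builder using bracket notation instead of eval; NS2 is just an illustrative name:
//Same idea as NS(), but each level is created with bracket notation,
//so no strings are ever executed.
var NS2 = function (namespace) {
    var parts = String(namespace).split(".");
    var current = window;
    for (var i = 0; i < parts.length; i++) {
        if (typeof current[parts[i]] === "undefined") {
            current[parts[i]] = {};
        }
        current = current[parts[i]];
    }
    return current;
};

//Usage is identical to the eval-based version:
NS2('Root.Namespace').Class = function (settings) {
    //Class constructor code here
};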

Simple Js program comes up NaN and objectHtmlSpanElement [duplicate]

This question already has answers here:
Do DOM tree elements with IDs become global properties?
(5 answers)
Closed 5 years ago.
Just today after a couple of years of javascript programming I came across something that left me startled. Browsers create objects for every element with an id. The name of the object will match the id.
So if you have:
<div id ="box"></div>
You can do:
alert(box); //[object HTMLDivElement]
Without first assigning anything to that variable. See the demo.
This for some reason seems to be in the standards even though it can break the code in some cases. There is an open bug to end this behavior but I'm more interested in getting rid of it now.
Do you guys know if there is a way to disable this (strict mode maybe)? Am I giving this too much importance? Because it certainly seems like a bad idea. (It was introduced by IE to give you a hint).
Update: It seems FF only does this in quirks mode. Other browsers like IE6+ and Chrome do it right off the bat.
ECMAScript 5 strict should help with this as you cannot use undeclared variables. I'm not sure which browsers currently support strict mode but I know Firefox 4 does.
The HTML spec you linked mentions a proposal to reduce pollution of the global scope by limiting this behavior to quirks-only.
I don't know if this feature is in the original spec but I do expect it to be removed, prohibited or otherwise nullified in subsequent versions of ECMAScript. ES6 will be based on ES5 strict.
JavaScript has many features that make it easier to use for beginners and novices, I suspect this is one such feature. If you're a professional and you want quality code use "use strict"; and always JSLint your code. If you use these guidelines this feature should never bother you.
Here is a useful video about ES5 courtesy of YUI Theater (it's already 2 years old though, but still relevant currently as there is no ES6 yet).
I don't think this is much of a big deal. It seems messy especially to those of us who think about global namespace pollution and conflicts, but in practice it doesn't really cause a problem.
If you declare your own global variable, it will just override anything the browser created for you so there's not really any conflict. The only place I could see it potentially causing a problem is if you were testing for the existence of a global declaration and an "auto" global based on an object ID got in the way of that and confused you.
In practice, I've never seen this to be a problem. But, I'd agree it seems like something they should get rid of or allow you to turn off.
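A minimal sketch of that, reusing the question's box id (so it assumes the page contains <div id="box"></div>):
// Without any declaration of your own, the id shows up as a global:
console.log(window.box); // [object HTMLDivElement]

// The moment you assign your own global of the same name, yours wins:
window.box = "my own value";
console.log(window.box); // "my own value" -- the auto-global is simply shadowed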
Yes, most browsers do this, but then again, like you said, some don't (Firefox), so don't count on it. It's also easy to overwrite these variables in JS; I can imagine something like container might be overwritten right off the bat by someone using that variable without declaring it first.
There is no way to turn this off in Chrome AFAIK, but even then it might be a hassle to figure this out and fix it for all browsers.
Don't give it too much importance, but beware of it. This is one of those reasons why you would evade the global scope for variables.
For the sake of completeness, these browsers definitely do it by default: Chrome, IE9 & compat, Opera.
Update: Future versions of ECMAScript might include an option of some sort, since discussion is going on, but that will not fix the 'problem' in older browsers.
I don't think there's a way to disable it, but you don't need to put much importance to it. If you are afraid of unpredictable bugs, you could avoid them by using JSHint or JSLint. They will help you avoid mistakes. For example, they will warn you if you use an undeclared variable.
The problem here is that the global scope has objects defined in it at runtime by the browser, and you would like to prevent these definitions from interfering with your code. I'm not aware of a way to turn off this behaviour, but I do have two workarounds for you:
1) As suggested in the article you linked to, you can work around this by ensuring that you define each variable before you use it. I would achieve this by running my code through JSLint, which warns about this sort of thing (in addition to a bunch of other common errors).
2) However, since it's possible to forget to run your code through JSLint, you might prefer a step in the tool chain that you can't forget. In that case, have a look at CoffeeScript - it's a language very similar to javascript that you compile into javascript before use, and it will insert the correct var definitions for you. In fact, I suspect that you can't write code that relies on the automatic element variable creation using CoffeeScript.
This is what I've been able to come up with to remove global variables that are automatically created for DOM objects with an ID value:
function clearElementGlobals() {
    function clearItem(iden, item) {
        if (iden && window[iden] && (window[iden] === item)) {
            window[iden] = undefined;
        }
    }
    var list = document.getElementsByTagName("*");
    for (var i = 0, len = list.length; i < len; i++) {
        var item = list[i];
        clearItem(item.id, item);
        clearItem(item.name, item);
    }
}
This gets a list of all objects in the page. It loops through looking for ones with an id value and when there's an id value and a global variable exists for it and that global variable points to that DOM object, that global variable is set to undefined. As it turns out browsers also do this same auto-global for some types of tags with a name attribute (like form elements) so we clear those too.
Of course, this code can't know whether your own code makes a global variable with the same name as the id so it would obviously be best to either not do that in your own code or call this function before your global variables are initialized.
Unfortunately, you cannot delete global variables in javascript so setting it to undefined is about the best that can be done.
FYI, I tried doing this the other way around where you enumerate global variables looking for variables that are an instance of HTMLElement and that have a name that matches the id of the element they point to, but I couldn't find a reliable way to enumerate global variables. In Chrome, you can't enumerate them on the window object even though you can access them through the window object. So, I had to go the other way around by getting all DOM objects with an id and looking for globals that match them.
FYI, you asked about strict mode in your question. strict mode only applies to a given scope of code so there would not be any way to cause it to affect the way the global namespace was set up. To affect something like this, it would have to be something at the document level before the document was parsed like a DOCTYPE option or something like that.
Caveats with this function.
Run it before you create any of your own globals or don't create any of your own globals with the same name as the ID or name attribute that also point to that DOM object.
This is a one-time shot, not continuous. If you dynamically create new DOM objects, you would have to rerun this function to clear any new globals that might have been made from the new DOM objects.
The globals are set to undefined which is slightly different than if they were never there in the first place. I can't think of a programming case where it would really matter, but it isn't identical. Unfortunately, you can't delete global variables.

What's wrong with this style of coding JavaScript? (closures vs. prototypes)

We have been debating how best to handle objects in our JS app, studying Stoyan Stefanov's book, reading endless SO posts on 'new', 'this', 'prototype', closures etc. (The fact that there are so many, and they have so many competing theories, suggests there is no completely obvious answer).
So let's assume that we don't care about private data. We are content to trust users and developers not to mess around in objects outside the ways we define.
Given this, what (other than it seeming to defy decades of OO style and history) would be wrong with this technique?
// namespace to isolate all PERSON's logic
var PERSON = {};

// return an object which should only ever contain data.
// The Catch: it's 100% public
PERSON.constructor = function (name) {
    return {
        name: name
    }
}

// methods that operate on a Person
// the thing we're operating on gets passed in
PERSON.sayHello = function (person) {
    alert(person.name);
}

var p = PERSON.constructor("Fred");
var q = PERSON.constructor("Me");

// normally this would be coded like 'p.sayHello()'
PERSON.sayHello(p);
PERSON.sayHello(q);
Obviously:
There would be nothing to stop someone from mutating 'p' in unholy ways, or the logic of PERSON simply ending up spread all over the place. (That is true with the canonical 'new' technique as well.)
It would be a minor hassle to pass 'p' in to every function in which you wanted to use it.
This is a weird approach.
But are those good enough reasons to dismiss it? On the positive side:
It is efficient, as (arguably) opposed to closures with repetitive function declaration.
It seems very simple and understandable, as opposed to fiddling with 'this' everywhere.
The key point is the forgoing of privacy. I know I will get slammed for this, but I'm looking for any feedback. Cheers.
There's nothing inherently wrong with it. But it does forgo many advantages inherent in using Javascript's prototype system.
Your object does not know anything about itself other than that it is an object literal. So instanceof will not help you to identify its origin. You'll be stuck using only duck typing.
Your methods are essentially namespaced static functions, where you have to repeat yourself by passing in the object as the first argument. By having a prototyped object, you can take advantage of dynamic dispatch, so that p.sayHello() can do different things for PERSON or ANIMAL depending on the type Javascript knows about. This is a form of polymorphism. Your approach requires you to name (and possibly make a mistake about) the type each time you call a method.
You don't actually need a constructor function, since functions are already objects. Your PERSON variable may as well be the constructor function.
What you've done here is create a module pattern (like a namespace).
Here is another pattern that keeps what you have but supplies the above advantages:
function Person(name)
{
    var p = Object.create(Person.prototype);
    p.name = name; // or other means of initialization, use of overloaded arguments, etc.
    return p;
}

Person.prototype.sayHello = function () { alert(this.name); }

var p = Person("Fred"); // you can omit "new"
var q = Person("Me");
p.sayHello();
q.sayHello();
console.log(p instanceof Person); // true

var people = ["Bob", "Will", "Mary", "Alandra"].map(Person);
// people contains array of Person objects
Yeah, I'm not really understanding why you're trying to dodge the constructor approach or why they even felt a need to layer syntactical sugar over function constructors (Object.create and soon classes) when constructors by themselves are an elegant, flexible, and perfectly reasonable approach to OOP no matter how many lame reasons are given by people like Crockford for not liking them (because people forget to use the new keyword - seriously?). JS is heavily function-driven and its OOP mechanics are no different. It's better to embrace this than hide from it, IMO.
First of all, your points listed under "Obviously"
Hardly even worth mentioning in JavaScript. A high degree of mutability is by design. We're not afraid of ourselves or other developers in JavaScript. The private vs. public paradigm isn't useful because it protects us from stupidity, but rather because it makes it easier to understand the intention behind the other dev's code.
The effort in invoking isn't the problem. The hassle comes later when it's unclear why you've done what you've done there. I don't really see what you're trying to achieve that the core language approaches don't do better for you.
This is JavaScript. It's been weird to all but JS devs for years now. Don't sweat that if you find a better way to do something that works better at solving a problem in a given domain than a more typical solution might. Just make sure you understand the point of the more typical approach before trying to replace it as so many have when coming to JS from other language paradigms. It's easy to do trivial stuff with JS but once you're at the point where you want to get more OOP-driven learn everything you can about how the core language stuff works so you can apply a bit more skepticism to popular opinions out there spread by people who make a side-living making JavaScript out to be scarier and more riddled with deadly booby traps than it really is.
Now your points under "positive side,"
First of all, repetitive function definition was really only something to worry about in heavy looping scenario. If you were regularly producing objects in large enough quantity fast enough for the non-prototyped public method definitions to be a perf problem, you'd probably be running into memory usage issues with non-trivial objects in short order regardless. I speak in the past tense, however, because it's no longer really a relevant issue either way. In modern browsers, functions defined inside other functions are actually typically performance enhancing due to the way modern JIT compilers work. Regardless of what browsers you support, a few funcs defined per object is a non-issue unless you're expecting tens of thousands of objects.
On the question of simple and understandable, it's not to me, because I don't see what win you've garnered here. Now instead of having one object to use, I have to use both the object and its pseudo-constructor together, which, if I weren't looking at the definition, would imply to me the function that you use with a 'new' keyword to build objects. If I were new to your codebase I'd be wasting a lot of time trying to figure out why you did it this way, to avoid breaking some other concern I didn't understand.
My questions would be:
Why not just add all the methods in the object literal in the constructor in the first place? There's no performance issue there and there never really has been so the only other possible win is that you want to be able to add new methods to person after you've created new objects with it, but that's what we use prototype for on proper constructors (prototype methods btw are great for memory in older browsers because they are only defined once).
And if you have to keep passing the object in for the methods to know what the properties are, why do you even want objects? Why not just functions that expect simple data structure-type objects with certain properties? It's not really OOP anymore.
But my main point of criticism
You're missing the main point of OOP which is something JavaScript does a better job of not hiding from people than most languages. Consider the following:
function Person(name){
    //var name = name; //<--this might be more clear but it would be redundant
    this.identifySelf = function(){ alert(name); }
}

var bob = new Person("Bob");
bob.identifySelf(); // alerts "Bob"
Now, change the name bob identifies with, without overwriting the object or the method, which are both things you'd only do if it were clear you didn't want to work with the object as originally designed and constructed. You of course can't. That makes it crystal clear to anybody who sees this definition that the name is effectively a constant in this case. In a more complex constructor it would establish that the only thing allowed to alter or modify name is the instance itself unless the user added a non-validating setter method which would be silly because that would basically (looking at you Java Enterprise Beans) MURDER THE CENTRAL PURPOSE OF OOP.
Clear Division of Responsibility is the Key
Forget the key words they put in every book for a second and think about what the whole point is. Before OOP, everything was just a pile of functions and data structures all those functions acted on. With OOP you mostly have a set of methods bundled with a set of data that only the object itself actually ever changes.
So let's say something's gone wrong with output:
In our strictly procedural pile of functions there's no real limit to the number of hands that could have messed up that data. We might have good error-handling but one function could branch in such a way that the original culprit is hard to track down.
In a proper OOP design where data is typically behind an object gatekeeper I know that only one object can actually make the changes responsible.
Objects exposing all of their data most of the time is really only marginally better than the old procedural approach. All that really does is give you a name to categorize loosely related methods with.
Much Ado About 'this'
I've never understood the undue attention assigned to the 'this' keyword being messy and confusing. It's really not that big of a deal. 'this' identifies the instance you're working with. That's it. If the method isn't called as a property it's not going to know what instance to look for so it defaults to the global object. That was dumb (undefined would have been better), but it not working properly in that scenario should be expected in a language where functions are also portable like data and can be attached to other objects very easily. Use 'this' in a function when:
It's defined and called as a property of an instance.
It's passed as an event handler (which will call it as a member of the thing being listened to).
You're using call or apply methods to call it as a property of some other object temporarily without assigning it as such.
But remember, it's the calling that really matters. Assigning a public method to some var and calling from that var will do the global thing or throw an error in strict mode. Without being referenced as object properties, functions only really care about the scope they were defined in (their closures) and what args you pass them.
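A compact sketch of those calling rules (the counter object is made up):
var counter = {
    count: 0,
    increment: function () {
        this.count += 1; // `this` is whatever the function was called on
        return this.count;
    }
};

counter.increment();                  // 1 -- called as a property: this === counter
counter.increment.call({ count: 9 }); // 10 -- call/apply: this is the object you pass

var detached = counter.increment;
// detached(); // non-strict: this is the global object; in strict mode this throws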

jQuery code organization and performance

After doing some research on the subject, I've been experimenting a lot with patterns to organize my jQuery code (Rebecca Murphy did a presentation on this at the jQuery Conference for example).
Yesterday I checked the (revealing) module pattern. The outcome looks a bit reminiscent of the YUI syntax I think:
//global namespace MyNameSpace
if(typeof MNS=="undefined"||!MNS){var MNS={};}

//obfuscate module, just serving as a very simple example
MNS.obfuscate = function(){
    //function to create an email address from obfuscated '#'
    var email = function(){
        $('span.email').each(function(){
            var emailAddress = $(this).html().replace(' [ # ] ','#');
            $(this).html('' + emailAddress + '');
        });
    };
    return {
        email: email
    };
}();

//using the module when the dom's ready
jQuery(document).ready(function($){
    MNS.obfuscate.email();
});
In the end I had several modules. Some naturally included "private members", which in this case means variables and/or functions which were only of importance for other functions within this module, and thus didn't end up in the return statement.
I thought having connected parts of my code (everything which has to do with the search for example) combined in a module makes sense, gives the whole thing structure.
But after writing this, I read an article by John (Resig), where he also writes about the performance of the module pattern:
"Instantiating a function with a bunch of prototype properties is very, very, fast. It completely blows the Module pattern, and similar, out of the water. Thus, if you have a frequently-accessed function (returning an object) that you want people to interact with, then it's to your advantage to have the object properties be in the prototype chain and instantiate it. Here it is, in code:
// Very fast
function User(){}
User.prototype = { /* Lots of properties ... */ };

// Very slow
function User(){
    return { /* Lots of properties */ };
}
(John mentions he is not against the module pattern "per se" though - just to let you know :)
Then I wasn't sure anymore if I was going into the right direction with my code. The thing is: I don't really need any private members, and I also don't think I need inheritance for the time being.
All I want for now is a readable/maintainable pattern. I guess this boils down to personal preference to a certain extent, but I don't wanna end up with readable code which has (rather serious) performance issues.
I'm no JavaScript expert and thus even less of an expert when it comes to performance testing. So first, I don't really know in how far the things John mentioned ("frequently-accessed function (returning an object) that you want people to interact with", lots of properties, etc.) apply to my code. The documents my code interacts with are not huge, with 100s or 1000s of elements. So maybe it's not an issue at all.
But one thing which came to my mind is that instead of just having
$('span.email').each(function(){
    var emailAddress = $(this).html().replace(' [ # ] ','#');
    $(this).html('' + emailAddress + '');
});
(inside the domready function), I create two "extra" functions, obfuscate and email, due to the use of the module pattern. The creation of additional functions costs some time. The question is: will it be measurable in my case?
I'm not sure if a closure is created in my example above (in an interesting post on the jQuery forum I read the following: "There is a philosophical debate on whether an inner function creates a closure if it doesn't reference anything on an outer function's variable object..."), but I did have closures in my real code. And even though I don't think I had any circular references in there, I don't know in how far this could lead to high(er) memory consumption/problems with garbage collection.
I'd really like to hear your input on this, and maybe see some examples of your code. Also, which tools do you prefer to get information on execution time, memory and CPU usage?
I also don't think I need inheritance for the time being
Indeed. This doesn't really apply to using modules as a namespace. It's about class instance analogues.
Objects you create by making every instance from a completely new {name: member} object are less efficient than objects you create using new Class with Class.prototype.name= member. In the prototype case the member value is shared, resulting in lighter-weight instances.
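A tiny sketch of that difference (Widget and makeWidget are made-up names):
// Prototype member: one function object shared by every instance.
function Widget(id) { this.id = id; }
Widget.prototype.describe = function () { return "widget " + this.id; };

var a = new Widget(1);
var b = new Widget(2);
console.log(a.describe === b.describe); // true -- shared

// Literal-per-call member: a fresh function created for each instance.
function makeWidget(id) {
    return { id: id, describe: function () { return "widget " + this.id; } };
}
console.log(makeWidget(1).describe === makeWidget(2).describe); // false -- copies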
In your example MNS is a singleton, so there is no benefit to be had by sharing members through a prototype.
I'm not sure if a closure is created in my example above
Yes, it is. Even when no variables are defined in the outer function, there is still a LexicalEnvironment and scope object created for the outer function, with this and arguments bound in it. A clever JS engine might be able to optimise it away, since every inner function must hide this and arguments with their own copies, but I'm not sure that any of the current JS implementations actually do that.
The performance difference, in any case, should be undetectable, since you aren't putting anything significant in the arguments. For a simple module pattern I don't think there's any harm.
Also, which tools do you prefer to get information on execution time, memory and CPU usage?
The place to start is simply to execute the code 10000 times in a for-loop and see how much bigger new Date().getTime() has got, executed several times on as many different browsers as you can get hold of.
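For example, a rough sketch of that kind of micro-benchmark (timeIt and the measured snippet are stand-ins, not part of any library):
// Rough-and-ready timing: run the candidate code many times and compare
// elapsed wall-clock time across browsers.
function timeIt(label, fn, iterations) {
    var start = new Date().getTime();
    for (var i = 0; i < iterations; i++) {
        fn();
    }
    console.log(label + ": " + (new Date().getTime() - start) + " ms");
}

// e.g. measure creating a small module object over and over
timeIt("module creation", function () {
    var m = (function () {
        var helper = function () {};
        return { helper: helper };
    }());
}, 10000);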
$(this).html().replace(' [ # ] ','#');
(What is this supposed to do? At the moment it will read the HTML of the span into a new String, replace only the first instance of [ # ] with #, and return a new String value. It won't change the existing content in the DOM.)
How much Javascript do you have? In my experience, on sites with lots of Javascript code on the pages, performance problems generally come from code that actually does things. Generally, problems stem from trying to do too many things, or trying to do some particular thing really badly. An example would be trying to do stuff like bind handlers to elements in table rows (big tables) instead of using something like "live".
My point is that things like how your modules or functions or whatever get organized is almost certainly not going to pose any sort of real performance issue on your pages. What is motivating you to go to all this trouble?

What anti-patterns exist for JavaScript? [closed]

I find that what not to do is a harder lesson to learn than what should be done.
From my experience, what separates an expert from an intermediate is the ability to select from among various seemingly equivalent ways of doing the same thing.
So, when it comes to JavaScript what kinds of things should you not do and why?
I'm able to find lots of these for Java, but since JavaScript's typical context (in a browser) is very different from Java's I'm curious to see what comes out.
Language:
Namespace polluting by creating a large footprint of variables in the global context.
Binding event handlers in the form 'foo.onclick = myFunc' (inextensible, should be using attachEvent/addEventListener).
Using eval in almost any non-JSON context
Almost every use of document.write (use the DOM methods like document.createElement)
Prototyping against the Object object (BOOM!)
A small one this, but doing large numbers of string concats with '+' (creating an array and joining it is much more efficient)
Referring to the non-existent undefined constant
Design/Deployment:
(Generally) not providing noscript support.
Not packaging your code into a single resource
Putting inline (i.e. body) scripts near the top of the body (they block loading)
Ajax specific:
not indicating the start, end, or error of a request to the user
polling
passing and parsing XML instead of JSON or HTML (where appropriate)
Many of these were sourced from the book Learning JavaScript Design Patterns by Addy Osmani: https://www.oreilly.com/library/view/learning-javascript-design/9781449334840/ch06.html
edit: I keep thinking of more!
Besides those already mentioned...
Using the for..in construct to iterate over arrays
(iterates over array methods AND indices)
Using Javascript inline like <body onload="doThis();">
(inflexible and prevents multiple event listeners)
Using the 'Function()' constructor
(bad for the same reasons eval() is bad)
Passing strings instead of functions to setTimeout or setInterval
(also uses eval() internally)
Relying on implicit statement termination by not using semicolons
(bad habit to pick up, and can lead to unexpected behavior)
Using /* .. */ to block out lines of code
(can interfere with regex literals, e.g.: /* /.*/ */)
<evangelism>
And of course, not using Prototype ;)
</evangelism>
The biggest for me is not understanding the JavaScript programming language itself.
Overusing object hierarchies and building very deep inheritance chains. Shallow hierarchies work fine in most cases in JS.
Not understanding prototype based object orientation, and instead building huge amounts of scaffolding to make JS behave like traditional OO languages.
Unnecessarily using OO paradigms when procedural / functional programming could be more concise and efficient.
Then there are those for the browser runtime:
Not using good established event patterns like event delegation or the observer pattern (pub/sub) to optimize event handling.
Making frequent DOM updates (like .appendChild in a loop) when the DOM nodes can be built in memory and appended in one go -- a HUGE performance benefit (see the sketch after this list).
Overusing libraries for selecting nodes with complex selectors when native methods can be used (getElementById, getElementsByTagName, etc.). This is becoming less of an issue these days, but it's worth mentioning.
Extending DOM objects when you expect third-party scripts to be on the same page as yours (you will end up clobbering each other's code).
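As mentioned above, a minimal sketch of batching DOM updates with a document fragment (the results id and item count are arbitrary):
// Build the nodes off-DOM, then append them in a single operation.
var list = document.getElementById("results"); // assumes a <ul id="results"> exists
var fragment = document.createDocumentFragment();

for (var i = 0; i < 100; i++) {
    var item = document.createElement("li");
    item.appendChild(document.createTextNode("Item " + i));
    fragment.appendChild(item);
}

list.appendChild(fragment); // one insertion into the live DOM instead of one hundred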
And finally the deployment issues.
Not minifying your files.
Web-server configs - not gzipping your files, not caching them sensibly.
<plug> I've got some client-side optimization tips which cover some of the things I've mentioned above, and more, on my blog.</plug>
browser detection (instead of testing whether the specific methods/fields you want to use exist)
using alert() in most cases
see also Crockford's "Javascript: The Good Parts" for various other things to avoid. (edit: warning, he's a bit strict in some of his suggestions like the use of "===" over "==" so take them with whatever grain of salt works for you)
A few things right on top of my head. I'll edit this list when I think of more.
Don't pollute global namespace. Organize things in objects instead;
Don't omit 'var' for variables. That pollutes global namespace and might get you in trouble with other such scripts.
any use of 'with'
with (document.forms["mainForm"].elements) {
input1.value = "junk";
input2.value = "junk";
}
any reference to document.all in your code, unless it is within special code just for IE, to overcome an IE bug (cough document.getElementById() cough)
Bad use of brace positioning when creating statements
You should always put a brace after the statement due to automatic semicolon insertion.
For example this:
function()
{
    return
    {
        price: 10
    }
}
differs greatly from this:
function(){
    return{
        price: 10
    }
}
Because in the first example, JavaScript will insert a semicolon for you, actually leaving you with this:
function()
{
    return; // oh dear!
    {
        price: 10
    }
}
Using setInterval for potentially long running tasks.
You should use setTimeout rather than setInterval for occasions where you need to do something repeatedly.
If you use setInterval, but the function that is executed in the timer is not finished by the time the timer next ticks, this is bad. Instead use the following pattern using setTimeout
function doThisManyTimes(){
    alert("It's happening again!");
}

(function repeat(){
    doThisManyTimes();
    setTimeout(repeat, 500);
})();
This is explained very well by Paul Irish on his 10 things I learned from the jQuery source video
Not using a community based framework to do repetitive tasks like DOM manipulation, event handling, etc.
Effective caching is seldom done:
Don't store a copy of the library (jQuery, Prototype, Dojo) on your server when you can use a shared API like Google's Libraries API to speed up page loads
Combine and minify all your script that you can into one
Use mod_expires to give all your scripts infinite cache lifetime (never redownload)
Version your JavaScript filenames so that a new update will be picked up by clients without the need to reload/restart (e.g. myFile_r231.js, or myFile.js?r=231)
