Using the Google Closure Compiler and Library for inheritance, I have seen two different ways of calling the super constructor in a lot of Closure-based libraries (I forget exactly where). I'm not sure whether I'm misreading something.
What is the difference and what is the correct one to use?
// Xhrio extends EventTarget
goog.events.EventTarget.call(this);
goog.net.XhrIo.base(this, 'constructor');
Either one is fine. I suppose you might say the second one is slightly better because, if you later changed XhrIo to extend something other than EventTarget, you wouldn't have to change that line.
You can also use goog.base(this, 'constructor') but that is incompatible with strict mode.
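For illustration, here's a minimal sketch of how the two spellings sit in a Closure-style class (my.Uploader is a made-up name for illustration):
goog.provide('my.Uploader');              // hypothetical namespace
goog.require('goog.events.EventTarget');

my.Uploader = function() {
  // Option 1: name the parent class explicitly.
  goog.events.EventTarget.call(this);

  // Option 2 (commented out, since calling both would run the parent twice):
  // resolves the parent via goog.inherits, so it survives a change of
  // superclass and, unlike goog.base(), works in strict mode.
  // my.Uploader.base(this, 'constructor');
};
goog.inherits(my.Uploader, goog.events.EventTarget);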
Related
For personal learning purposes, I'm trying to understand Winston's package design structure and the purpose behind each of its modules, but I can't figure this one out.
In Winston's package, there is the core Logger class in the logger.js module, which implements the main functionality of the logger and provides some public methods such as the logger.log method. It also implements the transform stream methods for internal use.
Then there is a derived class in the create-logger.js module called DerivedLogger that extends the Logger class, and it seems that its sole purpose is to add optimized level methods to the logger's prototype. This DerivedLogger class is then instantiated and exported in a factory function at the bottom of the module.
My question is: why was the DerivedLogger class needed? Would there be any difference in performance if those level methods were instead added on the Logger class prototype itself, and the factory function instantiated the Logger class directly? The only reason I can think of is that perhaps the DerivedLogger class exists for modularity purposes only. Can someone help me understand the reason?
Thanks!
This one was very interesting, thanks for pointing it out!
In short: It has nothing to do with code-structure, it's a performance optimization. The comment states as much:
Create a new class derived logger for which the levels can be attached to the prototype of. This is a V8 optimization that is well know to increase performance of prototype functions.
Personally, I think this needs citation (and I wouldn't accept it in a code-review). Luckily, I think I found the "optimization" that the author was talking about:
This article by Mathias (a Google engineer who works on V8) talks about speeding up JavaScript execution with correct usage of prototypes. The article has a lot of detail and is really worth the read if you're learning.
The optimization found in Winston boils down to this:
The getAttribute() method is found on the Element.prototype. That means each time we call anchor.getAttribute(), the JavaScript engine needs to…
check that getAttribute is not on the anchor object itself,
check that the direct prototype is HTMLAnchorElement.prototype,
assert absence of getAttribute there,
check that the next prototype is HTMLElement.prototype,
assert absence of getAttribute there as well,
eventually check that the next prototype is Element.prototype,
and that getAttribute is present there.
That’s a total of 7 checks! Since this kind of code is pretty common on the web, engines apply tricks to reduce the number of checks necessary for property loads on prototypes.
This is roughly applicable to Winston as follows:
Methods on Classes are defined on the prototype-object of said class
Every time a method is called on an instance, the engine needs to find the prototype that the called method is attached to.
In doing so, it starts with the class of the instance you're calling the method on and checks whether the called method is on its prototype
If it can't find it (e.g. because the method is inherited), it walks up the prototype-chain to the next class and looks there
This continues until either the method is found (and subsequently executed) or the end of the prototype-chain is reached (and an error is thrown).
By running _setupLevels() in the constructor, the level methods are attached directly to the prototype of the specific logger implementation. This means the class hierarchy can grow arbitrarily large: the prototype-chain lookup only ever takes one step to find the method.
Here is another (simplified) example:
class A {
  constructor() {
    this.setup();
  }
  // defined once on A.prototype; reached via prototype-chain lookup
  testInherit() {
    console.log("Inherited method called");
  }
  setup() {
    // attached directly to the instance, bypassing the prototype chain
    this["testDirect"] = () => console.log("Directly attached method called");
  }
}

class B extends A {
  constructor() {
    super();
  }
}

const test = new B();
test.testInherit();
test.testDirect();
If we inspect test in a debugger right after it is instantiated (or run the console checks below), we can see that the testDirect method is attached directly to test, while testInherit lives further down the prototype chain.
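A quick sketch of that inspection using plain console calls:
// Own properties live directly on the instance:
console.log(Object.getOwnPropertyNames(test));
// -> ["testDirect"]

// testInherit is two prototypes up, on A.prototype:
const protoB = Object.getPrototypeOf(test);   // B.prototype
const protoA = Object.getPrototypeOf(protoB); // A.prototype
console.log(Object.getOwnPropertyNames(protoA));
// -> ["constructor", "testInherit", "setup"]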
I personally think this is bad practice:
The optimization might be real now, but it might not be in the future. If V8 starts optimizing this internally, the current "optimization" might become significantly slower.
The claim that it's an optimization holds no merit without profiling and sources.
It's complicated to understand (see this question)
Doing this can actually hurt performance. The linked article states in its conclusion:
don’t mess with prototypes
As for modularity: there is something to be said for having a clear base class for all extensions.
In a stricter language than JavaScript, such a class could offer specific methods that are only meant for extending, which are hidden from the public API for consumers. In this specific case however, Logger would have been fine on its own.
ES6 class constructors can't be called as normal functions. According to ES6 a TypeError should be raised when this is done. I used to think that classes were just syntactic sugar for a constructor function + functions in the prototype, but this makes it slightly not so.
I'm wondering, what was the rationale behind this? Unless I missed something, it prevents calling the function with a custom this, which could be desirable for some patterns.
Revisiting the ES6 spec shows how calling a class's function object without new is disabled by combining sections 9.2.9 and 9.2.1:
9.2.9 MakeClassConstructor (F)
...
3. Set F’s [[FunctionKind]] internal slot to "classConstructor".
and when specifying the [[Call]] method, as opposed to the [[Construct]] method, of a function:
(9.2.1) 2. If F’s [[FunctionKind]] internal slot is "classConstructor", throw a TypeError exception.
No such restriction is placed on calling functions in section 11.2.3 "Function Calls" of ES5.1.
So you are not missing anything: you can't use apply on a class constructor function.
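A quick sketch of that restriction in action:
class Foo {}

new Foo();                  // fine
// Foo();                   // TypeError: Class constructor Foo cannot
                            //            be invoked without 'new'
// Foo.call({});            // TypeError as well - [[Call]] is disabled
Reflect.construct(Foo, []); // fine: the reflective equivalent of new Foo()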
The major rationale is probably both to make class extension a fairly rigorous exercise and to detect some forms of error early. For example, you can't call Promise except as a constructor - leaving out new before a call to Promise is a programming error. As regards extending classes, note that the constructor property of class instances is correctly set (to the last class after possibly multiple extensions) and the .prototype property of the class constructor is read-only - you can't dynamically change the prototype object used to construct class instances, even though you could change the prototype property of a constructor function.
I used to think classes were syntactic sugar but have moved away from the concept.
To recap, your two main points are
ES6 class constructors can't be called as normal functions.
It prevents calling the function with a custom this
The first thing to note is that, from the standpoint of the runtime behavior of a class, those two points are not functionally tied together. You could, for instance, allow Foo() without new but still have Foo.call({}) behave as if it had been newed. The ability to call as a function can allow setting this, but it doesn't have to, the same way Foo.bind({})() would bind a this and then call the function, with the bound this ignored.
For the rationale behind the decision, I can't give you a primary source, but I can tell you there is one solid reason. ES6 class syntax is "syntax sugar", but not for the simplified code you likely have in your head. Take for example this snippet, given your goal.
class Parent {}

class Child extends Parent {
  constructor() {
    // What is "this" here?
    super();
  }
}

Child.call({});
What should this do? In ES6, super() is what actually sets this. If you try to access this before having called super(), it will throw an exception. Your example code could work with Base.call({}) since it has no parent constructor, so this is initialized up front, but as soon as you're calling a child class, this doesn't even have a value up front. If you use .call, there is nowhere to put that value.
So then the next question is, why do child classes not get this before super()? This is because it allows ES6 class syntax to extend builtin types like Array and Error and Map and any other builtin constructor type. In standard ES5 this was impossible, though with the non-standard __proto__ in ES5 it could be simulated roughly. Even with __proto__ it is generally a performance issue to extend builtin types. By including this behavior in ES6 classes, JS engines can optimize the code so that extending builtin types works without a performance hit.
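For example, this kind of builtin subclassing simply works with ES6 classes (a small sketch; MyArray is a made-up name):
class MyArray extends Array {
  last() { return this[this.length - 1]; }
}

const a = MyArray.from([1, 2, 3]); // inherited static, builds a MyArray
console.log(a.last());             // 3
console.log(a instanceof MyArray); // true
console.log(a instanceof Array);   // true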
So for your questions, yes, they could allow Foo.call(), or Foo(), but it would have to ignore this either way in order to allow for extending builtin types.
what was the rationale behind this?
It's a safeguard. When you called an ES5 function constructor without new, it did very undesirable things, failing silently. Throwing an exception helps you to notice the mistake.
Of course they could have opted for the call syntax to just work the same as construction, but enforcing the new keyword is a good thing that helps us to easily recognise instantiations.
It prevents calling the function with a custom this, which could be desirable for some patterns.
Yes, this is what fundamentally changed in ES6. The this value is initialised by the superclass, which allows subclass builtins with internal slots - see here for details. This conflicts with passing a custom this argument, and for consistency one must never allow that.
Coming from a C# background, I used interfaces to base my mock objects off of. I created custom mock objects myself and created the mock implementation off a C# interface.
How do you do something like this in JS or Node? Create an interface that you can "mock" off of, where the same interface would also serve the real class that implements it? Does this even make sense in JS or the Node world?
For example in Java, same deal, define an interface with method stubs and use that as the basis to create real class or mock class off of.
Unfortunately you're not going to find the standard interface as a part of JavaScript. I've never used C#, but I've used Java, and correct me if I'm wrong, but it looks like you're talking about creating interfaces and overriding methods for both mock testing purposes, as well as being able to implement those interfaces in other classes.
Because this isn't a standard JavaScript feature, I think you'll find that there are going to be a lot of very broad answers here. However, to get an idea of how some popular libraries implement this, I might suggest looking at how AngularJS looks at mock testing (there are many resources online, just Google it. As a starting point, look at how they use the ngMock module with Karma and Jasmine.)
Also, because of JavaScript's very flexible nature, you'll find that you can override any sort of "class method" (that is, any function object that is a member of another object, whether that be a new'ed "class" or a plain object) by simply re-implementing it wherever you need to... there's no special syntax for it. To understand where and how you would accomplish this, I'd suggest looking from the ground up at how JavaScript uses prototypal inheritance. A starting point might be an example like this:
function Cat(config) {
  if (typeof config !== 'undefined') {
    // config can supply mock implementations of certain methods
    this.meow = config.meow;
  }
}

Cat.prototype = {
  meow: function() {
    // the meow that you want to use as part of your "production" code
  }
};

var config = {};
config.meow = function() {
  // some mock "meow" stuff
};

var testCat = new Cat(config); // this one will use the mock "Cat#meow"
var realCat = new Cat();       // this one will use the prototype "Cat#meow"
In the above example, because of how JavaScript looks up the prototype chain, if it finds an implementation on the object itself, it stops there and uses that method (thus, you've "overridden" the prototype method). In that example, however, if you don't pass in a config, it looks all the way up the chain to the prototype for the Cat#meow method, and uses that one.
TL;DR: there's not one good way to implement JavaScript interfaces, especially ones that double as mocks (there's not even a best way to implement dependency injection... that's also a foreign concept to JavaScript itself, even though many libraries do successfully implement it for certain use cases.)
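If you want something interface-like anyway, a common workaround is a duck-typing check. Here's a minimal sketch (implementsInterface is a made-up helper, not a standard API):
// Verify that an object exposes every method an "interface" requires.
function implementsInterface(obj, methodNames) {
  return methodNames.every(function (name) {
    return typeof obj[name] === 'function';
  });
}

var catInterface = ['meow'];
console.log(implementsInterface(testCat, catInterface)); // true
console.log(implementsInterface({}, catInterface));      // false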
If I implemented a method x on a String like :
String.prototype.x = function (a) {...}
And then a new version of JavaScript actually implements the x method, but in another way - either returning something different from my implementation, or taking more/fewer arguments than mine. Will this break and override my implementation?
You'll overwrite the default implementation.
Any code that uses it will use yours instead.
There was a proposal for scoped extension methods and it was rejected because it was too expensive computationally to implement in JS engines. There is talk about a new proposal (protocols) to address the issue. ES6 symbols will also give you a way around that (but with ugly syntax).
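For what it's worth, here is a sketch of the Symbol workaround mentioned above; the collision risk disappears, at the cost of the bracket syntax:
const x = Symbol('x');

// No future String.prototype.x can collide with this symbol key.
String.prototype[x] = function () {
  return 'my own x';
};

console.log('abc'[x]()); // "my own x" - works, but the syntax is ugly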
However, that's not the punch - here's a fun fact no one is going to tell you.
No one is ever going to implement a method called x on String.prototype
You can implement it and get away with it. Seriously, prollyfilling and polyfilling is a viable, expressive and interesting solution to many use cases. If you're not writing a library I think it's acceptable.
No, you'll be overriding the default implementation of said function from the point at which yours is declared/defined. Until then, the native implementation functions with its native behavior.
var foo = 'some arbitrary string';
console.log(foo.indexOf('s')); // logs [0]
String.prototype.indexOf = function(foo, bar) { return 'foo'; };
console.log(foo.indexOf()); // logs [foo]
Illustration: http://jsfiddle.net/Z4Fq9/
Your code will be overriding the default implementation.
However, if the interface of your method is not compatible with the standard one, the libraries you use may depend on the standard behavior, so the program as a whole could break anyway with newer versions of those libraries.
In general it is a bad idea to do something that could break if others do the same: what if another library thinks it's a good idea to add a method x to the standard String prototype? Avoiding conflicts is a must for libraries, but it's also good for applications (and if an application is written nicely, then a lot of its code is probably quite similar to a library, and may evolve into a library later).
This kind of "patching" makes sense only to provide a standard method for broken or old JavaScript implementations where that method is absent. Patching standard prototypes just because you can is a bad idea and will make your code a bad neighbor that is difficult to share a page with.
If the implementation of x comes from a new version of JavaScript, it's part of the core, so by the time you write String.prototype.x = ... it will already be there, and you will overwrite it.
Best practice in this kind of thing is to write:
if (!String.prototype.x) {
  String.prototype.x = function (a) {
    // your implementation, used only where a native one is absent
  };
}
We have been debating how best to handle objects in our JS app, studying Stoyan Stefanov's book, reading endless SO posts on 'new', 'this', 'prototype', closures etc. (The fact that there are so many, and they have so many competing theories, suggests there is no completely obvious answer).
So let's assume that we don't care about private data. We are content to trust users and developers not to mess around in objects outside the ways we define.
Given this, what (other than it seeming to defy decades of OO style and history) would be wrong with this technique?
// namespace to isolate all PERSON's logic
var PERSON = {};
// return an object which should only ever contain data.
// The Catch: it's 100% public
PERSON.constructor = function (name) {
  return {
    name: name
  };
};

// methods that operate on a Person
// the thing we're operating on gets passed in
PERSON.sayHello = function (person) {
  alert(person.name);
};
var p = PERSON.constructor ("Fred");
var q = PERSON.constructor ("Me");
// normally this would be coded as 'p.sayHello()'
PERSON.sayHello(p);
PERSON.sayHello(q);
Obviously:
There would be nothing to stop someone from mutating 'p' in unholy ways, or the logic of PERSON from ending up spread all over the place. (That is true of the canonical 'new' technique as well.)
It would be a minor hassle to pass 'p' in to every function in which you wanted to use it.
This is a weird approach.
But are those good enough reasons to dismiss it? On the positive side:
It is efficient, (arguably) as opposed to closures with repeated function declarations.
It seems very simple and understandable, as opposed to fiddling with 'this' everywhere.
The key point is the foregoing of privacy. I know I will get slammed for this, but, looking for any feedback. Cheers.
There's nothing inherently wrong with it. But it does forgo many advantages inherent in using Javascript's prototype system.
Your object does not know anything about itself other than that it is an object literal. So instanceof will not help you to identify its origin. You'll be stuck using only duck typing.
Your methods are essentially namespaced static functions, where you have to repeat yourself by passing in the object as the first argument. By having a prototyped object, you can take advantage of dynamic dispatch, so that p.sayHello() can do different things for PERSON or ANIMAL depending on the type JavaScript knows about. This is a form of polymorphism (see the sketch after these points). Your approach requires you to name (and possibly make a mistake about) the type each time you call a method.
You don't actually need a constructor function, since functions are already objects. Your PERSON variable may as well be the constructor function.
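To make the dynamic-dispatch point concrete, here's a standalone sketch (Animal is a made-up counterpart, and this Person is separate from the fuller example below):
function Person(name) { this.name = name; }
Person.prototype.sayHello = function () { alert('Hi, I am ' + this.name); };

function Animal(name) { this.name = name; }
Animal.prototype.sayHello = function () { alert(this.name + ' makes a noise'); };

// The same call site dispatches to the right method for each object;
// nothing has to name the type explicitly.
[new Person('Fred'), new Animal('Rex')].forEach(function (x) {
  x.sayHello();
});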
What you've done here is create a module pattern (like a namespace).
Here is another pattern that keeps what you have but supplies the above advantages:
function Person(name) {
  var p = Object.create(Person.prototype);
  p.name = name; // or other means of initialization, use of overloaded arguments, etc.
  return p;
}
Person.prototype.sayHello = function () { alert (this.name); }
var p = Person("Fred"); // you can omit "new"
var q = Person("Me");
p.sayHello();
q.sayHello();
console.log(p instanceof Person); // true
var people = ["Bob", "Will", "Mary", "Alandra"].map(Person);
// people contains array of Person objects
Yeah, I'm not really understanding why you're trying to dodge the constructor approach, or why anyone even felt a need to layer syntactic sugar over function constructors (Object.create, and soon classes), when constructors by themselves are an elegant, flexible, and perfectly reasonable approach to OOP, no matter how many lame reasons are given by people like Crockford for not liking them (because people forget to use the new keyword - seriously?). JS is heavily function-driven and its OOP mechanics are no different. It's better to embrace this than hide from it, IMO.
First of all, your points listed under "Obviously":
Hardly even worth mentioning in JavaScript. A high degree of mutability is by design. We're not afraid of ourselves or other developers in JavaScript. The private vs. public paradigm isn't useful because it protects us from stupidity, but rather because it makes it easier to understand the intention behind the other dev's code.
The effort in invoking isn't the problem. The hassle comes later when it's unclear why you've done what you've done there. I don't really see what you're trying to achieve that the core language approaches don't do better for you.
This is JavaScript. It's been weird to all but JS devs for years now. Don't sweat it if you find a way that solves a problem in a given domain better than a more typical solution might. Just make sure you understand the point of the more typical approach before trying to replace it, as so many have when coming to JS from other language paradigms. It's easy to do trivial stuff with JS, but once you're at the point where you want to get more OOP-driven, learn everything you can about how the core language works, so you can apply a bit more skepticism to popular opinions spread by people who make a side living making JavaScript out to be scarier and more riddled with deadly booby traps than it really is.
Now, your points under "positive side":
First of all, repetitive function definition was really only something to worry about in heavy looping scenarios. If you were regularly producing objects in large enough quantity, fast enough, for the non-prototyped public method definitions to be a perf problem, you'd probably run into memory-usage issues with non-trivial objects in short order regardless. I speak in the past tense, however, because it's no longer really a relevant issue either way. In modern browsers, functions defined inside other functions are typically performance-enhancing due to the way modern JIT compilers work. Regardless of what browsers you support, a few funcs defined per object are a non-issue unless you're expecting tens of thousands of objects.
On the question of simple and understandable: it's not, to me, because I don't see what win you've garnered here. Instead of having one object to use, I have to use both the object and its pseudo-constructor together, which, if I weren't looking at the definition, would imply to me a function you use with the 'new' keyword to build objects. If I were new to your codebase, I'd waste a lot of time trying to figure out why you did it this way, to avoid breaking some other concern I didn't understand.
My questions would be:
Why not just add all the methods in the object literal in the constructor in the first place? There's no performance issue there, and there never really has been, so the only other possible win is that you want to be able to add new methods to person after you've created new objects with it; but that's what we use prototype for on proper constructors (prototype methods, btw, are great for memory in older browsers because they are only defined once). See the sketch after these questions.
And if you have to keep passing the object in for the methods to know what the properties are, why do you even want objects? Why not just functions that expect simple data structure-type objects with certain properties? It's not really OOP anymore.
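A minimal sketch of the contrast in that first question (names are hypothetical):
function Person(name) {
  this.name = name;
  // per-instance: a new function object is created for every Person
  this.sayHelloInstance = function () { alert(this.name); };
}

// shared: defined once on the prototype and reused by every instance
Person.prototype.sayHello = function () { alert(this.name); };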
But my main point of criticism
You're missing the main point of OOP which is something JavaScript does a better job of not hiding from people than most languages. Consider the following:
function Person(name) {
  //var name = name; //<-- this might be more clear, but it would be redundant
  this.identifySelf = function() { alert(name); };
}

var bob = new Person("Bob");
bob.identifySelf();
Now, change the name bob identifies with, without overwriting the object or the method, which are both things you'd only do if it were clear you didn't want to work with the object as originally designed and constructed. You of course can't. That makes it crystal clear to anybody who sees this definition that the name is effectively a constant in this case. In a more complex constructor it would establish that the only thing allowed to alter or modify name is the instance itself unless the user added a non-validating setter method which would be silly because that would basically (looking at you Java Enterprise Beans) MURDER THE CENTRAL PURPOSE OF OOP.
Clear Division of Responsibility is the Key
Forget the key words they put in every book for a second and think about what the whole point is. Before OOP, everything was just a pile of functions and data structures all those functions acted on. With OOP you mostly have a set of methods bundled with a set of data that only the object itself actually ever changes.
So let's say something's gone wrong with output:
In our strictly procedural pile of functions there's no real limit to the number of hands that could have messed up that data. We might have good error-handling but one function could branch in such a way that the original culprit is hard to track down.
In a proper OOP design where data is typically behind an object gatekeeper I know that only one object can actually make the changes responsible.
Objects exposing all of their data most of the time is really only marginally better than the old procedural approach. All that really does is give you a name to categorize loosely related methods with.
Much Ado About 'this'
I've never understood the undue attention assigned to the 'this' keyword being messy and confusing. It's really not that big of a deal. 'this' identifies the instance you're working with. That's it. If the method isn't called as a property it's not going to know what instance to look for so it defaults to the global object. That was dumb (undefined would have been better), but it not working properly in that scenario should be expected in a language where functions are also portable like data and can be attached to other objects very easily. Use 'this' in a function when:
It's defined and called as a property of an instance.
It's passed as an event handler (which will call it as a member of the thing being listened to).
You're using the call or apply methods to call it as a property of some other object temporarily, without assigning it as such.
But remember, it's the calling that really matters. Assigning a public method to some var and calling from that var will do the global thing or throw an error in strict mode. Without being referenced as object properties, functions only really care about the scope they were defined in (their closures) and what args you pass them.
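A small sketch of those calling rules:
'use strict';

var obj = {
  name: 'obj',
  who: function () { console.log(this && this.name); }
};

obj.who();        // "obj"     - called as a property of the instance
var f = obj.who;
f();              // undefined - the call site lost the instance (strict mode)
f.call(obj);      // "obj"     - 'this' supplied explicitly via call()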