According to the Google style guide, methods should be defined on the prototype of a constructor and properties should be defined in the constructor using the this keyword.
I do most of my front end development using Knockout, which handles observing properties by turning them into functions. That is, all of my properties are now methods, more or less. Is this a significant performance hit? Are there any Knockout workarounds using JavaScript getters and setters?
So first: yes, there is a plugin for Knockout that uses getters and setters, but it only works in newer browsers. You sacrifice compatibility with IE8 and below (this is unavoidable, since those browsers do not support JavaScript getters/setters). The plugin can be found here.
To your main point: understanding the style guide's intent is important. Because methods are usually reusable, putting them on the prototype avoids duplicated code and memory allocation; that is why the recommendation exists. Knockout observables, however, are not reusable. They behave like properties: they store information specific to an instance. This difference is important. They may be functions, but they are treated like properties.
The Google style guide simply does not address this scenario. Placing observables on the instance is not a performance hit, because the alternative you are comparing it to will not work: placing observables on the prototype breaks the model. It isn't a performance hit to do the only thing that works.
As a final note, the getters-and-setters plugin doesn't make the functions disappear; it just hides them behind the getter and setter. Performance will not improve, because the same work still has to be done.
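To make the sharing problem concrete, here is a minimal sketch. The observable function below is a stand-in for ko.observable, not Knockout's actual implementation; it only mimics the relevant trait, namely that each call closes over its own stored value:

```javascript
// Minimal stand-in for a knockout-style observable: each call
// creates a function closing over its own value slot.
function observable(initial) {
    var value = initial;
    return function (newValue) {
        if (arguments.length === 0) return value; // read
        value = newValue;                         // write
    };
}

function Person(name) {
    // Correct: each instance gets its own observable.
    this.name = observable(name);
}

function BrokenPerson(name) {
    this.name(name);
}
// Broken: one observable on the prototype is shared by every
// instance, so writes from one instance leak into all of them.
BrokenPerson.prototype.name = observable(null);

var a = new BrokenPerson('Alice');
var b = new BrokenPerson('Bob');
// a.name() is now 'Bob' -- the state is shared.
```

Because the prototype version's single observable closes over one value slot, constructing b silently overwrites a's name; this is why observables must be created per instance.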
Related
In a backdraftjs component with watchable 'titleString', is there any difference/preference between
this.watch('titleString', this.handleTitleStringChange);
and
onMutateTitleString(newTitle, oldTitle) {
...
}
The onMutate has the benefit that it remembers the old value (oldTitle) but this.watch() is maybe a little easier to find in code since it contains the word titleString -- where for onMutate you have to know to search for the camelcased-and-concatenated version.
Are there any other reasons to use one or the other?
Great question.
To begin... a minor clarification of the question: both methods are provided the new value and the old value. See the docs for a watcher.
The main difference is that onMutate<property-name> is a member function. As such, it is unconditionally applied immediately after the actual mutation to the underlying memory slot occurs and before any watchers are applied. In essence, it is an extension of the framework's mutation machinery for a particular property in a particular class that contains the WatchHub mixin. It is used to define/enforce a behavior that is part of the definition of the class. As such, note also that onMutate<property-name> can be overridden in subclasses (because, structurally, onMutate<property-name> is a method, and, heuristically, the behavior is part of what defines the mental model of the class).
All that said, it is certainly possible to accomplish most of what onMutate<property-name> does by simply connecting a watcher with the watch instance method... sans subclass override capability, and at the added expense of creating a watch handle and the rest.
On the other hand, connecting a watcher via a component's watch instance method is intended for use by clients of instances of the class. These connections typically should not result in mutating the instance state internally...if that was not true, then a particular instance would behave differently depending upon what clients had connected to its watch interface.
Of course, in JavaScript this is expensive to enforce and the library made the intentional decision to not construct enforcement machinery. This is a general principle of the library: encourage canonical design and implementation paradigms, but don't prevent the using engineer from doing what needs to be done (sometimes the world isn't perfect and we need to do what we need to do).
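A minimal sketch of the mechanics described above. This is illustrative, not backdraft's actual implementation; the class, property, and method names are hypothetical:

```javascript
// Hypothetical watchable slot: the onMutate hook is a member
// function applied as part of the mutation itself, before any
// watchers that clients connected via watch().
class Component {
    constructor() {
        this._title = '';
        this._watchers = [];
    }
    watch(handler) {
        // client-side connection; returns a handle
        this._watchers.push(handler);
        const watchers = this._watchers;
        return {
            destroy() { watchers.splice(watchers.indexOf(handler), 1); }
        };
    }
    get title() { return this._title; }
    set title(newValue) {
        const oldValue = this._title;
        if (newValue === oldValue) return;
        this._title = newValue;
        // 1. the member-function hook, overridable in subclasses
        this.onMutateTitle(newValue, oldValue);
        // 2. then the client watchers
        this._watchers.forEach(w => w(newValue, oldValue));
    }
    onMutateTitle(newValue, oldValue) { /* default: no-op */ }
}
```

A subclass can override onMutateTitle to enforce class-defined behavior, while clients attach watchers externally; the hook always runs first.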
If I have a JavaScript class in which I wish to expose a method, is it better to expose the method using the this keyword (i.e. to make it publicly accessible in an unqualified way, either using this or object.prototype.method), or to define an object property using Object.defineProperty?
I would like to know about any performance implications, including speed and/or footprint. Obviously with Object.defineProperty I have the ability to control access to the property, and thus to any method that it exposes, although I am limited to only one parameter per method. However, if I use a method, then I can create the access control more flexibly, so encapsulation is not really an issue; but given that Object.defineProperty is a language feature and allows me to use the revealing pattern, I wonder whether there are any performance implications.
Many thanks for your input...
Objects in JavaScript can be used as a hashtable
(the key must be a string).
Does it perform well as the hashtable data structure?
I mean, is it implemented as a hashtable behind the scenes?
Update: (1) I changed HashMap to hashtable. (2) I guess most of the browsers implement it the same way; if not, why not? Is there any requirement in the ECMAScript specs on how to implement it?
Update 2: I understand. I just wonder how V8 and the Firefox JS VM implement property getters/setters on objects.
V8 doesn't implement object property access as a hashtable; it actually implements it in a better way (performance-wise).
So how does it work? "V8 does not use dynamic lookup to access properties. Instead, V8 dynamically creates hidden classes behind the scenes." That makes access to properties almost as fast as accessing properties of C++ objects.
Why? Because in a fixed class, each property can be found at a specific, fixed offset.
So in general, accessing a property of an object in V8 is faster than a hashtable lookup.
I'm not sure how it works in other VMs.
More info can be found here: https://v8.dev/blog/fast-properties
You can also read more regarding Hashtable in JS here:(my blog) http://simplenotions.wordpress.com/2011/07/05/javascript-hashtable/
"I guess most of the browser implement it the same, if not why not? is there any requirement how to implement it in the ECMAScript specs?"
I am no expert, but I can't think of any reason why a language spec would detail exactly how its features must be implemented internally. Such a constraint would have absolutely no purpose, since it does not impact the functioning of the language in any way other than performance.
In fact, this is absolutely correct: the implementation-independence of the ECMA-262 spec is specifically described in section 8.6.2 of the spec:
"The descriptions in these tables indicate their behaviour for native
ECMAScript objects, unless stated otherwise in this document for particular kinds of native ECMAScript objects. Host objects may support these internal properties with any implementation-dependent behaviour as long as it is consistent with the specific host object restrictions stated in this document"
"Host objects may implement these internal methods in any manner unless specified otherwise;"
The word "hash" appears nowhere in the entire ECMA-262 specification.
(original, continued)
The implementations of JavaScript in, say, Internet Explorer 6.0 and Google Chrome's V8 have almost nothing in common, but (more or less) both conform to the same spec.
If you want to know how a specific JavaScript interpreter does something, you should research that engine specifically.
Hashtables are an efficient way to create cross references. They are not the only way. Some engines may optimize the storage for small sets (for which the overhead of a hashtable may be less efficient) for example.
At the end of the day, all you need to know is: they work. There may be faster ways to create lookup tables of large sets, using Ajax, or even in memory. For example, see the interesting discussion on this post from John Resig's blog about using a trie data structure.
But that's neither here nor there. Your choice of whether to use this, or native JS objects, should not be driven by information about how JS implements objects. It should be driven only by performance comparison: how does each method scale. This is information you will get by doing performance tests, not by just knowing something about the JS engine implementation.
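As a starting point for such measurements, here is a rough micro-benchmark sketch comparing lookups in a plain object against an ES2015 Map. The sizes and key shapes are arbitrary, and real numbers will vary by engine, so substitute your own workload:

```javascript
// Rough micro-benchmark sketch: plain object vs. Map lookups.
// Timings here are only indicative; measure with your own keys,
// sizes, and access patterns before drawing conclusions.
function bench(label, fn, iterations) {
    const start = Date.now();
    for (let i = 0; i < iterations; i++) fn(i);
    console.log(label, Date.now() - start, 'ms');
}

const obj = {};
const map = new Map();
for (let i = 0; i < 10000; i++) {
    obj['key' + i] = i;
    map.set('key' + i, i);
}

bench('object', i => obj['key' + (i % 10000)], 1e6);
bench('map   ', i => map.get('key' + (i % 10000)), 1e6);
```

The relative results can flip depending on key count, key churn (adds/deletes), and whether keys are strings; that is exactly why measuring beats reasoning about engine internals.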
Most modern JS engines use pretty similar technique to speed up the object property access. The technique is based on so called hidden classes, or shapes. It's important to understand how this optimization works to write efficient JS code.
A JS object looks like a dictionary, so why not use one to store the properties? A hash table has O(1) access complexity, so it looks like a good solution. In fact, the first JS engines implemented objects this way. But in statically typed languages, like C++ or Java, class instance property access is lightning fast. In such languages, a class instance is just a segment of memory, and every property has its own constant offset, so to get the property value we just need to take the instance pointer and add the offset to it. In other words, at compile time an expression like point.x is simply replaced by its address in memory.
Maybe we can apply a similar technique in JS? But how? Let's look at a simple JS function:
function getX(point) {
    return point.x;
}
How do we get the point.x value? The first problem is that we don't have a class (or shape) which describes the point. But we can calculate one, and that is what modern JS engines do. Most JS objects at runtime have a shape bound to the object. The shape describes the properties of the object and where the property values are stored. It's very similar to how a class definition describes a class in C++ or Java. How the shape of an object is calculated is a pretty big question which I won't cover here; I recommend this article, which contains a great explanation of shapes in general, and this post, which explains how things are implemented in V8. The most important thing to know about shapes is that all objects with the same properties, added in the same order, will have the same shape. There are a few exceptions: for example, if an object has a lot of properties which are frequently changed, or if you delete some of the object's properties using the delete operator, the object is switched into dictionary mode and won't have a shape.
Now, let's imagine that the point object has an array of property values, and we have a shape attached to it which describes where the x value in this property array is stored. But there is another problem: we can pass any object to the function, and it's not even guaranteed that the object has an x property. This problem is solved by a technique called inline caching. It's pretty simple: when getX() is executed the first time, it remembers the shape of the point and the result of the x lookup. When the function is called a second time, it compares the shape of the point with the previous one. If the shape matches, no lookup is required; we can reuse the previous lookup result.
The primary takeaway is that all objects which describe the same thing should have the same shape, i.e. they should have the same set of properties which are added in the same order. It also explains why it's better to always initialize object properties, even if they are undefined by default, here is a great explanation of the problem.
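A small sketch of the shape rule and the inline cache described above. The comments describe what the engine does conceptually; none of this is observable from JS itself:

```javascript
// Objects acquire the same hidden class (shape) only when the
// same properties are added in the same order.
const p1 = { x: 1, y: 2 };
const p2 = { x: 3, y: 4 };  // same shape as p1
const p3 = { y: 5, x: 6 };  // different shape: order differs

function getX(point) {
    return point.x;          // site with an inline cache
}

getX(p1);  // first call: full lookup; the engine caches
           // p1's shape and the offset of x
getX(p2);  // same shape: cache hit, offset reused, no lookup
getX(p3);  // different shape: cache miss, slower lookup path
```

This is also why initializing all properties in the same order (e.g. in a constructor) keeps call sites like getX monomorphic and fast.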
Related resources:
JavaScript engine fundamentals: Shapes and Inline Caches and a YouTube video
A tour of V8: object representation
Fast properties in V8
JavaScript Engines Hidden Classes (and Why You Should Keep Them in Mind)
Should I put default values of attributes on the prototype to save space?
This article explains how they are implemented in V8, the engine used by Node.js and most versions of Google Chrome:
https://v8.dev/blog/fast-properties
Apparently the "tactic" can change over time, depending on the number of properties, going from an array of named values to a dictionary.
V8 also takes the type into account: a number or string will not be treated in the same way as an object (or a function, which is a type of object).
If I understand this correctly, a property accessed frequently, for example in a loop, will be cached.
V8 optimises code on the fly by observing what it's actually doing, and how often.
V8 will identify objects with the same set of named properties, added in the same order (as a class constructor or a repetitive bit of JSON would do), and handle them in the same way.
See the article for more details, then apply at Google for a job :)
I just finished Doug Crockford's The Good Parts, and he offers three different ways to go about inheritance: emulation of the classical model, prototype-based inheritance, and functional inheritance.
In the latter, he creates a function, a factory of sorts, which spits out objects augmented with desired methods that build upon other objects; something along the lines of:
var dog = function (params) {
    // animal is the 'super class', created
    // the same way as dog, and defines some
    // common methods
    var that = animal(params);
    that.sound = 'bark';
    that.name = function () {};
    return that;
};
Since all objects created this way will have a reference to the same functions, the memory footprint will be much lower than when using the new operator, for instance. The question is: would the prototype approach offer any advantages in this case? In other words, are object prototypes somehow 'closer to the metal' in a way that provides performance advantages, or are they just a convenience mechanism?
EDIT: I'll simplify the question: prototypes vs. their emulation through object composition. So long as you don't require all object instances to get updated with new methods (a convenience offered only by prototypes), are there any advantages to using prototypes in the first place?
I emailed Doug Crockford, and he had this to say:
[Using the functional approach above vs. prototypes] isn't that much memory. If you have a huge number of objects times a huge number of methods, then you might want to go prototypal. But memories are abundant these days, and only an extreme application is going to notice it.
Prototypes can use less memory, but can have slightly slower retrieval, particularly if the chains are very long. But generally, this is not noticeable.
The instances don't all reference the same functions etc. Each call to "dog" will create a new "name" function instance.
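This is easy to demonstrate. The animal factory below is hypothetical, since the question omits it; the comparison with a prototype-based Dog shows which style actually shares function objects:

```javascript
// Hypothetical 'animal' base factory (the question omits it).
function animal(params) {
    return { params: params };
}

// Functional style: every call builds fresh function objects,
// so each instance carries its own copy of 'name'.
function dog(params) {
    var that = animal(params);
    that.sound = 'bark';
    that.name = function () {};
    return that;
}

var d1 = dog({});
var d2 = dog({});
// d1.name !== d2.name

// Prototype style: all instances share one function object
// through the prototype chain.
function Dog(params) {
    this.params = params;
}
Dog.prototype.sound = 'bark';
Dog.prototype.name = function () {};

var p1 = new Dog({});
var p2 = new Dog({});
// p1.name === p2.name
```

So the memory-sharing claim in the question holds for the prototype version, not the functional one: each dog() call allocates a new closure for name.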
There are many opinions about this, and Crockford's isn't necessarily the right one.
The main disadvantage to modifying the prototypes is that it may make it harder to work with other javascript libraries.
But the disadvantage of Crockford's functional way of creating classes is that you can't add a method or field to all instances of a type as easily.
See Closure: The Definitive Guide for some critical comments on Crockford's view about classes and inheritance in javascript:
http://my.safaribooksonline.com/9781449381882/I_sect1_d1e29990#X2ludGVybmFsX0ZsYXNoUmVhZGVyP3htbGlkPTk3ODE0NDkzODE4ODIvNTE0
Libraries like Dojo, Google Closure Library (which seems to copy Dojo's style), and perhaps YUI have their own class system that seems to be a good middle ground. I like Dojo's system probably the best because it has support for Mixins, unlike Closure. Some other class systems not tied to gui toolkits include Joose, JS.Class, and JavascriptMVC (check out last one esp. if using jquery).
I once created a GUI that was backed by a class that didn't have great event-based support for the information it needed. The GUI implemented a generic interface where an instance of that class is passed in, so I could not inherit from that class. Instead I did something similar to prototype-based inheritance: I created a proxy object that wraps the instance of that class when it's passed to my GUI. The proxy intercepts methods of interest that mutate state and reports them as events. The entire GUI then revolved around using these events.
In this case, it would have just been annoying to create a separate class that wraps the passed-in instance and reproduces the same interface. This would actually have led to more overhead (not that it matters) due to creating all the redundant functions.
There was simply no other viable alternative. The only alternatives were:
Create an object that composes the object passed to the GUI (like I mentioned above). This object would then reproduce every function in the original class to implement the same interface, and add events, so it's effectively just the same as the proxy object way.
Have the GUI track state mutation in its passed in object. This would have been a pain and bug prone.
The question is from a language design perspective.
I should explain a little about the situation. I am working on a JavaScript variant which does not support prototypes; however, it is overdue a decent type system (most importantly, support for instanceof). The ECMAScript spec is not important, so I have the freedom to implement something different and better suited.
In the variant:-
You do not declare constructors with function foo(); rather, constructors are declared in template files, which means constructors exist in a namespace (determined by the path of the file)
Currently all inheritance of behaviour is done by applying templates, which means all shared functions are copied to each individual object (there are no prototypes, after all).
Never having been a web developer, this puts me in the slightly bizarre position of never having used prototypes in anger. Though this hasn't stopped me having opinions on them.
My principal issues with the prototype model as i understand it are
unnecessary littering of the object namespace, obj.prototype, obj.constructor (is this an immature objection, trying to retain the ability to treat objects as maps, which perhaps they are not?)
the ability to change shared behaviour at runtime seems unnecessary, when directly using an extra level of indirection would be more straightforward: obj.shared.foo(). In particular, it is quite a big implementation headache
people do not seem to understand prototypes very well generally e.g. the distinction between a prototype and a constructor.
So to get around these, my idea is to have a special operator, constructorsof. Basically the principle is that each object has a list of constructors, which occasionally you will want access to.
var x = new com.acme.X();
com.acme.Y(x,[]); // apply y
(constructorsof x) // [com.acme.Y,com.acme.X,Object];
x instanceof com.acme.X; // true
x instanceof com.acme.Y; // true
All feedback is appreciated. I realize it may be difficult to grasp my POV, as there is a lot I am trying to convey, but it's an important decision and an expert opinion could be invaluable.
anything that can improve my understanding of the prototype model, the good and the bad.
thoughts on my proposal
thanks,
mike
edit: proposal hopefully makes sense now.
Steve Yegge has written a good technical article about the prototype model.
I don't think your issues with the prototype model are valid:
unnecessary littering of object namespace, obj.prototype, obj.constructor
prototype is a property of the constructor function, not the object instance. Also, the problem isn't as bad as it sounds because of the [[DontEnum]] attribute, which unfortunately can't be set programmatically. Some of the perceived problems would go away if you could.
is this an immature objection, trying to retain ability to treat objects as maps, which perhaps they are not?
There's no problem with using objects as maps as long as the keys are strings and you check hasOwnProperty().
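A small sketch of that object-as-map pattern, with the hasOwnProperty() guard (called via Object.prototype so it survives even when a stored key shadows the method):

```javascript
// Using a plain object as a string-keyed map.
var map = {};
map['toString'] = 'value';   // this key shadows an inherited member

function has(map, key) {
    // Guard against inherited properties like 'constructor' or
    // 'valueOf' leaking in as false positives.
    return Object.prototype.hasOwnProperty.call(map, key);
}

has(map, 'toString');   // true  -- own property we stored
has(map, 'valueOf');    // false -- only inherited from the prototype
```

Without the guard, a naive `key in map` check would report 'valueOf' as present, because the in operator walks the prototype chain.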
ability to change shared behaviour at runtime seems unnecessary, when directly using an extra level of indirection would be more straight forward obj.shared.foo(). Particularly it is quite a big implementation headache
I don't see where the big implementation headache in implementing the prototype chain lies. In fact, I consider prototypal inheritance conceptually simpler than class-based inheritance, which doesn't offer any benefits in languages with late binding.
people do not seem to understand prototypes very well generally e.g. the distinction between a prototype and a constructor.
People who only know class-based oo languages like Java and C++ don't understand JavaScript's inheritance system, news at 11.
In addition to MarkusQ's suggestions, you might also want to check out Io.
It might just be easier to try a few things with practical code. Create the language with one simple syntax, whatever that is, and implement something in that language. Then, after a few iterations of refactoring, identify the features that are obstacles to reading and writing the code. Add, alter or remove what you need to improve the language. Do this a few times.
Be sure your test code really exercises all parts of your language, even with some bits that really try to break it. Try to do everything wrong in your tests (as well as everything right).
Reading up on Self, the language that pioneered the prototype model, will probably help you more than just thinking of it in terms of JavaScript (especially since you seem to associate JavaScript, as many do, with "web programming"). A few links to get you started:
http://selflanguage.org/
http://www.self-support.com/
Remember, those who fail to learn history are doomed to reimplement it.