Is it worth using mini functions in JavaScript? Example:
function setDisplay(id, status) {
document.getElementById(id).style.display = status;
}
And after that, each time you want to manipulate the display attribute of any given object, you could just call this function:
setDisplay("object1", "block");
setDisplay("object2", "none");
Would there be any benefit of coding this way?
It would reduce file size and consequently page load time if that particular attribute is changed multiple times. Or would calling that function each time put extra load on processing, which would make page loading even slower?
Is it worth using mini functions in JavaScript?
Yes. If done well, it will improve on:
Separation of concern. For instance, a set of such functions can create an abstraction layer for what concerns presentation (display). Code that has just a huge series of instructions combining detailed DOM manipulation with business logic is not best practice.
Coding it only once. Once you have a case where you can call such a function more than once, you have already gained. Code maintenance becomes easier when you don't have (a lot of) code repetition.
Readability. By giving such functions well chosen names, you make code more self-explaining.
Concerning your example:
function setDisplay(id, status) {
document.getElementById(id).style.display = status;
}
[...]
setDisplay("object1", "block");
setDisplay("object2", "none");
This example could be improved. It is a pity that the second argument is closely tied to implementation aspects. When it comes to "separation of concern" it would be better to replace that argument with a boolean, as essentially you are only interested in an on/off functionality. Secondly, the function should probably be fault-tolerant. So you could do:
function setDisplay(id, visible) {
let elem = document.getElementById(id);
if (elem) elem.style.display = visible ? "block" : "none";
}
setDisplay("object1", true);
setDisplay("object2", false);
Would there be any benefit of coding this way?
Absolutely.
would [it] reduce file size...
If you have multiple calls of the same function: yes. But this is not a metric that you should be much concerned with. If we specifically speak of "one-liner" functions, then the size gain will hardly ever be noticeable on modern day infrastructures.
...and consequently page load time
It will have no noticeable effect on page load time.
would calling that function each time put extra load on processing, which would make page loading even slower?
No, it wouldn't. The overhead of function calling is very small. Even if you have hundreds of such functions, you'll not see any downgraded performance.
Other considerations
When you define such functions, it is worth putting extra effort into making them robust, so they can deal gracefully with unusual arguments and you can rely on them without ever having to come back to them.
Group them into modules, where each module deals with one layer of your application: presentation, control, business logic, persistence, ...
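As a rough illustration of that grouping (a sketch with hypothetical file and function names, not a prescription):
// display.js - presentation layer: all direct DOM manipulation lives here
export function setVisible(id, visible) {
  const elem = document.getElementById(id);
  if (elem) elem.style.display = visible ? "block" : "none";
}

export function setText(id, text) {
  const elem = document.getElementById(id);
  if (elem) elem.textContent = text;
}

// main.js - control/business logic only talks to the presentation module
import { setText, setVisible } from "./display.js";

setText("status", "Saved");
setVisible("spinner", false);
The business logic never touches the DOM directly, so changing the presentation details later only requires touching display.js.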
It would reduce file size and consequently page load time if that particular attribute is changed multiple times. Or would calling that function each time put extra load on processing, which would make page loading even slower?
It won't make the page load more slowly. Adding a few lines of vanilla JS here and there won't affect the page load time; after all, the load time is essentially the time to load the HTML, JS and CSS files, so a few extra lines of JS won't change it. A performance issue is unlikely to appear unless you're doing something genuinely heavy that leads you into a massive calculation or deep recursion.
Is it worth using mini functions in JavaScript?
In my opinion, yes. You don't want to overuse them when unnecessary; after all, you don't want to wrap every single line of code in a function, right? In many cases, though, mini functions can improve the code's cleanliness and readability, and they can make it easier and faster to write.
With ES6 you can write single-line arrow functions, which is very nice and easy:
const setDisplay = (id, status) => document.getElementById(id).style.display = status;
It doesn't really affect the performance unless you execute that function 10,000 or more times.
jQuery uses mini functions, like $('#id').hide(); $('#id').show(); and $("#id").css("display", "none"); $("#id").css("display", "block");
It makes the code more readable, but it doesn't do much for performance:
var divs = document.querySelectorAll("div");
function display(div, state) {
div.style.display = state;
}
console.time("style");
for(var i = 0; i < divs.length; i++) {
divs[i].style.display = "none";
}
console.timeEnd("style");
console.time("function");
for(var j = 0; j < divs.length; j++) {
display(divs[j], "block");
}
console.timeEnd("function");
<!-- several hundred empty <div></div> elements (repeated markup trimmed for brevity) -->
In terms of runtime performance I’d say it doesn’t make any difference. The JavaScript engines are incredibly optimized and are able to recognize that this function is being called often and to replace it by native code, and maybe even able to inline the function call so that the cost of calling an extra function is eliminated completely.
If your intent is to reduce the size of the JavaScript files: you save a few tens of bytes for each place in the code where the function is used, but even if you call this mini function 100 times, you will only be saving 1 or 2 KB of uncompressed JavaScript. If you compare the final gzipped JavaScript (which should be how you serve your resources if you care about performance), I doubt there will be any noticeable difference.
You also need to think about how this type of code will be understood by your colleagues, or whoever may have to read this code in the future. How likely it is that a new programmer in your team knows the standard DOM API? Your non-standard functions might confuse your future teammates, and require them to look into the function body to know what they do. If you have one function like this it probably doesn’t make much of a difference, but as you start adding more and more mini functions for each possible style attribute you end up with hundreds of functions that people need to get used to.
The real benefit is the ability to update the behavior in several places at once. Let's say you realize that you want to toggle classes instead of setting styles directly. Instead of having to change every call site, you can just change that one method.
This is even more apparent when you read from a common source. Perhaps you have an API that you call with a single method.
let user = getData('user');
...but instead of doing an API call, you want to use localforage to store the user data and read it from disk instead.
let user = getData('user');
...but then you realize that it takes about 70-200 ms to read from localforage, so instead you keep everything in memory, reading from localforage only when a value isn't in memory, and doing an API call only when it isn't in localforage either. Getting user information is still the same:
let user = getData('user');
Just imagine if you wrote the whole call in every single place.
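A rough sketch of such a layered getData (assuming the localforage library and a hypothetical /api/... endpoint; not the original code):
import localforage from "localforage";

const memoryCache = new Map();

// Try memory first, then localforage, then the API - callers never change.
async function getData(key) {
  if (memoryCache.has(key)) return memoryCache.get(key);

  let value = await localforage.getItem(key);   // resolves to null if missing
  if (value === null) {
    const response = await fetch(`/api/${key}`);
    value = await response.json();
    await localforage.setItem(key, value);
  }

  memoryCache.set(key, value);
  return value;
}

// usage stays exactly the same as before:
// let user = await getData('user');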
I usually create mini functions like these when 1) they read/write data, 2) the code gets shorter or more readable (i.e. the function name replaces a comment), or 3) I have repeated code that I can generalize.
Also, to quote from the book Clean Code: each function should do only one thing. Thinking in terms of mini functions helps with that.
I'm not fully sure what a mini function is, but if you're asking whether it's better to define functions for code that needs to be called multiple times, instead of typing out or copy-pasting those same lines over and over, then it's better to go with the function, for readability and file size.
It would reduce file size and consequently page load time if that particular attribute is changed multiple times. Or would calling that function each time put extra load on processing, which would make page loading even slower?
In typical scenarios, the use of such functions won't put any significant extra load on processing or slow down document parsing. Whether it could depends on when and how your code is called, but it's not as if you'll be looping in O(N³) and calling such functions thousands of times during page load (right?). If anything, predefined functions like that, which take dynamic inputs, make your code neater, more uniform and smaller, with a tiny, obvious overhead compared to accessing the properties directly without going through another function. So decide whether that bargain is worth it, depending on how frequently such functions would be called.
I'm sure most engines optimize repetitive code so that both forms end up (nearly) equivalent anyway, so I'd choose readability and make it easier for others to understand and follow the code.
JavaScript is a prototype-based language; on top of that, it's only a small leap to say that everything in JavaScript is an object, so it is about as object-oriented as it gets, although at a very different level.
So the question is, much like creating classes in an object-oriented language: should we build our script or framework in a functional style or a prototype style, and what will the performance hit be?
So let's consider this simple example.
function abc(name, age) {
this.name = name;
this.age = age;
}
person1 = new abc('jhon', 25);
person2 = new abc('mark', 46);
console.log(person1.age);
console.log(person2.age);
person1.father = 'mark';
console.log(person1.father);
We have created a function and used the power of this in the function's scope, so every new instance will carry this information, and no two instances will be alike.
Then, down the line, we have used the prototype mechanism to add father information to an instance. This is where prototyping becomes awesome. This whole combination of this scoping + instances + prototyping is fast.
function abc(name, age) {
name = name;
age = age;
return name; // its either name or age no two values
}
person1 = abc('jhon', 25);
person2 = abc('mark', 46);
console.log(person1);
console.log(person2);
abc.father = 'mark';
console.log(person1.father);
console.log(abc.father);
We tried the same thing in a functional sense and the whole thing fell apart: no instances, no this scoping, no prototype delegation down the line. In the long run this approach will reduce performance, as one would have to re-fetch things over and over again; on top of that, without this scoping we only return a single value, whereas with this scoping we pack a lot more into the object. That's one point.
function abc(name, age) {
this.name = name;
this.age = age;
}
person1 = new abc('jhon', 25);
person1.fatherName = 'Mark';
person1.fatherUpdate = function (d){this.fatherAge = this.fatherName+' '+d;};
person1.fatherUpdate(56);
console.log(person1.fatherAge);
Now we have added more complexity: we have carried this down the line, hence the scope, and have added functions on top of it.
This would literally make your hair fall out if it were done in a purely functional way.
Always keep in mind that any given function can compute and execute as many things as you want, but it will always give a single return value, 99% of the time a single property.
If you need more, use the prototype approach with this scoping, as above.
That is the jQuery way of doing things.
I built my framework ActiveQ, which uses the above approach; it is almost 30 KB unminified and does everything jQuery does and much more, with a template engine.
Anyway, let's build an example - it is just an example, so please be kind:
function $(a) {
  var x;
  if (x = document.querySelector(a)) return x;
  return;
}
console.log($('div'));
console.log($('p'));
<div>abc</div>
That is almost 50% of the jQuery selector library in just a few lines.
Now let's extend it:
function $(a){
this.x = document.querySelector(a);
}
$.prototype.changeText = function (a){
this.x.innerHTML = a;
}
$.prototype.changeColor = function (a){
this.x.style.color = a;
}
console.log(document.querySelector('div').innerText);
app = new $('div');
app.changeText('hello');
console.log(document.querySelector('div').innerText);
app.changeColor('red');
<div>lets see</div>
What was the point of the whole exercise above? You won't have to search through the DOM over and over again, as long as the element remains in the function's scope through this.
Obviously, with pure functional programming, one would have to search all over again and again.
Even jQuery at times forgets what it had already searched for, and one has to search again unless the this context is forwarded.
Let's do some jQuery-style chaining the full-on way - please be kind, it's just an example:
function $(a){
return document.querySelector(a);
}
Element.prototype.changeText = function (a){
this.innerHTML = a;
return this;
}
Element.prototype.changeColor = function (a){
this.style.color = a;
return this;
}
console.log($('div').innerText);
// now we have full DOM access from a simple function returning a single context
// let's chain it
console.log($('div').changeText('hello').changeColor('red').innerText);
<div>lets see</div>
See the difference: fewer lines of code, and it is faster, because it works with the browser rather than in a purely functional way that repeatedly creates function-call overhead and search overhead.
So, if you need to perform a bunch of tasks with a single output, stick to functions in the conventional way; and if you need to perform a bunch of tasks on a context - as in the last example, where the context is manipulating properties of the element - go the prototype way, as compared to the $ function, which only searches for and returns the element.
In node v14.3.0, I discovered (while doing some coding work with very large arrays) that sub-classing an array can cause .slice() to slow down by a factor of 20. While I could imagine that there might be some compiler optimizations around a non-subclassed array, what I do not understand at all is how .slice() can be more than 2x slower than just manually copying elements from one array to another. That does not make sense to me at all. Anyone have any ideas? Is this a bug or is there some aspect to this that would/could explain it?
For the test, I created a 100,000,000 unit array filled with increasing numbers. I made a copy of the array with .slice() and I made a copy manually by then iterating over the array and assigning values to a new array. I then ran those two tests for both an Array and my own empty subclass ArraySub. Here are the numbers:
Running with Array(100,000,000)
sliceTest: 436.766ms
copyTest: 4.821s
Running with ArraySub(100,000,000)
sliceTest: 11.298s
copyTest: 4.845s
The manual copy is about the same both ways. The .slice() copy is 26x slower on the sub-class and more than 2x slower than the manual copy. Why would that be?
And, here's the code:
// empty subclass for testing purposes
class ArraySub extends Array {
}
function test(num, cls) {
let name = cls === Array ? "Array" : "ArraySub";
console.log(`--------------------------------\nRunning with ${name}(${num})`);
// create array filled with unique numbers
let source = new cls(num);
for (let i = 0; i < num; i++) {
source[i] = i;
}
// now make a copy with .slice()
console.time("sliceTest");
let copy = source.slice();
console.timeEnd("sliceTest");
console.time("copyTest");
// these next 4 lines are a lot faster than source.slice() for the subclass
const manualCopy = new cls(num);
for (let [i, item] of source.entries()) {
manualCopy[i] = item;
}
console.timeEnd("copyTest");
}
[Array, ArraySub].forEach(cls => {
test(100_000_000, cls);
});
FYI, there's a similar result in this jsperf.com test when run in the Chrome browser. Running the jsperf in Firefox shows a similar trend, but not as much of a difference as in Chrome.
V8 developer here. What you're seeing is fairly typical:
The built-in .slice() function for regular arrays is heavily optimized, taking all sorts of shortcuts and specializations (it even goes as far as using memcpy for arrays containing only numbers, hence copying more than one element at a time using your CPU's vector registers!). That makes it the fastest option.
Calling Array.prototype.slice on a custom object (like a subclassed array, or just let obj = {length: 100_000_000, foo: "bar", ...}) doesn't fit the restrictions of the fast path, so it's handled by a generic implementation of the .slice builtin, which is much slower, but can handle anything you throw at it. This is not JavaScript code, so it doesn't collect type feedback and can't get optimized dynamically. The upside is that it gives you the same performance every time, no matter what. This performance is not actually bad, it just pales in comparison to the optimizations you get with the alternatives.
Your own implementation, like all JavaScript functions, gets the benefit of dynamic optimization, so while it naturally can't have any fancy shortcuts built into it right away, it can adapt to the situation at hand (like the type of object it's operating on). That explains why it's faster than the generic builtin, and also why it provides consistent performance in both of your test cases. That said, if your scenario were more complicated, you could probably pollute this function's type feedback to the point where it becomes slower than the generic builtin.
With the [i, item] of source.entries() approach you're coming close to the spec behavior of .slice() very concisely at the cost of some overhead; a plain old for (let i = 0; i < source.length; i++) {...} loop would be about twice as fast, even if you add an if (i in source) check to reflect .slice()'s "HasElement" check on every iteration.
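For illustration, that plainer loop might look like this (my sketch, not code from the question):
function manualSlice(source) {
  const copy = new source.constructor(source.length);
  for (let i = 0; i < source.length; i++) {
    // mirrors .slice()'s HasElement check, so holes stay holes
    if (i in source) {
      copy[i] = source[i];
    }
  }
  return copy;
}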
More generally: you'll probably see the same general pattern for many other JS builtins -- it's a natural consequence of running on an optimizing engine for a dynamic language. As much as we'd love to just make everything fast, there are two reasons why that won't happen:
(1) Implementing fast paths comes at a cost: it takes more engineering time to develop (and debug) them; it takes more time to update them when the JS spec changes; it creates an amount of code complexity that quickly becomes unmanageable leading to further development slowdown and/or functionality bugs and/or security bugs; it takes more binary size to ship them to our users and more memory to load such binaries; it takes more CPU time to decide which path to take before any of the actual work can start; etc. Since none of those resources are infinite, we'll always have to choose where to spend them, and where not.
(2) Speed is fundamentally at odds with flexibility. Fast paths are fast because they get to make restrictive assumptions. Extending fast paths as much as possible so that they apply to as many cases as possible is part of what we do, but it'll always be easy for user code to construct a situation that makes it impossible to take the shortcuts that make a fast path fast.
I have the following 3 different ways of creating an object instance.
I was wondering which of one of these is faster, takes less memory and why?
//Option 1
export interface Animal1 {
name: string;
color: string;
isHungry: boolean;
}
let animal: Animal1 = {
name: 'cat',
color: 'brown',
isHungry: true
};
//Option 2
export class Animal2 {
name: string;
color: string;
isHungry: boolean;
}
let animal2: Animal2 = new Animal2();
animal2.name = 'cat';
animal2.color = 'brown';
animal2.isHungry = true;
//Option 3
export class Animal3 {
constructor(public name:string, public color:string, public isHungry:boolean) {}
}
let animal3 = new Animal3('cat', 'brown', true);
As for coding style and maintenance, I prefer option 1, with builder classes in unit testing. Option 3 is my least favourite, as it is:
Difficult to create an object easily, you have to set all properties
Difficult to maintain, every time a new property is added you need to update every single instance creation
Error-prone when the number of properties with the same type is high, e.g. string, string, string; you may set the wrong property.
Is there any performance/memory difference between these three?
It is not possible to say, in a general sense, which is “fastest”.
Object instance creation is highly optimized in JavaScript engines. The speed will be affected by the surrounding code, the frequency with which it is run, the runtime version and many other factors.
I’m sorry this is not the answer you wanted.
In terms of memory, you can simply count the objects created by each approach - but unless you are writing a highly optimized piece of JavaScript (unlikely), this will be moot.
Typically your choice of pattern for object creation should be driven not by performance, but by non-functional requirements like the style of code agreed by the team, maintainability and API shape.
Of course, algorithms written in TS or JS can still be analysed for time and space complexity in the usual way.
The same as in JavaScript. There might be little difference in object creation times, but there can be a noticeable impact on the overall performance of the code that uses these types. These options are not equal.
The reason is the "hidden classes" JIT optimization and the things the JIT can do with monomorphic code. If object members are always initialized in the same order, the JIT can assign the same "hidden class" to all instances of the type and then use this information to access class members via a memory offset instead of an expensive hash table lookup. When you iterate through a collection of objects having the same hidden class (or repeatedly call a function with arguments having the same hidden classes), the JIT can recognize the monomorphic code and compile a specialized version of the function with excessive type checks omitted, dynamically inline method calls, and do some other magic that makes modern JS VMs so fast.
So, in general, monomorphic code is faster than polymorphic code, and the order of property initialization is used by the JIT to determine the "internal object type".
https://draft.li/blog/2016/12/22/javascript-engines-hidden-classes/
https://richardartoul.github.io/jekyll/update/2015/04/26/hidden-classes.html
The easiest and safest way to enable these optimizations is to use a class and make sure that all class members are always initialized in the same order in its constructor, which is option 3 from your example. Option 2 is the worst, as there's no guarantee that the "hidden class" will be the same for different class instances, and the JIT won't consider them to have the same type.
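A small illustration of what "initialized in the same order" buys you (plain JavaScript, and only a sketch; real engine behavior is more subtle):
class Animal {
  constructor(name, color, isHungry) {
    // always assigned in the same order -> every instance gets the same hidden class
    this.name = name;
    this.color = color;
    this.isHungry = isHungry;
  }
}

const cat = new Animal('cat', 'brown', true);
const dog = new Animal('dog', 'black', false);
// code that iterates over [cat, dog] stays monomorphic

// By contrast, assigning properties ad hoc and in different orders
// (as can happen with option 2) produces different hidden-class chains:
const a = {}; a.name = 'cat'; a.color = 'brown';
const b = {}; b.color = 'black'; b.name = 'dog'; // different order -> different shape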
Last but not least: there's nothing obvious about JS JIT performance. Not only do JIT engines use slightly different (yet very sophisticated) optimization strategies, they also evolve over time, which makes the majority of articles about JIT internals obsolete rather quickly. However, "hidden classes" or similar optimizations that allow the JIT to take advantage of monomorphic code are fundamental to JS JIT performance.
Anyway, don't guess about performance and don't believe in seemingly obvious things others say. Measure it.
I'm the author of a graph datastructure library for JavaScript. I'm currently ES6-ifying the library.
I also want to make it more usable for ES6 programmers, which means implementing the iterable protocol. But a graph has multiple things to iterate over, and multiple ways to iterate over them (vertices, edges, successors, predecessors, vertices in topological order, etc.)
I have an idea of how to design this interface, but I'd like to follow existing conventions if they exist. Here's an example of how I might do the 'vertices' part:
class JsGraph {
// ...
get vertices() {
return {
_graph: this,
get length() { return this._graph._vertexCount },
*[Symbol.iterator]() {
var keys = Object.keys(this._graph._vertices);
for (let i = 0; i < keys.length; ++i) {
yield [keys[i], this._graph._vertices[keys[i]]];
}
}
};
}
// ...
}
If there are any existng convention I should probably be following (or any problems with this code), your feedback would be much appreciated.
You might get some inspiration from the Map and Set ES6 data structures. They provide multiple methods to get different iterators: .values(), .keys(), and .entries(). Their @@iterator method defaults to entries or values respectively.
I don't know of any other existing conventions, I guess these will have to form yet.
any problems with this code
First, I'm not sure whether the getter vertices that returns such an object is a good idea. I'd rather do
get vertexCount() { … }
*vertices() { … }
but that'll basically come down to preference. Your current code is not as efficient because it creates two functions at every .vertices access; you could improve that using prototypes if you see the need (class VertexIterator {…}?).
Second, iterators on mutable data are a hassle to implement, as the data may change while being looped over. You should be aware of that and choose a strategy (making your structures immutable is probably out of the question). For example, the MapIterator's next method is specified to "redetermine the number of elements each time it is evaluated". Similarly, for object enumerations (for in) there is the rule that deleted properties must never appear (if not already enumerated before they were deleted), and of course no property may be enumerated twice. However, they explicitly are allowed to be inconsistent about keys that are added during the execution; no guarantees are given on whether those appear in the enumeration or not.
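Putting those suggestions together, one possible shape for the graph class (hypothetical method names, following the Map/Set convention rather than any established graph standard):
class JsGraph {
  constructor() {
    this._vertices = Object.create(null); // key -> value
    this._vertexCount = 0;
  }

  addVertex(key, value) {
    if (!(key in this._vertices)) this._vertexCount++;
    this._vertices[key] = value;
  }

  get vertexCount() { return this._vertexCount; }

  *vertices() {                  // analogous to Map.prototype.entries()
    for (const key of Object.keys(this._vertices)) {
      yield [key, this._vertices[key]];
    }
  }

  *vertexKeys() {                // analogous to Map.prototype.keys()
    yield* Object.keys(this._vertices);
  }

  [Symbol.iterator]() {          // default iteration delegates to entries
    return this.vertices();
  }
}

// for (const [key, value] of graph) { ... }
// for (const key of graph.vertexKeys()) { ... }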
It is not clear to me when anyone would need to use Object.freeze in JavaScript. MDN and MSDN don't give real life examples when it is useful.
I get it that an attempt to change such an object at runtime means a crash. The question is rather, when would I appreciate this crash?
To me the immutability is a design time constraint which is supposed to be guaranteed by the type checker.
So is there any point in having a runtime crash in a dynamically typed language, besides detecting a violation better later than never?
The Object.freeze function does the following:
Makes the object non-extensible, so that new properties cannot be added to it.
Sets the configurable attribute to false for all properties of the object. When configurable is false, the property attributes cannot be changed and the property cannot be deleted.
Sets the writable attribute to false for all data properties of the object. When writable is false, the data property value cannot be changed.
That's the what part, but why would anyone do this?
Well, in the object-oriented paradigm, the notion exists that an existing API contains certain elements that are not intended to be extended, modified, or re-used outside of their current context. The final keyword in various languages is the most suitable analogy for this. Even in languages that are not compiled and are therefore easily modified, it still exists, e.g. in PHP, and in this case, JavaScript.
You can use this when you have an object representing a logically immutable data structure, especially if:
Changing the properties of the object or altering its "duck type" could lead to bad behavior elsewhere in your application
The object is similar to a mutable type or otherwise looks mutable, and you want programmers to be warned on attempting to change it rather than obtain undefined behavior.
As an API author, this may be exactly the behavior you want. For example, you may have an internally cached structure that represents a canonical server response that you provide to the user of your API by reference but still use internally for a variety of purposes. Your users can reference this structure, but altering it may result in your API having undefined behavior. In this case, you want an exception to be thrown if your users attempt to modify it.
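To make those three effects concrete, a short sketch:
'use strict';
const response = Object.freeze({ status: 200, body: 'cached' });

// Each of these throws a TypeError in strict mode (and fails silently otherwise):
try { response.status = 404; } catch (e) {}   // writable is false
try { delete response.body; } catch (e) {}    // configurable is false
try { response.etag = 'abc'; } catch (e) {}   // the object is non-extensible

console.log(Object.isFrozen(response)); // true
console.log(Object.getOwnPropertyDescriptor(response, 'status'));
// { value: 200, writable: false, enumerable: true, configurable: false }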
In my nodejs server environment, I use freeze for the same reason I use 'use strict'. If I have an object that I do not want being extended or modified, I will freeze it. If something attempts to extend or modify my frozen object, I WANT my app to throw an error.
To me this relates to consistent, quality, more secure code.
Also,
Chrome is showing significant performance increases working with frozen objects.
Edit:
In my most recent project, I'm sending/receiving encrypted data between a government entity. There are a lot of configuration values. I'm using frozen object(s) for these values. Modification of these values could have serious, adverse side effects. Additionally, as I linked previously, Chrome is showing performance advantages with frozen objects, I assume nodejs does as well.
For simplicity, an example would be:
var US_COIN_VALUE = {
QUARTER: 25,
DIME: 10,
NICKEL: 5,
PENNY: 1
};
Object.freeze( US_COIN_VALUE );
There is no reason to modify the values in this example, and you also enjoy the benefits of the speed optimizations.
Object.freeze() is mainly used in functional programming (immutability).
Immutability is a central concept of functional programming because without it, the data flow in your program is lossy. State history is abandoned, and strange bugs can creep into your software.
In JavaScript, it's important not to confuse const with immutability. const creates a variable name binding which can't be reassigned after creation. const does not create immutable objects: you can't change the object that the binding refers to, but you can still change the properties of the object, which means that bindings created with const are mutable, not immutable.
Immutable objects can't be changed at all. You can make a value truly immutable by deep freezing the object; JavaScript's built-in method only freezes an object one level deep.
const a = Object.freeze({
foo: 'Hello',
bar: 'world',
baz: '!'
});
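Since Object.freeze only goes one level deep, nested objects stay mutable. A common recursive helper looks roughly like this (a sketch, not a built-in):
function deepFreeze(obj) {
  // freeze children first, then the object itself
  for (const key of Object.getOwnPropertyNames(obj)) {
    const value = obj[key];
    if (value && typeof value === 'object' && !Object.isFrozen(value)) {
      deepFreeze(value);
    }
  }
  return Object.freeze(obj);
}

const frozen = deepFreeze({ greeting: { foo: 'Hello', bar: 'world', baz: '!' } });
// frozen.greeting.foo = 'Goodbye'; // throws in strict mode, silently ignored otherwise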
When you're writing a library/framework in JS and you don't want some developer to break your dynamic language creation by re-assigning "internal" or public properties.
This is the most obvious use case for immutability.
With the V8 release v7.6 the performance of frozen/sealed arrays is greatly improved. Therefore, one reason you would like to freeze an object is when your code is performance-critical.
What is a practical situation when you might want to freeze an object?
One example, on application startup you create an object containing app settings. You may pass that configuration object around to various modules of the application. But once that settings object is created you want to know that it won't be changed.
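A minimal sketch of that settings scenario (hypothetical names and values):
// settings.js - built once at startup, then shared read-only
const settings = Object.freeze({
  apiBaseUrl: 'https://example.com/api',
  retryCount: 3,
  locale: 'en-US'
});
export default settings;

// any module that imports settings can read it,
// but settings.retryCount = 10 throws (modules run in strict mode)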
This is an old question, but I think I have a good case where freeze might help. I had this problem today.
The problem
class Node {
constructor() {
this._children = [];
this._parent = undefined;
}
get children() { return this._children; }
get parent() { return this._parent; }
set parent(newParent) {
// 1. if _parent is not undefined, remove this node from _parent's children
// 2. set _parent to newParent
// 3. if newParent is not undefined, add this node to newParent's children
}
addChild(node) { node.parent = this; }
removeChild(node) { node.parent === this && (node.parent = undefined); }
...
}
As you can see, when you change the parent, it automatically handles the connection between these nodes, keeping children and parent in sync. However, there is one problem here:
let newNode = new Node();
myNode.children.push(newNode);
Now, myNode has newNode in its children, but newNode does not have myNode as its parent. So you've just broken it.
(OFF-TOPIC) Why are you exposing the children anyway?
Yes, I could just create lots of methods: countChildren(), getChild(index), getChildrenIterator() (which returns a generator), findChildIndex(node), and so on... but is it really a better approach than just returning an array, which provides an interface all javascript programmers already know?
You can access its length to see how many children it has;
You can access the children by their index (i.e. children[i]);
You can iterate over it using for .. of;
And you can use some other nice methods provided by an Array.
Note: returning a copy of the array is out of the question! It costs linear time, and any updates to the original array do not propagate to the copy!
The solution
get children() { return Object.freeze(Object.create(this._children)); }
// OR, if you deeply care about performance:
get children() {
return this._PUBLIC_children === undefined
? (this._PUBLIC_children = Object.freeze(Object.create(this._children)))
: this._PUBLIC_children;
}
Done!
Object.create: we create an object that inherits from this._children (i.e. has this._children as its __proto__). This alone solves almost the entire problem:
It's simple and fast (constant time)
You can use anything provided by the Array interface
If you modify the returned object, it does not change the original!
Object.freeze: however, the fact that you can modify the returned object BUT the changes do not affect the original array is extremely confusing for the user of the class! So, we just freeze it. If he tries to modify it, an exception is thrown (assuming strict mode) and he knows he can't (and why). It's sad no exception is thrown for myFrozenObject[x] = y if you are not in strict mode, but myFrozenObject is not modified anyway, so it's still not-so-weird.
Of course the programmer could bypass it by accessing __proto__, e.g:
someNode.children.__proto__.push(new Node());
But I like to think that in this case they actually know what they are doing and have a good reason to do so.
IMPORTANT: notice that this doesn't work so well for objects: using hasOwnProperty in the for .. in will always return false.
UPDATE: using Proxy to solve the same problem for objects
Just for completion: if you have an object instead of an Array you can still solve this problem by using Proxy. Actually, this is a generic solution that should work with any kind of element, but I recommend against it (if you can avoid it) due to performance issues:
get myObject() { return Object.freeze(new Proxy(this._myObject, {})); }
This still returns an object that can't be changed, but keeps all the read-only functionality of it. If you really need, you can drop the Object.freeze and implement the required traps (set, deleteProperty, ...) in the Proxy, but that takes extra effort, and that's why the Object.freeze comes in handy with proxies.
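For completeness, the trap-based variant mentioned above could look like this (a sketch; here the traps throw instead of relying on Object.freeze):
function readOnlyView(target) {
  return new Proxy(target, {
    set()            { throw new TypeError('This object is read-only'); },
    deleteProperty() { throw new TypeError('This object is read-only'); },
    defineProperty() { throw new TypeError('This object is read-only'); }
  });
}

// get myObject() { return readOnlyView(this._myObject); }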
I can think of several places that Object.freeze would come in very handy.
The first real-world use of freeze is when developing an application that requires the 'state' on the server to match what's in the browser. For instance, imagine you need to add a level of permissions to your function calls. While working in an application, a developer could easily change or overwrite the permission settings without even realizing it (especially if the object were being passed around by reference!). But permissions by and large can never change, and erroring when they are changed is preferred. So in this case, the permissions object could be frozen, preventing a developer from mistakenly 'setting' permissions erroneously. The same could be said for user-like data such as a login name or email address. These things can be mistakenly or maliciously broken by bad code.
Another typical case would be game loop code. There are many game-state settings that you would want to freeze to ensure the state of the game is kept in sync with the server.
Think of Object.freeze as a way to make an object a constant. Any time you would want a constant variable, you could have a constant object with freeze, for similar reasons.
There are also times when you want to pass immutable objects through functions and data flow, and only allow updating the original object with setters. This can be done by cloning and freezing the object for 'getters' and only updating the original through 'setters', as sketched below.
Are any of these not valid things? It can also be said that frozen objects could be more performant due to the lack of dynamic variables, but I haven't seen any proof of that yet.
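A rough sketch of that last getter/setter pattern (hypothetical names):
class Store {
  constructor(initialState) {
    this._state = { ...initialState };
  }

  // the getter hands out a frozen copy, so callers cannot mutate internal state
  get state() {
    return Object.freeze({ ...this._state });
  }

  // all real updates go through an explicit setter on the original object
  setState(patch) {
    Object.assign(this._state, patch);
  }
}

const store = new Store({ count: 0 });
store.setState({ count: 1 });   // allowed
// store.state.count = 99;      // throws in strict mode; internal state untouched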
The only practical use for Object.freeze is during development. For production code, there is absolutely no benefit for freezing/sealing objects.
Silly Typos
It could help you catch this very common problem during development:
if (myObject.someProp = 5) {
doSomething();
}
In strict mode, this would throw an error if myObject was frozen.
Enforce Coding Protocol / Restriction
It would also help in enforcing a certain protocol in a team, especially with new members who may not have the same coding style as everyone else.
A lot of Java guys like to add a lot of methods to objects to make JS feel more familiar. Freezing objects would prevent them from doing that.
I could see this being useful when you're working with an interactive tool. Rather than:
if ( ! Object.isFrozen(obj) ) {
obj.x = mouse[0];
obj.y = mouse[1];
}
You could simply do:
obj.x = mouse[0];
obj.y = mouse[1];
Properties will only update if the object isn't frozen.
I don't know if this helps, but I use it to create simple enumerations. It hopefully keeps duff data out of a database, by ensuring that the source of the data cannot be changed without someone purposefully trying to break the code. From a statically typed perspective, it allows for reasoning about code construction.
All the other answers pretty much answer the question.
I just wanted to summarise everything here along with an example.
Use Object.freeze when you need utmost surety regarding an object's state in the future, i.e. when you need to make sure that other developers or users of your code do not change internal/public properties (Alexander Mills's answer).
Object.freeze also has better performance since 19 June 2019, when V8 v7.6 was released (Philippe's answer). Also take a look at the V8 docs.
Here is what Object.freeze does, and it should clear out doubts for people who only have surface level understanding of Object.freeze.
const obj = {
name: "Fanoflix"
};
const mutateObject = (testObj) => {
testObj.name = 'Arthas' // NOT Allowed if parameter is frozen
}
obj.name = "Lich King" // Allowed
obj.age = 29; // Allowed
mutateObject(obj) // Allowed
Object.freeze(obj) // ========== Freezing obj ==========
mutateObject(obj) // passed by reference NOT Allowed
obj.name = "Illidan" // mutation NOT Allowed
obj.age = 25; // addition NOT Allowed
delete obj.name // deletion NOT Allowed