With the flexibility of JavaScript, we can write code full of side effects, or purely functional code.
I have been interested in functional JavaScript and want to start a project in this paradigm, and a linter could surely help me adopt good practices. Is there any linter that enforces a pure functional, side-effect-free style?
Purity Analysis is equivalent to Solving the Halting Problem, so any kind of static analysis that can determine whether code is pure or impure is impossible in the general case. There will always be infinitely many programs for which it is undecidable whether or not they are pure; some of those programs will be pure, some impure.
Now, you deliberately used the term "linter" instead of static analyzer (although of course a linter is just a static analyzer), which seems to imply that you are fine with an approximate heuristic result. You can have a linter that will sometimes tell you that your code is pure, sometimes tell you that your code is impure, and most times tell you that it cannot decide whether your code is pure or impure. And you can have a whitelist of operations that are known to be pure (e.g. adding two Numbers using the + operator), and a blacklist of operations that are known to be impure (e.g. anything that can throw an exception, any sort of loops, if statements, Array.prototype.forEach) and do a heuristic scan for those.
But in the end, the results will be too unreliable to do anything serious with them.
I haven't used this myself but I found this plugin for ESLint: https://github.com/jfmengels/eslint-plugin-fp
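If you want to try it, a minimal configuration might look roughly like the sketch below. The rule names are what I believe the plugin provides, so treat them as assumptions and double-check them against its README before relying on this:
// .eslintrc.js -- sketch only; verify rule names against eslint-plugin-fp's documentation
module.exports = {
  plugins: ['fp'],
  rules: {
    'fp/no-mutation': 'error',          // no assignment to existing bindings or properties
    'fp/no-mutating-methods': 'error',  // no push/splice/sort and friends
    'fp/no-let': 'error',               // const declarations only
    'fp/no-loops': 'error',             // prefer map/filter/reduce over for/while
    'fp/no-this': 'error',              // discourage stateful, class-style code
    'fp/no-throw': 'error'              // treat exceptions as side effects
  }
};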
You cannot use JS completely without side effects. Every DOM access is a side effect, and we could argue whether the whole global namespace also falls under that definition.
The best you can do is stay reasonable. I split this logically into two groups:
the work horses (utilities): their purpose is to take some data and process it somehow. These are (mostly) side-effect free. Mostly, because sometimes these functions need some state, like a counter or a cache, which could be argued to be a side effect, but since it is isolated/enclosed inside these functions I don't really care. These are the functions you pass to Array#map(), to a promise's then(), and to similar places.
and the management: these functions rarely do any data processing on their own; they mostly orchestrate the data flow, from wherever it is created, through whatever processing (utilities) it has to run, up to where it ends, like modifying the DOM or mutating an object.
var theOnesINeed = compose(...);
var theOtherOnesINeed = compose(...);
var intoADifferentFormat = function (value) { return ... };

// an example of a "management" function, used here as an event handler
function updateList(event) {
  var a = someList.filter(theOnesINeed).map(intoADifferentFormat);
  var b = someOtherList.filter(theOtherOnesINeed);
  var rows = a.concat(b).map(wrap('<li>', '</li>'));
  document.querySelector('#youYesYou').innerHTML = rows.join('\n');
}
so that all functions stay as short and simple as possible. And don't be afraid of descriptive names (not like these way too general ones :))
I am currently working with the React JS and React Native frameworks. Along the way I came across immutability and the Immutable.js library while reading about Facebook's Flux and Redux implementations.
The question is, why is immutability so important? What is wrong in mutating objects? Doesn't it make things simple?
Giving an example, let us consider a simple News reader app with the opening screen being a list view of news headlines.
If I set, say, an array of objects with a value initially, I can't manipulate it. That's what the immutability principle says, right? (Correct me if I am wrong.)
But what if I have a new News object and the list has to be updated? In the usual case, I could have just added the object to the array.
How do I achieve this in that case? Delete the store and recreate it?
Isn't adding an object to the array a less expensive operation?
I have recently been researching the same topic. I'll do my best to answer your question(s) and try to share what I have learned so far.
The question is, why is immutability so important? What is wrong in mutating objects? Doesn't it make things simple?
Basically it comes down to the fact that immutability increases predictability, performance (indirectly) and allows for mutation tracking.
Predictability
Mutation hides change, which creates (unexpected) side effects, which can cause nasty bugs. When you enforce immutability you can keep your application architecture and mental model simple, which makes it easier to reason about your application.
Performance
Even though adding values to an immutable Object means that a new instance needs to be created, with existing values copied and new values added to the new Object (which costs memory), immutable Objects can make use of structural sharing to reduce memory overhead.
All updates return new values, but internally structures are shared to drastically reduce memory usage (and GC thrashing). This means that if you append to a vector with 1000 elements, it does not actually create a new vector 1001-elements long. Most likely, internally only a few small objects are allocated.
You can read more about this here.
Mutation Tracking
Besides reduced memory usage, immutability allows you to optimize your application by making use of reference and value equality. This makes it really easy to see if anything has changed, for example a state change in a React component. You can use shouldComponentUpdate to check whether the state is identical by comparing state Objects, and prevent unnecessary rendering.
You can read more about this here.
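As a rough sketch of that check (the component and prop names here are made up for illustration), it could look like this:
import React from 'react';

// Sketch: skip re-rendering when the (immutable) prop reference hasn't changed.
class NewsList extends React.Component {
  shouldComponentUpdate(nextProps) {
    // With immutable data a changed list is always a new object,
    // so a cheap reference check is enough to detect a change.
    return nextProps.newsItems !== this.props.newsItems;
  }

  render() {
    return (
      <ul>
        { this.props.newsItems.map(item => <li key={ item.id }>{ item.title }</li>) }
      </ul>
    );
  }
}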
Additional resources:
The Dao of Immutability
Immutable Data Structures and JavaScript
Immutability in JavaScript
If I set say an array of objects with a value initially. I can't manipulate it. That's what immutability principle says, right? (Correct me if I am wrong). But, what if I have a new News object that has to be updated? In usual case, I could have just added the object to the array. How do I achieve in this case? Delete the store & recreate it? Isn't adding an object to the array a less expensive operation?
Yes, this is correct. If you're confused about how to implement this in your application, I would recommend looking at how Redux does it to get familiar with the core concepts; it helped me a lot.
I like to use Redux as an example because it embraces immutability. It has a single immutable state tree (referred to as the store) where all state changes are explicit, made by dispatching actions which are processed by a reducer that accepts the previous state together with said actions (one at a time) and returns the next state of your application. You can read more about its core principles here.
There is an excellent redux course on egghead.io where Dan Abramov, the author of redux, explains these principles as follows (I modified the code a bit to better fit the scenario):
import React from 'react';
import ReactDOM from 'react-dom';

// Reducer.
const news = (state = [], action) => {
  switch (action.type) {
    case 'ADD_NEWS_ITEM': {
      return [ ...state, action.newsItem ];
    }
    default: {
      return state;
    }
  }
};

// Store.
const createStore = (reducer) => {
  let state;
  let listeners = [];

  const subscribe = (listener) => {
    listeners.push(listener);
    return () => {
      listeners = listeners.filter(cb => cb !== listener);
    };
  };

  const getState = () => state;

  const dispatch = (action) => {
    state = reducer(state, action);
    listeners.forEach(cb => cb());
  };

  dispatch({});

  return { subscribe, getState, dispatch };
};

// Initialize store with reducer.
const store = createStore(news);

// Component.
const News = React.createClass({
  onAddNewsItem() {
    const { newsTitle } = this.refs;
    store.dispatch({
      type: 'ADD_NEWS_ITEM',
      newsItem: { title: newsTitle.value }
    });
  },
  render() {
    const { news } = this.props;
    return (
      <div>
        <input ref="newsTitle" />
        <button onClick={ this.onAddNewsItem }>add</button>
        <ul>
          { news.map( ({ title }) => <li>{ title }</li>) }
        </ul>
      </div>
    );
  }
});

// Handler that will execute when the store dispatches.
const render = () => {
  ReactDOM.render(
    <News news={ store.getState() } />,
    document.getElementById('news')
  );
};

// Entry point.
store.subscribe(render);
render();
Also, these videos demonstrate in further detail how to achieve immutability for:
Arrays
Objects
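For reference, the plain-ES6 versions of those techniques (no library required) boil down to returning new arrays and objects instead of mutating the existing ones, roughly as in this sketch (the news-item shape is just illustrative):
// Arrays: produce a new array instead of calling push/splice on the old one.
const addNewsItem = (newsList, newsItem) => [ ...newsList, newsItem ];

const removeNewsItem = (newsList, index) => [
  ...newsList.slice(0, index),
  ...newsList.slice(index + 1)
];

// Objects: produce a new object instead of assigning to a property.
const markAsRead = (newsItem) => Object.assign({}, newsItem, { read: true });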
A Contrarian View of Immutability
TL/DR: Immutability is more a fashion trend than a necessity in JavaScript. If you are using React, it does provide a neat work-around for some confusing design choices in state management. However, in most other situations it won't add enough value over the complexity it introduces, serving more to pad a resume than to fulfill an actual client need.
Long answer: read below.
Why is immutability so important(or needed) in javascript?
Well, I'm glad you asked!
Some time ago a very talented guy called Dan Abramov wrote a javascript state management library called Redux which uses pure functions and immutability. He also made some really cool videos that made the idea really easy to understand (and sell).
The timing was perfect. The novelty of Angular was fading, and the JavaScript world was ready to fixate on the latest thing that had the right degree of cool, and this library was not only innovative but slotted in perfectly with React, which was being peddled by another Silicon Valley powerhouse.
Sad as it may be, fashions rule in the world of JavaScript. Now Abramov is being hailed as a demigod and all us mere mortals have to subject ourselves to the Dao of Immutability... whether it makes sense or not.
What is wrong in mutating objects?
Nothing!
In fact programmers have been mutating objects for, er... as long as there have been objects to mutate. 50+ years of application development, in other words.
And why complicate things? When you have an object cat and it dies, do you really need a second cat to track the change? Most people would just say cat.isDead = true and be done with it.
Doesn't (mutating objects) make things simple?
YES! .. Of course it does!
Especially in JavaScript, which in practice is most useful for rendering a view of some state that is maintained elsewhere (like in a database).
What if I have a new News object that has to be updated? ... How do I achieve in this case? Delete the store & recreate it? Isn't adding an object to the array a less expensive operation?
Well, you can take the traditional approach and update the News object, so your in-memory representation of that object changes (and the view displayed to the user, or so one would hope)...
Or alternatively...
You can try the sexy FP/Immutability approach and add your changes to the News object to an array tracking every historical change so you can then iterate through the array and figure out what the correct state representation should be (phew!).
I am trying to learn what's right here. Please do enlighten me :)
Fashions come and go buddy. There are many ways to skin a cat.
I am sorry that you have to bear the confusion of a constantly changing set of programming paradigms. But hey, WELCOME TO THE CLUB!!
Now a couple of important points to remember with regards to Immutability, and you'll get these thrown at you with the feverish intensity that only naivety can muster.
1) Immutability is awesome for avoiding race conditions in multi-threaded environments.
Multi-threaded environments (like C++, Java and C#) are guilty of the practice of locking objects when more than one thread wants to change them. This is bad for performance, but better than the alternative of data corruption. And yet not as good as making everything immutable (Lord praise Haskell!).
BUT ALAS! In JavaScript you always operate on a single thread; even web workers each run inside a separate context. So since you can't have a thread-related race condition inside your execution context (all those lovely global variables and closures), the main point in favour of Immutability goes out the window.
(Having said that, there is an advantage to using pure functions in web workers, which is that you'll have no expectations about fiddling with objects on the main thread.)
2) Immutability can (somehow) avoid race conditions in the state of your app.
And here is the real crux of the matter: most (React) developers will tell you that Immutability and FP can somehow work this magic that allows the state of your application to become predictable.
Of course this doesn’t mean that you can avoid race conditions in the database, to pull that one off you’d have to coordinate all users in all browsers, and for that you’d need a back-end push technology like WebSockets (more on this below) that will broadcast changes to everyone running the app.
Nor does it mean that there is some inherent problem in JavaScript where your application state needs immutability in order to become predictable, any developer that has been coding front-end applications before React would tell you this.
This rather confusing claim simply means that if you use React your application is prone to race conditions, but that immutability allows you to take that pain away. Why? Because React is special: it has been designed first and foremost as a highly optimised rendering library with state management subverted to that aim, and thus component state is managed via an asynchronous chain of events (aka "one-way data binding") that optimises rendering, that you have no control over, and that relies on you remembering not to mutate state directly...
Given this context, it's easy to see how the need for immutability has little to do with JavaScript and a lot to do with React: if you have a bunch of inter-dependent changes in your spanking new application and no easy way to figure out what state you are currently in, you are going to get confused, and thus it makes perfect sense to use immutability to track every historical change.
3) Race conditions are categorically bad.
Well, they might be if you are using React. But they are rare if you pick up a different framework.
Besides, you normally have far bigger problems to deal with… Problems like dependency hell. Like a bloated code-base. Like your CSS not getting loaded. Like a slow build process or being stuck to a monolithic back-end that makes iterating almost impossible. Like inexperienced devs not understanding what's going on and making a mess of things.
You know. Reality. But hey, who cares about that?
4) Immutability makes use of Reference Types to reduce the performance impact of tracking every state change.
Because seriously, if you are going to copy stuff every time your state changes, you better make sure you are smart about it.
5) Immutability allows you to UNDO stuff.
Because er.. this is the number one feature your project manager is going to ask for, right?
6) Immutable state has lots of cool potential in combination with WebSockets
Last but not least, the accumulation of state deltas makes a pretty compelling case in combination with WebSockets, which allows for an easy consumption of state as a flow of immutable events...
Once the penny drops on this concept (state being a flow of events -- rather than a crude set of records representing the latest view), the immutable world becomes a magical place to inhabit. A land of event-sourced wonder and possibility that transcends time itself. And when done right this can definitely make real-time apps easier to accomplish: you just broadcast the flow of events to everyone interested so they can build their own representation of the present and write back their own changes into the communal flow.
But at some point you wake up and realise that all that wonder and magic do not come for free. Unlike your eager colleagues, your stakeholders (yea, the people who pay you) care little about philosophy or fashion and a lot about the money they pay to build a product they can sell. And the bottom line is that it's harder to code for immutability and easier to break it, plus there is little point in having an immutable front-end if you don't have a back-end to support it. When (and if!) you finally convince your stakeholders that you should publish and consume events via a push technology like WebSockets, you find out what a pain it is to scale in production.
Now for some advice, should you choose to accept it.
A choice to write JavaScript using FP/Immutability is also a choice to make your application code-base larger, more complex and harder to manage. I would strongly argue for limiting this approach to your Redux reducers, unless you know what you are doing... And IF you are going to go ahead and use immutability regardless, then apply immutable state to your whole application stack, and not just the client-side. After all, there is little point in having an immutable front-end and then connecting it to a database where all records have a single mutable version... you just go back to the same problems you were trying to get away from!
Now, if you are fortunate enough to be able to make choices in your work, then try to use your wisdom (or not) and do what's right by the person who is paying you. You can base this on your experience, on your gut, or on what's going on around you (admittedly, if everyone is using React/Redux then there is a valid argument that it will be easier to find a resource to continue your work)... Alternatively, you can try either the Resume Driven Development or the Hype Driven Development approach. They might be more your sort of thing.
In short, the thing to be said for immutability is that it will make you fashionable with your peers, at least until the next craze comes around, by which point you'll be glad to move on.
Now after this session of self-therapy I'd like to point out that I've added this as an article in my blog => Immutability in JavaScript: A Contrarian View. Feel free to reply in there if you have strong feelings you'd like to get off your chest too ;).
The question is, why is immutability so important? What is wrong in mutating objects? Doesn't it make things simple?
Actually, the opposite is true: mutability makes things more complicated, at least in the long run. Yes, it makes your initial coding easier because you can just change things wherever you want, but as your program grows it becomes a problem – if a value changed, what changed it?
When you make everything immutable, it means data can't be changed by surprise any more. You know for certain that if you pass a value into a function, it can't be changed in that function.
Put simply: if you use immutable values, it makes it very easy to reason about your code: everyone gets a unique* copy of your data, so it can't futz with it and break other parts of your code. Imagine how much easier this makes working in a multi-threaded environment!
Note 1: There is a potential performance cost to immutability depending on what you're doing, but things like Immutable.js optimise as best they can.
Note 2: In the unlikely event you weren't sure, Immutable.js and ES6 const mean very different things.
In usual case, I could have just added the object to the array. How do I achieve in this case? Delete the store & recreate it? Isn't adding an object to the array a less expensive operation? PS: If the example is not the right way to explain immutability, please do let me know what's the right practical example.
Yes, your news example is perfectly good, and your reasoning is exactly right: you can't just amend your existing list, so you need to create a new one:
var originalItems = Immutable.List.of(1, 2, 3);
var newItems = originalItems.push(4, 5, 6);
Although the other answers are fine, to address your question about a practical use case (from the comments on the other answers) let's step outside your running code for a minute and look at the ubiquitous answer right under your nose: git. What would happen if every time you pushed a commit you overwrote the data in the repository?
Now we're into one of the problems that immutable collections face: memory bloat. Git is smart enough not to simply make new copies of files every time you make a change; it simply keeps track of the diffs.
While I don't know much about the inner workings of git, I can only assume it uses a similar strategy to that of libraries you reference: structural sharing. Under the hood the libraries use tries or other trees to only track the nodes that are different.
This strategy is also reasonably performant for in-memory data structures as there are well-known tree-operation algorithms that operate in logarithmic time.
Another use case: say you want an undo button on your webapp. With immutable representations of your data, implementing such is relatively trivial. But if you rely on mutation, that means you have to worry about caching the state of the world and making atomic updates.
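A minimal sketch of that undo idea, assuming every change produces a brand-new state object (so keeping references to old snapshots is safe; the items field is just illustrative):
// Keep a history of immutable state snapshots; undo is just stepping back.
const past = [];
let present = { items: [] };                 // the current immutable state

function commit(nextState) {
  past.push(present);                        // safe: old snapshots can never change
  present = nextState;
}

function undo() {
  if (past.length > 0) {
    present = past.pop();
  }
  return present;
}

// Every "change" builds a new state object instead of mutating the old one.
commit({ items: present.items.concat(['first change']) });
commit({ items: present.items.concat(['second change']) });
undo();                                      // present is again the state holding only 'first change'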
In short, there's a price to pay for immutability in runtime performance and the learning curve. But any experienced programmer will tell you that debugging time outweighs code-writing time by an order of magnitude. And the slight hit on runtime performance is likely outweighed by the state-related bugs your users don't have to endure.
The question is, why is immutability so important? What is wrong in mutating objects? Doesn't it make things simple?
About mutability
Nothing is wrong with mutability from a technical point of view. It is fast, and it reuses memory. Developers are used to it from the beginning (as far as I remember). The problem lies in the use of mutability and the trouble that this use can bring.
If an object is not shared with anything, for example it exists only in the scope of a function and is not exposed to the outside, then it is hard to see benefits in immutability. Really, in this case there is no sense in being immutable. The sense of immutability starts when something is shared.
Mutability headache
A mutable shared structure can easily create many pitfalls. Any change in any part of the code with access to the reference has an impact on other parts that can see this reference. Such impact connects all parts together, even when they should not be aware of the other modules. A mutation in one function can crash a totally different part of the app. Such a thing is a bad side effect.
Another common problem with mutation is corrupted state. Corrupted state can happen when the mutation procedure fails in the middle, leaving some fields modified and some not.
What's more, with mutation it is hard to track the change. A simple reference check will not show the difference; to know what changed, some deep check needs to be done. Also, to monitor the change, some observable pattern needs to be introduced.
Finally, mutation creates a trust deficit. How can you be sure that some structure has the value you want, if it can be mutated?
const car = { brand: 'Ferrari' };
doSomething(car);
console.log(car); // { brand: 'Fiat' }
As the example above shows, passing a mutable structure around can always end with having a different structure. The function doSomething mutates the attribute given from outside. There is no trust in the code; you don't really know what you have and what you will have. All these problems occur because mutable structures represent pointers to memory.
Immutability is about values
Immutability means that change is not done on the same object or structure; instead, the change is represented in a new one. This is because the reference represents a value, not just a memory pointer. Every change creates a new value and doesn't touch the old one. Such clear rules give back trust and code predictability. Functions are safe to use because instead of mutating, they deal with their own versions and their own values.
Using values instead of memory containers gives certainty that every object represents a specific, unchangeable value and that it is safe to use.
Immutable structures represent values.
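One way to make that trust concrete in plain JavaScript is Object.freeze (note it is shallow, and violations only throw in strict mode); here is a sketch revisiting the car example above:
'use strict';

const car = Object.freeze({ brand: 'Ferrari' });

function doSomething(vehicle) {
  vehicle.brand = 'Fiat';          // TypeError in strict mode: the object is frozen
}
// doSomething(car);               // would throw instead of silently corrupting the value

// The non-mutating alternative: represent the change as a new value.
function rebrand(vehicle, brand) {
  return Object.freeze(Object.assign({}, vehicle, { brand: brand }));
}

const rebranded = rebrand(car, 'Fiat');
console.log(car);        // { brand: 'Ferrari' } -- the original value is untouched
console.log(rebranded);  // { brand: 'Fiat' }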
I dive even deeper into the subject in a Medium article: https://medium.com/@macsikora/the-state-of-immutability-169d2cd11310
Why is immutability so important(or needed) in JavaScript?
Immutability can be tracked in different contexts, but most important would be to track it against the application state and against the application UI.
I will use the JavaScript Redux pattern as an example, both because it is a very trendy and modern approach and because you mentioned it.
For the UI we need to make it predictable.
It will be predictable if UI = f(application state).
Applications (in JavaScript) change the state via actions, which are applied by the reducer function.
The reducer function simply takes the action and the old state and returns the new state, keeping the old state intact.
new state = r(current state, action)
The benefit is that you can time-travel through the states, since all the state objects are saved, and you can render the app in any state because UI = f(state).
So you can undo/redo easily.
It also happens that creating all these states can still be memory efficient; the analogy with Git is a good one, and there is a similar analogy in the Linux OS with symbolic links (based on inodes).
Another benefit of Immutability in Javascript is that it reduces Temporal Coupling, which has substantial benefits for design generally. Consider the interface of an object with two methods:
class Foo {
  baz() {
    // ....
  }

  bar() {
    // ....
  }
}

const f = new Foo();
It may be the case that a call to baz() is required to get the object in a valid state for a call to bar() to work correctly. But how do you know this?
f.baz();
f.bar(); // this is ok
f.bar();
f.baz(); // this blows up
To figure it out you need to scrutinise the class internals because it is not immediately apparent from examining the public interface. This problem can explode in a large codebase with lots of mutable state and classes.
If Foo is immutable then this is no longer a problem. It is safe to assume we can call baz or bar in any order because the inner state of the class cannot change.
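As a sketch of one way to remove that coupling with immutability (the config/extra fields are made up for illustration): baz returns a new, fully-initialized instance instead of mutating this, so there is no hidden "call baz before bar" ordering rule.
class ImmutableFoo {
  constructor(config) {
    this.config = config;       // set once in the constructor...
    Object.freeze(this);        // ...and never reassigned afterwards
  }

  baz(extra) {
    // Returns a *new* instance rather than changing the current one.
    return new ImmutableFoo(Object.assign({}, this.config, { extra: extra }));
  }

  bar() {
    return this.config;         // safe to call at any time; the state cannot have drifted
  }
}

const f2 = new ImmutableFoo({ ready: true });
f2.bar();                       // fine
f2.baz('x').bar();              // also fine -- call order no longer matters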
Once upon a time, there was a problem with data synchronization between threads. This problem was a great pain; there were 10+ solutions. Some people tried to solve it radically, and that is where functional programming was born. It is just like Marxism. I couldn't understand how Dan Abramov sold this idea to the JS world, because it is single-threaded. He is a genius.
I can give a small example. There is an attribute __attribute__((pure)) in gcc. The compiler tries to work out whether your function is pure or not if you don't declare it explicitly. Your function can be pure even if your state is mutable. Immutability is just one of 100+ ways to guarantee that your function will be pure. Actually, 95% of your functions will be pure anyway.
You shouldn't adopt limitations (like immutability) if you don't actually have a serious reason. If you want to "undo" some state, you can create transactions. If you want to simplify communication, you can send events with immutable data. It is up to you.
I am writing this message from a post-Marxism republic. I am sure that the radicalization of any idea is the wrong way.
A Different Take...
My other answer addresses the question from a very practical standpoint, and I still like it. I've decided to add this as another answer rather than an addendum to that one because it is a boring philosophical rant which hopefully also answers the question, but doesn't really fit with my existing answer.
TL;DR
Even in small projects immutability can be useful, but don't assume that because it exists it's meant for you.
Much, much longer answer
NOTE: for the purpose of this answer I'm using the word 'discipline' to mean self-denial for some benefit.
This is similar in form to another question: "Should I use Typescript? Why are types so important in JavaScript?". It has a similar answer too. Consider the following scenario:
You are the sole author and maintainer of a JavaScript/CSS/HTML codebase of some 5000 lines. Your semi-technical boss reads something about Typescript-as-the-new-hotness and suggests that we may want to move to it but leaves the decision to you. So you read about it, play with it, etc.
So now you have a choice to make, do you move to Typescript?
Typescript has some compelling advantages: intellisense, catching errors early, specifying your APIs upfront, ease of fixing things when refactoring breaks them, fewer tests. Typescript also has some costs: certain very natural and correct JavaScript idioms can be tricky to model in its not-especially-powerful type system, annotations grow the LoC, rewriting the existing codebase takes time and effort, there is an extra step in the build pipeline, etc. More fundamentally, it carves out a subset of possible correct JavaScript programs in exchange for the promise that your code is more likely to be correct. It's arbitrarily restrictive. That's the whole point: you impose some discipline that limits you (hopefully from shooting yourself in the foot).
Back to the question, rephrased in the context of the above paragraph: is it worth it?
In the scenario described, I would contend that if you are very familiar with a small-to-middling JS codebase, that the choice to use Typescript is more aesthetic than practical. And that's fine, there's nothing wrong with aesthetics, they just aren't necessarily compelling.
Scenario B:
You change jobs and are now a line-of-business programmer at Foo Corp. You're working with a team of 10 on a 90000 LoC (and counting) JavaScript/HTML/CSS codebase with a fairly complicated build pipeline involving babel, webpack, a suite of polyfills, react with various plugins, a state management system, ~20 third-party libraries, ~10 internal libraries, editor plugins like a linter with rules for in-house style guide, etc. etc.
Back when you were 5k LoC guy/girl, it just didn't matter that much. Even documentation wasn't that big a deal, even coming back to a particular portion of the code after 6 months you could figure it out easily enough. But now discipline isn't just nice but necessary. That discipline may not involve Typescript, but will likely involve some form of static analysis as well as all the other forms of coding discipline (documentation, style guide, build scripts, regression testing, CI). Discipline is no longer a luxury, it is a necessity.
All of this applied to GOTO in 1978: your dinky little blackjack game in C could use GOTOs and spaghetti logic and it just wasn't that big a deal to choose-your-own-adventure your way through it, but as programs got bigger and more ambitious, well, undisciplined use of GOTO could not be sustained. And all of this applies to immutability today.
Just like static types, if you are not working on a large codebase with a team of engineers maintaining/extending it, the choice to use immutability is more aesthetic than practical: its benefits are still there but may not outweigh the costs yet.
But as with all useful disciplines, there comes a point at which it is no longer optional. If I want to maintain a healthy weight, then discipline involving ice cream may be optional. But if I want to be a competitive athlete, my choice of whether or not to eat ice cream is subsumed by my choice of goals. If you want to change the world with software, immutability might be part of what you need to avoid it collapsing under its own weight.
Take for example:
const userMessage = {
  user: "userId",
  topic: "topicId",
  content: {}
}

validateMessage(userMessage)
saveMessage(userMessage)
sendMessageViaEmail(userMessage)
sendMessageViaMobilePush(userMessage)   // <-- the line the first question asks about

console.log(userMessage) // => ?
and now answer some questions:
what is in userMessage at the line sendMessageViaMobilePush(userMessage) in the mutable code?
{
  id: "xxx-xxx-xxx-xxx",  //set by ..(Answer for question 3)
  user: "John Tribe",     //set by sendMessageViaEmail
  topic: "Email title",   //set by sendMessageViaEmail
  status: FINAL,          //set by saveMessage or could be set by sendMessageViaEmail
  from: "..",             //set by sendMessageViaEmail
  to: "...",              //set by sendMessageViaEmail
  valid: true,            //set by validateMessage
  state: SENT             //set by sendMessageViaEmail
}
Surprised?? Me too :D. But this is normal with mutability in JavaScript.
(In Java too, but in a slightly different way: when you expect null but get some object.)
What is in userMessage at the same line in the immutable code?
const userMessage = {
  user: "userId",
  topic: "topicId",
  content: {}
}
Easy, right?
Can you guess which method updates "id" in the mutable code in the first snippet?
By sendMessageViaEmail.
Why?
Why not?
Well it was at first updated by saveMessage,
but then overridden by sendMessageViaEmail.
In the mutable code people didn't receive push messages (sendMessageViaMobilePush). Can you guess why?
Because I am an amazing developer :D and I put a safety check in the method sendMessageViaMobilePush(userMessage):
function sendMessageViaMobilePush(userMessage) {
  if (userMessage.state != SENT) { //was set to SENT by sendMessageViaEmail
    send(userMessage)
  }
}
Even if you had seen this method before, would it have been possible for you to predict this behavior in the mutable code?
For me it wasn't.
I hope this helped you understand the major issue with using mutable objects in JavaScript.
Note that when complexity rises it becomes too difficult to check what was set and where, especially when you work with other people.
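For contrast, here is a sketch of how that flow might look if every step returned a new object instead of mutating its argument (the helpers and field values shown are simplified, illustrative stand-ins for the functions above):
// Each step returns a new object; the original userMessage is never touched.
function validateMessage(message) {
  return Object.assign({}, message, { valid: true });
}

function saveMessage(message) {
  return Object.assign({}, message, { id: 'xxx-xxx-xxx-xxx', status: 'SAVED' });
}

const validated = validateMessage(userMessage);
const saved = saveMessage(validated);
const emailed = sendMessageViaEmail(saved);   // returns a new object with from/to/state filled in
sendMessageViaMobilePush(emailed);            // receives a value it can rely on -- no hidden mutations

console.log(userMessage);                     // still { user: "userId", topic: "topicId", content: {} }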
I've created a framework-agnostic, open-source (MIT) lib for mutable (or immutable) state which can replace all those immutable-storage-style libs (redux, vuex etc...).
Immutable state was ugly for me because there was too much work to do (a lot of actions for simple read/write operations), the code was less readable, and the performance for big datasets was not acceptable (whole component re-renders :/ ).
With deep-state-observer I can update just one node using dot notation and use wildcards. I can also create a history of the state (undo/redo/time travel), keeping just those concrete values that have changed {path: value} = less memory usage.
With deep-state-observer I can fine-tune things and I have fine-grained control over component behavior, so performance can be drastically improved. The code is more readable and refactoring is a lot easier - just search and replace path strings (no need to change code/logic).
The main advantage of immutability is its simplicity.
Replacing an object is simpler than modifying an existing one.
It allows you to focus on correctness in one place, rather than every possible place where your object might change.
If your object is in an invalid state, it's easier to fix, because the fault must have occurred when you created it (since it's immutable).
I think the main argument for immutable objects is keeping the state of the object valid.
Suppose we have an object called arr. This object is valid when all the items are the same letter.
// this function will change the letter in all the array
function fillWithZ(arr) {
  for (var i = 0; i < arr.length; ++i) {
    if (i === 4) // rare condition
      return arr; // some error here
    arr[i] = "Z";
  }
  return arr;
}
console.log(fillWithZ(["A","A","A"])) // ok, valid state
console.log(fillWithZ(["A","A","A","A","A","A"])) // bad, invalid state
If arr becomes an immutable object, then we can be sure arr is always in a valid state.
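For comparison, here is a sketch of an immutable variant: it builds a new array and only hands it back once the whole operation has succeeded, so the caller never observes a half-updated state:
// Immutable variant: the input array is never modified, so it stays valid.
function fillWithZImmutable(arr) {
  var result = [];
  for (var i = 0; i < arr.length; ++i) {
    if (i === 4)        // same rare error condition as above
      return arr;       // the caller keeps the original, still-valid array
    result.push("Z");
  }
  return result;        // only a fully consistent new array is ever returned
}

console.log(fillWithZImmutable(["A","A","A"]));             // ["Z","Z","Z"] -- valid
console.log(fillWithZImmutable(["A","A","A","A","A","A"])); // original array -- still valid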
So I started learning React a week ago and I inevitably got to the problem of state and how components are supposed to communicate with the rest of the app. I searched around and Redux seems to be the flavor of the month. I read through all the documentation and I think it's actually a pretty revolutionary idea. Here are my thoughts on it:
State is generally agreed to be pretty evil and a large source of bugs in programming. Instead of scattering it all throughout your app Redux says why not just have it all concentrated in a global state tree that you have to emit actions to change? Sounds interesting. All programs need state so let's stick it in one impure space and only modify it from within there so bugs are easy to track down. Then we can also declaratively bind individual state pieces to React components and have them auto-redraw and everything is beautiful.
However, I have two questions about this whole design. For one, why does the state tree need to be immutable? Say I don't care about time travel debugging, hot reload, and have already implemented undo/redo in my app. It just seems so cumbersome to have to do this:
case COMPLETE_TODO:
  return [
    ...state.slice(0, action.index),
    Object.assign({}, state[action.index], {
      completed: true
    }),
    ...state.slice(action.index + 1)
  ];
Instead of this:
case COMPLETE_TODO:
  state[action.index].completed = true;
Not to mention I am making an online whiteboard just to learn and every state change might be as simple as adding a brush stroke to the command list. After a while (hundreds of brush strokes) duplicating this entire array might start becoming extremely expensive and time-consuming.
I'm ok with a global state tree that is independent of the UI and that is mutated via actions, but does it really need to be immutable? What's wrong with a simple implementation like this (very rough draft, written in 1 minute)?
var store = { items: [] };

export function getState() {
  return store;
}

export function addTodo(text) {
  store.items.push({ "text": text, "completed": false });
}

export function completeTodo(index) {
  store.items[index].completed = true;
}
It's still a global state tree mutated via actions emitted but extremely simple and efficient.
Isn't Redux just glorified global state?
Of course it is. But the same holds for every database you have ever used. It is better to treat Redux as an in-memory database - which your components can reactively depend upon.
Immutability makes checking whether any sub-tree has been altered very efficient, because it simplifies down to an identity check.
Yes, your implementation is efficient, but the entire virtual DOM will have to be re-rendered each time the tree is manipulated somehow.
If you are using React, it will eventually do a diff against the actual DOM and perform minimal batch-optimized manipulations, but the full top-down re-rendering is still inefficient.
For an immutable tree, stateless components just have to check whether the subtree(s) they depend on are identical to the previous value(s), and if so, the rendering can be avoided entirely.
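To make that concrete, here is a rough sketch (reusing your store shape from above) of why the cheap identity check only works when you never mutate in place:
// With in-place mutation the reference never changes, so change detection fails:
var state = { items: [] };
var before = state.items;
state.items.push({ text: 'learn redux', completed: false });
console.log(before === state.items);     // true -- looks "unchanged", a re-render would be skipped wrongly

// With immutable updates a change always produces a new reference:
var nextItems = state.items.concat([{ text: 'profit', completed: false }]);
console.log(nextItems === state.items);  // false -- the cheap check correctly detects the change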
Yes it is!!!
Since there is no governance of who is allowed to write a specific property/variable/entry to the store, and since you can practically dispatch any action from anywhere, the code tends to become harder to maintain and even spaghetti-like as your code base grows and/or is managed by more than one person.
I had the same questions and issues with Redux when I started using it, so I created a library that fixes these issues.
It is called Yassi:
Yassi solves the problems you mentioned by defining a globally readable and privately writable store. It means that anyone can read a property from the store (as in Redux, but simpler).
However, only the owner of the property, meaning the object that declares the property, can write/update that property in the store.
In addition, Yassi has other perks, such as zero boilerplate to declare an entry in the store by using annotations (use @yassit('someName')).
Updating the value of that entry does not require actions/reducers or other cumbersome code snippets; instead, you just update the variable as you would on a regular object.
We have been debating how best to handle objects in our JS app, studying Stoyan Stefanov's book, reading endless SO posts on 'new', 'this', 'prototype', closures etc. (The fact that there are so many, and they have so many competing theories, suggests there is no completely obvious answer).
So let's assume that we don't care about private data. We are content to trust users and developers not to mess around in objects outside the ways we define.
Given this, what (other than it seeming to defy decades of OO style and history) would be wrong with this technique?
// namespace to isolate all PERSON's logic
var PERSON = {};

// return an object which should only ever contain data.
// The Catch: it's 100% public
PERSON.constructor = function (name) {
  return {
    name: name
  };
};

// methods that operate on a Person
// the thing we're operating on gets passed in
PERSON.sayHello = function (person) {
  alert(person.name);
};

var p = PERSON.constructor("Fred");
var q = PERSON.constructor("Me");

// normally this would be coded as 'p.sayHello()'
PERSON.sayHello(p);
PERSON.sayHello(q);
Obviously:
There would be nothing to stop someone from mutating 'p' in unholy ways, or simply the logic of PERSON ending up spread all over the place. (That is true with the canonical 'new' technique as well.)
It would be a minor hassle to pass 'p' in to every function that you wanted to use it with.
This is a weird approach.
But are those good enough reasons to dismiss it? On the positive side:
It is efficient, as (arguably) opposed to closures with repetitive function declaration.
It seems very simple and understandable, as opposed to fiddling with 'this' everywhere.
The key point is forgoing privacy. I know I will get slammed for this, but I'm looking for any feedback. Cheers.
There's nothing inherently wrong with it. But it does forgo many advantages inherent in using Javascript's prototype system.
Your object does not know anything about itself other than that it is an object literal. So instanceof will not help you to identify its origin. You'll be stuck using only duck typing.
Your methods are essentially namespaced static functions, where you have to repeat yourself by passing in the object as the first argument. By having a prototyped object, you can take advantage of dynamic dispatch, so that p.sayHello() can do different things for PERSON or ANIMAL depending on the type Javascript knows about. This is a form of polymorphism. Your approach requires you to name (and possibly make a mistake about) the type each time you call a method.
You don't actually need a constructor function, since functions are already objects. Your PERSON variable may as well be the constructor function.
What you've done here is create a module pattern (like a namespace).
Here is another pattern that keeps what you have but supplies the above advantages:
function Person(name)
{
  var p = Object.create(Person.prototype);
  p.name = name; // or other means of initialization, use of overloaded arguments, etc.
  return p;
}
Person.prototype.sayHello = function () { alert (this.name); }
var p = Person("Fred"); // you can omit "new"
var q = Person("Me");
p.sayHello();
q.sayHello();
console.log(p instanceof Person); // true
var people = ["Bob", "Will", "Mary", "Alandra"].map(Person);
// people contains array of Person objects
Yeah, I'm not really understanding why you're trying to dodge the constructor approach or why they even felt a need to layer syntactical sugar over function constructors (Object.create and soon classes) when constructors by themselves are an elegant, flexible, and perfectly reasonable approach to OOP no matter how many lame reasons are given by people like Crockford for not liking them (because people forget to use the new keyword - seriously?). JS is heavily function-driven and its OOP mechanics are no different. It's better to embrace this than hide from it, IMO.
First of all, your points listed under "Obviously"
Hardly even worth mentioning in JavaScript. A high degree of mutability is by design. We're not afraid of ourselves or other developers in JavaScript. The private vs. public paradigm isn't useful because it protects us from stupidity, but rather because it makes it easier to understand the intention behind the other dev's code.
The effort in invoking isn't the problem. The hassle comes later when it's unclear why you've done what you've done there. I don't really see what you're trying to achieve that the core language approaches don't do better for you.
This is JavaScript. It's been weird to all but JS devs for years now. Don't sweat that if you find a better way to do something that works better at solving a problem in a given domain than a more typical solution might. Just make sure you understand the point of the more typical approach before trying to replace it as so many have when coming to JS from other language paradigms. It's easy to do trivial stuff with JS but once you're at the point where you want to get more OOP-driven learn everything you can about how the core language stuff works so you can apply a bit more skepticism to popular opinions out there spread by people who make a side-living making JavaScript out to be scarier and more riddled with deadly booby traps than it really is.
Now your points under "positive side,"
First of all, repetitive function definition was really only something to worry about in heavy looping scenarios. If you were regularly producing objects in large enough quantity, fast enough, for the non-prototyped public method definitions to be a perf problem, you'd probably be running into memory usage issues with non-trivial objects in short order regardless. I speak in the past tense, however, because it's no longer really a relevant issue either way. In modern browsers, functions defined inside other functions are actually typically performance enhancing due to the way modern JIT compilers work. Regardless of what browsers you support, a few funcs defined per object is a non-issue unless you're expecting tens of thousands of objects.
On the question of simple and understandable, it's not to me, because I don't see what win you've garnered here. Now instead of having one object to use, I have to use both the object and its pseudo-constructor together, which, if I weren't looking at the definition, would imply to me a function that you use with the 'new' keyword to build objects. If I were new to your codebase I'd be wasting a lot of time trying to figure out why you did it this way to avoid breaking some other concern I didn't understand.
My questions would be:
Why not just add all the methods to the object literal in the constructor in the first place? There's no performance issue there, and there never really has been, so the only other possible win is that you want to be able to add new methods to person after you've created new objects with it; but that's what we use prototype for on proper constructors (prototype methods, by the way, are great for memory in older browsers because they are only defined once).
And if you have to keep passing the object in for the methods to know what the properties are, why do you even want objects? Why not just functions that expect simple data structure-type objects with certain properties? It's not really OOP anymore.
But my main point of criticism
You're missing the main point of OOP which is something JavaScript does a better job of not hiding from people than most languages. Consider the following:
function Person(name){
  //var name = name; //<-- this might be more clear but it would be redundant
  this.identifySelf = function(){ alert(name); };
}

var bob = new Person("Bob");
bob.identifySelf();
Now, change the name bob identifies with, without overwriting the object or the method, which are both things you'd only do if it were clear you didn't want to work with the object as originally designed and constructed. You of course can't. That makes it crystal clear to anybody who sees this definition that the name is effectively a constant in this case. In a more complex constructor it would establish that the only thing allowed to alter or modify name is the instance itself, unless the user added a non-validating setter method, which would be silly because that would basically (looking at you, Java Enterprise Beans) MURDER THE CENTRAL PURPOSE OF OOP.
Clear Division of Responsibility is the Key
Forget the key words they put in every book for a second and think about what the whole point is. Before OOP, everything was just a pile of functions and data structures all those functions acted on. With OOP you mostly have a set of methods bundled with a set of data that only the object itself actually ever changes.
So let's say something's gone wrong with output:
In our strictly procedural pile of functions there's no real limit to the number of hands that could have messed up that data. We might have good error-handling but one function could branch in such a way that the original culprit is hard to track down.
In a proper OOP design where data is typically behind an object gatekeeper I know that only one object can actually make the changes responsible.
Objects exposing all of their data most of the time is really only marginally better than the old procedural approach. All that really does is give you a name to categorize loosely related methods with.
Much Ado About 'this'
I've never understood the undue attention assigned to the 'this' keyword being messy and confusing. It's really not that big of a deal. 'this' identifies the instance you're working with. That's it. If the method isn't called as a property it's not going to know what instance to look for so it defaults to the global object. That was dumb (undefined would have been better), but it not working properly in that scenario should be expected in a language where functions are also portable like data and can be attached to other objects very easily. Use 'this' in a function when:
It's defined and called as a property of an instance.
It's passed as an event handler (which will call it as a member of the thing being listened to).
You're using call or apply methods to call it as a property of some other object temporarily without assigning it as such.
But remember, it's the calling that really matters. Assigning a public method to some var and calling from that var will do the global thing or throw an error in strict mode. Without being referenced as object properties, functions only really care about the scope they were defined in (their closures) and what args you pass them.
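A small sketch of those rules in action (the counter object is just illustrative):
'use strict';

var counter = {
  count: 0,
  increment: function () {
    this.count += 1;                 // 'this' is whatever the function was called on
    return this.count;
  }
};

counter.increment();                 // 1 -- called as a property, so 'this' is counter

var detached = counter.increment;
// detached();                       // TypeError in strict mode: 'this' is undefined here
detached.call(counter);              // 2 -- call/apply supply 'this' explicitly

var other = { count: 100 };
counter.increment.apply(other);      // 101 -- same function, different instance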