halting after errors happen in catch - javascript

I recently read that it is good practice to throw an error when one appears, in order to stop dealing with the errant results. I am not sure whether this applies to the server of a web app built with Node.js and Express. Should I handle all my .catch blocks with a throw? Would throwing an error cause my app to crash?

The catch block is where you handle the error. In other words, this is where you could potentially trigger an alternate result from the function. Example:
try {
  const result = await fetchSomeData();
  if (result !== null) {
    return result;
  } else {
    throw new Error("nothing was fetched");
  }
} catch (error) {
  console.log(error);
  // return a fallback value instead of letting the error propagate
  const result = { error: "a custom error message", function: customCallBack };
  return result;
}

Catching your errors properly is a good habit in every language (that supports it, of course). This applies to all the code you write, whether it is server code, client code, or anything else.
The whole point of catching an error is to keep your app from crashing unnoticed and to handle the cause appropriately: for example, inform the user (or yourself) that there was a problem, stop processing a loop, stop the whole application, or whatever else makes sense for your case.
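For the Express case in the question, a minimal sketch of that idea might look like the following (the route path, the loadUser helper, and the response shapes are assumptions for illustration): by catching inside the route handler you can answer the request with an error status instead of letting the whole process crash.
app.get("/users/:id", async (req, res) => {
  try {
    const user = await loadUser(req.params.id); // assumed data-access helper
    res.json(user);
  } catch (error) {
    // this request fails, but the server keeps running and can serve other requests
    console.error(error);
    res.status(500).json({ error: "a custom error message" });
  }
});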

Related

How to fix "'throw' of exception caught locally"?

In this function that handles a REST API call, any of the functions called to handle parts of the request might throw an error to signal that an error code should be sent as the response. However, the function itself might also discover an error, at which point it should jump into the exception handling block.
static async handleRequest(req) {
  try {
    let isAllowed = await checkIfIsAllowed(req);
    if (!isAllowed) {
      throw new ForbiddenException("You're not allowed to do that.");
    }
    let result = await doSomething(req); // can also raise exceptions
    sendResult(result);
  } catch (err) {
    sendErrorCode(err);
  }
}
WebStorm will underline the throw with the following message: 'throw' of exception caught locally. This inspection reports any instances of JavaScript throw statements whose exceptions are always caught by containing try statements. Using throw statements as a "goto" to change the local flow of control is likely to be confusing.
However, I'm not sure how to refactor the code to improve the situation.
I could copy-paste the code from the catch block into the if check, but I believe this would make my code less readable and harder to maintain.
I could write a new function that does the isAllowed check and throws an exception if it doesn't succeed, but that seems to be sidestepping the issue rather than fixing the design problem that WebStorm is supposedly reporting.
Are we using exceptions in a bad way, and that's why we're encountering this problem, or is the WebStorm warning simply misguided and should it be disabled?
Contrary to James Thorpe's opinion, I slightly prefer the pattern of throwing. I don't see any compelling reason to treat local errors in the try block any differently from errors that bubble up from deeper in the call stack... just throw them. In my opinion, this is a better application of consistency.
Because this pattern is more consistent, it naturally lends itself better to refactoring when you want to extract logic in the try block to another function that is perhaps in another module/file.
// main.js
try {
  if (!data) throw Error('missing data')
} catch (error) {
  handleError(error)
}

// Refactor...

// validate.js
function checkData(data) {
  if (!data) throw Error('missing data')
}

// main.js
try {
  checkData(data)
} catch (error) {
  handleError(error)
}
If instead of throwing in the try block you handle the error, then the logic has to change if you refactor it outside of the try block.
In addition, handling the error has the drawback of making you remember to return early so that the try block doesn't continue to execute logic after the error is encountered. This can be quite easy to forget.
try {
  if (!data) {
    handleError(new Error('missing data'))
    return // if you forget this, you might execute code you didn't mean to. this isn't a problem with throw.
  }
  // more logic down here
} catch (error) {
  handleError(error)
}
If you're concerned about which method is more performant, you shouldn't be. Handling the error is technically more performant, but the difference between the two is absolutely trivial.
Consider the possibility that WebStorm is a bit too opinionated here. ESLint doesn't even have a rule for this. Either pattern is completely valid.
You're checking for something and throwing an exception if isAllowed fails, but you know what to do in that situation - call sendErrorCode. You should throw exceptions to external callers when you don't know how to handle the situation - i.e. in exceptional circumstances.
In this case you already have a defined process of what to do if this happens - just use it directly without the indirect throw/catch:
static async handleRequest(req) {
  try {
    let isAllowed = await checkIfIsAllowed(req);
    if (!isAllowed) {
      sendErrorCode("You're not allowed to do that.");
      return;
    }
    let result = await doSomething(req); // can also raise exceptions
    sendResult(result);
  } catch (err) {
    sendErrorCode(err);
  }
}
I could copy-paste the code from the catch block into the if check, but I believe this would make my code less readable and harder to maintain.
On the contrary, as above, I would expect this to be the way to handle this situation.
Since this is not a blocking error, but only an IDE recommendation, the question should be viewed from two sides.
The first side is performance. If this code is a potential bottleneck, a bit of repetition is not always a bad solution; the IDE hint suggests that using throw/catch for local control flow can lead to poor optimization in some cases, in current or future (not yet released) versions of Node.js.
The second side is code design. If the throw makes the code more readable and simplifies the work for other developers, keep it. From this point of view, solutions have already been proposed above.
Return a rejected promise instead of throwing an error in the try block:
try {
  const isAllowed = await checkIfIsAllowed(request);
  if (!isAllowed) {
    return Promise.reject(Error("You're not allowed to do that."));
  }
  const result = await doSomething(request);
  sendResult(result);
} catch (error) {
  throw error;
}
There are good answers to the question "Why not use exceptions as normal flow control?" here.
The reason not to throw an exception that you will catch locally is that you locally know how to handle that situation, so it is, by definition, not exceptional.
@James Thorpe's answer looks good to me, but @matchish feels it violates DRY. I say that in general, it does not. DRY, which stands for Don't Repeat Yourself, is defined by the people who coined the phrase as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system". As applied to writing software code, it is about not repeating complex code.
Practically any code that is said to violate DRY is said to be "fixed" by extracting the repeated code into a function and then calling that function from the places it was previously repeated. Having multiple parts of your code call sendErrorCode is the solution to fixing a DRY problem. All of the knowledge of what to do with the error is in one definitive place, namely the sendErrorCode function.
I would modify @James Thorpe's answer slightly, but it is more of a quibble than a real criticism, which is that sendErrorCode should receive either exception objects or strings, but not both:
static async handleRequest(req) {
  try {
    let isAllowed = await checkIfIsAllowed(req);
    if (!isAllowed) {
      sendErrorCode(new ForbiddenException("You're not allowed to do that."));
      return;
    }
    let result = await doSomething(req); // can also raise exceptions
    sendResult(result);
  } catch (err) {
    sendErrorCode(err);
  }
}
The larger question is what is the likelihood of the error and is it appropriate to treat !isAllowed as an exception. Exceptions are meant to handle unusual or unpredictable situations. I would expect !isAllowed to be a normal occurrence that should be handled with logic specific to that situation, unlike, say, a sudden inability to query the database that has the answer to the isAllowed question.
@matchish's proposed solution changes the contract of doSomethingOnAllowedRequest from something that will never throw an exception to something that will routinely throw an exception, placing the burden of exception handling on all of its callers. This is likely to cause a violation of DRY by causing multiple callers to have repetitions of the same error handling code, so in the abstract I do not like it. In practice, it would depend on the overall situation, such as how many callers there are and whether they do, in fact, share the same response to errors.
James Thorpe's answer has one disadvantage, in my opinion: it's not DRY. In both cases where you call sendErrorCode you are handling exceptions. Imagine we have many lines of code with logic like this, where an exception can be thrown. I think it can be done better.
This is my solution
async function doSomethingOnAllowedRequest(req) {
  let isAllowed = await checkIfIsAllowed(req);
  if (!isAllowed) {
    throw new ForbiddenException("You're not allowed to do that.");
  }
  return doSomething(req);
}

static async handleRequest(req) {
  try {
    let result = await doSomethingOnAllowedRequest(req);
    sendResult(result);
  } catch (err) {
    sendErrorCode(err);
  }
}
This could give you some tips; maybe it is the cause (not sure if relevant):
Catch statement does not catch thrown error
"The reason why your try catch block is failing is because an AJAX request is asynchronous. The try catch block will execute before the AJAX call and send the request itself, but the error is thrown when the result is returned, AT A LATER POINT IN TIME.
When the try catch block is executed, there is no error. When the error is thrown, there is no try catch. If you need try catch for AJAX requests, always put the try catch block inside the success callback, NEVER outside of it."
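A minimal sketch of that timing gotcha, using a callback-style request (fetchJson, render, and the callback shape are assumed names for illustration):
try {
  fetchJson("/api/data", function (err, data) {
    // this callback runs later, long after the try block below has already finished
    if (err) throw err; // thrown here, this is NOT caught by the outer catch
    render(data);
  });
} catch (e) {
  // never reached for errors raised inside the callback
  console.log("caught:", e);
}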

Is try-catch meant to prevent or handle errors? (in javascript)

Recently I had a discussion with my coworker about using try and catch to notify about errors or to avoid them.
This is my coworker's approach:
import Config from 'config';
export const getUserFromLocalStorage = () => {
  const key = Object.keys(localStorage).find(value => value === `${Config.applicationId}/currentUser`);
  try {
    return key ? JSON.parse(localStorage[key]) : {};
  } catch (e) {
    return {};
  }
};
Which means he doesn't care about the given error and just takes care of returning an object in order to continue the process,
and mine is:
import Config from 'config';
export const getUserFromLocalStorage = () => {
  const key = Object.keys(localStorage).find(value => value === `${Config.applicationId}/currentUser`);
  try {
    return key ? JSON.parse(localStorage[key]) : {};
  } catch (e) {
    console.log('the given error', e); // Just simple notifier for this example
  }
};
But my approach still has a problem: it will return undefined (which can crash my app internally). I could easily fix that by using finally and returning a default value, but that doesn't sound like good practice to me.
THE QUESTION
So what would be the right balance in using try, catch, and finally (if needed) to make my application stable?
Is there something wrong with our approach?
In particular, we can't trust the data that is coming from localStorage, so what would be the best approach for this implementation?
Since finally is executed in either case, whether something was thrown or not, it is not the place to return a default value. It is also questionable whether you need to log the error in great detail or not. It all depends on whether something is an expected error or a truly exceptional circumstance and who can do something about it.
Is it rather likely or possible that the value stored is invalid JSON? And you have a "backup plan" for what to do in that case? And there's nothing the user and/or developer can do about it? Then don't bother anyone with it. Maybe you want to console.log a message that might aid in debugging, but other than that just carry on with the program flow. There's most certainly no need to bug the user with an alert if a) the user didn't initiate the action and b) there's nothing for them to do about it either.
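To illustrate why finally is not the place for a default value, here is a small sketch (parseUser is an assumed name): a return inside finally silently overrides both the value returned from try and any error that was about to propagate.
function parseUser(json) {
  try {
    return JSON.parse(json); // this return value is discarded...
  } catch (e) {
    throw new Error("invalid user data"); // ...and so is this error...
  } finally {
    return {}; // ...because the return in finally always wins
  }
}
parseUser('{"name":"Ada"}'); // {} - not the parsed object you wanted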
Considerations to take:
Whether to catch an error in the first place:
is it an expected error which may naturally occur during the program flow?
is it an error you can do something about?
do you have any plan what to do if you caught the error?
Whether to log an error:
does this log do anyone any good?
will anybody ever see that log entry?
does it give anyone any useful information that can lead to fixing the issue?
Whether to bug the user about something:
did the user initiate the action?
does the user expect some form of response, positive or negative?
can the user do anything to fix the problem?
Whether to return an empty object or nothing/null/undefined depends on what the function's responsibility is. Is the function defined to always return an object? Then it should return {} from catch. Or is "nothing" a valid response when the expected object doesn't exist? Then perhaps return false.
Overall, your coworker's approach seems very reasonable to me.
In this specific case, where you are working with localStorage (which almost always inevitably means working with JSON.parse()), it's always best practice to wrap your processing in try-catch. This is because both localStorage and JSON.parse have exceptions as a normal part of their error handling and it is usually possible to fallback to a default or initial value gracefully.
A pattern I use is something like as follows:
const DEFAULT_VALUE = {};
try {
  // "raw" is the JSON string read from localStorage
  const result = JSON.parse(raw);
  return result || DEFAULT_VALUE;
} catch (e) {
  console.warn('Error parsing result', e);
}
return DEFAULT_VALUE;
This way, you have consistent error handling and default value fallback.
In general, you shouldn't need to use try-catch unless you can and will safely handle the error and generate a useful fallback. Most try-catch blocks tend to sit at the bottom of a call stack for this reason: they catch unplanned errors, handle them gracefully for the user, but log them noisily with a call stack to the console for the developer to investigate, handle correctly, or work around.
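As a sketch of that "catch at the bottom of the call stack" idea (main, doAllTheWork, and showFriendlyError are assumed names): unplanned errors bubble up to one place where they are logged noisily for the developer and reported gently to the user.
async function main() {
  try {
    await doAllTheWork(); // anything thrown deeper in the call stack ends up here
  } catch (error) {
    console.error("Unexpected failure:", error); // noisy log with stack trace for the developer
    showFriendlyError("Something went wrong, please try again."); // graceful message for the user
  }
}
main();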
I think the most important thing is user satisfaction. At the end of the day, the program is used by a normal user, and that user needs to be able to continue their work without interruptions.
So I think the best practice is to use try to run the code, catch to notify the developer and/or the user that there is an exception, and finally to recover from the exception by returning a valid object.
That way the user can continue working, and the developer can check the error in a log file for later debugging.
This is my personal idea.

Achieve error differentiation in Promises

Background
I have a REST API using MongoDB, Node.js and Express that makes a request to my NoSQL DB and, depending on different results, I want to differentiate which error I send to the customer.
Problem
The current version of my code has a generic error handler and always sends the same error message to the client:
api.post("/Surveys/", (req, res) => {
  const surveyJSON = req.body;
  const sender = replyFactory(res);
  Survey.findOne({_id: surveyJSON.id})
    .then(doc => {
      if (doc !== null)
        throw {reason: "ObjectRepeated"};
      //do stuff
      return new Survey(surveyJSON).save();
    })
    .then(() => sender.replySuccess("Object saved with success!"))
    .catch(error => {
      /*
       * Here I don't know if:
       * 1. The object is repeated
       * 2. There was an error while saving (eg. validation failed)
       * 3. Server had a hiccup (500)
       */
      sender.replyBadRequest(error);
    });
});
This is a problem, because the client will always get the same error message, no matter what and I need error differentiation!
Research
I found a possible solution, based on the division of logic and error/response handling:
Handling multiple catches in promise chain
However, I don't understand a few things:
I don't see how, at least in my example, I can separate the logic from the response. The response will depend on the logic after all!
I would like to avoid error sub-classing and hierarchy. First because I don't use bluebird, and I can't subclass the error class the answer suggests, and second because I don't want my code with a billion different error classes with brittle hierarchies that will change in the future.
My idea, which I don't really like either
With this structure, if I want error differentiation, the only thing I can do is to detect an error occurred, build an object with that information and then throw it:
.then(doc => {
  if (doc === null)
    throw {reason: "ObjectNotFound"};
  //do stuff
  return doc.save();
})
.catch(error => {
  if (error.reason === "ObjectNotFound")
    sendJsonResponse(res, 404, error);
  else if (error.reason === "Something else")
    sendJsonResponse(/*you get the idea*/);
  else //if we don't know the reason, it's because the server likely crashed
    sendJsonResponse(res, 500, error);
});
I personally don't find this solution particularly attractive because it means I will have a huge if then else chain of statements in my catch block.
Also, as mentioned in the previous post, generic error handlers are usually frowned upon (and for a good reason imo).
Questions
How can I improve this code?
Objectives
When I started this thread, I had two objectives in mind:
Having error differentiation
Avoid an if then else of doom in a generic catcher
I have now come up with two radically distinct solutions, which I now post here, for future reference.
Solution 1: Generic error handler with Errors Object
This solution is based on the solution from @Marc Rohloff; however, instead of having an array of functions and looping through each one, I have an object with all the errors.
This approach is better because it is faster and removes the need for the if validation, meaning you actually run less logic:
const errorHandlers = {
  ObjectRepeated: function(error) {
    return { code: 400, error };
  },
  SomethingElse: function(error) {
    return { code: 499, error };
  }
};

Survey.findOne({
  _id: "bananasId"
})
  .then(doc => {
    //we don't want to add this object if we already have it
    if (doc !== null)
      throw { reason: "ObjectRepeated", error: "Object could not be inserted because it already exists." };
    //saving empty object for demonstration purposes
    return new Survey({}).save();
  })
  .then(() => console.log("Object saved with success!"))
  .catch(error => {
    respondToError(error);
  });

const respondToError = error => {
  const handler = errorHandlers[error.reason];
  if (handler !== undefined) {
    const errorObj = handler(error);
    console.log(`Failed with ${errorObj.code} and reason ${error.reason}: ${JSON.stringify(errorObj)}`);
  } else {
    //send default error Obj, server 500
    console.log(`Generic fail message ${JSON.stringify(error)}`);
  }
};
This solution achieves:
Partial error differentiation (I will explain why)
Avoids an if then else of doom.
This solution only has partial error differentiation. The reason is that you can only differentiate errors that you specifically build, via the throw {reason: "reasonHere", error: "errorHere"} mechanism.
In this example, you would be able to know if the document already exists, but if there is an error saving said document (let's say, a validation one) then it would be treated as a "Generic" error and thrown as a 500.
To achieve full error differentiation with this, you would have to use the nested Promise anti-pattern, like the following:
.then(doc => {
  //we don't want to add this object if we already have it
  if (doc !== null)
    throw { reason: "ObjectRepeated", error: "Object could not be inserted because it already exists." };
  //saving empty object for demonstration purposes
  return new Survey({}).save()
    .then(() => { console.log("great success!"); })
    .catch(error => { throw { reason: "SomethingElse", error }; });
})
It would work... But I see it as a best practice to avoid anti-patterns.
Solution 2: Using ES6 generators via co
This solution uses ES6 generators via the library co. With a syntax similar to async/await (which is meant to take over this role in the near future), this feature allows you to write asynchronous code that reads like synchronous code (well, almost).
To use it, you first need to install co, or something similar like ogen. I pretty much prefer co, so that is what I will be using here.
const requestHandler = function*() {
const survey = yield Survey.findOne({
_id: "bananasId"
});
if (survey !== null) {
console.log("use HTTP PUT instead!");
return;
}
try {
//saving empty object for demonstration purposes
yield(new Survey({}).save());
console.log("Saved Successfully !");
return;
}
catch (error) {
console.log(`Failed to save with error: ${error}`);
return;
}
};
co(requestHandler)
.then(() => {
console.log("finished!");
})
.catch(console.log);
The generator function requestHandler will yield all Promises to the library, which will resolve them and either return or throw accordingly.
Using this strategy, you effectively code like you were coding synchronous code (except for the use of yield).
I personally prefer this strategy because:
Your code is easy to read and it looks synchronous (while still have the advantages of asynchronous code).
You do not have to build and throw error objects everywhere; you can simply send the message immediately.
And you can BREAK the code flow via return. This is not possible in a promise chain, where you have to force a throw (many times a meaningless one) and catch it to stop executing.
The generator function will only be executed once passed into the library co, which then returns a Promise, stating if the execution was successful or not.
This solution achieves:
error differentiation
avoids if then else hell and generalized catchers (although you will use try/catch in your code, and you still have access to a generalized catcher if you need one).
Using generators is, in my opinion, more flexible and makes for easier to read code. Not all cases are cases for generator usage (like mpj suggests in the video) but in this specific case, I believe it to be the best option.
Conclusion
Solution 1: good classical approach to the problem, but has issues inherent to promise chaining. You can overcome some of them by nesting promises, but that is an anti pattern and defeats their purpose.
Solution 2: more versatile, but requires a library and knowledge on how generators work. Furthermore, different libraries will have different behaviors, so you should be aware of that.
I think a good improvement would be creating an error utility method that takes the error message as a parameter, then does all your ifs to try to parse the error (logic that does have to happen somewhere) and returns a formatted error.
function errorFormatter(errMsg) {
  var formattedErr = {
    responseCode: 500,
    msg: 'Internal Server Error'
  };
  switch (true) {
    case errMsg.includes('ObjectNotFound'):
      formattedErr.responseCode = 404;
      formattedErr.msg = 'Resource not found';
      break;
  }
  return formattedErr;
}

How to avoid Node.js crashes [duplicate]

I just started trying out Node.js a few days ago. I've realized that Node is terminated whenever I have an unhandled exception in my program. This is different from the normal server container that I have been exposed to, where only the worker thread dies when an unhandled exception occurs and the container is still able to receive requests. This raises a few questions:
Is process.on('uncaughtException') the only effective way to guard against it?
Will process.on('uncaughtException') catch the unhandled exception during execution of asynchronous processes as well?
Is there a module that is already built (such as sending email or writing to a file) that I could leverage in the case of uncaught exceptions?
I would appreciate any pointer/article that would show me the common best practices for handling uncaught exceptions in node.js
Update: Joyent now has their own guide. The following information is more of a summary:
Safely "throwing" errors
Ideally we'd like to avoid uncaught errors as much as possible. As such, instead of literally throwing the error, we can safely "throw" it using one of the following methods, depending on our code architecture:
For synchronous code, if an error happens, return the error:
// Define divider as a synchronous function
var divideSync = function(x, y) {
  // if error condition?
  if (y === 0) {
    // "throw" the error safely by returning it
    return new Error("Can't divide by zero")
  }
  else {
    // no error occurred, continue on
    return x / y
  }
}

// Divide 4/2
var result = divideSync(4, 2)
// did an error occur?
if (result instanceof Error) {
  // handle the error safely
  console.log('4/2=err', result)
}
else {
  // no error occurred, continue on
  console.log('4/2=' + result)
}

// Divide 4/0
result = divideSync(4, 0)
// did an error occur?
if (result instanceof Error) {
  // handle the error safely
  console.log('4/0=err', result)
}
else {
  // no error occurred, continue on
  console.log('4/0=' + result)
}
For callback-based (i.e. asynchronous) code, the first argument of the callback is err: if an error happens, err is the error; if no error happens, err is null. Any other arguments follow the err argument:
var divide = function(x, y, next) {
  // if error condition?
  if (y === 0) {
    // "throw" the error safely by calling the completion callback
    // with the first argument being the error
    next(new Error("Can't divide by zero"))
  }
  else {
    // no error occurred, continue on
    next(null, x / y)
  }
}

divide(4, 2, function(err, result) {
  // did an error occur?
  if (err) {
    // handle the error safely
    console.log('4/2=err', err)
  }
  else {
    // no error occurred, continue on
    console.log('4/2=' + result)
  }
})

divide(4, 0, function(err, result) {
  // did an error occur?
  if (err) {
    // handle the error safely
    console.log('4/0=err', err)
  }
  else {
    // no error occurred, continue on
    console.log('4/0=' + result)
  }
})
For eventful code, where the error may happen anywhere, fire an error event instead of throwing the error:
// Define our Divider Event Emitter
var events = require('events')
var Divider = function() {
  events.EventEmitter.call(this)
}
require('util').inherits(Divider, events.EventEmitter)

// Add the divide function
Divider.prototype.divide = function(x, y) {
  // if error condition?
  if (y === 0) {
    // "throw" the error safely by emitting it
    var err = new Error("Can't divide by zero")
    this.emit('error', err)
  }
  else {
    // no error occurred, continue on
    this.emit('divided', x, y, x / y)
  }
  // Chain
  return this;
}

// Create our divider and listen for errors
var divider = new Divider()
divider.on('error', function(err) {
  // handle the error safely
  console.log(err)
})
divider.on('divided', function(x, y, result) {
  console.log(x + '/' + y + '=' + result)
})

// Divide
divider.divide(4, 2).divide(4, 0)
Safely "catching" errors
Sometimes though, there may still be code that throws an error somewhere which can lead to an uncaught exception and a potential crash of our application if we don't catch it safely. Depending on our code architecture we can use one of the following methods to catch it:
When we know where the error is occurring, we can wrap that section in a node.js domain
var d = require('domain').create()
d.on('error', function(err) {
  // handle the error safely
  console.log(err)
})

// catch the uncaught errors in this asynchronous or synchronous code block
d.run(function() {
  // the asynchronous or synchronous code that we want to catch thrown errors on
  var err = new Error('example')
  throw err
})
If we know the error is occurring in synchronous code, and for whatever reason we can't use domains (perhaps an old version of node), we can use the try catch statement:
// catch the uncaught errors in this synchronous code block
// try catch statements only work on synchronous code
try {
  // the synchronous code that we want to catch thrown errors on
  var err = new Error('example')
  throw err
} catch (err) {
  // handle the error safely
  console.log(err)
}
However, be careful not to use try...catch in asynchronous code, as an asynchronously thrown error will not be caught:
try {
  setTimeout(function() {
    var err = new Error('example')
    throw err
  }, 1000)
}
catch (err) {
  // Example error won't be caught here... crashing our app
  // hence the need for domains
}
If you do want to work with try..catch in conjunction with asynchronous code, when running Node 7.4 or higher you can use async/await natively to write your asynchronous functions.
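A minimal sketch of that, assuming a promise-returning helper called delayedFailure (an assumed name): because the await happens inside the try block, the asynchronously produced rejection is caught.
async function run() {
  try {
    await delayedFailure(); // assumed helper returning a promise that rejects later
    console.log('no error')
  } catch (err) {
    // unlike the setTimeout example above, the rejection IS caught here
    console.log('caught:', err)
  }
}
run()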
Another thing to be careful about with try...catch is the risk of wrapping your completion callback inside the try statement like so:
var divide = function(x, y, next) {
  // if error condition?
  if (y === 0) {
    // "throw" the error safely by calling the completion callback
    // with the first argument being the error
    next(new Error("Can't divide by zero"))
  }
  else {
    // no error occurred, continue on
    next(null, x / y)
  }
}

var continueElsewhere = function(err, result) {
  throw new Error('elsewhere has failed')
}

try {
  divide(4, 2, continueElsewhere)
  // ^ the execution of divide, and the execution of
  //   continueElsewhere will be inside the try statement
}
catch (err) {
  console.log(err.stack)
  // ^ will output the "unexpected" result of: elsewhere has failed
}
This gotcha is very easy to run into as your code becomes more complex. As such, it is best to either use domains or to return errors, to avoid (1) uncaught exceptions in asynchronous code and (2) the try catch catching execution that you don't want it to. In languages that allow for proper threading, instead of JavaScript's asynchronous event-machine style, this is less of an issue.
Finally, in the case where an uncaught error happens in a place that wasn't wrapped in a domain or a try catch statement, we can make our application not crash by using the uncaughtException listener (however doing so can put the application in an unknown state):
// catch the uncaught errors that weren't wrapped in a domain or try catch statement
// do not use this in modules, but only in applications, as otherwise we could have multiple of these bound
process.on('uncaughtException', function(err) {
// handle the error safely
console.log(err)
})
// the asynchronous or synchronous code that emits the otherwise uncaught error
var err = new Error('example')
throw err
Following is a summarization and curation from many different sources on this topic, including code examples and quotes from selected blog posts. The complete list of best practices can be found here
Best practices of Node.JS error handling
Number1: Use promises for async error handling
TL;DR: Handling async errors in callback style is probably the fastest way to hell (a.k.a. the pyramid of doom). The best gift you can give your code is to use a reputable promise library instead, which provides a much more compact and familiar code syntax, like try-catch.
Otherwise: The Node.JS callback style, function(err, response), is a promising way to unmaintainable code due to the mix of error handling with casual code, excessive nesting and awkward coding patterns.
Code example - good
doWork()
.then(doWork)
.then(doError)
.then(doWork)
.catch(errorHandler)
.then(verify);
code example anti pattern – callback style error handling
getData(someParameter, function(err, result) {
  if (err != null) {
    // do something like calling the given callback function and pass the error
  }
  getMoreData(a, function(err, result) {
    if (err != null) {
      // do something like calling the given callback function and pass the error
    }
    getMoreData(b, function(c) {
      getMoreData(d, function(e) {
        // ...
      });
    });
  });
});
Blog quote: "We have a problem with promises"
(From the blog pouchdb, ranked 11 for the keywords "Node Promises")
"…And in fact, callbacks do something even more sinister: they deprive us of the stack, which is something we usually take for granted in programming languages. Writing code without a stack is a lot like driving a car without a brake pedal: you don’t realize how badly you need it, until you reach for it and it’s not there. The whole point of promises is to give us back the language fundamentals we lost when we went async: return, throw, and the stack. But you have to know how to use promises correctly in order to take advantage of them."
Number2: Use only the built-in Error object
TL;DR: It is pretty common to see code that throws errors as a string or as a custom type – this complicates the error handling logic and the interoperability between modules. Whether you reject a promise, throw an exception or emit an error – using the Node.JS built-in Error object increases uniformity and prevents loss of error information.
Otherwise: When executing some module and being uncertain which type of error comes in return – it becomes much harder to reason about the exception and handle it. Even worse, using custom types to describe errors might lead to loss of critical error information like the stack trace!
Code example - doing it right
//throwing an Error from typical function, whether sync or async
if(!productToAdd)
throw new Error("How can I add new product when no value provided?");
//'throwing' an Error from EventEmitter
const myEmitter = new MyEmitter();
myEmitter.emit('error', new Error('whoops!'));
//'throwing' an Error from a Promise
return new promise(function (resolve, reject) {
DAL.getProduct(productToAdd.id).then((existingProduct) =>{
if(existingProduct != null)
return reject(new Error("Why fooling us and trying to add an existing product?"));
code example anti pattern
//throwing a String lacks any stack trace information and other important properties
if (!productToAdd)
  throw ("How can I add new product when no value provided?");
Blog quote: "A string is not an error"
(From the blog devthought, ranked 6 for the keywords “Node.JS error object”)
"…passing a string instead of an error results in reduced interoperability between modules. It breaks contracts with APIs that might be performing instanceof Error checks, or that want to know more about the error. Error objects, as we’ll see, have very interesting properties in modern JavaScript engines besides holding the message passed to the constructor.."
Number3: Distinguish operational vs programmer errors
TL;DR: Operational errors (e.g. an API received invalid input) refer to known cases where the error impact is fully understood and can be handled thoughtfully. On the other hand, programmer errors (e.g. trying to read an undefined variable) refer to unknown code failures that dictate gracefully restarting the application.
Otherwise: You could always restart the application when an error appears, but why let ~5000 online users down because of a minor and predicted error (an operational error)? The opposite is also not ideal – keeping the application up when an unknown issue (a programmer error) has occurred might lead to unpredictable behavior. Differentiating the two allows acting tactfully and applying a balanced approach based on the given context.
code example - marking an error as operational (trusted)
//marking an error object as operational
var myError = new Error("How can I add new product when no value provided?");
myError.isOperational = true;

//or if you're using some centralized error factory (see other examples at the bullet "Use only the built-in Error object")
function appError(commonType, description, isOperational) {
  Error.call(this);
  Error.captureStackTrace(this);
  this.commonType = commonType;
  this.description = description;
  this.isOperational = isOperational;
};

throw new appError(errorManagement.commonErrors.InvalidInput, "Describe here what happened", true);

//error handling code within middleware
process.on('uncaughtException', function(error) {
  if (!error.isOperational)
    process.exit(1);
});
Blog Quote: "Otherwise you risk the state"
(From the blog debugable, ranked 3 for the keywords "Node.JS uncaught exception")
"…By the very nature of how throw works in JavaScript, there is almost never any way to safely “pick up where you left off”, without leaking references, or creating some other sort of undefined brittle state. The safest way to respond to a thrown error is to shut down the process. Of course, in a normal web server, you might have many connections open, and it is not reasonable to abruptly shut those down because an error was triggered by someone else. The better approach is to send an error response to the request that triggered the error, while letting the others finish in their normal time, and stop listening for new requests in that worker"
Number4: Handle errors centrally, through but not within middleware
TL;DR: Error handling logic such as mail to admin and logging should be encapsulated in a dedicated and centralized object that all end-points (e.g. Express middleware, cron jobs, unit-testing) call when an error comes in.
Otherwise: Not handling errors within a single place will lead to code duplication and probably to errors that are handled improperly
Code example - a typical error flow
//DAL layer, we don't handle errors here
DB.addDocument(newCustomer, (error, result) => {
  if (error)
    throw new Error("Great error explanation comes here"); // plus other useful parameters
});

//API route code, we catch both sync and async errors and forward to the middleware
try {
  customerService.addNew(req.body).then(function (result) {
    res.status(200).json(result);
  }).catch((error) => {
    next(error);
  });
}
catch (error) {
  next(error);
}

//Error handling middleware, we delegate the handling to the centralized error handler
app.use(function (err, req, res, next) {
  errorHandler.handleError(err).then((isOperationalError) => {
    if (!isOperationalError)
      next(err);
  });
});
Blog quote: "Sometimes lower levels can’t do anything useful except propagate the error to their caller"
(From the blog Joyent, ranked 1 for the keywords “Node.JS error handling”)
"…You may end up handling the same error at several levels of the stack. This happens when lower levels can’t do anything useful except propagate the error to their caller, which propagates the error to its caller, and so on. Often, only the top-level caller knows what the appropriate response is, whether that’s to retry the operation, report an error to the user, or something else. But that doesn’t mean you should try to report all errors to a single top-level callback, because that callback itself can’t know in what context the error occurred"
Number5: Document API errors using Swagger
TL;DR: Let your API callers know which errors might come in return so they can handle them thoughtfully without crashing. This is usually done with REST API documentation frameworks like Swagger.
Otherwise: An API client might decide to crash and restart only because it received back an error it couldn't understand. Note: the caller of your API might be you (very typical in a microservices environment).
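Code example - a sketch of documenting expected error responses (the route and status codes below are assumptions for illustration, written as a plain OpenAPI-style responses fragment rather than the output of any particular Swagger tooling)
//responses section for POST /Surveys in an OpenAPI/Swagger document, expressed as a JS object
const surveyPostResponses = {
  "200": { description: "Survey saved successfully" },
  "400": { description: "Validation failed or the survey already exists" },
  "403": { description: "Caller is not allowed to create surveys" },
  "500": { description: "Unexpected server error" }
};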
Blog quote: "You have to tell your callers what errors can happen"
(From the blog Joyent, ranked 1 for the keywords “Node.JS logging”)
…We’ve talked about how to handle errors, but when you’re writing a new function, how do you deliver errors to the code that called your function? …If you don’t know what errors can happen or don’t know what they mean, then your program cannot be correct except by accident. So if you’re writing a new function, you have to tell your callers what errors can happen and what they mea
Number6: Shut the process gracefully when a stranger comes to town
TL;DR: When an unknown error occurs (a developer error, see best practice #3) there is uncertainty about the application's health. A common practice suggests restarting the process carefully using a 'restarter' tool like Forever or PM2.
Otherwise: When an unfamiliar exception is caught, some object might be in a faulty state (e.g. an event emitter which is used globally and no longer firing events due to some internal failure) and all future requests might fail or behave strangely.
Code example - deciding whether to crash
//deciding whether to crash when an uncaught exception arrives
//Assuming developers mark known operational errors with error.isOperational=true, read best practice #3
process.on('uncaughtException', function(error) {
  errorManagement.handler.handleError(error);
  if (!errorManagement.handler.isTrustedError(error))
    process.exit(1);
});

//centralized error handler encapsulates error-handling related logic
function errorHandler() {
  this.handleError = function (error) {
    return logger.logError(error).then(sendMailToAdminIfCritical).then(saveInOpsQueueIfCritical).then(determineIfOperationalError);
  };
  this.isTrustedError = function (error) {
    return error.isOperational;
  };
}
Blog quote: "There are three schools of thoughts on error handling"
(From the blog jsrecipes)
…There are primarily three schools of thoughts on error handling: 1. Let the application crash and restart it. 2. Handle all possible errors and never crash. 3. Balanced approach between the two
Number7: Use a mature logger to increase errors visibility
TL;DR: A set of mature logging tools like Winston, Bunyan or Log4J will speed up error discovery and understanding. So forget about console.log.
Otherwise: Skimming through console.logs or manually searching through a messy text file without querying tools or a decent log viewer might keep you busy at work until late.
Code example - Winston logger in action
//your centralized logger object
var winston = require('winston');
var logger = new winston.Logger({
  level: 'info',
  transports: [
    new (winston.transports.Console)(),
    new (winston.transports.File)({ filename: 'somefile.log' })
  ]
});

//custom code somewhere using the logger
logger.log('info', 'Test Log Message with some parameter %s', 'some parameter', { anything: 'This is metadata' });
Blog quote: "Lets identify a few requirements (for a logger):"
(From the blog strongblog)
…Lets identify a few requirements (for a logger):
1. Time stamp each log line. This one is pretty self-explanatory – you should be able to tell when each log entry occurred.
2. Logging format should be easily digestible by humans as well as machines.
3. Allows for multiple configurable destination streams. For example, you might be writing trace logs to one file but when an error is encountered, write to the same file, then into error file and send an email at the same time…
Number8: Discover errors and downtime using APM products
TL;DR: Monitoring and performance products (a.k.a. APM) proactively gauge your codebase or API so they can automagically highlight errors, crashes and slow parts that you were missing.
Otherwise: You might spend great effort on measuring API performance and downtime, but you'll probably never be aware of which are your slowest code parts under real-world scenarios and how they affect the UX.
Blog quote: "APM products segments"
(From the blog Yoni Goldberg)
"…APM products constitute 3 major segments:
1. Website or API monitoring – external services that constantly monitor uptime and performance via HTTP requests. Can be set up in a few minutes. A few selected contenders: Pingdom, Uptime Robot, and New Relic.
2. Code instrumentation – a product family that requires embedding an agent within the application to benefit from slow code detection, exception statistics, performance monitoring and more. A few selected contenders: New Relic, App Dynamics.
3. Operational intelligence dashboard – this line of products focuses on providing the ops team with metrics and curated content that helps them easily stay on top of application performance. It usually involves aggregating multiple sources of information (application logs, DB logs, server logs, etc.) and upfront dashboard design work. A few selected contenders: Datadog, Splunk."
The above is a shortened version - see here more best practices and examples
You can catch uncaught exceptions, but it's of limited use. See http://debuggable.com/posts/node-js-dealing-with-uncaught-exceptions:4c933d54-1428-443c-928d-4e1ecbdd56cb
monit, forever or upstart can be used to restart the node process when it crashes. A graceful shutdown is the best you can hope for (e.g. save all in-memory data in your uncaught exception handler).
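A minimal sketch of that idea (saveInMemoryData is an assumed name for whatever persistence you have): log the error, try to flush state, then exit so the process manager can restart a clean instance.
process.on('uncaughtException', function (err) {
  console.error('Uncaught exception, shutting down:', err);
  try {
    saveInMemoryData(); // assumed: flush whatever in-memory state you can
  } finally {
    process.exit(1); // let monit/forever/upstart restart a fresh process
  }
});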
Node.js domains are the most up-to-date way of handling errors in Node.js. Domains can capture both error/other events as well as traditionally thrown objects. Domains also provide functionality for handling callbacks with an error passed as the first argument via the intercept method.
As with normal try/catch-style error handling, it is usually best to throw errors when they occur, and to block out areas where you want to isolate errors from affecting the rest of the code. The way to "block out" these areas is to call domain.run with a function as a block of isolated code.
In synchronous code, the above is enough - when an error happens you either let it be thrown through, or you catch it and handle it there, reverting any data you need to revert.
try {
  //something
} catch(e) {
  // handle data reversion
  // probably log too
}
When the error happens in an asynchronous callback, you either need to be able to fully handle the rollback of data (shared state, external data like databases, etc.), OR you have to set something to indicate that an exception has happened - wherever you care about that flag, you have to wait for the callback to complete.
var Fiber = require('fibers');
var err = null;
var d = require('domain').create();
d.on('error', function(e) {
  err = e;
  // any additional error handling
});
d.run(function() {
  Fiber(function() {
    // do stuff
    var future = somethingAsynchronous();
    // more stuff

    future.wait(); // here we care about the error
    if (err != null) {
      // handle data reversion
      // probably log too
    }
  }).run(); // start the fiber
});
Some of the above code is ugly, but you can create patterns for yourself to make it prettier, e.g.:
var specialDomain = specialDomain(function() {
  // do stuff
  var future = somethingAsynchronous();
  // more stuff

  future.wait(); // here we care about the error
  if (specialDomain.error()) {
    // handle data reversion
    // probably log too
  }
}, function() { // "catch"
  // any additional error handling
});
UPDATE (2013-09):
Above, I use a future that implies fiber semantics, which allows you to wait on futures in-line. This actually allows you to use traditional try-catch blocks for everything - which I find to be the best way to go. However, you can't always do this (i.e. in the browser)...
There are also futures that don't require fiber semantics (which then work with normal, browser JavaScript). These can be called futures, promises, or deferreds (I'll just refer to futures from here on). Plain-old-JavaScript future libraries allow errors to be propagated between futures. Only some of these libraries allow thrown exceptions to be correctly handled, so beware.
An example:
returnsAFuture().then(function() {
  console.log('1')
  return doSomething() // also returns a future
}).then(function() {
  console.log('2')
  throw Error("oops an error was thrown")
}).then(function() {
  console.log('3')
}).catch(function(exception) {
  console.log('handler')
  // handle the exception
}).done()
This mimics a normal try-catch, even though the pieces are asynchronous. It would print:
1
2
handler
Note that it doesn't print '3' because an exception was thrown that interrupts that flow.
Take a look at bluebird promises:
https://github.com/petkaantonov/bluebird
Note that I haven't found many libraries other than these that properly handle thrown exceptions. jQuery's Deferred, for example, doesn't - the "fail" handler would never get the exception thrown in a 'then' handler, which in my opinion is a deal breaker.
I wrote about this recently at http://snmaynard.com/2012/12/21/node-error-handling/. A new feature of node in version 0.8 is domains, which allow you to combine all the forms of error handling into one easier-to-manage form. You can read about them in my post.
You can also use something like Bugsnag to track your uncaught exceptions and be notified via email, chatroom or have a ticket created for an uncaught exception (I am the co-founder of Bugsnag).
One instance where using a try-catch might be appropriate is when using a forEach loop. It is synchronous, but at the same time you cannot just use a return statement in the inner scope. Instead, a try-and-catch approach can be used to return an Error object in the appropriate scope. Consider:
function processArray() {
  try {
    [1, 2, 3].forEach(function() { throw new Error('exception'); });
  } catch (e) {
    return e;
  }
}
It is a combination of the approaches described by @balupton above.
I would just like to add that the Step.js library helps you handle exceptions by always passing them to the next step function. Therefore you can have, as a last step, a function that checks for any errors from the previous steps. This approach can greatly simplify your error handling.
Below is a quote from the github page:
any exceptions thrown are caught and passed as the first argument to
the next function. As long as you don't nest callback functions inline
your main functions this prevents there from ever being any uncaught
exceptions. This is very important for long running node.JS servers
since a single uncaught exception can bring the whole server down.
Furthermore, you can use Step to control execution of scripts to have a clean up section as the last step. For example if you want to write a build script in Node and report how long it took to write, the last step can do that (rather than trying to dig out the last callback).
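For illustration, a small sketch along the lines of Step's own documentation (not verbatim - check the exact API against the library; this assumes the step and fs modules): an error thrown in any step shows up as the err argument of the next one.
var Step = require('step');
var fs = require('fs');
Step(
  function readSelf() {
    // pass `this` as the callback; Step wires it up to the next function
    fs.readFile(__filename, 'utf8', this);
  },
  function capitalize(err, text) {
    if (err) throw err; // caught by Step and passed to the next step as `err`
    return text.toUpperCase();
  },
  function showIt(err, newText) {
    if (err) throw err;
    console.log(newText);
  }
);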
Catching errors has been very well discussed here, but it's worth remembering to log the errors out somewhere so you can view them and fix stuff up.
Bunyan is a popular logging framework for Node.js - it supports writing out to a bunch of different output places, which makes it useful for local debugging, as long as you avoid console.log.
In your domain's error handler you could spit the error out to a log file.
var bunyan = require('bunyan');
var log = bunyan.createLogger({
  name: 'myapp',
  streams: [
    {
      level: 'error',
      path: '/var/tmp/myapp-error.log' // log ERROR to this file
    }
  ]
});
This can get time-consuming if you have lots of errors and/or servers to check, so it could be worth looking into a tool like Raygun (disclaimer: I work at Raygun) to group errors together - or use them both together.
If you decide to use Raygun as a tool, it's pretty easy to set up too:
var raygun = require('raygun');
var raygunClient = new raygun.Client().init({ apiKey: 'your API key' });
raygunClient.send(theError);
Combined with a tool like PM2 or forever, your app should be able to crash, log what happened, and reboot without any major issues.
After reading this post some time ago I was wondering if it was safe to use domains for exception handling on an api / function level. I wanted to use them to simplify exception handling code in each async function I wrote. My concern was that using a new domain for each function would introduce significant overhead. My homework seems to indicate that there is minimal overhead and that performance is actually better with domains than with try catch in some situations.
http://www.lighthouselogic.com/#/using-a-new-domain-for-each-async-function-in-node/
If you want to use services in Ubuntu (Upstart): Node as a service in Ubuntu 11.04 with upstart, monit and forever.js
getCountryRegionData: (countryName, stateName) => {
  let countryData, stateData
  try {
    countryData = countries.find(
      country => country.countryName === countryName
    )
  } catch (error) {
    console.log(error.message)
    return error.message
  }
  try {
    stateData = countryData.regions.find(state => state.name === stateName)
  } catch (error) {
    console.log(error.message)
    return error.message
  }
  return {
    countryName: countryData.countryName,
    countryCode: countryData.countryShortCode,
    stateName: stateData.name,
    stateCode: stateData.shortCode,
  }
},

Node.js Error throwing known as bad practice, yet essential for TDD

I've heard plenty of people say that throwing errors in Node is bad practice, and that you should instead handle them manually via the CommonJS callback signature:
somethingThatPassesAnError(function(err, value) {
  if (err) console.log("ERROR: " + err);
});
Yet, I've found in multiple unit testing frameworks (Mocha, Should.js, Gently) that it seems like they want you to throw an error when something happens. I mean, sure, you can design your tests to check for equality of variables and check for not-null in error vars, but in the words of Ryan Dahl himself, "you should write your framework to make the right things easy to do and the wrong things hard to do".
So what gives? Can anyone explain why that practice exists? Should I start throwing fatal exceptions like require() would if the module couldn't be found?
It's because Node.js programs typically make heavy use of async, and as a result errors are often thrown after your try/catch has already completed successfully. Consider this contrived example:
function foo(callback) {
  process.nextTick(function() {
    if (something) throw "error";
    callback("data");
  });
}

try {
  foo(function(data) {
    dosomething(data);
  });
} catch (e) {
  // "error" will not be caught here, as this code will have been executed
  // before the callback returns.
}
The typical node pattern, of the first argument in a callback being an error, obviates this problem, providing a consistent way to return errors from asynchronous code.
function foo(callback) {
  process.nextTick(function() {
    if (something) return callback("error");
    callback("data");
  });
}

foo(function(error, data) {
  if (error) return handleError(error);
  dosomething(data);
});
It is my understanding that the case against throwing exceptions in JavaScript is due to the heavy use of asynchronous patterns. When an error occurs on another stack, you can't catch it. In those cases, use the err parameter as the first parameter for the callback.
I don't think that is the same as saying "never throw anything". If I have synchronous code, and an exception occurs, I throw it. There are differing opinions, but if callbacks aren't involved at all, I see no reason to not use throw.
I tend to follow the guidance on Joyent's Error Handling in Node.js for this one. The oversimplified gist is that there are really two types of errors (operational and programmer), and three ways to pass errors (emitting an error event on an event emitter, returning the callback with the error argument as non-null, and throwing the error).
Operational errors are errors that you expect might happen and are able to handle, ie. not necessarily bugs. Programmer errors are errors in the code itself. If you are writing code and you expect the error, then any of the patterns for passing an error are valuable. For example:
If the error happens inside an asynchronous function that accepts a callback, using the idiomatic return callback(new Error('Yadda yadda yadda')) is a correct solution (if you can't handle the error in the function).
If the error happens inside of a synchronous function and is a breaking problem (ie. the program cannot continue without the operation that was attempted) then blowing up with an uncaught thrown error is acceptable.
If the error occurs in a synchronous function but can be dealt with, then the error should be dealt with, otherwise it should be thrown, and maybe the parent function can handle it, maybe not.
Personally, I tend to only throw errors that I consider fatal, thus my code is mostly devoid of try/catch blocks (I even wrap JSON.parse in a function defined thusly: function jsonParseAsync(json, cb) { var out, err; try { out = JSON.parse(json) } catch(e) { err = e }; return cb(err, out); } ). I also try to avoid promises because they conflate promise rejection and thrown errors (even though this is getting harder to do as promises become more ubiquitous). Instead, I tend to think of synchronous functions as mathematical proofs in that if they are correct they must always be correct (thus an error in a synchronous function should break the whole program, otherwise the proof can be wrong but still usable). My error creation and checks are almost entirely assertions for bad input and emitting error events or handling asynchronous errors idiomatically.
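For readability, the inline wrapper mentioned above expands to the following (same logic, just formatted):
function jsonParseAsync(json, cb) {
  var out, err;
  try {
    out = JSON.parse(json);
  } catch (e) {
    err = e;
  }
  // errors are surfaced through the callback's first argument instead of being thrown
  return cb(err, out);
}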
I would suggest using exceptions to handle critical errors, much like the way require() works. If this functionality causes Node.js to misbehave, then that's a bug which I'm sure will get fixed in time.
