What is the proper way to handle Knex pg database errors - javascript

I'm using postgres with the knex javascript library to build my SQL queries. I want to handle all errors thrown by the postgres server, and the way I want to do this is by checking the type of the thrown error.
try {
  // knex query here
} catch (error) {
  if (error instanceof DatabaseError) {
    // handle pg error by checking error.code or something else
    // then send a custom error message to the client
  }
  // handle another error here which is not caused by the postgres server
}
Is there any way to handle errors like this?

Catching Knex/DB Errors:
You can use the async/await syntax:
async function () {
  try {
    await knex(/* knex query here */);
  } catch (error) {
    if (error instanceof DatabaseError) {
      // handle pg error by checking error.code or something else
      // then send a custom error message to the client
    }
    // handle another error here which is not caused by the postgres server
  }
}
If for some reason you don't want to use that (newer) syntax, you can also chain a .catch ...
knex(/* knex query here */)
  .then(doWhatever)
  .catch(error => {
    if (error instanceof DatabaseError) { /* ... */ }
  });
Alternatively, you can also use Knex's query-error event (https://knexjs.org/#Interfaces-query-error) ... but personally I've never seen the point when the built-in promise handlers work just fine.
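For reference, a minimal sketch of what hooking up that query-error event might look like, assuming a hypothetical users table (the exact payload shape may vary between knex versions):
knex('users')
  .select('*')
  .on('query-error', (error, obj) => {
    // error is the driver error; obj describes the query that failed
    console.error('query failed:', error.code, obj && obj.sql);
  })
  .then(rows => { /* ... */ })
  .catch(err => { /* still needed: query-error is for observation/logging, not recovery */ });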
(EDIT) Distinguishing PG Errors From Knex Ones:
If you want to differentiate Knex and PG-specific errors, you can hook up an error-handler to your PG connection directly (bypassing Knex) like so:
function afterCreate(connection, callback) {
  // connectionError is your own handler for errors emitted by the PG connection
  connection.on("error", connectionError);
  callback(null, connection);
}
db.client.pool.config.afterCreate = afterCreate;
If you do that you won't need an error instanceof DatabaseError check, because all errors caught by connection.on will be PG errors.
You might also find this issue thread (where I got that code from) useful: https://github.com/knex/knex/issues/522, as it features a discussion of error handling in Knex, and specifically handling of underlying DB errors.
(EDIT) But How Can I Distinguish Errors When I Catch Them?
Unfortunately, I don't think PG errors have a unique prototype (i.e. class) or other distinguishing feature. You can see this by looking at an example one (from that thread I linked):
{"code":"ECONNREFUSED","errno":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1","port":5432}
As you can see, there's no way to look at that and know "that's coming from PostgreSQL", unless you start checking for specific features like code === 'ECONNREFUSED' (there is no isPg: true flag on the error).
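That said, errors that actually come back from the Postgres server (via the pg driver) do carry a five-character SQLSTATE string in error.code (e.g. '23505' for unique_violation), so a pragmatic sketch, inside an async function and assuming a hypothetical users table, could branch on that:
try {
  await knex('users').insert({ email: 'someone@example.com' }); // hypothetical table/row
} catch (error) {
  if (typeof error.code === 'string' && error.code.length === 5) {
    // 5-character codes are PostgreSQL SQLSTATEs coming from the server
    if (error.code === '23505') {
      // unique_violation -> tell the client the record already exists
    }
  } else {
    // driver/connection-level errors (ECONNREFUSED, ETIMEDOUT, ...) or non-DB errors
  }
}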
And why doesn't Knex just intelligently identify DB errors for us? From that same issue thread:
It is pretty much impossible to create events like redis have for knex because of multitude of different db drivers which doesn't actually support listening for connection errors
-elhigu (Knex team member)

Related

Redis (ioredis) - Unable to catch connection error in order to handle them gracefully

I'm trying to gracefully handle redis errors, in order to bypass them and do something else instead of crashing my app.
But so far, I haven't been able to catch the exception thrown by ioredis; it bypasses my try/catch and terminates the current process. This behaviour doesn't allow me to gracefully handle the error and fetch the data from an alternative system (instead of redis).
import { createLogger } from '#unly/utils-simple-logger';
import Redis from 'ioredis';
import epsagon from './epsagon';
const logger = createLogger({
label: 'Redis client',
});
/**
* Creates a redis client
*
* @param url Url of the redis client, must contain the port number and be of the form "localhost:6379"
* @param password Password of the redis client
* @param maxRetriesPerRequest By default, all pending commands will be flushed with an error every 20 retry attempts.
* That makes sure commands won't wait forever when the connection is down.
* Set to null to disable this behavior, and every command will wait forever until the connection is alive again.
* @return {Redis}
*/
export const getClient = (url = process.env.REDIS_URL, password = process.env.REDIS_PASSWORD, maxRetriesPerRequest = 20) => {
const client = new Redis(`redis://${url}`, {
password,
showFriendlyErrorStack: true, // See https://github.com/luin/ioredis#error-handling
lazyConnect: true, // XXX Don't attempt to connect when initializing the client, in order to properly handle connection failure on a use-case basis
maxRetriesPerRequest,
});
client.on('connect', function () {
logger.info('Connected to redis instance');
});
client.on('ready', function () {
logger.info('Redis instance is ready (data loaded from disk)');
});
// Handles redis connection temporarily going down without app crashing
// If an error is handled here, then redis will attempt to retry the request based on maxRetriesPerRequest
client.on('error', function (e) {
logger.error(`Error connecting to redis: "${e}"`);
epsagon.setError(e);
if (e.message === 'ERR invalid password') {
logger.error(`Fatal error occurred "${e.message}". Stopping server.`);
throw e; // Fatal error, don't attempt to fix
}
});
return client;
};
I'm simulating a bad password/url in order to see how redis reacts when misconfigured. I've set lazyConnect to true in order to handle errors on the caller's side.
But, when I define the url as localhoste:6379 (instead of localhost:6379), I get the following error:
server 2019-08-10T19:44:00.926Z [Redis client] error: Error connecting to redis: "Error: getaddrinfo ENOTFOUND localhoste localhoste:6379"
(x 20)
server 2019-08-10T19:44:11.450Z [Read cache] error: Reached the max retries per request limit (which is 20). Refer to "maxRetriesPerRequest" option for details.
Here is my code:
// Fetch a potential query result for the given query, if it exists in the cache already
let cachedItem;
try {
cachedItem = await redisClient.get(queryString); // This emits an error on the redis client, because it fails to connect (that's intended, to test the behaviour)
} catch (e) {
logger.error(e); // It never goes there, as the error isn't "thrown", but rather "emitted" and handled by redis in its own way
epsagon.setError(e);
}
// If the query is cached, return the results from the cache
if (cachedItem) {
// return item
} else {} // fetch from another endpoint (fallback backup)
My understanding is that redis errors are handled through client.emit('error', error), which is asynchronous; the callee doesn't throw, so the caller can't handle errors using try/catch.
Should redis errors be handled in a very particular way? Isn't it possible to catch them as we usually do with most errors?
Also, it seems redis retries 20 times to connect (by default) before throwing a fatal exception (process is stopped). But I'd like to handle any exception and deal with it my own way.
I've tested the redis client behaviour by providing bad connection data, which makes it impossible to connect as there is no redis instance available at that url, my goal is to ultimately catch all kinds of redis errors and handle them gracefully.
Connection errors are reported as an error event on the client Redis object.
According to the "Auto-reconnect" section of the docs, ioredis will automatically try to reconnect when the connection to Redis is lost (or, presumably, unable to be established in the first place). Only after maxRetriesPerRequest attempts will the pending commands "be flushed with an error", i.e. get to the catch here:
try {
cachedItem = await redisClient.get(queryString); // This emits an error on the redis client, because it fails to connect (that's intended, to test the behaviour)
} catch (e) {
logger.error(e); // It never goes there, as the error isn't "thrown", but rather "emitted" and handled by redis in its own way
epsagon.setError(e);
}
Since you stop your program on the first error:
client.on('error', function (e) {
// ...
if (e.message === 'ERR invalid password') {
logger.error(`Fatal error occurred "${e.message}". Stopping server.`);
throw e; // Fatal error, don't attempt to fix
...the retries and the subsequent "flushing with an error" never get the chance to run.
Ignore the errors in client.on('error', ...), and you should get the error returned from await redisClient.get().
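In other words, a sketch of the adjusted setup (same names as in your snippet, with the handler no longer throwing) would look roughly like this; the error then surfaces at the await once the retry limit is reached:
client.on('error', function (e) {
  // Log and report, but don't throw here: throwing inside the event handler
  // kills the process before ioredis can flush the pending command with an error
  logger.error(`Error connecting to redis: "${e}"`);
  epsagon.setError(e);
});

try {
  cachedItem = await redisClient.get(queryString);
} catch (e) {
  // After maxRetriesPerRequest failed attempts the command is rejected and lands here
  logger.error(e);
  // fall back to the alternative data source
}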
Here is what my team has done with IORedis in a TypeScript project:
import Redis from 'ioredis'; // assumed import for the Redis constructor and types
import chalk from 'chalk';   // assumed import for the colored console output

let redis;
const redisConfig: Redis.RedisOptions = {
  port: parseInt(process.env.REDIS_PORT, 10),
  host: process.env.REDIS_HOST,
  autoResubscribe: false,
  lazyConnect: true,
  maxRetriesPerRequest: 0, // <-- this seems to prevent retries and allow for try/catch
};
try {
  redis = new Redis(redisConfig);
  const infoString = await redis.info();
  console.log(infoString);
} catch (err) {
  console.log(chalk.red('Redis Connection Failure '.padEnd(80, 'X')));
  console.log(err);
  console.log(chalk.red(' Redis Connection Failure'.padStart(80, 'X')));
  // do nothing
} finally {
  await redis.disconnect();
}

Firestore INTERNAL ASSERTION FAILED: Got result for empty write pipeline

I'm using Firebase-Firestore on Javascript (web) with a Progressive web app. I ran into this error:
INTERNAL ASSERTION FAILED: Got result for empty write pipeline
Because Firebase runs asynchronously with XHR requests, it was difficult to determine the exact source of the error - it seemed like any onSnapshot, set or update was throwing this error for me.
And after that first error came a flurry of other errors:
INTERNAL ASSERTION FAILED: AsyncQueue is already failed: Error: FIRESTORE (5.3.0) INTERNAL ASSERTION FAILED: Got result for empty write pipeline
I thought my operation was pretty normal - just using the API set(), update(), onSnapshot() functions when it happened.
It's not a mission critical error - the code runs fine, but I'm hit with a couple thousand errors when I open debug, so it's prohibitive in that regard.
For my PWA I was using a cache-first, web-reupdate model which returns cachedResponse but also fetch()es the response and caches the fetched response.
Anyone have any insights?
It was the PWA! Using the PWA, I was catching all GET requests, including Firebase's own GETs. Filtering to ensure CORS requests don't return from cache fixed the problem.
To solve this, I added this code to my PWA:
self.addEventListener("fetch", event => {
  if (event.request.method == "GET") {
    event.respondWith(
      (async function () {
        const cachedResponse = await cache.match(event.request, {
          ignoreSearch: true
        });
        // Return the cached response if we have one, otherwise return the network response.
        if (cachedResponse && event.request.mode != "cors") {
          // AVOID CORS FOR THINGS LIKE FIREBASE
          updateCache(event);
          return cachedResponse;
        } else return await updateCache(event);
      })()
    );
  } else {
    event.respondWith(fetch(event.request));
  }
});
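For context, cache and updateCache aren't defined in that snippet; a rough sketch of what such a helper might look like (hypothetical cache name, using the standard Cache Storage API, with the cache handle obtained via caches.open inside the handler) is:
const CACHE_NAME = "app-cache-v1"; // hypothetical cache name

// Fetch from the network, store a copy in the cache, and return the network response
async function updateCache(event) {
  const networkResponse = await fetch(event.request);
  const cache = await caches.open(CACHE_NAME);
  if (networkResponse && networkResponse.ok) {
    // Cache a clone so the original body can still be streamed to the page
    cache.put(event.request, networkResponse.clone());
  }
  return networkResponse;
}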
If you're new to the PWA space, want to get a jump start to ANY PWA project, or want to just 'share notes', the repo with the full comprehensive PWA file is here: https://github.com/acenturyandabit/genUI/blob/master/Javascript/pwa.js
I've personally put a lot of time into this so I hope it helps :)

firebase firestore - how to check an error in the catch

I want to catch a firestore error and execute code based on what kind of error it is. (My database is built so that if a document doesn't exist you won't be able to access it, and therefore I have been getting a lot of permission-denied errors for non-existent documents. My solution is to check the error and create a document if I get a permission-denied error.)
This is my current code:
userDataDocRef.collection('emailPreferences').doc('main').get().then(function(doc) {
if (doc.exists) {
var data = doc.data();
console.log(data);
var getGameUpdates = data.getGameUpdates;
switchEmailNewGames.checked = getGameUpdates;
} else {
}
}).catch(function(error) {
console.log(error)
if (error == firebase.firestore.FirestoreError.permission-denied) {
console.log('FIREBASE PERMISSION DENIED ERROR')
}
})
My Question: How do I check what kind of error I got properly? (what I have at the moment doesn't work)
Thank you for your help!
There is no way to determine from the response what specific security rule failed. This is by design, since exposing this information would give malicious users access to important information about how your database is secured. That's why the error message sent to the client is always a generic "permission denied".
In general, depending on error messages like this for regular flow control sounds like an antipattern. If you only want to create a document if it doesn't exist, perform the check in a transaction, which ensures the check and create happen atomically.
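A minimal sketch of that transactional check-then-create, assuming db is your firebase.firestore() instance and that getGameUpdates is the field you want to default:
const docRef = userDataDocRef.collection('emailPreferences').doc('main');

db.runTransaction(function (transaction) {
  return transaction.get(docRef).then(function (doc) {
    if (!doc.exists) {
      // Create the document with default preferences if it isn't there yet
      transaction.set(docRef, { getGameUpdates: false });
    }
  });
}).catch(function (error) {
  console.log('Transaction failed:', error);
});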

Node.js - Serving error 500 pages on unexpected errors

I'm trying to serve 500 pages (some generic HTML that says "500 - internal server error") from my Node.js server to requests that failed to resolve due to developer bugs, but can't find an elegant way to do this.
Let's say we have the following index.js, where a developer innocently made a mistake:
const http = require('http');
const port = 12345;
http.createServer(onHttpRequest).listen(port);
function onHttpRequest(req, res) {
var a = null;
var b = a.c; // this is the mistake
res.end('status: 200');
}
Trying to access property "c" of null throws an error, so "res.end" will never be reached. As a result, the requesting client will eventually get a timeout. Ideally, I want my server to have code that can catch errors like this, and return 500 pages to the requesting client (as well as email an administrator and so on).
Using "try catch" in every single block is out of the question. Most Node.js code is async, and a lot of the code relies on external libraries with questionable error handling. Even if I use try-catch everywhere, there's a chance that an error could happen in an external library that doesn't have a try-catch block inside of it, in a function that runs asynchronously, and thus my server would crash and the client would never get a response.
Shortest example I can provide:
/* my server's index.js */
const http = require('http');
const poorlyTestedNpmModule = require('some-npm-module');
const port = 12345;
http.createServer(onHttpRequest).listen(port);
function onHttpRequest(req, res) {
try {
poorlyTestedNpmModule(null, onResult);
}
catch(err) {
res.end('status: 500');
}
function onResult(err, expectedResult) {
if(err) {
res.end('status: 400');
}
else {
res.end('status: 200');
}
}
}
/* some-npm-module.js */
module.exports = function poorlyTestedNpmModule(options, callback) {
setTimeout(afterSomething, 100);
function afterSomething() {
var someValue = options.key; // here's the problem
callback(null, someValue);
}
}
Here, the server crashes, due to a function call that led to code that asynchronously throws an error. This code is not code that I control or wish to modify; I want my server to be able to handle all those errors on its own.
Now, I could, for instance, just use the global uncaughtException event, i.e.:
process.on('uncaughtException', doSomething);
but then I have no access to the (req, res) arguments, making it impossible to call res.end for the correct res instance; the only way to have access to them is to store them in a higher-scope object for each incoming request, prune them on successful request resolutions, mark existing [req, res] pairs as "potentially errored" whenever an uncaughtException triggers, and serve 500 pages to those requests whenever the count of currently-active requests matches the count of currently-unresolved errors (re-testing that count per thrown uncaught exception and per successful res.end call).
Doing that works, but... it's ugly as hell. It means that request objects have to be leaked to the global scope, and it also means that my router module now has a dependency on the uncaughtException global event, and if any other code overwrites that event, everything breaks, or if I ever want to handle other uncaught exceptions for whatever reason, I'll run into cross dependency hell.
The root cause of this problem is that an unexpected error can happen anywhere, but I want to specifically catch whether an unexpected error originated from a stack trace that began from an incoming http request (and not, for example, from some interval I have running in the background, because then I get an unexpected error but obviously don't want to serve a 500 page to anyone, only email an admin with an error log), and on top of needing to know whether the error originated from an http request, I need to have access to the request+response objects that node server objects provide.
Is there no better way?
[Edit] The topic of this question is role distribution in modules.
i.e., one guy is writing the base code for a server, let's say a "router module". Other people will add new code to the server in the future, handling the branches that are routed to.
The guy that writes the base server code has to write it in a way that it will serve 500 pages if any future code is written incorrectly and throws errors. Help him accomplish his goal.
Answers of the format "make sure all future people that add code never make mistakes and always write code that won't throw uncaught errors" will not be accepted.
First of all, using uncaughtException in Node.js is not safe. If you feel there is no other option in your application, make sure that you exit the process in the 'uncaughtException' handler and restart it using pm2, forever, or some other module. The link below provides more details:
Catch all uncaughtException for Node js app
Coming to error handling: as mentioned, it's easy to miss handling errors with callbacks. To avoid this, we can take advantage of promises in Node.js.
/* my server's index.js */
const http = require('http');
const poorlyTestedNpmModule = require('some-npm-module');
const port = 12345;
http.createServer(onHttpRequest).listen(port);

function onHttpRequest(req, res) {
  try {
    poorlyTestedNpmModule(null)
      .then(result => {
        res.end('status: 200');
      })
      .catch(err => {
        console.log('err is', err);
        res.end('status: 400');
      });
  }
  catch (err) {
    res.end('status: 500');
  }
}
/* some-npm-module.js */
module.exports = function poorlyTestedNpmModule(options) {
  return new Promise((resolve, reject) => {
    setTimeout(afterSomething, 100);
    function afterSomething() {
      try {
        var someValue = options.key; // here's the problem
        resolve(someValue);
      } catch (err) {
        reject(err); // the throw becomes a rejection instead of crashing the process
      }
    }
  });
};
If some of the npm modules you use don't expose a promise-based API, try writing wrappers that convert the callback model to a promise model and use those in your application.
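For error-first callback modules, Node's built-in util.promisify can do that wrapping for you; a small sketch:
const { promisify } = require('util');
const poorlyTestedNpmModule = require('some-npm-module');

// Wraps a (options, callback) style function so failures become promise rejections
const poorlyTestedNpmModuleAsync = promisify(poorlyTestedNpmModule);

async function run() {
  try {
    const result = await poorlyTestedNpmModuleAsync({ key: 'value' });
    console.log('got', result);
  } catch (err) {
    console.log('module failed:', err);
  }
}
run();
Note that this only covers errors the module reports through its callback; an error thrown inside the module's own setTimeout (as in the original example) never reaches the callback, so that module still has to be fixed at the source.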

Node.js and https certificate error handling

To make a long story short:
I'm building a node app which makes requests with https (the secure version of http). Whenever I mis-configure my request options, I get this error:
Node.js Hostname/IP doesn't match certificate's altnames
Great... except for the fact that the entire request code is wrapped with a valid try..catch block (which works just fine.. checked that already). The code is basically something like this:
try
{
  https.request(options, (response) =>
  {
    // no way I'm making it this far with that error
  }).end();
}
catch (ex)
{
  // for some reason.. I'm not able to get here either
}
What I intend to do is to simply handle that error within my try..catch block
After reading some posts I've learned that this behavior is mainly because the tls module automatically processes the request and therefore produces this error - this is a nice piece of information, but it doesn't really help me handle the exception.
Some other suggested to use this option:
rejectUnauthorized: false // BEWARE: security hazard!
But I rather not... so.. I guess my questions are:
Handling an error with a try..catch block should work here, right?
If not - is this behavior by design in Node?
Can I wrap the code in any other way to handle this error?
Just to be clear - I'm not using any third-party lib (so there is no one to blame)
Any kind of help will be appreciated
Thanks
You need to add an 'error' event handler on the request object returned by https.request() to handle that kind of error. For example:
var req = https.request(options, (response) => {
// ...
});
req.on('error', (err) => {
console.log('request error', err);
});
req.end();
See this section in the node.js documentation about errors for more information.
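If you then want to react differently to this specific certificate problem inside that handler, a rough sketch is to check the error's code; the exact value is an assumption and depends on your Node version (older versions may only expose the message text):
req.on('error', (err) => {
  // ERR_TLS_CERT_ALTNAME_INVALID is what recent Node versions use for the altnames mismatch (assumption: check your version)
  if (err.code === 'ERR_TLS_CERT_ALTNAME_INVALID') {
    console.log('certificate does not match the hostname:', err.message);
    // respond to your own client / alert an admin here
  } else {
    console.log('request error', err);
  }
});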
