I'm using the following code to set up a listener for Firebase Database Ref:
export function listenToUserEventsFeed (userId, cb, errorCB) {
  database.ref(`proUserEvents/${userId}`).on('value', (snapshot) => {
    console.log('SNAPSHOT RECEIVED')
    const feed = snapshot.val() || {}
    const sortedIds = Object.keys(feed).sort((a, b) => feed[b].createdAtTimeStamp - feed[a].createdAtTimeStamp)
    cb({feed, sortedIds})
  }, (error) => {
    console.log('SNAPSHOT ERROR: ', error)
  })
}
But the console.log('SNAPSHOT ERROR: ', error) never runs when I test with no internet connection. Am I missing something, or is there something wrong in my code? I essentially want to pass the error down to the errorCB() function.
The error callback will only be called in case of an actual error, such as when your current client has no permission to read the data it is trying to read.
Not having an internet connection is not an error.
If you want to detect whether there is an internet connection, listen for .info/connected.
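For example, a minimal sketch (reusing the database reference from your code; the helper name is just illustrative) of listening to that special location:
// Sketch: detect connectivity via the special .info/connected location.
// Assumes the same `database` instance used in listenToUserEventsFeed.
export function listenToConnectionState (cb) {
  database.ref('.info/connected').on('value', (snapshot) => {
    const isConnected = snapshot.val() === true
    cb(isConnected) // e.g. surface an "offline" banner, or call errorCB yourself
  })
}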
Related:
Firebase Handling disconnect to database
Android Firebase - "onDataChange" And "onCancelled" Not Being Called With No Internet Connection
How to Catch Error When Data is not Sent on Angularfire when adding data to firebase?
I am working on a React app for admin registration and retrieving admin data. The registration works fine, but retrieving admin data crashes my backend. I encounter this error when I call the given endpoint from my React app, yet when I call it from Postman it works fine. In the browser console I can see that my React app sends two calls simultaneously instead of one, and on these calls my app crashes. Can anyone show me how to solve this problem?
Backend: Node.js with the Express framework
Frontend: React
This is the error I am getting:
node:internal/errors:465
ErrorCaptureStackTrace(err);
^
Error [ERR_HTTP_HEADERS_SENT]: Cannot remove headers after they are sent to the client
at new NodeError (node:internal/errors:372:5)
at ServerResponse.removeHeader (node:_http_outgoing:654:11)
at ServerResponse.send (C:\Users\Momentum\Documents\The Technologies\Madudi-App-Api\node_modules\express\lib\response.js:214:10)
at C:\Users\Momentum\Documents\The Technologies\Madudi-App-Api\api\index.js:22:72
at processTicksAndRejections (node:internal/process/task_queues:96:5) {
code: 'ERR_HTTP_HEADERS_SENT'
}
[nodemon] app crashed - waiting for file changes before starting...
This is how I set up my endpoint. I changed the data to a plain string in order to get a simple response, but it still crashes:
const makeHttpRequest = (controller, helper) => {
  const makeRequest = (req, res) => {
    try {
      var data = "Trying response";
      res.status(200).send({ status: true, data: data });
    } catch (error) {
      console.log(`ERROR: ${error.message}`);
      res.status(400).send({ status: false, error: error.message });
    }
  };
  return { makeRequest };
};

const makeApi = ({ router, controller, helper }) => {
  router.get("/test", (req, res) => res.send("Router is Working..."));
  router.get("/admin/get_all_admins", async (req, res) => res.send(await makeHttpRequest(controller, helper).makeRequest(req, res)));
};

module.exports = { makeApi };
And this is the call from my React app:
export default function GetAllUsers() {
  useEffect(() => {
    try {
      const response = axios.get('http://localhost:5000/admin/get_all_admins').then(async (response) => {
        console.log('response ', response)
        return response.data;
      });
    } catch (error) {
      return [];
    }
  }, [])
}
I'm not familiar with this method of responding to requests, but in my opinion the error you are facing happens when you send multiple responses.
This may be due to the asynchronous nature of JavaScript, thereby causing another response to be sent after the function is done.
You should also return the response, so that once it's done the function exits. You can use the example below:
const handler = (req, res) => {
  return res.status(200).json(data);
};
This particular error happens when you try to send more than one response for the same incoming request (something you are not allowed to do).
You are calling res.send() more than once for the same request on your server.
The first call happens in the makeRequest() function.
The second happens in this line of code:
router.get("/admin/get_all_admins", async (req, res) => res.send(await makeHttpRequest(controller, helper).makeRequest(req, res)));
You can't do that. You get ONE response per incoming request. So, either send the response in makeRequest() and don't send it in the caller, or don't send the response in makeRequest() and just return what the response should be and let the caller send it. Pick one model or the other.
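For example, a minimal sketch of the first model, reusing the makeHttpRequest factory from the question; the route only delegates and never calls res.send() itself:
// Sketch of the first model: makeRequest() owns the response, the route only delegates.
const makeApi = ({ router, controller, helper }) => {
  const { makeRequest } = makeHttpRequest(controller, helper);
  // No res.send() here; makeRequest() already sends exactly one response.
  router.get("/admin/get_all_admins", (req, res) => makeRequest(req, res));
};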
I am not familiar with this way of setting up the server; it looks strange to me. However, in router.get("/admin/get_all_admins", ...) you're sending a response and also calling a function, makeHttpRequest, that sends a response itself. Thus you get the error Cannot remove headers after they are sent to the client because you're sending a response twice.
I'm trying to gracefully handle redis errors, in order to bypass the error and do something else instead of crashing my app.
But so far I couldn't catch the exception thrown by ioredis; it bypasses my try/catch and terminates the current process. This behaviour doesn't allow me to gracefully handle the error in order to fetch the data from an alternative system (instead of redis).
import { createLogger } from '@unly/utils-simple-logger';
import Redis from 'ioredis';
import epsagon from './epsagon';

const logger = createLogger({
  label: 'Redis client',
});

/**
 * Creates a redis client
 *
 * @param url Url of the redis client, must contain the port number and be of the form "localhost:6379"
 * @param password Password of the redis client
 * @param maxRetriesPerRequest By default, all pending commands will be flushed with an error every 20 retry attempts.
 *          That makes sure commands won't wait forever when the connection is down.
 *          Set to null to disable this behavior, and every command will wait forever until the connection is alive again.
 * @return {Redis}
 */
export const getClient = (url = process.env.REDIS_URL, password = process.env.REDIS_PASSWORD, maxRetriesPerRequest = 20) => {
  const client = new Redis(`redis://${url}`, {
    password,
    showFriendlyErrorStack: true, // See https://github.com/luin/ioredis#error-handling
    lazyConnect: true, // XXX Don't attempt to connect when initializing the client, in order to properly handle connection failure on a use-case basis
    maxRetriesPerRequest,
  });

  client.on('connect', function () {
    logger.info('Connected to redis instance');
  });

  client.on('ready', function () {
    logger.info('Redis instance is ready (data loaded from disk)');
  });

  // Handles redis connection temporarily going down without app crashing
  // If an error is handled here, then redis will attempt to retry the request based on maxRetriesPerRequest
  client.on('error', function (e) {
    logger.error(`Error connecting to redis: "${e}"`);
    epsagon.setError(e);

    if (e.message === 'ERR invalid password') {
      logger.error(`Fatal error occurred "${e.message}". Stopping server.`);
      throw e; // Fatal error, don't attempt to fix
    }
  });

  return client;
};
I'm simulating a bad password/url in order to see how redis reacts when misconfigured. I've set lazyConnect to true in order to handle connection errors on the caller's side.
But, when I define the url as localhoste:6379 (instead of localhost:6379), I get the following error:
server 2019-08-10T19:44:00.926Z [Redis client] error: Error connecting to redis: "Error: getaddrinfo ENOTFOUND localhoste localhoste:6379"
(x 20)
server 2019-08-10T19:44:11.450Z [Read cache] error: Reached the max retries per request limit (which is 20). Refer to "maxRetriesPerRequest" option for details.
Here is my code:
// Fetch a potential query result for the given query, if it exists in the cache already
let cachedItem;

try {
  cachedItem = await redisClient.get(queryString); // This emits an error on the redis client, because it fails to connect (that's intended, to test the behaviour)
} catch (e) {
  logger.error(e); // It never goes there, as the error isn't "thrown", but rather "emitted" and handled by redis its own way
  epsagon.setError(e);
}

// If the query is cached, return the results from the cache
if (cachedItem) {
  // return item
} else {
  // fetch from another endpoint (fallback backup)
}
My understanding is that redis errors are handled through client.emit('error', error), which is asynchronous; the callee doesn't throw an error, so the caller cannot handle errors using try/catch.
Should redis errors be handled in a very particular way? Isn't it possible to catch them as we usually do with most errors?
Also, it seems redis retries 20 times to connect (by default) before throwing a fatal exception that stops the process. But I'd like to handle any exception and deal with it in my own way.
I've tested the redis client behaviour by providing bad connection data, which makes it impossible to connect as there is no redis instance available at that url. My goal is to ultimately catch all kinds of redis errors and handle them gracefully.
Connection errors are reported as an error event on the client Redis object.
According to the "Auto-reconnect" section of the docs, ioredis will automatically try to reconnect when the connection to Redis is lost (or, presumably, unable to be established in the first place). Only after maxRetriesPerRequest attempts will the pending commands "be flushed with an error", i.e. get to the catch here:
try {
  cachedItem = await redisClient.get(queryString); // This emits an error on the redis client, because it fails to connect (that's intended, to test the behaviour)
} catch (e) {
  logger.error(e); // It never goes there, as the error isn't "thrown", but rather "emitted" and handled by redis its own way
  epsagon.setError(e);
}
Since you stop your program on the first error:
client.on('error', function (e) {
  // ...
  if (e.message === 'ERR invalid password') {
    logger.error(`Fatal error occurred "${e.message}". Stopping server.`);
    throw e; // Fatal error, don't attempt to fix
...the retries and the subsequent "flushing with an error" never get the chance to run.
Ignore the errors in client.on('error', ...) and you should get the error returned from await redisClient.get().
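For example, a minimal sketch (reusing the client, logger and epsagon names from the question) of an error handler that reports but never throws, so the failed command rejects into the caller's try/catch:
// Sketch: log and report, but don't throw from the 'error' event handler.
// After maxRetriesPerRequest attempts, the pending command is rejected,
// so `await redisClient.get(...)` lands in the caller's catch block.
client.on('error', function (e) {
  logger.error(`Error connecting to redis: "${e}"`);
  epsagon.setError(e);
  // No `throw e` here: throwing from an event handler crashes the process
  // before the per-request retries can flush the command with an error.
});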
Here is what my team has done with IORedis in a TypeScript project:
let redis;

const redisConfig: Redis.RedisOptions = {
  port: parseInt(process.env.REDIS_PORT, 10),
  host: process.env.REDIS_HOST,
  autoResubscribe: false,
  lazyConnect: true,
  maxRetriesPerRequest: 0, // <-- this seems to prevent retries and allow for try/catch
};

try {
  redis = new Redis(redisConfig);
  const infoString = await redis.info();
  console.log(infoString);
} catch (err) {
  console.log(chalk.red('Redis Connection Failure '.padEnd(80, 'X')));
  console.log(err);
  console.log(chalk.red(' Redis Connection Failure'.padStart(80, 'X')));
  // do nothing
} finally {
  await redis.disconnect();
}
I am using Node.js, Express, and node-mysql2 for my application. I want the timezone for each connection to be UTC so that my time_created and time_modified columns (MySQL: ON UPDATE CURRENT_TIMESTAMP) will store timestamps in UTC only. I don't have permission to set the timezone of the MySQL server, so I need to do this in my application.
I am using a connection pool and adding an event listener for the connection event which sets the timezone for that connection to UTC.
pool.on('connection', (conn) => {
  conn.query("SET time_zone='+00:00';", (error) => {
    if (error) {
      throw error;
    }
  });
});
My route definition is similar to this:
router.get('/route', async (req, res) => {
  try {
    const [rows] = await pool.execute('SELECT * FROM student', []);
    return res.status(200).send('Success');
  } catch (e) {
    return res.status(500).send('Failure');
  }
});
If this query "SET time_zone='+00:00'" in event listener fails, my node server get crashed with stacktrace on console. I want to catch this kind of errors in my route or anywhere so that I can send 500 response to client. Can you tell me what is good approach to handle this kind of exception thrown from event listeners in express route?
My frontend, using apollo-client, throws an exception when the backend returns an error after a request.
When the node server receives a request, I check the validity of the request's token using koa middleware. If the token is valid, the request is forwarded to the next middleware. If the token is invalid, I want to return a 401 access denied error to the client. To do this, I followed Koa's error documentation located here.
The code for the error handling middleware I wrote:
function userIdentifier() {
  return async (ctx, next) => {
    const token = ctx.request.headers.authorization

    try {
      const payload = checkToken(token)
      ctx.user = {
        id: payload.userId,
        exp: payload.exp,
        iat: payload.iat,
      }
    } catch (error) {
      ctx.user = undefined
      ctx.throw(401, "access_denied")
      // throw new Error("access_denied")
    }

    await next()
  }
}
This seemingly works on the backend, but not on the frontend. When the frontend receives this error, a JavaScript runtime error occurs. I am not sure what causes this.
Note, the unexpected "a" is the same "a" found in ctx.throw(401, "access_denied"). If it were instead ctx.throw(401, "x") the frontend shows "unexpected token x" instead.
The frontend code where the errors happens:
In an attempt to fix this, I followed Apollo's error handling documentation and used apollo-link-error.
const errorLink = onError(props => {
  const { graphQLErrors, networkError } = props
  console.log("ON ERROR", props)

  if (graphQLErrors)
    graphQLErrors.map(({ message, locations, path }) =>
      console.log(
        `[GraphQL error]: Message: ${message}, Location: ${locations}, Path: ${path}`
      )
    )

  if (networkError) console.log(`[Network error]: ${networkError}`)
})
Then I combine all links and create the Apollo client like this:
const link = ApolloLink.from([errorLink, authLink, httpLink])

export const client = new ApolloClient({
  link,
  cache: new InMemoryCache(),
})
The output of the debugging log in apollo-link-error is as follows:
Related Documents
Someone seems to be having an identical error, but a solution was not listed.
I found that the errors were handled correctly on the frontend when I began using this library on the backend: https://github.com/jeffijoe/koa-respond
Using just ctx.unauthenticated()
But I would still like to know how to return JSON/object-based errors with Koa without a plugin helping.
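For reference, a minimal sketch in plain Koa (no plugin), reusing the question's checkToken helper: set ctx.status and ctx.body to a JSON payload instead of throwing with a bare string, which is presumably what the frontend fails to parse:
// Sketch: return a JSON error body with plain Koa, no plugin.
// A JSON body is something the client can parse, unlike the bare
// "access_denied" text behind the "unexpected token a" error.
function userIdentifier() {
  return async (ctx, next) => {
    const token = ctx.request.headers.authorization
    try {
      const payload = checkToken(token)
      ctx.user = { id: payload.userId, exp: payload.exp, iat: payload.iat }
    } catch (error) {
      ctx.status = 401
      ctx.body = { errors: [{ message: "access_denied" }] }
      return // don't call next(); the response is already set
    }
    await next()
  }
}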
I'm trying to show an alert window saying there's an error when trying to post a message while offline. But the catch doesn't ever seem to run. Maybe it only works in other cases?
Here's my code:
return (dispatch) => {
  dispatch({
    type: POST_TO_DB,
  });

  firebase.database().ref(locationInDB).push(object)
    .then((data) => {
      dispatch({
        type: POST_TRADE_TO_DB_SUCCESS,
      }); // success
    })
    .catch((err) => {
      console.log("failed to post...")
      dispatch({
        type: POST_TRADE_TO_DB_FAILED,
      }); // failed
    });
};
Is there an alternative? Or am I doing something wrong?
When there is no network connection, the Firebase client will keep the pending write in memory until the network connection is restored, at which point it will complete the write.
The catch() clause is triggered if the write is rejected by the server (for example by security rules), not when it simply can't be sent yet.
Also see:
To detect if the client is connected to the Firebase backend, see Detect if Firebase connection is lost/regained
Firebase synchronisation of locally-modified data: handling errors & global status
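For example, a rough sketch of the action creator (an assumption based on the code above, not a verified fix) that checks the special .info/connected location once before pushing, so the failure action can be dispatched immediately while offline:
// Sketch: check connectivity first, then push. Assumes the same action
// types and the locationInDB/object variables from the question.
return (dispatch) => {
  dispatch({ type: POST_TO_DB });

  firebase.database().ref('.info/connected').once('value', (snap) => {
    if (snap.val() !== true) {
      // Offline: the push would only be queued locally, so fail fast here.
      dispatch({ type: POST_TRADE_TO_DB_FAILED });
      return;
    }

    firebase.database().ref(locationInDB).push(object)
      .then(() => dispatch({ type: POST_TRADE_TO_DB_SUCCESS }))
      .catch(() => dispatch({ type: POST_TRADE_TO_DB_FAILED }));
  });
};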