I am using the node module pg in my application, and I want to make sure it can properly handle connection and query errors.
The first problem: I want to make sure the application can properly recover when Postgres is unavailable.
I found there is an error event, so I can detect connection errors:
import pg from 'pg'

// Promise-based sleep helper used by the reconnect logic below.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

let pgClient = null

async function postgresConnect() {
  pgClient = new pg.Client(process.env.CONNECTION_STRING)
  pgClient.on('error', async (e) => {
    console.log('Reconnecting')
    await sleep(5000)
    await postgresConnect()
  })
  await pgClient.connect() // the missing await here left connection failures unhandled
}
I don't like using a global here, and I want the sleep delay to follow a small exponential backoff. I also noticed that "Reconnecting" fires twice immediately, then waits five seconds, and I am not sure why it fired the first time without any waiting.
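For the backoff, something like this is what I have in mind — a rough sketch where the initial delay, multiplier, and 30-second cap are arbitrary values I picked:

let attempt = 0

const nextDelay = () => Math.min(1000 * 2 ** attempt++, 30000) // 1s, 2s, 4s ... capped at 30s

async function postgresConnectWithBackoff() {
  pgClient = new pg.Client(process.env.CONNECTION_STRING)
  pgClient.on('error', async () => {
    const delay = nextDelay()
    console.log(`Reconnecting in ${delay}ms`)
    await sleep(delay)
    await postgresConnectWithBackoff()
  })
  try {
    await pgClient.connect()
    attempt = 0 // reset the backoff once a connection succeeds
  } catch (e) {
    const delay = nextDelay()
    console.log(`Connect failed, retrying in ${delay}ms`)
    await sleep(delay)
    await postgresConnectWithBackoff()
  }
}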
I also have to make sure queries actually execute. I have something like this I was trying out:
async function getTimestamp() {
  try {
    const res = await pgClient.query('select current_timestamp;')
    return res.rows[0].current_timestamp
  } catch (error) {
    console.log('Retrying Query')
    await sleep(1000)
    return getTimestamp()
  }
}
This seems to work, but I haven't tested it enough to be sure it guarantees the query is executed or keeps trying. I should look for specific errors, loop forever only on certain ones, and fail on the others. I need to do more research into which errors are thrown, and I need backoff on this delay too.
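Something along these lines is what I'm considering — note that the set of "retryable" codes below is my own guess that I still need to verify:

// Guessed classification: retry on connection-level failures, rethrow everything else.
function isRetryable(error) {
  return (
    ['ECONNREFUSED', 'ECONNRESET', 'ETIMEDOUT'].includes(error.code) || // socket-level errors
    (typeof error.code === 'string' && error.code.startsWith('08')) // SQLSTATE class 08: connection exception
  )
}

async function getTimestampWithRetry(attempt = 0) {
  try {
    const res = await pgClient.query('select current_timestamp;')
    return res.rows[0].current_timestamp
  } catch (error) {
    if (!isRetryable(error)) throw error // e.g. a syntax error will never succeed on retry
    const delay = Math.min(1000 * 2 ** attempt, 30000)
    console.log(`Retrying query in ${delay}ms`)
    await sleep(delay)
    return getTimestampWithRetry(attempt + 1)
  }
}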
It all "seems" to work, I don't want to fail victim to the Dunning-Kruger effect. I need to ensure this process can handle all sorts of situations and recover.
My application allows users to do searches and get suggestions as they type in the search box. Each time the user enters a character, I use fetch to get the suggestions from an API. The thing is, if the user completes the search quickly, they can get the result before the suggestions are fetched. In that case, I want to cancel the fetch request.
I used to have the same application in React and could easily cancel the request using AbortController, but that isn't working in Next.js.
I did some research, and I think the problem happens because Next doesn't have access to AbortController when it tries to generate the pages.
I also had this problem when I tried to use window.innerWidth, because it seems Next doesn't have access to window either.
The solution I found was to use useEffect. It worked perfectly when I used it with window:
const [size, setSize] = useState(0)

useEffect(() => {
  setSize(window.innerWidth)
}, [])
But it isn't working when I use AbortController. First I did it like this:
let suggestionsController;

useEffect(() => {
  suggestionsController = new AbortController();
}, [])
But when I tried to use 'suggestionsController', it would always be undefined.
So I tried to do the same thing using 'useRef'.
const suggestionsControllerRef = useRef(null)

useEffect(() => {
  suggestionsControllerRef.current = new AbortController();
}, [])
This is how I'm fetching the suggestions:
async function fetchSuggestions(input) {
  try {
    const response = await fetch(`url/${input}`, { signal: suggestionsControllerRef.current.signal })
    const result = await response.json()
    setSuggestionsList(result)
  } catch (e) {
    console.log(e)
  }
}
And this is how I'm aborting the request:
function handleSearch(word) {
  suggestionsControllerRef.current.abort()
  router.push(`/dictionary/${word}`)
  setShowSuggestions(false)
}
Everything works perfectly the first time. But if the user tries to do another search, the fetchSuggestions function stops working and I get this error in the console: DOMException: Failed to execute 'fetch' on 'Window': The user aborted a request.
Does anyone know the correct way to use AbortController in Next.js?
The solution I found was to create a new instance of AbortController each time the user does a search. While the suggestions were being displayed, showSuggestions was true, but when handleSearch was called, showSuggestions was set to false. So I just added it as a dependency to useEffect:
useEffect(() => {
  const obj = new AbortController();
  setSuggestionController(obj)
}, [showSuggestions])
I also switched from useRef to useState, but I'm not sure that was necessary because I didn't test this solution with useRef.
I don't know if this is the best way of using AbortController in Next.js, but my application is working as expected now.
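For comparison, an alternative I considered but haven't tested would be to create a fresh controller inside fetchSuggestions itself, aborting the previous one first:

async function fetchSuggestions(input) {
  suggestionsControllerRef.current?.abort() // cancel any in-flight request
  const controller = new AbortController()
  suggestionsControllerRef.current = controller
  try {
    const response = await fetch(`url/${input}`, { signal: controller.signal })
    const result = await response.json()
    setSuggestionsList(result)
  } catch (e) {
    console.log(e) // an intentional abort also lands here as an AbortError
  }
}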
I suppose you can try an abort controller to cancel your requests if the user stops typing, but this is not the standard way of solving this common problem.
You want to "debounce" the callback that runs when the user types. Debouncing is a strategy that essentially captures the function calls and "waits" a certain amount of time before executing a function. For example, in this case you might want to debounce your search function so that it will only run ONCE, 500 ms after the user has stopped typing, rather than running on every single keypress.
Look into debouncing libraries or write a debounce function yourself, but fair warning it can be pretty tricky at first!
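A bare-bones version might look like the sketch below; real libraries (e.g. lodash.debounce) also handle leading/trailing calls and cancellation:

function debounce(fn, wait) {
  let timer = null
  return (...args) => {
    clearTimeout(timer) // drop the previously scheduled call
    timer = setTimeout(() => fn(...args), wait) // reschedule with the latest arguments
  }
}

// Usage: runs fetchSuggestions once, 500 ms after the last keypress.
const debouncedFetch = debounce(fetchSuggestions, 500)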
Edit: As of February 2021, this problem has been fixed.
I have been dealing with a timeout error using the Node.js Firebase Functions emulator. My function was working, but now the code waits for the timeout regardless of what the code does. I tried copying the example on the quickstart page, and the same error occurs.
I can put console statements in the code and see no output. I have another function that works properly when a document is created. The response errors out, but the function continues executing for the duration of the timeout.
const functions = require("firebase-functions");
const admin = require("firebase-admin");

admin.initializeApp();

exports.addMessage = functions.https.onRequest(async (req, res) => {
  // Grab the text parameter.
  const original = req.query.text;
  // Push the new message into Firestore using the Firebase Admin SDK.
  const writeResult = await admin.firestore().collection('messages').add({original: original});
  // Send back a message that we've successfully written the message
  res.json({result: `Message with ID: ${writeResult.id} added.`});
});
Error: Function timed out.
at Timeout._onTimeout (/usr/local/lib/node_modules/firebase-tools/lib/emulator/functionsEmulatorRuntime.js:640:19)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7)
i functions: Beginning execution of "createTokenForEvents"
⚠ functions: Your function timed out after ~60s. To configure this timeout, see
https://firebase.google.com/docs/functions/manage-functions#set_timeout_and_memory_allocation.
/usr/local/lib/node_modules/firebase-tools/lib/emulator/functionsEmulatorRuntime.js:640
throw new Error("Function timed out.");
^
Edit: I have come to the conclusion that this function does work, but it always errors out. I was writing another function when I discovered this error, and even when I switched to this simple case, the error was present. However, in this case the function does create a document, but whatever keeps it running for the entire duration may also be hiding log statements. My question has changed: why does the function execute for the entire duration even after the code completes?
https://firebase.google.com/docs/functions/terminate-functions
As stated in the documentation, one of the principles of writing good functions is:
Terminate HTTP functions with res.redirect(), res.send(), or
res.end().
Since my other functions were synchronous, writing a return statement was good enough. However, I solved the error by explicitly adding a terminating response statement, res.end(). I also experimented with returning a promise, but that did not solve the problem either.
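For illustration, a version of the quickstart handler with explicit termination — I'm using res.write here so that res.end() is the single terminating call; treat this as a sketch of the idea rather than my exact code:

exports.addMessage = functions.https.onRequest(async (req, res) => {
  const writeResult = await admin.firestore()
    .collection('messages')
    .add({ original: req.query.text });
  res.write(`Message with ID: ${writeResult.id} added.`);
  res.end(); // explicit termination, per the documentation quoted above
});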
Edit: This problem continues to occur in different scenarios. I literally cannot get the code to run when using a query: https://github.com/firebase/firebase-functions/issues/847. As of January 2021, the firebase-functions repo seems to be unmaintained as far as fixing community issues goes. A user got a response from the Firebase team saying they have been trying to fix the logging issues, but they would not commit to a date by which the problem would be resolved. I would just not bother using the Firebase Functions emulator; deploying the function works fine.
If you are calling the function with query parameters, add a slash before the ?:
http://localhost:5001/.../us-central1/helloWorld/?foo=bar&test=string (works)
http://localhost:5001/.../us-central1/helloWorld?foo=bar&test=string (times out; no slash before the ?)
See https://github.com/firebase/firebase-tools/issues/1314
I am having a problem where I make a bulk insert of multiple elements into a table and then immediately fetch the last X elements that were just inserted, but when I do that it seems the elements have not yet been fully inserted, even though I am using async/await to wait for the async operations.
I am making the bulk insert like this:
const createElements = elementsArray => {
  return knex
    .insert(elementsArray)
    .into('elements');
};
Then I have a method to immediately access those X elements that were inserted:
const getLastXInsertedElements = (userId, length, columns = ['*']) => {
  return knex.select(...columns)
    .from('elements')
    .where('userId', userId)
    .orderBy('createdAt', 'desc')
    .limit(length);
};
And finally, after getting those elements, I take their ids and save them into another table that makes use of the element_id of those recently added elements.
So I have something like:
// A simple helper function that handles promises easily
const handleResponse = (promise, message) => {
  return promise
    .then(data => [data, undefined])
    .catch(error => {
      if (message) {
        throw new Error(`${message}: ${error}`);
      } else {
        // message is undefined on this branch, so don't interpolate it into the string
        return Promise.resolve([undefined, `${error}`]);
      }
    });
};
async function service() {
  await handleResponse(createElements(list), 'error text'); // insert X elements from the list
  const [elements] = await handleResponse(getLastXInsertedElements(userId, list.length), 'error text'); // get the last X elements that were just added
  await handleResponse(useElementsIdAsForeignKey(listMakingUseOfElementsIds), 'error text'); // uses the ids of the elements from the previous query, but we are not getting them properly for some reason
}
So, the problem:
Sometimes when I execute getLastXInsertedElements, it seems the elements are not yet finished inserting, even though I am waiting with async/await. Any ideas why this is? Maybe something related to bulk inserts that I don't know of? An important note: all the elements are always properly inserted into the table at some point; it just seems this point is not respected by the promise (the async operation that returns success for knex.insert).
Update 1:
I have tried putting the select after the insert inside a setTimeout of 5 seconds for testing purposes, but the problem seems to persist. That is really weird; one would think 5 seconds is enough between the insert and the select to get all the data.
I would like all X elements that were just inserted to be consistently accessible in the select query from getLastXInsertedElements.
Which DB are you using, and how big a list of data are you inserting? You could also test whether running the insert and getLastXInsertedElements inside a transaction hides your problem.
Doing those operations in a transaction also forces knex to use the same connection for both queries, so it might help track down where this is coming from.
Another trick to force the queries onto the same connection would be to set the pool's min and max configuration to 1 (just to test whether parallelism is indeed the problem here).
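For reference, the transaction version might look roughly like this — a sketch reusing the table and column names from the question:

await knex.transaction(async (trx) => {
  // Both queries run on the same connection inside the transaction.
  await trx.insert(elementsArray).into('elements');
  const elements = await trx
    .select('*')
    .from('elements')
    .where('userId', userId)
    .orderBy('createdAt', 'desc')
    .limit(elementsArray.length);
  // ...use the elements while still inside the transaction
});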
Also, since you have not provided complete reproduction code, I suspect there is something else in the mix causing this odd behavior. Usually (but not always) these weird cases that shouldn't happen are caused by a user error elsewhere in how the library is used.
I'll update the answer if more information is provided. Complete reproduction code would be the most important piece of information.
I am not 100% sure, but I guess the knex functions do not return a promise by default (but a builder object for the query). That builder has a function called then that transforms the builder into a promise. So you may try adding a call to that:
...
.limit(length)
.then(x => x); // required to transform to a promise
Maybe try debugging the actual type of the returned value. It might happen that this is still not a promise; in that case you may not be able to use async/await and will need to use the .then() syntax, because it might not be a real JS promise but knex's own implementation.
Also see this issue about standard JS promises in knex: https://github.com/knex/knex/issues/1588
In theory, it should work.
You say "it seems"... a more clear problem explanation could be helpful.
I can argue the problem is that you have elements.length = list.length - n where n > 0; in your code there are no details about userId property in your list; a possible source of the problem could be that some elements in your list has a no properly set userId property.
I need to know whether it is better to use an asynchronous createConnection or not.
Does this change anything in loading speed?
I'm using Express, ReactJS, and promise-mysql.
What should I use?
This:
async connect () {
  try {
    const conn = await db.createConnection(this.config);
    this.conn = conn;
  } catch (error) {
    console.log(error)
  }
}
Or this:
connect () {
  return db.createConnection(this.config).then(conn => {
    this.conn = conn
  })
}
Well, the error handling is completely different in your two examples. In the first, you log the error and allow the returned promise to then be resolved. In the second, a connection error will reject the returned promise. So, that's a major structural difference.
If you changed the first one to this:
async connect () {
  this.conn = await db.createConnection(this.config);
}
Then, that would be structurally the same as your second example:
connect () {
  return db.createConnection(this.config).then(conn => {
    this.conn = conn;
  });
}
Now, if you compare these two, they have the same outcome (except in an edge case where db.createConnection() throws synchronously, which it hopefully doesn't do).
So, if you re-asked your question based on these two versions that have the same outcome, the answer is that it doesn't really make a difference.
If there were a measurable difference in execution speed, it would be so small as to be very unlikely to be meaningful, and whatever difference there was would depend only on the particular JS engine implementation and likely would not stay constant as the JS engine matures.
So, it's really just a matter of coding style and which you prefer. The await version is less typing and fewer lines of code (as it often is). I personally tend not to use async/await unless I have more than one asynchronous operation to sequence, but that's perhaps just inertia from coding with .then() for a while before await came along.
I have quite a complex Express route in my Node.js server that makes many modifications to the database via Mongoose.
My goal is to implement a mechanism that reverts all changes when any error occurs. My idea was to implement undo functions in the catch block.
But this is quite ugly, as I have to know what the previous values were, and what if an error occurs in the catch block? It's especially difficult to revert changes when an error occurred during a Promise.all(array.map( /* ... */ )).
My route looks akin to this:
module.exports = async (req, res) => { // async is required to use await below
  let arr1, var2, var3
  try {
    const body = req.body
    arr1 = await fetchVar1(body)
    const _data = await Promise.all([
      Promise.all(
        arr1.map(async x => {
          const y = await fetchSomething(x)
          return doSomething(y)
        })
      ),
      doSomething3(arr1),
    ])
    var2 = _data[0] // results of the mapped fetchSomething/doSomething calls
    var3 = _data[1] // result of doSomething3 (the original indices were off by one)
    return res.json({ arr1, var2, var3 })
  } catch (err) {
    /**
     * If an error occurs I have to undo
     * everything that has been done
     * in the try block
     */
  }
}
Preferably I would like to implement something that "batches" all changes and "commits" the changes if no errors occurred.
What you are looking for is transactions: https://mongoosejs.com/docs/transactions.html
Manually undoing things after doing them won't protect you from every issue, so you should not rely on it. For example, exactly as you wrote: what happens if there is a crash after a partial write (some data is written, some is not), and then another crash during your "rollback" code, which does not clean everything up? If your code depends on your data being absolutely clean, then you have a problem. Your code should either handle partial data correctly, or you must have some way to guarantee that your data is perfectly good at all times.
Transactions are the way to go, because everything is committed at once only if everything works.
What you’re looking for is called Transactions.
Transactions are new in MongoDB 4.0 and Mongoose 5.2.0. Transactions let you execute multiple operations in isolation and potentially undo all the operations if one of them fails. This guide will get you started using transactions with Mongoose.
For more information check the link below:
https://mongoosejs.com/docs/transactions.html
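A minimal sketch of that pattern — the model names and payload shape below are placeholders, not taken from the question:

const mongoose = require('mongoose');

async function updateAtomically(payload) {
  const session = await mongoose.startSession();
  try {
    session.startTransaction();
    // Every write below is bound to the session; they commit or abort together.
    await ModelA.create([{ value: payload.a }], { session });
    await ModelB.updateOne({ _id: payload.id }, { value: payload.b }, { session });
    await session.commitTransaction(); // all writes become visible atomically
  } catch (err) {
    await session.abortTransaction(); // reverts every write made in this session
    throw err;
  } finally {
    session.endSession();
  }
}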