There are various posts dealing with the general issue of timeouts when using http.get(), but none of them seems to address the question of how to deal with a timeout that occurs during the stream itself, after a successful response has already been received.
Take this code for example. It sends a request to a server that responds on time but creates an artificial timeout during the stream:
(async () => {
  // Send a request to a dummy server that creates a timeout IN THE MIDDLE OF THE STREAM.
  const request = http.get('http://localhost/timeout/', async (res) => {
    const write = fs.createWriteStream('.text.txt');
    try {
      await pipelinePromisified(res, write);
      console.log('Everything went fine');
      // Since the timeout error during the stream is not caught by pipeline,
      // the promise gets resolved..
    } catch (error) {
      // The error is NOT caught by pipeline!
      console.log('Error from pipeline', error);
    }
  }).on('error', (e) => {
    // Error is caught here
    console.log('error from request on error');
  });
  request.setTimeout(4000, () => {
    request.destroy(new Error('request timed out'));
    // This causes the piping of the streams to stop in the middle (resulting in a partial file),
    // but pipeline doesn't treat this as an error.
  });
})();
Note the key issue: the timeout during the stream is recognized, the request is destroyed, and the IncomingMessage (the response) stops pumping data, but pipeline doesn't recognize it as an error.
The outcome is that the client code is unaware that the file was only partially downloaded, since no error is thrown.
How should this situation be handled? In my testing, calling response.emit('error') seems to solve it, but Node's docs clearly state not to do this.
Any help would be greatly appreciated.
Update: It seems that on Node 12 (I have 10 and 12 installed via nvm) an error is caught by pipeline, but not on Node 10.
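One pattern worth trying, which avoids emitting 'error' manually: destroy the response stream itself with the error. A stream destroyed with an error emits 'error', which pipeline does propagate. A minimal sketch reusing pipelinePromisified and fs from the snippet above (an assumption to verify on Node 10, not a confirmed fix):

const request = http.get('http://localhost/timeout/', async (res) => {
  // Register the timeout once we have the response, so the response stream
  // itself can be destroyed with the error; pipeline should then reject.
  request.setTimeout(4000, () => {
    res.destroy(new Error('request timed out'));
  });
  try {
    await pipelinePromisified(res, fs.createWriteStream('.text.txt'));
    console.log('Everything went fine');
  } catch (error) {
    console.log('Error from pipeline', error); // the partial download is now detected
  }
}).on('error', (e) => {
  console.log('error from request on error');
});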
Related: https://onoumenon.gitbook.io/wiki/programming/tips/rtmp
buildPlayer() {
  if (this.player || !this.props.stream) {
    return;
  }
  const { id } = this.props.match.params;
  this.player = flv.createPlayer({
    type: "flv",
    url: `http://localhost:8000/live/${id}.flv`
  });
  this.player.attachMediaElement(this.videoRef.current);
  this.player.load();
}
I am trying to use this code to stream videos to users, but if there is no stream I get an error and the app crashes.
Unhandled Rejection (AbortError): The fetching process for the media resource was aborted by the user agent at the user's request.
The error is thrown when I try to execute this:
player.attachMediaElement(videoRef.current);
Is there a way to check if this line is going to throw an error?
This type of error (Unhandled Rejection) tells us you have a rejected Promise that is not being properly handled.
Promises are asynchronous, and the error will happen asynchronously as well.
According to the library documentation (here), the only method that returns a Promise is the play method.
So I guess that you are probably invoking the play method in a different part of your code, and you need to catch the error there, as in:
flvPlayer.play().catch((e) => {
  /* error handler */
});
I'm using Firebase Firestore with JavaScript (web) in a Progressive Web App. I ran into this error:
INTERNAL ASSERTION FAILED: Got result for empty write pipeline
Because Firebase runs asynchronously with XHR requests, it was difficult to determine the exact source of the error - it seemed like any onSnapshot, set or update was throwing this error for me.
And after that first error came a flurry of other errors:
INTERNAL ASSERTION FAILED: AsyncQueue is already failed: Error: FIRESTORE (5.3.0) INTERNAL ASSERTION FAILED: Got result for empty write pipeline
I thought my operation was pretty normal - just using the API's set(), update(), and onSnapshot() functions when it happened.
It's not a mission-critical error - the code runs fine, but I'm hit with a couple thousand errors when I open the debugger, so it's prohibitive in that regard.
For my PWA I was using a cache-first, web-reupdate model which returns cachedResponse but also fetch()es the response and caches the fetched response.
Anyone have any insights?
It was the PWA! Using the PWA, I was catching all GET requests, including Firebase's own GETs. Filtering to ensure CORS requests don't return from cache fixed the problem.
To solve this, I added this code to my PWA:
self.addEventListener("fetch", event => {
  if (event.request.method == "GET") {
    event.respondWith(
      (async function () {
        // `cache` and `updateCache` are defined elsewhere in the full service-worker file.
        const cachedResponse = await cache.match(event.request, {
          ignoreSearch: true
        });
        // Return the cached response if we have one, otherwise return the network response.
        // Note: Request objects expose `mode`, not `type`, so check `mode` for "cors".
        if (cachedResponse && event.request.mode !== "cors") {
          // AVOID SERVING CORS REQUESTS (E.G. FIREBASE'S) FROM CACHE
          updateCache(event);
          return cachedResponse;
        } else return await updateCache(event);
      })()
    );
  } else {
    event.respondWith(fetch(event.request));
  }
});
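The snippet uses an updateCache helper (and a cache handle) that isn't shown; its shape is presumably something like this hypothetical sketch - the real implementation lives in the repo linked below:

// Hypothetical helper, inferred from its usage above (names are assumptions):
async function updateCache(event) {
  // Fetch from the network and store a copy for future cache-first hits.
  const networkResponse = await fetch(event.request);
  const cache = await caches.open("app-cache"); // assumed cache name
  cache.put(event.request, networkResponse.clone());
  return networkResponse;
}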
If you're new to the PWA space, want a jump start on ANY PWA project, or just want to 'share notes', the repo with the full, comprehensive PWA file is here: https://github.com/acenturyandabit/genUI/blob/master/Javascript/pwa.js
I've personally put a lot of time into this so I hope it helps :)
I'm trying to serve 500 pages (some generic HTML that says "500 - internal server error") from my Node.js server for requests that failed to resolve due to developer bugs, but I can't find an elegant way to do this.
Let's say we have the following index.js, where a developer innocently made a mistake:
const http = require('http');
const port = 12345;

http.createServer(onHttpRequest).listen(port);

function onHttpRequest(req, res) {
  var a = null;
  var b = a.c; // this is the mistake
  res.end('status: 200');
}
Trying to access property "c" of null throws an error, so "res.end" will never be reached. As a result, the requesting client will eventually get a timeout. Ideally, I want my server to have code that can catch errors like this and return 500 pages to the requesting client (as well as email an administrator, and so on).
Using "try catch" in every single block is out of the question. Most Node.js code is async, and a lot of the code relies on external libraries with questionable error handling. Even if I use try-catch everywhere, there's a chance that an error would happen in an external library that didn't have a try-catch block inside of it, in a function that happens asynchronously, and thus my server will crash and the client would never get a response.
Shortest example I can provide:
/* my server's index.js */
const http = require('http');
const poorlyTestedNpmModule = require('some-npm-module');
const port = 12345;

http.createServer(onHttpRequest).listen(port);

function onHttpRequest(req, res) {
  try {
    poorlyTestedNpmModule(null, onResult);
  }
  catch (err) {
    res.end('status: 500');
  }

  function onResult(err, expectedResult) {
    if (err) {
      res.end('status: 400');
    }
    else {
      res.end('status: 200');
    }
  }
}
/* some-npm-module.js */
module.exports = function poorlyTestedNpmModule(options, callback) {
  setTimeout(afterSomething, 100);

  function afterSomething() {
    var someValue = options.key; // here's the problem
    callback(null, someValue);
  }
};
Here, the server crashes due to a function call that led to code that asynchronously throws an error. This is not code that I control or wish to modify; I want my server to be able to handle all such errors on its own.
Now, I could, for instance, just use the global uncaughtException event, i.e.:
process.on('uncaughtException', doSomething);
but then I have no access to the (req, res) arguments, making it impossible to call res.end for the correct res instance. The only way to have access to them is to store them in a higher-scope object for each incoming request, prune them on successful request resolutions, mark existing [req, res] stored pairs as "potentially errored" whenever an uncaughtException triggers, and serve 500 pages to those requests whenever the count of currently-active requests matches the count of currently-unresolved errors (re-testing that count per thrown uncaught exception and per successful res.end call).
Doing that works, but... it's ugly as hell. It means that request objects have to be leaked to the global scope, and it also means that my router module now depends on the uncaughtException global event; if any other code overwrites that event, everything breaks, and if I ever want to handle other uncaught exceptions for whatever reason, I'll run into cross-dependency hell.
The root cause of this problem is that an unexpected error can happen anywhere, but I specifically want to know whether an unexpected error originated from a stack trace that began with an incoming http request (and not, for example, from some interval I have running in the background, in which case I still get an unexpected error but obviously don't want to serve a 500 page to anyone, only email an admin with an error log). And on top of knowing whether the error originated from an http request, I need access to the request and response objects that node's server provides.
Is there no better way?
[Edit] The topic of this question is role distribution in modules.
That is, one guy writes the base code for a server, let's say a "router module". Other people will add new code to the server in the future, handling the branches that are routed to.
The guy who writes the base server code has to write it so that it serves 500 pages if any future code is written incorrectly and throws errors. Help him accomplish his goal.
Answers of the format "make sure all future people that add code never make mistakes and always write code that won't throw uncaught errors" will not be accepted.
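For reference, Node's domain module (deprecated, but still functional) was designed for exactly this kind of request-to-error association. A minimal sketch reusing the example module above - an illustration of the technique, not a recommendation, given the module's deprecated status:

const http = require('http');
const domain = require('domain'); // deprecated API
const poorlyTestedNpmModule = require('some-npm-module');
const port = 12345;

http.createServer((req, res) => {
  const d = domain.create();
  d.on('error', (err) => {
    // (req, res) are in scope here even for errors thrown asynchronously,
    // so the correct client gets the 500 page.
    res.end('status: 500');
    // The process may be in an undefined state; log/email and restart gracefully.
  });
  d.run(() => {
    // Async throws inside this call are routed to d's 'error' handler.
    poorlyTestedNpmModule(null, (err, result) => {
      if (err) return res.end('status: 400');
      res.end('status: 200');
    });
  });
}).listen(port);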
First of all, using uncaughtException in Node.js is not safe. If you feel that there is no other option in your application, make sure that you exit the process in the 'uncaughtException' handler and restart the process using pm2, forever, or some other module. The link below provides more detail:
Catch all uncaughtException for Node js app
Coming to error handling: as mentioned, you can easily miss handling errors with callbacks. To avoid this, we can take advantage of promises in Node.js.
/* my server's index.js */
const http = require('http');
const poorlyTestedNpmModule = require('some-npm-module');
const port = 12345;

http.createServer(onHttpRequest).listen(port);

function onHttpRequest(req, res) {
  try {
    poorlyTestedNpmModule(null)
      .then(result => {
        res.end('status: 200');
      })
      .catch(err => {
        console.log('err is', err);
        res.end('status: 400');
      });
  }
  catch (err) {
    res.end('status: 500');
  }
}
/* some-npm-module.js */
module.exports = function poorlyTestedNpmModule(options) {
  return new Promise((resolve, reject) => {
    setTimeout(afterSomething, 100);

    function afterSomething() {
      try {
        var someValue = options.key; // here's the problem; it now rejects instead of crashing
        resolve(someValue);
      } catch (err) {
        reject(err);
      }
    }
  });
};
If some of the npm modules don't ship with a promise API, try writing wrappers that convert the callback model to the promise model, and use those in your application.
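For modules that follow the standard (err, result) callback convention, util.promisify can generate that wrapper. A minimal sketch - note that it only converts errors the module actually passes to its callback; a module that throws outside its callback (like the unfixed example above) will still crash the process:

const { promisify } = require('util');
const poorlyTestedNpmModule = require('some-npm-module');

// Wrap the callback-style export so it returns a promise instead.
const poorlyTestedNpmModulePromised = promisify(poorlyTestedNpmModule);

function onHttpRequest(req, res) {
  poorlyTestedNpmModulePromised(null)
    .then(result => res.end('status: 200'))
    .catch(err => res.end('status: 400'));
}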
To make a long story short:
I'm building a node app which makes a request with https (the secure version of http). Whenever I mis-configure my request options, I get this error:
Node.js Hostname/IP doesn't match certificate's altnames
Great... except for the fact that the entire request code is wrapped in a valid try..catch block (which works just fine.. checked that already). The code is basically something like this:
try
{
  https.request(options, (response) =>
  {
    // no way I'm making it this far with that error
  }).end();
}
catch (ex)
{
  // for some reason.. I'm not able to get here either
}
What I intend to do is simply handle that error within my try..catch block.
After reading some posts I've learned that this behavior occurs mainly because the tls module automatically processes the request and therefore produces this error - a nice piece of information, but it doesn't really help me handle the exception.
Some others suggested using this option:
rejectUnauthorized: false // BEWARE: security hazard!
But I'd rather not... so.. I guess my questions are:
Handling an error with a try..catch block should work here.. right?
If not - is this behavior by design in node?
Can I wrap the code in any other way to handle this error?
Just to be clear - I'm not using any third-party lib (so there is no one to blame)
Any kind of help will be appreciated
Thanks
You need to add an 'error' event handler on the request object returned by https.request() to handle that kind of error. For example:
var req = https.request(options, (response) => {
  // ...
});

req.on('error', (err) => {
  console.log('request error', err);
});

req.end();
See this section in the node.js documentation about errors for more information.
Piping the response of an http request to a file is pretty easy:
http.get(url, function (res) {
  var file = fs.createWriteStream(filename)
  res.pipe(file)
  file.on('finish', function () {
    file.close()
  })
})
But when I try to set up a retry system, things get complicated. I can decide whether to retry based on res.statusCode, but if I decide to retry, that means not piping the response to the writable stream, so the response just stays open. This considerably slows down execution when I do many retries. A solution is to (uselessly) listen to the data and end events just to get the response to close, but then if I decide not to retry, I can no longer pipe to the writeStream, as the response is now closed.
Anyways, I'm surely not the only one wanting to stream the response of an http request to a file, retrying when I get a 503, but I can't seem to find any code "out there" that does this.
Solved. The slowness happens when a lot of responses are left open (unconsumed). The solution was to response.resume() them, letting them "spew into nothingness" when a retry is necessary. In pseudo code:
http.get(url, function (response) {
  if (response.statusCode !== 200) {
    response.resume()
    retry()
  } else {
    response.pipe(file)
  }
})
My original problem was that I was checking whether to retry too late, forcing me to "spew into nothingness" before having decided, which made it impossible to decide not to retry, because the data from the response stream had already been consumed.
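For completeness, a minimal sketch of the full retry loop described above (the URL, filename, delay, and retry count are illustrative):

const http = require('http')
const fs = require('fs')

function download (url, filename, retriesLeft, done) {
  http.get(url, function (response) {
    if (response.statusCode === 503 && retriesLeft > 0) {
      response.resume() // spew into nothingness, freeing the socket
      setTimeout(function () {
        download(url, filename, retriesLeft - 1, done)
      }, 1000)
    } else if (response.statusCode !== 200) {
      response.resume()
      done(new Error('failed with status ' + response.statusCode))
    } else {
      const file = fs.createWriteStream(filename)
      response.pipe(file)
      file.on('finish', function () {
        file.close(done)
      })
    }
  }).on('error', done)
}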