Restify (Node) connection not closed after res.end()

We have a Node/Restify app and I'm having a problem with this route (some code has been omitted, so please ignore missing identifiers):
server.post('/report', requireAuth, function (req, res, next) {
    generateReport(function (err, result, extension, contentType) {
        // Filename for the report
        var filename = 'Report_' + moment().format('YYYY-MM-DD_HH:mm:ss')
        // Output the result directly with the lower-level API
        res.writeHead(200, {
            'Content-Length': Buffer.byteLength(result),
            'Content-Type': contentType,
            'Content-Disposition': 'attachment;filename=' + filename + '.' + extension
        })
        res.write(result, 'utf8')
        res.end()
        console.log('ended')
        return next()
    })
})
The problem is that, after sending the output, the connection with the client is not closed, even though I call res.end()! (res is just a wrapper around the standard HTTP ServerResponse object.) Commenting out return next() doesn't make any difference.
I can see 'ended' on the console, and I know the content reaches the client, because if I force the connection to terminate (by simply killing the Node process), the client has all the output.
What am I doing wrong?
EDIT
Removing the Content-Length header, on the other hand, makes it work. Why is that? (Setting that header is recommended in the documentation.)
EDIT2
The value of Buffer.byteLength(result) is 4478 bytes, but the data downloaded by the client (Postman) is 4110 bytes, which is likely why the connection is not being closed (the client expects to receive more data). How is that possible?
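One way to rule out an encoding mismatch between how the length is measured and how the body is written is to build the body as a Buffer up front and send exactly those bytes. A minimal sketch, not from the original code, assuming result is a string or Buffer:
var body = Buffer.isBuffer(result) ? result : Buffer.from(result, 'utf8')
res.writeHead(200, {
    'Content-Length': body.length, // the byte count of exactly what we send
    'Content-Type': contentType,
    'Content-Disposition': 'attachment;filename=' + filename + '.' + extension
})
res.end(body) // write those exact bytes and finish the response
With this approach, the declared Content-Length and the bytes on the wire cannot diverge, whatever the encoding of result.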

Related

Is 'res.writeHead' necessary? Does it have a purpose?

I am writing a bare-minimum Node.js server to understand how it works. From what I can tell, this line:
res.writeHead(200);
does nothing. If I remove it I get the exact same behavior from the browser. Does it have any purpose? Is it OK to remove it?
// https://nodejs.dev/learn/the-nodejs-http-module
const http = require('http');

const server = http.createServer(handler);
server.listen(3000, () => {
    console.log('node: http server: listening on port: ' + 3000);
});

function handler(req, res) {
    res.writeHead(200);
    res.end('hello world\n');
}
Is it somehow related to HTTP headers?
The default HTTP status is 200, so you do not have to tell the response object that the status is 200; if you don't set it, it will be 200 automatically. You can remove res.writeHead(200);, and similarly you don't need the Express version of that, which would be res.status(200).
The other thing that res.writeHead(200) does is cause the headers to be written out in the response stream at that point. You also don't need to call it yourself, because when you do res.send(...), the headers will automatically be sent first (if they haven't already been sent). In fact, the res.headersSent property keeps track of whether the headers have been sent yet. If not, they are sent as soon as you call any method that starts sending the body (like res.send() or res.write(), etc.).
Is it OK to remove it?
Yes, in this context it is OK to remove it.
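To see the headersSent behavior described above, here's a minimal sketch with the plain http module (my own illustration, not from the original answer):
const http = require('http');

http.createServer(function (req, res) {
    console.log(res.headersSent); // false: nothing has been sent yet
    res.write('hello ');          // implicitly flushes a 200 status line and the headers
    console.log(res.headersSent); // true: headers went out with the first write
    res.end('world\n');
}).listen(3000);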

Why Does Browser Keep Loading When Using createReadStream() Node.js

New to Node.js, I understand that the createReadStream() function is better for performance than readFile(), because createReadStream() reads and writes data in chunks while readFile() first reads the whole content into memory. So if the file is large, readFile() might take longer before the data can be processed further. I therefore chose to create my server using the createReadStream() function, as follows.
// Create a server with fs.createReadStream(): better performance and less memory usage.
http.createServer(function (request, response) {
    // Parse the request containing the file name
    var pathname = url.parse(request.url).pathname;
    // Create a readable stream.
    var readerStream = fs.createReadStream(pathname.substr(1));
    // Set the encoding to be UTF8.
    readerStream.setEncoding('UTF8');
    // Handle stream events --> data, end and error
    readerStream.on('data', function (chunk) {
        // Page found
        // HTTP Status: 200 : OK
        // Content Type: text/html
        response.writeHead(200, {'Content-type': 'text/html'});
        // Write the content of the file to the response body.
        response.write(chunk);
        console.log('Page is being streamed...');
    });
    readerStream.on('end', function () {
        console.log('Page is streamed and emitted successfully.');
    });
    readerStream.on('error', function (err) {
        // HTTP Status: 404 : NOT FOUND
        // Content Type: text/html
        response.writeHead(404, {'Content-type': 'text/html'});
        console.log('Page streaming error: ' + err);
    });
    console.log('Code ends!');
}).listen(8081);

// Console will print the message
console.log('Server running at http://127.0.0.1:8081/');
My .html or .txt file contains three short lines of text. After starting my server, I visit the page at http://127.0.0.1:8081/index.html. Everything works fine and the content of index.html is shown in the browser.
But in the browser tab, the loading spinner keeps turning, as if the page keeps loading, for about a minute.
Is that normal with a Node.js server? Does the spinner just keep turning while costing the server nothing? Or am I missing something, and the spinner is not supposed to keep turning?
It doesn't look like you are ending your response. The browser probably thinks the request isn't finished and thus continues to "load".
If you look at the Network tab in the developer console you might see the request hasn't finished.
You should be calling response.end():
This method signals to the server that all of the response headers and body have been sent; that server should consider this message complete. The method, response.end(), MUST be called on each response.
I believe you should be calling response.end() in both the readerStream.on('end') and readerStream.on('error') callbacks, after you write the head. This will tell the browser the request is finished, and it can stop the loading animation.
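A sketch of those two handlers with response.end() added (same structure as the question's code; it assumes the error fires before any data has been written, since writeHead cannot be called twice):
readerStream.on('end', function () {
    response.end(); // signal to the browser that the body is complete
    console.log('Page is streamed and emitted successfully.');
});
readerStream.on('error', function (err) {
    response.writeHead(404, {'Content-type': 'text/html'});
    response.end(); // finish the 404 response as well
    console.log('Page streaming error: ' + err);
});
Alternatively, readerStream.pipe(response) both writes the chunks and ends the response for you when the stream finishes.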

Drop request in node.js express

Is it possible, using Node.js and Express, to drop a request for a certain route? I.e., not return an HTTP status or any headers? I'd like to just close the connection.
app.get('/drop', function (req, res) {
    // how to drop the request here?
});
To close a connection without returning anything, you can either end() or destroy() the underlying socket.
app.get('/drop', function (req, res) {
    req.socket.end();
});
I don't think there's any way to drop the connection at your end but keep the client waiting until it times out (i.e. without sending a FIN). You'd perhaps have to interact with your firewall in some way.
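For completeness, the destroy() variant mentioned above tears the socket down immediately, without flushing any pending data (a minimal sketch, not from the original answer):
app.get('/drop', function (req, res) {
    req.socket.destroy(); // closes the TCP connection with no HTTP response at all
});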
Yes, you can.
All you need to do is call the res.end method, optionally setting a status code first.
Use one of the following methods:
res.end();
res.status(404).end();
If you also wanted to set headers, you'd use the res.set method, as below:
res.set('Content-Type', 'text/plain');
res.set({
    'Content-Type': 'text/plain',
    'Content-Length': '123',
    'ETag': '12345'
})
For details, have a look here:
http://expressjs.com/api.html
You could do this wherever you want to close the connection:
res.end()

Using custom headers when piping external image

I am trying to pipe images from an Amazon S3 server through my node server while adding a custom header to the response.
Right now, however, the server responds with a plain "Document" that downloads to my computer with no file extension declared. The "Document" still contains the desired image data, but how can I make it clear that this is a PNG that can be viewed in the browser?
Here's my current code:
app.get('/skin', function (req, res) {
    res.writeHead(200, {'Content-Type': 'image/png', 'access-control-allow-origin': '*'});
    http.get("http://s3.amazonaws.com/MinecraftSkins/clone1018.png").pipe(res);
});
You might want to use http.request so that you can proxy the resource properly, duplicating its headers.
Here is an Express example that listens on port 8080 and forwards whatever path you request under the /skin/* route to the upstream server:
var http = require('http'),
    express = require('express'),
    app = express();

app.get('/skin/*', function (req, res, next) {
    var request = http.request({
        hostname: 's3.amazonaws.com',
        port: 80,
        path: '/' + req.params[0],
        method: req.method
    }, function (response) {
        if (response.statusCode === 200) {
            res.writeHead(response.statusCode, response.headers);
            response.pipe(res);
        } else {
            res.writeHead(response.statusCode);
            res.end();
        }
    });
    request.on('error', function (e) {
        console.log('something went wrong');
        console.log(e);
    });
    request.end();
});

app.listen(8080);
In order to test it out, run it on your machine, and then go to: http://localhost:8080/skin/nyc1940/qn01_GEO.png
It will load the image by proxying it from Amazon, returning its headers as well. You can also customize the headers, for example to suppress the XML error document S3 sends when a file does not exist.
You don't need to set any headers yourself, since they are proxied from s3.amazonaws.com, which reliably sets the right ones for you.
Nor do you need access-control-allow-origin; you only need that for AJAX requests to a resource on another domain. But feel free to modify response.headers before sending them out; it is a plain object (console.log it to inspect it).
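For example, overriding headers inside the 200 branch above might look like this (the specific header names and values are hypothetical):
if (response.statusCode === 200) {
    var headers = Object.assign({}, response.headers);
    headers['cache-control'] = 'public, max-age=86400'; // hypothetical override
    delete headers['x-amz-request-id'];                 // drop an upstream-only header
    res.writeHead(200, headers);
    response.pipe(res);
}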

Node.js http.ServerRequest response never arrives

I'm creating a reverse HTTP proxy using Node.js for fun. The code is pretty simple at the moment. It listens on 127.0.0.1:8080 for HTTP requests and forwards them to hostname.com; responses from hostname.com are then forwarded back to the client. Nothing fancy is done yet, such as rewriting redirect headers. The code is as follows:
var http = require('http');

var server = http.createServer(function (request, response) {
    var proxy = http.createClient(8080, 'hostname.com');
    var proxyRequest = proxy.request(request.method, request.url, request.headers);
    proxyRequest.on('response', function (proxyResponse) {
        response.writeHead(proxyResponse.statusCode, proxyResponse.headers);
        proxyResponse.on('data', function (chunk) {
            response.write(chunk, 'binary');
        });
        proxyResponse.on('end', function () {
            response.end();
        });
    });
    request.on('data', function (chunk) {
        proxyRequest.write(chunk, 'binary');
    });
    request.on('end', function () {
        proxyRequest.end();
    });
    proxyRequest.on('close', function (err) {
        if (err) {
            console.log('close error: ' + err + ' for ' + request.url);
        }
    });
});

server.listen(8080);
server.on('clientError', function (exception) {
    console.log('boo, a clientError occurred :(');
});
All appears to work well until I browse to a page that requires many additional resources (such as images) to be fetched. Naturally the browser will generate a number of GET requests to the reverse proxy to fetch these additional resources.
When I browse to such a page, some of the http.ServerRequests for the additional resources never receive responses. If I reload the page it almost always succeeds, because all the resources that were fetched successfully on the first attempt are now cached (so the browser doesn't try to GET them again), and the browser only needs to grab the few missing ones.
At a guess, I would imagine I'm hitting some kind of connection limit, although I'm not sure. Any help would be greatly appreciated!
If you set up Wireshark on the proxy, you'll almost certainly see what's happening. (Note that you may need a second machine for this, because some TCP/IP stacks don't provide anything that Wireshark can listen on for loopback traffic.)
I'm almost certain that the problem(s) you are running into here are all down to the Connection: header - proxies MUST parse this header and handle it correctly. At a guess, I would say your code is handling the first request in a Connection: keep-alive stream and ignoring the rest. As a proxy, you are supposed to parse and remove/replace this header, and any associated headers (in this case the Keep-Alive: header), before forwarding the request to the server.
If you want to build an HTTP/1.1 proxy, it's very important that you read RFC 2616 and adhere to the many, many rules it places on proxy behaviour. The particular problem you are running into here is documented in section 14.10.
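A sketch of what stripping those headers before forwarding could look like (the header names come from RFC 2616 section 13.5.1; the stripHopByHop helper is illustrative, not part of the question's code):
// Hop-by-hop headers must not be forwarded by an HTTP/1.1 proxy.
var hopByHop = ['connection', 'keep-alive', 'proxy-authenticate', 'proxy-authorization',
                'te', 'trailers', 'transfer-encoding', 'upgrade'];

function stripHopByHop(headers) {
    // The Connection: header may name additional headers that are
    // hop-by-hop for this particular message.
    var extra = (headers['connection'] || '').split(',').map(function (h) {
        return h.trim().toLowerCase();
    });
    var out = {};
    Object.keys(headers).forEach(function (name) {
        var key = name.toLowerCase();
        if (hopByHop.indexOf(key) === -1 && extra.indexOf(key) === -1) {
            out[name] = headers[name];
        }
    });
    return out;
}

// In the proxy above, forward the cleaned headers instead:
// var proxyRequest = proxy.request(request.method, request.url, stripHopByHop(request.headers));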