Update: I posted this code here after adding nearly every possibility I could find (to 99%), one by one, and it still gives me a 120-second timeout... baffled.
So, OK, I figured out that it takes exactly 120 seconds (well, 122 seconds) on my Windows 7 machine until the FIN handshake starts. I want it to happen immediately. RFC 793 (TCP) says
FIN: No more data from sender
It looks to me like I am not sending any more data. I tried all of this bloated code, and it's still just a Hello World server...
var http = require('http')

var server = http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'})
  res.write('HELLO WORLD!\n')
  res.end()
  res.setTimeout(0)
  req.abort() // 1) TypeError: Object #<IncomingMessage> has no method 'abort'
  req.on('socket', function (socket) {
    socket.setTimeout(0)
    socket.on('timeout', function () {
      req.abort()
    })
  })
})

server.timeout = 0
server.setTimeout(0)
server.listen(1337, '192.168.0.101')
So how do I do 1)? (Doing it like this actually sends a RST...)
And how do I make the whole thing HTTP-conformant?
Of course, in the end I'll be happy to use Node.js for WebSocket stuff and all that, but if a conversion on my website is a matter of two minutes and I have a million concurrent users (huh?), sending a FIN immediately could mean I can serve two million concurrent users (do the math). ;) OK, just to be on the safe side: does sending a FIN mean the socket is closed?
Ah, yeah, OK, while you're at it: how do I console.log(res) or console.log(req)? It gives me [object Object]. (Update: I tried console.log(res.toSource()), which gives me a TypeError.)
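(As an aside, a common way to inspect these objects is util.inspect, which copes with the circular references inside req and res; a minimal sketch, not part of the original thread:)

var util = require('util')
console.log(util.inspect(res)) // prints a readable dump instead of [object Object]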
Edit: Found the answer here.
If you want to close the connection, send a Connection: close header. If you do this, it will not leave the connection open for reuse.
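Applied to the hello-world server above, a minimal sketch would look like this (the port and interface address are just the ones from the original snippet):

var http = require('http')

var server = http.createServer(function (req, res) {
  // Tell Node (and the client) not to keep the connection alive, so the FIN
  // goes out as soon as the response finishes instead of after the timeout.
  res.writeHead(200, {
    'Content-Type': 'text/plain',
    'Connection': 'close'
  })
  res.end('HELLO WORLD!\n')
})

server.listen(1337, '192.168.0.101')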
Related
I am writing a bare-minimum Node.js server to understand how it works. From what I can tell, this line:
res.writeHead(200);
does nothing. If I remove it I get the exact same behavior from the browser. Does it have any purpose? Is it OK to remove it?
// https://nodejs.dev/learn/the-nodejs-http-module
const http = require('http');

const server = http.createServer(handler);
server.listen(3000, () => {
  console.log('node: http server: listening on port: ' + 3000);
});

function handler(req, res) {
  res.writeHead(200);
  res.end('hello world\n');
}
Is it somehow related to HTTP headers?
The default HTTP status is 200, so you do not have to tell the response object that the status is 200. If you don't set it, the response will automatically be 200. You can remove res.writeHead(200); similarly, in Express you would not need the equivalent res.status(200).
The other thing that res.writeHead(200) does is cause the headers to be written out to the response stream at that point. You also don't need to call it yourself, because the headers are automatically sent first (if they haven't already been sent) as soon as you start sending the body. In fact, the res.headersSent property keeps track of whether the headers have been sent yet. If not, they go out as soon as you call any method that starts sending the body (like res.end(), res.write(), or Express's res.send()).
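You can see this for yourself with a small experiment (a minimal sketch, not part of the original answer):

const http = require('http');

http.createServer((req, res) => {
  console.log(res.headersSent); // false - nothing has been written yet
  res.write('hello ');          // implicitly sends the status line (default 200) and headers
  console.log(res.headersSent); // true - the headers have already gone out
  res.end('world\n');
}).listen(3000);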
Is it OK to remove it?
Yes, in this context it is OK to remove it.
I have an API endpoint that accepts some data from the client. There is also a 1-minute timer which is visible to the client.
What I hope to achieve is this:
Whilst the timer is active (> 0), any posts sent to the API are kept in storage or in an array (or something similar). Once the timer reaches zero, the client can no longer make requests to the API, and any requests that were made and stored whilst the timer was active are then processed by a function. For the sake of example, let's just say this function logs all the data to the screen.
Perhaps I'm thinking of this in the wrong way, but how do I sync a front-end and back-end timer so that both the server and the client know when to stop processing POST requests, and so the server knows it's time to process all the data that was sent during that one minute?
var express = require("express");
var app = express();

app.post("/api/data", function (req, res) {
  // do something here - no clue
});

app.listen(process.env.PORT, () => {
  console.log("server running on port: ", process.env.PORT);
});
Apologies if I've explained this poorly.
Appreciate any help I can get, thank you :)
What do you want to achieve, exactly? Letting the client decide is the worst choice you can make.
You could pass the server time to the client, calculate the difference, and start counting from there. But still check on the server whether the client is actually allowed to post.
Or let the server calculate how many seconds are left until the countdown ends and pass that to the client. Either way, the server still has to validate the time.
I would pick one of those. Let the server be the deciding factor and don't depend on the client, since it's easy to change a PC's clock.
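A minimal sketch of that server-driven approach (the route names, the JSON body parsing via express.json(), and the one-minute window are assumptions for illustration):

var express = require("express");
var app = express();
app.use(express.json());

// The server owns the deadline; the client only displays a countdown.
var deadline = Date.now() + 60 * 1000; // one-minute window for this example
var stored = [];                       // submissions collected while the timer runs

// The client asks how much time is left and counts down from that.
app.get("/api/time-left", function (req, res) {
  res.json({ msLeft: Math.max(0, deadline - Date.now()) });
});

app.post("/api/data", function (req, res) {
  if (Date.now() > deadline) {
    return res.status(403).send("Submissions are closed.");
  }
  stored.push(req.body);
  res.sendStatus(200);
});

// When the window closes, process everything that was stored.
setTimeout(function () {
  stored.forEach(function (item) {
    console.log(item); // "processing" is just logging in this example
  });
}, deadline - Date.now());

app.listen(process.env.PORT, () => {
  console.log("server running on port: ", process.env.PORT);
});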
Alternatively: when the timer on the client side reaches 0, send an API call to the server. When the server gets this call, it flips a flag, and while the flag is set any further data is ignored. What I mean in code is something like this:
var ignore = false;

app.post("/api/data/stop", function (req, res) {
  ignore = true;       // the client-side timer hit zero, stop accepting data
  res.sendStatus(200); // respond so the request doesn't hang
});

app.post("/api/data", function (req, res) {
  if (!ignore) {
    // handle the incoming data here
    res.sendStatus(200);
  } else {
    res.sendStatus(403); // the window is closed, reject the data
  }
});

app.listen(process.env.PORT, () => {
  console.log("server running on port: ", process.env.PORT);
});
I have a project where an iOS Objective-C app is trying to talk to a Node.js server. I'm using socket.io on both iOS and Node.js.
The problem I am trying to solve is getting a message from the device to the server and having the server return a response. To do this, I'm sending a message and expecting an acknowledgement containing the data the device is after.
The device code looks like this:
void (^serverAck)(uint64_t, void (^)(NSArray *)) = [_socket emitWithAck:@"ListProjects" withItems:@[]];
serverAck(0, ^(NSArray *data) {
    if ([data count] == 0) {
        NSError *error = [NSError errorWithDomain:@"CRXServer" code:1 userInfo:nil];
        failureBlock(error);
    } else {
        successBlock(data);
    }
});
And the node.js code looks like this:
var SocketIO = require('socket.io');
var io = SocketIO(8099);

io.on('connection', function (socket) {
  socket.on('ListProjects', function (data, getProjectsCallback) {
    database.allProjects(function getAllProjectsCallback(err, rows) {
      getProjectsCallback(rows);
    });
  });
});
When I attempt to run this, getProjectsCallback crashes the server because it is not a function. From comments made on another thread, I understand that it should be a function whenever the call from the client is made correctly and expects an ack.
Anyone know what I've done wrong?
P.S. Here's a dump from socket.io's log showing the request coming in:
engine:socket packet +0ms
socket.io-parser decoded 20["getProjects"] as {"type":2,"nsp":"/","id":0,"data":["getProjects"]} +14ms
socket.io:socket got packet {"type":2,"nsp":"/","id":0,"data":["getProjects"]} +15ms
socket.io:socket emitting event ["getProjects"] +0ms
socket.io:socket attaching ack callback to event +0ms
Getting all projects ...
Releasing connection
Got the project list
/Users/derekclarkson/projects/crux-Server/node_modules/mysql/lib/protocol/Parser.js:82
throw err;
^
TypeError: getProjectsCallback is not a function
at getAllProjectsCallback (/Users/derekclarkson/projects/crux-Server/Server.js:20:13)
at Query.executeCodeblockCallback [as _callback] (/Users/derekclarkson/projects/crux-Server/Database.js:321:17)
So it looks like socket.io is attaching an ack, but somehow it's not being passed to the callback.
Not sure if it's a bug or a protocol limitation, but it doesn't work when you pass an empty array to emitWithAck:withItems:. You'll see that server-side, data contains your callback function, rather than getProjectsCallback as you expect.
So, two options:
in that situation, recognise that the first argument to your listener handler will be the callback, rather than the second
or add any random data to the items array (e.g. @[@"x"])
I think I would go for the second option in case someone fixes this issue in the future.
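For the first option, a defensive server-side sketch might look like this (the argument shuffling is an assumption based on the behaviour described above):

io.on('connection', function (socket) {
  socket.on('ListProjects', function (data, getProjectsCallback) {
    // With an empty items array, the ack can arrive as the first argument
    // instead of the second, so detect which one is actually the function.
    var callback = typeof getProjectsCallback === 'function'
      ? getProjectsCallback
      : (typeof data === 'function' ? data : function () {});

    database.allProjects(function (err, rows) {
      callback(rows);
    });
  });
});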
Is there a way to close a response? I can use res.end() but it doesn't actually close the socket.
What I want to achieve: I am writing a Java program that talks to this server over the network, and I am writing a Node.js server for it. Java code:
String line;
while ((line = in.readLine()) != null) {
    System.out.println("RES: " + line);
}
But this just keeps hanging. The connection never ends; it is still waiting for more input from the socket.
Node:
exports.getAll = function (req, res) {
  res.set("Content-Type", "text/plain");
  res.status(200); // was res.set(200), which sets a header rather than the status code
  res.send(..data..);
  res.end();
}
However, res.end() does not close the connection. As said before, the Java side keeps expecting more data, so it is stuck in the while loop.
Solved by setting an HTTP header to close the connection instead of relying on the default keep-alive strategy:
res.set("Connection", "close");
In case someone who really wants just to close the connection finds this, you can use res.socket or res.connection to get the net.Socket object which comes with a destroy method:
res.connection.destroy();
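In context, the two approaches would look something like this (a minimal sketch; the handler just mirrors the shape of the one in the question, and the payload is a stand-in):

exports.getAll = function (req, res) {
  res.set("Content-Type", "text/plain");

  // Option 1: ask for the TCP connection to be closed once this response is sent.
  res.set("Connection", "close");
  res.send("some data\n"); // stand-in for the real payload

  // Option 2 (forceful): tear the socket down immediately.
  // res.socket.destroy();
}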
I'm creating a reverse HTTP proxy using Node.js for fun. The code is pretty simple at the moment. It listens on 127.0.0.1:8080 for HTTP requests and forwards these to hostname.com; responses from hostname.com are then forwarded back to the client. Nothing fancy is done yet, such as rewriting redirect headers. The code is as follows:
var http = require('http');

var server = http.createServer(
  function (request, response) {
    var proxy = http.createClient(8080, 'hostname.com');
    var proxyRequest = proxy.request(request.method, request.url, request.headers);

    proxyRequest.on('response', function (proxyResponse) {
      proxyResponse.on('data', function (chunk) {
        response.write(chunk, 'binary');
      });
      proxyResponse.on('end', function () {
        response.end();
      });
      response.writeHead(proxyResponse.statusCode, proxyResponse.headers);
    });

    request.on('data', function (chunk) {
      proxyRequest.write(chunk, 'binary');
    });
    request.on('end', function () {
      proxyRequest.end();
    });

    proxyRequest.on('close', function (err) {
      if (err) {
        console.log('close error: ' + err + ' for ' + request.url);
      }
    });
  });

server.listen(8080);
server.on('clientError', function (exception) {
  console.log('boo, a clientError occurred :(');
});
All appears to work well until I browse to a page that requires many additional resources (such as images) to be fetched. Naturally the browser will generate a number of GET requests to the reverse proxy to fetch these additional resources.
When I browse to such a page, some of the http.ServerRequests for the additional resources never receive responses. If I reload the page it almost always succeeds, because everything that was fetched successfully on the first attempt is now cached (so the browser doesn't try to GET it again) and only the few missing resources need to be grabbed.
At a guess I imagine I'm hitting some kind of connection limit, although I'm not sure. Any help would be greatly appreciated!
If you set up Wireshark on the proxy, you'll almost certainly see what's happening. (Note that you may need a second machine for this, because some TCP/IP stacks don't provide anything that Wireshark can listen on for loopback traffic - see this)
I'm almost certain that the problem(s) you are running into here are all down to the Connection: header - proxies MUST parse this header and handle it correctly. At a guess, I would say your code is handling the first request in a Connection: keep-alive stream and ignoring the rest. As a proxy, you are supposed to parse and remove/replace this header, and any associated headers (in this case the Keep-Alive: header), before forwarding the request to the server.
If you want to build an HTTP/1.1 proxy, it's very important that you read RFC 2616 and adhere to the many, many rules it places on proxy behaviour. The particular problem you are running into here is documented in section 14.10.
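A minimal sketch of stripping the hop-by-hop headers before forwarding (the stripHopByHopHeaders helper is made up for illustration, and this uses http.request rather than the legacy http.createClient from the question):

var http = require('http');

// Drop hop-by-hop headers such as Connection: and Keep-Alive: so the proxy
// manages connection reuse itself rather than forwarding them blindly.
function stripHopByHopHeaders(headers) {
  var hopByHop = ['connection', 'keep-alive', 'proxy-authenticate',
                  'proxy-authorization', 'te', 'trailers',
                  'transfer-encoding', 'upgrade'];
  var cleaned = {};
  Object.keys(headers).forEach(function (name) {
    if (hopByHop.indexOf(name.toLowerCase()) === -1) {
      cleaned[name] = headers[name];
    }
  });
  return cleaned;
}

http.createServer(function (request, response) {
  var proxyRequest = http.request({
    host: 'hostname.com',
    port: 80,
    method: request.method,
    path: request.url,
    headers: stripHopByHopHeaders(request.headers)
  }, function (proxyResponse) {
    response.writeHead(proxyResponse.statusCode,
                       stripHopByHopHeaders(proxyResponse.headers));
    proxyResponse.pipe(response);
  });
  request.pipe(proxyRequest);
}).listen(8080);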