You can make a Node.js HTTP GET request like so:
var http = require('http');

var options = {
  host: 'stackoverflow.com',
};

var req = http.get(options, function(res) {
  console.log('STATUS: ' + res.statusCode);
  console.log('HEADERS: ' + JSON.stringify(res.headers));
});

req.on('error', function(e) {
  console.log('ERROR: ' + e.message);
});
I want to see which IP address that http.get call actually connects to. How does the Node HTTP library decide which IP address to use when a given hostname resolves to multiple IPs?
By multiple IPs I mean something like:
dig +short stackoverflow.com
151.101.129.69
151.101.1.69
151.101.193.69
151.101.65.6
Does the node request lib differ?
Your req object should have a connection object on it, which will have the remote address of the server you were connected to.
req.connection.remoteAddress
Of course, you need to wait until you actually have a connection, so I would just use the connect event which has what you need. Untested, but try this:
req.on('connect', (res, socket, head) => {
console.log(socket.remoteAddress);
});
https://nodejs.org/api/net.html#net_socket_remoteaddress
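Note that on an http.ClientRequest the 'connect' event is only emitted when the server answers a CONNECT request (for example when tunnelling through a proxy), so for a plain GET it may never fire. As an alternative, here is a minimal sketch (untested across Node versions) that reads the address off the response's socket, or watches the underlying socket as soon as it is assigned:

var http = require('http');

var req = http.get({ host: 'stackoverflow.com' }, function(res) {
  // By the time the response arrives, the socket is connected
  console.log('remote address: ' + res.socket.remoteAddress);
});

// Or watch the socket as soon as it is assigned to the request
req.on('socket', function(socket) {
  socket.on('connect', function() {
    console.log('connected to: ' + socket.remoteAddress);
  });
});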
In Node 6.x, as far as I understand it, it comes down to this chain:
the http module uses
the net module, which uses
the dns module, which uses
the native libuv call getaddrinfo (http://docs.libuv.org/en/v1.x/dns.html#c.uv_getaddrinfo)
And for your specific question of which IP gets used:
https://github.com/nodejs/node/blob/cd439465cc8b408eb88822c7695def611acfaaf7/lib/dns.js#L82
The algorithm is very straightforward: if multiple IP addresses are present, take the first one. :)
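You can observe this yourself with a small sketch (the hostname is just the one from the question) comparing dns.lookup, which is what the http module ends up using via getaddrinfo and which yields a single address, with dns.resolve4, which returns every A record:

var dns = require('dns');

// What http.get effectively relies on: one address chosen by getaddrinfo
dns.lookup('stackoverflow.com', function(err, address, family) {
  if (err) throw err;
  console.log('lookup picked: ' + address + ' (IPv' + family + ')');
});

// The full list of A records, similar to `dig +short`
dns.resolve4('stackoverflow.com', function(err, addresses) {
  if (err) throw err;
  console.log('all A records: ' + addresses.join(', '));
});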
I am building an Express app which, on certain requests, has to make its own HTTP calls. I could use Superagent, request, or Node's own http.request.
Thing is, I need to log all of those server-originated requests and their respective responses. Calling log.info before each and every one of them seems silly.
How can you add a pre-filter for all outgoing HTTP calls, and ideally access both req and res?
NOTE: I am not interested in logging requests coming in to the server I am building, only in the requests that the server itself kicks off. Think of my server as a client to another black box server.
What you can do is patch http and https and proxy the request method. This way you can have a global handler that will catch the req & res objects.
var http = require('http');
var https = require('https');

var patch = function(object) {
  var original = object.request;
  // Proxy the request method
  object.request = function(options, callback) {
    // Also proxy the callback so we can get at res
    var newCallback = function() {
      var res = arguments[0];
      // You can log res here
      console.log("RES", res.statusCode);
      if (callback) {
        callback.apply(this, arguments);
      }
    };
    var req = original(options, newCallback);
    // You can log your req object here
    console.log(req.method, req.path);
    return req;
  };
};

patch(http);
patch(https);

http.get("http://www.google.com/index.html", function(res) {
  console.log("Got response");
}).on('error', function(e) {
  console.log("Got error: " + e.message);
});
Edit: This might work if you use the request npm package as well, as it might just rely on the built-in Node.js http.request method anyway.
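For example, assuming request really does delegate to http.request, a call like the one below, made after patch(http) has run, should hit the same proxied handler and get logged; this is untested and just illustrates the idea:

var request = require('request');

// If the patch above is in place, this call should be logged by the
// proxied http.request before request's own callback fires.
request('http://www.google.com/', function(error, response, body) {
  if (error) {
    return console.log('request error: ' + error.message);
  }
  console.log('request package got status ' + response.statusCode);
});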
What server are you going to use for your app?
I would definitely move such functionality up to the server level. Take a look at how the Heroku router does it. You can track all the information you need using some of their add-ons: Papertrail or New Relic (or use them separately for your app).
https://papertrailapp.com/
http://newrelic.com/
I like out-of-the-box solutions in this case; there is no need to extend your app logic just to log this information.
If you want your own solution, you can set up nginx to monitor request/response information.
http://nginx.com/resources/admin-guide/logging-and-monitoring/
In the following snippet, a tutorial author shows how to alter the original tutorial to include an HTTP server. Here's the snippet:
var http = require('http'),
    fs = require('fs'),
    io = require('socket.io'),
    index;

fs.readFile('./chat.html', function (err, data) {
  if (err) {
    throw err;
  }
  index = data;
});

var server = http.createServer(function(request, response) {
  response.writeHead(200, {"Content-Type": "text/html"});
  response.write(index);
  response.end();
}).listen(1223);
//and replace var socket = io.listen(1223, "1.2.3.4"); with:
var socket = io.listen(server);
The code in the original tutorial didn't include the http server, and socket was defined as simply:
var socket = io.listen(1223, "1.2.3.4");
I noticed that he replaces io.listen(1223, "1.2.3.4"); with io.listen(server), which doesn't include the IP (1.2.3.4) anywhere.
My Question:
What is the purpose/effect of the referenced IP address?
Why is it excluded when passing an http server to create the socket?
When you are listening on a port, you can optionally include the IP address of a specific interface to listen on. For example, you might have several network interfaces with several IP addresses, and only want your service running on one of them. A more common use case is that you only want your server accessible on localhost, so you might have it listen only on 127.0.0.1.
Now, when you call io.listen(server) where server is an existing Node.js HTTP server, Socket.IO isn't actually opening a new listening connection at all. This is a shortcut for Socket.IO to wrap its methods on the existing HTTP server. If you wanted to specify a specific interface address to listen on, you would need to do it where .listen() is called on the HTTP server, above where you call io.listen(server).
More info in the documentation for raw network sockets in Node.js: http://nodejs.org/api/net.html#net_server_listen_port_host_backlog_callback
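For example, here is a sketch of what it could look like if you wanted the combined setup bound to a specific interface (the address and port are just the ones from the tutorial, and the request handler is abbreviated):

var http = require('http'),
    io = require('socket.io');

var server = http.createServer(function(request, response) {
  // ... serve chat.html as in the tutorial ...
  response.end();
});

// The interface binding happens here, on the HTTP server itself
server.listen(1223, '1.2.3.4');

// Socket.IO attaches to the already-listening server,
// so no address is passed at this point
var socket = io.listen(server);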
I wrote a small reverse proxy for hosting my applications on the same computer using http and node-http-proxy modules. For example:
I have:
proxy running on port 80
website1.com running on port 3000
website2.com running on port 3001
website3.com running on port 3002
If I access the website1.com domain, the proxy serves the content from the server running on port 3000 using node-http-proxy.
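For reference, such a routing setup might look roughly like the sketch below; it uses the pre-1.0 node-http-proxy API (the same generation as the RoutingProxy used in the answer further down), and the host table is purely illustrative:

var httpProxy = require('http-proxy');

// Map each incoming Host header to a local backend port (illustrative values)
var backends = {
  'website1.com': 3000,
  'website2.com': 3001,
  'website3.com': 3002
};

httpProxy.createServer(function(req, res, proxy) {
  var port = backends[req.headers.host];
  if (!port) {
    res.writeHead(404);
    return res.end('Unknown host');
  }
  proxy.proxyRequest(req, res, { host: 'localhost', port: port });
}).listen(80);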
But now I need to measure the bandwidth used for each domain (both incoming/outgoing, or at least outgoing)
I've tried listening for 'data' events on the request object, but the documentation says readable events aren't emitted on IncomingMessage for some reason.
I wrote a little module for the "base" functionality too, it can be found here:
https://npmjs.org/package/reproxy
See example/example.js
So, how can I accomplish this measure, using the current setup?
The solution I found was listening for the 'end' event on the RoutingProxy object and grabbing the socket information in the event callback.
var proxy = new require('http-proxy').RoutingProxy();

proxy.on('end', function(req, res, response) {
  var host = req.headers.host;
  // Bytes written to the upstream socket (the forwarded request)
  var bytesIn = response.socket._bytesDispatched;
  // Bytes read back from that socket (the upstream response)
  var bytesOut = response.socket.bytesRead;
  console.log('request to ' + host);
  console.log('request: ' + bytesIn + ' bytes.');
  console.log('response: ' + bytesOut + ' bytes.');
});
Note that this is not an optimal solution, because the measured request size includes the headers added by the reverse proxy, such as the "x-*" headers.
Possible Duplicate:
Getting ' bad_request invalid_json' error when trying to insert document into CouchDB from Node.js
The highest-voted answer on
CouchDB and Node.js - What module do you recommend?
recommends not using libraries such as nano or cradle when starting out with Node.js and CouchDB.
However, I haven't found any tutorial on how to programmatically perform the operations that are standard across DBMSes: create a database, create a table, add and view data, and so on.
EDIT (partial answer): after installing and starting CouchDB, go to http://localhost:5984/_utils/script/couch.js.
You should start by reading the CouchDB book.
No idea why you don't want to use a module: I think you took an answer out of context (an answer that is probably a year old) and based your decision not to use a module on it.
That is not likely to help you get stuff done. :) You would just be repeating work that has already been done and re-hitting issues that have already been fixed, etc.
If you want to learn CouchDB, read the book. You can read nano's source as it maps very closely to the CouchDB API and should be easy to follow, but the book is the full way to go.
If for any reason you still decide you want to implement your own module to do what others already do well, go for it. :)
If instead you are looking for resources on using nano, there are quite a few (a minimal usage sketch follows the list):
readme: github
screencast: couchdb and nano
article: nano - a minimalistic couchdb client for nodejs
article: getting started with node.js and couchdb
article: document update handler support
article: nano 3
article: securing a site with couchdb cookie authentication using node.js and nano
article: adding copy to nano
article: how to update a document with nano
article: mock http integration testing in node.js using nock and specify
article: mock testing couchdb in node.js with nock and tap
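As a rough idea of what the resources above cover, here is a minimal sketch of nano in action (assuming a local CouchDB on the default port; the database and document names are just examples):

var nano = require('nano')('http://localhost:5984');

// Create a database (a 412 just means it already exists)
nano.db.create('mydb', function(err) {
  if (err && err.statusCode !== 412) throw err;

  var db = nano.use('mydb');

  // Insert a document with a known id
  db.insert({ subject: 'I like Plankton' }, 'rabbit', function(err, body) {
    if (err) throw err;
    console.log('stored rev ' + body.rev);

    // Read it back
    db.get('rabbit', function(err, doc) {
      if (err) throw err;
      console.log(doc);
    });
  });
});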
Thanks to Ruben Verborgh, I compiled a micro-tutorial from several sources myself.
var http = require('http');
var util = require('util');
var request = require('request');

var couchdbPath = 'http://localhost:5984/';
var h = {accept: 'application/json', 'content-type': 'application/json'};

// List all databases
request(
  {uri: couchdbPath + '_all_dbs', headers: h},
  function(err, response, body) { console.log(util.inspect(JSON.parse(body))); }
);

// Add a database
request(
  {uri: couchdbPath + 'dbname', method: 'PUT', headers: h},
  function (err, response, body) {
    if (err)
      throw err;
    if (response.statusCode !== 201)
      throw new Error("Could not create database. " + body);
  }
);

// Modify an existing document (documents are addressed as /db/docid)
var options = {
  host: "localhost",
  port: 5984,
  path: "/dbname/rabbit",
  headers: {"content-type": "application/json"},
  method: "PUT"
};

var req = http.request(options, function(res) {
  console.log('STATUS: ' + res.statusCode);
  //console.log('HEADERS: ' + JSON.stringify(res.headers));
  res.setEncoding('utf8');
  res.on('data', function (chunk) {
    console.log('BODY: ' + chunk);
  });
});

req.on('error', function(e) {
  console.log('problem with request: ' + e.message);
});

// Write the document (including the current _rev) to the request body
req.write(JSON.stringify({
  "_id": "rabbit",
  "_rev": "4-8cee219da7e61616b7ab22c3614b9526",
  "Subject": "I like Plankton"
}));
req.end();
I used the following documentation:
http.request()
CouchDB Complete HTTP API Reference
Here are a few hands-on examples, thoughts and code snippets which should help in your study:
Simple Blog with Coffeescript, Express and CoudbDB
Thoughts on development using CouchDB and Nodejs
Bind CouchDB and Node.js
Getting Started with Node.js, Express and CouchDB - this link does not seem to be accessible now, but it seems a temporary issue.
Here's one on testing CouchDB - Mock testing CouchDB using Node.js
Hope it helps.
CouchDB is not an SQL database engine. It's in the family of the "NoSQL" ones.
You don't do select, you don't create tables, etc.
It's completely different.
It actually works through a REST API. For example, to access all the documents in a database, you issue an HTTP GET on the following URL: http://some.server/someDbName/_all_docs
For a more thorough introduction, I suggest looking for "CouchDB tutorial" on Google.
You'll find good links like this one or this one. (I'm not vouching for any, they just look good as an introduction.)
To make an HTTP request in Node.js, you can use the request method of the built-in http module. A shortcut method is http.get, which you can use like this:
var http = require( 'http' );
http.get( 'http://some.url/with/params', function( res ) {
// res has the values returned
});
Edit after reading your code:
Firstly, the doc you're using is outdated. Node is at v0.8, not 0.4.
Secondly, your request = require('request') line must be causing some problems (is the module installed?). I don't think the first part is even executed.
Thirdly, just try a GET request for now. Something like:
var http = require( 'http' );
http.get( 'http://localhost:5984/_all_dbs', function( res ) {
console.log( res );
});
See if it's working. If it is, you already know how to use couchdb ;)
Lastly, your request at the end doesn't seem wrong. Maybe it's related to require('request') though, so I don't know.
I'm creating a reverse HTTP proxy using Node.js for fun. The code is pretty simple at the moment. It listens on 127.0.0.1:8080 for HTTP requests and forwards these to hostname.com; responses from hostname.com are then forwarded back to the client. Nothing fancy is done yet, such as rewriting redirect headers. The code is as follows:
var http = require('http');

var server = http.createServer(
  function(request, response) {
    var proxy = http.createClient(8080, 'hostname.com');
    var proxyRequest = proxy.request(request.method, request.url, request.headers);

    proxyRequest.on('response', function(proxyResponse) {
      proxyResponse.on('data', function(chunk) {
        response.write(chunk, 'binary');
      });
      proxyResponse.on('end', function() {
        response.end();
      });
      response.writeHead(proxyResponse.statusCode, proxyResponse.headers);
    });

    request.on('data', function(chunk) {
      proxyRequest.write(chunk, 'binary');
    });
    request.on('end', function() {
      proxyRequest.end();
    });

    proxyRequest.on('close', function(err) {
      if (err) {
        console.log('close error: ' + err + ' for ' + request.url);
      }
    });
  });

server.listen(8080);

server.on('clientError', function(exception) {
  console.log('boo a clientError occurred :(');
});
All appears to work well until I browse to a page that requires many additional resources (such as images) to be fetched. Naturally the browser will generate a number of GET requests to the reverse proxy to fetch these additional resources.
When I browse to such a page, some of the http.ServerRequests for the additional resources never receive responses. If I reload the page, it almost always succeeds, since all the resources that were fetched successfully on the first attempt are now cached (so the browser doesn't try to GET them again) and it only needs to grab the few missing ones.
At a guess I would imagine I'm hitting some kind of connection limit although I'm not sure. Any help would be greatly appreciated!
If you set up Wireshark on the proxy, you'll almost certainly see what's happening. (Note that you may need a second machine for this, because some TCP/IP stacks don't provide anything that Wireshark can listen on for loopback traffic - see this)
I'm almost certain that the problem(s) you are running into here are all down to the Connection: header - proxies MUST parse this header and handle it correctly. At a guess, I would say your code is handling the first request in a Connection: keep-alive stream and ignoring the rest. As a proxy, you are supposed to parse and remove/replace this header, and any associated headers (in this case the Keep-Alive: header), before forwarding the request to the server.
If you want to build an HTTP/1.1 proxy, it's very important that you read RFC 2616 and adhere to the many, many rules it places on proxy behaviour. The particular problem you are running into here is documented in section 14.10.
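To illustrate the point about the Connection header, here is a rough sketch (not a complete or compliant proxy) of stripping the hop-by-hop headers from the client's request before forwarding it; the header list comes from RFC 2616 section 13.5.1:

// Hop-by-hop headers that must not be forwarded as-is (RFC 2616, 13.5.1)
var hopByHop = ['connection', 'keep-alive', 'proxy-authenticate',
                'proxy-authorization', 'te', 'trailers',
                'transfer-encoding', 'upgrade'];

function cleanHeaders(headers) {
  var cleaned = {};
  Object.keys(headers).forEach(function(name) {
    if (hopByHop.indexOf(name.toLowerCase()) === -1) {
      cleaned[name] = headers[name];
    }
  });
  // Decide the connection behaviour towards the upstream server ourselves
  cleaned['connection'] = 'close';
  return cleaned;
}

// Inside the proxy, forward the request with the cleaned headers, e.g.:
// var proxyRequest = proxy.request(request.method, request.url,
//                                  cleanHeaders(request.headers));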