According to this link: request - Node

The callback argument gets 3 arguments:

1) An error when applicable (usually from the http.ClientRequest object)
2) An http.IncomingMessage object
3) The response body (String or Buffer, or a JSON object if the json option is supplied)
Code:
var r = require("request");

var options = {
    url: "http://www.example.com/"
};

var callback = function (err, res, body) {
    if (!err && res.statusCode == 200) {
        res.on("data", function (chunk) {
            console.log("DATA : " + chunk);
        });
        res.on("finish", function () {
            console.log("FINISHED");
        });
        console.log(body);
    }
};

r(options, callback);
But in the above code, only the console.log(body) works; the event listeners never fire.
Also, if the callback is only invoked once the whole response body is available, what's the point of making the second argument an http.IncomingMessage (a Readable stream) when I can't stream it?
When you pass a callback like that, request buffers the entire response for you, and that is what is available in body. Because request has already consumed the stream, you won't see data and similar events on res by the time your callback runs.
It looks like you're mixing two different ways to use the request module. Depending on preference, you can use either the callback approach or the streaming approach.
The callback approach involves passing a callback function along with the options; when all the data has been received, request calls that function.
The streaming approach lets you attach listeners to events such as 'response'. I'm guessing you've mixed in this code from an example of receiving HTTP requests and sending responses with a Node server, as I can't see any reference to 'data' and 'finish' events in the request module's docs.
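For illustration, here is a minimal sketch of both approaches side by side, using the same example URL as above (note that on a readable stream the event is 'end', not 'finish'):

var request = require("request");

// Callback approach: request buffers the whole body for you.
request("http://www.example.com/", function (err, res, body) {
    if (!err && res.statusCode == 200) {
        console.log(body); // the complete, already-buffered body
    }
});

// Streaming approach: consume the response chunk by chunk.
request("http://www.example.com/")
    .on("response", function (res) {
        console.log("status: " + res.statusCode);
    })
    .on("data", function (chunk) {
        console.log("chunk of " + chunk.length + " bytes");
    })
    .on("end", function () {
        console.log("FINISHED"); // 'end' fires on readable streams
    })
    .on("error", function (err) {
        console.error(err);
    });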
_
MY CHALLENGE:
I would like to access a third-party REST API from within my Lambda function (e.g. "http://www.mocky.io/v2/5c62a4523000004a00019907").
This will provide back JSON, which I will then use for data extraction.
_
MY CURRENT CODE:
var http = require('http');

exports.handler = function (event, context, callback) {
    console.log('start request to Mocky');
    http.get('http://www.mocky.io/v2/5c62a4523000004a00019907', function (res) {
        console.log(res);
    }).on('error', function (e) {
        console.log("Got error: " + e.message);
    });
};
This does not throw an error, but it also does not seem to return the JSON.
_
MY OPEN QUESTIONS:
1) How can I extract the JSON so that I can work on it?
2) I will probably also need to send authentication in the request header (a Bearer token) in the future. Will this also be possible with this method?
The problem is likely that your Lambda function exits before the response is logged.
We use Authorization headers all the time to call our Lambdas. Whether you can use one to call the third-party API is up to that API, not you, so check its documentation.
Since your HTTP call executes asynchronously, the Lambda continues running while that call is being resolved. Since there are no more commands in the Lambda, it exits before your response returns and can be logged.
EDIT: the native http.get is difficult to use cleanly with async/await. I usually use superagent, axios, or request for that reason, or even node-fetch. I'll use request in my answer. If you must use the native module, then see e.g. this answer. Otherwise, npm install request request-promise and use my answer below.
The scheme that many people use these days for this kind of call is async/await, for example (requires Node 8+):
var request = require('request-promise');

exports.handler = async function (event, context, callback) {
    console.log('start request to Mocky');
    try {
        const res = await request.get('http://www.mocky.io/v2/5c62a4523000004a00019907');
        console.log(res);
        callback(null, { statusCode: 200, body: JSON.stringify(res) });
    } catch (err) {
        console.error(err.message);
        callback('Got error ' + err.message);
    }
};
The async/await version is much easier to follow, IMO.
Everything inside an async function that is marked with await will be resolved before execution continues. There are lots of articles about this around; try this one.
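To your second open question: yes, you can send a Bearer token with the same approach by passing a headers object to request-promise. A minimal sketch, swapping out the request.get call inside the try block above (the token value is a placeholder, not a real credential):

const res = await request.get({
    uri: 'http://www.mocky.io/v2/5c62a4523000004a00019907',
    headers: {
        Authorization: 'Bearer <YOUR_TOKEN>' // placeholder token
    },
    json: true // parse the response body as JSON
});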
Plenty of people have already run into (and solved) the same problem. Look at this
or this
I'm new to Node.js and JavaScript in general. I believe this is a scoping issue that I'm not understanding.
Given this example:
...
...
if (url == '/') {
    var request = require('request');
    var body_text = "";
    request('http://www.google.com', function (error, response, body) {
        console.log('error:', error);
        console.log('statusCode:', response && response.statusCode);
        console.log('body:', body);
        body_text = body;
    });
    console.log('This is the body:', body_text);
    //I need the value of body returned from the request here..
}
//OUTPUT
This is the body: undefined
I need to be able to get the body from the response back and then do some manipulation, and I do not want to do all the implementation within the request callback. Of course, if I move the log line inside:
request( function { //here })
it works. But I need to return the body in some way outside the request. Any help would be appreciated.
You can't do that with callbacks, because the request runs asynchronously.
Working with callbacks is normal in JS, but you can do better with Promises.
You can use request-promise-native to do what you want with async/await.
async function requestFromClient(req, res) {
    const request = require('request-promise-native');

    const body_text = await request('http://www.google.com').catch((err) => {
        // always use catches to log errors or you will be lost
        console.error(err);
    });

    if (!body_text) {
        // sometimes you won't have a body, e.g. when the request itself failed
    }

    console.log('This is the body:', body_text);
    //I need the value of body returned from the request here..
}
As you can see, you must always be inside an async function scope to use await on promises.
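If you are not already inside a function, a common trick is an immediately invoked async function. A minimal, self-contained sketch of the same idea:

const request = require('request-promise-native');

(async () => {
    const body_text = await request('http://www.google.com');
    console.log('This is the body:', body_text);
})().catch((err) => console.error(err));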
Recommendations:
JS the right way
ES6 Features
JS clean coding
More best practices...
Using promises
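On "Using promises": if you'd rather not pull in another dependency, you can wrap the callback-style request in a Promise yourself. A minimal sketch (requestBody is an illustrative name, not a library function):

const request = require('request');

function requestBody(url) {
    return new Promise((resolve, reject) => {
        request(url, (error, response, body) => {
            if (error) return reject(error);
            resolve(body);
        });
    });
}

requestBody('http://www.google.com')
    .then((body) => console.log('This is the body:', body))
    .catch((err) => console.error(err));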
I am currently building a proxy using Node.js, which uses the following syntax for sending and receiving HTTPS requests and responses. However, in my project the response is a little larger, so typically res.on('data', callback) is called 5~7 times before res.on('end', callback) is called.
Here is the simplified code structure:
var http = require("https");
var options = {
hostname: '<WEB SERVICE>',
port: 80,
path: '<WEB PATH>',
method: 'POST',
headers: {
'Content-Type': 'application/json',
}
};
var response = "";
var req = http.request(options, function(res) {
res.setEncoding('utf8');
res.on('data', function (body) {
console.log("data");
response += body;
});
res.on('end', function () {
console.log("end");
response = "";
});
});
req.on('error', function(e) {
console.log('problem with request: ' + e.message);
});
// write data to request body
req.write('<SOMETHING>');
req.end();
Ideally, when multiple requests come in, the logging sequence should be:
data, data, data, data, data, end, data, data, data, data, end
i.e. once one request is done, end is called exactly once.
However, after several tests with a large response, the sequence becomes:
<response 1 comes> data, data, data ..... data <response 2 comes> data, data, data, ..., data, end
i.e. the end for request 1 is missing.
In short, we need to make sure the 'end' callback is called exactly once, immediately after the several 'data' callbacks for that response.
I believe there must be some common method for solving this issue (it seems like a classic problem in Node) and I would appreciate it if anyone could indicate how to solve this properly.
Thanks for the help!
From the code that you included it is impossible to make two requests. It makes one request with this:
var req = http.request(options, function(res) { ... });
Then it binds the error handler here:
req.on('error', function(e) { ... });
And then, before even waiting for any response to the request, before even a connection is made, it calls the .write() and .end() methods on the request object:
req.write('<SOMETHING>');
req.end();
Nothing here can possibly cause two requests to be made at the same time. But you call .write() and .end() even before the first (and only) request has started, so maybe there's your problem.
In addition, you should expect one request to start before another one finishes if you are going to make a few requests in parallel, as you say you'd like to.
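If you do run these in parallel, note that the shared response variable in your snippet will interleave data from different responses. A sketch of keeping each buffer local to its own request (the post helper and done callback are illustrative names, not from the question):

var https = require('https');

function post(options, payload, done) {
    var req = https.request(options, function (res) {
        var body = ''; // local to this request, so parallel calls cannot interleave
        res.setEncoding('utf8');
        res.on('data', function (chunk) {
            body += chunk;
        });
        res.on('end', function () {
            done(null, body); // 'end' fires exactly once per response
        });
    });
    req.on('error', done);
    req.write(payload);
    req.end();
}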
I had the same issue before; I fixed it like this:
res.on('finish', function () {
    console.log("res finished");
});
Look here: nodejs.org event-finish
Coming from a .NET world where synchronicity is a given and I can query my data from a back-end source such as a database, Lucene, or even another API, I'm having trouble finding a good sample of this for Node.js, where async is the norm.
The issue I'm having: a client makes an API call to my hapi server, and from there I need to take the parameters, form an Elasticsearch query, call it using the request library, and then wait for the result to return before populating my view and sending it back to the client. The problem is that the request library uses a callback once the data is returned, and the empty view has long since been returned to the client by then.
Placing the return inside the callback doesn't work, since the end of the JavaScript was already reached and null returned in its place. What is the best way to retrieve data within a service call?
EX:
var request = require('request');

var options = {
    url: 'localhost:9200',
    path: {params},
    body: {params}
};

request.get(options, function (error, response) {
    // do data manipulation and set view data
});

// generate the view and return the view to be sent back to client
Wrap the request call in your hapi handler by nesting callbacks so that the async tasks execute in the correct logical order. Pseudo hapi handler code follows:
function (request, reply) {
    Elasticsearch.query((err, results) => {
        if (err) {
            return reply('Error occurred getting info from Elasticsearch');
        }
        // data is available for the view
        reply(results);
    });
}
As I said earlier in your last question, use hapi's pre handlers to help you do async tasks before replying to your client. See the docs here for more info. Also, use wreck instead of request; it is more robust and simpler to use.
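A rough sketch of the pre-handler idea, assuming callback-style hapi (v16 or earlier) and wreck; the route path and Elasticsearch URL are placeholders:

const Wreck = require('wreck');

server.route({
    method: 'GET',
    path: '/search',
    config: {
        pre: [{
            assign: 'esResult',
            method: function (request, reply) {
                Wreck.get('http://localhost:9200/_search', { json: true }, (err, res, payload) => {
                    if (err) {
                        return reply(err);
                    }
                    reply(payload); // stored as request.pre.esResult
                });
            }
        }],
        handler: function (request, reply) {
            // the pre handler has already finished, so the data is ready here
            reply(request.pre.esResult);
        }
    }
});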
I am trying to call multiple URLs in a single request, push their JSON responses into an array, and send that array in the response to the end user.
My code looks like this:
var express = require('express');
var main_router = express.Router();
var http = require('http');

urls = [
    "http://localhost:3010/alm/build_tool",
    "http://localhost:3010/alm/development_tool",
    "http://localhost:3010/alm/project_architecture"];

var responses = [];

main_router.route('/')
    .get(function (req, res) {
        var completed_requests = 0;
        for (url in urls) {
            http.get(url, function (res) {
                responses.push(res.body);
                completed_request++;
                if (completed_request == urls.length) {
                    // All downloads done, process responses array
                }
            });
        }
        res.send(responses);
    });
I have also tried this using the npm request module.
When I run this code it only returns NULL or some random output that has only headers.
My aim is to call multiple URLs in a single Node GET request, append their JSON output to an array, and send it to the end user.
Thanks
Here, try this code:
const async = require('async');
const request = require('request');

function httpGet(url, callback) {
    const options = {
        url: url,
        json: true
    };
    request(options, function (err, res, body) {
        callback(err, body);
    });
}

const urls = [
    "http://localhost:3010/alm/build_tool",
    "http://localhost:3010/alm/development_tool",
    "http://localhost:3010/alm/project_architecture"
];

async.map(urls, httpGet, function (err, res) {
    if (err) return console.log(err);
    console.log(res);
});
Explanation:
This code uses the async and request node packages. async.map by definition takes 3 params: the first being an array, the second being the iterator function you want to call with each element of that array, and the third being the callback function, called when async.map has finished processing. From the async docs:
map(arr, iterator, [callback])
Produces a new array of values by mapping each value in arr through the iterator function. The iterator is called with an item from arr and a callback for when it has finished processing. Each of these callbacks takes 2 arguments: an error, and the transformed item from arr. If iterator passes an error to its callback, the main callback (for the map function) is immediately called with the error.
Note: all calls to the iterator function are parallel.
Inside your httpGet function, you call the request function with the passed url, explicitly telling it that the response format is JSON. When request finishes processing, it calls the callback function with three params: err (if any), res (the server response), and body (the response body).
If there is no err from request, async.map collects the results from these callbacks as an array and passes that array at the end to its third, callback, function. Otherwise, if err is truthy, async.map stops execution and calls its callback with the err.
I suggest using the async library.
async.map(urls, http.get, function (err, responses) {
    if (err) {
        // handle error
    } else {
        res.send(responses);
    }
});
The snippet above will perform an http.get call for each of the urls in parallel, and will call your callback function with the results of all of the calls after all responses have been received.
If you want to call the urls in series, use async.mapSeries instead. If you want to limit the number of concurrent requests, use async.mapLimit.
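For example, a sketch that caps concurrency at 2 requests at a time, reusing the httpGet helper from the earlier answer (the limit value is arbitrary):

const async = require('async');
// assumes the httpGet(url, callback) helper defined in the previous answer

async.mapLimit(urls, 2, httpGet, function (err, responses) {
    if (err) return console.log(err);
    console.log(responses);
});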