I'm building an NPM module that needs to make an HTTP request to itself (the running web server). For example:
var url = "http://127.0.0.1:" + (process.env.PORT || 3000) + path;
request(url, function(error, response, body){
...
});
Is there a way to process a request through the NodeJS pipeline without actually doing an HTTP request?
Or is there a better way to form the URL? I'm nervous that 127.0.0.1 isn't the most robust way to handle this for production sites.
Self-Consuming JSON API
In a self-consuming JSON API, you define some functionality in standalone controller functions and then wire that functionality up to express after the fact. Let's use a library application as an example:
books.js
module.exports = {
  // `Book` is assumed to be a data model (e.g. an ORM model) in scope
  browse: function () {
    return Book.findAll()
  },
  read: function (options) {
    return Book.findById(options.book)
  },
  processLateFees: function () {
    // Do a bunch of things to process late fees,
    // returning a promise so callers can chain .then()
  }
}
to-http.js
In this file we build a function that converts a controller function into an HTTP route handler. We take the route params and pass them to our controller as options:
module.exports = function toHTTP (func) {
  // Wrap a plain controller function in an express request handler
  return function (req, res) {
    func(req.params).then(function (data) {
      res.send(data)
    })
  }
}
router.js
And then we connect up our controller to our http router
var express = require('express')
var books = require('./books')
var toHTTP = require('./to-http')
var app = express()
app.get('/books', toHTTP(books.browse))
app.get('/books/:book', toHTTP(books.read))
app.get('/batch-jobs/process-late-fees', toHTTP(books.processLateFees))
So we now have an express application connected up to controller functionality. And the wonderful thing is that we can call these controller functions manually too.
var books = require('./books')

books.processLateFees().then(function () {
  // late fees have been processed
})
If you need a more in-depth example of this, the Ghost blog codebase is built around this pattern. It is a very informative read.
If you have just one Node.js application, you can put that method inside your Model or Controller and call it directly inside the app; that needs fewer resources than creating a new request.
If you have more than one Node.js app (or other services), it is normal to make a request to the other web services at their specific URL and port.
I do this in one of my projects and it works fine.
I have used it in dev and prod without issues so far, with several Node.js applications and 3 different web services that call each other to log in or check authentication. I use both Express.js and Sails.js (which is based on Express.js).
I think using the request module is acceptable; it is quite fast, and I use this approach when I do unit tests.
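For example, a self-request in a test might look like this (a sketch assuming a mocha-style runner and a /books route on a server already listening on port 3000):
var request = require('request');

it('returns the book list', function (done) {
  // hit the running server the same way an external client would
  request('http://127.0.0.1:3000/books', function (error, response, body) {
    if (error) return done(error);
    // assert on response.statusCode and body here
    done();
  });
});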
Related
I'm running a small Angular application with a Node/Express backend.
In one of my Angular factories (i.e. on the client side) I make an $http request to GitHub to return user info. However, a GitHub-generated key (which is meant to be kept secret) is required to do this.
I know I can't use process.env.XYZ on the client side. I'm wondering how I could keep this api key a secret? Do I have to make the request on the back end instead? If so, how do I transfer the returned Github data to the front end?
Sorry if this seems simplistic but I am a relative novice, so any clear responses with code examples would be much appreciated. Thank you
Unfortunately you have to proxy the request on your backend to keep the key secret. (I am assuming that you need some user data that is unavailable via an unauthenticated request like https://api.github.com/users/rsp?callback=foo because otherwise you wouldn't need to use API keys in the first place - but you didn't say specifically what you need to do so it is just my guess).
What you can do is something like this: In your backend you can add a new route for your frontend just for getting the info. It can do whatever you need - using or not any secret API keys, verify the request, process the response before returning to your client etc.
Example:
var app = require('express')();

app.get('/github-user/:user', function (req, res) {
  getUser(req.params.user, function (err, data) {
    if (err) res.json({error: "Some error"});
    else res.json(data);
  });
});

function getUser(user, callback) {
  // a stub function that should do something more
  if (!user) callback("Error");
  else callback(null, {user: user, name: "The user " + user});
}

app.listen(3000, function () {
  console.log('Listening on port 3000');
});
In this example you can get the user info at:
http://localhost:3000/github-user/abc
The function getUser should make an actual request to GitHub, and before you call it you can check whether it is really your frontend making the request (e.g. by checking the "Referer" header or other things), validate the input, etc.
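For completeness, here is one way getUser could be fleshed out to make that actual request (a sketch; the GITHUB_TOKEN environment variable name is an assumption):
var https = require('https');

function getUser(user, callback) {
  if (!user) return callback("Error");
  var options = {
    hostname: 'api.github.com',
    path: '/users/' + encodeURIComponent(user),
    headers: {
      'User-Agent': 'my-app', // GitHub's API rejects requests without a User-Agent
      'Authorization': 'token ' + process.env.GITHUB_TOKEN // the secret never leaves the server
    }
  };
  https.get(options, function (githubRes) {
    var body = '';
    githubRes.on('data', function (chunk) { body += chunk; });
    githubRes.on('end', function () {
      if (githubRes.statusCode !== 200) return callback("GitHub error " + githubRes.statusCode);
      callback(null, JSON.parse(body));
    });
  }).on('error', callback);
}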
Now, if you only need public info then you may be able to use a public JSON-P API like this - an example using jQuery to keep things simple:
var user = prompt("User name:");
var req = $.getJSON('https://api.github.com/users/'+user);
req.then(function (data) {
  console.log(data);
});
I am concerned about security of my react/redux application as my api url is exposed to the public inside bundled app.js file. I've been researching this and some developers proxy it somehow i.e. instead of using my api url I can use api/ whenever I perform calls with libraries like axios or superagent and it gets proxied to my api url, but this way users can only see api/ on their side.
I'm trying to figure this out; I assume this is set up within the Express config?
You have a valid concern.
Typically you would have your client-side code make calls to, say, /api, and in express (or whatever server you use) create a route for "/api" that proxies those requests to the actual API URL.
This way you can hide any sensitive information from the client, for example authentication tokens, API keys, etc.
In express you could do something like this:
const request = require('request'); // the `request` npm package, assumed installed

app.use('/api', (req, res) => {
  const method = req.method.toLowerCase();
  const headers = req.headers;
  const url = 'your_actual_api_url';

  // Proxy the incoming request to the real API
  const proxyRequest = req.pipe(
    request({
      url,
      headers,
      method,
    })
  );

  // Buffer the proxied response, then relay status and body to the client
  const data = [];
  proxyRequest.on('data', (chunk) => {
    data.push(chunk);
  });
  proxyRequest.on('end', () => {
    const { response } = proxyRequest;
    const buf = Buffer.concat(data).toString();
    res.status(response.statusCode).send(buf);
  });
});
This example is a bit more elaborate than it has to be, but it will probably work for you.
Is there an optimized way of communicating between two Node.js routes?
I mainly use Express, and I don't know of an Express module that provides such functionality. I am unsure whether I am missing the right framework or the right npm package.
I know that this functionality can be implemented via simple Ajax, but I am looking for a cleaner solution, such as WebSockets. I use socket.io, but it seems focused on communication between the client and the server, not between two servers.
Update - What I want to achieve?
Instead of placing all the requests I need in one file, such as:
router.post('/entode', function(req, res) {
  var word_rf = req.body.word1;
  if ( /^[a-zA-Z]+$/.test(word_rf) && word_rf.length < 18 ) {
    entode.entode_function_rf(word_rf, function (res1) {
      io.sockets.emit('series_results', res1);
      res.send('done');
    });
  } else {
    io.sockets.emit('error_rf', 'incorrect input');
    res.send('done');
  }
});
//-------------------------
router.post('/yandex_short', function(req, res) {
  var word_rf = req.body.word1;
  ...
});
//-------------------------
router.post('/yandex_long', function(req, res) {
  var word_rf = req.body.word1;
  ...
});
I prefer having something like:
router.post('/', function(req, res) {
  var word_rf = req.body.word1;
  var aPromise = new Promise(function(resolve, reject) {
    makeRequestFn('./some_route/' + word_rf, function(returnedData) {
      resolve(returnedData);
    });
  });
  aPromise.then(function(data) {
    io.sockets.emit('series_results', data);
    res.send('done');
  });
  // The other routes stay inside this scope
});
This way I don't need to require the modules; I transfer the logic directly to the other files.
Sorry I am late to the party, but I think what you want is inter-process communication. Also, instead of reasoning in terms of routes, think in terms of microservices. What you end up with are decoupled mini apps that can share data between themselves via WebSockets, message queues, HTTP, etc.
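As a minimal sketch of that idea over plain HTTP (file names, ports and the /entode route are illustrative):
// word-service.js - a standalone mini app that owns one piece of functionality
var express = require('express');
var app = express();

app.get('/entode/:word', function (req, res) {
  // real processing would happen here
  res.json({ word: req.params.word, result: 'processed' });
});

app.listen(4000);

// gateway.js - a second service that consumes word-service over HTTP
var express = require('express');
var http = require('http');
var app = express();

app.get('/', function (req, res) {
  http.get('http://localhost:4000/entode/' + req.query.word1, function (serviceRes) {
    var body = '';
    serviceRes.on('data', function (chunk) { body += chunk; });
    serviceRes.on('end', function () { res.send(body); });
  });
});

app.listen(3000);
The HTTP boundary here could be swapped for a message queue or a WebSocket connection without touching the gateway's public route.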
Read this excellent article from the NGINX blog. It is part of a series that will guide you through the process of building your own microservices.
Cheers,
I am building an Express app which on certain requests has to make its own HTTP calls. I could use Superagent, request or node's own http.request.
Thing is, I need to log all of those server-originating requests and their respective responses. Calling log.info before each and every one of those seems silly.
How can you add a pre-filter for all outgoing HTTP calls, and ideally access both req and res?
NOTE: I am not interested in logging requests coming in to the server I am building, only in the requests that the server itself kicks off. Think of my server as a client to another black box server.
What you can do is patch http and https and proxy the request method. This way you can have a global handler that will catch the req & res objects.
var http = require('http');
var https = require('https');

var patch = function(object) {
  var original = object.request;
  // We proxy the request method
  object.request = function(options, callback) {
    // And we also proxy the callback to get res
    var newCallback = function() {
      var res = arguments[0];
      // You can log res here
      console.log("RES", res.statusCode);
      if (callback) callback.apply(this, arguments); // the callback is optional for http.request
    }
    var req = original(options, newCallback);
    // You can log your req object here.
    console.log(req.method, req.path);
    return req;
  }
}

patch(http);
patch(https);

http.get("http://www.google.com/index.html", function(res) {
  console.log("Got response");
}).on('error', function(e) {
  console.log("Got error: " + e.message);
});
Edit: This might work if you use the request npm package as well, since it may just rely on the built-in Node.js http.request method anyway.
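For example, if that assumption holds, a call made through the request package after patch(http) has run should show up in the same logs (untested sketch):
var request = require('request');

request('http://www.google.com/', function (error, response, body) {
  // the patched http.request should already have logged this call
  if (!error) console.log('request module got status', response.statusCode);
});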
What server are you going to use for your app?
I would definitely bring such functionality up to the server level. Take a look at how the Heroku router does it. You can track all of the needed information using some of their add-ons, such as Papertrail or New Relic (or use them separately for your app).
https://papertrailapp.com/
http://newrelic.com/
I like out-of-the-box solutions in this case; there is no need to extend your app logic to log such information.
If you want your own solution, you can set up nginx to monitor request/response info.
http://nginx.com/resources/admin-guide/logging-and-monitoring/
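For example, a minimal nginx access-log setup might look like this (the format name and log path are illustrative):
http {
    log_format timed '$remote_addr [$time_local] "$request" '
                     '$status $body_bytes_sent $request_time';
    access_log /var/log/nginx/access.log timed;
}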
I have an Angular app (the angular-seed app) which should call a function in Node.js (web-server.js).
The function in Node.js just calls a batch file.
If I understood this correctly, you want a click on the client side (the Angular app) to call a batch file on the server side. You can do this in several ways depending on your requirements, but basically you want the client side to send an HTTP request to the server (either with an Ajax call or a form submit) and have the server process the request and call the batch file.
Client-side
On the client side you need a button that uses Angular's ng-click directive:
<button ng-click="batchfile()">Click me!</button>
In your Angular controller you'll need to use the $http service to make an HTTP GET request to your server at some particular URL. What that URL is depends on how you've set up your express app. Something like this:
function MyCtrl($scope, $http) {
  // $http is injected by angular's IOC implementation

  // other functions and controller stuff is here...

  // this is called when the button is clicked
  $scope.batchfile = function() {
    $http.get('/performbatch').success(function() {
      // url was called successfully, do something,
      // maybe indicate in the UI that the batch file is
      // executed...
    });
  }
}
You can validate that this HTTP GET request is made by using e.g. your browser's developer tools (such as Google Chrome's network tab) or an HTTP packet sniffer such as Fiddler.
Server-side
EDIT: I incorrectly assumed that angular-seed was using expressjs, which it doesn't. See basti1302's answer on how to set it up server-side "vanilla style" node.js. If you're using express you can continue below.
On the server side you need to set up the URL in your express app that will perform the batch file call. Since we let the client side above make a simple HTTP GET request to /performbatch, we'll set it up that way:
app.get('/performbatch', function(req, res){
  // is called when /performbatch is requested from any client
  // ... call the function that executes the batch file from your node app
});
Calling the batch file can be done in several ways; you can read the Stack Overflow answer here for a solution:
node.js shell command execution
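For instance, using child_process.exec inside the route (a sketch; run.bat is a placeholder file name):
var exec = require('child_process').exec;

app.get('/performbatch', function (req, res) {
  // execute the batch file and report the outcome to the client
  exec('run.bat', function (err, stdout, stderr) {
    if (err) return res.status(500).send('Batch file failed: ' + err.message);
    res.send('Batch file executed');
  });
});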
Hope this helps
The OP didn't mention express, so I'll provide an alternative for the server side (the Node.js part) without using any additional frameworks (which would require installing them via npm). This solution uses just node core:
web-server.js:
'use strict';

var http = require('http')
var spawn = require('child_process').spawn
var url = require('url')

function onRequest(request, response) {
  console.log('received request')
  var path = url.parse(request.url).pathname
  console.log('requested path: ' + path)
  if (path === '/performbatch') {
    // call your already existing function here or start the batch file like this:
    response.statusCode = 200
    response.write('Starting batch file...\n')
    spawn('whatever.bat')
    response.write('Batch file started.')
  } else {
    response.statusCode = 400
    response.write('Could not process your request, sorry.')
  }
  response.end()
}

http.createServer(onRequest).listen(8888)
Assuming you are on Windows, I would first use a batch file like this to test it:
whatever.bat:
REM Append a timestamp to out.txt
time /t >> out.txt
For the client side, there is nothing to add to Spoike's solution.