I am new to Node.js. In exercise no. 8 of learnyounode, my solution produces the same required result as the official one. I am confused about when to use http.get and when to use the request module.
Goal:
Write a program that performs an HTTP GET request to a URL provided to you
as the first command-line argument. Collect all data from the server (not
just the first "data" event) and then write two lines to the console
(stdout).
The first line you write should just be an integer representing the number
of characters received from the server. The second line should contain the
complete String of characters sent by the server.
Official solution:
var http = require('http')
var bl = require('bl')

http.get(process.argv[2], function (response) {
  response.pipe(bl(function (err, data) {
    if (err)
      return console.error(err)

    data = data.toString()
    console.log(data.length)
    console.log(data)
  }))
})
My solution:
var request = require('request')

request(process.argv[2], function (err, response, body) {
  if (err)
    return console.error(err)

  console.log(body.length)
  console.log(body)
})
From the Node.js documentation:
Since most requests are GET requests without bodies, Node.js provides
this convenience method. The only difference between this method and
http.request() is that it sets the method to GET and calls req.end()
automatically. Note that response data must be consumed in the
callback for reasons stated in http.ClientRequest section.
So what that means is that you could do it your way without any problem. But request is not a module shipped with Node itself; it is a third-party module that makes http(s) requests easier on developers. So I'm guessing here that, since you are learning Node.js, not using third parties should be the way to go.
I'm not familiar with request, but it seems to be just an npm package that wraps the functionality of the standard library. You can use both, but I would suggest reading through the documentation of http.get and request, and if you find the standard-library function (http.get) sufficient for your needs, I don't see a reason you should use the request package.
Related
I am new to node.js and am acquainting myself with Express. The following code is my source of confusion:
var server = http.createServer(handleRequest);

function handleRequest(req, res) {
  var path = req.url;
  switch (path) {
    case "/n":
      return renderPage_1(req, res);
    default:
      return renderPage_2(req, res);
  }
}
I understand that the server needs to accept an HTTP request (req). However, if we are returning a response, why is the response also an argument to the callback function? I keep running into a dead end thinking that it has to do with the scope of the response object, though I am not sure.
I would greatly appreciate clarification on this matter. I have not been able to find a resource that resolves my confusion.
Best,
Abid
I think the answer to your question is that this is simply how the authors of Express decided to implement the library. At a high level, Express is really just a lightweight wrapper that makes it easy to build middleware-based HTTP services with Node.js. The reason both the req and res objects are passed to each Express middleware function is that, in practice, web services are rarely able to fulfill an entire request in a single step. Often services are built as layers of middleware that build up a response in multiple steps.
For example, you might have a middleware function that looks for identity information in the request and fetches any relevant identity metadata while setting some auth-specific headers on the response. The request might then flow to an authorization middleware that uses the fetched metadata to determine whether the current user is authorized; if the user is not authorized, it can end the request early by closing the response stream. If the user is authorized, the request continues to the next piece of middleware, and so on. To make this work, each middleware function (each step of the stack) needs to be able to read information from the request as well as write information to the response. Express handles this by passing the request and response objects as arguments to the middleware function, but this is just one way to do it.
Now, the authors could have decided to implement the library differently, so that each route handler returned an object such as { status: 200, content: "Hello, world" } instead of calling methods on the response object. But that would be a matter of convention, and you could pretty easily write a wrapper around Express that let you write your services like this if you wanted.
Hope this helps.
I am connecting to a SQL db and returning data to the view using res.json. The client sends a request; my server uses a mssql driver and a connection string to connect to that database and retrieve some data. So I've got a connection working with GET, POST, etc.
However, I am encountering a logical problem: I want to pass some data from the SQL db to a module which will then use that data to prepare a JSON response. When I hard-code an array with a couple of parameters it works, but I don't know how to send a request from node.js to the db and propagate that array for the module to consume when a client sends a request. (When a client sends a request, the module sends a request to the db; the db returns some parameters, which the module can then use to prepare a response.)
Any ideas? Could you point me in the right direction? How from a logical point of view such solution can work?
I am using node.js, express, mssql module to connect to db. I am not looking for specific code just to point me in the right direction, and if you've got any examples of course I'm happy to see those.
You will probably need to have a chain of callbacks and pass data through them, something like this:
app.get('/users', function (req, res) {
  database.find('find criteria', function (err, data) {
    mymodule.formatData(data, function (err, json) {
      res.json(json);
    });
  });
});
So you just nest the callbacks until you have everything you need to send the response.
You need to get used to this style of programming in node.js.
There are also ways to avoid overly deep callback nesting: split your callbacks into individual named functions, or use async, promises, or ES6 generators.
I'm currently building a REST API that I need to test.
I already have many unit tests that check each method's behaviour, and now I need to test whether I get the expected result when requesting an endpoint, such as checking the HTTP response code.
I'm working with node.js, and the supertest module seems like a good fit for sending HTTP requests and checking response codes.
The problem is that if I send requests to my real REST API endpoints, I can end up with a lot of bad data in the database (when testing PUT / POST / PATCH methods).
On the other hand, I can't see any way to "mock" or simulate my business logic inside the tests:
(Mocha syntax with ES6)
describe('get /v1/clients', function () {
  it('Should get 200', function (done) {
    request.get('/v1/clients')
      .query('access_token=' + token)
      .expect(200, done);
  });
});
So you've probably got it:
I want to test each API endpoint to be sure that I get what I should get.
I want to use a standard access_token, not one from my database (a fake one, for example).
I want to "simulate" my API, without using real data.
Is that possible? If yes, how?
Or is it a better choice to test against my real data?
I've been doing a series of load tests on a simple server to try to determine what is negatively impacting the load on my much more complicated node/express/mongodb app. One of the things that consistently comes up is the string manipulation required to convert an in-memory object to JSON in the express response.
The amount of data that I'm pulling from mongodb via node and sending over the wire is ~200/300 KB uncompressed. (Gzip will turn this into 28k which is much better.)
Is there a way to have the native nodejs mongodb driver stringify the results for me? Right now for each request with the standard .toArray() we're doing the following:
Query the database, finding the results and transferring them to the node native driver
Native driver then turns them into an in-memory javascript object
My code then passes that in-memory object to express
Express then converts it to a string for node's http response.send using JSON.stringify() (I read the source, Luke.)
I'm looking to get the stringify work done at a c++/native layer so that it doesn't add processing time to my event loop. Any suggestions?
Edit 1:
It IS a proven bottleneck.
There may easily be other things that can be optimized, but here's what the load tests are showing.
We're hitting the same web server with 500 requests over a few seconds. With this code:
app.get("/api/blocks", function (req, res, next) {
  db.collection('items').find().limit(20).toArray(function (err, items) {
    if (err) {
      return next(err);
    }
    return res.send(200, items);
  });
});
overall mean: 323ms, 820ms for 95th%
If instead I swap out the json data:
var cached = "[{... "; // giant JSON blob that is a copy+paste of the response from the code above

app.get("/api/blocks", function (req, res, next) {
  db.collection('items').find().limit(20).toArray(function (err, items) {
    if (err) {
      return next(err);
    }
    return res.send(200, cached);
  });
});
overall mean: 164ms, 580ms for 95th%
Now you might say, "Gosh Will, a mean of 323ms is great, what's your problem?" My problem is that this is an example in which stringify causes a doubling of the response time.
From my testing I can also tell you these useful things:
Gzip was a 2x or better gain on response time. The numbers above are with gzip.
Express adds a nearly imperceptible amount of overhead compared to generic node.js.
Batching the data by doing cursor.each and then sending each individual item to the response is way worse
Update 2:
Using a profiling tool: https://github.com/baryshev/look
This is while hitting my production code on the same database intensive process over and over. The request includes a mongodb aggregate and sends back ~380KB data (uncompressed).
That function is very small and includes the var body = JSON.stringify(obj, replacer, spaces); line.
It sounds like you should just stream directly from Mongo to Express.
Per this question that asks exactly this:
cursor.stream().pipe(JSONStream.stringify()).pipe(res);
I am writing a web app in node.js. Every piece of processing on the server happens in the context of a session, which is either retrieved or created at the very first stage, when the request hits the server. After that, execution flows through multiple modules and through callbacks within them. What I am struggling with is creating a programming pattern so that at any point in the code the session object is available, without the programmer having to pass it as an argument in every function call.
If all of the code were in one single file I could use a closure, but when there are function calls into modules in other files, how do I arrange things so that the session object is available in the called function without passing it as an argument? I feel there should be some link between the two functions in the two files, but how to arrange that is where I am stuck.
More generally, there is always an execution context (a session, or a network request) whose processing is spread across multiple files, and that execution-context object should be available at all points. There are several concrete use cases, such as one Log object per network request, or one Log object per session. The plumbing required to make this work should be fitted in sideways, without the application programmer having to bother about it; he just knows that the execution context is available everywhere.
I think this must be a fairly common problem, so please give me some ideas.
The problem is as follows:
MainServer.js
app = require('express').createServer();
app_module1 = require('AppModule1');

var session = get_session();
app.get('/my/page', app_module1.func1);
AppModule1.js
app_module2 = require('AppModule2');

exports.func1 = function (req, res) {
  // I want to know which session context this code is running in
  app_module2.func2(req, res);
};
AppModule2.js
exports.func2 = function (req, res) {
  // I want to know the session context in which this code is running
};
You can achieve this using Domains, a new node 0.8 feature. The idea is to run each request in its own domain, providing a space for per-request data. You can get at the current request's domain, without having to pass it around everywhere, via process.domain.
Here is an example of getting it setup to work with express:
How to use Node.js 0.8.x domains with express?
Note that domains in general are somewhat experimental, and process.domain in particular is undocumented (though apparently not going away in 0.8, and there is some discussion about making it permanent). I suggest following their recommendation and adding an app-specific property to process.domain.data.
https://github.com/joyent/node/issues/3733
https://groups.google.com/d/msg/nodejs-dev/gBpJeQr0fWM/-y7fzzRMYBcJ
Since you are using Express, you can have the session attached to every request. The implementation is as follows:
var express = require('express');
var app = express.createServer();

app.configure('development', function () {
  app.use(express.cookieParser());
  app.use(express.session({ secret: 'foo', key: 'express.sid' }));
});
Then upon every request, you can access session like this:
app.get('/your/path', function (req, res) {
  console.log(req.session);
});
I assume you want to have some kind of unique identifier for every session so that you can trace its context. SessionID can be found in the 'express.sid' cookie that we are setting for each session.
app.get('/your/path', function (req, res) {
  console.log(req.cookies['express.sid']);
});
So basically, you don't have to do anything other than add the cookie parser and enable sessions for your Express app; then, when you pass the request into these functions, you can recognize the session ID. You MUST pass the request, though. You cannot build a system where the code just "knows" the session, because you are writing a server, and a session only exists in the context of a request.
What Express does, and the common practice for building an HTTP stack on node.js, is to use HTTP middleware to "enhance" or add functionality to the request and response objects coming into the callback from your server. It's very simple and straightforward.
module.exports = function (req, res, next) {
  req.session = require('my-session-lib');
  next();
};
req and res are automatically passed into your handler, and from there you'll need to keep them available to the appropriate layers of your architecture. In your example, it's available like so:
AppModule2.js
exports.func2 = function (req, res) {
  // The session context this code is running in:
  req.session; // <== right here
};
Nodetime is a profiling tool that does internally what you're trying to do. It provides a function that instruments your code in such a way that calls resulting from a particular HTTP request are associated with that request. For example, it understands how much time a request spent in Mongo, Redis or MySQL. Take a look at the video on the site to see what I mean http://vimeo.com/39524802.
The library adds probes to various modules. However, I have not been able to see how exactly the context (url) is passed between them. Hopefully someone can figure this out and post an explanation.
EDIT: Sorry, I think this was a red herring. Nodetime is using the stack trace to associate calls with one another. The results it presents are aggregates across potentially many calls to the same URL, so this is not a solution for the OP's problem.