I am connecting to a SQL database and returning data to the view using res.json. The client sends a request; my server uses an mssql driver and a connection string to connect to that database and retrieve some data. So I've got a connection working with GET, POST, etc.
However, I am encountering a logical problem: I want to pass some data from the SQL db to a module, which will then use that data to prepare a JSON response. When I hard-code an array with a couple of parameters it works, but I don't know how to send a request from node.js to the db and propagate the resulting array for a module to consume when a client sends a request. (When a client sends a request, the module sends a request to the db, and the db returns some parameters which the module can then use to prepare a response.)
Any ideas? Could you point me in the right direction? How, from a logical point of view, can such a solution work?
I am using node.js, express, and the mssql module to connect to the db. I am not looking for specific code, just a pointer in the right direction, though if you've got any examples I'm of course happy to see those.
You will probably need to have a chain of callbacks and pass data through them, something like this:
app.get('/users', function (req, res) {
  database.find('find criteria', function (err, data) {
    mymodule.formatData(data, function (err, json) {
      res.json(json);
    });
  });
});
So you just nest the callbacks until you have everything you need to send the response.
You need to get used to this style of programming in node.js.
There are also ways to avoid nesting callbacks too deeply: split your callbacks into individual named functions, or use async, promises, or ES6 generators.
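To make that concrete for your stack, here is a minimal sketch of the same chain using the mssql module's callback API (the connection config, table name, and mymodule.formatData are placeholders for your own pieces; older mssql versions pass the recordset to the callback directly, newer ones wrap it in a result object):

var sql = require('mssql');
var mymodule = require('./mymodule'); // hypothetical module that prepares the JSON response

var config = { user: 'user', password: 'pass', server: 'localhost', database: 'mydb' };

app.get('/users', function (req, res) {
  // 1. Connect to the database when the client request comes in.
  sql.connect(config, function (err) {
    if (err) { return res.send(500, err.message); }
    // 2. Query for the parameters the module needs.
    new sql.Request().query('SELECT id, name FROM users', function (err, recordset) {
      if (err) { return res.send(500, err.message); }
      // 3. Hand the rows to your module instead of a hard-coded array.
      mymodule.formatData(recordset, function (err, json) {
        if (err) { return res.send(500, err.message); }
        res.json(json);
      });
    });
  });
});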
Related
I am new to node.js and am acquainting myself with Express. The following code is my source of confusion:
var server = http.createServer(handleRequest);

function handleRequest(req, res) {
  var path = req.url;
  switch (path) {
    case "/n":
      return renderPage_1(req, res);
    default:
      return renderPage_2(req, res);
  }
}
I understand that the server needs to accept an HTTP request (req). However, if we are returning a response, why is the response also an argument to the callback function? I keep running into a dead end thinking that it has to do with the scope of the response object, though I am not sure.
I would greatly appreciate clarification on this matter. I have not been able to find a resource that clears up my confusion.
Best,
Abid
I think the answer to your question is that this is how the authors of express decided to implement the library. At a high level, express is really just a light-ish-weight wrapper that makes it easy to build middleware-based HTTP services with NodeJS. The reason that both the req & res objects are passed to each express middleware function is that, in practice, web services are rarely able to fulfill an entire request in a single step. Often services are built as layers of middleware that build up a response in multiple steps.
For example, you might have a middleware function that looks for identity information in the request and fetches any relevant identity metadata while setting some auth specific headers on the response. The request might then flow to an authorization middleware that uses the fetched metadata to determine if the current user is authorized and if the user is not authorized can end the request early by closing the response stream. If the user is authorized then the request will continue to the next piece of middleware etc. To make this work, each middleware function (step of the stack) needs to be able to access information from the request as well as write information to the response. Express handles this by passing the request and response objects as arguments to the middleware function but this is just one way to do it.
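As a hedged sketch of that flow (the header name, lookupUser helper, and auth rules are made up for illustration):

var express = require('express');
var app = express();

// Stand-in for a real identity lookup.
function lookupUser(authHeader) {
  return authHeader ? { name: 'demo' } : null;
}

// Identity middleware: reads from the request, writes to the response.
app.use(function identify(req, res, next) {
  req.user = lookupUser(req.headers['authorization']);
  res.set('X-Auth-Checked', 'true');
  next();
});

// Authorization middleware: can end the request early.
app.use(function authorize(req, res, next) {
  if (!req.user) {
    return res.send(401, 'Not authorized'); // close the response, stop the chain
  }
  next(); // authorized: continue to the next middleware
});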
Now, the authors could have decided to implement the library differently, such that each route handler returned an object such as { status: 200, content: "Hello, world" } instead of calling methods on the response object, but this would be a matter of convention, and you could pretty easily write a wrapper around express that let you write your services like this if you wanted.
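For instance, a minimal sketch of such a wrapper (purely illustrative, not part of express):

// Handlers return a plain object; the wrapper applies it to res.
function wrap(handler) {
  return function (req, res) {
    var result = handler(req);
    res.send(result.status, result.content);
  };
}

app.get('/', wrap(function (req) {
  return { status: 200, content: "Hello, world" };
}));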
Hope this helps.
I'm connecting to an external API on my backend.
Data flow: External API -> My backend -> Client side
I know that modules like request or http exist to help with this process.
But when I receive the data from the external API, I need to modify it and add some information before sending this "modified data" to the client side.
I am searching for a tool similar to Backbone Collections on the backend to help me. BB Collections have awesome functions like fetch/sort/each.
Every time I google, I only find frameworks like Backbone for the client side, not the server side.
Edit 1
The tool would have to help me iterate over the array (received from the external API) or access an item with a specific given attribute.
Solved
After studying both options (Lodash and Unirest), I finally decided to use Lodash combined with request.
Try lodash to handle arrays on the server side.
var unirest = require('unirest');

app.get('/api', function (req, res) {
  unirest.get('http://localhost:3000/getlist')
    .header('Accept', 'application/json')
    .end(function (response) {
      response.body.forEach(function (item) {
        // handle item
      });
      res.send();
    });
});
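For reference, a hedged sketch of where lodash could slot into the .end callback above (the 'name' and 'id' attributes are placeholders for whatever the external API returns):

var _ = require('lodash');

var sorted = _.sortBy(response.body, 'name');            // sort, like a BB Collection
var match = _.find(response.body, { id: req.query.id }); // access an item by a given attribute
_.each(sorted, function (item) {
  // handle item
});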
Maybe Unirest?
I've been doing a series of load tests on a simple server to try to determine what is negatively impacting the load on my much more complicated node/express/mongodb app. One of the things that consistently comes up is the string manipulation required for converting an in-memory object to JSON in the express response.
The amount of data that I'm pulling from mongodb via node and sending over the wire is ~200-300 KB uncompressed. (Gzip turns this into 28 KB, which is much better.)
Is there a way to have the native nodejs mongodb driver stringify the results for me? Right now, for each request with the standard .toArray(), we're doing the following:
Query the database, find the results, and transfer them to the node native driver
The native driver then turns them into an in-memory javascript object
My code then passes that in-memory object to express
Express then converts it to a string for node's http response.send using JSON.stringify() (I read the source, Luke.)
I'm looking to get the stringify work done at a c++/native layer so that it doesn't add processing time to my event loop. Any suggestions?
Edit 1:
It IS a proven bottleneck.
There may easily be other things that can be optimized, but here's what the load tests are showing.
We're hitting the same web server with 500 requests over a few seconds, with this code:
app.get("/api/blocks", function(req, res, next){
db.collection('items').find().limit(20).toArray(function(err, items){
if(err){
return next(err);
}
return res.send(200, items);
});
});
overall mean: 323 ms, 820 ms at the 95th percentile
If instead I swap out the json data:
var cached = "[{... "; //giant json blob that is a copy+paste of the response in above code.
app.get("/api/blocks", function(req, res, next){
db.collection('items').find().limit(20).toArray(function(err, items){
if(err){
return next(err);
}
return res.send(200, cached);
});
});
mean is 164 ms, 580 ms at the 95th percentile
Now you might say, "Gosh Will, a mean of 323 ms is great, what's your problem?" My problem is that this is an example in which stringify causes a doubling of the response time.
From my testing I can also tell you these useful things:
Gzip was a 2x or better gain on response time; the above numbers are with gzip.
Express adds a nearly imperceptible amount of overhead compared to generic nodejs.
Batching the data by doing cursor.each and then sending each individual item to the response is way worse
Update 2:
Using a profiling tool: https://github.com/baryshev/look
This is while hitting my production code with the same database-intensive process over and over. The request includes a mongodb aggregate and sends back ~380 KB of data (uncompressed).
The function the profiler flags is very small and includes the line var body = JSON.stringify(obj, replacer, spaces);
It sounds like you should just stream directly from Mongo to Express.
Per this question that asks exactly this:
var JSONStream = require('JSONStream');

cursor.stream().pipe(JSONStream.stringify()).pipe(res);
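One caveat worth noting: when you pipe straight to res you bypass res.send, so Express will not set the Content-Type header for you; you will likely want something like res.type('json') before the pipe.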
When you have a RESTful server which only responds with JSON by fetching some information from the database, and then a client-side application such as Backbone, Ember, or Angular, from which side do you test the application?
Do I need two sets of tests - one for back-end testing and another for front-end testing?
The reason I ask is that testing a REST API by itself is kind of difficult. Consider this code example (using Mocha, Supertest, Express):
var request = require('supertest');
var should = require('chai').should();
var app = require('../app');

describe('GET /api/v1/people/:id', function () {
  it('should respond with a single person instance', function (done) {
    request(app)
      .get('/api/v1/people/:id')
      .expect(200)
      .end(function (err, res) {
        var json = res.body;
        json.should.have.property('name');
        done();
      });
  });
});
Notice that :id in the url? That's an ObjectId of a specific person. How do I know what to pass there? I haven't even looked into the database at this point. Does that mean I need to import the Person model, connect to the database, and do queries from within the tests? Maybe I should just move my entire app.js into the tests? (sarcasm :P). That's a lot of coupling. Dependency on mongoose alone means I need to have MongoDB running locally in order to run this test. I looked into sinon.js, but I am not sure it's applicable here. There weren't many examples of how to stub mongoose.
I am just curious how people test these kinds of applications.
Have you tried using mongoose-model-stub in your server-side test? It will free you from having to remember or hardcode database info for your tests.
As for testing the client side, your "webapp" is basically two apps: a server API and a client-side frontend. You want tests for both ideally. You already know how to test your server. On the client you would test your methods using stubbed out "responses" (basically fake json strings that look like what your web service spits out) from your API. These don't have to be live urls; rather it's probably best if they're just static files that you can edit as needed.
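As a hedged sketch of the client-side half (using sinon's fake server, since you mentioned sinon.js; the URL and person shape are placeholders):

var server = sinon.fakeServer.create();
server.respondWith('GET', '/api/v1/people/1',
  [200, { 'Content-Type': 'application/json' },
   JSON.stringify({ _id: '1', name: 'Abid' })]);

// ... run the client code that fetches /api/v1/people/1 ...

server.respond(); // flush the queued fake response to the client code
server.restore();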
I would use nock: https://github.com/pgte/nock
What you want to test is the code you have written for your route.
So what you do is create a response that will be sent when the endpoint is hit.
Basically it's a fake server.
Something like this.
Your actual method:
request({
  method: "GET",
  url: "http://sampleserver.com/account"
}, function (err, res, data) {
  if (err) {
    done(err);
  } else {
    return done(null, data);
  }
});
Then:
var nockObj = nock("http://sampleserver.com")
  .get("/account")
  .reply(200, mockData.arrayOfObjects);

// your assertions here..
This way you don't alter the functionality of your code. It's like saying: instead of hitting the live server, hit this fake server and get mock data. All you have to do is make sure your mock data is in sync with the expected data.
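Putting the two pieces together, a hedged sketch of the full mocha test (mockData is a fixture you define yourself):

var nock = require('nock');
var request = require('request');

it('returns the account data', function (done) {
  nock('http://sampleserver.com')
    .get('/account')
    .reply(200, mockData.arrayOfObjects);

  // The code under test hits the fake server instead of the live one.
  request({
    method: 'GET',
    url: 'http://sampleserver.com/account'
  }, function (err, res, data) {
    if (err) { return done(err); }
    // your assertions on data here..
    done();
  });
});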
I am writing a web app in node.js. Every piece of processing on the server happens in the context of a session, which is either retrieved or created at the very first stage, when the request hits the server. After this, the execution flows through multiple modules and the callbacks within them. What I am struggling with is creating a programming pattern so that at any point in the code the session object is available without the programmer having to pass it as an argument in each function call.
If all of the code were in one single file I could have used a closure, but when there are function calls to other modules in other files, how do I program so that the session object is available in the called function without passing it as an argument? I feel there should be some link between the two functions in the two files, but how to arrange that is where I am getting stuck.
In general, I would say there is always an execution context, which could be a session or a network request, whose processing is spread across multiple files, and the execution context object should be made available at all points. There are actually multiple use cases, such as having one Log object per network request or one Log object per session. The plumbing required to make this work should be fitted in sideways, without the application programmer bothering about it; he just knows that the execution context is available at all places.
I think this should be a fairly common problem, so please give me some ideas.
The following illustrates the problem:
MainServer.js
app = require('express').createServer();
app_module1 = require('./AppModule1');

var session = get_session();

app.get('/my/page', app_module1.func1);
AppModule1.js
app_module2 = require('./AppModule2');

exports.func1 = function (req, res) {
  // I want to know which session context this code is running in
  app_module2.func2(req, res);
};
AppModule2.js
exports.func2 = function (req, res) {
  // I want to know the session context in which this code is running
};
You can achieve this using Domains -- a new node 0.8 feature. The idea is to run each request in its own domain, providing a space for per-request data. You can get to the current request's domain, without having to pass it all over, via process.domain.
Here is an example of getting it set up to work with express:
How to use Node.js 0.8.x domains with express?
Note that domains in general are somewhat experimental and process.domain in particular is undocumented (though apparently not going away in 0.8 and there is some discussion on making it permanent). I suggest following their recommendation and adding an app-specific property to process.domain.data.
https://github.com/joyent/node/issues/3733
https://groups.google.com/d/msg/nodejs-dev/gBpJeQr0fWM/-y7fzzRMYBcJ
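A hedged sketch of what that setup could look like (node 0.8-era API; the data property is the app-specific slot mentioned above):

var domain = require('domain');

// Per-request middleware: run the rest of the chain inside a fresh domain.
app.use(function (req, res, next) {
  var d = domain.create();
  d.add(req);
  d.add(res);
  d.data = { session: req.session }; // app-specific property, as recommended
  d.run(next);
});

// Later, deep inside AppModule2, without threading arguments:
exports.func2 = function (req, res) {
  var session = process.domain.data.session;
};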
Since you are using Express, you can get the session attached to every request. The implementation is as follows:
var express = require('express');
var app = express.createServer();

app.configure('development', function () {
  app.use(express.cookieParser());
  app.use(express.session({ secret: 'foo', key: 'express.sid' }));
});
Then upon every request, you can access session like this:
app.get('/your/path', function (req, res) {
  console.log(req.session);
});
I assume you want to have some kind of unique identifier for every session so that you can trace its context. SessionID can be found in the 'express.sid' cookie that we are setting for each session.
app.get('/your/path', function (req, res) {
  console.log(req.cookies['express.sid']);
});
So basically, you don't have to do anything other than add the cookie parser and enable sessions for your express app; then, when you pass the request to these functions, you can recognize the session ID. You MUST pass the request though; you cannot build a system where it just knows the session, because you are writing a server, and the session is available per request.
What express does, and the common practice for building an HTTP stack on node.js, is to use HTTP middleware to "enhance" or add functionality to the request and response objects coming into the callback from your server. It's very simple and straightforward.
module.exports = function (req, res, next) {
  req.session = require('my-session-lib');
  next();
};
req and res are automatically passed into your handler, and from there you'll need to keep them available to the appropriate layers of your architecture. In your example, it's available like so:
AppModule2.js
exports.func2 = function (req, res) {
  // the session context this code is running in:
  req.session; // <== right here
};
Nodetime is a profiling tool that does internally what you're trying to do. It provides a function that instruments your code in such a way that calls resulting from a particular HTTP request are associated with that request. For example, it understands how much time a request spent in Mongo, Redis, or MySQL. Take a look at the video on the site to see what I mean: http://vimeo.com/39524802.
The library adds probes to various modules. However, I have not been able to see how exactly the context (url) is passed between them. Hopefully someone can figure this out and post an explanation.
EDIT: Sorry, I think this was a red herring. Nodetime is using the stack trace to associate calls with one another. The results it presents are aggregates across potentially many calls to the same URL, so this is not a solution for the OP's problem.