I've read through node-postgres's API documentation.
It recommends using the pg object to create pooled clients, and the pg.connect API documentation says:
The connect method retrieves a Client from the client pool, or if all
pooled clients are busy and the pool is not full, the connect method
will create a new client passing its first argument directly to the
Client constructor.
So according to that recommendation, does using pg.connect mean "using the pg object to create pooled clients"? If not, what does it actually mean?
And in my implementation example, I made several queries in my route:
app.get('/post', function(req, res) {
  pg.connect(dbconfig, function(err, client, done) {
    client.query('SELECT * FROM post', function(err, result) {
      done();
      res.render('post/list', { posts: result.rows });
    });
  });
});
app.get('/post/new', function(req, res) {
  res.render('post/new');
});
app.post('/api/v1/post', function(req, res) {
  var b = req.body;
  pg.connect(dbconfig, function(err, client, done) {
    client.query('INSERT INTO post (title, content) VALUES ($1, $2)',
      [b.title, b.content],
      function(err, result) {
        done();
        res.redirect('/post');
      });
  });
});
Is it the right way to call pg.connect each time I want to make query? If not, what is the better idea?
It does look, according to the documentation, like pg.connect() handles pooled connections. I would, however, suggest one thing you could likely do better (assuming your app only uses one set of credentials).
If I were doing this, I would cut down on duplicated effort/keystrokes/opportunities for error by wrapping pg.connect() in some sort of function that hands you the client. That would let you write something more like:
app.get('/post', function(req, res) {
  db.run(function(client) {
    client.query('SELECT * FROM post', function(err, result) {
      res.render('post/list', { posts: result.rows });
    });
  });
});
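In case it helps, the wrapper could be as small as the sketch below. The makeDb/run names are made up for illustration; the only real API used is pg.connect itself:

```javascript
// Hypothetical wrapper around pg.connect (makeDb and run are made-up
// names): routes get a pooled client without repeating the config and
// error-handling boilerplate every time.
function makeDb(pg, dbconfig) {
  return {
    run: function(work) {
      pg.connect(dbconfig, function(err, client, done) {
        if (err) {
          return console.error('error fetching client from pool', err);
        }
        // Hand the client to the route, plus the release function so the
        // caller can return the client to the pool when it is finished.
        work(client, done);
      });
    }
  };
}
```

With `var db = makeDb(pg, dbconfig)` run once at startup, a route can obtain a client through db.run and, when it wants, call the second argument to release the client back to the pool.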
However, given the way you have things set up, I'm not convinced you would gain much from such an approach, so I don't see anything wrong with yours.
This might be a little outdated now, but take a look at this:
https://github.com/aichholzer/Bodega
It will take care of any worries and make your coding experience a little more pleasant.
:)
I would like to POST at the path /users and then immediately POST to /users/:id, but the actions need to be different at each of these URLs, so I can't use the array method for applying the same middleware to different URLs.
The idea is that POST(/users/:id, ...) will never be called by the client. It only gets called immediately after POST(/users, ...)
When using Express, you provide a handler function for a specific endpoint. Actually, it's an array of those functions (middlewares). That means you can switch from:
route.post('/users/', (req, res, next) => {
  // do your magic
});
to
route.post('/users/', handleMyCall);
This way you can easily reuse those functions in multiple endpoints without needing to actually make requests:
route.post('/users/', (req, res) => {
  // do something +
  handleMyCall(req, res);
  // either return the result of this call, or another result
});
route.post('/users/:userID', (req, res) => {
  // do another operation +
  handleMyCall(req, res);
});
Update:
Using GET or POST differs in how the data is sent to the server. You can use either for your cases; it really depends on the testing client you have.
Typically, a GET request is done to query the database and not do any actions. POST is usually used to create new entities in the database.
In your scenario, I'd guess you would have post('/users') in order to create a user, and then get('/users/:userID') to find that user and return it to the client.
You can easily have different endpoints with different handlers for those cases.
As I understood from the comments, you need a POST request on /users (to persist data in some database) and a GET on /users/:id to retrieve that data, which is very different from POSTing the same thing to 2 different endpoints.
POST is generally used to persist and GET to retrieve data.
I'll assume you use some kind of NoSQL DB, perhaps MongoDB. MongoDB generates a unique ID for each document you persist in it.
So you'll have to have 2 routes:
const postUser = async (req, res, next) => {
  try {
    // persist your user here, perhaps with mongoose or native mongo driver
  } catch (e) {
    return next(e);
  }
}

const getUserById = async (req, res, next) => {
  try {
    // get your user here thanks to the id, in req.params.id
  } catch (e) {
    return next(e);
  }
}

export default (router) => {
  router.route('/users').post(postUser);
  router.route('/users/:id').get(getUserById);
};
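If it helps to see the handlers filled in, here is a sketch using a plain in-memory Map as a stand-in for MongoDB. None of the storage calls below are real mongoose/driver API; swap them for your persistence layer:

```javascript
// In-memory stand-in for MongoDB: a Map plus a counter that plays the
// role of the generated document id. Everything here is illustrative.
const users = new Map();
let nextId = 1;

const postUser = async (req, res, next) => {
  try {
    // "persist" the user; a real app would call mongoose/the mongo driver
    const id = String(nextId++);
    const user = Object.assign({ id: id }, req.body);
    users.set(id, user);
    res.status(201).json(user);
  } catch (e) {
    return next(e);
  }
};

const getUserById = async (req, res, next) => {
  try {
    // look the user up via the id in req.params.id
    const user = users.get(req.params.id);
    if (!user) {
      return res.status(404).json({ error: 'user not found' });
    }
    res.json(user);
  } catch (e) {
    return next(e);
  }
};
```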
I'm working on a very simple web app that doesn't do much, however it does connect to a database for INSERT and SELECT operations on one table. I have a function that I put together while browsing through several great tutorials, however I'm having trouble returning the rows from the SELECT query. Keeping in mind I'm learning Node.JS: how would I get the data returned from the SELECT query back to this block?
app.post("/getcsv", function(req, res) {
  var sqlselall = "SELECT * FROM office";
  var rows = handle_database(sqlselall);
  res.json(rows);
  res.end();
});
The function for handling the database connections (using pooling):
function handle_database(sqlstmt) {
  pool.getConnection(function(err, connection) {
    if (err) {
      res.json({"code": 100, "status": "Error in connection to database."});
      return;
    }
    console.log('connected as id ' + connection.threadId);
    connection.query(sqlstmt, function(err, rows) {
      connection.release();
      if (!err) {
        console.log("Number of rows affected: " + rows.affectedRows);
      }
    });
    connection.on('error', function(err) {
      res.json({"code": 100, "status": "Error in connection to database."});
      return;
    });
  });
}
I realize that the rows in the inner function contain the data I need, but I'm at a loss as to how to return it when I call the function.
If I could have commented on your answer I would have. This is something I'd like to add to it, because your answer looks like it should work, although I have not tested it personally.
From my experience of trying to learn the callback style, I think this might help you. It helps keep the code a little more modular.
app.post("/getcsv", function(req, res) {
  var sqlselall = "SELECT * FROM office"
  select_query(sqlselall, function(results) {
    res.send(results)
  })
})

function select_query(sqlstmt, callback) {
  pool.query(sqlstmt, function(err, results, fields) {
    //I suppose if this is how you want to handle errors
    //for debugging purposes I like returning them as well
    //returning it helps both you and others who might be working on
    //the front end to know what's happening
    if (err) throw err
    callback(JSON.stringify(results))
  })
}
This way your select_query function doesn't require res to be passed in, and doesn't rely on a parameter that has a function in order to work. Sometimes that can't be helped, but when it can, I find it's easier for maintenance to take that into account.
In the hypothetical situation that you have another endpoint that also needs to query, but needs to modify the information before you send it, you would still be able to use your select_query function and just modify the callback you pass into it. You would end up with something like this (I also changed the error handling a little):
app.post("/getcsv", function(req, res) {
  var sqlselall = "SELECT * FROM office"
  select_query(sqlselall, function(err, results) {
    if (err) {
      //throw an error if you would like to here
      return res.send(err)
    }
    res.send(results)
  })
})

app.post("/modifyCSV", function(req, res) {
  var sql = "{sql statement}"
  select_query(sql, function(err, results) {
    if (err) {
      //throw an error if you would like to here
      return res.send("Aww an error: " + err)
    }
    res.send(results + "\nHello to you too")
  })
})

function select_query(sqlstmt, callback) {
  pool.query(sqlstmt, function(err, results, fields) {
    if (err) return callback(JSON.stringify(err), null)
    callback(null, JSON.stringify(results))
  })
}
Like I said, I am not saying your way is wrong, it works, and perhaps it will work better for you. I have just found that this helped me get a handle on callbacks, and actually start to enjoy using them.
Answering my own question may seem inappropriate, especially since some may consider it a duplicate (which, due to my lack of knowledge of Node.JS and JS in general, is likely). However, once I did some research (thanks #Paul, and callback hell) and gained clarification on some fundamentals regarding functions, callbacks, anonymous functions, and the nature of a function's scope in JavaScript, I was able to come up with a solution to my problem. For my connection to the DB I created a new, simplified function which is passed the parameter 'res' from the callback parameter (which I now somewhat understand) of app.post:
app.post("/getcsv", function(req, res) {
  var sqlselall = "SELECT * FROM office"
  var thisData = select_query(sqlselall, res)
})

function select_query(sqlstmt, res) {
  pool.query(sqlstmt, function(err, results, fields) {
    if (err) throw err
    res.send(JSON.stringify(results))
  })
}
Getting started with Node.js and Heroku; I am trying to make sense of the following code, in order to build something of my own:
app.get('/db', function(request, response) {
  pg.connect(process.env.DATABASE_URL, function(err, client, done) {
    client.query('SELECT * FROM test_table', function(err, result) {
      done();
      if (err) {
        console.error(err);
        response.send("Error " + err);
      } else {
        response.render('pages/db', { results: result.rows });
      }
    });
  });
});
Where can I find a tutorial or some comments or explanations for that?
Even though I can do some guessing, a good deal of this code is pretty mysterious.
Currently my main concerns are:
1. What happens if I change the SQL query, replacing it with 'SELECT count(*) FROM test_table'? How do I then render the result?
2. What does "done();" do? Is it something I can modify or make use of?
3. The parameter "request" is never used. Can it be used for something at some point?
Before tackling Heroku, you should first look at tutorials about web applications in Node.js; that will answer your last question.
You can see how express.js, a web framework, works.
Then look at the node-postgres documentation. You will find the answer to your second question here:
// this initializes a connection pool
// it will keep idle connections open for 30 seconds
// and set a limit of maximum 10 idle clients
var pool = new pg.Pool(config);

// to run a query we can acquire a client from the pool,
// run a query on the client, and then return the client to the pool
pool.connect(function(err, client, done) {
  if (err) {
    return console.error('error fetching client from pool', err);
  }
  client.query('SELECT $1::int AS number', ['1'], function(err, result) {
    // call `done()` to release the client back to the pool
    done();
    if (err) {
      return console.error('error running query', err);
    }
    console.log(result.rows[0].number);
    // output: 1
  });
});
And finally, why don't you just log the result output after changing the SQL query and look at what you get?
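For your first question: 'SELECT count(*) FROM test_table' comes back as a single row with a single column named count, and node-postgres hands you the value as a string (because Postgres count() returns bigint). A sketch, where renderCount is a made-up helper name and client/done/response are the same objects your snippet already has in scope:

```javascript
// renderCount is a hypothetical helper; client, done and response are the
// same objects the Heroku snippet already has in scope.
function renderCount(client, done, response) {
  client.query('SELECT count(*) FROM test_table', function(err, result) {
    done(); // release the client back to the pool
    if (err) {
      console.error(err);
      return response.send('Error ' + err);
    }
    // one row, one column named "count"; node-postgres returns the
    // bigint value as a string, so convert it if you need a number
    var count = parseInt(result.rows[0].count, 10);
    response.render('pages/db', { count: count });
  });
}
```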
I've checked two similar questions here and neither of the things suggested in the comments are working for me.
app.get('/:id', function(req, res) {
  console.log(req.params.id);
});
app.get('/:id', function(req, res) {
  db.query("SELECT * FROM entries WHERE id = $1", [req.params.id], function(err, dbRes) {
    if (!err) {
      res.render('show', { entry: dbRes.rows[0] });
    }
  });
});
As you can see, I've tried logging the result to the console to see what's going on. Visiting the URL in question just makes the page load until it times out. In the console, I get "undefined".
How do I define req.params? Or where is its definition being pulled from, and why isn't it returning the values?
Full context: http://pastebin.com/DhWrPvjP
Just tested your code and it works fine. I think you might be missing your url parameter. It should be http://localhost:3000/1 - or whatever ID you're trying to retrieve. Try it out.
Also, you should pass the extended option to your bodyParser.urlencode method: express throws error as `body-parser deprecated undefined extended`
Edit: To specifically answer your question about defining request parameters. You don't have to do anything to define request parameters other than make sure that you're passing in the correct URL. Express takes care of parsing the URL and defining the request parameters for you. So, if you go to the URL http://localhost/jimbob on your server then the value passed in for the id parameter will be available as req.params.id. See this link on request parameters for more info.
Edit 2: You could try debugging your app to see what you get. Here's a link on how to enable debugging in Express and how to use node-inspector for debugging. I saw that you're running this on Ubuntu, so there may be something weird there that I'm not aware of. (I'm running it on a Mac.)
I would also check the version of Node on the computer(s) where the app works and compare it with the version in your Ubuntu environment (or whatever computers the app doesn't work on).
app.get('/:id', function(req, res) {
  db.query("SELECT * FROM entries WHERE id = $1", [req.params.id], function(err, dbRes) {
    if (!err) {
      res.render('show', { entry: dbRes.rows[0] });
    }
  });
});
In your code, with the URL localhost/some-id, req.params.id would equal some-id. Params are pulled straight from the URL string. If you are trying to send info with POST or GET methods, you want to use req.body and req.query respectively. I don't see any reason you wouldn't be able to get the id unless the URL is wrong.
Or, if you need to do it manually:
app.get('/:id', function(req, res) {
  // if no req.params, and assuming the id is the last item in the url
  var urlArray = req.url.split('/'),
      id = urlArray[urlArray.length - 1];
  db.query("SELECT * FROM entries WHERE id = $1", [id], function(err, dbRes) {
    if (!err) {
      res.render('show', { entry: dbRes.rows[0] });
    }
  });
});
Try req.param('id'). It may work for you. :D (Note that req.param() is deprecated in newer versions of Express, so prefer req.params.id.)
I know I'm late to the party, but this post helped me debug my issue, so I figured I'd add my suggestion in hopes it will help someone else.
If you are using mysql2 with promises:
const mysql = require("mysql2");

const pool = mysql.createPool({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_DATABASE
});

module.exports = pool.promise();
Then you need to use promises in your request.
router.get("/:id", (req, res) => {
  mysql
    .execute("SELECT * FROM entries WHERE id = ?", [req.params.id])
    .then(result => {
      res.send(result[0]);
    })
    .catch(err => {
      console.log(err);
    });
});
I spent hours debugging my code only to realize I was using promise() in my connection. Hope this helps as this post helped me debug.
A web app I'm building will send out invoices to clients every third month. This will be a scheduled event that is run in the middle of the night, but under development I have put this code into a route so I can test it.
In short, I want the code to do the following:
1. Query all unsent invoices from the DB.
2. Make a call to Mandrill for each invoice (in this call I'm also invoking a function that creates a Mandrill message object from the invoice).
3. For every message Mandrill sends, update the DB invoice with sent: true.
4. When all invoices are sent, make a final callback in the async.waterfall.
The code below works, but I have some concerns regarding the _.each.
invoices.post('/invoices/send/', function(req, res, next) {
  async.waterfall([
    // Query all unsent invoices
    function(callback) {
      db.invoices.find({sent: false}).toArray(callback);
    },
    // Send all unsent invoices
    function(invoices, callback) {
      if (invoices.length === 0) {
        var err = new Error('There are no unsent invoices');
        err.status = 400;
        return next(err); // Quick escape if there are no matching invoices to process
      }
      // Make a call to the Mandrill transactional email service for every invoice.
      _.each(invoices, function(invoice) {
        mandrillClient.messages.sendTemplate({
          template_name: "planpal-invoice",
          template_content: null,
          message: mandrillClient.createInvoiceMessage(invoice)
        }, function(sendResult) {
          console.log(sendResult);
          db.invoices.updateById(invoice._id, {$set: {sent: true}}, function(err, saveResult) {
            console.log(saveResult);
          });
        }, function(err) {
          return next(err);
        });
      });
      callback(null, 'done');
    }
  ],
  function(err, result) {
    if (err) {
      return next(err);
    }
    res.json(result);
  });
});
I'm thinking I should use async.eachLimit instead, but I don't know how to write it.
I have no idea what I should set the limit to, but I guess several parallel requests would be better than running all the Mandrill requests in series like above. Am I wrong? EDIT: _.each runs the callbacks in parallel. The difference from async.each is that I don't get a "final callback".
Conclusion: Should I use async.eachLimit above? If yes, what is a good limit value?
I think you can use the https://github.com/caolan/async#each function.
It will execute the queries in parallel too.
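To illustrate the contract that each/eachLimit give you (a worker per item, bounded concurrency, a single final callback once everything finishes or on the first error), here is a minimal hand-rolled version. This is only a sketch of the idea, not the real async library:

```javascript
// Minimal hand-rolled illustration of the each/eachLimit contract (this
// is NOT the real async library): run `worker` for every item, at most
// `limit` at a time, then call `done` once, either with the first error
// or with null when every item has finished.
function eachLimit(items, limit, worker, done) {
  var pending = items.length;
  var next = 0;
  var failed = false;
  if (pending === 0) { return done(null); }
  function launch() {
    if (next >= items.length) { return; }
    var item = items[next++];
    worker(item, function(err) {
      if (failed) { return; } // an earlier item already reported failure
      if (err) { failed = true; return done(err); }
      if (--pending === 0) { return done(null); }
      launch(); // start the next item as this slot frees up
    });
  }
  var starters = Math.min(limit, items.length);
  for (var i = 0; i < starters; i++) { launch(); }
}
```

In your code you would pass the invoices array and a worker that sends one invoice and marks it sent, and move callback(null, 'done') into the final callback instead of calling it right after the loop. As for the limit, a small value like 5 keeps you from opening dozens of simultaneous connections; the right number depends on Mandrill's rate limits.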