Coming from the .NET world, where synchrony is a given and I can query my data from a back-end source such as a database, Lucene, or even another API, I'm having trouble finding a good sample of this for Node.js, where async is the norm.
The issue I'm having is that a client makes an API call to my hapi server, and from there I need to take in the parameters, form an Elasticsearch query, call it using the request library, and then wait for the result to return before populating my view and sending it back to the client. The problem is that the request library uses a callback once the data is returned, and by then the empty view has long since been returned to the client.
Placing the return inside the callback doesn't work either, since the end of the JavaScript function has already been reached and null returned in its place. What is the best way to retrieve data within a service call?
EX:
var request = require('request');

var options = {
    url: 'localhost:9200',
    // path: {params} (the request path built from the client's parameters)
    body: {
        // {params} (the Elasticsearch query body)
    }
};

request.get(options, function(error, response) {
    // do data manipulation and set view data
});

// generate the view and return it to the client... but this line runs
// before the callback above has fired, so the view goes back empty
Wrap the request call in your hapi handler and nest the callbacks so that the async tasks execute in the correct logical order. Pseudo hapi handler code is as follows:
function (request, reply) {
    Elasticsearch.query((err, results) => {
        if (err) {
            return reply('Error occurred getting info from Elasticsearch')
        }
        // data is available for the view here; build it and reply
        // inside the callback
        return reply(results)
    });
}
As I said earlier in your last question, use hapi's pre handlers to help you do async tasks before replying to your client. See the docs here for more info. Also, use wreck instead of request; it is more robust and simpler to use. A sketch of a pre handler follows.
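For illustration, a minimal sketch of a route with a pre handler, in the callback style of hapi v16 and earlier; queryElasticsearch is a hypothetical helper wrapping the search call:

server.route({
    method: 'GET',
    path: '/search',
    config: {
        pre: [{
            assign: 'results',
            method: function (request, reply) {
                // hypothetical helper that runs the Elasticsearch query
                queryElasticsearch(request.query, function (err, results) {
                    if (err) {
                        return reply(err); // an error here ends the lifecycle with a 500
                    }
                    return reply(results); // exposed to the handler as request.pre.results
                });
            }
        }],
        handler: function (request, reply) {
            // the pre handler has already completed by the time we get here
            return reply(request.pre.results);
        }
    }
});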
Related
I have a database with a table listing users (and their respective usernames and passwords). I have a login system that runs a function which returns false when the user is not found and the username/password when it is found. That is then used in the login system, but that's irrelevant here.
My problem is that this function has to send a query to the database, but the callback can't actually return anything, so I can't just use
await database.query('query things', data = await function(err, res) {function things});
I tried declaring a variable and then, inside the query callback, setting that variable to the fetched data; outside of that query, the await holds the rest of the code back, but it just returns undefined. The await does not do anything, because of the multiple promise resolves (I think). So what ends up happening is that the function returns undefined and the server crashes.
Is there a way that I can pass the fetched data through the function and back to the login system?
Structure:
login calls fetch function; fetch function queries database; callback sets a passthrough; fetch function receives that passthrough and parses it and returns things to the login.
Thank you for being helpful, internet. Unlike last time.
It turns out that, because there is a callback inside, the function does not return a promise, so async/await is not usable on it directly. I still wanted the server not to crash instantly when an error occurred, and try/catch works fine for that.
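That said, a minimal sketch of wrapping the callback in a Promise, which makes the fetch function awaitable; this assumes a node-style database.query(sql, params, callback) API such as the mysql package's, and the names are illustrative:

function fetchUser(username) {
    return new Promise(function (resolve, reject) {
        database.query(
            'SELECT * FROM users WHERE username = ?',
            [username],
            function (err, rows) {
                if (err) return reject(err);
                // mirror the original contract: false when the user is not found
                resolve(rows.length ? rows[0] : false);
            }
        );
    });
}

// In the login system:
// try {
//     const user = await fetchUser(name);
// } catch (err) {
//     // handle the error instead of letting the server crash
// }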
I've got a small Express.js API that I'm building to handle and process multiple incoming requests from the browser, and I'm having some trouble figuring out the best approach to handle them.
The use case is that there's a form, with potentially up to 30 or so people submitting form data to the Express.js API at any given time. The API then POSTs this data off to some other place using axios, and each submission needs to return a response back to the browser of the person that submitted it. My endpoint so far is:
app.post('/api/process', (req, res) => {
if (!req.body) {
res.status(400).send({ code: 400, success: false, message: "No data was submitted" })
return
}
const application = req.body.Application
axios.post('https://example.com/api/endpoint', application)
.then(response => {
res.status(200).send({ code: 200, success: true, message: response })
})
.catch(error => {
res.status(200).send({ code: 200, success: false, message: error })
});
})
If John and James submit form data from different browsers to my Express.js API, which forwards it on to another API, I need the respective responses to go back to the respective browsers...
Let's make this clear: the response to a request is only ever sent back to the requester, so with your code each submitter already receives their own response. But if you want to accept a processing request and reply with something like "hey, I received your request; you can use another GET route to fetch the result some time later", then you need a way to identify which job is meant. You can generate a UUID when the server receives a process request and send it back to the sender as the response: "I received your process request; you can check the result of processing some time later, and this UUID is your reference code." The client then passes the UUID as a route or query parameter, and the server sends back the correct result.
This is also the usual pattern when you are using WebSockets: send a process request to the server, the server sends back a UUID reference code, and some time later the server pushes the result to the requester's socket, saying "hey, this is the result of the process with that UUID reference code."
I hope that is clear enough; a rough sketch of the pattern follows.
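A minimal sketch of that UUID job pattern in Express, assuming Node's built-in crypto.randomUUID() (Node 14.17+) and an in-memory job store; a real application would persist jobs somewhere durable:

const express = require('express');
const crypto = require('crypto'); // randomUUID requires Node 14.17+
const axios = require('axios');

const app = express();
app.use(express.json());

const jobs = {}; // jobId -> { status, result }

app.post('/api/process', (req, res) => {
    if (!req.body || !req.body.Application) {
        return res.status(400).send({ success: false, message: 'No data was submitted' });
    }

    const jobId = crypto.randomUUID();
    jobs[jobId] = { status: 'pending', result: null };

    // kick off the upstream call without holding the response open
    axios.post('https://example.com/api/endpoint', req.body.Application)
        .then(response => { jobs[jobId] = { status: 'done', result: response.data }; })
        .catch(error => { jobs[jobId] = { status: 'failed', result: error.message }; });

    // reply immediately with the reference code
    res.status(202).send({ success: true, jobId: jobId });
});

// the client later polls this route with the UUID it was given
app.get('/api/process/:jobId', (req, res) => {
    const job = jobs[req.params.jobId];
    if (!job) {
        return res.status(404).send({ success: false, message: 'Unknown job id' });
    }
    res.send(job);
});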
I know how to send an HTTP request to a server using AngularJS. With the promise returned, I know how to listen for a response and manipulate the UI thereafter. But this approach cannot be used for what I have in mind.
However, what I cannot figure out is how to send a request the other way, from the server to the website.
I have a server route localhost:800/receiveData which receives a POST request and should then manipulate the UI and DOM on the AngularJS site:
app.get('/', function(req,res){
res.sendFile(__dirname+'/index.html')
})
app.post('/receiveData', function(req,res){
var data = req.body.data
// assume data is a boolean
if(data){
//show a view in index.html using angular js or anything else
}else {
//show a different view in index.html
}
});
Any help will be greatly appreciated. I have a need for AngularJS, and having a SPA is imperative. I am completely open to adding additional stacks if necessary.
EDIT:
As pointed out by MarcoS, manipulation of the DOM should ideally not happen from the server side. I am combining IPFS with Node.js and AngularJS to develop a single-page application. The swarm of nodes set up using IPFS has an open line of communication with my server (by design). Based on packets of data sent via that line to my server, I need to convey messages to the user via index.html.
I think your approach is wrong: on server-side, you should NOT manipulate the UI and DOM...
You should just do server activity (update a database, send an email, ..., return a static page).
Then you can output a result (JSON/XML/... format) for your client-side calling script to read.
Following the OP's edit, what I understand is that he wants server push to the client.
To get server-side pushes, you should poll on the client.
In a controller:
function getServerState(changeState) {
    // use POST to match the app.post route below
    return $http.post("/receiveData", {}).then(function(res) {
        changeState(res.data); // notify the watcher
    }).catch(function(e) {
        /* handle errors here */
    }).then(function() {
        return getServerState(changeState); // poll again once the previous call completes
    });
}
Consuming it this way:
getServerState(function(status) {
$scope.foo = status; // changes to `foo` $scope variable will reflect instantly on the client
});
And, server side:
app.post('/receiveData', function(req, res) {
    var data = req.body.data; // assume data is a boolean
    res.end(JSON.stringify(data));
});
I'm running a small Angular application with a Node/Express backend.
In one of my Angular factories (i.e. on the client side) I make an $http request to GitHub to return user info. However, a GitHub-generated key (which is meant to be kept secret) is required to do this.
I know I can't use process.env.XYZ on the client side. I'm wondering how I could keep this api key a secret? Do I have to make the request on the back end instead? If so, how do I transfer the returned Github data to the front end?
Sorry if this seems simplistic but I am a relative novice, so any clear responses with code examples would be much appreciated. Thank you
Unfortunately you have to proxy the request through your backend to keep the key secret. (I am assuming that you need some user data that is unavailable via an unauthenticated request like https://api.github.com/users/rsp?callback=foo, because otherwise you wouldn't need to use API keys in the first place; but you didn't say specifically what you need to do, so that is just my guess.)
What you can do is something like this: in your backend, add a new route just for your frontend to get the info from. It can do whatever you need: use secret API keys or not, verify the request, process the response before returning it to your client, etc.
Example:
var app = require('express')();
app.get('/github-user/:user', function (req, res) {
getUser(req.params.user, function (err, data) {
if (err) res.json({error: "Some error"});
else res.json(data);
});
});
function getUser(user, callback) {
// a stub function that should do something more
if (!user) callback("Error");
else callback(null, {user:user, name:"The user "+user});
}
app.listen(3000, function () {
console.log('Listening on port 3000');
});
In this example you can get the user info at:
http://localhost:3000/github-user/abc
The function getUser should make an actual request to GitHub, and before you call it you can check whether it is really your frontend that is making the request, e.g. by checking the "Referer" header or other things, validating the input, etc.
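For example, a minimal sketch of what a real getUser might look like, using the request library with a token kept in a server-side environment variable (the GITHUB_TOKEN name and the exact error handling are assumptions):

var request = require('request');

function getUser(user, callback) {
    if (!user) return callback("Error");
    request.get({
        url: 'https://api.github.com/users/' + encodeURIComponent(user),
        json: true,
        headers: {
            'User-Agent': 'my-app', // GitHub's API requires a User-Agent header
            'Authorization': 'token ' + process.env.GITHUB_TOKEN // never sent to the browser
        }
    }, function (err, response, body) {
        if (err || response.statusCode !== 200) return callback("GitHub request failed");
        callback(null, body);
    });
}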
Now, if you only need public info, then you may be able to use a public JSON-P API like this (an example using jQuery to keep things simple):
var user = prompt("User name:");
var req = $.getJSON('https://api.github.com/users/'+user);
req.then(function (data) {
console.log(data);
});
I'm trying to figure out a way to cache my knockoutJS SPA data and I've been experimenting with amplifyJS. Here's one of my GET functions:
UserController.prototype.getUsers = function() {
var self = this;
return $.ajax({
type: 'GET',
url: self.Config.api + 'users'
}).done(function(data) {
self.usersArr(ko.utils.arrayMap(data.users, function(item) {
// run each item through model
return new self.Model.User(item);
}));
}).fail(function(data) {
// failed
});
};
Here's the same function, "amplified":
UserController.prototype.getUsers = function() {
var self = this;
if (amplify.store('users')) {
self.usersArr(ko.utils.arrayMap(amplify.store('users'), function(item) {
// run each item through model
return new self.Model.User(item);
}));
} else {
return $.ajax({
type: 'GET',
url: self.Config.api + 'users'
}).done(function(data) {
self.usersArr(ko.utils.arrayMap(data.users, function(item) {
// run each item through model
return new self.Model.User(item);
}));
}).fail(function(data) {
// failed
});
    }
};
This works as expected, but I'm not sure about the approach I used, because it will also require extra work on the addUser, removeUser and editUser functions. And seeing as I have many more similar functions throughout my app, I'd like to avoid the extra code if possible.
I've found a way of handling things with the help of ko.extenders, like so:
this.usersArr = ko.observableArray().extend({ localStore: 'users' });
Then the ko.extenders.localStore function updates the local storage data whenever it detects a change inside the observableArray. So on init it writes to the observableArray in case local storage data exists for the users key, and on changes it updates the local storage data. A rough sketch of such an extender is below.
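For reference, roughly what such an extender might look like; this is an assumed reconstruction, since the actual implementation is kept on a separate page:

ko.extenders.localStore = function (target, key) {
    // initialize the observable from local storage if data exists for this key
    var stored = amplify.store(key);
    if (stored !== undefined) {
        target(stored);
    }
    // write back to local storage on every change
    target.subscribe(function (newValue) {
        amplify.store(key, ko.toJS(newValue));
    });
    return target;
};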
My problem with this approach is that I need to run my data through the model and I couldn't find a way to do that from the localStore function, which is kept on a separate page.
Have any of you worked with KO and Amplify? What approach did you use? Should I use the first one, or try a combination of the two and rewrite the extender so that it only updates the local storage without writing to the observableArray on init?
Following the discussion in the question's comments, I suggested to use native HTTP caching instead of adding another caching layer on the client by means of an extra library.
This would require implementing a conditional request scheme.
Such a scheme relies on freshness information in the Ajax response headers via the Last-Modified (or ETag) HTTP headers, plus other headers that influence browser caching (like Cache-Control with its various options).
The browser transparently sends an If-Modified-Since (or If-None-Match) header to the server when the same resource (URL) is requested subsequently.
The server can respond with HTTP 304 Not Modified if the client's information is still up-to-date. This can be a lot faster than re-creating a full response from scratch.
From the Ajax request's point of view (jQuery or otherwise) a response works the same way no matter whether it actually came from the server or from the browser's cache; the latter is just a lot faster.
Carefully adapting the server side is necessary for this; the client side, on the other hand, does not need much change.
The benefit of implementing conditional requests is reduced load on the server and faster response behavior on the client. A minimal server-side sketch follows.
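For illustration only, here is a minimal Express sketch of the server side of such a scheme using an ETag; the route and the loadUsers data source are assumptions. Express's req.fresh compares the client's If-None-Match header against the ETag set on the response:

var express = require('express');
var crypto = require('crypto');
var app = express();

app.get('/api/users', function (req, res) {
    var users = loadUsers(); // hypothetical data source
    var body = JSON.stringify(users);

    // derive a validator from the payload and advertise it to the client
    var etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';
    res.set('ETag', etag);
    res.set('Cache-Control', 'private, must-revalidate');

    if (req.fresh) {
        // the client's cached copy is still valid: answer cheaply
        return res.status(304).end();
    }
    res.type('json').send(body);
});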
A specialty of Knockout to improve this even further:
If you happen to use the mapping plugin to map raw server data to a complex view model, you can define, as part of the options that control the mapping process, a key function. Its purpose is to match parts of your view model against parts of the source data.
This way, parts of the data that have already been mapped will not be mapped again; the rest is updated. That can help reduce the client's processing time for data it already has and, potentially, avoid unnecessary screen updates as well. A small sketch:
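Here the usersArr observable array and the id field are assumptions about the view model and the server data; the User constructor is the one from the question:

var mapping = {
    key: function (item) {
        // match existing view-model entries by id so they are updated in place
        return ko.utils.unwrapObservable(item.id);
    },
    create: function (options) {
        // run new items through the model, as in the question's code
        return new self.Model.User(options.data);
    }
};
// merge fresh server data into the existing observable array
ko.mapping.fromJS(data.users, mapping, self.usersArr);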