Is making a POST request inside a GET route unRESTful? - javascript

In the code below, my Express server is handling a GET request coming from a weather app. A function is called when the page initially loads to get the location. However, the API I am using, Google's Geolocation API, uses a POST request to get location data.
Is it unRESTful for me to be making a POST request inside my GET route like this?
router.get('/', function (req, res) {
  axios.post(`https://www.googleapis.com/geolocation/v1/geolocate?key=${googleGeo}`, {
    considerIp: "true",
  })
    .then((data) => {
      return { 'lat': data.data.location.lat, 'lng': data.data.location.lng };
    })
    .catch(error => {
      console.log(error);
    });
});

I wouldn't get hung up on the word post, especially when you're using someone else's API. What matters is that when someone makes a GET request, it should not mutate state. In your case, the API probably uses the POST method so that you don't need to stick the request object into the query string or perhaps due to size limits of GET requests. It's just getting geo data - not mutating state.
If it mutates state, it should be a POST, PUT or DELETE. If it's really read only, GET is always appropriate, regardless of the APIs you're calling underneath the hood.

As long as your API toward your client is RESTful in how it presents its resources, it shouldn't matter how you get/store/manage that data in the backend, at least as far as REST is concerned.
That said, don't forget to send a response (e.g. res.json({lat:..., lng:...})) instead of returning from your .then() handler, and to send an error status code (e.g. res.sendStatus(500)) in your .catch() handler.
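A minimal sketch of the route with those fixes applied (same endpoint and axios call as in the question; the 500 status on failure is one reasonable choice, not the only one):

router.get('/', function (req, res) {
  axios.post(`https://www.googleapis.com/geolocation/v1/geolocate?key=${googleGeo}`, {
    considerIp: "true",
  })
    .then((response) => {
      // Forward just the coordinates to the client
      res.json({ lat: response.data.location.lat, lng: response.data.location.lng });
    })
    .catch((error) => {
      console.log(error);
      res.sendStatus(500); // tell the client the upstream call failed
    });
});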

Related

How to limit the axios get request results

I am trying to use an axios request to fetch data from the GitHub API, but for some reason _limit is not returning the limited number of results?
await axios.get(`https://api.github.com/users/freeCodeCamp/repos?_limit=10`)
  .then((res) => {
    console.log(res.data);
  });
The following HTTP request works as expected, limiting the results:
https://jsonplaceholder.typicode.com/todos?_limit=2
Whereas the following HTTP request does not limit the data:
https://api.github.com/users/freeCodeCamp/repos?_limit=2
What's the difference between the above two requests?
The _limit parameter you see in https://jsonplaceholder.typicode.com is specific to their json-server software.
From the GitHub REST API documentation, you want to use the per_page parameter:
const { data } = await axios.get("https://api.github.com/users/freeCodeCamp/repos", {
  params: {
    per_page: 10
  }
});
My general advice for using any REST API... always read the documentation specific to the resource you're consuming.
Sorting options for API results are limited to created, updated, pushed, and full_name (default). If you want to sort by something else, you'll need to do that client-side, e.g.
data.sort((a, b) => a.stargazers_count - b.stargazers_count);
For GitHub, the correct property is per_page.
Just bear in mind that limiting results has nothing to do with Axios or any front-end tool. It is a backend implementation detail, and any backend developer is free to implement it as they see fit, although there are some common conventions, such as cursor pagination.
In a real project, the backend and frontend developers would have a "contract" for how this works, so both sides know how the parameter behaves.
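As a rough sketch of how GitHub's documented pagination parameters (per_page and page) can be combined to walk through all of a user's repositories (the fetchAllRepos helper is invented for illustration):

const axios = require("axios");

// Fetch repositories one page at a time until a short page signals the end.
async function fetchAllRepos(user) {
  const perPage = 50;
  let page = 1;
  const repos = [];
  while (true) {
    const { data } = await axios.get(`https://api.github.com/users/${user}/repos`, {
      params: { per_page: perPage, page }
    });
    repos.push(...data);
    if (data.length < perPage) break; // last page reached
    page += 1;
  }
  return repos;
}

fetchAllRepos("freeCodeCamp").then((repos) => console.log(repos.length));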

HTTP method for Express REST API with Postgres

I am implementing REST APIs using Express and Postgres. In an endpoint, I would like to first delete all the instances from a table by FK user_id, and then insert several new instances with the same user_id. I'm wondering which HTTP method I should use in this case. Currently I use POST, but I don't know if this is the appropriate way. It seems that using PUT also works fine.
router.post('/myTable', auth, async (req, res) => {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const { records } = req.body;
    await client.query('DELETE FROM my_table WHERE user_id=$1', [req.user_id]);
    for (const record of records) {
      await client.query('INSERT INTO my_table (name, user_id) VALUES ($1, $2)', [record, req.user_id]);
    }
    await client.query('COMMIT');
    res.send();
  } catch (error) {
    console.log(error);
    await client.query('ROLLBACK');
    res.sendStatus(500); // without this, the request would hang on error
  } finally {
    client.release();
  }
});
PUT is for creating/replacing the resource at the URI you specified.
So if the resource exists, it has a URI the client knows, and a PUT request replaces what's there, then PUT makes the most sense.
One great benefit of PUT over POST is that PUT is idempotent.
So if you are sending a PUT request to a /myTable endpoint, the implied meaning is that you are replacing myTable, and a subsequent GET request on that same endpoint would give you a semantically similar response of what you just sent.
If any of my above assumptions are wrong, chances are you'll want POST, which is more of a general catch-all method for making changes with fewer restrictions. The downside is that it's less obvious what a given POST request does without inspecting/understanding the body, and you lose the idempotence benefit too.
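For illustration only (this is not the asker's code), the same replace-the-collection operation expressed as an idempotent PUT, returning 204 No Content as discussed further below:

// PUT replaces the entire collection for the authenticated user.
// Repeating the same request yields the same end state (idempotent).
router.put('/myTable', auth, async (req, res) => {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const { records } = req.body;
    await client.query('DELETE FROM my_table WHERE user_id=$1', [req.user_id]);
    for (const record of records) {
      await client.query('INSERT INTO my_table (name, user_id) VALUES ($1, $2)', [record, req.user_id]);
    }
    await client.query('COMMIT');
    res.sendStatus(204); // No Content: the resource now matches what was sent
  } catch (error) {
    await client.query('ROLLBACK');
    res.sendStatus(500);
  } finally {
    client.release();
  }
});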
Currently I use POST but I don't know if this is the appropriate way.
Rule #1: if you aren't sure, it is okay to use POST.
POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.”
It seems that using PUT also works fine.
In a sense, any method "works fine" at the origin server. HTTP defines request semantics -- what the messages mean. It doesn't constrain the implementation.
However, general purpose clients are going to assume that your server understands GET/HEAD/POST/PUT etc exactly the same way that every other web server understands them. That's a big part of the power of the REST architectural style - any standards compliant client can talk to any standards compliant server, and it just works. Furthermore, it continues to work exactly the same way if we stick any standards compliant cache/proxy in between them.
But if you respond to a PUT request with 204 No Content, general-purpose components are going to understand that response exactly the way they would understand it from any other server: the request succeeded and the resource now matches what was sent. If your implementation deviates from that standard meaning, your server is responsible for any loss of property that results.
You can check the answers here for your reference. They are very well explained.
PUT vs. POST in REST
But since both could serve the same purpose, the choice may come down to your preference or requirements; as a practice, I usually use POST for creating a resource and PUT for updating one.

Retrieve Cookie, store, and use within Node

I'm using the npm package 'request' to make API calls. Upon initial login, I should receive a cookie back; I need to store that cookie indefinitely to make subsequent calls.
I'm doing this in Python with requests like so:
# set up the session
s = requests.session()
# logs in and stores the cookie in the session to be used in future calls
request = s.post(url, data)
How do I accomplish this in Node? I'm not tied to anything right now; the request package seems easy to work with, except I'm having issues getting known usernames and passwords to work. That said, I'm sure that's mostly my inexperience with JS/Node.js.
This is all backend code, no browsers involved.
I need to essentially run a logon function, store the returned encrypted cookie, and use it for all subsequent calls against that API. These calls can have any number of parameters, so I'm not sure a callback in the logon function would be a good answer, but I am toying with that, although it would defeat the purpose of 'logon once, get encrypted cookie, make calls'.
Any advice, direction appreciated on this, but really in need of a way to get the cookie data retrieved/stored for future use.
The request package can retain cookies by setting jar: true -
const request = require('request').defaults({ jar: true }); // cookies are now retained across calls

request('http://www.google.com', function () {
  request('http://images.google.com');
});
The above is copied near-verbatim from the request documentation: https://github.com/request/request/blob/master/README.md#requestoptions-callback
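If you need the cookies isolated to a specific session rather than shared globally, request also documents an explicit cookie jar. A minimal sketch (the login URL and form fields here are placeholders for your API):

const request = require('request');

const jar = request.jar(); // holds cookies for this session only

// Log on once; the Set-Cookie from the response is stored in `jar`
request.post({ url: 'https://example.com/api/logon', form: { user: 'me', pass: 'secret' }, jar }, (err, res) => {
  if (err) return console.error(err);
  // Subsequent calls reuse the same jar, so the session cookie is sent automatically
  request.get({ url: 'https://example.com/api/data', jar }, (err2, res2, body) => {
    if (err2) return console.error(err2);
    console.log(body);
  });
});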

What is the most efficient way to make a batch request to a Firebase DB based on an array of known keys?

I need a solution that makes a Firebase DB API call for multiple items based on keys and returns the data (children) of those keys (in one response).
Since I don't need the data in real time, some sort of standard REST call made once (rather than a Firebase DB listener) would be ideal, I think.
That way the app wouldn't have yet another listener and WebSocket connection open. However, I've looked through Firebase's API docs, and it doesn't look like there is a way to do this.
Most of the answers I've seen suggest making a composite key/index of some sort and filtering accordingly using the composite key, but that only works for searching through a range. Or they suggest just nesting the data and not worrying about redundancy and disk space (it's quicker), instead of retrieving associated data through foreign keys.
However, the problem is I am using Geofire, and its query method only returns the keys of the items, not the items' data. All the docs and previous answers suggest retrieving data either through the real-time SDK, which I've tried by using the once method, or by making a REST call for all items with the orderBy, startAt, and endAt params and filtering locally by the keys I need.
This could work, but the potential overhead of retrieving a bunch of items I don't need only to filter them out locally seems wasteful. The approach using the once listener seems wasteful too, because it's a server round trip for each item key. This approach is explained in this pretty good post, but according to that explanation it's still making a round trip for each item (even if done asynchronously over the same connection).
This poor soul asked a similar question but didn't get many helpful replies that really address the cost of making n server requests.
Could someone, once and for all, explain the possible approaches and their pros/cons? Thanks.
It looks like you are looking for Cloud Functions. You can create a function triggered by an HTTP request and do every database read inside of it.
These functions are executed in the cloud and their results are sent back to the caller. An HTTP call is one way to trigger a Cloud Function, but you can set up other triggers (a schedule, a call from the app with the Firebase SDK, a database trigger...). Data is not charged until it leaves the server (so only in your request response, or if you request a database in another region). Cloud Functions billing is based on CPU used, number of invocations, and running instances; see the quota section for details.
You will get something like:
const admin = require('firebase-admin');
const functions = require('firebase-functions');

admin.initializeApp(); // required before accessing the database
const database = admin.database();

exports.getAllNodes = functions.https.onRequest((req, res) => {
  const children = [ ... ]; // get your node list from req
  const promises = [];
  for (const i in children) {
    promises.push(database.ref(children[i]).once('value'));
  }
  Promise.all(promises)
    .then(result => {
      res.status(200).send(result);
    })
    .catch(error => {
      res.status(503).send(error);
    });
});
You will then have to deploy it with the Firebase CLI.
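For reference, the deploy command is typically firebase deploy --only functions. Once deployed, the function is reachable over HTTPS at a URL whose exact shape (region and project ID) depends on your Firebase project, and it can be called with any HTTP client.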
I need a solution that makes a Firebase DB API call for multiple items based on keys and returns the data (children) of those keys (in one response).
One solution might be to set up a separate server to make ALL the calls you need to your Firebase servers, aggregate them, and send it back as one response.
There exist tools that do this.
One of the more popular ones recently spec'd by the Facebook team is GraphQL.
https://graphql.org/
Behind the scenes, you set up your GraphQL server to map your queries to resolvers, each of which makes separate API calls to fetch the data needed to satisfy the query. Once all the API calls have completed, GraphQL sends the result back as a single JSON response. A rough sketch follows.
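A minimal sketch of that idea using the graphql npm package (the schema, the 'items' node path, and serializing values as strings are all invented here for illustration):

const { graphql, buildSchema } = require('graphql');
const admin = require('firebase-admin');

admin.initializeApp();
const db = admin.database();

// Hypothetical schema: one query that takes a list of known keys
const schema = buildSchema(`
  type Query {
    items(keys: [String!]!): [String]
  }
`);

// Resolver: one Firebase read per key, aggregated with Promise.all
const rootValue = {
  items: async ({ keys }) => {
    const snapshots = await Promise.all(
      keys.map((key) => db.ref('items').child(key).once('value'))
    );
    // Values serialized as strings to keep the sketch schema-agnostic
    return snapshots.map((snap) => JSON.stringify(snap.val()));
  },
};

// One query from the client, one JSON response back
graphql({
  schema,
  source: '{ items(keys: ["a", "b", "c"]) }',
  rootValue,
}).then((result) => console.log(JSON.stringify(result)));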
This is how you can do a one-time call to a node in JavaScript; hope it helps.
// Get a reference to the database service
let database = firebase.database();
// one-time call to a node
database.ref("users").child("demo").get().then((snapshot) => {
  console.log("value of users->demo is", snapshot.val());
});

Node/React: How to handle jQuery AJAX when rendering on server?

I have a small webapp in Node/Express that renders initial HTML server side with react-dom. The page is then populated client side with a $.ajax call to the API inside componentDidMount. The HTML loads immediately, but there's no useful content until React starts and completes that GET.
This is wasteful. It would be better to hit the API while rendering the initial HTML, but I don't know a clean way to implement this. It seems like I could get what I want by declaring a global $ in Node with a stubbed get method, but this feels dirty.
How do I implement $.ajax when rendering a React component server side?
The code is public on Github. Here's a component with $.get and here's my API.
componentDidMount doesn't run on the server; it runs only client-side after the first render, so the AJAX request will never happen on the server. You should do it in a static method (there are other ways to do it).
It would be better if you chose superagent or axios, which can make AJAX requests both client- and server-side.
You then have to expose the result of the AJAX request as the initial state via a global variable.
It helps to follow some example repos, like this one:
https://github.com/erikras/react-redux-universal-hot-example
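A minimal sketch of that idea inside an existing Express app (the App component, API URL, and window.__INITIAL_STATE__ variable name are placeholders, assuming axios and react-dom/server):

const axios = require('axios');
const React = require('react');
const ReactDOMServer = require('react-dom/server');
const App = require('./App'); // hypothetical component that takes `data` as a prop

app.get('/', async (req, res) => {
  // Fetch the data server-side instead of waiting for componentDidMount
  const { data } = await axios.get('http://localhost:3000/api/items');
  const html = ReactDOMServer.renderToString(React.createElement(App, { data }));
  // Embed the same data so the client render starts from identical state
  res.send(`<!doctype html>
<div id="root">${html}</div>
<script>window.__INITIAL_STATE__ = ${JSON.stringify(data)};</script>
<script src="/bundle.js"></script>`);
});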
Here's how I solved this.
Moved my ajax out of componentDidMount so that it is called while rendering initial HTML on the server.
Declared my own global $ in Node with a get method that calls the router directly. This is what it looks like:
global.$ = {
  get: (url, cb) => {
    const req = { url: url };
    const res = {
      send: data => cb(data),
      status: () => {
        return { send: data => cb(data) };
      }
    };
    return api_router(req, res);
  }
};
Some caveats
If this feels like a questionable hack to you, that's ok. It feels like a questionable hack to me too. I'm still open to suggestions.
#stamina-loop's suggestion of replacing jQuery's AJAX with a module that works for both the server and the client is a good one that would solve this problem. For most people I would recommend that approach. I chose not to because it seemed wasteful to go over the network just to call a route handler that is adjacent in code. It could be made less wasteful with a fancy nginx config that redirects outbound API calls back to the same box without making a round trip. I'm still thinking on that.
I've since learned that using jQuery alongside React is likely to cause problems. I'll be replacing it with something else down the road.
For most use cases it will still make sense to keep the AJAX in componentDidMount and to load the initial HTML without it. That way time-to-first-byte is as low as possible. The kinds of things loaded from RESTful APIs are usually not needed for SEO, and users are used to waiting a few extra milliseconds for them (Facebook does it, so can you).
