I'm creating a REST API.
I would like to implement an idempotent PUT operation that either creates or updates the specified resource in the database.
I'm using Node.js, PostgreSQL, and Sequelize.
The problem is that Sequelize's upsert returns either true or false depending on whether the resource was updated or created.
But I need to be able to send the unique identifier (the id column) back to the client if the resource was created.
One solution I tried was to find the exact same resource by specifying every single column sent from the client in the "where" property of a Sequelize findOne query. But it throws errors if the client sends additional columns that are not in the database, and that shouldn't break my implementation.
Is it possible to implement this? Ideally without extra performance overhead.
Thanks
Getting the ID back from upsert in Sequelize is not possible at the moment. For more information see: https://github.com/sequelize/sequelize/issues/3354
A possible solution to this issue is to force the user (or the REST API client in this case) to supply the ID of a record in case update is requested, and to not supply an ID in case of creating a new record. This is generally a good idea as it prevents a round trip to the DB, though it makes your API a bit more strict.
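A rough sketch of that approach, assuming Express and a hypothetical Sequelize model called Resource (both names are illustrative, not from the question). If the client supplies an id, the handler updates; otherwise it creates the row and can return the generated id:

const express = require('express');
const { Resource } = require('./models'); // hypothetical Sequelize model

const app = express();
app.use(express.json());

app.put('/resources/:id?', async (req, res) => {
  try {
    if (req.params.id) {
      // id supplied: treat the request as an update of an existing resource
      const [count] = await Resource.update(req.body, { where: { id: req.params.id } });
      if (count === 0) return res.status(404).end();
      return res.json({ id: Number(req.params.id) });
    }
    // no id supplied: create the resource; the returned instance carries the new id
    const created = await Resource.create(req.body);
    return res.status(201).json({ id: created.id });
  } catch (err) {
    return res.status(500).json({ error: err.message });
  }
});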
I'm trying to make a simple todo app in order to understand how the frontend and backend are connected. I read several websites with tutorials on using and connecting a REST API, an Express server, and a database, but I still wasn't able to get the fake data from a database. Anyway, I wanted to check whether my understanding of how they are connected and talk to each other is correct or not. So could you give me some advice, please?
First of all, I'm planning to use either JavaScript & HTML or React for the frontend, Express for the server, and Postgres for the database. My plan is that a user can add and delete his or her tasks. I have already created a server in my index.js file and created a database using the psql command. Now if I type "" it takes me to the page saying "Hello" (I made this endpoint), but I'm failing to seed my data into the database. Here are my questions:
After I was able to seed my fake data into the database, how should I get the data from the database and send it to the frontend? I think in my index.js file I should create a new endpoint, something like "app.get("/api/todo", (req, res) => ...", and inside the callback function I should write something like "select * from [table name]". Also, from the front end, I should probably access certain endpoints using fetch. Is this correct?
Also, how can I store data which is sent from the frontend? For example, if I type my new todo into the <input> field and click the add <button>, what does the sequence of events look like? Adding an event listener to the button and connecting to the server, then creating a POST method on the server and inserting the data, kind of (?) <= sorry, this part is super unclear for me.
Displaying tasks on the frontend is also unclear for me. If I use an object like {task: clean up my room, finished: false (or 0?)} on the front end, it makes sense, but once I start using the database I'm confused about how to display items that are not completed yet. In order to display each task, I won't use the GET method to get the data from the database, right?
Also, do I need to use knex to solve this type of problem? (Or is it better to have knex, and why?)
I think my problem is that I kind of know what the frontend, server, and database are for, but I'm not clear on how they are connected with each other...
I also drew some diagrams, so I hope they help you understand my vague questions...
How should I get the data from the database and send it to the frontend? I think in my index.js file, create a new endpoint something like "app.get("/api/todo", (req, res) => ..." and inside of the callback function, I should write something like "select * from [table name]".
Typically you use a controller -> service -> repository pattern:
The controller is a thin layer; it's basically the callback method you refer to. It just takes parameters from the request and forwards the request to the service in the form of a method call (i.e. expose some methods on the service and call those methods). It takes the response from the service layer and returns it to the client. If the service layer throws custom exceptions, you also handle them here and send an appropriate response to the client (error status code, custom message).
The service takes the request and forwards it to the repository. In this layer you can perform any custom business logic (by delegating to other isolated services). This layer also takes care of throwing custom exceptions, e.g. when an item was not found in the database (throw new NotFoundException).
The repository layer connects to the database. This is where you put the custom DB logic (queries like the one you mention), e.g. when using a library like https://node-postgres.com/. You don't put any other logic here; the repo is just a connector to the DB.
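A minimal sketch of this layering, assuming Express and node-postgres; the todos table and all the names here are illustrative:

const express = require('express');
const { Pool } = require('pg');

const pool = new Pool(); // connection settings come from PG* environment variables

// Repository: the only layer that talks to the database
const todoRepository = {
  findAll: async () => {
    const { rows } = await pool.query('SELECT * FROM todos');
    return rows;
  },
};

// Service: business logic and custom exceptions live here
const todoService = {
  listTodos: async () => {
    // any validation, mapping or NotFound-style errors would go here
    return todoRepository.findAll();
  },
};

// Controller: a thin layer that maps HTTP to service calls
const app = express();

app.get('/api/todo', async (req, res) => {
  try {
    res.json(await todoService.listTodos());
  } catch (err) {
    res.status(err.status || 500).json({ message: err.message });
  }
});

app.listen(3000);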
Also, from the front end, I should probably access certain endpoints using fetch. Is this correct?
Yes.
Also, how can I store data which is sent from the frontend? For example, if I type my new todo into the input field and click the add button, what does the sequence of events look like? Adding an event listener to the button and connecting to the server, then creating a POST method on the server and inserting the data, kind of (?) <= sorry, this part is super unclear for me.
You have a few options:
Form submit
Ajax request: serialize the form data manually and send a POST request through ajax. Since you're considering a client library like React, I suggest this approach.
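For example, a rough sketch of the fetch-based POST, assuming an input with id "new-todo", a button with id "add-button", and a POST /api/todo endpoint (all names are illustrative):

document.querySelector('#add-button').addEventListener('click', async () => {
  const task = document.querySelector('#new-todo').value;

  const response = await fetch('/api/todo', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ task, finished: false }),
  });

  if (!response.ok) {
    console.error('Failed to save todo');
    return;
  }
  const saved = await response.json(); // e.g. { id, task, finished }
  console.log('Saved todo with id', saved.id);
});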
Displaying tasks on the frontend is also unclear for me. If I use an object like {task: clean up my room, finished: false (or 0?)} on the front end, it makes sense, but once I start using the database I'm confused about how to display items that are not completed yet. In order to display each task, I won't use the GET method to get the data from the database, right?
If you want to use REST, it typically implies that you're not using backend MVC / server rendering. Since you mentioned React, you're opting for keeping client state and syncing it with the server over REST.
What this means is that you keep all state in the frontend (in memory / localStorage) and just sync with the server. Typically what is applied is referred to as optimistic rendering: you manage state in the frontend as if the server didn't exist, and when a server call fails (you see this in the ajax response), you show an error in the UI and roll the state back.
Alternatively, you can use spinners that wait until the server sync is complete. It makes for less impressive perceived performance, but is just as valid technically.
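A sketch of the optimistic approach, assuming plain client state in an array and the POST /api/todo endpoint; render() and showError() are placeholders for whatever redraws your UI (in React this would be setState):

let todos = [];

async function addTodoOptimistically(task) {
  const temp = { id: 'tmp-' + Date.now(), task, finished: false };
  todos.push(temp); // update the UI immediately, as if the server can't fail
  render(todos);

  try {
    const res = await fetch('/api/todo', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ task, finished: false }),
    });
    if (!res.ok) throw new Error('Server rejected the todo');
    const saved = await res.json();
    todos = todos.map(t => (t.id === temp.id ? saved : t)); // swap in the real record
  } catch (err) {
    todos = todos.filter(t => t.id !== temp.id); // roll back the optimistic item
    showError('Could not save your todo');
  }
  render(todos);
}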
Also, do I need to use knex to solve this type of problem? (Or is it better to have knex, and why?) I think my problem is that I kind of know what the frontend, server, and database are for, but I'm not clear on how they are connected with each other...
Doesn't really matter what you use. Personally I would go with the stack:
Node Express (REST), but could be Koa, Restify...
React / Redux client side
For the backend repo layer you can use Knex if you want to, I have used node-postgres which worked well for me.
Additional info:
I would encourage you to take a look at the following, if you're doubtful how to write the REST endpoints: https://www.youtube.com/watch?v=PgrP6r-cFUQ
After I was able to seed my fake data into the database, how should I get the data from the database and send it to the frontend? I think in my index.js file I should create a new endpoint, something like "app.get("/api/todo", (req, res) => ...", and inside the callback function I should write something like "select * from [table name]". Also, from the front end, I should probably access certain endpoints using fetch. Is this correct?
You are right here: you need to create an endpoint on your server which will be responsible for getting data from the database. This same endpoint has to be consumed by your frontend application, in case you are planning to use React. As soon as your app loads, you need to get the current user ID and make a fetch call to the endpoint created above to fetch the list of todos (or any data, for that matter) pertaining to the concerned user.
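A small sketch of that initial fetch, assuming the GET /api/todo endpoint above; in React this would typically run inside a useEffect on mount:

async function loadTodos() {
  const response = await fetch('/api/todo');
  if (!response.ok) throw new Error('Request failed: ' + response.status);
  return response.json(); // the list of todos for the current user
}

loadTodos()
  .then(todos => console.log(todos)) // hand the list to component state / Redux
  .catch(console.error);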
Also, how can I store data which is sent from the frontend? For example, if I type my new todo into the input field and click the add button, what does the sequence of events look like? Adding an event listener to the button and connecting to the server, then creating a POST method on the server and inserting the data, kind of (?) <= sorry, this part is super unclear for me.
Okay, so far you have connected your frontend to your backend, started the application, the user is present, and you have fetched the list of todos, if any are available for that particular user.
Now, coming to adding a new todo, the most minimal flow would look something like this:
User types the data in a form and submits the form
There is a form submit handler which will take the form data
Check for validation for the form data
Call the POST endpoint with payload as the form data
This POST endpoint will be responsible for saving the form data to the DB
If an existing todo is being modified, this should instead be handled with a PATCH request (updating its state, e.g. whether the task is completed or not)
The next and possibly the last thing would be deleting a task; you can have a DELETE endpoint to remove the todo item from the list of todos
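A minimal sketch of those write endpoints, assuming Express and node-postgres with an illustrative todos(id, task, finished) table:

const express = require('express');
const { Pool } = require('pg');

const app = express();
const pool = new Pool();
app.use(express.json());

// Create a new todo (the POST step above)
app.post('/api/todo', async (req, res) => {
  const { task } = req.body;
  if (!task) return res.status(400).json({ message: 'task is required' }); // basic validation
  const { rows } = await pool.query(
    'INSERT INTO todos (task, finished) VALUES ($1, false) RETURNING *',
    [task]
  );
  res.status(201).json(rows[0]);
});

// Update an existing todo, e.g. mark it finished
app.patch('/api/todo/:id', async (req, res) => {
  const { rows } = await pool.query(
    'UPDATE todos SET finished = $1 WHERE id = $2 RETURNING *',
    [req.body.finished, req.params.id]
  );
  if (rows.length === 0) return res.status(404).end();
  res.json(rows[0]);
});

// Delete a todo
app.delete('/api/todo/:id', async (req, res) => {
  await pool.query('DELETE FROM todos WHERE id = $1', [req.params.id]);
  res.status(204).end();
});

app.listen(3000);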
Displaying tasks on the frontend is also unclear for me. If I use an object like {task: clean up my room, finished: false (or 0?)} on the front end, it makes sense, but once I start using the database I'm confused about how to display items that are not completed yet. In order to display each task, I won't use the GET method to get the data from the database, right?
Okay, so as soon as you load the frontend for the first time, you will make a GET call to the server and fetch the list of todos. Store this somewhere in the application, probably the Redux store or just the application's local state.
Going by what you have suggested already,
{task: 'some task name', finished: false, id: '123'}
Now, any time there has to be any kind of interaction with a TODO item, either PATCH or DELETE, you would use the id of that TODO and call the respective endpoint.
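For example, a small sketch of toggling one todo by its id, assuming the PATCH endpoint described above; todo is one of the items previously fetched from the server:

async function toggleFinished(todo) {
  const response = await fetch('/api/todo/' + todo.id, {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ finished: !todo.finished }),
  });
  return response.json(); // the updated record, ready to put back into your store
}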
Also, do I need to use knex to solve this type of problem? (Or is it better to have knex, and why?) I think my problem is that I kind of know what the frontend, server, and database are for, but I'm not clear on how they are connected with each other...
In a nutshell, or in the most minimal sense, think of the frontend as the presentation layer and the backend and DB as the application layer.
The overall game is sending some kind of request and receiving some response for each request sent. The frontend is what enables an end user to create these requests; the backend (server & database) is where these requests are processed and a response is sent back to the presentation layer so the end user can be notified.
These explanations are very minimal, just to make sure you get the gist of it, since this question almost covers the entire scope of web development. I would suggest you read a few articles about both of these layers and how they connect with each other.
You should also spend some time understanding what a RESTful API is. That should be a great help.
Is it possible to query the Apollo client cache to get a list of filtered data on the client?
After the client fetches data from the GraphQL server, the data can be seen in Apollo's local cache via the Apollo dev tools.
How can I get a list of 'Item' types which match a set of 'tags' without making a trip to the server?
type Item {
  id: ID
  text: String
  tags: [String]
}
I would assume this is viable with apollo-link-state custom resolvers, but so far I haven't been able to figure out a strategy for it or find an example anywhere online.
I'm aware that Apollo caches the data by the queries that have been executed and that I can access it with an ID and .readFragment, but if the data already exists in the client cache, shouldn't it be possible to get a list of data matching a certain condition?
Update:
The exact requirement is as follows
Get first 100 results for getItem from the Server
User filters the results by some tags on the client
Show the filtered items from 100 records already fetched
FetchMore records matching the filter criteria from the server to fill the rest of the page, up to 100 items.
Allow pagination based on the filter criteria.
As usage grows, we would have most of the items in the cache, providing an instant filtering experience for most of the data.
The exact question is: can we use .readFragment or .readQuery to access the raw list of records and filter them on the fly on the client (if so, how? an example would help)? Or is there another way to look at this?
This kind of functionality can be achieved using apollo-link-state.
For example, an 'internal' query can be forced to read only from the cache with the cache-only fetchPolicy.
Also consider simply filtering in component state (or other options); it all depends on your requirements, e.g. whether the filtered result needs to be shared.
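A rough sketch of the readQuery approach, assuming the items were fetched earlier with a hypothetical GET_ITEMS query so the full list is already in the cache (depending on the Apollo Client version, readQuery either throws or returns null when the data is not cached):

import gql from 'graphql-tag';

const GET_ITEMS = gql`
  query GetItems {
    items {
      id
      text
      tags
    }
  }
`;

function itemsMatchingTags(client, wantedTags) {
  // readQuery only reads from the cache and never hits the server
  const data = client.readQuery({ query: GET_ITEMS });
  if (!data) return [];
  return data.items.filter(item =>
    wantedTags.every(tag => item.tags.includes(tag))
  );
}

A query with fetchPolicy: 'cache-only' achieves the same cache-only read declaratively.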
In my meteor app, I need to be able to capture some form of unique element (session id/client id, etc...) to help keep track of the actions that user makes. At this point, I'm not using accounts package, so I'm just looking for a way to capture some sort of client-unique data element, which I could use in the data model to track activities/steps of every unique user.
What application / browser / session element could I use in place of this unique id string?
Is this what you are looking for?
You can just use Random.id() to get a unique id string. I have used this a number of times to track temporary objects and it is super helpful. There are other Random methods too that may come in handy.
What you can try to use is the DDP connection's session ID, Meteor.connection._lastSessionId. But be careful, because it is basically a websocket session, so when the client opens a new tab or refreshes the page, this sessionId will be different.
If you want to keep the same session in the browser, you can implement your own localStorage-based session.
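A small sketch of such a localStorage-based id that survives refreshes and new tabs; the key name is arbitrary, and in older Meteor versions Random is a global so the import line isn't needed:

import { Random } from 'meteor/random';

function getAnonymousId() {
  let id = localStorage.getItem('anonUserId');
  if (!id) {
    id = Random.id(); // generated once per browser, then reused
    localStorage.setItem('anonUserId', id);
  }
  return id;
}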
Some thoughts: instead of using Meteor's Random.id(), insert a new doc into a Mongo collection; this way you are guaranteed there will be no collisions, as there is always a tiny chance that Random.id() will return a duplicate. Is that a correct assumption?
The Meteor docs say "likely to be unique", i.e. not guaranteed to be unique.
http://docs.meteor.com/#/full/random
Store the _id of the doc with a Session.set() call, to use as your anonymous userId, i.e. Session.get("anonUserId"). When done, you can delete it to keep the collection trim.
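A sketch of that collection-based variant (the collection and key names are illustrative):

const AnonymousSessions = new Mongo.Collection('anonymousSessions');

// On the client, once per visit: let the insert produce a unique _id
if (!Session.get('anonUserId')) {
  AnonymousSessions.insert({ createdAt: new Date() }, (err, _id) => {
    if (!err) Session.set('anonUserId', _id);
  });
}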
I'm currently researching how to add persistence to a realtime Twitter JSON feed in Node.
I've got my stream set up and it's broadcasting to the client, but how do I go about storing this data in a JSON database such as CouchDB, so I can access the stored JSON when the client first visits the page?
I can't seem to get my head around couchdb.
var array = {
"tweet_id": tweet.id,
"screen_name": tweet.user.screen_name,
"text" : tweet.text,
"profile_image_url" : tweet.user.profile_image_url
};
db.saveDoc('tweet', strencode(array), function(er, ok) {
if (er) throw new Error(JSON.stringify(er));
util.puts('Saved my first doc to the couch!');
});
db.allDocs(function(er, doc) {
if (er) throw new Error(JSON.stringify(er));
//client.send(JSON.stringify(doc));
console.log(JSON.stringify(doc));
util.puts('Fetched my new doc from couch:');
});
These are the two snippets I'm using to try to save / retrieve tweet data. The array is one individual tweet, and it needs to be saved to CouchDB each time a new tweet is received.
I don't understand the id part of saveDoc: when I make it unique, db.allDocs only lists IDs and not the content of each doc in the database, and when it's not unique, it fails after the first DB entry.
Can someone kindly explain the correct way to save and retrieve this type of JSON data to CouchDB?
I basically want to load the entire database when the client first views the page. (The database will have fewer than 100 entries.)
Cheers.
You need to insert the documents into the database. You can do this by inserting the JSON that comes from the Twitter API, or you can insert one status at a time (in a loop).
You should create a view that exposes that information. If you saved the JSON directly from Twitter, you are going to need to emit several times in your map function.
These operations (ingestion and querying) are not the same thing, so you should really do them at different times in your program.
You should consider running a background process (maybe something as simple as a setInterval) that updates your database. Or you can use something like clarinet (http://github.com/dscape/clarinet) to parse the Twitter streaming API directly.
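A sketch of such a view, assuming one tweet per document shaped like the object in the question; the map function emits by screen name, and querying it with include_docs=true returns the full documents rather than just ids:

{
  "_id": "_design/tweets",
  "views": {
    "by_screen_name": {
      "map": "function (doc) { if (doc.tweet_id) { emit(doc.screen_name, null); } }"
    }
  }
}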
I'm the author of nano, and here is one of the tests that does most of what you need:
https://github.com/dscape/nano/blob/master/tests/view/query.js
For the actual query semantics, and for you to learn a bit more about how CouchDB works, I would suggest you read:
http://guide.couchdb.org/editions/1/en/index.html
If you find it useful, I would suggest you buy the book :)
If you want to use a module to interact with CouchDB I would suggest cradle or nano.
You can also use the default http module you find in Node.js to make requests to CouchDB. The downside is that the default http module tends to be a little verbose. There are alternatives that give you a better API for dealing with HTTP requests; request is really popular.
To get data you need to make a GET request to a view (you can find more information here). If you want to create a document you have to make a PUT request to your database.
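A rough sketch with nano, assuming a local CouchDB with a tweets database and the tweet object from your stream handler; letting CouchDB generate the _id avoids the duplicate-id failure you describe, and include_docs returns the document bodies rather than just their ids:

var nano = require('nano')('http://localhost:5984');
var db = nano.db.use('tweets');

// save one document per tweet; CouchDB assigns the _id
db.insert({
  tweet_id: tweet.id,
  screen_name: tweet.user.screen_name,
  text: tweet.text,
  profile_image_url: tweet.user.profile_image_url
}, function (err, body) {
  if (err) return console.error(err);
  console.log('Saved tweet as doc', body.id);
});

// load everything (fine for fewer than 100 docs) when a client first connects
db.list({ include_docs: true }, function (err, body) {
  if (err) return console.error(err);
  var tweets = body.rows.map(function (row) { return row.doc; });
  console.log(JSON.stringify(tweets));
});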
I am more into front-end development and have recently started exploring Backbone.js in my app. I want to persist the model data to the server.
Could you please explain the various ways to save model data (using JSON format)? I am using Java on the server side. Also, I have mainly seen REST being used to save data. As I am more into front-end dev, I am not aware of REST and other similar stuff.
It would be great if someone could explain the process to me with a simple example.
Basically, models have a property called attributes, which holds the various values a certain model may have. Backbone uses JSON objects as a simple way to populate these values through various methods that take JSON objects. Example:
Donuts = Backbone.Model.extend({
defaults: {
flavor: 'Boston Cream', // Some string
price: '0.50' // Dollars
}
});
There are a few ways to populate the model. For example, you can set up your model instance by passing a JSON object into the constructor, OR use the method set(), which takes a JSON object of attributes.
myDonut = new Donut({'flavor':'lemon', 'price':'0.75'});
mySecondHelping = new Donut();
mySecondHelping.set({'flavor':'plain', 'price':'0.25'});
console.log(myDonut.toJSON());
// {'flavor':'lemon', 'price':'0.75'}
console.log(mySecondHelping.toJSON());
// {'flavor':'plain', 'price':'0.25'}
So this brings us to saving models and persisting them to a server. There is a whole slew of details regarding "What is REST/RESTful?" and it is kind of difficult to explain all of that in a short blurb here. Specifically with regard to REST and Backbone saving, the thing to wrap your head around is the semantics of HTTP requests and what you are doing with your data.
You're probably used to two kinds of HTTP requests: GET and POST. In a RESTful environment, these verbs have special meaning for specific uses that Backbone assumes. When you want to get a certain resource from the server (e.g. the donut model I saved last time, a blog entry, a computer specification) and that resource exists, you do a GET request. Conversely, when you want to create a new resource, you use POST.
Before I got into Backbone, I'd never even touched the following two HTTP request methods: PUT and DELETE. These two verbs also have specific meaning to Backbone. When you want to update a resource (e.g. change the flavor of a lemon donut to a limon donut, etc.), you use a PUT request. When you want to delete that model from the server altogether, you use a DELETE request.
These basics are very important because with your RESTful app, you probably will have a URI designation that will do the appropriate task based on the kind of request verb you use. For example:
// The URI pattern
http://localhost:8888/donut/:id
// My URI call
http://localhost:8888/donut/17
If I make a GET to that URI, it would get donut model with an ID of 17. The :id depends on how you are saving it server side. This could just be the ID of your donut resource in your database table.
If I make a PUT to that URI with new data, I'd be updating it, saving over it. And if I DELETE to that URI, then it would purge it from my system.
With POST, since you haven't created a resource yet it won't have an established resource ID. Maybe the URI target I want to create resources is simply this:
http://localhost:8888/donut
No ID fragment in the URI. All of these URI designs are up to you and how you think about your resources. But with regard to RESTful design, my understanding is that you want to keep the verbs of your actions in your HTTP request and the resources as nouns, which makes URIs easy to read and human friendly.
Are you still with me? :-)
So let's get back to thinking about Backbone. Backbone is wonderful because it does a lot of work for you. To save our donut and secondHelping, we simply do this:
myDonut.save();
mySecondHelping.save();
Backbone is smart. If you just created a donut resource, it won't have an ID from the server. It has something called a cID which is what Backbone uses internally but since it doesn't have an official ID it knows that it should create a new resource and it sends a POST request. If you got your model from the server, it will probably have an ID if all was right. In this case, when you save() Backbone assumes you want to update the server and it will send a PUT. To get a specific resource, you'd use the Backbone method .fetch() and it sends a GET request. When you call .destroy() on a model it will send the DELETE.
In the previous examples, I never explicitly told Backbone where the URI is. Let's do that in the next example.
ThirdHelping = Backbone.Model.extend({
  urlRoot: 'donut' // urlRoot (rather than url) lets Backbone append /:id for a single model
});
thirdHelping = new ThirdHelping();
thirdHelping.set({id: 15}); // Set the id attribute of the model to 15
thirdHelping.fetch(); // Backbone assumes this model exists on the server as ID 15
Backbone will GET thirdHelping from http://localhost:8888/donut/15. It simply adds the /donut stem to your site root.
If you're STILL with me, good. I think. Unless you're confused. But we'll trudge on anyway. The second part of this is the SERVER side. We've talked about different verbs of HTTP and the semantic meanings behind those verbs. Meanings that you, Backbone, AND your server must share.
Your server needs to understand the difference between a GET, POST, PUT, and DELETE request. As you saw in the examples above, GET, PUT, and DELETE could all point to the same URI http://localhost:8888/donut/07 Unless your server can differentiate between these HTTP requests, it will be very confused as to what to do with that resource.
This is when you start thinking about your RESTful server end code. Some people like Ruby, some people like .net, I like PHP. Particularly I like SLIM PHP micro-framework. SLIM PHP is a micro-framework that has a very elegant and simple tool set for dealing with RESTful activities. You can define routes (URIs) like in the examples above and depending on whether the call is GET, POST, PUT, or DELETE it will execute the right code. There are other solutions similar to SLIM like Recess, Tonic. I believe bigger frameworks like Cake and CodeIgniter also do similar things although I like minimal. Did I say I like Slim? ;-)
This is what an excerpt of the code on the server might look like (specifically regarding the routes):
$app->get('/donut/:id', function($id) use ($app) {
// get donut model with id of $id from database.
$donut = ...
// Looks something like this maybe:
// $donut = array('id'=>7, 'flavor'=>'chocolate', 'price'=>'1.00')
$response = $app->response();
$response['Content-Type'] = 'application/json';
$response->body(json_encode($donut));
});
Here it's important to note that Backbone expects a JSON object. Always have your server designate the content-type as 'application/json' and encode it in json format if you can. Then when Backbone receives the JSON object it knows how to populate the model that requested it.
With SLIM PHP, the routes operate pretty similarly to the above.
$app->post('/donut', function() use ($app) {
// Code to create new donut
// Returns a full donut resource with ID
});
$app->put('/donut/:id', function($id) use ($app) {
// Code to update donut with id, $id
$response = $app->response();
$response->status(200); // OK!
// But you can send back other status like 400 which can trigger an error callback.
});
$app->delete('/donut/:id', function($id) use ($app) {
// Code to delete donut with id, $id
// Bye bye resource
});
So you've almost made the full round trip! Go get a soda. I like Diet Mountain Dew. Get one for me too.
Once your server processes a request, does something with the database and resource, prepares a response (whether it be a simple http status number or full JSON resource), then the data comes back to Backbone for final processing.
With your save(), fetch(), etc. methods - you can add optional callbacks on success and error. Here is an example of how I set up this particular cake:
Cake = Backbone.Model.extend({
defaults: {
type: 'plain',
nuts: false
},
url: 'cake'
});
myCake = new Cake();
myCake.toJSON() // Shows us that it is a plain cake without nuts
myCake.save({type:'coconut', nuts:true}, {
wait:true,
success:function(model, response) {
console.log('Successfully saved!');
},
error: function(model, error) {
console.log(model.toJSON());
console.log(error.responseText);
}
});
// ASSUME my server is set up to respond with a status(403)
// ASSUME my server responds with string payload saying 'we don't like nuts'
There are a couple of different things to note about this example. You'll see that for my cake, instead of set()-ing the attributes before save, I simply passed the new attributes into my save call. Backbone is pretty ninja at taking JSON data all over the place and handling it like a champ. So I want to save my cake with coconut and nuts. (Is that 2 nuts?) Anyway, I passed two objects into my save: the attributes JSON object AND some options. The first option, {wait:true}, means don't update my client-side model until the server-side trip is successful. The success callback will occur when the server successfully returns a response. However, since this example results in an error (a status other than 200 will indicate to Backbone to use the error callback), we get a representation of the model without the changes. It should still be plain and without nuts. We also have access to the error object that the server sent back. We sent back a string, but it could be a JSON error object with more properties. This is located in the error.responseText attribute. Yeah, 'we don't like nuts.'
Congratulations. You've made your first pretty full round trip from setting up a model, saving it server side, and back. I hope that this answer epic gives you an IDEA of how this all comes together. There are of course, lots of details that I'm cruising past but the basic ideas of Backbone save, RESTful verbs, Server-side actions, Response are here. Keep going through the Backbone documentation (which is super easy to read compared to other docs) but just keep in mind that this takes time to wrap your head around. The more you keep at it the more fluent you'll be. I learn something new with Backbone every day and it gets really fun as you start making leaps and see your fluency in this framework growing. :-)
EDIT: Resources that may be useful:
Other Similar Answers on SO:
How to generate model IDs with Backbone
On REST:
http://rest.elkstein.org/
http://www.infoq.com/articles/rest-introduction
http://www.recessframework.org/page/towards-restful-php-5-basic-tips