I'm trying to make a simple todo app in order to understand how the frontend and backend are connected. I read some websites with tutorials on using and connecting a REST API, an Express server, and a database, but I still wasn't able to get the fake data from a database. In any case, I wanted to check whether my understanding of how they are connected and talk to each other is correct. Could you give me some advice, please?
First of all, I'm planning to use either JavaScript & HTML or React for the frontend, Express for the server, and Postgres for the database. My plan is that a user can add and delete his or her tasks. I have already created a server in my index.js file and created a database using the psql command. Now if I type "" it takes me to a page saying "Hello" (I made this endpoint), but I'm failing to seed my data into the database. Here are my questions:
Once I've seeded my fake data into the database, how should I get the data from the database and send it to the frontend? I think that in my index.js file I should create a new endpoint, something like "app.get("/api/todo", (res, req) => ...", and inside the callback function I should write something like "select * from [table name]". Also, from the frontend, I should probably access those endpoints using fetch. Is this correct?
Also, how can I store data that is sent from the frontend? For example, if I type my new todo into the <input> field and click the add <button>, what does the sequence of events look like? Add an event listener to the button and connect to the server, then create a POST method on the server and insert the data, kind of (?) <= sorry, this part is super unclear to me.
Displaying tasks on the frontend is also unclear to me. If I use an object like {task: clean up my room, finished: false (or 0?)} on the frontend, it makes sense, but when I start using the database, I'm confused about how to display items that are not completed yet. In order to display each task, I won't use the GET method to get the data from the database, right?
Also, do I need to use Knex to solve this type of problem? (Or is it better to have Knex, and why?)
I think my problem is that I kind of know what the frontend, server, and database are for, but it's not clear to me how they are connected with each other...
I also drew some diagrams, so I hope they help you understand my vague questions...
how should I get the data from the database and send to the frontend?
I think in my index.js file, create a new endpoint something like
"app.get("/api/todo", (res, req) => ..." and inside of the callback
function, I should write something like "select * from [table name]".
Typically you use a controller -> service -> repository pattern:
The controller is a thin layer, it's basically the callback method you refer to. It just takes parameters from the request, and forwards the request to the service in the form of a method call (i.e. expose some methods on the service and call those methods). It takes the response from the service layer and returns it to the client. If the service layer throws custom exceptions, you also handle them here, and send an appropriate response to the client (error status code, custom message).
The service takes the request and forwards it to the repository. In this layer, you can perform any custom business logic (by delegating to other isolated services). This layer also takes care of throwing custom exceptions, e.g. when an item was not found in the database (throw new NotFoundException).
The repository layer connects to the database. This is where you put the custom DB logic (queries like the one you mention), e.g. when using a library like https://node-postgres.com/. You don't put any other logic here; the repo is just a connector to the DB.
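As a minimal sketch of that layering with Express and node-postgres (the todo table name, the pool configuration, and the route are assumptions for illustration; in a real app the layers would live in separate modules):

const express = require('express');
const { Pool } = require('pg');

const app = express();
const pool = new Pool(); // reads connection settings from the PG* environment variables

// repository: the only layer that talks to the database
const todoRepository = {
  findAll: () => pool.query('SELECT * FROM todo').then(result => result.rows)
};

// service: business logic and custom errors live here
const todoService = {
  getAll: () => todoRepository.findAll()
};

// controller: thin layer that maps the HTTP request to a service call
app.get('/api/todo', async (req, res) => {
  try {
    res.json(await todoService.getAll());
  } catch (err) {
    res.status(500).json({ message: 'Could not load todos' });
  }
});

app.listen(3000);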
Also, from the front end, I should probably access certain endpoints
using fetch. Is this correct?
Yes.
Also, how can I store data that is sent from the frontend? For
example, if I type my new todo into the <input> field and click the add
<button>, what does the sequence of events look like? Add an event listener
to the button and connect to the server, then create a POST method on the
server and insert the data, kind of (?) <= sorry, this part is super unclear to me.
You have a few options:
Form submit
Ajax request, serialize the data in the form manually and send a POST request through ajax. Since you're considering a client library like React, I suggest using this approach.
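For example, with fetch (the /api/todo endpoint and the element ids are assumptions):

// hypothetical click handler: read the <input>, POST it as JSON, then update the UI
document.querySelector('#add-button').addEventListener('click', async () => {
  const task = document.querySelector('#new-todo').value;
  const response = await fetch('/api/todo', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ task, finished: false })
  });
  if (response.ok) {
    const created = await response.json(); // assuming the server echoes back the stored row
    // append `created` to the list in the DOM / component state here
  }
});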
Displaying task on the frontend is also unclear for me. If I use an
object like {task: clean up my room, finished: false (or 0 ?)} in the
front end, it makes sense but, when I start using the database, I'm
confused about how to display items that are not completed yet. In
order to display each task, I won't use GET method to get the data
from the database, right?
If you want to use REST, it typically implies that you're not using backend MVC / server rendering. As you mentioned React, you're opting for keeping client state and syncing with the server over REST.
What this means is that you keep all state in the frontend (in memory / localStorage) and just sync with the server. Typically this is done with what is referred to as optimistic rendering: you manage state in the frontend as if the server didn't exist, and when a server call fails (you see this in the ajax response), you show an error in the UI and roll back the state.
Alternatively, you can use spinners that wait until the server sync is complete. It makes for less interesting user-perceived performance, but it is just as valid technically.
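A rough sketch of the optimistic variant (renderTodos, showError, and the /api/todo endpoint are illustrative placeholders, not part of any particular library):

// optimistic update: change local state first, then sync; roll back if the server call fails
async function addTodoOptimistically(todos, task) {
  const tempTodo = { id: 'temp-' + Date.now(), task, finished: false };
  todos.push(tempTodo); // update the in-memory state immediately
  renderTodos(todos);   // placeholder for whatever re-renders the list

  try {
    const response = await fetch('/api/todo', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ task, finished: false })
    });
    if (!response.ok) throw new Error('Server rejected the todo');
    const saved = await response.json();
    Object.assign(tempTodo, saved); // swap in the real id assigned by the server
  } catch (err) {
    todos.splice(todos.indexOf(tempTodo), 1); // roll back the optimistic change
    renderTodos(todos);
    showError('Could not save your todo');    // placeholder for the error UI
  }
}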
Also, do I need to use knex to solve this type of problem? (or better
to have knex and why?) I think my problem is I kind of know what
frontend, server, database for, but not clear how they are connected
with each other...
It doesn't really matter what you use. Personally I would go with this stack:
Node Express (REST), but could be Koa, Restify...
React / Redux client side
For the backend repo layer you can use Knex if you want to, I have used node-postgres which worked well for me.
Additional info:
I would encourage you to take a look at the following, if you're doubtful how to write the REST endpoints: https://www.youtube.com/watch?v=PgrP6r-cFUQ
Once I've seeded my fake data into the database, how should I get the data from the database and send it to the frontend? I think that in my index.js file I should create a new endpoint, something like "app.get("/api/todo", (res, req) => ...", and inside the callback function I should write something like "select * from [table name]". Also, from the frontend, I should probably access those endpoints using fetch. Is this correct?
You are right here: you need to create an endpoint on your server that is responsible for getting data from the database. This same endpoint has to be consumed by your frontend application if you're planning to use ReactJS. As soon as your app loads, get the current user ID and make a fetch call to the endpoint created above to fetch the list of todos (or any other data) pertaining to that user.
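In its most minimal form that might look roughly like this (note the handler signature is (req, res), not (res, req); the todo table and the node-postgres pool are assumptions, as in the sketch above):

// server (index.js): endpoint that reads from Postgres via node-postgres
app.get('/api/todo', async (req, res) => {
  const result = await pool.query('SELECT * FROM todo');
  res.json(result.rows);
});

// frontend: fetch the list once when the app loads, then render it
fetch('/api/todo')
  .then(response => response.json())
  .then(todos => {
    // render the todos here
    console.log(todos);
  });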
Also, how can I store data that is sent from the frontend? For example, if I type my new todo into the <input> field and click the add <button>, what does the sequence of events look like? Add an event listener to the button and connect to the server, then create a POST method on the server and insert the data, kind of (?) <= sorry, this part is super unclear to me.
Okay, so far you have connected your frontend to your backend, started the application, the user is present, and you have fetched the list of todos, if any are available for that particular user.
Now, coming to adding a new todo, the most minimal flow would look something like this (a sketch of the corresponding endpoints follows the list):
User types the data in a form and submits the form
There is a form submit handler which will take the form data
Validate the form data
Call the POST endpoint with the form data as the payload
This POST endpoint will be responsible for saving the form data to the DB
If an existing todo is being modified, this should be handled with a PATCH request (updating whether the task is completed or not)
The next and possibly last thing would be deleting a task; you can have a DELETE endpoint to remove the todo item from the list of todos
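A hedged sketch of those three endpoints in Express (the routes, table, and column names are assumptions; it reuses the node-postgres pool from the earlier sketch and requires app.use(express.json()) for req.body):

app.post('/api/todo', async (req, res) => {
  const { task } = req.body;
  const result = await pool.query(
    'INSERT INTO todo (task, finished) VALUES ($1, false) RETURNING *',
    [task]
  );
  res.status(201).json(result.rows[0]); // echo back the new row, including its id
});

app.patch('/api/todo/:id', async (req, res) => {
  const { finished } = req.body; // flip the completed state
  await pool.query('UPDATE todo SET finished = $1 WHERE id = $2', [finished, req.params.id]);
  res.sendStatus(204);
});

app.delete('/api/todo/:id', async (req, res) => {
  await pool.query('DELETE FROM todo WHERE id = $1', [req.params.id]);
  res.sendStatus(204);
});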
Displaying task on the frontend is also unclear for me. If I use an object like {task: clean up my room, finished: false (or 0 ?)} in the front end, it makes sense but, when I start using the database, I'm confused about how to display items that are not completed yet. In order to display each task, I won't use GET method to get the data from the database, right?
Okay, so as soon as you load the frontend for the first time, you will make a GET call to the server and fetch the list of todos. Store this somewhere in the application, probably the Redux store or just the application's local state.
Going by what you have suggested already,
{task: 'some task name', finished: false, id: '123'}
Now, any time there has to be any kind of interaction with a todo item, either PATCH or DELETE, you use the id of that todo and call the respective endpoint.
Also, do I need to use knex to solve this type of problem? (or better to have knex and why?) I think my problem is I kind of know what frontend, server, database for, but not clear how they are connected with each other...
In a nutshell, or in the most minimal sense, think of the frontend as the presentation layer and the backend and DB as the application layer.
The overall game is one of sending some kind of request and receiving some response to it. The frontend is what enables an end user to create these requests; the backend (server & database) is where the requests are processed and a response is sent back to the presentation layer so the end user can be notified.
These explanations are kept very minimal to make sure you get the gist of it, since this question touches almost the entire scope of web development. I would suggest you read a few articles about both of these layers and how they connect with each other.
You should also spend some time understanding what a RESTful API is. That should be a great help.
I have a piece of open source software written in Python which uses the Bottle web server to display forms in a web browser. The form data are sent via "method = post" to the web server. Until now the server process has been running on the same (PC) host as the browser, so there is no issue with the internet connection.
Now I have to rewrite this software so that it can be used on mobile devices, with the server somewhere on the internet. The environment in which data entry takes place will be such that an unstable or lost internet connection is likely. So I have to make provisions for the case where the website containing the form is loaded first (in the office via WLAN, say), then data entry takes place (in the "field"), and during data entry the internet connection is lost, so that saving the data to the server won't work. In this case it would be great to be able to save the form data locally, in order to send the POST request later on. (It probably won't be possible to keep the website open the whole time until this works; at the latest when the battery runs low, I'd run into problems.)
Probably I'm not the first with this problem, so my question is: is there a "standard" (or well-tested) solution for buffering form data on the client side when a POST request is not answered, and sending the same request later on? If not, how would you go about solving this issue? In particular, I see the following (sub-)problems:
How to detect (on the client side) that a post request failed? Probably some kind of timeout mechanism in javascript would have to be employed, but how?
How to save data? My first idea would be to save data to a cookie using javascript. Do I overlook something here?
How to send data back later on?
I'm sufficiently proficient in Python to dare this project, but I'm rather new to web technologies, so please excuse me if some part of the question is rather stupid. In that case, I'd be grateful to be told so... (...with a hint on how to ask a better question.)
Thanks a lot for any help.
I will try to answer based on (sub-)problems:
How to detect (on the client side) that a post request failed? Probably some kind of timeout mechanism in javascript would have to be employed, but how?
To detect if the request failed:
Only send status code 200 if you received the data and saved it on the backend!
Don't send 200 if there is an error! (Use an error status code like 5xx or 4xx.)
There is a timeout option in jQuery that cancels the request if it takes more than the given time to complete.
When it fails, save the data to localStorage.
If you are not using jQuery, I guess you can do something similar using fetch in vanilla JavaScript (see the fetch documentation for details).
$.ajax({
  url: '/submit',    // placeholder for the form's target URL
  type: 'POST',
  data: formData,    // placeholder for the serialized form data
  timeout: 3000      // give up after 3 seconds
}).done(function () {
  console.log("success");
}).fail(function () {
  console.log("error");
  // load the buffer (or start a new one), append this submission, and save it back
  var _local = JSON.parse(localStorage.getItem('data-saved')) || []; // get localStorage data
  _local.push({"key": "value"}); // append the JSON-based form data
  localStorage.setItem('data-saved', JSON.stringify(_local)); // update localStorage
});
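For the fetch alternative mentioned above, a rough sketch using AbortController for the timeout (the /submit endpoint and the formData variable are assumptions):

// vanilla fetch version with a 3-second timeout via AbortController
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 3000);

fetch('/submit', {                // placeholder endpoint
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(formData), // formData assumed to hold the form values
  signal: controller.signal
})
  .then(response => {
    if (!response.ok) throw new Error('Server returned ' + response.status);
    console.log('success');
  })
  .catch(() => {
    // timed out or failed: buffer the data locally for a later retry
    const buffered = JSON.parse(localStorage.getItem('data-saved')) || [];
    buffered.push(formData);
    localStorage.setItem('data-saved', JSON.stringify(buffered));
  })
  .finally(() => clearTimeout(timer));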
How to save data? My first idea would be to save data to a cookie using javascript. Do I overlook something here?
Save the data using localStorage.
localStorage can't store JSON objects directly; however, you can save them using JSON.stringify and load them back using JSON.parse.
// Get the buffered data (falling back to an empty array the first time)
var get_local_data = JSON.parse(localStorage.getItem('data-saved')) || [];
// Add the new entry (JavaScript arrays use push, not append)
get_local_data.push({"Name": "value", "age": 10});
// Write the updated buffer back to localStorage
localStorage.setItem('data-saved', JSON.stringify(get_local_data));
How to send data back later on?
Send the data back using setInterval in JavaScript.
Check periodically whether there is any data under the localStorage key. If there is, send an ajax request to the back end!
// Run every 5 seconds
setInterval(function () {
  // Check if we have any buffered (failed) data
  var get_local_data = JSON.parse(localStorage.getItem('data-saved')) || [];
  if (get_local_data.length > 0) {
    // Make an ajax request
    // Update localStorage on success (remove the sent data from the buffer)
    // Ignore the failed case; it will be retried on the next interval
  }
}, 5000);
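To fill in the commented-out part above, one possible (hypothetical) shape, still with jQuery and an assumed /submit endpoint:

// periodically retry buffered submissions; remove an entry only after the server accepts it
setInterval(function () {
  var buffered = JSON.parse(localStorage.getItem('data-saved')) || [];
  if (buffered.length === 0) return;

  $.ajax({
    url: '/submit',                    // hypothetical endpoint
    type: 'POST',
    contentType: 'application/json',
    data: JSON.stringify(buffered[0]), // send the oldest buffered entry
    timeout: 3000
  }).done(function () {
    buffered.shift();                  // drop the entry that was just accepted
    localStorage.setItem('data-saved', JSON.stringify(buffered));
  });
  // on failure the entry stays in the buffer and is retried on the next interval
}, 5000);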
Is there a way to track the delivery report of a particular message that has been sent via FCM to an Android device? I found we can add delivery_receipt_requested to track delivery, and I have added it to my JSON data as follows:
{"to":"KEY",
"data":{
"data":{
"title":"test message",
"message":"sent",
"image":null}
},
"notification":{
"delivery_receipt_requested":true
}
}
and I receive this response:
{"multicast_id":6417448921485349071,"success":1,"failure":0,"canonical_ids":0,"results":[{"error":"false"}]}
In PHP or JavaScript I need something where, given that multicast_id, I can get the current status of the message. It has been almost a nightmare to get the desired result, but it shouldn't be impossible. Is there any way, guys?
There is currently no way to manually ask the FCM server about the status of the sent message.
Based on your post, it seems you already did your homework checking the FCM service. Implementing delivery receipts is the only way (AFAIK) that you could attain the behavior you mentioned in your post.
Implementing the delivery receipt not only needs the delivery_receipt_requested parameter in your payload, you also have to implement the XMPP server protocol, along with the upstream messaging part in your client app (for the acknowledgement part).
I'm building an app and would like some feedback on my approach to building the data sync process and API that supports it. For context, these are the guiding principles for my app/API:
Free: I do not want to charge people at all to use the app/API.
Open source: the source code for both the app and API are available to the public to use as they wish.
Decentralised: the API service that supports the app can be run by anyone on any server, and made available for use to users of the app.
Anonymous: the user should not have to sign up for the service, or submit any personal identifying information that will be stored alongside their data.
Secure: the user's data should be encrypted before being sent to the server, anyone with access to the server should have no ability to read the user's data.
I will implement an instance of the API on a public server which will be selected in the app by default. That way initial users of the app can sync their data straight away without needing to find or set up an instance of the API service. Over time, if the app is popular then users will hopefully set up other instances of the API service either for themselves or to make available to other users of the app should they wish to use a different instance (or if the primary instance runs out of space, goes down, etc). They may even access the API in their own apps. Essentially, I want them to be able to have the choice to be self sufficient and not have to necessarily rely on other's providing an instance on the service for them, for reasons of privacy, resilience, cost-saving, etc. Note: the data in question is not sensitive (i.e. financial, etc), but it is personal.
The user's sync journey works like this:
User downloads the app, and creates their data in the process of using the app.
When the user is ready to initially sync, they enter a "password" in the password field, which is used to create a complex key with which to encrypt their data. Their password is stored locally in plain text but is never sent to the server.
User clicks the "Sync" button, their data is encrypted (using their password) and sent to the specified (or default) API instance and responds by giving them a unique ID which is saved by the app.
For future syncs, their data is encrypted locally using their saved password before being sent to the API along with their unique ID which updates their synced data on the server.
When retrieving synced data, their unique ID is sent to the API which responds with their encrypted data. Their locally stored password is then used to decrypt the data for use by the app.
I've implemented the app in JavaScript, and the API in Node.js (restify) with MongoDB as the backend, so in practice sync requests to the server look like this:
1. Initial sync
POST /api/data
Post body:
{
"data":"DWCx6wR9ggPqPRrhU4O4oLN5P09onApoAULX4Xt+ckxswtFNH/QQ+Y/RgxdU+8+8/muo4jo/jKnHssSezvjq6aPvYK+EAzAoRmXenAgUwHOjbiAXFqF8gScbbuLRlF0MsTKn/puIyFnvJd..."
}
Response:
{
"id":"507f191e810c19729de860ea",
"lastUpdated":"2016-07-06T12:43:16.866Z"
}
2. Get sync data
GET /api/data/507f191e810c19729de860ea
Response:
{
"data":"DWCx6wR9ggPqPRrhU4O4oLN5P09onApoAULX4Xt+ckxswtFNH/QQ+Y/RgxdU+8+8/muo4jo/jKnHssSezvjq6aPvYK+EAzAoRmXenAgUwHOjbiAXFqF8gScbbuLRlF0MsTKn/puIyFnvJd...",
"lastUpdated":"2016-07-06T12:43:16.866Z"
}
3. Update synced data
POST /api/data/507f191e810c19729de860ea
Post body:
{
"data":"DWCx6wR9ggPqPRrhU4O4oLN5P09onApoAULX4Xt+ckxswtFNH/QQ+Y/RgxdU+8+8/muo4jo/jKnHssSezvjq6aPvYK+EAzAoRmXenAgUwHOjbiAXFqF8gScbbuLRlF0MsTKn/puIyFnvJd..."
}
Response:
{
"lastUpdated":"2016-07-06T13:21:23.837Z"
}
Their data in MongoDB will look like this:
{
"id":"507f191e810c19729de860ea",
"data":"DWCx6wR9ggPqPRrhU4O4oLN5P09onApoAULX4Xt+ckxswtFNH/QQ+Y/RgxdU+8+8/muo4jo/jKnHssSezvjq6aPvYK+EAzAoRmXenAgUwHOjbiAXFqF8gScbbuLRlF0MsTKn/puIyFnvJd...",
"lastUpdated":"2016-07-06T13:21:23.837Z"
}
Encryption is currently implemented using CryptoJS's AES implementation. The app provides the user's password as a passphrase to the AES "encrypt" function, which generates a 256-bit key with which to encrypt the user's data before it is sent to the API.
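For reference, roughly how that looks with CryptoJS (variable names are illustrative; localAppData stands in for whatever the app keeps locally):

// encrypt before syncing; CryptoJS derives a 256-bit key from the passphrase internally
var CryptoJS = require('crypto-js');

var password = 'user-supplied passphrase';    // stored locally, never sent to the API
var plaintext = JSON.stringify(localAppData); // localAppData stands in for the app's state

var ciphertext = CryptoJS.AES.encrypt(plaintext, password).toString();
// POST { "data": ciphertext } to the API

// after a GET, decrypt with the same passphrase
var decrypted = CryptoJS.AES.decrypt(ciphertext, password).toString(CryptoJS.enc.Utf8);
var restored = JSON.parse(decrypted);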
That about sums up the sync process. It's fairly simple, but obviously it needs to be secure and reliable. My concerns are:
As the MongoDB ObjectID is fairly easy to guess, it is possible that a malicious user could request someone else's data (as per step 2. Get sync data) by guessing their ID. However, if they are successful they will only retrieve encrypted data and will not have the key with which to decrypt it. The same applies for anyone who has access to the database on the server.
Given the above, is the CryptoJS AES implementation secure enough so that in the real possibility that a user's encrypted data is retrieved by a malicious user, they will not realistically be able to decrypt the data?
Since the API is open to anyone and doesn't audit or check the submitted data, anyone could potentially submit any data they wish to be stored in the service, for example:
Post body:
{
"data":"This is my anyold data..."
}
Is there anything practical I can do to guard against this whilst adhering to the guiding principles above?
General abuse of the service such as users spamming initial syncs (step 1 above) over and over to fill up the space on the server; or some user's using disproportionately large amounts of server space. I've implemented some features to guard against this, such as logging IPs for initial syncs for one day (not kept any longer than that) in order to limit a single IP to a set number of initial syncs per day. Also I'm limiting the post body size for syncs. These options are configurable in the API however, so if a user doesn't like these limitations on a public API instance, they can host their own instance and tweak the settings to their liking.
So that's it, I would appreciate anyone who has any thoughts or feedback regarding this approach given my guiding principles. I couldn't find any examples where other apps have attempted a similar approach, so if anyone knows of any and can link to them I'd be grateful.
I can't really comment on whether specific AES algorithms/keys are secure or not, but assuming they are (and the keys are generated properly), it should not be a problem if other users can access the encrypted data.
You can maybe protect against abuse, without requiring accounts, by using captchas or similar guards against automated usage. If you require a captcha on new accounts, and set limits for all accounts on data volume and call frequency, you should be OK.
To guard against accidental clear-text data, you might generate a secondary key for each account, and then check on the server with the public secondary key whether the messages can be decrypted. Something like this:
data = secondary_key(user_private_key(cleartext))
This way the data will always be encrypted, and in worst case the server will be able to read it, but others wouldn't.
A few comments on your API :) If you're already using HTTP and POST, you don't really need an id in the response body. A POST usually returns a URI that points to the created data. You can then GET that URI, or PUT to it to make changes:
POST /api/data
{"data": "..."}
Response:
Location: /api/data/12345
{"data": "...", "lastmodified": "..." }
To change it:
PUT /api/data/12345
{"data": "..."}
You don't have to do it this way, but it might be easier to implement on the client side, and maybe even help with caching and cache invalidation.
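A small sketch of that shape, shown with Express for brevity (the store object is a placeholder for the MongoDB layer; the same idea applies in restify):

// POST creates the resource and returns its URI in the Location header; PUT replaces it
app.post('/api/data', async (req, res) => {
  const id = await store.insert(req.body.data); // "store" is a placeholder for the Mongo layer
  res.location('/api/data/' + id)
     .status(201)
     .json({ data: req.body.data, lastUpdated: new Date().toISOString() });
});

app.put('/api/data/:id', async (req, res) => {
  await store.replace(req.params.id, req.body.data);
  res.json({ lastUpdated: new Date().toISOString() });
});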
I'm currently attempting to create a simple video chat service using WebRTC with Ajax for the signalling method.
As per the recommendation of another Stack Overflow user, in order to make sure I was understanding the flow of a standard WebRTC app properly, I first created a simple WebRTC video chat service in which I printed the created offer or answer and ICE candidates out to the screen, and manually copied and pasted that info into a text area in the other client window to process everything. Upon doing that, I was able to successfully get both videos to pop up.
After getting that to work properly, I decided to try and use Ajax as the signalling method. However, I can't seem to get it to work now.
In my current implementation, every time offer/answer or ICE candidate info is created, I instantly create a new Ajax object, which is used to add that info (after the JSON.stringify method has been executed on it) to a DB table. Both clients are constantly polling that DB table, searching for new info from the other client.
I've been echoing a lot of information out to the console, and as far as I can tell, a valid offer is always sent from one client to another, but upon receiving that offer, successfully setting it as the remote description, and creating an answer, any attempt I make to set the local description of the "answerer" fails.
Is there any particular reason why this might happen? Here's a snippet of my code:
var i,
    len,
    message;
for (i = 0, len = responseData.length; i < len; i += 1) {
  message = JSON.parse(responseData[i]);
  if (message.type === 'offer') {
    makeAnswer(message);
  }
  // Code omitted.
}
...
makeAnswer = function (offer) {
  pc.setRemoteDescription(new RTCSessionDescription(offer), function () {
    pc.createAnswer(function (desc) {
      // An answer is always properly generated here.
      pc.setLocalDescription(desc, function () {
        // This success callback function is never executed.
        setPayload(JSON.stringify(pc.localDescription));
      }, function () {
        // I always end up here.
      });
    });
  });
};
In essence, I loop through any data retrieved from the DB (sometimes there's both an offer and lots of candidate info gathered all at once), and if the type property of a message is 'offer', I call the makeAnswer function. From there, I set the remote description to the received offer, create an answer, and try to set the answer as the local description, but it always fails at that last step.
If anyone can offer any advice as to why this might be happening, I would be very appreciative.
Thank you very much.
Well, I figured out the problem. It turns out that I wasn't encoding the SDP and ICE info before sending it to a PHP script via Ajax. As a result, any plus signs (+) in the SDP/ICE info were being turned into spaces, thus causing the strings to differ between the local and remote clients and not work.
I've always used encodeURIComponent on GET requests with Ajax, but I never knew you had to use that function with POST requests as well. That's good to know.
Anyway, after I started using the encodeURIComponent function with the posted data, and then fixed my logic up a bit so that ICE candidates are never set until after both local and remote descriptions are set, it started working like a charm every time.
That's the good news. The bad news is that everything was working fine on my local host, but as soon as I ported the exact same code over to my web-hosted server, even though the console was reporting that the offer/answer and ICE info were all properly being received and set, the remote video isn't popping up.
Sigh. One more hurdle to cross before I can be done with this.
Anyway, just to let everyone know, the key is to use encodeURIComponent before sending the SDP/ICE info to a server-side script, so that the string received on the other end is exactly the same.
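For anyone hitting the same issue, a minimal sketch of the encoding step (signal.php is a hypothetical signalling script):

// encode the SDP/ICE JSON before putting it in a form-encoded POST body,
// otherwise "+" characters get decoded as spaces on the PHP side
var payload = JSON.stringify(pc.localDescription);

var xhr = new XMLHttpRequest();
xhr.open('POST', 'signal.php'); // hypothetical signalling script
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.send('payload=' + encodeURIComponent(payload));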