This is a snippet from the Meteor todo list tutorial. Is the variable checked represented on both the client and the server side? How do the client and the server communicate to keep checked consistent?
Template.task.events({
  'click .toggle-checked'() {
    // Set the checked property to the opposite of its current value
    Tasks.update(this._id, {
      $set: { checked: !this.checked },
    });
  },
  'click .delete'() {
    Tasks.remove(this._id);
  },
});
checked is an attribute defined on a Tasks document in this app.
In Meteor, the definitive record of this object is stored on the server (in MongoDB); however, there is also a client-side cache, known as MiniMongo, that is being manipulated here. The Meteor framework does a lot of work in the background (via the DDP protocol) to keep the server- and client-side objects in sync.
In this case, the following happens when a user clicks a checkbox (firing the 'click .toggle-checked' event handler and its Tasks.update call):
1. The client-side MiniMongo cache is updated first. This is known as Optimistic UI, and it lets the client UI respond immediately, without waiting for the server.
2. A message (a Meteor Method call) is sent to the server saying that the client wants to update the Tasks object by setting the checked field to a new value.
3. The server receives the update request, checks that it is a valid operation, and either processes it (updating the MongoDB version of the Tasks object) or refuses the update, as appropriate.
4. The server sends out a DDP update with the resulting state of the Tasks object to all clients that have subscribed to a publication that includes it.
5. Clients that have previously subscribed receive this DDP update and replace their MiniMongo version with the server's version of the Tasks object, ensuring that all clients are in sync with the server.
Now, in the ideal case, when the server accepts the client's changes, the new version of Tasks received (in step 5) by the initiating client will match the object it optimistically updated (in step 1).
By implementing all these steps, however, the Meteor framework also synchronizes other clients, and it handles the cases where the server rejects the update or modifies additional fields, as appropriate for the application.
Luckily though, this is all handled by the Meteor framework, and all you need to do is call Tasks.update for all this magic to happen!
Meteor likes to blur the lines between client and server. There are still things you can do to separate code: for instance, JavaScript files (like all files) inside the /server directory are only loaded on the server, which means client users can't see that code.
/client is, obviously, the opposite. You can also check where code is running with Meteor.isClient and Meteor.isServer.
Now, what does this mean for your code?
Depending on where your code lives, it has different access levels. Inside the script itself, however, there is basically no difference: checked is known to that script on both the server and the client, because that blurred line between client and server is how Meteor runs.
Meteor employs something called "database everywhere", which means it doesn't matter where the code is called, because it will run in both places.
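To make the split concrete, here is a minimal sketch of how the same shared file behaves differently depending on where it runs (the 'tasks' publication name is an assumption, not part of the tutorial snippet above):

Tasks = new Mongo.Collection('tasks'); // shared: defines the collection on client and server

if (Meteor.isServer) {
  // Runs only on the server: publish the tasks to subscribed clients
  Meteor.publish('tasks', function () {
    return Tasks.find();
  });
}

if (Meteor.isClient) {
  // Runs only in the browser: subscribe so MiniMongo receives the data
  Meteor.subscribe('tasks');
}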
Related
I'm trying to make a simple todo app in order to understand how the frontend and backend are connected. I read some websites showing tutorials for using and connecting a REST API, an Express server, and a database, but I still wasn't able to get the fake data from the database. Anyway, I wanted to check whether my understanding of how they are connected and talk to each other is correct. Could you give me some advice, please?
First of all, I'm planning to use either JavaScript & HTML or React for the frontend, Express for the server, and Postgres for the database. My plan is that a user can add & delete his or her tasks. I have already created a server in my index.js file and created a database using the psql command. Now if I type "" it takes me to a page saying "Hello" (I made this endpoint), but I'm failing to seed my data into the database. Here are my questions↓
After I was able to seed my fake data into the database, how should I get the data from the database and send it to the frontend? I think in my index.js file I should create a new endpoint, something like "app.get("/api/todo", (req, res) => ...", and inside the callback function I should write something like "select * from [table name]". Also, from the frontend, I should probably access certain endpoints using fetch. Is this correct?
Also, how can I store data that is sent from the frontend? For example, if I type my new todo into the <input> field and click the add <button>, what does the sequence of events look like? Add an event listener to the button and connect to the server, then create a POST method on the server and insert the data, kind of (?) <= sorry, this part is super unclear for me.
Displaying tasks on the frontend is also unclear for me. If I use an object like {task: clean up my room, finished: false (or 0 ?)} on the frontend, it makes sense, but when I start using the database, I'm confused about how to display items that are not completed yet. In order to display each task, I won't use the GET method to get the data from the database, right?
Also, do I need to use knex to solve this type of problem? (or is it better to have knex, and why?)
I think my problem is that I kind of know what the frontend, server, and database are for, but it's not clear to me how they are connected with each other...
I also drew some diagrams, so I hope they help you understand my vague questions...
How should I get the data from the database and send it to the frontend? I think in my index.js file I should create a new endpoint, something like "app.get("/api/todo", (req, res) => ...", and inside the callback function I should write something like "select * from [table name]".
Typically you use a controller -> service -> repository pattern:
The controller is a thin layer: it's basically the callback method you refer to. It just takes parameters from the request and forwards the request to the service in the form of a method call (i.e. expose some methods on the service and call those methods). It takes the response from the service layer and returns it to the client. If the service layer throws custom exceptions, you also handle them here and send an appropriate response to the client (error status code, custom message).
The service takes the request and forwards it to the repository. In this layer, you can perform any custom business logic (by delegating to other isolated services). Also, this layer will take care of throwing custom exceptions, e.g. when an item was not found in the database (throw new NotFoundException).
The repository layer connects to the database. This is where you put the custom db logic (queries like you mention), eg when using a library like https://node-postgres.com/. You don't put any other logic here, the repo is just a connector to the db.
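To make this concrete, here is a minimal sketch of the three layers in Express with node-postgres; the todos table and the todoService/todoRepository names are illustrative, not a definitive implementation:

const express = require('express');
const { Pool } = require('pg');

const pool = new Pool(); // reads connection settings from the PG* env vars

// Repository: the only layer that talks to the database
const todoRepository = {
  findAll: () => pool.query('SELECT * FROM todos').then(result => result.rows),
};

// Service: business logic lives here (trivial in this example)
const todoService = {
  listTodos: () => todoRepository.findAll(),
};

// Controller: thin layer mapping HTTP to service calls
const app = express();
app.get('/api/todo', async (req, res) => {
  try {
    res.json(await todoService.listTodos());
  } catch (err) {
    res.status(500).json({ error: 'Failed to load todos' });
  }
});

app.listen(3000);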
Also, from the frontend, I should probably access certain endpoints using fetch. Is this correct?
Yes.
Also, how can I store data that is sent from the frontend? For example, if I type my new todo into the <input> field and click the add <button>, what does the sequence of events look like? Add an event listener to the button and connect to the server, then create a POST method on the server and insert the data, kind of (?) <= sorry, this part is super unclear for me.
You have a few options:
Form submit.
An Ajax request: serialize the data in the form manually and send a POST request through ajax. Since you're considering a client library like React, I suggest using this approach (see the sketch below).
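A hedged sketch of that ajax approach in plain JavaScript (the element ids and the /api/todo endpoint are assumptions):

document.querySelector('#add-button').addEventListener('click', () => {
  const task = document.querySelector('#todo-input').value;

  fetch('/api/todo', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ task, finished: false }),
  })
    .then(res => res.json())
    .then(created => console.log('Saved todo:', created))
    .catch(err => console.error('Failed to save todo:', err));
});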
Displaying tasks on the frontend is also unclear for me. If I use an object like {task: clean up my room, finished: false (or 0 ?)} on the frontend, it makes sense, but when I start using the database, I'm confused about how to display items that are not completed yet. In order to display each task, I won't use the GET method to get the data from the database, right?
If you want to use REST, it typically implies that you're not using backend MVC / server-side rendering. As you mentioned React, you're opting for keeping client state and syncing it with the server over REST.
What this means is that you keep all state in the frontend (in memory / localStorage) and just sync with the server. Typically what is applied is called optimistic rendering: you manage state in the frontend as if the server didn't exist; when the server fails (you see this in the ajax response), you show an error in the UI and roll back the state.
Alternatively, you can use spinners that wait until the server sync is complete. It makes for less impressive user-perceived performance, but is just as valid technically.
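A minimal sketch of the optimistic approach, assuming a PATCH endpoint like /api/todo/:id and a render(state) function of your own:

async function toggleTodo(todo, state, render) {
  const previous = todo.finished;
  todo.finished = !todo.finished; // optimistic: update local state first
  render(state);

  try {
    const res = await fetch(`/api/todo/${todo.id}`, {
      method: 'PATCH',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ finished: todo.finished }),
    });
    if (!res.ok) throw new Error(`Server responded ${res.status}`);
  } catch (err) {
    todo.finished = previous; // server failed: roll back and show an error
    render(state);
  }
}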
Also, do I need to use knex to solve this type of problem? (or is it better to have knex, and why?) I think my problem is that I kind of know what the frontend, server, and database are for, but it's not clear to me how they are connected with each other...
It doesn't really matter what you use. Personally I would go with this stack:
Node Express (REST), but could be Koa, Restify...
React / Redux client side
For the backend repo layer you can use Knex if you want to; I have used node-postgres, which worked well for me.
Additional info:
I would encourage you to take a look at the following, if you're doubtful how to write the REST endpoints: https://www.youtube.com/watch?v=PgrP6r-cFUQ
After I was able to seed my fake data into the database, how should I get the data from the database and send it to the frontend? I think in my index.js file I should create a new endpoint, something like "app.get("/api/todo", (req, res) => ...", and inside the callback function I should write something like "select * from [table name]". Also, from the frontend, I should probably access certain endpoints using fetch. Is this correct?
You are right here: you need to create an endpoint on your server which is responsible for getting data from the database. This same endpoint has to be consumed by your frontend application, in case you are planning to use ReactJS. As soon as your app loads, you need to get the current userID and make a fetch call to the endpoint created above to fetch the list of todos (or any data, for that matter, pertaining to the concerned user).
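A hedged sketch of that initial fetch; getCurrentUserId and renderTodos are hypothetical helpers, and the endpoint shape is an assumption:

window.addEventListener('DOMContentLoaded', async () => {
  const userId = getCurrentUserId(); // hypothetical: however your app knows the user
  const res = await fetch(`/api/todo?user=${encodeURIComponent(userId)}`);
  const todos = await res.json();
  renderTodos(todos); // hypothetical: draw the list in the UI
});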
Also, how can I store data that is sent from the frontend? For example, if I type my new todo into the <input> field and click the add <button>, what does the sequence of events look like? Add an event listener to the button and connect to the server, then create a POST method on the server and insert the data, kind of (?) <= sorry, this part is super unclear for me.
Okay. So far, you have connected your frontend to your backend, started the application, the user is present, and you have fetched the list of todos, if any are available for that particular user.
Now, coming to adding a new todo, the most minimal flow would look something like this:
1. The user types the data into a form and submits the form.
2. A form submit handler takes the form data.
3. The form data is validated.
4. The POST endpoint is called with the form data as payload.
5. This POST endpoint is responsible for saving the form data to the DB.
If an existing todo is being modified, this should be handled using a PATCH request (updating the state, i.e. whether the task is completed or not).
The next and possibly the last thing would be deleting a task; you can have a DELETE endpoint to remove the todo item from the list of todos. A sketch of these endpoints follows below.
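A sketch of those three endpoints, reusing the app and pool from the earlier sketch; table and column names are assumptions:

app.use(express.json()); // needed so req.body is parsed as JSON

app.post('/api/todo', async (req, res) => {
  const result = await pool.query(
    'INSERT INTO todos (task, finished) VALUES ($1, false) RETURNING *',
    [req.body.task]
  );
  res.status(201).json(result.rows[0]);
});

app.patch('/api/todo/:id', async (req, res) => {
  await pool.query('UPDATE todos SET finished = $1 WHERE id = $2',
    [req.body.finished, req.params.id]);
  res.sendStatus(204);
});

app.delete('/api/todo/:id', async (req, res) => {
  await pool.query('DELETE FROM todos WHERE id = $1', [req.params.id]);
  res.sendStatus(204);
});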
Displaying tasks on the frontend is also unclear for me. If I use an object like {task: clean up my room, finished: false (or 0 ?)} on the frontend, it makes sense, but when I start using the database, I'm confused about how to display items that are not completed yet. In order to display each task, I won't use the GET method to get the data from the database, right?
Okay, so as soon as you load the frontend for the first time, you make a GET call to the server and fetch the list of TODOs. Store this somewhere in the application, probably the redux store or just the application's local state.
Going by what you have suggested already,
{task: 'some task name', finished: false, id: '123'}
Now, anytime there has to be any kind of interaction with any of the TODO items, either PATCH or DELETE, you would use the id of that TODO and call the respective endpoint.
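Once the GET response is in local state, displaying only unfinished items is a plain filter over that state; no extra server call is needed. A minimal sketch, assuming the object shape above:

const unfinishedTodos = todos.filter(todo => !todo.finished);
// render unfinishedTodos, and re-run the filter whenever local state changes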
Also, do I need to use knex to solve this type of problem? (or is it better to have knex, and why?) I think my problem is that I kind of know what the frontend, server, and database are for, but it's not clear to me how they are connected with each other...
In a nutshell, or in the most minimal sense, think of the frontend as the presentation layer and the backend and DB as the application layer.
The overall game is one of sending requests and receiving responses to them. The frontend is what enables an end user to create these so-called requests; the backend (server & database) is where these requests are processed, and a response is sent back to the presentation layer so the end user can be notified.
These explanations are very minimal, just to make sure you get the gist of it, since this question almost spans the entire scope of web development. I would suggest you read a few articles about both of these layers and how they connect with each other.
You should also spend some time understanding what a RESTful API is. That should be a great help.
I am building a "TODO" application which uses Service Workers to cache the request's responses and in case a user is offline, the cached data is displayed to the user.
The Server exposes an REST-ful endpoint which has POST, PUT, DELETE and GET endpoints exposed for the resources.
Considering that when the user is offline and submitting a TODO item, I save that to local IndexedDB, but I can't send this POST request for the server since there is no network connection. The same is true for the PUT, DELETE requests where a user updates or deletes an existing TODO item
Questions
What patterns are in use to sync the pending requests with the REST-ful Server when the connection is back online?
What patterns are in use to sync the pending requests with the REST-ful Server when the connection is back online?
The Background Sync API is suitable for this scenario. It enables web applications to synchronize data in the background, deferring actions until the user has a reliable connection and ensuring that whatever the user wants to send is actually sent. Even if the user navigates away or closes the browser, the action is still performed, and you could notify the user if desired.
Since you're saving to IndexedDB, you could register a sync event when the user adds, deletes or updates a TODO item:
function addTodo(todo) {
  // addToIndexedDB is the app's own helper that persists the item locally
  return addToIndexedDB(todo).then(() => {
    // Wait for the scoped service worker registration to get a
    // service worker with an active state
    return navigator.serviceWorker.ready;
  }).then(reg => {
    return reg.sync.register('add-todo');
  }).then(() => {
    console.log('Sync registered!');
  }).catch(() => {
    console.log('Sync registration failed :(');
  });
}
You've registered a sync event of type add-todo, which you'll listen for in the service worker; when you get this event, you retrieve the data from IndexedDB and do a POST to your RESTful API:
self.addEventListener('sync', event => {
  if (event.tag == 'add-todo') {
    event.waitUntil(
      getTodo().then(todos => {
        // Post the messages to the server
        return fetch('/add', {
          method: 'POST',
          body: JSON.stringify(todos),
          headers: { 'Content-Type': 'application/json' }
        }).then(() => {
          // Success!
        });
      })
    );
  }
});
This is just an example of how you could achieve it using Background Sync. Note that you'll have to handle conflict resolution on the server.
You could use PouchDB on the client and Couchbase or CouchDB on the server. With PouchDB on the client, you can save data locally and set it to automatically sync/replicate whenever the user is online (see the sketch after the links below). When the database synchronizes and there are conflicting changes, CouchDB will detect this and flag the affected document with the special attribute "_conflicts". It determines which revision to use as the latest and saves the others as previous revisions of that record. It does not attempt to merge the conflicting revisions; it is up to you to dictate how the merging should be done in your application. It's not so different with Couchbase, either. See the links below for more on conflict resolution.
Conflict Management with CouchDB
Understanding CouchDB Conflict
Resolving Couchbase Conflict
Demystifying Conflict Resolution in Couchbase Mobile
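For completeness, here is a minimal, hedged sketch of PouchDB live replication; the remote CouchDB URL is an assumption:

var PouchDB = require('pouchdb'); // in the browser, a <script> tag also works

var local = new PouchDB('todos'); // local, in-browser database
var remote = new PouchDB('https://example.com/db/todos'); // assumed CouchDB endpoint

// Continuously replicate both ways; retry when the connection drops
local.sync(remote, { live: true, retry: true })
  .on('change', function (info) { console.log('Synced:', info); })
  .on('error', function (err) { console.error('Sync error:', err); });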
I've used PouchDB and Couchbase/CouchDB/IBM Cloudant, but I've done that through Hoodie. It has user authentication out of the box, handles conflict management, and more. Think of it as your backend. In your TODO application, Hoodie would be a great fit. I've written something on how to use Hoodie; see the links below:
How to build offline-smart application with Hoodie
Introduction to offline data storage and sync with PouchDB and Couchbase
At the moment I can think of two approaches, and choosing one depends on what storage options you are using on your backend.
If you are using an RDBMS to store all the data:
The problem with offline-first systems in this approach is the possibility of conflicts when posting new data or updating existing data.
As a first measure to avoid conflicts, you will have to generate unique IDs for all objects on your clients, in such a way that they remain unique when posted to the server and saved in the database. For this you can safely rely on UUIDs. A UUID guarantees uniqueness in a distributed system, and whatever your implementation language, you will have methods for generating UUIDs without any hassle.
Design your local database so that you can use UUIDs as the primary key. On the server end you can have both an auto-incremented, indexed integer primary key and a VARCHAR column to hold the UUIDs. The integer primary key uniquely identifies objects in that table, while the UUID uniquely identifies records across tables and databases.
So when posting an object to the server at sync time, you just check whether an object with that UUID is already present and take the appropriate action from there. When you are fetching objects from the server, send both the primary key of the object from your table and its UUID. That way, when you serialize the response into model objects or save them in the local database, you can tell the objects that have been synced from the ones that haven't: the objects that still need syncing will not have a server primary key in your local database, just a UUID.
There may be a case where your server malfunctions and refuses to save data while you are syncing. In this case you can keep an integer field on your objects that counts the number of times you have tried to sync them. If this number exceeds a certain value, say 3, you move on to sync the next object. What you do with the unsynced objects is up to the policy you set for them; as a solution you could discard them or keep them locally. A sketch of such a record follows below.
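A minimal sketch of such a client-side record, assuming crypto.randomUUID() (available in modern browsers and recent Node versions); all field names are illustrative:

function newLocalTodo(task) {
  return {
    uuid: crypto.randomUUID(), // unique across clients and the server
    task: task,
    finished: false,
    syncAttempts: 0, // skip the object once this exceeds your limit, e.g. 3
    serverId: null   // filled in once the server confirms the sync
  };
}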
If you are not using an RDBMS:
As an alternative approach, instead of syncing whole objects, you could sync the transactions that each client performs locally. Each client syncs just the transactions, and when fetching, you get the current state by replaying all the transactions from the bottom up. This is very similar to what Git does: it saves changes in your repository in the form of transactions, such as what has been added (or removed) and by whom, and the current state of the repository for each user is computed from those transactions. This approach will not result in conflicts, but as you can see, it's a little trickier to develop. A sketch follows below.
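A minimal sketch of the transaction-log idea, with illustrative operation names; the current state is derived by replaying the log:

var log = [
  { op: 'add',    uuid: 'a1', task: 'clean up my room' },
  { op: 'finish', uuid: 'a1' },
  { op: 'add',    uuid: 'b2', task: 'buy milk' }
];

function currentState(log) {
  var todos = new Map();
  log.forEach(function (entry) {
    if (entry.op === 'add') todos.set(entry.uuid, { task: entry.task, finished: false });
    if (entry.op === 'finish' && todos.has(entry.uuid)) todos.get(entry.uuid).finished = true;
    if (entry.op === 'remove') todos.delete(entry.uuid);
  });
  return Array.from(todos.values());
}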
I'm just trying to verify whether an Account exists with a particular email; however, I learned that Accounts.findUserByEmail() only works server-side.
It would appear the repeatedly suggested way is to define a Meteor method and do all the work in there. Unfortunately, I apparently have no idea what I'm doing, because I'm getting an error that no one else seems to get.
component.js:
Meteor.call('confirm', email);
methods.js:
Meteor.methods({
  'confirm': (email) => {
    if (Accounts.findUserByEmail(email)) {
      return;
    }
  }
});
All I get is this error:
Exception while simulating the effect of invoking 'confirm' TypeError: Accounts.findUserByEmail is not a function
Am I completely misunderstanding the dynamic of Meteor.methods + Meteor.call? Is it not actually server-side??
Currently using the Meteor package accounts-password@1.3.3.
Meteor simulates method calls on the front-end too, by running "stubs" of your methods. The idea is a better user experience: the UI updates immediately, before the server has responded. However, this also means that if you run server-only code in a Meteor method, you have to make sure that code only runs on the server:
Meteor.methods({
  'confirm': (email) => {
    if (Meteor.isServer && Accounts.findUserByEmail(email)) {
      return;
    }
  }
});
Alternatively, you can place the above method definition in a file that is only loaded on the server, like any file in the /server directory, or (recommended) in a file under /imports that is only imported by server code. Then you won't need the separate Meteor.isServer check.
If your client-side code includes a method definition, it is treated as a stub, which means it runs in a special mode that provides "optimistic UI", and its effects on data are undone once the actual server method returns its response to the client.
It can be worthwhile to implement different versions of (at least some of) the methods for the client and the server, or to avoid including some of them on the client altogether.
If you choose to use the same function on both the client and the server, there are Meteor.isServer, Meteor.isClient and this.isSimulation (the latter is specific to methods), which allow you to execute some blocks only on the client or the server.
Note that the code in your question does not do what you expect it to, and you do not check the method argument.
For this specific use case, you should probably only implement the method on the server (simply don't import its code in your client build):
Meteor.methods({
  isEmailInSystem(email) {
    check(email, String);
    return !!Accounts.findUserByEmail(email);
  }
});
You can read more about the method lifecycle in The Meteor Guide.
From the guide (gist, some details omitted):
1. Method simulation runs on the client. If we defined this Method in client and server code, as all Methods should be, a Method simulation is executed on the client that called it. The client enters a special mode where it tracks all changes made to client-side collections, so that they can be rolled back later. When this step is complete, the user of your app sees their UI update instantly with the new content of the client-side database, but the server hasn't received any data yet.
2. A method DDP message is sent to the server.
3. The Method runs on the server.
4. The return value is sent to the client.
5. Any DDP publications affected by the Method are updated.
6. The updated message is sent to the client; data is replaced with the server result; the Method callback fires.
After the relevant data updates have been sent to the correct client, the server sends back the last message in the Method life cycle - the DDP updated message with the relevant Method ID. The client rolls back any changes to client side data made in the Method simulation in step 1, and replaces them with the actual changes sent from the server in step 5.
Lastly, the callback passed to Meteor.call actually fires with the return value from step 4. It’s important that the callback waits until the client is up to date, so that your Method callback can assume that the client state reflects any changes done inside the Method.
I have a node server which connects to CloudMQTT and receives messages in app.js. My client web app runs on the same node server, and I want to display the messages received in app.js elsewhere, in a .ejs file. I'm struggling with how best to do this.
app.js
// Create an MQTT client
var mqtt = require('mqtt');

// Create a client connection to CloudMQTT for live data
var client = mqtt.connect('xxxxxxxxxxx', {
  username: 'xxxxx',
  password: 'xxxxxxx'
});

client.on('connect', function() { // When connected
  console.log("Connected to CloudMQTT");
  // Subscribe to the Motion topic
  client.subscribe('Motion', function() {
    // When a message arrives, do something with it
    client.on('message', function(topic, message, packet) {
      // ** Need to pass message out **
    });
  });
});
Basically you need a way for the client (browser code with EJS: HTML, CSS and JS) to receive live updates. There are basically two ways for the client to get data from the node service:
A websocket session instantiated by the client.
A polling approach.
What's the difference?
Under the hood, a websocket is a full-duplex communication mechanism. That means you can open a socket from the client (browser) to the node server and they can talk to each other both ways over a long-lived session. The pro is that updates are often instantaneous, without incurring the cost of another HTTP request as in the polling case. The con is that it uses a socket connection that may be long-lived, and any server's socket pool has a limited ability to deal with many sockets. There are ways to scale around this issue, but if it's a big concern for you, you may want to go with polling.
Polling is where you set up an endpoint on your server that the client JS code hits every now and then. That endpoint returns the updated information. The con is that you are now making a new request for every update, which may not be desirable if many updates are expected and the app needs to be updated in the timeliest manner possible (most of the time, polling is sufficient, though). The pro is that you do not have a live connection open on the server indefinitely.
Again, there are many more pros and cons, these are just the obvious ones. You decide how to implement it. When the client receives the data from either of these mechanisms, you may update the UI in any suitable manner.
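To make this concrete, here is a hedged sketch of the websocket option using socket.io (an assumption; plain ws or Server-Sent Events would also work). It assumes an Express app variable and the mqtt client from the question:

var http = require('http');
var socketIo = require('socket.io');

var server = http.createServer(app); // wrap the existing Express app
var io = socketIo(server);

// Where the question says "Need to pass message out":
client.on('message', function (topic, message) {
  // Broadcast every MQTT message to all connected browsers
  io.emit('mqtt-message', { topic: topic, payload: message.toString() });
});

server.listen(3000);

// In the browser (e.g. a <script> in your .ejs template):
//   var socket = io();
//   socket.on('mqtt-message', function (msg) { /* update the DOM */ });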
From the server end, you will need a way to persist the information coming from CloudMQTT. There are multiple ways to do this. If you do not care about memory consumption and are ok with potentially throwing away old data if a client does not ask for it for a while, then it may be ok to just store this in memory in a regular javascript object {}. If you do care about persisting the data between server restarts/crashes (probably best), then you can persist to something like Redis, Mongo, any of the SQL stores if your data is relational in nature, or even a regular JSON file on disk (see fs.writeFile).
Hope this helped give you a step in the right direction!
I have a calendar application, and it loads all of its event data using ajax and JSON results. The issue is that I have different views, and right now I have to re-call the server whenever I change views.
Is there any recommendation for how I can cache this data on the client side and check whether I have already loaded these events before firing off more ajax calls?
What is the best practice for this?
Like hvgotcodes said, an MVC framework would help; try backbone.js (http://documentcloud.github.com/backbone/), for instance.
Alternatively, you might want to consider using jStorage (http://www.jstorage.info/). Every time you need to make an AJAX call, check first whether the result is in your storage object, and only run the AJAX call if it isn't. On the other end, whenever an AJAX call finishes, store the results in the storage object. Make sure you have some kind of index (a CalendarEvent id) to reference when looking it up in the data store. You might want to add some kind of "expire time" to the data in your storage, too: a timestamp set after the AJAX call, so you re-request up front if the data is out of date. A sketch follows below.
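For instance, a hedged sketch of the cache-then-ajax pattern with jStorage; the key names and endpoint are illustrative:

function loadEvents(viewId, cb) {
  var cached = $.jStorage.get('events-' + viewId);
  if (cached) { cb(cached); return; }

  $.getJSON('/api/events', { view: viewId }, function (events) {
    // Cache for five minutes so stale data eventually gets re-requested
    $.jStorage.set('events-' + viewId, events, { TTL: 5 * 60 * 1000 });
    cb(events);
  });
}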
It's called MVC.
You need to construct a data model for your application and write some sort of Record objects; then you can determine their status. So your application would have some sort of CalendarEvent model, and when you load data from the server, you would instantiate instances of it.
So when changing views, you would first check to see if you had the model object for that view, and if you did, you wouldn't need to load it from the server (unless you want to check for changes).
Your scheme doesn't need to be that complicated, though. If you load events by id, you can do something like:
window.App = {};
window.App.Models = {};
and when you load a record you could put
window.App.Models[id] = instanceOfYourRecord;
and that way it's pretty fast to look for records. Or just use a framework (like SproutCore) that has a robust data layer.
I had similar issues on a recent project.
Conceptually, I have the "real" data model (DM) kept on the server, persisted to a database.
To make life sane, the client keeps its own local data model. Outside of the client DM, all the client code thinks it's pulling results locally.
When reading data (GET) from the client DM it:
checks the cache for existing results
invokes appropriate AJAX queries when cached data is not available, then caches the results.
When changing data (POST) via the client DM it:
invalidates the cache as appropriate
invokes appropriate AJAX queries
emits custom jQuery event indicating client DM changed
Note that this client DM also:
centralizes AJAX error handling
tracks AJAX calls still in-flight. (Lets us warn users when leaving pages with unsaved changes).
allows a drop-in, dummy replacement for unit testing, where all the calls hit local data and are completely synchronous.
Implementation notes:
I coded this as a JavaScript class called DataModel. As the design becomes more complex, it makes sense to further break down the responsibilities into separate objects.
jQuery's custom events let you easily implement the observer pattern. Client components update themselves from the client DM whenever it indicates data has changed.
JSON in your remote API helps simplify the code. My client DM stores the JSON results directly in its cache.
The client DM function arguments include callbacks, so everything can naturally be passed along via AJAX when needed: function listAll( contactId, cb ) { ... }
My project only allowed single user logins. If outside parties can change the server datamodel, some sort of has-data-changed probe should be fired regularly to ensure the client cache is still valid.
For my app, multiple client components would request the same data when receiving a client DM changed event. This resulted in multiple AJAX calls with the same info. I fixed this problem with a getJsonOnce() helper, which manages a queue of client component call-backs awaiting the same result.
Example function in my implementation:
listAll:
function( contactId, cb ) {
  // pull from cache
  if ( contactId in this.notesCache ) {
    cb( this.notesCache[contactId] );
    return;
  }
  // init queue if needed
  this.listAllQueue[contactId] = this.listAllQueue[contactId] || [];
  // pull from server
  var self = this;
  dataModelHelpers.getJsonOnce(
    '/teafile/api/notes.php',
    {'req': 'listAll', 'contact': contactId},
    function(resp) { self.notesCache[contactId] = resp; },
    this.listAllQueue[contactId],
    cb
  );
}
The getJsonOnce() helper makes sure that if multiple client components request the exact same (uncached) data, we only send out a single AJAX request and inform everyone once it comes in. A possible shape for it is sketched below.
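A possible shape for getJsonOnce(), consistent with the call site above (jQuery's $.getJSON is assumed; this is a sketch, not the original implementation):

var dataModelHelpers = {
  getJsonOnce: function (url, params, onData, queue, cb) {
    queue.push(cb);
    if (queue.length > 1) return; // a request for this data is already in flight

    $.getJSON(url, params, function (resp) {
      onData(resp); // let the caller cache the result first
      while (queue.length) { queue.shift()(resp); } // flush all waiting callbacks
    });
  }
};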
The notesCache is just a simple javascript object:
this.notesCache = {};