how to await subscriptions established? - javascript

I have the following js code:
stompClient.subscribe('/topic/clients', function (calResult) {
    updateClientsTable(JSON.parse(calResult.body));
});
$.get("/clients", null);
and the following server code (the last line above invokes it):
@GetMapping(value = {"/clients"})
@ResponseBody
public void loadClients() {
    brokerMessagingTemplate.convertAndSend("/topic/clients", clientService.getClientList());
}
Sometimes the front end misses the result of $.get("/clients", null);
As I understand the problem: at the moment the result arrives on the front end, the subscription has not yet been established.
If I put $.get("/clients", null); further down in the code, everything works fine.
Can you explain how to wait until the subscription is established?

I think it would make more sense not to mix REST requests with this messaging pattern.
Have you considered sending the "updateClients" command through SockJS to an "/apps/updateClients" channel that replies to the "/topic/clients" channel?
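A minimal sketch of that pattern, assuming stompClient is an already connected STOMP-over-SockJS client and the server maps the "/apps/updateClients" destination to a handler that broadcasts to "/topic/clients" (destination names taken from the suggestion above, everything else illustrative):
// Subscribe first, then request the client list over the same connection
// instead of a separate HTTP GET.
stompClient.subscribe('/topic/clients', function (message) {
    updateClientsTable(JSON.parse(message.body));
});
stompClient.send('/apps/updateClients', {}, JSON.stringify({}));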

As @light_303 already mentioned, mixing HTTP requests with the notification mechanism isn't a good idea. You can register the moment a client connects (the GET request on /clients), but you can't register when it disconnects.
You should think along one of the following lines. When a user subscribes to /topic/clients:
You individually send them a response with the full client list and then push only updates.
You individually send them the current server time or some kind of ID and then push only updates. The user includes that time/ID in a GET request to /clients and receives the full client list as of that moment. This option works well when you have incremental updates (i.e. adding new elements to the list) and otherwise not so well.
Check this question: Sending message to specific user on Spring Websocket.
It's actually ridiculous how much Spring can complicate things. I recommend looking at other frameworks for real-time web communication, such as Vert.x or Netty, and at the Go programming language. Use WebSockets or SockJS instead of STOMP. All of these technologies can give you a more flexible and performant solution in a more obvious way. Also, check out the Centrifugo project; it may be relevant to your task.

You can use the @SubscribeMapping annotation from spring-messaging.
If you have spring-messaging configured as described here and here, the server-side code could look like the following:
@Controller
public class MessagingController {

    @SubscribeMapping("/clients")
    public List<Client> loadClients() {
        return clientService.getClientList();
    }
}
This way you don't have to call $.get("/clients", null); at all, because the JS message handler receives the result of the loadClients() call right after the subscription happens. The JS code would look like:
stompClient.subscribe('/topic/clients', function (calResult) {
    updateClientsTable(JSON.parse(calResult.body));
});

Related

Making really simple app including front & backend skills (js, node.js, psql, react...)

I'm trying to make a simple todo app in order to understand how the frontend and backend are connected. I read some websites with tutorials on using and connecting a REST API, an Express server, and a database, but I still wasn't able to get the fake data from the database. Anyway, I wanted to check whether my understanding of how they are connected and talk to each other is correct or not. Could you give me some advice, please?
First of all, I'm planning to use either JavaScript & HTML or React for the frontend, Express for the server, and Postgres for the database. My plan is that a user can add and delete his or her tasks. I have already created a server in my index.js file and created a database using the psql command. Now if I type "" it takes me to a page saying "Hello" (I made this endpoint), and I'm failing to seed my data into the database. Here are my questions:
Once I'm able to seed my fake data into the database, how should I get the data from the database and send it to the frontend? I think in my index.js file I should create a new endpoint, something like "app.get("/api/todo", (req, res) => ...", and inside the callback function I should write something like "select * from [table name]". Also, from the front end, I should probably access certain endpoints using fetch. Is this correct?
Also, how can I store data which is sent from the frontend? For example, if I type my new todo into the <input> field and click the add <button>, what does the sequence of events look like? Adding an event listener to the button and connecting to the server, then creating a POST method on the server and inserting the data, kind of (?) <= sorry, this part is super unclear for me.
Displaying tasks on the frontend is also unclear for me. If I use an object like {task: clean up my room, finished: false (or 0?)} in the front end, it makes sense, but when I start using the database, I'm confused about how to display items that are not completed yet. In order to display each task, I won't use the GET method to get the data from the database, right?
Also, do I need to use Knex to solve this type of problem? (Or is it better to have Knex, and why?)
I think my problem is that I kind of know what the frontend, server, and database are for, but it's not clear how they are connected with each other...
I also drew some diagrams, so I hope they help you understand my vague questions...
how should I get the data from the database and send it to the frontend? I think in my index.js file, create a new endpoint, something like "app.get("/api/todo", (req, res) => ...", and inside the callback function, I should write something like "select * from [table name]".
Typically you use a controller -> service -> repository pattern (a rough sketch follows this list):
The controller is a thin layer; it's basically the callback method you refer to. It just takes parameters from the request and forwards the request to the service in the form of a method call (i.e. you expose some methods on the service and call those methods). It takes the response from the service layer and returns it to the client. If the service layer throws custom exceptions, you also handle them here and send an appropriate response to the client (error status code, custom message).
The service takes the request and forwards it to the repository. In this layer, you can perform any custom business logic (by delegating to other isolated services). Also, this layer takes care of throwing custom exceptions, e.g. when an item was not found in the database (throw new NotFoundException).
The repository layer connects to the database. This is where you put the custom DB logic (queries like you mention), e.g. when using a library like https://node-postgres.com/. You don't put any other logic here; the repo is just a connector to the DB.
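For illustration only, here is a minimal Express sketch of this layering, assuming a todos table and the node-postgres (pg) Pool; the names todoRepository, todoService and /api/todo are made up for the example:
const express = require('express');
const { Pool } = require('pg');

const pool = new Pool(); // reads connection settings from the PG* environment variables

// Repository: only talks to the database.
const todoRepository = {
    findAll: () => pool.query('SELECT * FROM todos').then(result => result.rows),
};

// Service: business logic and custom exceptions live here.
const todoService = {
    listTodos: () => todoRepository.findAll(),
};

// Controller: thin layer that maps the HTTP request to a service call.
const app = express();
app.get('/api/todo', async (req, res) => {
    try {
        res.json(await todoService.listTodos());
    } catch (err) {
        res.status(500).json({ error: 'Something went wrong' });
    }
});

app.listen(3000);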
Also, from the front end, I should probably access certain endpoints using fetch. Is this correct?
Yes.
Also, how can I store data which is sent from the frontend? For example, if I type my new todo into the <input> field and click the add <button>, what does the sequence of events look like? Adding an event listener to the button and connecting to the server, then creating a POST method on the server and inserting the data, kind of (?) <= sorry, this part is super unclear for me.
You have a few options:
Form submit
Ajax request: serialize the data in the form manually and send a POST request through Ajax. Since you're considering a client library like React, I suggest this approach (see the sketch below).
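As a rough illustration (the /api/todo endpoint, the element ids and the field names are assumptions for the example, not part of the question):
// Serialize the form data by hand and POST it with fetch.
document.querySelector('#add-button').addEventListener('click', async () => {
    const task = document.querySelector('#todo-input').value;
    const response = await fetch('/api/todo', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ task, finished: false }),
    });
    if (!response.ok) {
        console.error('Failed to save the todo');
    }
});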
Displaying tasks on the frontend is also unclear for me. If I use an object like {task: clean up my room, finished: false (or 0?)} in the front end, it makes sense, but when I start using the database, I'm confused about how to display items that are not completed yet. In order to display each task, I won't use the GET method to get the data from the database, right?
If you want to use REST, it typically implies that you're not using backend MVC / server rendering. As you mentioned React, you're opting for keeping client state and syncing with the server over REST.
What this means is that you keep all state in the frontend (in memory / localStorage) and just sync with the server. Typically what is applied is what's referred to as optimistic rendering: you manage state in the frontend as if the server didn't exist, and when the server fails (you see this in the Ajax response), you show an error in the UI and roll back the state.
Alternatively you can use spinners that wait until the server sync is complete. It makes for less impressive user-perceived performance, but it is just as valid technically.
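A rough sketch of the optimistic-rendering idea, assuming a render(todos) function and the /api/todo endpoint used above (both assumptions for the example):
let todos = [];

async function addTodoOptimistically(task) {
    const draft = { task, finished: false, id: 'tmp-' + Date.now() };
    todos.push(draft); // update the UI state immediately, as if the server didn't exist
    render(todos);
    try {
        const response = await fetch('/api/todo', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ task, finished: false }),
        });
        if (!response.ok) throw new Error('Server rejected the todo');
        const saved = await response.json();
        todos = todos.map(t => (t.id === draft.id ? saved : t)); // swap in the server-assigned id
    } catch (err) {
        todos = todos.filter(t => t.id !== draft.id); // roll back and surface an error in the UI
    }
    render(todos);
}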
Also, do I need to use knex to solve this type of problem? (or better to have knex and why?) I think my problem is I kind of know what the frontend, server, and database are for, but it's not clear how they are connected with each other...
It doesn't really matter what you use. Personally I would go with this stack:
Node Express (REST), but it could be Koa, Restify...
React / Redux client side
For the backend repo layer you can use Knex if you want to; I have used node-postgres, which worked well for me.
Additional info:
I would encourage you to take a look at the following, if you're doubtful how to write the REST endpoints: https://www.youtube.com/watch?v=PgrP6r-cFUQ
After I was able to seed my fake data into the database, how should I get the data from the database and send it to the frontend? I think in my index.js file, create a new endpoint, something like "app.get("/api/todo", (req, res) => ...", and inside the callback function, I should write something like "select * from [table name]". Also, from the front end, I should probably access certain endpoints using fetch. Is this correct?
You are right here. You need to create an endpoint on your server that will be responsible for getting data from the database. This same endpoint has to be consumed by your frontend application, in case you are planning to use ReactJS. As soon as your app loads, you need to get the current userID and make a fetch call to the endpoint created above to fetch the list of todos (or any data, for that matter) pertaining to the concerned user.
Also, how can I store data which is sent from the frontend? For example, if I type my new todo into the <input> field and click the add <button>, what does the sequence of events look like? Adding an event listener to the button and connecting to the server, then creating a POST method on the server and inserting the data, kind of (?) <= sorry, this part is super unclear for me.
Okay. So far, you have connected your frontend to your backend, started the application, the user is present, and you have fetched the list of todos, if any are available for that particular user.
Now, coming to adding a new todo, the most minimal flow would look something like this (a sketch of the corresponding endpoints follows the list):
The user types the data in a form and submits the form.
A form submit handler takes the form data.
The form data is validated.
The POST endpoint is called with the form data as the payload.
This POST endpoint is responsible for saving the form data to the DB.
If an existing todo is being modified, this should be handled with a PATCH request (updating its state, e.g. whether the task is completed or not).
The next, and possibly the last, thing is deleting a task; you can have a DELETE endpoint to remove the todo item from the list of todos.
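A minimal Express sketch of those endpoints, purely for illustration; the in-memory todos array stands in for the database and the route shapes are assumptions:
const express = require('express');
const app = express();
app.use(express.json());

let todos = [];
let nextId = 1;

// Save a new todo sent by the form submit handler.
app.post('/api/todo', (req, res) => {
    const todo = { id: String(nextId++), task: req.body.task, finished: false };
    todos.push(todo);
    res.status(201).json(todo);
});

// Update an existing todo, e.g. mark the task as completed.
app.patch('/api/todo/:id', (req, res) => {
    const todo = todos.find(t => t.id === req.params.id);
    if (!todo) return res.status(404).json({ error: 'Not found' });
    Object.assign(todo, req.body);
    res.json(todo);
});

// Remove a todo from the list of todos.
app.delete('/api/todo/:id', (req, res) => {
    todos = todos.filter(t => t.id !== req.params.id);
    res.status(204).end();
});

app.listen(3000);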
Displaying tasks on the frontend is also unclear for me. If I use an object like {task: clean up my room, finished: false (or 0?)} in the front end, it makes sense, but when I start using the database, I'm confused about how to display items that are not completed yet. In order to display each task, I won't use the GET method to get the data from the database, right?
Okay, so as soon as you load the frontend for the first time, you make a GET call to the server and fetch the list of todos. Store this somewhere in the application, probably the Redux store or just the application's local state.
Going by what you have suggested already,
{task: 'some task name', finished: false, id: '123'}
Now, any time there has to be any kind of interaction with a TODO item, either PATCH or DELETE, you would use the id of that TODO and call the respective endpoint, as in the sketch below.
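A hedged front-end sketch of that flow, reusing the /api/todo routes assumed earlier:
let todos = [];

// Fetch the list once when the app loads and keep it in local state.
async function loadTodos() {
    const response = await fetch('/api/todo');
    todos = await response.json(); // e.g. [{ task: 'some task name', finished: false, id: '123' }]
}

// Use the id of a specific todo for later interactions such as PATCH.
async function toggleFinished(id) {
    const todo = todos.find(t => t.id === id);
    await fetch('/api/todo/' + id, {
        method: 'PATCH',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ finished: !todo.finished }),
    });
}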
Also, do I need to use knex to solve this type of problem? (or better to have knex and why?) I think my problem is I kind of know what the frontend, server, and database are for, but it's not clear how they are connected with each other...
In a nutshell, or in the most minimal sense, think of the frontend as the presentation layer and the backend and DB as the application layer.
The overall game is about sending some kind of request and receiving some response for each request sent. The frontend is what enables any end user to create these requests; the backend (server & database) is where these requests are processed and a response is sent back to the presentation layer so the end user can be notified.
These explanations are kept very minimal to make sure you get the gist of it, since this question almost spans the entire scope of web development. I would suggest you read a few articles about both of these layers and how they connect with each other.
You should also spend some time understanding what a RESTful API is. That should be a great help.

Display Kafka messages on web page

I have a Java Spring application with a Tomcat server that listens on a Kafka topic. I want to display all messages in real time on a web page; that is, when a Kafka message arrives in the backend, I want to see it on the web page. I don't know a good approach for pushing Kafka messages directly to the front end and displaying them on the web page. Could someone help me with a solution and some examples? Thanks!
I have implemented a system like this in Java for my last employer, albeit not with Spring/Tomcat. It consumed messages from Kafka and served them on a WebSocket to be displayed in the browser. The approach I followed was to use akka-stream-kafka and akka-http for WebSocket support. The benefit is that both are based on akka-streams, which makes them an easy fit for streaming data.
While you can embed akka-http in your Spring app running inside Tomcat, it may not feel like the most natural choice any more, as the Spring framework already has its own support for both Kafka and WebSockets. However, if you're not familiar with either, then jumping on the Akka approach may be easiest, and the core logic goes along these lines (I can't share the code from work, so I have just put this together from the examples in the docs; not tested):
public Route createRoute(ActorSystem system) {
    return path("ws", () -> {
        ConsumerSettings<byte[], String> consumerSettings =
            ConsumerSettings.create(system, new ByteArrayDeserializer(), new StringDeserializer())
                .withBootstrapServers("localhost:9092")
                // A random group id so that each client gets all messages. To be able to resume
                // from where a client left off after a disconnect, you can generate it on the
                // client side instead and pass it in the request.
                .withGroupId(UUID.randomUUID().toString())
                .withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return handleWebSocketMessages(
            Flow.fromSinkAndSourceCoupled(
                Sink.ignore(),
                Consumer.committableSource(consumerSettings, Subscriptions.topics("topic1"))
                    .map(msg -> TextMessage.create(msg.record().value()))
            )
        );
    });
}
To expose this route you can follow the minimalistic example, the only difference being that the route you define needs the ActorSystem:
final Http http = Http.get(system);
final ActorMaterializer materializer = ActorMaterializer.create(system);
final Flow<HttpRequest, HttpResponse, NotUsed> routeFlow = createRoute(system).flow(system, materializer);
final CompletionStage<ServerBinding> binding =
    http.bindAndHandle(routeFlow, ConnectHttp.toHost("localhost", 8080), materializer);
Once you have your messages published to the WebSocket, the front-end code will of course depend on your UI framework of choice; the simplest code to consume ws messages from JavaScript is:
this.connection = new WebSocket('ws://url-to-your-ws-endpoint');
this.connection.onmessage = evt => {
    // display the message
};
To easily display the message in the UI, you want the format to be something convenient, like JSON. If your Kafka messages are not JSON already, that's where the deserializers in the first snippet come in: you can convert the value to a convenient JSON string in the deserializer, or do it later in the .map() called on the Source object.
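For instance, a small front-end sketch (assuming the server publishes JSON strings on the websocket and the page has a #messages list; both are assumptions for illustration):
const connection = new WebSocket('ws://localhost:8080/ws');
connection.onmessage = evt => {
    const message = JSON.parse(evt.data);      // parse the JSON payload
    const item = document.createElement('li');
    item.textContent = JSON.stringify(message, null, 2);
    document.querySelector('#messages').appendChild(item); // append it to the list
};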
Alternatively, if polling is an option, you can also consider using the off-the-shelf Kafka REST Proxy; then you only need to build the front end.

How to subscribe via DDP connections to other Meteor servers on the server side?

I'd like to synchronize data between two Meteor apps. Therefore I have published a collection with the data in question on both apps (which obviously run the same Meteor version 0.8.1.2 with the exact same packages).
When I run
var testConnection = DDP.connect('http://10.0.10.20:3003/');
var newCollection = new Meteor.Collection('remoteData', testConnection);
testConnection.subscribe('remoteData');
console.log('Data list starts here:');
newCollection.find().forEach(function(data){console.log(data)});
on any client, I do get a list of all data as expected. On the server side, however, nothing happens, so newCollection stays empty (I also know from debugging that the server does actually execute testConnection.subscribe('remoteData') and that the other server executes everything within its corresponding publish function, just like it does for clients).
I tried it this way because the poster here https://stackoverflow.com/a/18360441 mentioned that something like this works on the client and the server. Looking in the docs for subscribe (http://docs.meteor.com/#meteor_subscribe), it says it only works on the client, which would explain why nothing happens on my server, but that would be a bit strange, as DDP.connect (http://docs.meteor.com/#ddp_connect) seems to be meant for both client and server and supports subscribe.
So am I missing something here? And what would be the best way to get subscribe-like functionality between two servers if subscribe really does not work in this scenario?
I know I can work with custom Meteor.methods, but this seems a bit like a crutch compared to how nicely it would work with subscribe, so I would be very interested in any better solution...
As user728291 pointed out, the problem was that in this case the server isn't waiting for this.ready() in the publish function on the other side, so when newCollection.find() is called on the server, newCollection is still empty (but will receive data shortly after). It seems that on the client, newCollection.find() waits for this.ready() of the server's publish function (though I'm absolutely not sure about this; maybe the reason it works on the client is entirely different), and that is why it isn't empty at that point on the client.
In any case, you are on the safe side when you always call find() in the callback of subscribe, which interprets any function argument as an onReady callback (http://docs.meteor.com/#meteor_subscribe).
So what is guaranteed to work on both the server and the client is
var testConnection = DDP.connect('http://10.0.10.20:3003/');
var newCollection = new Meteor.Collection('remoteData', testConnection);
testConnection.subscribe('remoteData', function () {
    console.log('Data list starts here:');
    newCollection.find().forEach(function (data) { console.log(data); });
});

JavaScript message handler for NServiceBus

I'm trying to create a JavaScript event subscriber for NServiceBus, and I would like to know whether my thoughts are valid and whether there are any common pitfalls in this design.
I'm proposing the following components:
ASP.NET MVC BusController (AsyncController)
receives subscriptions from the JavaScript clients and returns some sort of sessionId for the client to use in further communication.
has an async ActionMethod Receive, which returns a JSON-serialized EventMessage.
has a generic message handler, which filters and queues up events for the clients that subscribed to them.
JavaScript client
can subscribe to one or more events using the Subscribe action method of the BusController
can receive events by long-polling the Receive method of the BusController with the received sessionId (roughly as sketched below).
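A rough, purely illustrative sketch of that long-polling client; the /Bus/Subscribe and /Bus/Receive URLs and the payload shapes are assumptions, not part of the design above:
async function startSubscriber(eventTypes) {
    // Subscribe and obtain a sessionId for further communication.
    const response = await fetch('/Bus/Subscribe', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ events: eventTypes }),
    });
    const { sessionId } = await response.json();

    // Long-poll Receive; each call returns queued events or times out empty.
    while (true) {
        const poll = await fetch('/Bus/Receive?sessionId=' + encodeURIComponent(sessionId));
        if (poll.ok) {
            const events = await poll.json();
            events.forEach(e => console.log('event received', e));
        } else {
            await new Promise(resolve => setTimeout(resolve, 1000)); // back off briefly on errors
        }
    }
}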
There are a few problems:
How do I detect when a client disconnects?
I've thought about a simple timeout system, which tells the client to re-initiate the connection with the Receive action method.
I'm worried about the performance of a generic message handler in the BusController handling all messages in my system. Has anyone else had experience with this?
You can always try something that works right out of the box: http://pservicebus.codeplex.com/
It comes with a JavaScript API that allows you to do pub-sub just like you would in .NET code.
It is already coded to use HTTP streaming/Comet as needed and to switch to long polling when the browser does not support it.
Here is a sample of using the JavaScript API for pub-sub: http://pservicebus.codeplex.com/SourceControl/changeset/view/7169bd78a707#pServiceBus%201.0.2%2fSamples%2fJS%20API%20WebChat%2fScripts%2fchat.js

Monitoring Mongo for changes with Node.js

I'm using Node.js for some project work, and I would like to monitor my Mongo database (collection) for changes, basically firing an event if something gets added.
Does anyone know if this is possible? I'm using the node-mongodb-native drivers.
If it's not, I'd also like any available pointers on pushing data from the server (run with Node) to the client browser.
The question is whether all data is added to your database through your Node.js app. If so, you can use the EventEmitter class of Node.js to trigger an event (http://nodejs.org/api.html#eventemitter-14).
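A minimal sketch of that idea, assuming every insert goes through your own app (collection stands for a node-mongodb-native collection handle; the names are illustrative):
const { EventEmitter } = require('events');
const dbEvents = new EventEmitter();

// Anyone interested in new documents listens here.
dbEvents.on('inserted', doc => {
    console.log('New document added:', doc);
});

// Wrap your inserts so they emit an event after a successful write.
function insertAndNotify(collection, doc, callback) {
    collection.insert(doc, (err, result) => {
        if (!err) dbEvents.emit('inserted', doc);
        callback(err, result);
    });
}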
If the database is populated by some other app, things get difficult. In this case you would need something like a database trigger, which is AFAIK not yet available in MongoDB.
Pushing events to the client (aka Comet) will be possible once the HTML5 WebSockets API makes its way into all major browsers.
In the meantime, you can only try to emulate this behaviour using techniques like (long-term) AJAX polling, forever frame, etc., but each of them has its weaknesses.
I would turn on replication in your MongoDB. There is a replication database that contains a list of changes, similar to the MySQL replication log. You can monitor that.
-daniel
Almost 3 years since the last answer. I would suggest looking at:
Pub/sub for Node.js and MongoDB: https://github.com/scttnlsn/mubsub
npm install mubsub should get you there.
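A short usage sketch based on the project's README at the time (check the repo for the current API; the channel and event names here are just examples):
var mubsub = require('mubsub');

var client = mubsub('mongodb://localhost:27017/example');
var channel = client.channel('test');

// Fire an event whenever something is published on this channel.
channel.subscribe('documentAdded', function (message) {
    console.log('documentAdded', message);
});

// Wherever you insert into Mongo, also publish a message.
channel.publish('documentAdded', { key1: 'val1', key2: 'val2' });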
collection.insert({ "key1": val1, "key2": "val2" }, function (err, info) {
    if (err) {
        // handle this
    } else if (info) {
        // Call a fire-and-forget function here that can write to logs, send to
        // SQS, spawn a child process or do some other in-process thing. This
        // could even be a callback, but I think fire-and-forget will do in most
        // circumstances: I presume you don't need to hold up the response, so
        // you can return whatever you need to the client.
        fireAndForgetFunction(info);

        // And in part-answer to your other question, you can return JSON like this:
        db.close();
        var myJSON = [];
        sys.puts("Cool info stored and did a non-blocking fire and forget for some other mongo monitoring stuff/process; sending control back to the browser");
        sys.puts(sys.inspect(info)); // remove later
        myJSON.push({ "status": "success" });
        myJSON.push({ "key1": val1, "key2": val2 }); // or whatever you want to send
        res.writeHead(200, { "Content-Type": "text/plain" });
        res.write(JSON.stringify(myJSON));
        res.end();
    }
});
