I'm about to implement the saga pattern in my NestJS application to manage transactions.
I have already read the section on sagas in the NestJS documentation.
But I need some examples of how to run compensating transactions when any step of the saga fails. Should I handle it with try/catch, or some other mechanism? I'm not really sure, and there is no example of handling compensating transactions in the documentation.
It would be very helpful if anyone could show me the best practice for doing this.
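For reference, this is the kind of compensation logic I have in mind, written as a framework-agnostic sketch (the step/compensate names are illustrative, not NestJS APIs): each step registers its undo action only after it succeeds, and a failure replays the recorded undos in reverse order inside a catch block.

```javascript
// Framework-agnostic sketch of a saga with compensating transactions.
// Each step pairs an action with a compensation; on failure, the
// compensations recorded so far run in reverse (LIFO) order.
async function runSaga(steps) {
  const compensations = [];
  try {
    for (const step of steps) {
      await step.action();
      // Record the undo only after the step actually succeeded.
      compensations.push(step.compensate);
    }
  } catch (err) {
    // Roll back completed steps in reverse order.
    for (const compensate of compensations.reverse()) {
      try {
        await compensate();
      } catch (compensationError) {
        // A failed compensation usually needs logging, alerting, or a
        // retry queue; swallowing it here keeps the rollback going.
        console.error('compensation failed', compensationError);
      }
    }
    throw err; // surface the original failure to the caller
  }
}

// Hypothetical usage:
// await runSaga([
//   { action: reserveStock, compensate: releaseStock },
//   { action: chargeCard,   compensate: refundCard },
// ]);
```

Is wrapping the loop in a single try/catch like this considered acceptable, or is there a more idiomatic NestJS way?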
Related
In my team we use NestJS for our backend. We want to use a microservices architecture, and I've seen that NestJS has built-in microservices support. After reading the documentation I have a few questions about this feature:
Should I create microservices in the same project or in separate projects?
Why even use createMicroservice over NestFactory.create? Using createMicroservice also forces me to use @MessagePattern and an emitter to send messages, which is okay if my microservice receives messages from a queue (like RabbitMQ), but I can't create a REST API in my microservice. That makes it harder (or impossible) to call a NestJS microservice from a non-NestJS service, since it relies on the NestJS message-pattern protocol.
Thanks in advance.
It is not necessary, but I personally create separate projects within a workspace.
Calling createMicroservice is compulsory because Nest needs to know what kind of controller you have and what behavior you expect. In addition, you should avoid implementing a REST API in every service; it should be implemented only in a gateway service, which is responsible for the API layer and acts as the bridge between your microservices and the outside world.
For a good sense of how to implement microservices with NestJS, I highly recommend taking a look at this repo:
https://github.com/Denrox/nestjs-microservices-example
I'm trying to find the best way to fetch values from web services in React with Redux.
I have found approaches using useEffect, fetch, redux-thunk, and redux-saga.
But which one is best to use?
This is a bad, bad question. Okay, I can't even explain in one line why it's bad, so I'll start with this: you're talking about completely different things.
I guess you're asking about how to make network requests in React. Here's a brief description of the things you're talking about and why they are used:
useEffect: a React hook that lets you run an effect (basically a function) after renders, either on every render or only when a value in its dependency array changes.
fetch: a web API for making AJAX (network) requests, based on promises. Previously we had XHR for this, which is event-based. XHR is still used because fetch does not expose upload progress events the way XHR does. 'Should I use fetch or XHR for making requests?' Now that would be a good question.
redux-thunk and redux-saga: you would use these with redux. In redux, as you probably know, dispatching an action is synchronous. So if you want to do some asynchronous work and dispatch an action object after it finishes, look into redux-thunk or redux-saga.
So the question shouldn't be 'which of these should I use for fetching things off the web', because they are not interchangeable tools for that purpose.
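To make the redux-thunk part concrete, here is a minimal sketch. The tiny createStore below stands in for redux (illustrative only, not the real API surface), but the thunk middleware itself is essentially the whole of what redux-thunk does:

```javascript
// A minimal store, standing in for redux (illustrative only).
function createStore(reducer, middleware) {
  let state = reducer(undefined, { type: '@@init' });
  const store = {
    getState: () => state,
    dispatch: (action) => { state = reducer(state, action); },
  };
  if (middleware) store.dispatch = middleware(store)(store.dispatch);
  return store;
}

// Essentially all of redux-thunk: if the dispatched "action" is a
// function, call it with dispatch so it can do async work first.
const thunk = (store) => (next) => (action) =>
  typeof action === 'function' ? action(store.dispatch, store.getState) : next(action);

const reducer = (state = { users: [], loading: false }, action) => {
  switch (action.type) {
    case 'FETCH_START': return { ...state, loading: true };
    case 'FETCH_SUCCESS': return { users: action.payload, loading: false };
    default: return state;
  }
};

// A thunk action creator; Promise.resolve stands in for a real fetch() call.
const fetchUsers = () => async (dispatch) => {
  dispatch({ type: 'FETCH_START' });
  const users = await Promise.resolve(['ada', 'grace']);
  dispatch({ type: 'FETCH_SUCCESS', payload: users });
};

const store = createStore(reducer, thunk);
store.dispatch(fetchUsers()); // async work runs, then plain action objects
```

With the real libraries, the only difference for this purpose is that createStore and the middleware wiring come from redux via applyMiddleware.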
There are many different ways of doing that.
Basically, useEffect is a hook; it cannot fetch any data by itself.
You should look at your app and pick the option that meets its needs, so there is no single "best". If your application is large-scale, I suggest react-query, and then redux-saga (if the API result is needed in several places).
Background
I am building a typical publish/subscribe application in which a publisher sends messages to a consumer.
The publisher and the consumer are on different machines and the connection between them can break occasionally.
Objective
The goal here is to make sure that no matter what happens to the connection, or to the machines themselves, a message sent by a publisher is always received by the consumer.
Ordering of messages is not a must.
Problem
According to my research, RabbitMQ is the right choice for this scenario:
Redis Vs RabbitMQ as a data broker/messaging system in between Logstash and elasticsearch
However, although RabbitMQ has a tutorial about publishers and subscribers, that tutorial does not introduce persistent queues, nor does it mention confirms, which I believe are key to making sure messages are delivered.
On the other hand, Redis is also capable of doing this:
http://abhinavsingh.com/customizing-redis-pubsub-for-message-persistence-part-2/
but I couldn't find any official tutorials or examples, and my current understanding leads me to believe that persistent queues and message confirms must be implemented by us, since Redis is mainly an in-memory datastore rather than a message broker like RabbitMQ.
Questions
For this use case, which solution would be the easiest to implement? (Redis solution or RabbitMQ solution?)
Please provide a link to an example with what you think would be best!
Background
I originally wanted publish and subscribe with message and queue persistence.
In theory, this does not exactly fit publish/subscribe:
that pattern doesn't care whether messages are received. The publisher simply fans out messages and, if there are any subscribers listening, good; otherwise it doesn't care.
Indeed, looking at my needs, what I actually want is closer to a Work Queue pattern, or even an RPC pattern.
Analysis
People say both should be easy, but that really is subjective.
RabbitMQ has better official documentation overall, with clear examples in most languages, while information about Redis is mainly found in third-party blogs and sparse GitHub repos, which makes it considerably harder to find.
As for the examples, RabbitMQ has two examples that clearly answer my questions:
Work queues
RPC example
By mixing the two I was able to have a publisher send reliable messages to several consumers, even if one of them fails. Messages are neither lost nor forgotten.
Downsides of RabbitMQ:
The greatest problem with this approach is that if a consumer/worker crashes, you need to define the recovery logic yourself to make sure tasks are not lost. This happens because, following the RPC pattern with the durable queues from Work Queues, the server keeps redelivering messages to the worker once it comes back up. But the worker doesn't know whether it already processed a given message, so it may handle the same one more than once. To fix this, each message needs an ID that you persist to disk (in case of failure), or the requests must be idempotent.
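A sketch of that idempotency guard, with an in-memory Set standing in for the durable store (disk, Redis, a DB table) you would use in production; the function names are illustrative:

```javascript
// Wraps a message handler so redelivered messages with an id that was
// already processed are acknowledged but not re-processed.
function makeIdempotentConsumer(handle) {
  const processed = new Set(); // in production: a durable store
  return async function consume(message) {
    if (processed.has(message.id)) {
      return { status: 'duplicate' }; // already done: just ack again
    }
    const result = await handle(message.payload);
    processed.add(message.id); // record only after success
    return { status: 'processed', result };
  };
}
```

Delivering the same message twice then performs the work exactly once, which is what makes broker redelivery safe.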
Another issue is that if the connection is lost, the clients blow up with errors because they cannot connect. This is also something you must prepare for in advance.
As for Redis, there is a good example of durable queues in this blog:
https://danielkokott.wordpress.com/2015/02/14/redis-reliable-queue-pattern/
It follows the official recommendation. You can check the GitHub repo for more info.
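The reliable-queue pattern from that blog boils down to the following. This is an in-memory sketch where the array operations stand in for the Redis commands (LPUSH, RPOPLPUSH, LREM):

```javascript
// In-memory sketch of the Redis "reliable queue" pattern: a consumer
// atomically moves a task from the pending list to a processing list
// (RPOPLPUSH in Redis) and removes it from processing (LREM) only after
// the work succeeds. Tasks of crashed workers stay visible in
// `processing`, where a reaper can requeue them.
class ReliableQueue {
  constructor() {
    this.pending = [];
    this.processing = [];
  }
  push(task) { this.pending.unshift(task); }           // LPUSH
  reserve() {                                          // RPOPLPUSH pending processing
    const task = this.pending.pop();
    if (task !== undefined) this.processing.push(task);
    return task;
  }
  ack(task) {                                          // LREM processing
    this.processing = this.processing.filter((t) => t !== task);
  }
  requeueStalled() {                                   // reaper for crashed workers
    while (this.processing.length) this.pending.unshift(this.processing.pop());
  }
}
```

The key point is that a task is never only "in flight": it always sits in exactly one of the two lists, so a crash between reserve and ack cannot lose it.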
Downsides of Redis:
As with RabbitMQ, you need to handle worker crashes yourself, otherwise tasks in progress will be lost.
You also have to poll: each consumer needs to ask if there is anything new every X seconds.
This makes it, in my opinion, a worse RabbitMQ.
Conclusion
I ended up going with RabbitMQ, for the following reasons:
More robust official online documentation, with examples.
No need for consumers to poll the producer.
Error handling is just as simple as in Redis.
With this in mind, for this specific case, I am confident in saying that Redis is the worse choice for this scenario.
Hope it helps.
Regarding implementation, they should both be easy: they both have libraries in various languages; check here for Redis and here for RabbitMQ. I'll just be honest here: I don't use JavaScript, so I don't know how well the respective libraries are implemented or supported.
Regarding what you didn't find in the tutorial (or maybe missed in the second one, which does say a few words about durable queues, persistent messages, and acknowledging messages), there are some nicely explained resources:
about persistence
about confirms (same link as you've provided in the question, just listing it here for clarity)
about reliability
Publisher confirms are indeed not in the tutorial, but there is an example on GitHub in the amqp.node repo.
With RabbitMQ, a message (in most cases) travels like this:
publisher -> exchange -> queue -> consumer, and at each of these stops some sort of persistence can be achieved. Also, if you get into clusters and queue mirroring, you'll achieve even greater reliability (and availability, of course).
I think they are both easy to use, as there are many libraries developed for both.
There are a few to name, such as disque, bull, kue, amqplib, etc.
The documentation for them is pretty good. You can simply copy and paste and have something running in a few minutes.
I use Seneca, and seneca-amqp-transport is a pretty good example:
https://github.com/senecajs/seneca-amqp-transport
I am trying to understand how to perform CRUD operations using AngularJS, Node.js and MySQL. I need to understand the approach for this. Please explain with a basic example.
CRUD is the acronym for the set of operations Create, Read, Update and Delete.
CRUD is the basic set of operations required to use and manipulate a collection, or resource if you will.
In this setting, "performing" CRUD operations would relate to your application in the following way:
AngularJS uses $http to run HTTP requests asynchronously against your RESTful API.
Node.js implements the routes that make up the API, hopefully in accordance with the best practices widely used in existing APIs (an in-depth blog can be found here). Based on the route/API method triggered, Node.js will also send queries to MySQL to persist the necessary changes.
MySQL's job is to persist the data you are working with, using SQL queries to (C)reate, (R)ead, (U)pdate and (D)elete data (the CRUD operations). MySQL of course has a lot of other tools as well, but with respect to CRUD operations, these are what you need.
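To make the three layers concrete, here is a sketch of how the REST routes the Node.js API exposes line up with parameterized SQL statements against MySQL (the route and table names are illustrative; a real app would use something like express plus a MySQL driver):

```javascript
// Each REST route maps to one parameterized SQL statement. The '?'
// placeholders are filled in by the MySQL driver, which also prevents
// SQL injection. Routes and table name are illustrative.
const routes = {
  'POST /users':       'INSERT INTO users (name) VALUES (?)',    // Create
  'GET /users/:id':    'SELECT * FROM users WHERE id = ?',       // Read
  'PUT /users/:id':    'UPDATE users SET name = ? WHERE id = ?', // Update
  'DELETE /users/:id': 'DELETE FROM users WHERE id = ?',         // Delete
};

// A Node.js route handler would look up the statement and execute it;
// on the AngularJS side the matching call is e.g.
// $http.post('/users', { name: 'Ada' }).
function sqlFor(method, path) {
  return routes[`${method} ${path}`] || null;
}
```

The point of the sketch is only the mapping: one HTTP verb per CRUD operation, one SQL statement per route.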
Now this is just the practical idea of how to implement a RESTful API with CRUD operations, persisting it with MySQL and consuming it with Angular. If you really are serious about learning more about this, you should:
Google better (really, it's easy to find information about this subject, as people have mentioned). The terms REST, RESTful, API, and CRUD are all good keywords to google; then append whatever keywords you are more interested in, "best practice REST API" for example.
Read blogposts, follow engineers on social media/blogs, read books, watch youtube etc. This information is everywhere.
One important thing that might not be obvious when starting out: always try to find standards or best practices for what you want to do. That means someone has already done the hard work of trying and failing, and is basically serving you the best solution on a silver platter.
I'm trying to implement an admin site for a commenting system. I have a REST API with JSON. I don't want to build an isomorphic application; I just want it to feel like a single-page app. I see there are already some solutions:
1) Create an ajax factory and send requests to the API methods with XmlHttpRequest, dispatching actions and handling the responses by hand.
2) Redux-api or redux-rest.
3) The method used in the redux real-world example.
For my job I need a stable solution. I'm leaning towards redux-api, but I don't know what disadvantages each variant has.
Has anyone had the same problem?
There's no definitive answer to this; however, I am using a variant of redux-api-middleware, which allows me to keep my action creators stateless and free of side effects.
redux-api and redux-rest both look valid, if somewhat "magic", given the amount of configuration/convention they enforce on your app.