I am running my backend services as a monolith built with Node.js and Express. The application is getting big, and I need to migrate from the monolith to a microservice architecture.
I decided to use Redis pub/sub for messaging between the API Gateway and the microservices. As things stand, I need to create a topic for every single API available under each microservice and then listen for the event emitted by the API Gateway, so that the microservice can return the appropriate data to the Gateway, which in turn returns it to the endpoint.
For instance, if an endpoint calls the API to fetch the product list, there must be a listener inside the product microservice listening for the list event, generating the list, and returning it via the gateway to the endpoint.
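Roughly, the wiring I have in mind looks like the sketch below (this is only a sketch: the channel names and the loadProductsFromDb stand-in are placeholders, not real code from my project):

    // Product microservice side (node-redis v4 style).
    import { createClient } from 'redis';

    const subscriber = createClient({ url: 'redis://localhost:6379' });
    const publisher = createClient({ url: 'redis://localhost:6379' });
    await subscriber.connect();
    await publisher.connect();

    // Stand-in for whatever data access the product service really uses.
    const loadProductsFromDb = async () => [{ id: 1, name: 'Example product' }];

    // One listener per API endpoint, e.g. the "list products" API.
    await subscriber.subscribe('product.list.request', async (rawMessage) => {
      const { replyChannel } = JSON.parse(rawMessage); // the gateway says where to answer
      const products = await loadProductsFromDb();
      await publisher.publish(replyChannel, JSON.stringify(products));
    });

    // API Gateway side: publish a request and wait on the reply channel.
    await publisher.publish(
      'product.list.request',
      JSON.stringify({ replyChannel: 'gateway.reply.product.list' })
    );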
The question is: is there a way to avoid creating an event for every single API inside my microservices, or is there no way around it and I need to create an event listener for every API I have?
There are some patterns for refactoring a monolithic application into microservices. For example, you can start with an anti-corruption layer and then separate your bounded contexts one by one into new microservices. Then you can use a gateway to handle your client requests and authentication. To implement a fast, dedicated gateway you can use Ocelot. But don't forget that to implement a microservice architecture it's important to understand Domain-Driven Design (DDD), other patterns such as CQRS, and message brokers such as RabbitMQ.
To start, you can read and follow the links below:
Refactoring a monolith to microservices
Monoliths to microservices using domain-driven design
In my team we use NestJS for our backend. We want to use a microservices architecture, and I've seen that NestJS has built-in microservices support. After reading the documentation I have a few questions about this feature:
Should I create microservices in the same project or in separate projects?
Why even use createMicroservice over NestFactory.create? Using createMicroservice also forces me to use @MessagePattern and an emitter to send messages, which is okay if my microservice receives messages through a queue (like RabbitMQ), but I can't create a REST API in my microservice, so it becomes harder (or impossible) to call a NestJS microservice from a non-NestJS service, since it relies on NestJS's MessagePattern.
Thanks in advance.
It is not necessary, but I personally create separate projects within a workspace.
Calling createMicroservice is compulsory because Nest needs to know what kind of controller you have and what behavior you expect. In addition, you should avoid implementing a REST API in every service; it should be implemented only in the gateway service, which is responsible for the API layer and acts as the bridge between the microservices and the outside world.
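As a rough sketch of that split (the queue name, message pattern, and sample data below are made up, and RabbitMQ is assumed as the transport since the question mentions it):

    import { Controller, Get, Inject, Module } from '@nestjs/common';
    import { NestFactory } from '@nestjs/core';
    import {
      ClientProxy, ClientsModule, MessagePattern, MicroserviceOptions, Transport,
    } from '@nestjs/microservices';

    // Products microservice: no REST at all, only message handlers.
    @Controller()
    class ProductsController {
      @MessagePattern({ cmd: 'get_products' })
      getProducts() {
        return [{ id: 1, name: 'Example product' }]; // placeholder data
      }
    }

    @Module({ controllers: [ProductsController] })
    class ProductsModule {}

    async function bootstrapProductsService() {
      const app = await NestFactory.createMicroservice<MicroserviceOptions>(ProductsModule, {
        transport: Transport.RMQ,
        options: { urls: ['amqp://localhost:5672'], queue: 'products_queue' },
      });
      await app.listen();
    }

    // Gateway: the only service that exposes REST; it forwards requests to the microservice.
    @Controller('products')
    class GatewayProductsController {
      constructor(@Inject('PRODUCTS') private readonly client: ClientProxy) {}

      @Get()
      list() {
        return this.client.send({ cmd: 'get_products' }, {}); // Nest resolves the Observable
      }
    }

    @Module({
      imports: [
        ClientsModule.register([
          {
            name: 'PRODUCTS',
            transport: Transport.RMQ,
            options: { urls: ['amqp://localhost:5672'], queue: 'products_queue' },
          },
        ]),
      ],
      controllers: [GatewayProductsController],
    })
    class GatewayModule {}

    async function bootstrapGateway() {
      const app = await NestFactory.create(GatewayModule);
      await app.listen(3000);
    }

In practice each half lives in its own project/process (bootstrapProductsService and bootstrapGateway would each be called from their own main.ts), and the gateway stays the single place where HTTP lives; everything behind it communicates over the broker.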
To get a good sense of how to implement microservices with NestJS, I highly recommend taking a look at this repo:
https://github.com/Denrox/nestjs-microservices-example
I have a Rails SPA using React, and I've recently started using Action Cable. Since WebSockets have lower overhead than normal HTTP connections, I'd like to allow the JavaScript client to make requests over the WebSockets created by Action Cable, but I don't want to duplicate all the code from my controllers.
Is there a good way to trigger a controller from Action Cable (e.g. setting params and current_user)? I could then just implement a method that switches between sending the JSON response via Action Cable or via the usual render option (I don't have any views). I'm guessing this is a common need, so I'm probably just not searching for the right thing.
No
The way you interact with Rails controllers is by sending HTTP requests to your application, which would defeat the entire purpose of the exercise.
Rails controllers aren't just simple classes that are easy to extract and use in isolation. They are fully fledged Rack applications that live in a Rack middleware stack. Their entire purpose is to take an incoming HTTP request from the router and provide a response that is passed back up the stack.
While you can fake a request with Rack::MockRequest, this starts to break down as soon as your application depends on the other middleware in the stack. For example, Devise depends on the Warden middleware, and the answer there is more stubbing and mocking. This very flawed approach was used in the now-deprecated controller tests; using it in production would be crazy.
I'm guessing this is a common need, so I'm probably just not searching for the right thing.
It is. And the answer is quite simple: if you need to reuse code outside of a controller, don't put it in a controller.
There are tons of options, such as mixins, service objects, ActiveJob, etc.
I have a never-ending discussion with my manager about the use of AWS Lambda, and I would like to get some help from one of you.
I am still hesitant to use a serverless architecture for production-level projects. First of all, it's time-consuming to test what I am building in a local setup. Even if we can test the code through unit tests, we cannot rule out failures caused by the mocked request and response objects. The fact that I cannot invoke the Lambda in my local setup makes testing my Lambda-based API during development tedious. Secondly, as far as I know, there is no SLA promised for AWS Lambda's availability and reliability. This makes me hesitant to adopt Lambda for building a RESTful API.
So for now I use Lambda only to catch events triggered from AWS, for example to do something after a user uploads a profile picture to an S3 bucket, or after a user registers through Cognito.
However, what my manager expects is to mix the Node.js API with AWS Lambda within a single project. From my perspective that really doesn't make sense. Once we have set up the Node API on EC2 instances, I think it would be more productive to think about setting up auto-scaling or how to utilize all the resources running on the current EC2 instances. But my manager insists that we set up both the Node API and a Lambda API together. For example, services A and B would be served by the Node API and services C, D, and E by AWS Lambdas.
I tried it before, but it caused me a lot of confusion. I feel it would be better to choose either the Node API or the AWS Lambda API when building an API, instead of mixing them together.
I don't want to say that my manager is totally wrong and I am right; I just want a clear answer in this case. I would really appreciate any comments and answers on this situation.
Just adding some thoughts on the previous answers:
First of all, it's time-consuming to test what I am building in a local setup. Even if we can test the code through unit tests, we cannot rule out failures caused by the mocked request and response objects. The fact that I cannot invoke the Lambda in my local setup makes testing my Lambda-based API during development tedious
You can certainly build, test, and simulate a Lambda invoke in your local environment; it's just a new paradigm, and there are tools to help you out.
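For instance, because a Node.js Lambda handler is just an exported function, you can already exercise it locally by calling it with a hand-built event (the handler and the event shape below are illustrative only); tools such as the SAM CLI mentioned further down go further and emulate the Lambda runtime:

    // handler.ts - an ordinary exported async function, so it can be invoked locally.
    export const handler = async (event: { pathParameters?: { id?: string } }) => {
      const id = event.pathParameters?.id ?? 'unknown';
      return {
        statusCode: 200,
        body: JSON.stringify({ id, message: 'hello from lambda' }),
      };
    };

    // local-invoke.ts - simulate an API Gateway style event without deploying anything.
    import { handler } from './handler';

    const fakeEvent = { pathParameters: { id: '42' } };
    handler(fakeEvent).then((response) => console.log(response));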
Secondly, as far as I know, there is no SLA promised for AWS Lambda's availability and reliability. This makes me hesitant to adopt Lambda for building a RESTful API.
AWS Lambda runs on AWS's compute-layer infrastructure, so I believe that if they face an issue at the compute layer, your EC2 instances will most likely face an outage as well.
Once we have set up the Node API on EC2 instances, I think it would be more productive to think about setting up auto-scaling or how to utilize all the resources running on the current EC2 instances
I don't think so. The serverless stack scales with zero effort on your part; you don't need to manage the infrastructure.
I tried it before, but it caused me a lot of confusion. I feel it would be better to choose either the Node API or the AWS Lambda API when building an API, instead of mixing them together.
Welcome to micro and decoupled services, where developing a service is easy but managing the whole infrastructure is hard.
Another thing to keep in mind when talking to managers about architecture: Cost
It's hard to argue with, and it makes every manager's eyes shine when they see the possibility of running their business at low cost. And running your service on a serverless stack is really cheap.
Bottom line:
No, it's not a bad idea to mix resources as your manager wants.
Yes, it's easier to just pick a framework to write the API and set up an EC2 instance and an auto-scaling group.
Yes, decoupling services is a heavy lift, but it pays off when running in production.
OK, let's go one by one. First things first: your first problem is testing Lambda locally, and that is completely possible with SAM.
Please have a look at http://docs.aws.amazon.com/lambda/latest/dg/test-sam-local.html
The most important design decision: if your application is monolithic and you don't want to redesign it as microservices, then stick with EC2.
Next, regarding your design for a hybrid API (Lambda and EC2): I don't think that is an anti-pattern or a bad idea. It depends entirely on your requirements. Say you have an existing set of APIs on EC2 (maybe monolithic) and you want to slowly migrate to serverless and microservices. You don't need to migrate everything to serverless at once; you can start one by one. Remember, the communication happens over HTTP, and it doesn't matter whether your services are distributed between EC2 and Lambda. In the microservice world it doesn't matter whether services live on the same server or are distributed across many servers.
Communication speed has improved drastically over the last few years, and that's one of the main reasons for the emergence of microservices.
So, in a nutshell, it is not a bad idea to have hybrid APIs, but it depends entirely on your architecture. If it's monolithic, then don't go for Lambda.
There are cases where you do need to run Lambda and EC2 instances together (e.g. monolith-to-microservices migration projects, or Node.js with Express as a web server), and it can make sense.
There are multiple approaches to achieve this. Two common approaches (for request-response) are:
If you plan to serve only RESTful APIs from both Lambda and Node.js EC2 instances, you can use API Gateway as a proxy for both of them (see the sketch below).
If the Node.js EC2 instance is used as a web server that also serves dynamic pages and so on, then you can use AWS CloudFront as a proxy, and you will still require API Gateway to connect to Lambda.
Note: for asynchronous workflows, you can use Lambda along with other AWS services such as AWS Step Functions, SQS, or SNS, depending on the nature of the workflow.
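As a rough sketch of the first approach, the same public API can be split so that some routes are served by Express on EC2 while others are Lambda handlers behind API Gateway; the route split below is only an example:

    // Services A and B: Express on an EC2 instance, fronted by API Gateway (HTTP proxy integration).
    import express from 'express';

    const app = express();
    app.get('/users', (_req, res) => res.json([{ id: 1, name: 'Alice' }])); // service A (example)
    app.get('/orders', (_req, res) => res.json([]));                        // service B (example)
    app.listen(3000);

    // Services C, D and E: Lambda handlers behind API Gateway (Lambda proxy integration).
    export const reportsHandler = async () => ({
      statusCode: 200,
      body: JSON.stringify({ report: 'generated' }), // service C (example)
    });

Because clients only ever see the API Gateway routes, individual services can later move between EC2 and Lambda without the callers noticing.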
I'm trying to implement RabbitMQ as a general task queue in an existing web app, but I also need to run a scheduled task that aggregates some user data at a set interval.
I know task scheduling isn't part of RabbitMQ's core design, but it seems like it can be done with dead-letter exchanges. What I'm concerned about is that when I have multiple instances of the web app running, the task will get scheduled multiple times.
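For reference, the dead-letter-based delay I have in mind looks roughly like the sketch below, using amqplib (the queue, exchange, and job names are placeholders):

    import * as amqp from 'amqplib';

    // A message sits in a "wait" queue until its per-message TTL expires,
    // then it is dead-lettered into the queue that the workers actually consume.
    const conn = await amqp.connect('amqp://localhost');
    const ch = await conn.createChannel();

    await ch.assertExchange('tasks.dlx', 'direct', { durable: true });
    await ch.assertQueue('tasks.run', { durable: true });
    await ch.bindQueue('tasks.run', 'tasks.dlx', 'aggregate');

    await ch.assertQueue('tasks.wait', {
      durable: true,
      arguments: {
        'x-dead-letter-exchange': 'tasks.dlx',
        'x-dead-letter-routing-key': 'aggregate',
      },
    });

    // Schedule the aggregation to fire in one hour (TTL is given in milliseconds).
    ch.sendToQueue('tasks.wait', Buffer.from(JSON.stringify({ job: 'aggregate-user-data' })), {
      expiration: '3600000',
    });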
Is there a way I can structure this to avoid the problem? Perhaps by limiting the number of connections to an exchange, if that is possible?
Like you've said, it's not part of RabbitMQ's core design.
You can take a look at the RabbitMQ blog to see how they solve this kind of problem with a semaphore queue: https://www.rabbitmq.com/blog/2014/02/19/distributed-semaphores-with-rabbitmq/
"when I have multiple instances of the web app running, the task will get scheduled multiple times."
This is true, but you can change your setup a little. Either introduce some "shared memory" (say, a database with primary-key enforcement) and make sure you publish to Rabbit only after your transaction has succeeded (there's a sketch of this below).
Or you can take a look at a sort of "round robin" exchange and leverage it entirely within RabbitMQ (https://github.com/jbrisbin/random-exchange).
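A minimal sketch of the first option, assuming Postgres and a made-up scheduled_runs table; any store that can enforce uniqueness works the same way:

    import { Pool } from 'pg';

    const db = new Pool(); // connection settings come from the usual PG* environment variables

    // Assumed table:
    //   CREATE TABLE scheduled_runs (job text, run_at timestamptz, PRIMARY KEY (job, run_at));
    async function claimAndPublish(runAt: Date, publish: () => Promise<void>) {
      try {
        // Every app instance attempts this insert; the primary key lets only one succeed per slot.
        await db.query('INSERT INTO scheduled_runs (job, run_at) VALUES ($1, $2)', [
          'aggregate-user-data',
          runAt,
        ]);
      } catch (err) {
        return; // unique violation: another instance already claimed this slot, so do nothing
      }
      await publish(); // only the winning instance publishes the task to RabbitMQ
    }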
Is it possible to create a single Meteor application that has multiple domains and displays different views/layouts depending on the domain?
For example, I have an admin interface accessible at admin.myapp.com and the two domains storeX.com and storeY.com. Both domains should point to the data from admin.myapp.com but display the data (mostly) independently of each other.
This may not be totally up to date with 2014 standards, but I answered this question before:
How can Meteor handle multiple Virtual Hosts?
And with the same setup, you can use Passenger (for nginx or Apache 2) with Meteor. Here's a complete tutorial for using Passenger with Meteor, but keep in mind that you have to integrate the multiple virtual hosts/domains into that tutorial yourself.
Perhaps a better approach would be to use Meteor's pub/sub capabilities, rather than sharing a DB per se. It's entirely possible to publish and subscribe across Meteor apps, or indeed any implementation that uses DDP.
http://docs.meteor.com/#/full/ddp_connect
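A rough sketch of what that cross-app wiring could look like (the collection and publication names are made up; the admin app would need a matching Meteor.publish on its side):

    // In the storeX.com / storeY.com app: connect to the admin app over DDP
    // and mirror one of its collections locally.
    import { DDP } from 'meteor/ddp-client';
    import { Mongo } from 'meteor/mongo';

    const adminConnection = DDP.connect('https://admin.myapp.com');

    // 'products' must match the collection name and publication exposed by the admin app.
    export const Products = new Mongo.Collection('products', { connection: adminConnection });
    adminConnection.subscribe('products');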
You can use the partitioner to send different views of the data to different users based on the domain name they hit.
See How can I share MongoDB collections between Meteor apps?. Basically, the idea is that you build two Meteor apps that share the MongoDB database and collection data.