I have a never-ending discussion with my manager about the usage of AWS Lambda, and I would like to get some help from one of you.
I am a bit hesitant to use serverless architecture for production-level projects yet. First of all, it's time-consuming to test what I am building in my local setup. Even if we can test the code through unit testing, we cannot remove the possibility of failure in the mocked request and response objects. The fact that I cannot invoke the Lambda in my local setup makes testing my Lambda-based API during development very tedious. Secondly, as far as I know, there is no promised SLA for AWS Lambda's availability and reliability. This makes me hesitate to adopt Lambda for building a RESTful API.
So right now I use Lambda only to react to events triggered by AWS, for example to do something after a user uploads a profile picture to an S3 bucket, or to do something after a user registers through Cognito.
However, what my manager expects is to mix my Node.js API with AWS Lambda in a single project. From my perspective that doesn't make sense at all. Once we have set up the Node API on EC2 instances, I think it is more productive to think about setting up auto-scaling or how to utilize all the resources running on the current EC2s. But my manager insists that I set up both the Node API and a Lambda API together. For example, services A and B would be served by the Node API and services C, D, and E by AWS Lambda.
I tried it before, but it caused me a lot of confusion. I feel it would be better to choose either a Node API or an AWS Lambda API when building an API, instead of mixing them together.
I don't want to say that my manager is totally wrong and I am right. I just want a clear answer in this case. I would really appreciate any comments and answers on this situation.
Just adding some thoughts on the previous answers:
First of all, it's time-consuming to test what I am building in my local setup. Even if we can test the code through unit testing, we cannot remove the possibility of failure in the mocked request and response objects. The fact that I cannot invoke the Lambda in my local setup makes testing my Lambda-based API during development very tedious.
For sure you can build, test, and simulate a Lambda invoke in your local environment; it's just a new paradigm, and there are tools to help you out.
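At the simplest level, a Lambda handler is just an exported function, so you can invoke it directly in a plain local script or unit test with a hand-built event. A minimal sketch (the handler logic and event shape are invented for illustration):

```ts
// handler.ts: a hypothetical Lambda handler, for illustration only
export const handler = async (event: { pathParameters?: { id?: string } }) => {
  const id = event.pathParameters?.id;
  if (!id) {
    return { statusCode: 400, body: JSON.stringify({ error: 'missing id' }) };
  }
  return { statusCode: 200, body: JSON.stringify({ id }) };
};

// invoke.local.ts: call the handler locally with a fake event
import { handler } from './handler';

(async () => {
  const response = await handler({ pathParameters: { id: '42' } });
  console.log(response); // { statusCode: 200, body: '{"id":"42"}' }
})();
```

Tools like AWS SAM Local go a step further and emulate the API Gateway front door as well, so you aren't limited to raw handler calls.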
Secondly, as far as I know, there is no promised SLA for AWS Lambda's availability and reliability. This makes me hesitate to adopt Lambda for building a RESTful API.
AWS Lambda runs on the AWS compute-layer infrastructure, so I believe that if they face an issue in the compute layer, your EC2 instances will face the same outage.
Once we have set up the Node API on EC2 instances, I think it is more productive to think about setting up auto-scaling or how to utilize all the resources running on the current EC2s.
I don't think so. The serverless stack scales with zero effort; you don't need to manage the infrastructure.
I tried it before, but it caused me a lot of confusion. I feel it would be better to choose either a Node API or an AWS Lambda API when building an API, instead of mixing them together.
Welcome to micro and decoupled services, where developing each service is easy but managing the whole infrastructure is hard!
Another thing to keep in mind when talking to managers about architecture: cost.
It's hard to argue with, and every manager's eyes shine when they see the possibility of running their business at low cost. And having your service running on a serverless stack is really cheap.
Bottom line:
No, it's not a bad idea to mix resources as your manager wants.
Yes, it's easier to just grab a framework to write the API and set up an EC2 instance and an auto-scaling group.
Yes, there's heavy lifting in decoupling services, but it pays off when running in production.
OK, let's go one by one. First things first: your first problem is testing Lambda locally, and that is completely possible to do with SAM.
Please have a look at http://docs.aws.amazon.com/lambda/latest/dg/test-sam-local.html
The most important design decision: if your application is monolithic and you don't want to redesign it into microservices, then stick with EC2.
Next, regarding your design for a hybrid API (Lambda and EC2): I don't think it is an anti-pattern or a bad idea. It depends entirely on your requirements. Say you have an existing set of APIs on EC2 (maybe monolithic) and you want to migrate slowly to serverless and microservices. You don't need to migrate everything to serverless at once; you can start one by one. Remember, the communication happens over HTTP, so it doesn't matter whether your services are distributed between EC2 and Lambda. In the microservice world it doesn't matter whether services live on the same server or are distributed across many servers.
Communication speed has improved drastically over the last few years, and that's one of the main reasons for the emergence of microservices.
So, in a nutshell, it is not a bad idea to have hybrid APIs, but it depends entirely on your architecture. If it's monolithic, don't go for Lambda.
There are cases where you do need to run Lambda and EC2 instances together (e.g. monolith-to-microservices migration projects, or Node.js with Express as a web server), and that can make sense.
There are multiple ways to achieve this. Two common approaches (for request-response) are:
If you plan to serve only RESTful APIs from both Lambda and the Node.js EC2 instances, you can use API Gateway as a proxy for both of them.
If the Node.js EC2 instance is used as a web server to serve dynamic pages and so on, then you can use AWS CloudFront as a proxy, and you will still need API Gateway to connect to Lambda.
Note: for asynchronous workflows, you can use Lambda together with other AWS services such as AWS Step Functions, SQS, or SNS, depending on the nature of the workflow.
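One practical trick for the migration scenario above: a wrapper library such as serverless-http lets the exact same Express app run as a normal web server on EC2 and as a Lambda handler behind API Gateway. A minimal sketch (the route, port, and RUN_AS_SERVER flag are invented for illustration):

```ts
import express from 'express';
import serverless from 'serverless-http';

const app = express();

// e.g. "service C" from the question, served by this app
app.get('/c/items', (_req, res) => {
  res.json({ items: [] });
});

// On EC2: run as a plain long-lived web server
if (process.env.RUN_AS_SERVER) {
  app.listen(3000, () => console.log('listening on 3000'));
}

// On Lambda: export a handler that API Gateway can invoke
export const handler = serverless(app);
```

This keeps one codebase while you decide, service by service, which deployment target it should live on.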
Related
In my team we use NestJS for our backend. We want to use a microservices architecture, and I've seen there is built-in microservices support in NestJS. After reading the documentation I have a few questions about this feature:
Should I create microservices in the same project or in separate projects?
Why even use createMicroservice over NestFactory.create? Using createMicroservice also forces me to use @MessagePattern and an emitter to send messages, which is okay if my microservice receives messages through a queue (like RabbitMQ), but I can't create a REST API in my microservice. That makes it harder (or impossible) to call a NestJS microservice from a non-NestJS service, since it relies on NestJS's MessagePattern.
Thanks in advance.
It is not necessary, but I personally create separate projects within a workspace.
Calling createMicroservice is compulsory because Nest needs to know what kind of controller you have and what behavior you expect. In addition, you should avoid implementing a REST API in every service; it should be implemented only in the gateway service, which is responsible for the API layer and is the bridge between the microservices and the outside world.
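For a concrete picture, here's a minimal sketch of the microservice side (the { cmd: 'sum' } pattern, the TCP port, and AppModule are invented for illustration):

```ts
// math.controller.ts: handles message patterns, not HTTP routes
import { Controller } from '@nestjs/common';
import { MessagePattern } from '@nestjs/microservices';

@Controller()
export class MathController {
  @MessagePattern({ cmd: 'sum' })
  sum(data: number[]): number {
    return data.reduce((a, b) => a + b, 0);
  }
}

// main.ts: bootstrap as a microservice (TCP transport as an example)
import { NestFactory } from '@nestjs/core';
import { MicroserviceOptions, Transport } from '@nestjs/microservices';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.createMicroservice<MicroserviceOptions>(AppModule, {
    transport: Transport.TCP,
    options: { host: '127.0.0.1', port: 8877 },
  });
  await app.listen();
}
bootstrap();
```

The gateway is then a normal NestFactory.create app that injects a ClientProxy and calls client.send({ cmd: 'sum' }, [1, 2, 3]) from its REST controllers, which is how non-HTTP microservices get exposed to the outside world.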
To get a good sense of how microservices can be implemented with NestJS, I highly recommend taking a look at this repo:
https://github.com/Denrox/nestjs-microservices-example
My boss asked me to find a way to completely disassociate our front-end application from the back-end in the local environment. Currently I'm the sole developer for both our back-end software and the front-end, so using Docker I'm able to mimic a production environment and work on both projects separately (we don't render on the server side). His idea is to mock literally everything, so that in theory you wouldn't need the back-end software to develop the front-end.
Two of the (more reasonable) solutions I've thought of are:
Mocking all of the network requests on the front end; these mock functions will run instead of real network requests.
The problem with this approach is that it is not persistent: all of the data is randomly generated on every request, and in a system so oriented around forms, tables, and lists, I feel that getting the data you're expecting back after a form submission is a must.
And in order to persist data, every request would have to go through some sort of data store (MobX, Redux, etc.), and even then, if the page refreshes, the data is gone.
Spinning up an Express server and a DB on top of Docker along with Webpack, and mimicking the production server's requests and responses using DB seeders; this way the front end is persistent.
Obviously, this approach would generate plenty of work, and to make sure the Express server correctly mimics the original back-end software, it too would need unit tests and mock requests.
While mocking the data is great for unit tests, this doesn't seem like the way to do front-end development with such a small team. Is there a good approach to achieving this that I can't come up with or find? Or is this an exercise in poor decoupling strategies?
What you are looking for is a mock API. There are plenty of packages for it where you define example requests in a JSON format, and a lot of them also handle persisting data for a short amount of time.
From a strategy perspective, using these can actually make a lot of sense for automating end-to-end tests, which shouldn't rely on a production API. Whether it's the right use of developer time in a one-man team depends on the long-term perspective, of course ;-)
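One widely used option is json-server: you describe your resources in a db.json file and it serves full REST routes (GET/POST/PUT/DELETE) and writes changes back to the file, which addresses the page-refresh persistence problem from the question. A minimal sketch (the file name and port are arbitrary):

```ts
// mock-server.ts: a persistent mock API using json-server
import jsonServer from 'json-server';

const server = jsonServer.create();
const router = jsonServer.router('db.json'); // e.g. { "users": [] }
const middlewares = jsonServer.defaults();

server.use(middlewares);
server.use(router);
server.listen(4000, () => {
  console.log('Mock API listening on http://localhost:4000');
});
```

The front end then only needs a configurable API base URL to switch between the mock and the real back-end.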
I am a beginner at web development. I am building a website that takes some entries from the user, does some complicated mathematical processing on them, and returns the result to the client. I was thinking of implementing the mathematical part in a separate application better suited for such work (in Java or C++, say, where there are good math libraries and the implementation would be more robust and faster). I was wondering what the best way to do this is, architecture-wise.
The "dummy" approach would be spawning a process from the Node.js application and waiting on its output from stdout, parsing it (probably in JSON) and then processing it before sending the result to the client. I have trouble believing that this is the best way to do this (it seems too error-prone, no proper error handling, dependent on the output, and just plain bad practice). A slightly better approach would be to have the Java or C++ application listening on a specific port and waiting for requests from the Node.js application. However this requires more thinking in terms of load-balancing (how would it scale with the number of requests?). Finally, the last approach I found online was to use a queuing system such as RabbitMQ as a way to communicate between the Node.js application and the Java application.
Typically (in "traditional" software), implementing a separate library that holds all the math magic and making calls into it would be the way to go.
What is the best approach to achieve this with a Node.js/web application? There must be good practices/models/architectures/designs for such a problem. Thanks!
The best approach I can think of is writing a native Node.js module: you can use C++ and any library you have to write the code, and export an API to be called from Node's JavaScript.
This also allows you to package your native application as a module for Node.js, which can then be distributed and compiled/installed with npm.
Here's the official tutorial from Node's website.
It isn't the easiest approach, but it is certainly the most elegant and maintainable one.
If your problem can be structured in a stateless way, i.e. you send a request out with the data and the associated task and later receive the result as a response, I would probably create one Angular app for the front end and multiple Node servers for the back end (if you need to handle many requests simultaneously).
As pointed out in the first answer below, you can move whatever parts of the problem you want into native code, but it might be much easier to use sockets to connect to a server that does the computation, in any language. In one project I use a socket.service to handle all the communication back and forth to the servers. (The angular-fullstack generator in the next paragraph includes such a skeleton socket.service.)
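A minimal sketch of that socket idea with socket.io (the event name, port, and computation are invented for illustration):

```ts
import { Server } from 'socket.io';

// A compute server: receives a task over a socket, replies via ack callback
const io = new Server(3001);

io.on('connection', (socket) => {
  socket.on('compute', (numbers: number[], ack: (result: number) => void) => {
    // stand-in for the real heavy math
    ack(numbers.reduce((a, b) => a + b, 0));
  });
});
```

Because each request carries everything the server needs, you can run several of these compute servers and spread the load across them.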
In terms of the front end in combination with a Node backend, the Yeoman generator (http://yeoman.io/generators/) for creating MEAN-stack applications, using MongoDB, Express, AngularJS, and Node, lets you quickly set up a project following best practices. And the angular-fullstack generator will give you a framework that is light-years ahead in so many ways. You can use Yeoman to generate a sample application front end (Angular, Express) and back end (Node.js, MongoDB) with support for sockets, etc.
This too is not very simple, but in the end, with a few months of work, you would have a structured problem and solution that you would never outgrow. When I first wanted to get into this space, that was the path I took. And in truth the front end was, and is, the harder part; the back end, connected via sockets, HTTP, or whatever, has become much simpler. In terms of servers, AWS, or even just nginx on Linux through DigitalOcean or many others, is inexpensive and flexible.
I hope this helps.
Currently I have two Node processes that I need to run. One is my own custom app and the other is iframely, another Node app that returns embed codes. Right now, my app makes requests to, say, http://localhost:8061/iframely?url=.... But now, switching to Heroku, only one process in my app can accept HTTP requests (the process designated with web: in the Procfile, as I understand it).
To run iframely alongside my app, do I have to create another app? Or can I have the two processes speak to each other, bypassing HTTP? Keep in mind I don't want to heavily restructure iframely.
It sounds from your description as if you have two separate Node apps, each serving its own purpose.
Regardless of how these apps are implemented, the best way to handle this sort of thing is via multiple Heroku apps. This is what they were designed for!
You can think of a Heroku app as a single-purpose web server. If you have one codebase that does something independent of another, create two Heroku apps. If you have 3 codebases that all do different things, make 3 Heroku apps.
In addition to this being the best way to handle this sort of thing in general (you get more reliability, as each service has its own servers), it's also cheaper: you get one free Heroku dyno per app, which means you'll have twice the free web servers you would have otherwise.
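Practically, the only change in your own app is to stop hard-coding localhost and read the iframely app's public URL from config. A small sketch (the IFRAMELY_URL variable name is invented for illustration):

```ts
// Before (one host): http://localhost:8061/iframely?url=...
// After (two Heroku apps): point at the other app's public URL via config
function iframelyRequestUrl(targetUrl: string): string {
  const base = process.env.IFRAMELY_URL ?? 'http://localhost:8061';
  return `${base}/iframely?url=${encodeURIComponent(targetUrl)}`;
}
```

Locally you keep the old behavior; on Heroku you set IFRAMELY_URL to the second app's domain.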
I want to do a smoke test to check the connection between my web app and the server itself. Does someone know how to do this? In addition, I want to do acceptance tests covering my whole application. Which tool do you recommend?
My technology stack is: Backbone, Require.js, and jQuery Mobile, with Jasmine for BDD tests.
Regards
When doing BDD you should always mock the collaborators. The tests should run quickly and not depend on any external resources such as servers, APIs, databases, etc.
The way you would do this in, e.g., Jasmine is to declare a spy that pretends to be the server. You then define what the spy's response would be in a particular scenario or example.
This is the best approach if you want your application to be environment-independent, which is very much needed when running Jenkins jobs: building a whole infrastructure around the job would be hard to reproduce.
Make spy/mock objects that represent the server, and in your specs define how the external sources behave. This way you can focus on the behavior your application delivers under the specified circumstances.
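In Jasmine that looks roughly like this (the api object and the response shape are invented for illustration):

```ts
// A tiny collaborator that would normally hit the server
const api = {
  fetchUser(id: number): Promise<{ id: number; name: string }> {
    return fetch(`/users/${id}`).then((r) => r.json());
  },
};

describe('user view', () => {
  it('renders the name returned by the server', async () => {
    // Replace the real network call with a spy that resolves canned data
    spyOn(api, 'fetchUser').and.returnValue(
      Promise.resolve({ id: 1, name: 'Ada' })
    );

    const user = await api.fetchUser(1);

    expect(api.fetchUser).toHaveBeenCalledWith(1);
    expect(user.name).toBe('Ada');
  });
});
```

The spec never touches the network, so it runs the same on a developer machine and on Jenkins.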
This isn't a complete answer, but one tool we've been using for our very similar stack is mockJSON. It's a jQuery plugin that does a nice job of both:
intercepting calls to a URL and sending back mock data instead, and
making it easy to generate random mock data based on templates.
The best part is that it's entirely client-side, so you don't need to set up anything external to get decent tests. It won't test the actual network connection to your server, but it can do a very good job of validating the type of data your server would be sending back. FWIW, we use Mocha as our test framework and haven't had any trouble integrating this with our BDD work.
The original mockJSON repo is still pretty good, though it hasn't been updated in a while. My colleagues and I have been trying to keep it going with patches and features in my own fork.
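Usage looks roughly like the sketch below; the URL pattern and the response shape are invented for illustration, and the pipe/@-keyword syntax is the plugin's template language for generating random data:

```ts
// Assumes jQuery plus the mockJSON plugin are loaded; no typings assumed
declare const $: any;

// Intercept $.getJSON calls matching the URL and return generated data
$.mockJSON(/\/api\/users/, {
  'users|5-10': [{
    'id|+1': 1,
    name: '@MALE_FIRST_NAME @LAST_NAME',
    email: '@EMAIL',
  }],
});

$.getJSON('/api/users', (data: any) => {
  console.log(data.users.length); // somewhere between 5 and 10 mock users
});
```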
I found a blog post where the author explains how to use Capybara, Cucumber, and Selenium outside a Rails application, so they can be used to test a JavaScript app. Here is the link: http://testerstories.com/?p=48