Lambda vs. Fargate
8th January 2024
When talking about serverless solutions on AWS, Lambda and Fargate are the most popular. But the question remains: which aligns best with your specific requirements, and which offers greater cost-efficiency? What are the pros and cons? This article delves into several crucial aspects to help you make an informed decision tailored to your context.
Cost comparison
I chose to delve into the cost aspect first, acknowledging its criticality for the majority of users. Given that costs fluctuate based on individual use cases, I highly advise conducting a personalized calculation for your specific needs. Despite this variability, the forthcoming analysis will reveal a significant disparity between the two services in terms of cost-effectiveness.
Lambda environment definitions used for calculation:
- API Gateway (REST API without caching)
- No free tier
- 512MB (RAM)
- ARM (Processor architecture)
- 300ms (average response time)
- Variables: number of API Gateway and Lambda requests
ECS + Fargate environment:
- Application Load Balancing (ALB)
- 1 second average connection duration
- Variables: GB processed/month (34KB * number of requests)
ECS Task Definition:
- Linux ARM64
- 730 hours average duration
- 20GB ephemeral storage
- Variables: vCPU, amount of RAM, and desired tasks
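The assumptions above can be turned into a rough cost model. As a sketch only: the rates below are illustrative us-east-1 prices (Lambda ARM per GB-second and per request, API Gateway REST per request, Fargate ARM per vCPU-hour and GB-hour, ALB hourly rate excluding LCU charges) and should be checked against the current AWS pricing pages before you rely on them:

```typescript
// Illustrative us-east-1 rates; check current AWS pricing before using.
const LAMBDA_GB_SECOND = 0.0000133334;      // ARM compute, per GB-second
const LAMBDA_PER_REQUEST = 0.20 / 1_000_000;
const APIGW_PER_REQUEST = 3.50 / 1_000_000; // REST API, first pricing tier
const FARGATE_VCPU_HOUR = 0.03238;          // ARM
const FARGATE_GB_HOUR = 0.00356;            // ARM
const ALB_HOUR = 0.0225;                    // excludes LCU charges
const HOURS_PER_MONTH = 730;

// Lambda: 512 MB memory, 300 ms average duration, fronted by API Gateway.
function lambdaMonthlyCost(requests: number, memoryGb = 0.5, avgSeconds = 0.3): number {
  const compute = requests * avgSeconds * memoryGb * LAMBDA_GB_SECOND;
  return compute + requests * LAMBDA_PER_REQUEST + requests * APIGW_PER_REQUEST;
}

// Fargate: N always-on tasks behind an ALB, running the full month.
function fargateMonthlyCost(tasks: number, vcpu: number, memoryGb: number): number {
  const perTaskHour = vcpu * FARGATE_VCPU_HOUR + memoryGb * FARGATE_GB_HOUR;
  return tasks * perTaskHour * HOURS_PER_MONTH + ALB_HOUR * HOURS_PER_MONTH;
}

console.log(lambdaMonthlyCost(1_000_000).toFixed(2));  // ≈ 5.70 (the low-traffic case below)
console.log(fargateMonthlyCost(2, 0.5, 1).toFixed(2)); // ≈ 45.26 before LCU charges
```

With these rates the model lands close to the figures in the next sections; the remaining gap on the Fargate side is mostly the per-LCU ALB charge, which scales with processed bytes.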
1MM requests per month (Low traffic)
On a low-traffic API, we find that it can cost up to $5.70/month on AWS Lambda infrastructure.
On Fargate infrastructure (with 0.5 vCPU, 2 desired tasks, and 1GB of RAM), we can reach up to $47.61/month (roughly 8x more expensive 😮).
10MM requests per month (Medium traffic)
At 10MM requests/month, Lambda came to $57.00/month.
On Fargate (with 0.5 vCPU, 3 desired tasks, and 1GB of RAM), we calculated something close to $62.41/month.
100MM requests per month (High traffic)
In our final test, Lambda came to $570.00/month.
On Fargate (with 1 vCPU, 4 desired tasks, and 2GB of RAM), we got $158.67/month (roughly 3.5x cheaper 😮).
Conclusion
Cost considerations are fundamentally tied to your application’s performance and the consumption of time and computational resources. Therefore, it’s essential to calculate these expenses based on your specific usage. This tailored assessment will provide a more accurate understanding of the cost implications for your unique scenario.
In practical terms, the costs associated with AWS Lambda and Fargate may turn out to be lower than initially anticipated. This is due to several factors: the option to implement Scheduled Scaling with ECS tasks, reducing the number of running tasks during times of lower traffic; the fact that your Lambdas will probably consume less memory and execute faster than the 300ms average assumed above; among other considerations. It’s crucial to factor in these scenarios when estimating your costs.
Note: this analysis does not consider services like CloudWatch, additional VPC resources, and so on. It also does not consider the Free Tier.
Performance
It is well known that in serverless environments cold starts can have a significant performance impact, so they need to be considered, and I strongly advise you to look into methods for reducing them.
Lambda Performance
The operational mechanism of AWS Lambda, which involves creating a new ‘context’ by generating an instance from our code definition and maintaining this context in a ‘warm’ state for a brief period, can significantly impact this infrastructure. This is particularly evident in the phenomenon of ‘cold starts’, where the initiation of a new instance can lead to noticeable delays. Understanding and mitigating the effects of cold starts is crucial in optimizing the performance of Lambda-based applications.
When AWS Lambda initiates a new environment, it undergoes a process of preparing the entire runtime environment, which includes building the image and executing initial code. This setup phase can take several seconds, potentially translating into a noticeable delay for your end clients. Understanding and optimizing this start-up time is crucial for ensuring efficient client experiences.
Additionally, the first execution of an AWS Lambda function involves retrieving certain resources for the first time, such as establishing database connections and accessing secrets. These initial operations can lead to a slower execution during the Lambda’s initial run. It’s important to factor in these initial resource setup times when considering the overall performance and responsiveness of your Lambda functions.
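A common mitigation for those first-run costs is to initialize expensive resources at module scope, so that warm invocations of the same context reuse them instead of rebuilding them. A minimal sketch of the pattern (the connection object and the `initCount` counter are illustrative stand-ins; in a real function `connect()` would be something like TypeORM's `DataSource.initialize()` or a secret fetch):

```typescript
// Module scope: survives across warm invocations of the same Lambda context.
let dbConnection: { query: (sql: string) => string } | null = null;
let initCount = 0; // for demonstration only: counts how often setup ran

// Stand-in for an expensive setup step (DB connection, secret retrieval, ...).
function connect() {
  initCount++;
  return { query: (sql: string) => `result of: ${sql}` };
}

// The handler pays the setup cost only on the first (cold) invocation;
// subsequent warm invocations reuse the cached connection.
async function handler(event: { sql: string }): Promise<string> {
  dbConnection = dbConnection ?? connect();
  return dbConnection.query(event.sql);
}
```

Calling the handler repeatedly in the same context triggers `connect()` only once; every later call skips straight to the query.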
Another vital aspect to consider is monitoring and tracing within the AWS Lambda environment, which often proves to be a significant challenge. Current solutions offered by monitoring tools, such as Datadog, can add a substantial amount of time to your cold-starts, sometimes amounting to several seconds. This can also noticeably affect the performance of warm executions. For instance, the Datadog Lambda Extension solution involves executing HTTP requests to the Datadog API, among other actions, which can further impact execution times.
ECS + Fargate Performance
In this infrastructure, the cold-start causes way less impact than on Lambda since it happens only on:
- Initial tasks deployment
- Scaling (new tasks created manually or automatically via Scaling Policies)
- New deployments (but your clients should not feel that, since the previously running tasks keep serving traffic until the deployment completes)
In the context of monitoring and tracing, it’s important to note that this infrastructure typically does not suffer significant performance impacts. This is because solutions like Datadog run in a separate container, often as a sidecar within the same task, rather than inside your application’s runtime. Consequently, your clients are unlikely to experience any notable delays in response times. The primary consideration is that these monitoring solutions will share and compete for computational resources with your web application, but they do not inherently slow down request handling itself.
Regarding overall performance, it’s noteworthy that AWS Fargate generally outperforms AWS Lambda. But why is this the case? The answer lies in the intricate workings beneath the surface of these technologies. Understanding these nuanced details requires a deep dive into their architectural and operational differences, a topic that is beyond the scope of this article.
Performance Comparison
In our comparative analysis, we’ll focus on a basic register/login API that incorporates several key functionalities. This includes performing database queries, as well as the comparison and hashing of passwords using the bcryptjs library. Additionally, we’ll explore the process of signing JSON Web Tokens (JWTs) using the jsonwebtoken library. These elements will form the basis of our evaluation, providing a practical context to assess the performance and capabilities of AWS Lambda and Fargate in handling common API operations.
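To make the workload concrete, here is a minimal sketch of that login flow. To keep the snippet dependency-free, Node’s built-in `crypto` module stands in for the libraries named above (`scryptSync` in place of bcryptjs’s `hash`/`compare`, and a hand-rolled HS256 signer in place of jsonwebtoken’s `jwt.sign`); the user store and the secret are hypothetical:

```typescript
import { scryptSync, randomBytes, timingSafeEqual, createHmac } from "node:crypto";

// Password hashing stand-ins (the real code would use bcrypt.hash / bcrypt.compare).
function hashPassword(password: string): string {
  const salt = randomBytes(16).toString("hex");
  return salt + ":" + scryptSync(password, salt, 32).toString("hex");
}

function comparePassword(password: string, stored: string): boolean {
  const [salt, hash] = stored.split(":");
  return timingSafeEqual(scryptSync(password, salt, 32), Buffer.from(hash, "hex"));
}

// Minimal HS256 JWT signer standing in for jsonwebtoken's jwt.sign().
function signJwt(payload: object, secret: string): string {
  const enc = (o: object) => Buffer.from(JSON.stringify(o)).toString("base64url");
  const body = enc({ alg: "HS256", typ: "JWT" }) + "." + enc(payload);
  return body + "." + createHmac("sha256", secret).update(body).digest("base64url");
}

// The flow under test: look up the user, compare the password, sign a token.
function login(email: string, password: string, users: Map<string, string>): string | null {
  const stored = users.get(email);
  if (!stored || !comparePassword(password, stored)) return null;
  return signJwt({ sub: email }, "dev-secret"); // hypothetical secret
}
```

The hashing and signing steps are the CPU-bound part of each request, which is exactly where the Lambda memory/duration settings and the Fargate task sizing start to matter.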
We are going to use Postman’s Performance Runner to execute the load test, with 50 virtual users over a 3-minute run and a 2-minute ramp-up.
Lambda application: a TypeScript application running on Node.js, using no frameworks, with TypeORM connecting to Postgres (deployed on AWS RDS and accessed via RDS Proxy). See the results:
ECS + Fargate application: a Nest.js application, also using TypeORM and Postgres deployed on RDS. The infrastructure is the same as defined earlier for the high-traffic API. See the results:
This comparison reveals a significant performance disparity, especially notable when Lambda initiates a new context. In such instances, the processing and return of a request can take as long as 4.3 seconds. This extended duration is primarily due to the time Lambda takes to set up a new execution environment. Such scenarios are critical to consider when selecting the most suitable solution for your needs. It’s important to weigh the implications of these delays against your application’s performance requirements and user experience expectations.
Scalability
Lambda is well known for its ‘infinite scalability’, but that is not literally what happens. Lambda does auto-scale: it creates a new context whenever necessary, since one context can handle only one request at a time. But it does have a limit.
That limit for AWS Lambda is called ‘account concurrency’: it caps the number of Lambda contexts that can run at the same time, across all the Lambdas in your account (by default, 1,000 concurrent executions per Region, which can be raised via a quota increase request). Within that limit, your Lambdas will scale as necessary, but always remember to request an increase ahead of high-demand periods.
ECS + Fargate can be scaled manually, by creating new tasks (copies of your application) or by increasing vCPU and memory, or automatically via Auto Scaling, which monitors metrics (CPU, memory, etc.) to add or remove tasks.
Fargate will handle the necessary infrastructure for your application to run, meaning you will not need to worry about the machines ECS runs your tasks on.
Development process
Lambda has a much simpler development model, since it needs just a function definition and can be updated quickly. But you can suffer a little when trying to run it locally. There are tools for this, such as LocalStack, Serverless Offline, and the SAM CLI (via `sam local start-api`), but all of them have their limitations and are not as trivial as a `docker compose up`.
Since it is not trivial to run locally, you will find it is often simpler to literally deploy your Lambda to a development environment and test it directly on AWS, which can lead to a slower development process and is not very friendly to new developers on your project. Running integration tests can also be slower, since you have to execute them against a deployed Lambda.
ECS runs its tasks from a Docker image, so your application will need to be developed using Docker, which is AMAZING! With Docker, you can define a docker-compose file to emulate your infrastructure locally, which speeds up your development process, lets you track CPU/memory usage easily, helps new developers, and also speeds up integration tests.
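For illustration, a minimal docker-compose file for the kind of Postgres-backed API discussed above might look like this (service names, ports, and credentials are hypothetical; the `deploy.resources` limits loosely mirror a 0.5 vCPU / 1GB Fargate task):

```yaml
version: "3.8"
services:
  api:
    build: .                      # your application's Dockerfile
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      - db
    deploy:
      resources:
        limits:                   # emulate the Fargate task sizing locally
          cpus: "0.5"
          memory: 1g
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
```

With this in place, `docker compose up` brings up the API and its database together, and `docker stats` shows whether the app actually fits the task sizing you plan to pay for.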
Conclusion
Drawing from the insights presented and my personal experience, AWS Lambda emerges as a viable option for developing APIs primarily in two scenarios: for very small businesses or in cases where the potential for cold-starts exceeding 3 seconds, especially when incorporating monitoring tools, is acceptable. The slower response times inherent in certain Lambda operations should be a key consideration, particularly for applications where rapid response is crucial. Ultimately, Lambda’s suitability hinges on balancing these performance characteristics against the unique needs and scale of the business.
AWS Lambda excels in managing event-driven operations, such as handling S3 bucket events, orchestrating workflows with SNS and SQS, processing DynamoDB Streams, and integrating with EventBridge, among others. Its capabilities in these areas are highly effective and efficient. However, when it comes to building APIs, my strong recommendation leans towards utilizing a Dockerized solution deployed on ECS. Here, the serverless advantages of Fargate come into play, offering a more robust and scalable infrastructure for API development and deployment. This approach harnesses the best of serverless architecture while mitigating some of the limitations encountered with Lambda for API-related tasks.