Serverless Is Not Just Lambda
Let’s take a look at the serverless AWS offerings that will save us a lot of money and effort.
What is Serverless?
Serverless means building and running applications without thinking about servers.
There are servers, of course; they are just managed by someone else. Developers are not concerned with virtual or physical server machines, or with other infrastructure aspects such as host-level networking.
Benefits
Cost savings, together with implicit scaling and high-availability capabilities, are the greatest advantages of serverless architectures:
- No operational overhead. Infrastructure is someone else’s concern.
- Pay as you go. No additional cost for unused resources.
- Scalability. Resources scale automatically based on the incoming load.
- High availability. Serverless services are highly available by default.
Moreover, serverless computing is greener: no resources are kept running when they are not needed, which avoids wasting energy and helps save our planet.
Drawbacks
Serverless comes with some disadvantages to take into account. Due to latency issues, vendor lock-in, and potential development difficulties, Serverless doesn’t fit every use-case:
- Latency. Not well suited to latency-sensitive synchronous communication because of cold starts.
- Vendor lock-in. There are usually no standard APIs. This can be partly mitigated by third-party frameworks, which in turn introduce a product lock-in of their own.
- Multitenancy. Multitenancy must be solved at the application level, which increases development costs.
- Testing. Serverless systems are hard to test locally, since the surrounding managed services cannot easily be reproduced on a developer machine (see the sketch after this list).
- Debugging. Some common practices like remote access are not possible.
- Complexity. Large numbers of small services make communication in serverless systems complex and difficult to reason about.
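To illustrate the testing point: the handler itself is plain code and can be unit-tested locally with a synthetic event, but the managed services around it (queues, gateways, IAM) cannot easily be reproduced. A minimal Python sketch, using a hypothetical handler written only for this illustration:

```python
# A hypothetical Lambda handler used only for illustration.
def handler(event, context):
    # Read an order id from the incoming event payload.
    order_id = event["detail"]["orderId"]
    return {"statusCode": 200, "body": f"processed {order_id}"}


# Local unit test (e.g. run with pytest); no AWS services are involved.
def test_handler_processes_order():
    # A synthetic event shaped like the payload the handler expects.
    fake_event = {"detail": {"orderId": "42"}}
    result = handler(fake_event, context=None)
    assert result["statusCode"] == 200
    assert "42" in result["body"]
```

What stays hard to test locally is everything the handler talks to: permissions, event sources, and service limits only show up in a deployed environment.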
Use-Cases
Serverless architecture is a great fit for asynchronous, stateless applications that can be started instantaneously as well as for use cases that see infrequent, unpredictable surges in demand.
Typical use-cases for Serverless are:
- Event-driven systems.
- Batch processing.
- Data stream processing.
- Scheduled tasks.
- Chat bots.
Serverless vs FaaS (Function as a Service)
While FaaS is serverless, Serverless does not mean only FaaS. Consider the following AWS serverless services, of which only AWS Lambda is a representative of FaaS:
- Computation
  - Functions
    - AWS Lambda
  - Engines
    - AWS Fargate
- Databases
  - Key-Value Store
    - Amazon DynamoDB
  - Relational
    - Amazon Aurora Serverless
    - Amazon RDS Proxy
- Messaging
  - Amazon EventBridge
  - Amazon SNS
  - Amazon SQS
- Object Storages
  - Amazon S3
- Identity Management
  - Amazon Cognito
- Routing
  - Amazon API Gateway
In some literature, serverless services other than FaaS are called BaaS (Backend as a Service). Taking BaaS into account, we can simply say that Serverless equals FaaS + BaaS.
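To make the FaaS + BaaS split concrete, here is a minimal sketch of a Lambda function (FaaS) that stores an item in DynamoDB and publishes a notification to SNS (both BaaS) via boto3. The table name, topic ARN, and event shape are illustrative assumptions, not a reference implementation:

```python
import json
import os

import boto3

# BaaS building blocks: a key-value store and a pub/sub topic.
dynamodb = boto3.resource("dynamodb")
sns = boto3.client("sns")

# Hypothetical resources, injected via environment variables.
TABLE_NAME = os.environ.get("ORDERS_TABLE", "orders")
TOPIC_ARN = os.environ.get("ORDERS_TOPIC_ARN", "")


def handler(event, context):
    """FaaS part: stateless glue code between the managed services."""
    order = json.loads(event["body"])  # assumes an API Gateway proxy event

    # Persist the order in DynamoDB (BaaS).
    dynamodb.Table(TABLE_NAME).put_item(Item=order)

    # Notify downstream consumers via SNS (BaaS).
    sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(order))

    return {"statusCode": 201, "body": json.dumps({"id": order["id"]})}
```

The function itself stays stateless; all state and communication live in the backend services.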
Serverless vs PaaS (Platform as a Service)
Serverless is similar to PaaS; however, it goes beyond the traditional usage of containers by removing the concept of long-lived server components.
With PaaS (such as Cloud Foundry), developers retain much more low-level control over operating the application and still have to build and scale the platform. Serverless is designed to bring entire applications up and down for individual requests automatically.
Another difference lies in the pricing models. While PaaS costs are tied to reserved compute, network, and storage resources, you never pay for serverless resources that sit idle.
Serverless vs SaaS (Software as a Service)
Simply put, SaaS is about using applications while Serverless is about building them.
SaaS usually offers a complete business solution to use (such as Salesforce CRM, Microsoft Dynamics 365, Dropbox, etc.) with a different pricing model (user-based subscriptions).
Serverless offerings, however, consist of building blocks for application development such as backend services (storages, databases, etc.) and computation models (functions and container engines).
AWS Lambda
Lambda is a serverless compute service for running code. The code is organized into independent units called functions. Each function has well-defined inputs and outputs and is written in one of the languages that Lambda supports. Alternatively, a custom execution container image can be assigned to the function.
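From the caller’s point of view, that contract is simply a payload in and a payload out. A minimal sketch of invoking a function synchronously with boto3; the function name and payload are assumptions made for illustration:

```python
import json

import boto3

lambda_client = boto3.client("lambda")

# Invoke a function synchronously with a JSON payload (the function's input).
# "order-processor" is a hypothetical function name used only for illustration.
response = lambda_client.invoke(
    FunctionName="order-processor",
    InvocationType="RequestResponse",  # wait for the result
    Payload=json.dumps({"orderId": "42"}),
)

# The function's output comes back in the response payload.
result = json.loads(response["Payload"].read())
print(result)
```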
Lambda functions run on highly available infrastructure that provides server and operating system maintenance, capacity provisioning and automatic scaling, and code monitoring and logging out of the box.
You pay as you go, based on the compute time consumed and the amount of memory allocated to the function.
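In other words, the bill is driven by the number of invocations and by GB-seconds, i.e. execution time multiplied by the allocated memory. A back-of-the-envelope sketch; the rates below are placeholders rather than current AWS prices:

```python
# Rough Lambda cost estimate. The rates are placeholders for illustration only;
# look up the current per-region prices before trusting any numbers.
PRICE_PER_REQUEST = 0.0000002     # placeholder, USD per invocation
PRICE_PER_GB_SECOND = 0.0000167   # placeholder, USD per GB-second


def estimate_monthly_cost(invocations, avg_duration_ms, memory_mb):
    # Billed compute is execution time multiplied by allocated memory (GB-seconds).
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND


# Example: one million invocations, 200 ms average duration, 512 MB of memory.
print(f"~${estimate_monthly_cost(1_000_000, 200, 512):.2f} per month")
```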
AWS Fargate
AWS Fargate is a serverless compute engine for containers that works with both EKS and ECS.
- No operational overhead of scaling, patching, securing, and managing servers.
- No over-provisioning or paying for additional servers; you only pay for the resources that you use.
- Workload isolation and improved security.
- Out-of-the-box observability.
Fargate is a launch-type alternative to EC2, where provisioning and management of the infrastructure resources (compute and memory) are fully handled by AWS.
Fargate supports all of the common container use cases such as microservices architecture applications, batch processing, machine learning, etc.
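For example, a one-off batch container can be started on Fargate through ECS with nothing more than a task definition and a network configuration; there is no instance to choose or scale. A minimal boto3 sketch, where the cluster name, task definition, and subnet/security group IDs are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# Run a one-off task on Fargate; AWS provisions the compute for it.
response = ecs.run_task(
    cluster="batch-cluster",            # hypothetical cluster name
    taskDefinition="nightly-report:1",  # hypothetical task definition
    launchType="FARGATE",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],      # placeholder
            "securityGroups": ["sg-0123456789abcdef0"],   # placeholder
            "assignPublicIp": "DISABLED",
        }
    },
)
print(response["tasks"][0]["lastStatus"])
```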
With Fargate, applications are decoupled from the underlying hosts. Because the servers are abstracted away, direct interactions such as logging in or host customization are no longer possible.
- Decoupled networking.
- Decoupled permissions.
- Decoupled monitoring.
Fargate with EKS
Alongside self-managed EC2 nodes and managed node groups, AWS Fargate is the third compute option for running pods on EKS. A Fargate profile is the interface for configuring Fargate on an EKS cluster.
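In essence, a Fargate profile maps pod selectors (a namespace plus optional labels) onto Fargate. A minimal boto3 sketch of creating one; the cluster name, pod execution role ARN, and subnet ID are placeholders:

```python
import boto3

eks = boto3.client("eks")

# Pods matching the selectors below are scheduled onto Fargate.
eks.create_fargate_profile(
    clusterName="demo-cluster",                # hypothetical cluster
    fargateProfileName="serverless-workloads",
    podExecutionRoleArn="arn:aws:iam::123456789012:role/eks-fargate-pod-role",  # placeholder
    subnets=["subnet-0123456789abcdef0"],      # must be private subnets
    selectors=[
        {"namespace": "orders", "labels": {"compute": "fargate"}},
    ],
)
```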
You don’t see any virtual machines in your account when running EKS pods on Fargate — it’s completely serverless.
The Fargate scheduler is installed on EKS as an add-on extension to the Kubernetes scheduler and takes care of scheduling and running pods on Fargate.
Cluster scaling is fully managed by AWS; you scale only your application.
Load balancing can easily be set up with the AWS Load Balancer Controller.
Limitations of running EKS on Fargate
- No privileged pods.
- No support for DaemonSets.
- No GPUs.
- No Amazon EBS support (only Amazon EFS).
- Run only in private subnets.
Amazon Aurora Serverless
Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora, available for both the MySQL-compatible and the PostgreSQL-compatible editions.
Aurora Serverless automatically scales up and down, starts up, and shuts down based on your application’s demand.
- Scales instantly, from hundreds to hundreds of thousands of transactions, within a second.
- Adjusts capacity in fine-grained increments based on the application’s needs.
- You pay only for the capacity you use on a per-second basis.
- Supports the full breadth of Aurora features (highly durable, fault-tolerant, self-healing, continuous backup, etc.).
- Highly available by default.
Aurora Serverless fits the typical use-cases of serverless databases:
- Unpredictable and variable workloads.
- Enterprise database fleet management.
- Scaled-out databases split across multiple servers.
For instance, if you run a test environment where testing happens a few times a day and nothing is going on during the night, Aurora Serverless will definitely save you a couple of bucks...
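In that scenario, the Aurora Serverless v1-style scaling configuration can even pause the cluster entirely while it is idle. A minimal boto3 sketch; the cluster identifier, credentials, and capacity bounds are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora Serverless (v1-style) cluster that scales with load and
# pauses when idle, so a quiet test environment costs nothing at night.
rds.create_db_cluster(
    DBClusterIdentifier="test-env-db",      # hypothetical identifier
    Engine="aurora-mysql",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",  # placeholder; keep real credentials in Secrets Manager
    ScalingConfiguration={
        "MinCapacity": 1,                   # Aurora capacity units (ACUs)
        "MaxCapacity": 8,
        "AutoPause": True,                  # pause when there is no activity
        "SecondsUntilAutoPause": 1800,      # after 30 idle minutes
    },
)
```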