Contributed"> 3 Reasons Why Teams Move Away from AWS Lambda - The New Stack
TNS
VOXPOP
Will JavaScript type annotations kill TypeScript?
The creators of Svelte and Turbo 8 both dropped TS recently saying that "it's not worth it".
Yes: If JavaScript gets type annotations then there's no reason for TypeScript to exist.
0%
No: TypeScript remains the best language for structuring large enterprise applications.
0%
TBD: The existing user base and its corpensource owner means that TypeScript isn’t likely to reach EOL without a putting up a fight.
0%
I hope they both die. I mean, if you really need strong types in the browser then you could leverage WASM and use a real programming language.
0%
I don’t know and I don’t care.
0%
Cloud Services / Serverless

3 Reasons Why Teams Move Away from AWS Lambda

Here's why teams move away from AWS Lambda to lower-level computing abstractions and how you can migrate smoothly to functions running on Amazon EKS.
Jul 18th, 2023 10:00am
Feature and inline images courtesy of Pixabay.

When Amazon Web Services first introduced Lambda in November 2014, it touted it as a compute service “that runs your code in response to events and automatically manages the compute resources for you, making it easy to build applications that respond quickly to new information.”

It was a big deal because it raised the level of abstraction as high as you could imagine in terms of operationalizing code: write a function and Lambda takes care of the rest.

The consumption-based pricing model was also revolutionary in that you only paid for the amount of compute actually used, and the functions scaled down to zero when unused.

However, the total cost of running functions (once you factor in the compute, networking and other AWS services required to trigger and orchestrate them) can be higher than the cost of compute on a simpler abstraction like AWS EC2, at least if you count only active compute cycles. That premium is the price you pay for the amount of value bundled into the higher-level abstraction. It also means less flexibility in what your function’s code can actually do and in the programming languages available for use.

To explain this to my mother, I told her that it is like the difference between ordering food delivery and cooking for yourself: ordering a meal via a delivery app is very convenient, but there is less choice and it is more expensive. Cooking for yourself gives you full freedom of choice at a lower cost, but it requires more upfront effort to get the meal on the table. Especially if you want to make pad Thai (pictured above), which requires some uncommon ingredients (at least in Europe), like tamarind paste.

Sometimes, the value of takeaway food is worth the higher cost and reduced choice when compared to the effort required to cook the same meal at home.

Let’s dive into the three main reasons why some teams move away from AWS Lambda to lower-level computing abstractions. Read on until the end for tips on how you can migrate smoothly from AWS Lambda to functions running on Amazon EKS.

Reason #1: Cost

It is very easy to get started with a service like AWS Lambda. If you are a small team starting a new project, you want to maximize your chances of getting to market quickly and getting feedback early. Lambda lets you ship fast by turning as much capital expenditure into operational expenditure as possible.

But at some point, the higher costs of serverless functions when compared to lower-level computing abstractions like virtual machines or containers can become a problem, particularly when an application starts receiving a lot of traffic.

This topic came into the spotlight recently when a team working on Amazon Prime Video cut costs by 90% by moving from Lambda functions to a monolith on EC2. The workload did benefit from serverless scaling mechanics, but by moving everything into a monolith the team cut down massively on orchestration and data transfer costs.
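To make the cost trade-off concrete, here is a back-of-the-envelope sketch (unrelated to the Prime Video numbers) comparing a monthly Lambda bill with a couple of always-on instances. The prices, traffic volume and instance choice are illustrative assumptions only, so check the current AWS pricing pages; note too that the triggering and orchestration services (API Gateway, Step Functions, data transfer) are deliberately left out, even though they are often the larger share of the bill.

```python
# Back-of-the-envelope comparison of Lambda vs. EC2 compute cost.
# All prices below are illustrative placeholders, not current AWS quotes.

LAMBDA_PRICE_PER_REQUEST = 0.20 / 1_000_000  # assumed ~$0.20 per million requests
LAMBDA_PRICE_PER_GB_SECOND = 0.0000167       # assumed on-demand price per GB-second
EC2_PRICE_PER_HOUR = 0.0416                  # assumed small instance, e.g. t3.medium


def lambda_monthly_cost(requests: int, avg_duration_s: float, memory_gb: float) -> float:
    """Compute + request charges for one month of invocations."""
    compute = requests * avg_duration_s * memory_gb * LAMBDA_PRICE_PER_GB_SECOND
    return compute + requests * LAMBDA_PRICE_PER_REQUEST


def ec2_monthly_cost(instances: int, hours_per_month: float = 730) -> float:
    """Cost of keeping a fixed number of small instances running all month."""
    return instances * hours_per_month * EC2_PRICE_PER_HOUR


if __name__ == "__main__":
    # 50 million requests per month, 200 ms each at 512 MB.
    print(f"Lambda: ${lambda_monthly_cost(50_000_000, 0.2, 0.5):,.2f}")
    print(f"EC2:    ${ec2_monthly_cost(instances=2):,.2f}")
```

The function bill grows linearly with traffic while the instance bill stays flat, which is why the crossover tends to appear once an application starts receiving a lot of traffic.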

Reason #2: Focusing on a Single Abstraction

There are many types of workloads that you probably don’t want to run on AWS Lambda. For example, ETL data processing or service orchestrations won’t leverage the scalability benefits provided by Lambda and are likely to hit limits imposed by AWS, such as the 15-minute cap on total execution time. When a platform team has to support multiple computing paradigms for their organization’s developers, such as Lambda functions and containers, it adds complexity to their work.

Lambdas and containers each require different solutions to manage various steps of the software development lifecycle. The way you develop, test, deploy, secure and monitor an AWS Lambda function is very different from how you would do the same for a containerized workload running on a container orchestrator like Amazon’s managed Kubernetes service EKS. What we’re hearing from the TriggerMesh community (you can speak to them directly on Slack) is that operations teams will sometimes prefer to unify their operations on a single abstraction like containers, running on Kubernetes, rather than having to solve the same problems in different ways across multiple abstractions. This has a few other benefits:

  • It makes the landscape simpler for developers in the organization: a single paradigm for deploying code means a single paradigm to learn and master.
  • It lets teams capitalize on their Kubernetes expertise and optimize the usage of the resources made available in their clusters.
  • It creates a more cloud-agnostic, portable way to write business logic, with less lock-in to a specific cloud vendor’s services, which brings us to reason #3.

Of course, not all platform teams have the skills or desire to base all their operations on Kubernetes; some will lean toward simpler systems such as ECS or Fargate.

Reason #3: Portability

A portable application is one that can run on different platforms with minimal changes to the application code. The “platform” of a cloud-native application is made up of the compute, storage, networking and other managed services used by the application and provided by the underlying cloud platform. Therefore, the portability of a cloud-native application can be defined along two dimensions:

  • The degree of coupling between the application and the compute engine it is running on. For example, what is the cost of migrating a function from AWS Lambda to Google Cloud Functions?
  • The degree of coupling between the application and the cloud services it uses. For example, if an application subscribes to notifications for new files on an AWS S3 bucket, how easily can it be ported to ingest similar notifications for new files on a Google Cloud Storage bucket? (A short sketch of this coupling follows the list.)
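To make the second dimension tangible, here is a minimal sketch of the same business logic written twice: once against the raw S3 notification payload and once against a neutral envelope in the spirit of CloudEvents. The neutral version assumes some platform layer has already mapped each provider’s notification into a common shape; that shape, and its url field, are hypothetical.

```python
# Coupling sketch: provider-specific event handling vs. a neutral envelope.

def handle_s3_event(event: dict) -> None:
    # Tightly coupled: only understands the AWS S3 notification layout.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        process_new_file(f"s3://{bucket}/{key}")


def handle_generic_event(attributes: dict, data: dict) -> None:
    # Loosely coupled: any source (S3, Google Cloud Storage, ...) that a
    # platform layer maps into this hypothetical envelope can trigger it.
    process_new_file(data["url"])


def process_new_file(url: str) -> None:
    print(f"processing {url}")
```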

Companies are increasingly dealing with multicloud architectures. A recurring reason is that through mergers and acquisitions, companies that may have initially been all-in on one cloud provider find themselves operating software across multiple clouds. Some choose to lean into the multicloud way and maintain a footprint on multiple cloud platforms, while others prefer to migrate all their applications to a single cloud. There is no right answer and each has its pros and cons. But in both cases, portability can bring significant benefits.

If you’re migrating apps from one cloud to another, enabling application portability allows for gradual, less risky migrations: you change a small number of variables at a time rather than doing a big-bang update. And if you’re committing to multicloud, then creating a certain level of portability means that developers can more easily consume resources, data and events from different clouds. Without a portability layer, each developer has to reimplement integration logic for each cloud, which slows down development and increases cognitive load. DevOps teams are trying to offload these responsibilities to the platform so that application developers can focus on what they do best.

What Can Teams Use Instead of AWS Lambda?

The three points discussed in this post raise the question: is there a way to migrate Lambda functions to a more cost-efficient, unified and portable computing platform?

The good news is that there are now many established, open-source alternatives that let you run serverless functions. These often include, to varying degrees, the ability to run function code and trigger those functions with different event sources. Examples of technologies in this space are Knative, OpenFaaS, Apache OpenWhisk and TriggerMesh.

For platform teams with a focus on Kubernetes, a three-part recipe is emerging as a way to migrate away from traditional Lambda functions:

  1. Write Lambdas using AWS’s Custom Lambda Runtimes.
  2. Deploy the functions on Kubernetes with Knative Serving.
  3. Trigger the functions with TriggerMesh.

Because AWS Lambda functions can now be built with containers that use AWS’s Custom Lambda Runtimes, you can actually use those same container images and deploy them anywhere that can run containers.
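As a rough illustration of step 1, here is a minimal Python handler that follows Lambda’s handler convention but has no other Lambda dependency. Built into a container image (for example on one of AWS’s base images that bundle the Runtime Interface Client), the same business logic can also be served by a small HTTP shim when the image runs on a plain container platform. The file layout, port and shim below are assumptions for the sketch, not a prescribed setup.

```python
# handler.py: a Lambda-style handler that can also run outside Lambda.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def handler(event, context=None):
    # The business logic is identical in both environments.
    name = (event or {}).get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}


class _Shim(BaseHTTPRequestHandler):
    # Minimal HTTP wrapper used when the image runs outside Lambda.
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(handler(event)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("", 8080), _Shim).serve_forever()
```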

Knative Serving provides a way to take a containerized service and deploy it to Kubernetes such that it will scale to zero when idle, scale horizontally according to load and become addressable so that other workloads on Kubernetes can route events to it.

Knative Serving can easily be installed on Amazon EKS.

The final piece of the recipe is the triggering mechanism. Although Knative comes with a few triggers out of the box, it doesn’t include triggers for the AWS services that you might have been using to trigger your Lambda functions. TriggerMesh is a popular open-source solution for expanding the range of triggers available to your Knative serverless functions, and it includes AWS services such as SQS and S3 as event sources. And because TriggerMesh runs natively on Kubernetes, alongside your Knative services and other workloads, it can easily pull events into EKS (or other Kubernetes distributions) from external sources so that you can filter, transform and route those events to the services you need to trigger. (Have a look at this guide for an example.)
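Putting steps 2 and 3 together, the target of a TriggerMesh event flow is simply an HTTP workload that Knative Serving can scale. The sketch below assumes events arrive as CloudEvents over HTTP POST (with attributes such as ce-type and ce-source carried as headers in binary mode) and that the container listens on the PORT environment variable that Knative injects; the payload handling is placeholder logic.

```python
# A minimal event consumer, deployable as a Knative Service on EKS.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


class EventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)
        event_type = self.headers.get("ce-type", "unknown")  # CloudEvents binary-mode attribute
        source = self.headers.get("ce-source", "unknown")
        print(f"received {event_type} from {source}: {payload[:200]!r}")
        self.send_response(200)  # a 2xx tells the sender the event was accepted
        self.end_headers()


if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))  # Knative sets PORT for the container
    HTTPServer(("", port), EventHandler).serve_forever()
```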

You might be wondering if Amazon EventBridge could be used to trigger your function on EKS, as it provides similar functionality to TriggerMesh but as a managed solution. But because EventBridge is push-based and typically isn’t running in the same VPC as your EKS cluster, it isn’t easy to push events from EventBridge into EKS to trigger your functions.

Choose the Right Path for Your Organization

As always, the devil is in the details and there is no one-size-fits-all approach to these questions. In this post, we covered three major reasons why some teams are moving away from Lambda; people have raised others related to security and delayed updates to Lambda runtimes. That said, according to industry surveys Lambda adoption is still thriving, and it provides a quick way to get units of business logic running reliably in the cloud.
