Contrary to popular opinion, the term “serverless” is not entirely synonymous with AWS Lambda. In fact, the rise of Lambda seems to have inspired widespread adoption—and possible misuse—of the term “serverless.” Serverless is a general term that includes all cloud services that don’t require an administrator to spin up a server to run them (think Amazon DynamoDB or Amazon S3). Amazon Web Services has been promoting serverless computing for some time, and AWS Lambda is their functions as a service (FaaS) platform.

The key feature of AWS Lambda is that you can upload and run application code with essentially no administrative oversight. AWS takes care of scaling your app and delivers high availability, and your code runs only when triggered. These lightweight, hands-off serverless functions are changing developer workflows.

I’ve been experimenting with AWS Lambda since it launched, and in my spare time, I’ve built some fun projects using functions and New Relic's AWS Lambda monitoring integration. I’ve found that Lambda generates a lot to think about, from behavior to cost considerations. To that end, I’ve put together this collection of tips and practical advice for getting started with AWS Lambda.

(Note: Other major cloud providers like Microsoft Azure and Google Cloud Platform have launched their own serverless, FaaS platforms, but this post focuses on AWS Lambda.)

Don’t reinvent the wheel if you don’t have to

When you create a new AWS Lambda function, you’re given the option of starting a new function from scratch, choosing a preconfigured template (or “blueprint”) as a starting point, or using an existing function that another user has uploaded to the AWS Serverless Application Repository. If you’re looking to create a common service or application, there’s a good chance you’ll find an implementation to import or borrow from. Both blueprints and applications in the Serverless Application Repository adhere to the AWS Serverless Application Model (SAM) template format. SAM is an extension of the CloudFormation template—used to define an AWS cloud stack—that specifies how your serverless application connects to other AWS resources. In some cases, shipping an app that uses AWS Lambda functions may be as simple as deploying the template and changing a few environment variables and parameters.
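To make this concrete, here’s a minimal sketch of what a SAM template might look like. All the names (HelloFunction, app.handler, the orders table) are hypothetical: a single function fronted by an API Gateway endpoint, with an environment variable of the kind you’d change per deployment.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31   # marks this CloudFormation template as a SAM template
Resources:
  HelloFunction:                        # hypothetical function name
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler              # handler() in app.py
      Runtime: python3.6
      CodeUri: ./src                    # where the function code lives
      MemorySize: 128
      Timeout: 30
      Environment:
        Variables:
          TABLE_NAME: orders            # a parameter you might change per deployment
      Events:
        HelloApi:
          Type: Api                     # wire the function to an API Gateway GET endpoint
          Properties:
            Path: /hello
            Method: get
```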

Understand there is still an underlying infrastructure

Spoiler alert: Underneath it all, your AWS Lambda functions run in containers on AWS’s backend infrastructure.

One of the main advantages of using serverless functions is that you’re not supposed to worry about managing the backend. However, if your AWS Lambda function uses a lot of the container’s memory or CPU, or if it uses the host’s underlying file system (for example, to write temporary files), it’s crucial that you configure the function’s resource settings accordingly.
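As an illustration, here’s a minimal (hypothetical) Python handler that works with scratch files. The comments call out the two container-level constraints you’d need to budget for: /tmp is the only writable path, and containers get reused between invocations.

```python
import os
import tempfile

def handler(event, context):
    # /tmp is the only writable path inside the Lambda container, and its
    # capacity is limited (512 MB as of this writing), so budget accordingly.
    fd, path = tempfile.mkstemp(dir="/tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(b"scratch data")  # stand-in for real temporary output
        return {"tmp_file_bytes": os.path.getsize(path)}
    finally:
        # Containers are reused across invocations, so leftover files in /tmp
        # silently eat your disk quota. Always clean up.
        os.remove(path)
```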

It’s also interesting to note that FaaS providers have started rolling out service level agreements (SLAs). AWS has recently released its own, guaranteeing 99.95% availability for each AWS region. This is a good sign of Amazon’s commitment to this service, as well as a likely indicator that more and more enterprises are adopting AWS Lambda functions in their development practices and workflows.

Don’t leave your function out in the cold

When you trigger an AWS Lambda function for the first time, it needs some time to initialize: to load your code and any dependencies into the assigned container, and then to start the execution of your code. This initial run is known as a “cold start.” Subsequent runs that arrive while that container is still alive reuse it, skip the cold start, and are therefore faster.

If you leave the function (and therefore the container) inactive, AWS will eventually shut it down, and the function will have a cold start the next time you run it. There’s no definitive measure of how long AWS allows a function to idle—one engineer did enough research to hypothesize that 60% of cold starts happen after 45 minutes of inactivity—but it does seem dependent on factors like function size and the needs of other functions running in the shared cloud.
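You can observe this lifecycle for yourself. Here’s a minimal sketch: code at module scope runs once per container, so a module-level flag distinguishes a cold start from a warm invocation.

```python
import time

# Module-scope code runs once per container, during the cold start.
COLD_START = True
INIT_TIME = time.time()

def handler(event, context):
    global COLD_START
    was_cold, COLD_START = COLD_START, False
    return {
        "cold_start": was_cold,  # True only on the container's first invocation
        "container_age_seconds": round(time.time() - INIT_TIME, 1),
    }
```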

There are ways to minimize the effects of cold starts. First, think about building your AWS Lambda functions as small as possible and minimizing bundled dependencies. According to our 2017 State of Serverless Report, “The total code size of AWS Lambda functions—how much disk space the code and the dependencies of a function consume—tends to be small by modern software standards. Nearly half of the monitored functions could almost fit on a 3½-inch floppy disk. Java functions were notable outliers. Their average code size was more than 20 MB, indicating significantly larger function deployment sizes than Node.js or Python functions.” In other words, loading a 5 MB Python function into a new container on a cold start requires significantly less time than loading a 20+ MB Java function.

Another way to mitigate cold starts is to use a “keep warm” solution to keep your AWS Lambda function’s container safe from termination. There are plenty of tools designed for this purpose, such as the Serverless WarmUP Plugin, which you can use to schedule a “warm up” event that runs your functions every few minutes, at minimal cost.
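Whatever tool you choose, the function itself should recognize warm-up pings and return immediately so they stay cheap. A sketch, assuming the scheduler sends a payload like {"warmup": true} (the exact payload shape depends on your tool):

```python
def handler(event, context):
    # A scheduled CloudWatch Events rule can invoke the function every few
    # minutes with a constant payload such as {"warmup": true}. This payload
    # shape is an assumption; match it to whatever your warm-up tool sends.
    if isinstance(event, dict) and event.get("warmup"):
        return {"warmed": True}  # short-circuit: the goal was only to keep the container warm

    # ... normal request handling goes here ...
    return {"status": "ok"}
```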

Of course, if your AWS Lambda function receives concurrent requests, AWS will need to spin up concurrent containers so your function can serve them as quickly as possible. In that case, cold starts may be unavoidable: as request volume keeps climbing, AWS must keep starting new containers.

(Check out our post "Understanding AWS Lambda Performance—How Much Do Cold Starts Really Matter?" for more information on cold start optimizations.)

Eliminate recursion and embrace concurrency

The FaaS model has the potential to radically change how we deploy software applications, but it also requires us to change how we think about writing software to adapt to this new model. Specifically, we need to change how we think about recursion and concurrency.

It’s important to understand that AWS Lambda uses concurrency to scale your functions. In traditional applications, engineers might have to plug functions into an async framework to get requests running in parallel. With AWS Lambda, concurrency is handled by AWS: if there isn’t a “warm” container available to fulfill a request triggered by event sources like Amazon API Gateway or Amazon S3, AWS Lambda will spin up a new container.

Essentially, AWS hides concurrency behind a layer of abstraction and does the work for you, so you don’t have to worry about it.

But automatic concurrency means you have to be careful with processes like recursion. Some of the most elegantly engineered functions employ a bit of recursion or include carefully crafted recursive implementations of an algorithm. In AWS Lambda functions, however, you don’t want a function to invoke itself, directly or indirectly. If it does, AWS will spin up more and more concurrent instances of the function, and these instances, coupled with cold starts, will cost you compute time and money (inseparable concepts in this paradigm).
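Here’s the classic version of that bug, sketched with hypothetical names: an S3-triggered function that writes its output back to the very bucket that triggers it. The guard (and, better still, a prefix filter on the trigger itself) is what keeps the loop from running away.

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Hypothetical S3-triggered function. Writing output back to the same
    # bucket that triggers it means every write fires another invocation:
    # indirect recursion that scales (and bills) without bound.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Guard: skip objects this function wrote itself. Better still,
        # scope the S3 trigger to a key prefix so they never invoke it at all.
        if key.startswith("processed/"):
            continue

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        s3.put_object(Bucket=bucket, Key="processed/" + key, Body=body.upper())
```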

Know your limits

Function limits

For each AWS Lambda function request, AWS sets limits on memory allocation, disk capacity, and execution time. As of early November 2018, memory allocation starts at 128 MB and is capped at 3008 MB; disk capacity (/tmp directory storage) is limited to 512 MB; and the maximum duration of an AWS Lambda function is 900 seconds (15 minutes). If your function requires more memory or runs longer than that, consider refactoring the function to make it more efficient, or break it into smaller AWS Lambda functions. If you’re hitting disk capacity limits, use Amazon S3 for storage.
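You can adjust these settings per function. A hedged boto3 sketch (the function name is hypothetical, and the values must stay inside the limits above):

```python
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="my-report-generator",  # hypothetical function name
    MemorySize=1024,  # MB; must fall between 128 and 3008, and also scales your CPU share
    Timeout=300,      # seconds; capped at 900
)
```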

Concurrency limits

As mentioned, AWS Lambda uses concurrency to scale your functions. AWS sets the default limit at 1,000 concurrent executions per region; expect throttling if you exceed it. If a single function risks consuming most of that pool, consider setting a concurrent execution limit for it at the function level.
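Reserving concurrency for a function is a one-liner with boto3. A sketch with a hypothetical function name; note that the reservation is also a ceiling:

```python
import boto3

lambda_client = boto3.client("lambda")

# Carve 100 of the region's 1,000 default concurrent executions out for this
# function. The reservation doubles as a cap: the function can never burst
# past it and starve everything else in the region.
lambda_client.put_function_concurrency(
    FunctionName="my-burst-prone-function",  # hypothetical function name
    ReservedConcurrentExecutions=100,
)
```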

Deployment limits

If you work with a language that favors large deployment packages, you may hit deployment package limits. Currently, AWS sets deployment limits at 50 MB for zipped packages and 256 MB for unzipped packages. You’ll want to be vigilant about removing unneeded libraries and otherwise keeping your functions as small as possible. If you have a group of specialized AWS Lambda functions (in other words, functions that perform only one task), consider logically combining them into one function to avoid having to deploy the same shared library across the AWS Lambda environment.
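A quick local check can catch an oversized package before AWS rejects it. A sketch, assuming your build step produces a function.zip (a hypothetical file name):

```python
import os
import zipfile

MAX_ZIPPED_MB = 50     # AWS limit for a zipped deployment package
MAX_UNZIPPED_MB = 256  # AWS limit for the unpacked code

def check_package(path="function.zip"):  # hypothetical package name
    zipped_mb = os.path.getsize(path) / (1024 * 1024)
    with zipfile.ZipFile(path) as z:
        unzipped_mb = sum(i.file_size for i in z.infolist()) / (1024 * 1024)
    print(f"zipped:   {zipped_mb:6.1f} MB (limit {MAX_ZIPPED_MB} MB)")
    print(f"unzipped: {unzipped_mb:6.1f} MB (limit {MAX_UNZIPPED_MB} MB)")
    return zipped_mb <= MAX_ZIPPED_MB and unzipped_mb <= MAX_UNZIPPED_MB

if __name__ == "__main__":
    check_package()
```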

Monitor your limits with the New Relic AWS Lambda integration

Use New Relic's AWS Lambda monitoring integration for New Relic Infrastructure to report data such as invocation counts, error counts, function timers, concurrency, and other metrics and inventory data. You can view your AWS Lambda data in pre-built dashboards and also create custom queries and charts in New Relic Insights.

Take advantage of complementary services

Bring your team together with AWS Cloud9

AWS Cloud9 is a browser-based integrated development environment (IDE) you can use from the AWS console. By bundling the necessary plug-ins, libraries, and SDKs, AWS has made it easy to engineer and deploy Lambda functions from Cloud9. You can run your full Lambda development environment on one EC2 instance and share real-time access with your team.

Perform local development and iteration with SAM CLI

If you want to integrate AWS Lambda function development into your local workflow, try the open source AWS SAM CLI. The AWS SAM CLI lets you use the serverless application model (SAM) to locally develop, test, and iterate on your functions before you deploy them into production.
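The day-to-day loop looks something like this (HelloFunction and event.json are hypothetical stand-ins for your own function name and a sample event file):

```
$ sam local invoke HelloFunction --event event.json  # run one function with a sample event
$ sam local start-api                                # serve your functions behind a local API Gateway
```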

Take advantage of open source, including the Serverless Framework

The Serverless Framework is an open source, provider-agnostic CLI that allows you to develop and test functions locally and deploy them when you’re ready. It also boasts a great developer community that has come up with an extensive list of plugins for building out your functions.

Turn your AWS Lambda functions into a state machine with AWS Step Functions

Eventually you may have several AWS Lambda functions—or several AWS Lambda functions and services running in containers or on Amazon Elastic Compute Cloud (EC2) instances—performing different tasks in your application. It can be challenging to coordinate, debug, and visualize what’s going on in your backend. This is where AWS Step Functions helps.

Think of Step Functions as workflows in which you define your application as a series of steps that must be executed in a particular order. The result resembles a state machine that enforces a programmatic flow within your serverless application.
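Under the hood, a Step Functions workflow is defined in the Amazon States Language (JSON). Here’s a hedged sketch that wires two hypothetical Lambda functions into a sequential state machine via boto3; all the ARNs are placeholders.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Two sequential steps: validate an order, then charge for it.
# Every ARN below is a hypothetical placeholder.
definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
            "Next": "ChargeCard",
        },
        "ChargeCard": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge-card",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="order-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/order-pipeline-role",
)
```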

You (probably) don’t need to use serverless functions for everything

As cutting-edge technology paradigms make their way into the mainstream, eager users will naturally want to apply the new tech to solve old problems. With serverless technologies, we should strive to avoid false dichotomies: It’s not about serverless vs. monoliths, or serverless vs. containers.

Instead, let’s work toward definitions of how serverless functions and services, in combination with everything else, fit into our modern architectures. In fact, some cloud users already run serverless in conjunction with traditional servers and containers in hybrid cloud environments. After all, some applications are a perfect fit for serverless frameworks, and others not so much.

If you are considering going full serverless, we recommend that you do your research and calculate whether a serverless architecture would be less expensive or more viable for your needs. You can perform your own comparisons, use an AWS calculator, or have fun trying out third-party estimators.
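For a rough sense of the arithmetic, Lambda bills per request and per GB-second of compute. A back-of-the-envelope sketch (rates are the published US prices as of late 2018 and ignore the free tier; check AWS for current numbers):

```python
PRICE_PER_MILLION_REQUESTS = 0.20   # USD, as of late 2018
PRICE_PER_GB_SECOND = 0.0000166667  # USD, as of late 2018

def monthly_cost(invocations, avg_duration_ms, memory_mb):
    # GB-seconds = invocations x duration (in seconds) x memory (in GB)
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * (memory_mb / 1024.0)
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    return request_cost + gb_seconds * PRICE_PER_GB_SECOND

# e.g., 10 million requests/month at 200 ms average on a 512 MB function:
print(f"${monthly_cost(10_000_000, 200, 512):,.2f} per month")  # ~$18.67
```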

The debate over whether or not serverless functions are ready for the big time will continue for a while, so be sure to check in with both sides, as we did in "The Great Serverless Debate" podcast.