Getting the most efficient performance out of Amazon Web Services (AWS) Lambda can make a big difference in your company’s cloud budget. AWS Lambda removes the complexity of managing your own servers and other cloud resources, which is especially useful when you need to scale up quickly or prototype new capabilities and products. 

If you’re beyond the stage of using your garage for your startup—and depending on your level of infrastructure complexity—it’s likely worth your time and money to automate these tasks. 

What is AWS Lambda?

AWS Lambda is a serverless, event-driven compute service that runs your code in response to events and manages the underlying AWS resources for you. This means you don’t have to worry about tasks like provisioning servers, operating system maintenance, scaling, and more. 
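As a rough illustration, here’s what a minimal handler looks like in Python (one of the supported runtimes). The event shape and return value below are placeholders, not tied to any particular trigger:

```python
# A minimal Lambda handler sketch (Python runtime assumed).
import json

def lambda_handler(event, context):
    # Lambda passes the triggering event (an S3 notification, an API Gateway
    # request, and so on) as a dict; there are no servers to provision or manage.
    print(json.dumps(event))
    return {"statusCode": 200, "body": json.dumps({"message": "ok"})}
```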

For this service, you pay AWS for the compute time you use: every time your function is executed, whether by an event notification trigger or a direct invoke call, you incur a charge. That means you’ll want to set up monitoring so you can see where you’re overpaying and avoid it in future Lambda usage. 

AWS provides a basic monitoring solution, but it doesn’t surface the most detailed behavior of your Lambda functions, such as event source information with full context, distributed tracing, and detailed performance data on duration, cold starts, exceptions, and tracebacks. To get this information, you’ll need a continuous monitoring solution. 


How memory affects Lambda performance

A key thing to watch in Lambda functions at runtime is memory usage. AWS lets you configure memory allocation for each function, from 128 MB up to 10,240 MB (10 GB). For small Lambda functions, 128 MB can be sufficient, and Lambda performance scales roughly in proportion to the amount of memory you allocate. 

The issue is that how much memory you allocate also determines roughly how much virtual CPU power, network throughput, and disk performance are available. This can lead to functions performing unpredictably: an under-provisioned function gets less CPU, runs longer, and those longer runs mean more concurrent executions, which can push you toward concurrency limits. 
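If you use the AWS SDK, adjusting a function’s memory allocation is a one-line configuration change. Here’s a hedged sketch using boto3; the function name and memory size are placeholders:

```python
# Sketch: adjust a function's memory allocation with boto3.
import boto3

lambda_client = boto3.client("lambda")

# Memory can be set anywhere from 128 MB to 10,240 MB; CPU, network, and
# disk throughput scale with this value.
lambda_client.update_function_configuration(
    FunctionName="my-example-function",  # hypothetical function name
    MemorySize=512,                      # in MB
)
```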

Benchmarking performance is also complex, given that your overall goal is to reduce actual Lambda function execution time. More CPU will make a function run faster up to a point, but beyond that you hit diminishing returns. And because you are billed in 100 ms increments, there’s no billing difference between a 120 ms invocation and a 150 ms one: both are charged as 200 ms. 
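To make that billing math concrete, here’s a rough Python sketch that rounds a duration up to the next 100 ms increment and multiplies by an illustrative per-GB-second rate; check current AWS pricing for real numbers:

```python
# Rough cost sketch assuming 100 ms billing increments and an illustrative
# price; actual Lambda pricing varies by region and changes over time.
import math

PRICE_PER_GB_SECOND = 0.0000166667  # illustrative on-demand rate, USD

def estimated_cost(duration_ms: float, memory_mb: int) -> float:
    # Round the duration up to the next 100 ms billing increment.
    billed_ms = math.ceil(duration_ms / 100) * 100
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND

# A 120 ms run and a 150 ms run both bill as 200 ms, so their cost is identical.
print(estimated_cost(120, 512) == estimated_cost(150, 512))  # True
```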

Troubleshooting and improving Lambda performance

To save on costs, you’ll want to set up continuous monitoring of AWS Lambda so you can spot abnormalities in your Lambda functions, debug them, and generally keep track of these code executions. As your infrastructure grows in complexity, these serverless functions may become less performant if they depend on each other through chains of events, and the distributed nature of your system can make it difficult to track down where a problem is. 

Some common problems you may run into include: 

Large functions

Running most or all of your business logic in one function is generally unwise, and it also results in larger bills because of the function’s long execution times. If you can, break your code into smaller functions and avoid a “Lambdalith,” or one large function. 
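As a sketch of what that split might look like, the example below assumes a hypothetical order workflow: one small function validates the request and hands off to an SQS queue, and a second function does only the processing. The names and queue URL are placeholders:

```python
# Sketch: splitting one "Lambdalith" into two focused handlers via SQS.
import json
import boto3

sqs = boto3.client("sqs")
ORDERS_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

def validate_order_handler(event, context):
    # First function: validate the request, then hand off to a queue so the
    # heavier work runs in its own, separately billed function.
    order = json.loads(event["body"])
    sqs.send_message(QueueUrl=ORDERS_QUEUE_URL, MessageBody=json.dumps(order))
    return {"statusCode": 202}

def process_order_handler(event, context):
    # Second function: triggered by the SQS queue, does only the processing.
    for record in event["Records"]:
        order = json.loads(record["body"])
        # ... process the order here ...
```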

Cold starts

To keep your functions responding predictably, you’ll also have to invoke them periodically to avoid the “cold start” latency that comes from reinitializing them.
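One common pattern, sketched below, is to have a scheduled rule (for example, an EventBridge schedule) ping the function with a marker field so the handler can return early. The “warmer” key here is a convention you define yourself, not a Lambda built-in:

```python
# Sketch of a "keep warm" handler: scheduled pings return early, real
# requests run the normal path.
import json

def lambda_handler(event, context):
    if event.get("warmer"):
        # Scheduled ping: the execution environment stays warm, skip the real work.
        return {"warmed": True}

    # ... normal request handling ...
    return {"statusCode": 200, "body": json.dumps({"message": "ok"})}
```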

Execution time limits

Lambda functions will time out after 15 minutes. Ideally, your functions will be small, run quickly, and not include long-running workloads; scanning large databases and similar time-consuming tasks aren’t a good fit for AWS Lambda.
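If a function does have to work through a batch, it can check how much time it has left and stop cleanly before the timeout. Here’s a minimal sketch using the Lambda context object’s get_remaining_time_in_millis method; the per-item processing is a placeholder:

```python
# Sketch: guard against the 15-minute timeout while working through a batch.
def lambda_handler(event, context):
    items = event.get("items", [])
    processed = []
    for item in items:
        # get_remaining_time_in_millis() reports how long is left before
        # Lambda terminates this invocation.
        if context.get_remaining_time_in_millis() < 10_000:  # 10-second safety margin
            break
        processed.append(do_work(item))
    return {"processed": len(processed), "remaining": len(items) - len(processed)}

def do_work(item):
    # Placeholder for the real per-item processing.
    return item
```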

Concurrency limits

Lambda creates an instance of your function the first time a service invokes it. If that request is still processing when the function is invoked again, AWS creates another instance to handle the new request. This continues until you hit a maximum limit: by default, each AWS region allows 1,000 instances serving requests at the same time.

To avoid throttling, you can ask AWS to increase the limit, but you should ask at least two weeks in advance. This can be useful if you know an event like Black Friday will increase the use of your application. 
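You can inspect your account-level concurrency limit and reserve concurrency for critical functions with the AWS SDK. Here’s a hedged boto3 sketch; the function name and reserved value are placeholders:

```python
# Sketch: check account-level concurrency and reserve a slice for one function.
import boto3

lambda_client = boto3.client("lambda")

# Account-wide concurrent execution limit for the region (1,000 by default).
settings = lambda_client.get_account_settings()
print(settings["AccountLimit"]["ConcurrentExecutions"])

# Reserve concurrency for a critical function so other functions can't starve
# it (and so it can't exceed this ceiling).
lambda_client.put_function_concurrency(
    FunctionName="my-critical-function",   # hypothetical name
    ReservedConcurrentExecutions=100,
)
```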

AWS does provide a do-it-yourself tool, AWS Lambda Power Tuning, that helps you understand how your function may perform and what it will cost if you adjust its memory allocation. However, it doesn’t provide the full picture you need to get to the root of increased costs, because the information is simply not detailed enough. 

To truly improve your functions and avoid the aforementioned common issues, you’ll need a third-party continuous monitoring solution. It should do several things that you won’t get from the tuner, including:

  • Let you observe every AWS Lambda invocation
  • Let you set custom alerts based on any invocation criteria
  • Provide error analysis
  • Provide a single view of all your Lambda functions across every AWS region at a glance

After all, you can’t find a solution to a problem if you don’t understand it. And you can’t understand a problem if you don’t have enough context. 

How New Relic observability can boost your success with AWS

With Lambda monitoring from New Relic, you’ll get a streamlined experience that’s cost-efficient. We offer full Lambda monitoring and an AWS Lambda integration that gets you up and running in minutes. 

 Our Lambda monitoring shows you:

  • Every invocation of your Lambda functions: This includes detailed information on duration, tracebacks, cold starts, and more.
  • Information on events: This information provides the context and attributes you’ll need to find out what triggered an AWS Lambda invocation, including API Gateway, ALB, SNS, SQS, DynamoDB, and more. 
  • Distributed tracing: These traces illustrate the path of requests that led to your Lambda. 
  • Logs in context: These provide the full invocation and function-level logs right alongside your metrics, attributes, and trace data.
  • Inventoried tags and metadata: Information from your AWS entities that you can use to drill down to specific metadata attributes on a function configuration or an invocation itself.