If you are not already familiar with AWS Lambda, it’s a compute service that runs your code in response to events and automatically manages the underlying compute resources for you. This means that you never have to provision, manage, or maintain any servers. That’s why AWS Lambda and similar services are often referred to as “serverless.”
This does pose a bit of a conundrum: how do you use New Relic to monitor your servers when you don’t have any!? Well, you can do it in two ways:
- Serverless monitoring: This allows you to see inside your AWS Lambda function. Monitor every invocation, including performance data like detailed duration, cold starts, exceptions, and tracebacks.
- Synthetics: If the data generated by people on a website is organic traffic, then data generated by robots is synthetic. Using New Relic One’s synthetic monitors, you can script tests to monitor how your AWS Lambda function responds to external events.
Before you begin, you need to create a Lambda function to test.
If you would like to skip directly to trying the demo, the source is available on GitHub. To run the application locally, ensure you have Python 3.9.0 and an active virtualenv before executing the following commands in your terminal:
pip install -r requirements.txt
uvicorn main:app
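The exact requirements.txt ships with the demo repository, but for an application like this it would contain roughly the following (package names only; versions are left unpinned here for illustration):

fastapi
mangum
uvicorn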
After you have configured the serverless CLI and completed your serverless.yaml file, deploy the application to AWS Lambda with:
npm install
sls deploy --stage staging
If you haven’t set up a serverless.yaml file, I’ll show you how later in this blog.
Creating an AWS Lambda function with Python and FastAPI
You'll create a FastAPI application that will respond to an HTTP GET request with a random universally unique identifier (UUID) contained within a JSON string.
import uuid

from fastapi import FastAPI
from mangum import Mangum

app = FastAPI(title="InsideOutDemoApp")

@app.get("/uuid")
def index():
    return {"uuid": uuid.uuid4()}

handler = Mangum(app)
In the code above, you create a FastAPI application called InsideOutDemoApp, which has a single endpoint, /uuid. This app is then wrapped with Mangum, a Python package that allows you to use an Asynchronous Server Gateway Interface (ASGI) application with AWS Lambda and API Gateway.
If you were to run this application locally and visit it in your browser, you would see a JSON string, and each time you refresh, you get a new freshly generated UUID.
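For example, running the app locally with uvicorn and requesting the endpoint from another terminal might look like this (the UUID shown is purely illustrative):

uvicorn main:app
curl http://127.0.0.1:8000/uuid
{"uuid":"8b9f4f0e-5c2a-4f1d-9a3b-6c7d2e1f0a9b"}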
Deploying to AWS Lambda
Now you’re ready to deploy your code to AWS Lambda. There are many different methods for deploying Python code to AWS Lambda. In this example, you will use the serverless.com CLI tool. It is a JavaScript application and can be installed via npm. Run the following in the command line:
npm install -g serverless
You will also need to configure serverless with your AWS credentials, by running the following in the command line:
serverless config credentials --provider aws --key <YOUR_KEY> --secret <YOUR_SECRET>
Now that serverless is installed and can access your AWS account, you need to configure your AWS Lambda function in a serverless.yaml file:
service: inside-out-demo-app

package:
  individually: true

provider:
  name: aws
  runtime: python3.8
  region: eu-west-1
  stage: ${opt:stage, "dev"}

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: true
    layer:
      name: inside-out-app-demo-layer
      description: Inside Out Demo App
      compatibleRuntimes:
        - python3.8

functions:
  app:
    package:
      include:
        - "main.py"
      exclude:
        - "requirements.txt"
        - "package.json"
        - "package-lock.json"
        - ".serverless/**"
        - "__pycache__/**"
        - "node_modules/**"
    handler: main.handler
    environment:
      STAGE: ${self:provider.stage}
    layers:
      - { Ref: PythonRequirementsLambdaLayer }
    events:
      - http:
          method: any
          path: /uuid
You can see in the YAML file above that you’re able to specify everything you need: what AWS region your code runs in, which Python runtime to use, which events to support, and so on. Check the serverless documentation for more information on all the available options.
You might have also noticed that your deployment requires the serverless-python-requirements plugin. You can install that from npm too:
npm install serverless-python-requirements
And now you can run your deploy.
sls deploy --stage staging
The URL of your AWS Lambda function should have been printed to your terminal. If you visit it in your browser, it should look exactly the same as the local version: a JSON string with a UUID that changes on each page load.
Your code is now running on AWS Lambda, but it is not yet instrumented.
Monitoring AWS Lambda with Serverless monitoring
To help streamline the process, New Relic has a Serverless framework plugin and a Setup AWS Lambda monitoring Nerdlet.
You can access the Nerdlet from New Relic One under Add more data > Cloud and platform technologies > Lambda. Follow the prompts on screen to enable Serverless monitoring, and make sure that you answer the prompts with the following responses:
- Yes, you are using the Serverless framework.
- Yes, you have a Node or Python Lambda Function to instrument.
- Yes, you wish to deliver your function’s telemetry via the Lambda Extension.
After you have completed the Nerdlet, you should copy the new values into your serverless.yaml. The completed file should look like this:
service: inside-out-demo-app

package:
  individually: true

provider:
  name: aws
  runtime: python3.8
  region: eu-west-1
  stage: ${opt:stage, "dev"}

plugins:
  - serverless-python-requirements
  - serverless-newrelic-lambda-layers

custom:
  pythonRequirements:
    dockerizePip: true
    layer:
      name: inside-out-app-demo-layer
      description: Inside Out Demo App
      compatibleRuntimes:
        - python3.8
  newRelic:
    accountId: <NR_ACCOUNT_ID>
    apiKey: <NR_API_KEY>
    enableExtension: true
    enableIntegration: true
    logEnabled: true

functions:
  app:
    package:
      include:
        - "main.py"
      exclude:
        - "requirements.txt"
        - "package.json"
        - "package-lock.json"
        - ".serverless/**"
        - "__pycache__/**"
        - "node_modules/**"
    handler: main.handler
    environment:
      STAGE: ${self:provider.stage}
    layers:
      - { Ref: PythonRequirementsLambdaLayer }
    events:
      - http:
          method: any
          path: /uuid
Install the New Relic Serverless framework plugin:
npm install serverless-newrelic-lambda-layers
Then you can deploy your code again:
sls deploy --stage staging
It’s worth noting that each time you deploy, the URL of your function will change.
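If you lose track of the current URL, the Serverless Framework can print your service’s endpoints for you:

sls info --stage staging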
Viewing your AWS Lambda function performance and health in New Relic
After your code has finished deploying, if you generate some traffic on the URL, you’ll begin to see information about your function appearing in New Relic.
There’s a wealth of information available about each invocation, and it’s all queryable via NRQL, so you can use it in whatever way you need, including creating alerts.
Here, I’ve created an alert that triggers when the number of invocations taking longer than two milliseconds to run rises more than three standard deviations above the baseline, because I want my application to be highly performant.
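As a rough sketch of the kind of NRQL you might start from (AwsLambdaInvocation is the event type New Relic’s Lambda monitoring reports invocations into; adjust the attributes, filters, and thresholds for your own function):

SELECT average(duration), max(duration) FROM AwsLambdaInvocation TIMESERIES SINCE 1 hour ago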
AWS Lambda error handling and stack traces
As well as performance data, any errors in your AWS Lambda functions will be captured in New Relic, with a stack trace.
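If you want to see that in action, one way is to temporarily make the endpoint fail on purpose, redeploy, and hit the URL. This hypothetical change raises an exception so you can watch it surface as an error with a traceback in New Relic:

@app.get("/uuid")
def index():
    # Deliberately fail so the exception and traceback are captured
    raise RuntimeError("UUID generation failed")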
Testing with synthetics
But not all bugs and errors will raise an exception. If, for example, your application suddenly declared that it was a teapot, this wouldn’t raise an exception, but it’s still probably something you would want to know about.
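For instance, a hypothetical endpoint like the one below replies with HTTP 418 I’m a teapot. Nothing is raised, so no error appears in your traces, but an external check would flag it straight away:

from fastapi.responses import JSONResponse

@app.get("/teapot")
def teapot():
    # A perfectly "successful" invocation from Lambda's point of view, just the wrong status
    return JSONResponse(status_code=418, content={"detail": "I'm a teapot"})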
The simplest type of synthetic monitor is Ping. The ping monitor performs a HEAD request on the specified URL and records whether it succeeded (HTTP 200) or failed (any other HTTP status). However, your AWS Lambda function does not support HEAD requests, so enable the Bypass HEAD request option in your ping monitor’s Advanced options to ensure it sends a GET request instead.
The results of the synthetic tests will also appear in your data explorer, so you can quickly query them, draw comparisons with other data, or create alerts.
Scripted API tests
The final situation you need to consider is when your AWS Lambda function—or any other API you need to monitor—isn’t raising an exception and is returning an HTTP 200 status, but with an unexpected response body. Receiving a 200 OK status with an error message in the body is something any GraphQL developer is very familiar with.
@app.get("/uuid")
def index():
return {"uuid": "n0t-a-val1d-uu1d"}
Here you have modified the URL handler from your FastAPI app to always return an invalid UUID. It will not raise an exception, and it’s still a valid response, so FastAPI will return an HTTP 200 OK status. In this instance, you wouldn’t see anything troublesome in your traces, nor would your ping synthetic record any failures.
Instead, you must examine the response body and ensure the correct data is being returned. You can do this using a Scripted API Synthetic monitor.
var assert = require('assert');

$http.get('https://nr.execute-api.eu-west-1.amazonaws.com/staging/uuid',
  function (err, response, body) {
    assert.equal(response.statusCode, 200, 'Expected a 200 OK response');
    var data = JSON.parse(body);
    assert.equal(data.uuid.length, 36, 'Expected a 36 character UUID');
    assert.equal(/^[0-9A-F]{8}-[0-9A-F]{4}-[4][0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/i.test(data.uuid), true, 'Expected a valid UUID');
  }
);
In this code for your synthetic monitor, you issue a GET request to your AWS Lambda function and then make a few assertions to ensure that it is returning a valid value:
- Does it return an HTTP 200 status code?
- Can the response body be parsed as valid JSON?
- Is the JSON in the response body an object, and does it have an attribute named uuid?
- Is the value of the uuid attribute the correct length for a version 4 UUID?
- Is the value of the uuid attribute made up of hexadecimal groups of the correct lengths, with the right version and variant characters?
After all these checks, you can be reasonably sure that the API is returning a valid UUID.
Bringing it all together
Your synthetic monitor is triggering your AWS Lambda function via an HTTP GET request in the same way a human user would, so you can see your synthetic monitor requests in your serverless monitoring events.
Next steps
Not using AWS Lambda? New Relic supports many different serverless providers, including Google Cloud Functions and Azure Functions.
We also have several more synthetic monitor types for you to try, including certificate check, broken links monitor, step monitor, and a fully scriptable headless browser! Check out the docs for more information.