
Editor’s note: This is an updated version of a blog post that originally appeared on the blog from IOpipe, which is now part of New Relic.

With the rapid growth of serverless computing, especially since the release of AWS Lambda, the concept of serverless Python has gained significant traction. While the idea of not having to manage a server and paying only for the compute resources you use (not to mention out-of-the-box horizontal auto scaling) may sound appealing, how do you know where to start?

If you’re a Python developer interested in serverless, you might have heard of the Serverless Framework. But maybe you find the prospect of working in an unfamiliar ecosystem like Node.js daunting, or perhaps you’re unclear on what the framework does or if it’s overkill for folks just getting started.

If these concerns speak to you, you’ve come to the right place. In this blog post, I’ll give you a brief tour of the Serverless Framework and show you how to use it to build a serverless application in Python.


Common questions about serverless

Before we jump in, let’s answer some questions you may have about serverless.

Is my app/workload right for serverless?

This is probably the most important question you need to ask. While serverless has a lot to offer, it isn’t ideal for all apps and workloads. For example, the current maximum duration for an AWS Lambda function invocation is 15 minutes. If your app/workload can’t be broken up into 15-minute chunks, serverless isn’t the best option for you.
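If a job runs close to that limit, one common pattern is to check the remaining execution time and stop cleanly before the deadline. Here's a minimal sketch; the event fields and the "work" being done are made up for illustration, but get_remaining_time_in_millis() is a real method on the Lambda context object:

```python
def handler(event, context):
    # Process work items until we're close to the invocation deadline.
    items = event.get("items", [])
    processed = []
    for item in items:
        # Stop early if fewer than 10 seconds remain, leaving the rest
        # for a follow-up invocation (e.g. re-queued via SQS).
        if context.get_remaining_time_in_millis() < 10_000:
            break
        processed.append(item * 2)  # placeholder for the real work
    return {"processed": processed, "remaining": items[len(processed):]}
```

Workloads that can't be checkpointed and resumed this way are the ones that genuinely don't fit serverless.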

If, for instance, your app uses WebSockets to maintain a persistent connection with the server, this connection would be closed and need to be reestablished every 15 minutes with AWS Lambda. You’d also be paying for all that compute time for a workload that’s really just keeping a socket open. AWS recently introduced WebSocket support for API Gateway, which gets around the persistent connection issue described above by breaking up every WebSocket exchange into its own function invocation. But the caveat about long-running persistent workloads still applies.

On the other hand, if your app is mostly stateless (like a REST or GraphQL API), then serverless may be a great fit, as HTTP requests rarely last 30 seconds, let alone 15 minutes.

Serverless really shines for workloads that might have spikes or periods of low activity. When your app or workload spikes, AWS Lambda provides considerable horizontal scaling. By default, it can handle 1,000 concurrent requests out-of-the-box, but you can increase this limit. And when your app or workload is in a period of low activity, your meter doesn’t run at full tilt, which can save you a lot on operating expenses. Think about it: Most apps and workloads serve a range of time zones, so why pay full price to run yours when your customers are sleeping?

If you’re still not sure whether or not your app or workload is a good fit, here’s a handy calculator to compare AWS Lambda to EC2.

Should I use Python 2 or 3?

The Python ecosystem has gone through a lot of changes in the past decade—the most significant being the release of Python 3 and the transition of many codebases from Python 2.x to 3.x. For new serverless projects, we recommend Python 3.x. While Python 2.7 has served many of us well, it no longer receives updates. So, if you haven’t already started your transition to 3.8, there's no time like the present.

If you have an existing Python 2.7 project, don’t worry: AWS Lambda still supports 2.7. But you should seriously consider porting your code to Python 3.8 as soon as possible. The advice in the rest of this post is compatible with both versions.

Should I use a web API or a worker?

Before we go over the serverless tools available to Python, let’s drill down a little more into our app/workload. If your web app serves a frontend with several web assets (HTML, JavaScript, CSS, and images), don’t serve these with a function. That’s not to say that you can’t, just that you shouldn’t. Remember, with AWS Lambda you pay for the time your function runs. It doesn’t make much sense to spend this time serving web assets. In fact, since your frontend likely has many web assets, this could turn a simple task into an expensive liability. For serving web assets, consider a content delivery network (CDN)—Amazon CloudFront is an AWS service built specifically for this purpose. (Check out their guide on how to use it with S3.)

But that really only covers your web app’s frontend. What if your app or workload doesn’t have a frontend at all? We’re going to break down the apps and workloads we talk about in this post into two categories: web APIs (REST, GraphQL, etc.), and workers. Hopefully you’re already thinking about what parts of your app will be served via a web API and what parts can be worker tasks that run in the background, so you can pick the right tool for your project.

Why use Serverless Python functions?

Using serverless Python functions, often implemented through serverless computing platforms like AWS Lambda, Google Cloud Functions, or Azure Functions, offers several advantages in certain scenarios. Below are some reasons why you might choose to use serverless Python functions.

Cost efficiency

Serverless functions follow a pay-as-you-go model, where you are charged based on the actual execution of your functions. If your application has sporadic or low usage, serverless can be more cost-effective than maintaining a dedicated server or virtual machine.

Automatic scaling

Serverless platforms automatically scale your functions based on demand. If your application experiences a sudden spike in traffic, the serverless platform can quickly and automatically allocate resources to handle the increased load.

No server management overhead

Serverless computing abstracts away the infrastructure management tasks. You don't need to worry about provisioning, scaling, or maintaining servers. This allows developers to focus more on writing code and less on managing infrastructure.

Event-driven architecture

Serverless functions are often triggered by events, such as HTTP requests, database changes, or file uploads. This event-driven architecture simplifies handling various aspects of your application, and it promotes modular and loosely coupled designs.

Fast deployment

Serverless functions can be deployed quickly, often in a matter of seconds. This rapid deployment allows for faster iteration and development cycles, making it easier to push updates and improvements to your application.

Automatic high availability

Serverless platforms automatically distribute your functions across multiple availability zones, providing built-in high availability. This reduces the risk of downtime due to server failures.

Microservices architecture

Serverless is well-suited for microservices architecture, where you can build small, independent functions that communicate with each other. This modular approach can lead to easier maintenance, updates, and scalability.

Resource efficiency

Serverless platforms allocate resources precisely for the duration of function execution. If your function is idle, no resources are wasted. This resource efficiency is particularly beneficial for applications with varying workloads.

Easy integration with other services

Serverless platforms often have built-in integrations with other cloud services. For example, AWS Lambda can easily integrate with various AWS services, making it seamless to connect your serverless functions with databases, storage, and other components.

Reduced development time

The serverless model can reduce development time by abstracting away infrastructure concerns, allowing developers to focus on writing code and delivering features.

It's essential to note that serverless might not be the best fit for every application. Consider factors like execution time, resource requirements, and cold start latency, as these aspects can impact the performance of serverless functions in certain use cases.

How to build a "hello world" function on the Serverless Framework

The Serverless Framework is a well-established leader in this space, and for good reason. Its developers have put considerable time and effort into the developer experience, making it one of the most intuitive and accessible serverless tools out there. It also offers a comprehensive feature set, supports multiple cloud vendors in addition to AWS Lambda, and has a growing plugin ecosystem. For a Python developer, the Serverless Framework is a great starting point.

But Python developers may want to note a big caveat about the Serverless Framework: it’s written in Node.js, which may not be every Python dev's first choice. Like any good tool, though, when it's done well you shouldn’t even notice what language it’s implemented in, and that case could certainly be made for the Serverless Framework. You’ll still need to install Node and NPM, but you won't need to know any JavaScript.

Let’s give it a try.

Step 1: Set up the project

First, with Node and NPM installed, install the Serverless Framework globally:

npm install -g serverless

You can access the CLI using either serverless or the shorthand sls. Let’s create a project:

mkdir ~/my-serverless-project

cd ~/my-serverless-project

sls create -n my-serverless-project -t aws-python3

Here, I’ve created a directory called my-serverless-project and created a project using sls create. I've also specified a template with -t aws-python3. Serverless comes bundled with several templates that set some sensible defaults for you in serverless.yml. In this case, I'm specifying the AWS template for Python 3. If your project is Python 2.7, use aws-python2 instead. There are templates for other languages and clouds as well, but those are outside the scope of this guide.

The -n my-serverless-project specifies a service name, and you can change this to whatever you want to name your project. Now, let's take a look at the contents of the my-serverless-project directory. Run:

cat serverless.yml

The file serverless.yml comes loaded with several helpful comments explaining each section of the config. (I recommend reading through these comments, as it will be helpful later on.)

Step 2: Write and deploy your function

Let's write a hello world equivalent of a function:

def handler(event, context):
    return {"message": "hi there"}

Save that in your my-serverless-project directory as hello.py. We commonly refer to functions as handlers, but you can name your functions whatever you want.

Now that you have a function, make the Serverless Framework aware of it by adding it to serverless.yml. Edit serverless.yml and replace the functions section with the following:

functions:
  hello:
    handler: hello.handler

Now save serverless.yml. To deploy this function, you’ll need to make sure you’ve configured your AWS credentials.

When you’re ready to deploy, run the following:

sls deploy

Deployment may take several moments. Essentially the Serverless Framework:

  1. Creates a CloudFormation template based on serverless.yml
  2. Compresses the CloudFormation template and hello.py into a zip archive
  3. Creates an S3 bucket and uploads the zip archive to it
  4. Executes the CloudFormation template, which includes configuring an AWS Lambda function, and points it to the S3 zip archive

You could do all of these steps manually, but why would you want to if the framework can automate it for you? When your deploy is complete, test it with the following command:

sls invoke -f hello

You should see the following response:

{"message": "hi there"}

Congratulations, you’ve just created your first serverless function.

How to build an advanced Serverless Python function

Now, we'll do something a little more challenging—we'll make an HTTP request and return the result.

Step 1: Create the HTTP request function

Let’s create a new file called httprequest.py and add the following:

import requests


def handler(event, context):
    r = requests.get("https://news.ycombinator.com/news")
    return {"content": r.text}

Update the functions section of serverless.yml:

functions:
  hello:
    handler: hello.handler
  httprequest:
    handler: httprequest.handler

Now re-deploy the function:

sls deploy

sls invoke -f httprequest

You should now see an ImportError. This is because requests is not installed. With AWS Lambda, you need to bundle any libraries you want to use with your function.

You could run pip install requests -t . to install the requests wheel (and its dependencies) into your project directory, but AWS Lambda runs on 64-bit Linux. So what do you do if you're running macOS? Or Windows? Or FreeBSD?

Serverless Python requirements

Thankfully, Serverless comes with a plugin ecosystem to fill the gaps. Specifically, we want to install serverless-python-requirements:

sls plugin install -n serverless-python-requirements

Add the following lines to the end of serverless.yml:

plugins:
  - serverless-python-requirements

This plugin enables requirements.txt support, so add a requirements.txt file to your project directory:

echo "requests" >> requirements.txt

Now the requirements will be installed and bundled automatically the next time you deploy.

But we haven’t solved our compilation problem yet. To do that you’ll need to add a custom section to serverless.yml. This section is where you can add custom configuration options, but it's also where plugins look for their own config options. Our new custom section should look like this:

custom:
  pythonRequirements:
    dockerizePip: true

This section tells the serverless-python-requirements plugin to compile the Python packages in a Docker container before bundling them in the zip archive to ensure they're compiled for 64-bit Linux. You'll also need to install Docker in order for this to work, but after you do, this plugin will automatically handle the dependencies you define in requirements.txt.

Now deploy and invoke your function again:

sls deploy

sls invoke -f httprequest

Even if you are running 64-bit Linux, this is way cleaner, don’t you think?

Step 2: Set up a worker for the HTTP request function

Before we continue, let’s explain how events and context are useful.

Functions are event-driven, so when you invoke one, you’re actually triggering an event within AWS Lambda. The first argument of your function contains the event that triggered it, which is represented within AWS Lambda as a JSON object and passed to Python as a dict. When you run sls invoke -f hello, an empty dict is passed to the function. If the trigger were an API request, however, the dict would contain the entire HTTP request. In other words, the event dict acts as your function's input parameters, and your function returns the output. With AWS Lambda, that output needs to be JSON serializable. (Here are some example events you might see with AWS Lambda.)

The second argument is the AWS Lambda context, which is a Python object with useful metadata about the function and the current invocation. For example, every invocation has an aws_request_id, which is useful if you want to track down what happened in a specific invocation within your logs. (See the AWS Lambda docs for more information about the context object.) You probably won't need to worry about the context object right away, but you'll eventually find it useful when debugging. By the way, if you're interested in learning more about writing logs with Python (an entirely different but very useful topic), check out this blog on structured logging.

So, how are events useful? Well, if your app/workload can work with a JSON serializable input and produce a JSON serializable output, you can plug it right into an AWS Lambda function.
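To make that concrete, here's a sketch of a worker-style handler that treats the event dict as its input and returns a JSON-serializable output. The url and retries fields are hypothetical, chosen only to illustrate the shape:

```python
import json


def handler(event, context):
    # The event dict is the function's input; pull out the fields we expect,
    # with defaults in case the trigger omits them.
    url = event.get("url", "https://news.ycombinator.com/news")
    retries = event.get("retries", 3)
    # ... the actual work would happen here ...
    result = {"url": url, "retries": retries, "status": "ok"}
    # Anything we return must survive json.dumps(), or Lambda will error,
    # so a cheap sanity check during development doesn't hurt.
    json.dumps(result)
    return result
```

Any piece of your app that fits this shape (a dict in, a serializable dict out) can be lifted into a Lambda function with little or no change.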

So far you’ve already implemented what you need for a worker. Let’s say you wanted to run your httprequest function every 10 minutes; to do that, add the following to serverless.yml:

functions:
  httprequest:
    handler: httprequest.handler
    events:
      - schedule: rate(10 minutes)

And deploy the function:

sls deploy

Now httprequest is triggered automatically every ten minutes. If you want more fine-grained control, you can specify a specific time at which your function should be triggered. You can also build more complex workflows using Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), or other AWS services.

Step 3: Set up a web API for the HTTP request function

Earlier I mentioned that an HTTP request can be represented as an event. In the case of web APIs, Amazon’s API Gateway service can trigger events for our function. In addition to this, API Gateway provides a hostname that can receive HTTP requests, transform those HTTP requests into an event object, invoke our function, and collect the response and pass it on to the requester as an HTTP response. That might sound complex, but thankfully the Serverless Framework abstracts away much of this for us.

So, add an HTTP endpoint to serverless.yml:

functions:
  webapi:
    handler: webapi.handler
    events:
      - http:
          path: /
          method: get

This looks a lot like our scheduled worker task earlier, doesn’t it? As in that task, you’ve configured this handler to handle http events and specified a path (the HTTP request path) and a method (the HTTP method this handler will handle). Since we’ve added a new handler, we’ll need to create it in webapi.py:

import json


def handler(event, context):
    return {"statusCode": 200, "body": json.dumps({"message": "I'm an HTTP response"})}

This handler will accept an event from the API Gateway and respond with a JSON serializable dict. Within the dict we have two keys: statusCode, which is the HTTP status code we want the API Gateway to respond with, and body, which contains the HTTP body of the response serialized as JSON. API Gateway expects the HTTP response body to be a string, so if we want our web API to respond with JSON, we need to serialize it before handing it back to API Gateway.
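Since every response body must be serialized to a string, it can be convenient to wrap that step in a small helper. This respond() function is just a convenience wrapper of my own, not part of the Serverless Framework or API Gateway:

```python
import json


def respond(status_code, payload):
    # API Gateway expects "body" to be a string, so serialize the payload here.
    return {
        "statusCode": status_code,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(payload),
    }


def handler(event, context):
    return respond(200, {"message": "I'm an HTTP response"})
```

A helper like this also gives you one place to set response headers consistently across all your endpoints.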

Now deploy the function again:

sls deploy

The Serverless Framework will provide an endpoint:

endpoints: GET - https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/dev

What just happened here? In short, the Serverless Framework created our new function and then configured an AWS API Gateway to point to the function. The endpoint returned is the one provided by API Gateway.

Try your endpoint:

curl https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/dev

You should see the following response:

{"message": "I'm an HTTP response"}

Congratulations, you’ve just created your first serverless web API! You might have noticed that the URL API Gateway provides is pretty ugly. It would be a lot nicer if it were something more readable, like https://api.mywebapi.com/. Well, there's a plugin for that, too.

Cleaning up

If you were playing along, you now have three serverless functions and an API Gateway. But these were really just examples to help you get started with serverless development. You’ll probably want to clean up your project; to do so run:

sls remove

And the Serverless Framework will take care of the rest.

Want to learn about monitoring, visualizing, and troubleshooting your functions? Visit New Relic Serverless for AWS Lambda to learn more and request a demo.

In the meantime, check out this explainer video from RedMonk, "What is Serverless Observability And Why Does It Matter?":