
In today’s dynamic software environment, traditional infrastructure management can be cumbersome and resource-heavy. The number of services IT departments manage and maintain themselves is dwindling fast as more and more businesses make the switch to serverless architecture.

If you are considering serverless architecture, here are a few critical things you should know.

What is serverless architecture?

First, you need a solid understanding of serverless architecture.

Although the name might have you think otherwise, what we call "serverless" isn’t genuinely serverless. The same virtual machines (VMs) and containers your team has traditionally managed are still there—but you are no longer the one patching, securing, and scaling those servers. Those responsibilities fall to a third-party cloud provider.

While most think of serverless architecture as synonymous with "Functions as a Service" (FaaS) offerings from major cloud providers—like AWS Lambda, Azure Functions, and Google Cloud Functions—any cloud service can be considered serverless if it meets the following criteria:

  • It scales automatically and is highly available.
  • You only pay for what you use.
  • There are no servers directly exposed that you need to manage.

Within a serverless environment, an application runs in stateless, event-triggered compute containers that are ephemeral and entirely managed by the cloud vendor. Typically, pricing is based on the number of executions instead of provisioned computing capacity. 
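
To make that concrete, here is a minimal sketch of what such a function can look like, assuming AWS Lambda's Python runtime; the handler name and the "name" field on the event are illustrative placeholders rather than part of any specific API:

```python
import json

def handler(event, context):
    """A minimal, stateless Lambda-style handler invoked once per event.

    The cloud provider provisions and scales the underlying containers;
    nothing persists in this process between invocations.
    """
    # 'name' is a hypothetical field on the incoming event (e.g. from API Gateway).
    name = (event or {}).get("name", "world")

    # Return a response and exit; billing covers only this execution time.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```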

So, how does serverless stack up against traditional architecture? Here are some of the key areas you must consider before deciding which solution or combination of solutions is right for you.

Knowing your business needs

As Ben Kehoe, Cloud Robotics Research Scientist at iRobot and an AWS Serverless Hero, states in his treatise “Serverless is a State of Mind” on A Cloud Guru: “If you go serverless because you love Lambda, you’re doing it for the wrong reason. If you go serverless because you love FaaS in general, you’re doing it for the wrong reason. Functions are not the point.”

Serverless is not a one-size-fits-all solution for modernizing your stack. And it probably isn’t ideal if you’re simply looking to recreate your monolithic application within a serverless architecture. Rather, serverless is best suited for event-driven architectural patterns where applications are divided into small, loosely coupled components aligned to business needs. 
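
As a simplified sketch of that pattern (the event shape and function name below are hypothetical, loosely modeled on a queue-style trigger), each component can be a single small function that reacts to one business event:

```python
import json

def send_order_confirmation(event, context):
    """Single-purpose function triggered by a hypothetical 'order placed' event.

    It handles one narrow piece of business logic; payment, inventory, and
    shipping would live in their own loosely coupled functions.
    """
    records = event.get("Records", [])
    for record in records:
        order = json.loads(record["body"])  # assumes an SQS-style record shape
        # Placeholder for the real notification call (email, SMS, push, ...).
        print(f"Confirming order {order['order_id']} for {order['customer_email']}")
    return {"processed": len(records)}
```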

While the needs of a modern e-commerce application or media platform might align perfectly with serverless, for some companies re-architecting or building net-new applications to take advantage of serverless benefits might not be worth it in the near term.

For example, until recently, companies that required the heavy use of virtual private clouds (VPCs) for resources that aren’t accessible from the public internet—such as a relational database—suffered heavy latency penalties when trying to mix VPCs and AWS Lambda functions. Although AWS has addressed that enterprise use case in a recent update, it’s critical to examine your business and language requirements against the advantages and limitations of current FaaS offerings.

However, this doesn’t mean serverless is only ideal for cloud-native enterprises or SaaS innovators. We often see companies from traditional industries like Matson, a 138-year-old shipping company, build serverless functions that tie new user interfaces to traditional business applications.

Pricing

Generally, the cost model of serverless is execution-based: you are billed for the number of requests your functions receive and for their duration—the time it takes your code to execute. The number of compute seconds you are allotted, and the price per millisecond of execution, both vary with the amount of memory you assign to a function. With this in mind, short-running functions are usually a better fit for the serverless model.
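
As a back-of-the-envelope illustration (the per-request and per-GB-second rates below are placeholders, not any provider's current list prices), an execution-based bill can be estimated like this:

```python
def estimate_monthly_cost(invocations, avg_duration_ms, memory_mb,
                          price_per_gb_second=0.0000167,
                          price_per_million_requests=0.20):
    """Rough execution-based estimate: request charges plus duration * memory.

    The default rates are illustrative only; check your provider's pricing
    page for real figures and free-tier allowances.
    """
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * (memory_mb / 1024.0)
    duration_cost = gb_seconds * price_per_gb_second
    request_cost = (invocations / 1_000_000) * price_per_million_requests
    return duration_cost + request_cost

# Example: 5 million invocations averaging 120 ms at 256 MB of memory.
print(f"${estimate_monthly_cost(5_000_000, 120, 256):.2f} per month")
```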

Observability

Observability is a bit of a buzzword in the DevOps space—but that doesn’t mean it should be ignored. In short, observability is a measure of how well internal states of a system can be determined from its external outputs.

Observability challenges are often multiplied across distributed environments with millions of invocations. While serverless has some significant benefits—like not paying for idle servers, auto-scalability, and increased agility for developers—it also introduces its own observability challenges:

  • When the number of instances and invocations is large, it’s hard to spot small-scale problems
  • In distributed, event-driven environments, context is difficult to piece together across components
  • There is no persistent context carried across observability systems
  • Integrating observability between traditional apps and serverless apps often requires a lot of manual work

With these new challenges, it becomes more important than ever to instrument your functions so that you can visualize and track the performance of any serverless call across your entire ecosystem.
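
A hand-rolled sketch of that kind of instrumentation might look like the decorator below; in practice you would more likely rely on your observability vendor's agent or your cloud provider's tracing service, and the "trace_id" field here is just an assumed convention:

```python
import functools
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("serverless-observability")

def instrumented(handler):
    """Wrap a function handler to log its duration, outcome, and a trace ID."""
    @functools.wraps(handler)
    def wrapper(event, context):
        # Reuse an upstream trace ID if the event carries one; otherwise mint one.
        trace_id = (event or {}).get("trace_id", str(uuid.uuid4()))
        start = time.perf_counter()
        status = "ok"
        try:
            return handler(event, context)
        except Exception:
            status = "error"
            raise
        finally:
            logger.info(json.dumps({
                "trace_id": trace_id,
                "function": handler.__name__,
                "duration_ms": round((time.perf_counter() - start) * 1000, 2),
                "status": status,
            }))
    return wrapper

@instrumented
def process_order(event, context):
    # Hypothetical business logic tied to an incoming event.
    return {"statusCode": 200}
```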

Across-the-board benefits of serverless architecture

Before you make the switch to serverless, take the time to reflect on how the technology's benefits will impact not only your business but also your developers and app users:

  • No longer paying for idle servers 
  • Shifting security and infrastructure burdens to cloud vendors
  • Increased agility and scalability
  • Focusing on business logic and value over infrastructure

Ready to learn more?

There is no one-size-fits-all IT infrastructure or architecture model. That said, the popularity of cloud-based solutions like serverless is growing rapidly. For proof, check out For the Love of Serverless, our 2020 AWS Lambda Benchmark Report.