
AWS Cloud Cost Optimization

Careful use of AWS services can help lower your cloud computing costs.

Introduction

A major promise of life in the Amazon Web Services (AWS) cloud is the opportunity to ship software faster and more dynamically than ever before. The cloud is full of resources waiting to be provisioned to help you meet the real-time demands of your applications, but the costs of accessing those resources can quickly add up, especially if you’re not careful.

Whether you’ve only recently migrated to AWS or were born cloud-native, there are many steps you can take to optimize the infrastructure hosting your applications to be as cost effective as possible. For example, are you using the right instances for your application’s workload? An application may require minimal CPU but at the same time be memory intensive—are your Amazon Elastic Compute Cloud (EC2) instances properly balanced for their workloads? Or maybe you have 20 instances all running at 10% CPU utilization—can you use smaller instances or consolidate more work onto those instances?

However, cost optimizations extend beyond the infrastructure layer. At the end of 2018, AWS provided more than 140 services, along with a handful of budgeting and cost analysis tools, many of which were designed specifically to help you reduce your AWS cloud-operating costs. Additionally, tools like New Relic can gather crucial data about your cloud usage and costs and visualize that information so you can, for example, monitor and alert on your spend versus your budget.

We recommend that your AWS cost optimization strategy begin with these actions:

  • Right-size your EC2 instances
  • Utilize elastic cloud capabilities
  • Choose a pricing model for compute and storage
  • Leverage AWS services to reduce costs, where appropriate
  • Take advantage of AWS cost and performance improvement tools
  • Monitor and visualize your cost optimization data
  • Optimize costs—a proactive investment

Right-size your EC2 instances

The EC2 instance, an Amazon virtual machine, is the fundamental component of the AWS cloud. EC2 instances run on Amazon-managed hardware, so with hardware management completely eliminated from your workflow, you can quickly provision, launch, and scale your EC2 instances to meet the demands of your applications.

Ease of use, however, comes with responsibility: A critical part of optimizing your AWS cost is ensuring that you’re using the right size of EC2 instances for your application’s use case. EC2 instances come in a variety of types and sizes, so you want to optimize your instances based on your application’s primary use case; your application’s primary function or workload should determine if you need to focus on CPU- or memory-intensive instance types.

Right-size for type

The two most common EC2 instance types are C, compute-optimized instances (for workloads like web servers and video encoding), and M, general-purpose instances that provide a balance between compute, memory, and networking resources (for workloads like data processing applications and small databases). R-type instances, meanwhile, are useful for memory-intensive applications (for workloads like high-performance databases and data mining).

Carefully review these instance types, as there are many flavors of each. For example, C4 instances (as of March 2019) run on 2.9 GHz Intel Xeon E5-2666 v3 processors, while C5 instances run on 3.0 GHz Intel Xeon Platinum processors, each at a different price point. Again, weigh each option and price against the needs of your application.

As for databases, R-type EC2 instances offer solid, cost-effective performance, but if you’re using Amazon RDS for your relational database, be sure to right-size those instances as well. T-type instances for RDS, for example, are ideal for microservice architectures or environments that experience occasional spikes in usage. Like EC2 instances, RDS instances come in a variety of sizes, so plan accordingly.

Right-size for size

Whether you’re looking for compute-optimized, memory-optimized, or general-purpose instances, C, R, and M instance types come in different sizes with variations in memory and virtual central processing units (vCPUs). (In AWS, a vCPU is equivalent to half a physical core.) For example, a C4.large maxes out at 2 vCPUs with 3.75 GB of memory, while a C4.4xlarge maxes out at 16 vCPUs and 30 GB of memory. You’ll likely start lower and scale up to meet the demands of your application as it grows.
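To make the size comparison concrete, here’s a minimal right-sizing sketch. The catalog is a small illustrative subset of C4 sizes, and the hourly prices are placeholders, not quoted AWS rates:

```python
# Hypothetical sketch: pick the cheapest instance size that satisfies a
# workload's vCPU and memory needs. Prices are placeholder values.
CATALOG = [
    # (name, vCPUs, memory in GB, hourly cost in USD)
    ("c4.large",    2,  3.75, 0.100),
    ("c4.xlarge",   4,  7.5,  0.199),
    ("c4.2xlarge",  8, 15.0,  0.398),
    ("c4.4xlarge", 16, 30.0,  0.796),
]

def right_size(needed_vcpu, needed_mem_gb):
    """Return the name of the cheapest catalog entry that fits, or None."""
    fits = [t for t in CATALOG
            if t[1] >= needed_vcpu and t[2] >= needed_mem_gb]
    return min(fits, key=lambda t: t[3])[0] if fits else None

print(right_size(3, 6))    # 2 vCPUs is too small, so c4.xlarge
print(right_size(12, 20))  # only c4.4xlarge fits
```

In practice you’d also weigh instance family (C vs. M vs. R) against the workload’s balance of CPU and memory, not just raw size.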

Utilize elastic cloud capabilities

Scaling compute instances through size and type adjustments can definitely help optimize your AWS costs. Pairing those adjustments with other elastic cloud capabilities—such as EC2 Auto Scaling, Elastic Load Balancing, and AWS Batch workload scheduling—can lead to even more performance and cost savings.

  • Amazon EC2 Auto Scaling: This service gives you the ability to configure definitions for when EC2 should add or remove instances from your application architecture based on three models:

    1. Scheduled scaling: Configure scaling based on known traffic patterns

    2. Dynamic scaling: Configure scaling against key performance metrics or load needs

    3. Predictive scaling: Use machine learning to predict changes in traffic based on discovered patterns
       

  • EC2 Auto Scaling Fleet Management constantly monitors the health of your instances and adds or removes them as necessary; for example, if an instance fails a health check, it’s automatically replaced. In most cases, as your AWS infrastructure grows, you’ll need to strike a balance between the types and sizes of instances you’re using. With Auto Scaling Groups you can pre-configure when AWS should add or remove instances from your deployment; simply create a priority list of the types and sizes you want to use alongside the pricing models you want to adhere to, and Auto Scaling Groups will handle the load.
  • Elastic Load Balancing: Like EC2 Auto Scaling, Elastic Load Balancing lets you direct traffic, scale, and perform health checks within your EC2 infrastructure. AWS offers three load balancing models: application, network, and classic load balancing, each optimized for different use cases. Application load balancing is best suited for microservices architectures or container-based workflows that require extensive HTTP/HTTPS traffic; network load balancing is optimized for time-sensitive workflows handling TLS or TCP traffic; and classic load balancing is best suited for applications built with EC2-Classic.
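The dynamic scaling model described above boils down to simple arithmetic: grow or shrink the group so the average metric lands near a target value. A minimal sketch, with hypothetical metric values and capacity limits:

```python
import math

def desired_capacity(current, metric_value, target, min_size=1, max_size=20):
    """Scale the group proportionally so the average metric approaches
    the target, clamped to the group's configured size limits."""
    desired = math.ceil(current * metric_value / target)
    return max(min_size, min(max_size, desired))

# 4 instances averaging 80% CPU against a 50% target: scale out to 7.
print(desired_capacity(4, metric_value=80, target=50))
# 10 instances idling at 20% CPU against a 50% target: scale in to 4.
print(desired_capacity(10, metric_value=20, target=50))
```

This is only the core proportionality idea; the real service also applies cooldowns and warm-up periods before acting on a computed capacity.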

It’s more than likely that you won’t run all of your instances 24 hours a day, 7 days a week—that would be needlessly expensive. To further optimize costs, consider workload scheduling. For example, AWS Batch offers job-based batch computing, in which you can deploy, use, and terminate EC2 instances as needed. Or, to help reduce the costs of your instance usage, the AWS Instance Scheduler lets you configure start and stop times for your EC2 or RDS instances.
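A quick back-of-the-envelope sketch shows why scheduling matters. The hourly rate below is a placeholder, not a quoted AWS price; the point is the ratio of scheduled hours to always-on hours:

```python
# Compare running dev/test instances 24/7 against a weekday business-hours
# schedule. The per-instance hourly rate is a hypothetical figure.
HOURLY_RATE = 0.10   # USD per instance-hour (placeholder)
INSTANCES = 20

always_on = INSTANCES * HOURLY_RATE * 24 * 7        # full week
scheduled = INSTANCES * HOURLY_RATE * 10 * 5        # 10 h/day, weekdays only
savings_pct = 100 * (1 - scheduled / always_on)

print(f"24/7 cost:      ${always_on:.2f}/week")
print(f"Scheduled cost: ${scheduled:.2f}/week")
print(f"Savings:        {savings_pct:.0f}%")
```

Because the rate cancels out of the ratio, the roughly 70% saving holds for any instance price as long as the schedule is the same.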

From auto scaling to instance scheduling, these features help you scale faster and manage the workloads of your instances, and they also help keep your scaling needs in line with your cost expectations.

Choose a pricing model for compute and storage

Now that you understand a bit about EC2 instance types and options for extending their elasticity, you’re ready to familiarize yourself with AWS’s pricing models. AWS provides four ways to pay for your EC2 usage:

  1. On-Demand instances: This option lets you pay by the hour or second depending on factors such as region, instance type, and Amazon Machine Image (AMI)/operating system combination. As your application increases or decreases in scale, you’ll pay only for the instances you use. AWS recommends On-Demand instance pricing for users who want low costs and no upfront payments, or who have applications with unpredictable workloads that can’t be interrupted.
  2. Spot Instances: Spot Instance prices are set by AWS and adjust periodically based on supply and demand for spare EC2 capacity. When you use spot pricing, you pay whatever price is in effect for the period your instances run; pricing is based on instance type and size as well as region and operating system. You can also pay for Spot Instances to run for a predefined duration. AWS recommends Spot Instance pricing for users with applications that have flexible start and end times, or who must operate applications at very low cost.
  3. Reserved Instances: If you can plan, or know, what your EC2 usage will be, Reserved Instances provide you with reserved capacity, allowing you to launch instances as needed. Standard Reserved Instances let you modify attributes such as availability zone, instance size, and networking type; Convertible Reserved Instances additionally let you exchange for a different instance family, operating system, or tenancy. Scheduled Reserved Instances let you launch instances in reserved windows of time. Reserved Instance pricing is based on one-year or three-year terms that you can pay upfront, monthly, or hourly. As with other options, prices also vary based on instance type, region, and operating system. AWS recommends Reserved Instances for users who require reserved capacity in their AWS deployments, or who can commit to one- to three-year terms to reduce costs.
  4. Dedicated Hosts: With Dedicated Host pricing, you get a physical EC2 server confined to your use. Hosts vary based on instance type and region and come with On-Demand or Reserved pricing. One benefit is that you pay hourly for the Dedicated Host itself, no matter how many instances or instance types you launch on it. Dedicated Hosts are defined by instance type; each type comes with a different number of sockets and cores, and each is preconfigured to run a specific number of instances. AWS recommends Dedicated Hosts for users who want to manage how instances are deployed on hardware, especially in cases where they need to meet compliance and regulatory requirements.

Typically, you’ll want to have a core number of Reserved Instances for static workloads and then Spot Instances for any workloads that don’t have fixed start and stop times. You’ll likely use On-Demand instances only as needed. Try to keep track of your On-Demand usage and move any predictable workloads to Reserved Instances. Striking the right balance may take a bit of planning, but the cost saving benefits are worth the effort.
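One way to plan that balance is a break-even calculation: how many hours per year must an instance actually run before a Reserved Instance beats On-Demand? A sketch with placeholder prices (not quoted AWS rates):

```python
# Break-even analysis between On-Demand and a one-year, all-upfront
# Reserved Instance. Both prices are hypothetical placeholders.
ON_DEMAND_HOURLY = 0.10   # USD per hour (placeholder)
RESERVED_ANNUAL = 525.0   # USD per year, paid upfront (placeholder)

break_even_hours = RESERVED_ANNUAL / ON_DEMAND_HOURLY
utilization = break_even_hours / (24 * 365)

print(f"Break-even: {break_even_hours:.0f} hours/year "
      f"({utilization:.0%} of the year)")
```

With these numbers, an instance that runs more than about 60% of the year is cheaper reserved; anything below that is better left On-Demand or moved to Spot.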

As you work to figure out which pricing models are best for you, you’ll also want to consider Amazon S3 storage pricing if you plan to use AWS cloud storage. With S3, you pay only for the storage you use, but pricing is based on region.

If it all seems overwhelming, don’t despair. AWS offers several resources—including calculators and pricing white papers—designed to help you optimize your cloud costs.

Leverage AWS services to reduce costs, where appropriate

AWS offers dozens of managed services to enhance your cloud journey and reduce your overall cloud-operating spend. Using these services can be more cost effective than building and managing your own home-rolled versions.

For example, does your application, or its processes, have short-lived runtimes? Would it be better to move those processes, or the entire application, to AWS Lambda? Lambda functions reduce cloud costs and operational overhead. With Lambda, you don’t have to manage any backend services—simply configure your workflow, upload your code, and AWS handles scaling and availability. You’ll pay only for the compute time, services, or data transfers you use.

Let’s look at a few specific AWS services that can help you reduce costs:

  • Amazon Route 53: Amazon’s DNS web service is highly scalable, reliable, and tightly integrated with existing services like EC2 and S3. It provides critical DNS capabilities such as health checks, resolver endpoints, and credential-based access control. AWS cloud-based DNS is cost-effective, and your costs are tied only to the resources you use. More specifically, Route 53 pricing is based on the number of hosted zones you manage, the number of DNS queries you process, and your traffic flow policy.

  • Amazon RDS: Instead of hosting your own PostgreSQL, MySQL, or Oracle database, consider Amazon’s relational database service, which includes engines for a number of popular databases. Amazon RDS is designed to reduce your operational database costs, and it offers scalable, secure, and highly available instances. In the section on Right-sizing your EC2 instances, we explained a bit about RDS instance types, but your RDS cost will also depend on the database engine you use. You’ll also pay for data storage and transfer, prices of which vary by region.   

As more and more teams embrace microservices architectures, container orchestration workflows and services are becoming indispensable. AWS offers two managed services for your container-based workflows:

  1. Amazon Elastic Container Service (ECS): This Docker-based container orchestration service deploys your container-based applications onto EC2 instances you pre-configure, or deploys them via AWS Fargate, which abstracts away all infrastructure management. ECS integrates with AWS services like Elastic Load Balancing, AWS CodeDeploy, and AWS App Mesh to manage and schedule your container orchestration workflows. ECS pricing depends on whether you’ll deploy containers on EC2 instances or via Fargate: Pricing for the latter is based on the CPU and memory usage needed to execute your pre-defined launch tasks.
  2. Amazon Elastic Container Service for Kubernetes (EKS): EKS is a fully managed Kubernetes infrastructure running on top of EC2. Kubernetes is a highly dynamic container orchestration platform, but it can be challenging to deploy and manage. EKS manages that complexity for you in a way that’s fully compliant with existing Kubernetes standards; you can trust that your applications deployed in EKS are compatible with any Kubernetes environment or workflow. EKS pricing varies based on the number of Kubernetes clusters you’re managing, and on the type of EC2 instances you use for your worker nodes.
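For Fargate specifically, the CPU-and-memory pricing mentioned above is easy to estimate: a task’s cost is its requested vCPU and memory multiplied by the time it runs. A sketch with placeholder per-hour rates (not quoted AWS prices):

```python
# Estimate the cost of a Fargate task from its requested resources and
# runtime. Both per-hour rates are hypothetical placeholders.
VCPU_HOURLY = 0.0506   # USD per vCPU-hour (placeholder)
MEM_HOURLY = 0.0127    # USD per GB-hour (placeholder)

def fargate_task_cost(vcpu, mem_gb, hours):
    return (vcpu * VCPU_HOURLY + mem_gb * MEM_HOURLY) * hours

# One 0.5-vCPU / 1 GB task running around the clock for a 30-day month:
print(f"${fargate_task_cost(0.5, 1, 24 * 30):.2f}")
```

Running the same arithmetic for your task mix, and comparing it against the cost of right-sized EC2 instances under ECS, shows which launch type is cheaper for your workload.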

In his 2018 AWS re:Invent keynote address, AWS CTO Werner Vogels said that 95% of AWS services and features are based directly on customer feedback. In other words, they’re trying to build tools to make your cloud existence easier—and that includes tools for cost optimizing your cloud usage. Route 53, RDS, EKS, and ECS are just a few examples of how you can adopt AWS services to help reduce your operational costs.

There are plenty of others. For example, Amazon Redshift reduces the burden of managing your own data warehouse; Amazon CloudFront is a managed content delivery network (CDN) that integrates seamlessly with your existing EC2 infrastructure and S3 storage; and AWS CloudFormation provides automation capabilities to help you create, deploy, and manage your entire AWS cloud infrastructure from templates written in YAML or JSON. You definitely won’t need every AWS service, but you should look for those services that will reduce your operational overhead wherever possible.


As always, be sure to check out the available AWS economics resources—whitepapers such as Introduction to Cloud Economics explain the how and why of reducing your operating costs in the cloud.

Take advantage of AWS cost and performance improvement tools

AWS provides two key tools to help customers examine and reduce their cloud operating costs—AWS Cost Management and AWS Trusted Advisor.

 

  1. AWS Cost Management: This tool bundle provides all the resources you need for understanding and optimizing your AWS costs. Two key features include:

    • AWS Cost Explorer: Review and visualize your AWS cost and usage data. You can review cost and usage over specific time periods, filter and group your cost data, and project forecasts when planning your future roadmaps. AWS Cost Explorer provides a number of essential, well-visualized reports, including monthly AWS service costs and EC2 monthly costs. You can even query the Cost Explorer API directly, at a nominal fee per request.

    • AWS Budgets: Set custom monthly, quarterly, or yearly budgets for your AWS usage and get alerts when you exceed those budgets—or when AWS predicts you’re likely to. The AWS Budgets dashboard gathers all your budget data into one place, so you can easily analyze and assess your data.

      Pricing for AWS Cost Management depends on how you plan to use it; see the pricing page for details.

  2. AWS Trusted Advisor: Based on AWS operational best practices, Trusted Advisor is an application that scans your AWS infrastructure and provides real-time results to help you optimize not only cost, but also performance, security, fault tolerance, and service limits. For example, when helping optimize costs, Trusted Advisor applies best practices related to eliminating resource wastage, such as adjusting EC2 Reserved Instance usage, identifying idle load balancers, and reporting on underutilized EC2 instances. Trusted Advisor will, when needed, recommend investigation or courses of action you can take to reduce your AWS bills.  
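The kind of threshold check AWS Budgets performs can be sketched in a few lines; all names and numbers here are hypothetical:

```python
# Minimal sketch of a budget alert check: flag when actual spend crosses
# a percentage threshold, or when forecasted spend exceeds the budget.
def budget_alerts(budget, actual, forecast, threshold_pct=80):
    alerts = []
    if actual >= budget * threshold_pct / 100:
        alerts.append("actual spend over threshold")
    if forecast > budget:
        alerts.append("forecasted to exceed budget")
    return alerts

# $850 spent against a $1,000 budget, with $1,200 forecasted:
print(budget_alerts(budget=1000, actual=850, forecast=1200))
```

The real service layers notifications (email, SNS) on top of checks like these; the value is catching the forecast overrun while there is still time to react.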

Monitor and visualize your cost optimization data

A major part of AWS cost optimization is closely monitoring your usage and correlating it with your budget. With its numerous AWS integrations, New Relic Infrastructure is in a prime position to help you:

  • Make sure that your assumptions about your cloud spend are playing out as expected

  • Quickly catch and correct any unexpected spikes in spending

  • Start fine-tuning the usage of your cloud-based resources

For example, you can collect data from New Relic’s AWS Billing integration and visualize that data in a number of ways in New Relic Insights. From one dashboard you could track the number of EC2 instances running at any given time, your AWS forecasted vs. actual spend by application, and your performance against your monthly budget. In fact, you can track all that right alongside the performance data of the applications you’re running in AWS, as shown here:

 

New Relic's AWS billing integration displaying cost information and data

 

The dashboard below is a high-level overview of two applications hosted in AWS:

 

Dashboard of a high-level overview of two applications hosted by Amazon Web Services

 

The dashboard displays the number of EC2 instances in each application as well as the Actual and Forecasted costs. Also included is a New Relic Infrastructure chart that shows the average of key CPU metrics for each application. The bottom three charts show AWS Billing information for AWS services tagged with Production and Development, along with a total AWS Monthly Budget.

You can use this information to determine areas where you might be able to reduce your AWS spend. For example, oversized EC2 instances will show low CPU metrics—which could indicate an opportunity to shrink your EC2 instances or reduce the number of instances you’re running.
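That downsizing signal is straightforward to sketch: flag instances whose average CPU utilization stays below a cutoff. The instance IDs and metric samples below are made-up sample data:

```python
# Flag underutilized instances from recorded CPU utilization samples.
# All IDs and percentages here are hypothetical example data.
SAMPLES = {
    "i-0aaa": [8, 12, 9, 11],    # averages ~10%: likely oversized
    "i-0bbb": [55, 61, 58, 60],  # healthy utilization
    "i-0ccc": [3, 2, 4, 5],      # nearly idle
}

def underutilized(samples, cutoff=20.0):
    """Return the IDs whose average CPU utilization falls below cutoff."""
    return sorted(instance_id for instance_id, cpu in samples.items()
                  if sum(cpu) / len(cpu) < cutoff)

print(underutilized(SAMPLES))  # ['i-0aaa', 'i-0ccc']
```

In a real setup the samples would come from your monitoring data over a representative window (including peak traffic), so a low average genuinely means headroom rather than an off-peak snapshot.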

New Relic knows costs are important, so we’ve made it easy to take granular control over how you monitor your AWS cloud services with the ability to configure polling frequency and data collection for cloud integrations. Specifically with New Relic, you can:

  • Reduce the polling intervals for metrics and inventory

  • Set filter conditions to narrow data fetching (for example, by region or by tag)

  • Focus on the cloud service data that’s important to you, and prevent excessive API polling that can hit API usage rate limits and incur unnecessary additional service charges

As an AWS Advanced Technology Partner, New Relic is dedicated to helping you perfect your AWS journey—whether you need to calculate the cost of a cloud migration or just optimize your cloud spend.

Optimize costs—a proactive investment

“You have to spend money to make money,” is a tired cliché from the business world. With AWS cloud cost optimizations, it may be more accurate to say, “You have to spend money to save money.” Some of the strategies and tools outlined here may boost up-front costs, but the more you proactively spend on tools and strategies, the greater your chance of lowering your usage costs down the line.

It’s important to understand that you’ll be able to reduce costs only so far. But once you've right-sized your infrastructure and taken advantage of the appropriate cloud services for your use, you’ll have a cloud environment that wrings the most value out of every dollar. And with monitoring tools like New Relic, you can collect data about your cloud spend and visualize and share it to prove that your cloud spend is justified—and optimized.
And as you do more to optimize your customer experience in the cloud, you’ll be ready for that increase in users and for the next step—to scale your cloud usage, with the right cost optimizations along the way.