For organizations on our usage-based pricing model, several factors can affect your costs. This blog post will help you estimate your New Relic data ingest costs.

Before you begin: Understand the usage-based pricing model

First, you should have a decent understanding of how our usage-based pricing model works and the factors that affect billing. Note that you get 100 GB of ingested data per month for free. Learn more about what you get for free.

Best option: Extrapolate usage from a test New Relic account

You can sign up and use New Relic for free, without ever putting in a credit card. We give you full visibility into your data ingest, so it's easy to figure out where data is coming from and adjust your ingest as you go.

The amount of ingested data can vary from one New Relic organization to the next, based on what kinds of things you are monitoring, what features you are using, the behaviors of your monitored applications, and more. 

It's vital to understand the various factors that affect data ingest so you can accurately predict how much data ingest you'll need. For example, our logs-in-context feature adds to your log ingest, and the percentage increase varies with the size of your log lines: smaller log events result in a larger relative increase. Given such variability, the best way to estimate your costs is to set up a test New Relic account and extrapolate your usage from that. Your actual usage is shown in the data management UI.

Here are some tips for understanding your usage:

  • If you're just signing up for New Relic, consider creating a test installation with an environment similar to what you'll need moving forward. Then use the baseline ingest from the trial to estimate what your full environment would require. 
    • First, create a free account.
    • Follow the steps on the Add data page in New Relic to start the flow of data. Note that APM, infrastructure monitoring, and logs tend to produce the bulk of most organizations’ data, but your usage may vary.
  • If you're an existing customer, use consumption information from your account. You can find this in the data management hub or by querying your data ingest (as shown in the sketch after this list) to estimate new or added ingest.

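If you'd rather script that query than check the UI, here's a minimal sketch using New Relic's NerdGraph GraphQL API from Python (with the third-party requests library). The account ID, API key, and the NRQL event and attribute names (NrConsumption, GigabytesIngested, usageMetric) are assumptions to confirm against the data management docs for your account:

```python
# Sketch: pull the last 30 days of ingest, broken down by source, via NerdGraph.
# Assumptions: a valid account ID and user API key, and that the NrConsumption /
# GigabytesIngested NRQL pattern matches what your data management docs describe.
import requests

ACCOUNT_ID = 1234567   # placeholder: your New Relic account ID
API_KEY = "NRAK-..."   # placeholder: a user API key

NRQL = (
    "SELECT sum(GigabytesIngested) FROM NrConsumption "
    "WHERE productLine = 'DataPlatform' FACET usageMetric SINCE 30 days ago"
)

graphql = """
{
  actor {
    account(id: %d) {
      nrql(query: "%s") {
        results
      }
    }
  }
}
""" % (ACCOUNT_ID, NRQL)

response = requests.post(
    "https://api.newrelic.com/graphql",
    headers={"API-Key": API_KEY, "Content-Type": "application/json"},
    json={"query": graphql},
)
response.raise_for_status()

# Each row is one ingest source with its total GB for the period.
for row in response.json()["data"]["actor"]["account"]["nrql"]["results"]:
    print(row)
```
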
That's all you need to do. This is the best option to understand your usage and estimate your data ingest costs.

Second option: Use the cost estimator spreadsheet

As described earlier, data ingest and associated costs can vary greatly from one account to another, depending on your architecture and New Relic setup. That's why we highly recommend creating a New Relic account to predict usage. But if you don't create a test New Relic account, you can use our cost estimator spreadsheet, which auto-populates a rough estimated cost.

Use the cost estimator spreadsheet for directional guidance only: it combines the simplified inputs you provide with built-in assumptions that may not hold for your use case. The result is an approximation, not a guarantee of your actual costs. To get started, make a copy of this Google spreadsheet.

Note that this spreadsheet lets you choose one of two data options: Data Plus at US$0.55 per GB and the original data option at US$0.35 per GB. If your organization has a different data cost, you'll have to adjust the estimate to match.

To arrive at the data ingest rates used in the estimator, we analyzed about 10,000 existing New Relic customer organizations of various sizes. Note that you get 100 GB of data ingest per month for free.

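The core arithmetic behind the spreadsheet is simple enough to sanity-check by hand. Here's a minimal sketch of that math using only the numbers in this post (100 GB free per month and the two per-GB prices); it ignores extended retention and anything else the spreadsheet factors in:

```python
# Sketch of the core ingest-cost arithmetic: the first 100 GB each month is free,
# and the remainder is billed per GB at the rate for your data option.
FREE_GB_PER_MONTH = 100
PRICE_PER_GB = {"original": 0.35, "data_plus": 0.55}  # US$ per GB, from this post

def monthly_ingest_cost(ingest_gb: float, data_option: str = "original") -> float:
    """Estimated monthly ingest cost in US$, before extended-retention charges."""
    billable_gb = max(0.0, ingest_gb - FREE_GB_PER_MONTH)
    return billable_gb * PRICE_PER_GB[data_option]

# Example: 750 GB of ingest in a month.
print(monthly_ingest_cost(750))               # (750 - 100) * 0.35 = 227.50
print(monthly_ingest_cost(750, "data_plus"))  # (750 - 100) * 0.55 = 357.50
```
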
The next sections explain how to use the various parts of the spreadsheet. Note that the spreadsheet provides only an estimate: it's not a binding billing proposal.

Size your APM agent data ingest

Data ingest rates are measured per agent, not per host. You might have multiple agents monitoring a single host.

In the APM data volume section of the spreadsheet, you estimate whether you have low, medium, or high data ingest rates from APM agents. We've built an average of the data volume for all APM agent types into the spreadsheet calculator. When filling out this section, consider these questions:

  • How many APM agents will you deploy?
  • What types of applications will you monitor? Understanding how the application is used and its complexity is important. For example, e-commerce apps will have much higher throughput than internal applications.
  • Will you use features that contribute to higher data ingest rates? See the criteria questions that follow for more detail.

Criteria for calculating ingest rates per APM agent

In general, use higher data ingest rates for applications that serve as integration/business tiers, are large business-to-consumer (B2C) sites, or have significant custom instrumentation or metrics. That means you should select High in these cases:

  • For apps in production environments where you expect high throughput and a high number of errors.
  • For complex app architectures (for example, a single front-end request spawns multiple back-end requests).
  • If you have a high number of key transactions.
  • If you have custom instrumentation and APM metrics.
  • For transactions with a lot of attributes.

Follow these steps to add your APM agent data ingest to the spreadsheet:

  1. Add the number of APM agents that you will monitor.
  2. Approximate the amount of ingest you'll need for your agents and select one of the options (see the sketch after these steps). In general, if you're on the Standard pricing edition (the edition new organizations start at), you can probably select Low.

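As a rough sketch of how this part of the estimate comes together, the calculation is just your agent count multiplied by an assumed per-agent monthly ingest. The low/medium/high figures below are made-up placeholders, not New Relic's actual rates; use the values built into the spreadsheet instead:

```python
# Hypothetical per-agent ingest rates in GB per month, for illustration only.
# The spreadsheet uses rates New Relic derived from customer data; copy those.
APM_GB_PER_AGENT_PER_MONTH = {"low": 5, "medium": 15, "high": 40}

def apm_ingest_gb(agent_count: int, rate: str = "low") -> float:
    """Rough APM ingest estimate in GB per month."""
    return agent_count * APM_GB_PER_AGENT_PER_MONTH[rate]

print(apm_ingest_gb(20, "low"))  # 20 agents at the placeholder low rate -> 100 GB
```
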
To learn how to manage your data ingest, see Manage data ingest.

Size your infrastructure agent data ingest

Sizing your infrastructure monitoring data ingest depends on the number of hosts and integrations you have, and how much data they're each reporting.

When calculating the volume of your infrastructure ingest, consider these questions:

  • How many infrastructure agents do you think you'll need?
  • Which integrations contribute to higher data ingest rates? The following are some approximate sizes, but also take the size of your environments into account: if they're very large, these rates might not be accurate.
    • On-host integrations (low)
    • Cloud integrations (low to medium)
    • Kubernetes integrations (medium to high)

Use these instructions to add the infrastructure agent data ingest to the spreadsheet:

  1. At step three in the spreadsheet, input your estimated number of infrastructure agents. To determine this, decide how many hosts you'll run infrastructure agents on.
  2. At step four, assign a size for the volume of your infrastructure:
    • Start with a base ingest rate of Low if you don't have any on-host integrations.
    • Adjust to Medium or High depending on how many integrations you run and how much data they report (see the sketch after these steps). Consider whether you have cloud integrations with large footprints, a large number of database on-host integrations, or multiple or large Kubernetes clusters. For example:
      • If you're running two or more low- or medium-impact integrations, such as cloud or on-host integrations, choose the Medium ingest rate.
      • If you're running all three types of integrations (on-host, cloud, and containers) or monitoring very large Kubernetes environments, choose High.

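Here's the same guidance expressed as a small decision rule, which can help if you're sizing many environments at once. The thresholds simply restate the bullets above; treat them as one reading of the guidance, not the spreadsheet's exact logic:

```python
# Sketch: pick an infrastructure ingest rate from the integrations you run.
def infra_ingest_rate(on_host: int, cloud: int, kubernetes: int,
                      very_large_kubernetes: bool = False) -> str:
    """Return "Low", "Medium", or "High" based on integration counts."""
    if very_large_kubernetes or (on_host and cloud and kubernetes):
        return "High"    # all three integration types, or very large Kubernetes
    if on_host + cloud >= 2:
        return "Medium"  # two or more low- or medium-impact integrations
    return "Low"         # few or no on-host integrations

print(infra_ingest_rate(on_host=1, cloud=1, kubernetes=0))  # Medium
```
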
For more on managing your data ingest, see Manage data ingest.

Size your log data ingest

In this section, you add your estimated log ingest in GB.

Because software tools measure log data differently, there isn't an easy way to establish a baseline estimate of log volume in New Relic from an existing implementation.

The best way to estimate your log volume is to send a sample amount of log data. Log events are stored and metered as JSON objects, which are always larger than the original raw log on disk.

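To get a feel for that inflation, you can wrap a sample raw log line as a JSON event and compare byte sizes. The attribute names below are hypothetical enrichment fields, not necessarily the ones New Relic adds; sending a real sample is still the most reliable way to measure your own ratio:

```python
# Sketch: compare a raw log line with the JSON event it becomes when stored.
import json

raw_line = "2024-05-01 12:00:01 INFO payment-service request completed in 183ms"

event = {
    "timestamp": 1714564801000,      # epoch milliseconds
    "message": raw_line,
    "level": "INFO",
    "hostname": "host-01",           # hypothetical enrichment attributes
    "service.name": "payment-service",
}

raw_bytes = len(raw_line.encode())
event_bytes = len(json.dumps(event).encode())
print(raw_bytes, event_bytes, round(event_bytes / raw_bytes, 2))
# The shorter the raw line, the larger the relative overhead of the JSON wrapper.
```
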
To read more on this process, see Manage data ingest.

Data option

Your per-GB ingested data price may vary depending on your organization's contract. The spreadsheet uses our two main price points:

  • Data Plus uses a price of US$0.55 per GB.
  • Our original data option uses our standard price of US$0.35 per GB.

See Data options for more information.

Add additional retention

You can adjust the data retention settings for each data source. To learn about retention and the baselines, see Data retention.

Retention considerations:

  • For each additional 30 days of retention on top of your existing retention (the default retention periods with our original data option, or 90 days with the Data Plus option), the cost is US$0.05 per GB ingested per month.
  • Retention is added evenly across all namespaces up to a maximum of 395 days. Retention cannot be extended for just one namespace (for example, just logs or custom events). The increased rate is applied to all ingested data.

In section six of the spreadsheet, select the additional months of retention that you want.

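The retention surcharge itself is straightforward to compute from the figures above; here's a minimal sketch (per the note above, the rate applies to all ingested data):

```python
# Sketch: US$0.05 per GB ingested per month, for each extra 30 days of retention.
EXTRA_RETENTION_PRICE_PER_GB = 0.05

def extra_retention_cost(ingest_gb: float, additional_months: int) -> float:
    """Estimated monthly surcharge in US$ for extended retention."""
    return ingest_gb * EXTRA_RETENTION_PRICE_PER_GB * additional_months

# Example: 750 GB of monthly ingest kept for two extra months of retention.
print(extra_retention_cost(750, 2))  # 750 * 0.05 * 2 = 75.0
```
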
View the calculated estimate

When you complete the extended retention section, the total estimated price is displayed in the Calculations section of the spreadsheet.

Other potential data ingest costs

Because this billing calculation was designed for newer customers, it uses the implementations and costs that our newer customers often have. For example, we haven't provided cost estimates for browser monitoring, mobile monitoring, network performance monitoring, or other services. 

Note that our basic alerting features and our synthetic monitors don’t contribute to data ingest. 

For many organizations, these other costs will often represent only about 5 percent of the costs examined and calculated in the spreadsheet. But be aware that high levels of data ingest from these other sources can push that share higher.

Other billing factors

Data ingest is one billing factor. To learn about others, such as billable user count, see Usage-based pricing.