Maintaining log configurations manually across hundreds of services leads to configuration drift and operational toil. Practicing log management as code moves you away from reactive administration and toward a self-governing pipeline that:
- Standardizes data: Every service follows the same parsing and naming conventions.
- Ensures performance: High-volume logs are automatically routed to dedicated partitions for faster querying.
- Secures by default: An automated "safety net" makes it impossible for a developer to accidentally leak PII.
- Optimizes costs: Drop rules filter out "noise" (like health checks) before it hits your bill.
By bundling these four pillars into a reusable Terraform module, you ensure that every new service is born into a secure, cost-effective, and high-performance ecosystem without a single manual click.
In this walkthrough, we’ll use the New Relic Terraform provider to build a self-governing log pipeline.
To begin, you'll need to set up your environment to communicate with New Relic. Before writing your log-specific code, please follow the Get Started with New Relic and Terraform guide to configure your provider and credentials correctly.
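As a reference point, a minimal provider configuration looks like the sketch below. The account ID is a placeholder, and var.newrelic_api_key is an assumed input variable holding your User API key.

```hcl
terraform {
  required_providers {
    newrelic = {
      source = "newrelic/newrelic"
    }
  }
}

provider "newrelic" {
  account_id = 123456               # placeholder: your New Relic account ID
  api_key    = var.newrelic_api_key # assumed variable holding a User API key (NRAK-...)
  region     = "US"                 # or "EU", depending on your account
}
```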
A. Log Parsing & Enrichment
Raw logs are just noise until they are structured. Use newrelic_log_parsing_rule to turn free text into actionable attributes. Parsed attributes such as user_id, session_id, or error_code let you filter, facet, and alert on your data with precision instead of merely searching for keywords. For example, the Grok pattern below would parse the line "2024-05-01T12:00:00Z INFO [sess-42] User: u-99 - Checkout complete" into attributes such as session_id ("sess-42") and user_id ("u-99").
resource "newrelic_log_parsing_rule" "app_parsing" {
  name        = "Enrich App Logs"
  description = "Extracts session and user IDs using Grok"
  nrql        = "SELECT * FROM Log WHERE service_name = 'checkout-api'"
  enabled     = true
  lucene      = "logtype:app_messages"

  # Grok pattern to extract IDs from structured strings
  grok = "%%{TIMESTAMP_ISO8601:timestamp} %%{WORD:level} \\[%%{DATA:session_id}\\] User: %%{DATA:user_id} - %%{GREEDYDATA:message}"
}
Note: The following is an example configuration. Because Terraform resources are frequently updated, please always refer to the Terraform Registry: newrelic_log_parsing_rule for the latest argument specifications.
B. Provision a Log Partition for Performance
When you have millions of logs, searching through everything at once is slow. By using newrelic_data_partition_rule, we can architect our data for speed.
resource "newrelic_data_partition_rule" "production_logs" {
  description           = "Route all production app logs to a high-performance partition"
  nrql                  = "environment = 'production'"
  enabled               = true
  target_data_partition = "Log_Production"
  retention_policy      = "STANDARD"
}
By codifying this, any service tagged with environment: production is automatically optimized for the SRE team's search queries.
Note: This configuration is provided as an example. For the most up-to-date options and requirements, please consult the Terraform Registry: newrelic_data_partition_rule.
C. Obfuscation Rules
In an Observability as Code world, obfuscation is a global guardrail. Instead of creating a rule for every service, create a global security module. This ensures that even if a developer forgets to sanitize their code, the pipeline provides a "safety net."
We use newrelic_obfuscation_expression to define the pattern (such as a credit card number or SSN) and newrelic_obfuscation_rule to apply the action.
# Define the pattern once (e.g., credit card numbers)
resource "newrelic_obfuscation_expression" "credit_card" {
  name        = "creditCardPattern"
  description = "Regex to find potential credit card strings"
  regex       = "(?:4[0-9]{12}(?:[0-9]{3})?|[25][1-7][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\\d{3})\\d{11})"
}

# Apply it globally to the 'message' attribute
resource "newrelic_obfuscation_rule" "global_masking" {
  name    = "Global PII Masking"
  filter  = "message LIKE '%card%'" # Target logs likely to contain this data
  enabled = true

  action {
    attribute     = ["message"]
    expression_id = newrelic_obfuscation_expression.credit_card.id
    method        = "MASK" # Options: MASK or HASH_SHA256
  }
}
Note: This configuration is provided as an example. For the most up-to-date options and requirements, please consult the Terraform Registry: newrelic_obfuscation_rule.
D. Implement Pipeline Cloud Rules
Managing ingest rules for 10 or 100 services in the UI is a massive time sink. Instead, use Terraform for_each loops to apply standard pipeline rules across your entire service catalog.
# 1. Define the services you want to create rules for
locals {
  target_services = ["checkout-api", "inventory-service", "payment-gateway"]
}

# 2. Create one gatekeeper rule per service
resource "newrelic_pipeline_cloud_rule" "service_gatekeeper" {
  for_each = toset(local.target_services)

  name        = "Filter-High-Volume-${each.key}"
  description = "Automated ingest filter for ${each.key}"
  account_id  = 123456

  # Pipeline cloud rules use DELETE syntax instead of SELECT;
  # no 'action' or 'enabled' arguments are needed.
  nrql = "DELETE FROM Log WHERE serviceName = '${each.key}' AND environment != 'production'"
}
Use Terraform to apply global "Gatekeeper" rules that filter out noise across your entire fleet.
Note: This configuration is provided as an example. For the most up-to-date options and requirements, please consult the Terraform Registry: newrelic_pipeline_cloud_rule.
Conclusion: The Automated Future
By treating your log management as code, you remove the toil of manual setup. When a developer adds a new service to your Terraform repository, they don't have to worry about creating partitions, setting up parsing, or managing costs; the infrastructure handles it for them.
This "Self-Governing Pipeline" ensures that your observability stack remains as agile as the code it monitors.
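To make the "one module per service" idea concrete, a hypothetical wrapper module bundling the four pillars above could reduce onboarding to a single block. The module path and variable names here are illustrative, not part of the New Relic provider.

```hcl
# Hypothetical wrapper module bundling the parsing, partition,
# obfuscation, and pipeline rules shown in this walkthrough.
module "log_governance" {
  source       = "./modules/log-governance" # assumed local module path
  service_name = "recommendation-engine"
  environment  = "production"
}
```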
Call to Action: Take the Next Step to ‘O11y as Code’
Learn how to treat your telemetry as a first-class citizen of your CI/CD pipeline. Read the full Observability as Code Guide and start scaling your New Relic environment with Terraform today.
If you're new to New Relic, sign up for a free account and experience the power of Observability as Code firsthand.
If you experience problems or have questions, feel free to reach out in our Explorers Hub or in this repository.
The views expressed in this blog are those of the author and do not necessarily reflect the official views of New Relic K.K. This blog may also contain links to external sites; New Relic provides no guarantee regarding the content of those linked sites.