When a production incident hits, the first question is almost always: what do the logs say? Too often, however, you can't query those logs because they sit in remote or siloed local storage. Sure, teams can create custom schemas or do manual re-ingestion, but the clock is ticking. And what about data that cannot be moved because of strict sovereignty mandates?
Maybe you can access the logs, but they are in raw form. How do you make sense of the data without wasting hours writing and testing regular expressions by hand?
You shouldn't have to choose between compliance and visibility, or spend countless hours wrangling raw data just to get the insights you need.
Today, we're announcing two new capabilities that address these issues: Federated Logs (now in preview), which lets you process and query logs locally, and no-code parsing (now generally available), which reduces time-to-insight from hours to minutes without technical barriers or engineering toil.
Federated Logs: Keep data local, unlock full logs in context
New Relic Federated Logs eliminates the compromise of managing multiple siloed log tools and storage sources separately, and of losing the context between those sources while troubleshooting an issue. It delivers a unified view of logs in context no matter where they are stored, while letting you adhere to your data-boundary mandates, without sacrificing visibility or insights. You can now query logs directly in Amazon S3 buckets within your domain or VPC and gain granular insights without the complexity of managing custom schemas or re-ingestion.
Powered by our Pipeline Control Gateway (PCG), this architecture delivers a unified experience that fundamentally differs from legacy approaches:
- Local Data, Global Observability: Process and query logs locally while viewing them in context with the rest of your stack. This grants you granular insights within a single UI, accelerating troubleshooting by eliminating manual processes and context switching.
- Optimized Log Management: Leverage the PCG to automatically process and format logs at the source. This delivers complete value extraction—handling schema management for you—minimizing complexity.
- Residency-by-Design: Ensure strict compliance by keeping raw log data securely within your local customer environment boundaries. This architectural guarantee eliminates the need for data egress, keeping your sensitive data under your control.
New Relic effectively eliminates the "Silo Penalty": you no longer need complex pipelines just to make use of log data that cannot be moved. By bringing the query engine directly to the storage, you can maintain strict governance while ensuring 100% of your telemetry is instantly accessible for incident resolution, the moment you need it, using the same syntax and the same UI, all in context.
Let’s examine how these capabilities are redefining log management strategies.
Unified troubleshooting without context switching
The concept of “Local Data, Global Observability” fundamentally shifts the architecture. In a siloed setup, an engineer might need to search sensitive logs in one tool, then switch to live telemetry to check latency. With Federated Logs, they can query across both stores from a single interface: the system queries the local stores in real time and aggregates the results.
This eliminates the "swivel-chair" effect and grants granular insights within a single UI, significantly accelerating Mean Time to Resolution (MTTR).
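Conceptually, a federated query fans the same search out to each store and merges the results into one timeline. The sketch below is purely illustrative, not New Relic's implementation; the store names, fields, and log lines are all invented:

```python
from datetime import datetime

def query_local_s3(term):
    # Hypothetical local store: logs stay in an S3 bucket inside your boundary.
    return [{"ts": "2024-05-01T12:00:01Z", "source": "s3-local",
             "msg": "auth failed during checkout"}]

def query_live_telemetry(term):
    # Hypothetical platform store: live telemetry ingested normally.
    return [{"ts": "2024-05-01T12:00:03Z", "source": "telemetry",
             "msg": "checkout latency spike"}]

def federated_query(term):
    """Fan the same query out to every store, then merge into one timeline."""
    results = query_local_s3(term) + query_live_telemetry(term)
    return sorted(results,
                  key=lambda r: datetime.fromisoformat(r["ts"].replace("Z", "+00:00")))

for row in federated_query("checkout"):
    print(row["ts"], row["source"], row["msg"])
```

The engineer sees one interleaved result set; where each row physically lives is an implementation detail.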
Automation at the Source
Logs stored locally or in siloed Amazon S3 buckets are often noisy and unstructured. One of the biggest time sinks for DevOps engineers is managing log schemas: if a developer changes a log format, downstream parsers break.
Federated Logs uses the Pipeline Control Gateway (PCG) to handle the heavy lifting at the source. It delivers complete value extraction locally, giving you granular insights and queryable data in context with the rest of your stack, without writing complex regex rules or manually maintaining schemas.
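To see why schema drift is so costly, consider a downstream parser pinned to a single log format. In this hypothetical Python example (the field names and formats are invented), a switch to JSON logging silently breaks extraction:

```python
import re

# Parser written against the original key=value format:
#   "level=INFO user=42 msg=ok"
OLD_PATTERN = re.compile(r"level=(?P<level>\w+) user=(?P<user>\d+)")

def parse(line):
    m = OLD_PATTERN.search(line)
    return m.groupdict() if m else None

print(parse("level=INFO user=42 msg=ok"))      # extracts level and user
# A developer switches the service to JSON logging...
print(parse('{"level": "INFO", "user": 42}'))  # None: the parser silently breaks
```

Nothing errors out; the fields simply stop appearing, which is exactly the kind of quiet breakage that automated, source-side processing avoids.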
Ensuring strict compliance
Compliance should not be an afterthought or a "feature" you toggle on. It must be baked into the architecture. For highly regulated sectors like finance, healthcare, and government, the guarantee that raw data stays local is critical.
Residency-by-Design ensures that raw log data never leaves the local environment unless explicitly configured to do so. Because the heavy processing happens locally (thanks to PCG) and only aggregated insights or specific query results are viewed globally, you eliminate the need for bulk data egress or manual processes.
This approach allows enterprises to retain full control over their data, knowing it isn't being replicated to a third-party cloud in a different jurisdiction. It builds trust between the engineering and compliance arms of the organization by removing the old trade-offs between visibility and compliance.
No-Code Parsing: From raw logs to insights in minutes
Breaking down data silos is only half the battle; you also need to make sense of the data itself. That is where teams pay what we can call the "Expert Tax."
The traditional approach to log parsing often involves wrestling with complex regular expressions (regex), navigating blind spots in data visibility, and risking production stability with untested configurations. It's a friction-heavy workflow that slows down troubleshooting and frustrates even the most seasoned engineers.
This is why so much data remains "dark": few engineers possess the deep regex or coding knowledge required to transform messy, unstructured log data into the right insights. Previously, that meant writing fragile regex or custom code, manual processes that restricted data transformation to a handful of power users.
New Relic No-Code Parsing democratizes this workflow. It is a visual builder embedded directly in the ingestion pipeline that allows any engineer to structure logs in minutes:
- No-code visual builder: Simplifies log attribute extraction with text highlighting, capturing domain knowledge without complex regex or any code at all.
- Automated parsing rules: Intelligent format detection instantly identifies common schemas like JSON and CSV, creating rules for efficient search, analysis, and insights.
- Real-time rule validation: Previews parsing rules against real log samples before saving, eliminating lengthy evaluation cycles.
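The format detection in the second bullet can be approximated in a few lines: try JSON first, then fall back to a simple CSV check. This is a simplified sketch of the general technique, not the product's actual detector:

```python
import csv, io, json

def detect_and_parse(line):
    """Guess a log line's format and return (format, parsed_fields)."""
    try:
        return "json", json.loads(line)
    except json.JSONDecodeError:
        pass
    if "," in line:
        # Treat comma-delimited lines as a single CSV record.
        return "csv", next(csv.reader(io.StringIO(line)))
    return "raw", line

print(detect_and_parse('{"level": "error", "code": 500}'))
print(detect_and_parse("2024-05-01,checkout,503"))
```

A production detector handles many more formats and edge cases, but the principle is the same: the system classifies each line so engineers never have to declare the format themselves.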
For teams looking to mature their observability practice, the question is no longer "how do we parse this log?" but rather "what insights can we unlock next?" Now you are not just managing logs; you are mastering your data to provide the transparency and speed your business needs to stay ahead.
Let's explore how these features help you get there.
Simplicity Meets Power
The first step in unlocking log insights is removing the barrier to entry for parsing. The no-code visual builder changes the game by eliminating the need for manual regex scripting.
Instead of writing complex code to tell the system where a user ID starts and ends, engineers can simply highlight the relevant data directly within the log UI. The system then automatically generates the parsing logic in the background.
Imagine a scenario where a DevOps engineer needs to track latency across a specific microservice. In the past, they would copy a sample log, open a regex tester, write a pattern like (?<=latency=)\d+, test it, and then deploy it.
With visual attribute extraction, they simply:
- Open the log stream.
- Highlight the latency value.
- Name the attribute "service_latency".
- Save the rule.
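For comparison, the manual workflow above boils down to code like the following: the same lookbehind pattern, written and tested by hand. The log line here is invented for illustration:

```python
import re

line = "2024-05-01 12:00:01 service=checkout latency=248 status=200"

# The pattern an engineer would previously write, test, and deploy by hand.
pattern = re.compile(r"(?<=latency=)\d+")

match = pattern.search(line)
service_latency = int(match.group()) if match else None
print(service_latency)  # 248
```

With the visual builder, highlighting "248" and naming it "service_latency" produces equivalent extraction logic without anyone touching a regex tester.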
This shift does more than just save time. It democratizes log management. You no longer need to be a regex wizard to structure data. Junior engineers and developers can create their own parsing rules without blocking the DevOps team, fostering a culture of self-service observability.
Zero-latency analysis
Data is only useful if you can make sense of it, and when a production incident occurs, every second counts.
In legacy systems, you often have to define a schema before you can effectively query and analyze your logs. If an application throws an unexpected error with a new attribute, that attribute may be invisible to your analytics tools until you update the schema. During an incident, you don't have time to re-index data or update configurations.
Automated parsing rules flip this model: the system automatically detects and indexes structure within your logs. If a new field appears in your JSON logs, it is instantly available for filtering and aggregation, so you can immediately search for the new error codes or transaction IDs that triggered the alert. This significantly reduces manual toil during a firefight, letting engineers focus on the "why" of the incident rather than the "how" of data access.
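This schema-on-read idea can be illustrated in a few lines: because structure is detected at read time, a field that appears for the first time is immediately filterable. A hypothetical sketch with invented field names and values:

```python
import json

logs = [
    '{"level": "info", "msg": "ok"}',
    # A brand-new field shows up mid-incident; no schema change required.
    '{"level": "error", "msg": "payment failed", "error_code": "PAY-042"}',
]

# Schema-on-read: parse each line as it is queried; any field present is filterable.
parsed = [json.loads(line) for line in logs]
errors = [e for e in parsed if e.get("error_code") == "PAY-042"]
print(errors[0]["msg"])  # payment failed
```

The filter on `error_code` works the moment the field first appears, with no re-indexing step in between.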
No more evaluation cycles
Perhaps the most critical aspect of log management is reliability. You need to know that your parsing rules will work before they hit production. Real-time parsing-rule validation provides this assurance.
Historically, updating parsing logic was a risky operation. You would write a rule, deploy it, and then watch the incoming data stream to see if it worked. If you made a mistake, you might inadvertently drop logs or corrupt data for the entire team. This "deploy and pray" approach is unacceptable in modern, high-velocity engineering environments.
Real-time validation allows teams to test new parsing rules against live sample data currently flowing through the pipeline. You get immediate feedback on how your rule will behave with actual production logs.
This capability acts as a safety net. It catches regressions and edge cases before they impact your observability data. It empowers teams to iterate on their log configurations fearlessly, knowing that they have verified the outcome against real-world conditions.
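Conceptually, this kind of validation is a dry run of a rule over a sample of live logs, reporting coverage before anything is saved. A minimal sketch of the idea (the sample lines and rule are invented):

```python
import re

def validate_rule(pattern, samples):
    """Dry-run a parsing rule against sample logs; report coverage before saving."""
    rule = re.compile(pattern)
    hits = [line for line in samples if rule.search(line)]
    return len(hits) / len(samples), hits

samples = [
    "latency=120 path=/cart",
    "latency=98 path=/checkout",
    "request timed out",  # an edge case the rule will miss
]
coverage, matched = validate_rule(r"(?<=latency=)\d+", samples)
print(f"rule matched {coverage:.0%} of samples")  # 67%
```

Seeing less than 100% coverage before saving is the whole point: the gap surfaces edge cases while the rule is still a draft, not after it has dropped production data.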
Shifting the log management paradigm
The combination of advanced log capabilities like Federated Logs and no-code parsing represents a turning point in operational efficiency. There is no need to choose between compliance and visibility, and instant access to raw-data insights ensures that 100% of your logs and telemetry are accessible, structured, in context, and actionable the moment an alert is triggered.
New Relic Intelligent Platform fundamentally transforms how teams interact with, manage, and get insights from log data. This isn't just incremental improvement; it is about empowering engineers to stop fighting their tools and start solving problems, even faster than before.
Next steps
Ready to reclaim your engineering time? No-code parsing is now generally available to all New Relic customers, with Federated Logs currently in preview.
Explore how these features can transform your workflow today.
- Get Started: Sign Up for New Relic. Your account includes 100 GB/month of free data ingest.
- Join the Federated Logs preview here.
The views expressed on this blog are those of the author and do not necessarily reflect the views of New Relic. Any solutions offered by the author are environment-specific and not part of the commercial solutions or support offered by New Relic. Please join us exclusively at the Explorers Hub (discuss.newrelic.com) for questions and support related to this blog post. This blog may contain links to content on third-party sites. By providing such links, New Relic does not adopt, guarantee, approve or endorse the information, views or products available on such sites.