Modern distributed apps don’t just fail in code—they often fail at the network layer, where blind spots make it difficult to prove what’s really happening. When latency spikes, transactions slow down, or connections intermittently fail, teams often bounce between APM, infrastructure dashboards, and network tools, trying to manually correlate symptoms into a root cause.

That’s why today, at New Relic Advance, we’re announcing eBPF: Network Metrics, now in preview: a lightweight, kernel-level network visibility capability that requires zero instrumentation and spans the application, infrastructure, and network layers.

What makes it different?

eBPF: Network Metrics captures deep TCP and DNS telemetry directly from the Linux kernel (no code changes) and surfaces it in a unified workflow alongside your APM and infrastructure data—so teams can pinpoint root causes faster, reduce downtime, and cut costly context switching.

Kernel-level TCP and DNS signals, correlated with application entities in a single view.

Stop guessing: get granular TCP + DNS signals that explain “the network why”

Traditional monitoring can tell you “requests are slow,” but it doesn’t always reveal whether the cause is DNS resolution, handshake delays, retransmissions, or abnormal connection behavior—and that’s where troubleshooting often stalls.

With eBPF: Network Metrics, you gain access to granular TCP and DNS signals, including:

  • TCP handshake latency — spot connection setup delays early so you can confirm when “slowness” starts before the request ever reaches your service.
  • Retransmissions — identify packet loss or unstable paths that silently degrade performance and drive tail latency, even when CPU/memory look normal.
  • Abnormal closures — catch unexpected disconnect patterns that can trigger intermittent errors and hard-to-reproduce timeouts in distributed systems.
  • DNS resolution failures — quickly pinpoint name-resolution issues that prevent services from reaching dependencies, without guessing whether the app or provider is at fault.
  • Socket-level errors — specific error codes that differentiate between a network path issue and a server configuration issue.
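To make the first of these signals concrete, here is a minimal userspace sketch of what "TCP handshake latency" actually measures: the time for `connect()` to complete the three-way handshake. This is illustrative only and is not part of the New Relic agent — eBPF captures the same quantity in-kernel, without modifying or wrapping the application.

```python
# Illustrative only: measure TCP handshake latency from userspace.
# eBPF: Network Metrics observes the same quantity in-kernel via
# tracing hooks; this sketch just shows what the metric means.
import socket
import time

def handshake_latency_ms(host: str, port: int) -> float:
    """Time how long connect() takes, i.e. the TCP three-way handshake."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        start = time.perf_counter()
        s.connect((host, port))
        return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    # Spin up a throwaway local listener so the example is self-contained.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    _, port = server.getsockname()
    print(f"handshake latency: {handshake_latency_ms('127.0.0.1', port):.3f} ms")
    server.close()
```

Against a local listener this is sub-millisecond; over a congested or lossy path, handshake latency and retransmissions are exactly the signals that climb before request-level metrics explain why.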

Together, these signals help you confirm (or rule out) network behavior as the root cause—fast—so you can move from “we think it’s the network” to “here’s the evidence.”

Example scenario:
A checkout service slows down, but CPU/memory are fine and traces don’t point to a code bottleneck. Network Metrics shows rising handshake latency and retransmissions, confirming the issue sits at the network layer—not in application code.


Connect the dots automatically with process-level attribution

Even when teams suspect “the network,” it can take hours to prove where the issue is surfacing and how broadly it’s impacting services, especially in Kubernetes-heavy environments where workloads are ephemeral and traffic patterns shift constantly.

eBPF: Network Metrics attributes TCP/DNS behavior directly to the originating process or thread, removing the need for manual tagging and cross-tool correlation.
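To see why in-kernel attribution matters, consider what userspace tools have to do instead: the kernel’s socket table (`/proc/net/tcp`) records connections by inode, and a tool like `ss -p` must join that against every process’s `/proc/<pid>/fd` to find the owner. The sketch below (illustrative only, with a hard-coded sample line) decodes one entry of that table; an eBPF probe skips this join entirely because the PID is in hand at event time.

```python
# Illustrative only: decode one line of /proc/net/tcp, the kernel's
# TCP socket table. Userspace tools must join the inode in this table
# against /proc/<pid>/fd to attribute a socket to a process; eBPF
# attributes in-kernel, at event time, with the PID already known.
import socket
import struct

def decode_addr(hex_addr: str) -> tuple[str, int]:
    """Turn '0100007F:1F90' into ('127.0.0.1', 8080)."""
    ip_hex, port_hex = hex_addr.split(":")
    # /proc/net/tcp stores IPv4 addresses as little-endian hex.
    ip = socket.inet_ntoa(struct.pack("<I", int(ip_hex, 16)))
    return ip, int(port_hex, 16)

# A sample line in /proc/net/tcp layout:
# sl local remote st tx:rx tr:when retrnsmt uid timeout inode ...
sample = ("   0: 0100007F:1F90 00000000:0000 0A 00000000:00000000 "
          "00:00000000 00000000  1000        0 12345 1")
fields = sample.split()
local, remote, inode = decode_addr(fields[1]), decode_addr(fields[2]), fields[9]
print(local, remote, inode)  # ('127.0.0.1', 8080) ('0.0.0.0', 0) 12345
```

On a busy Kubernetes node this inode-to-PID join must be repeated constantly as pods churn, which is exactly the manual correlation that process-level attribution removes.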

That means platform engineers and SREs can answer questions like:

  • Which service/process is experiencing DNS failures, and on which hosts/nodes?
  • Are retransmission spikes isolated to one workload, or impacting multiple services that share a path?
  • Is this latency caused by handshake delays and packet loss, or is it application logic?

Troubleshoot in one place: the Network View is integrated with APM entities

Most network monitoring experiences still leave teams bouncing across tools and dashboards. In contrast, eBPF: Network Metrics surfaces data in a new Network View tab across both eBPF and APM entities, enabling seamless troubleshooting across layers.

This is specifically designed to reduce context switching and shorten the path from symptom to root cause.

Correlate network behavior with application and infrastructure signals

Network Metrics isn’t just “more telemetry.” The value comes from correlating network behavior with application and infrastructure performance to make root cause identification faster and less manual.

In other words, your team spends less time building hypotheses and more time validating the actual cause.

A single, lightweight install across hosts and Kubernetes—built for cloud-native complexity

As microservices and Kubernetes adoption increase, network bottlenecks become harder to detect and more expensive to troubleshoot.

eBPF: Network Metrics is designed as a single, lightweight, language-agnostic approach that works across hosts and Kubernetes, helping teams close visibility gaps without deploying a patchwork of language-specific agents and vendor tools.


Getting started with eBPF: Network Metrics

Step 1: Enable the preview

Sign up for the preview, and turn on eBPF: Network Metrics by following the setup steps in the docs.

Step 2: Install the eBPF agent where you run workloads

Choose the path that matches your environment: install directly on Linux hosts, or deploy across your Kubernetes clusters.

Step 3: Validate data and use Network View to troubleshoot

Once data is flowing, open an APM or eBPF entity and use Network View to correlate:

  • latency spikes ↔ TCP handshake latency/retransmissions
  • intermittent failures ↔ abnormal closures
  • timeouts ↔ DNS resolution failures
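For the last correlation, it helps to see what a DNS resolution failure looks like from the application’s side. A minimal, illustrative sketch (not New Relic code): `getaddrinfo` on a name under the RFC 2606 reserved `.invalid` TLD cannot resolve, and fails with `socket.gaierror` — the same class of failure that Network View lets you attribute to a specific process instead of inferring from scattered timeouts.

```python
# Illustrative only: what a DNS resolution failure looks like at the
# application layer. The ".invalid" TLD is reserved by RFC 2606 and
# can never resolve. Kernel-level telemetry pins this same failure to
# the process that issued the lookup, with no try/except guesswork.
import socket

def resolves(name: str) -> bool:
    """Return True if `name` resolves, False on a DNS failure."""
    try:
        socket.getaddrinfo(name, 443, type=socket.SOCK_STREAM)
        return True
    except socket.gaierror:
        return False

# Expected to fail: reserved TLD, so resolution cannot succeed.
print(resolves("service.example.invalid"))
```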

That’s it—three steps to go from “network blind spot” to actionable, attributed network signals.