
The rapid adoption of AI, particularly agentic AI systems, has introduced a new layer of complexity to application performance management. While Large Language Model (LLM) monitoring has gained traction, a critical visibility gap has emerged around the Model Context Protocol (MCP). Since its release, MCP has quickly become the de facto standard for agentic AI, enabling intelligent agents to interact dynamically with a variety of tools and services. But while MCP simplifies AI integrations, it also introduces new observability challenges.

Today, we’re announcing groundbreaking support for MCP within our comprehensive AI Monitoring solution, seamlessly integrated with our industry-leading Application Performance Monitoring (APM).

The Observability Challenge with Agentic AI and MCP

Agentic AI applications, where AI agents dynamically interact with various tools and services, often rely on MCP servers to facilitate these interactions. However, these MCP servers have historically operated as "black boxes," obscuring critical insights into the performance and behavior of the AI layer. This lack of visibility presents significant challenges for both agent developers and MCP service providers:

  • For Agent Developers: Understanding which tools an AI agent selects for a given prompt, the sequence of tool invocations, and the duration of each step has been a laborious, often manual, process. Pinpointing performance bottlenecks or error sources within the AI's decision-making and execution flow has been exceptionally difficult.
  • For MCP Service Providers: Gaining insights into how their MCP services are being utilized, identifying performance bottlenecks within their infrastructure, or understanding tool effectiveness was a major hurdle. This often necessitated complex, custom instrumentation, adding significant operational overhead.

The result was a fragmented view of AI application performance, often requiring "screen-swiveling" between disparate monitoring tools and a significant drain on developer and operations teams' time.

Bridging the Gap: New Relic's MCP Integration

Our new MCP support directly addresses these challenges by providing deep, actionable insights into the entire lifecycle of an MCP request. This integration allows developers and service providers to:

  1. Gain Instant MCP Tracing Visibility:
    • Automatically instrument and observe the full invocation lifecycle of an MCP request.
    • Visualize the specific tools invoked by an AI agent, their call sequences, and execution durations through clear waterfall diagrams.
    • Understand the decision-making process of the AI agent as it interacts with various services.
  2. Enable Proactive MCP Optimization:
    • Analyze agent tool selection patterns for specific prompts, allowing for evaluation of tool choices and effectiveness.
    • Track key performance indicators (KPIs) such as tool usage patterns, latency, and error rates associated with MCP interactions.
    • Identify and optimize underperforming tools or inefficient agent strategies within the MCP service.
  3. Provide Intelligent AI Monitoring Context:
    • Crucially, we are correlating MCP performance data directly with the broader application ecosystem. This means seamless correlation between AI interactions and the performance of backend services, databases, microservices, and message queues.
    • This holistic view eliminates data silos and enables true end-to-end observability, allowing teams to quickly pinpoint the root cause of issues, whether it resides within the AI layer, the MCP service, or a traditional backend component.
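To make the tracing concepts above concrete, here is a minimal, self-contained Python sketch of the kind of data an MCP trace captures: each tool invocation's name, call order, duration, and error status — the raw material for a waterfall view. This is an illustrative toy recorder, not New Relic's actual agent API; the `MCPTraceRecorder` class and the sample tools are hypothetical.

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToolSpan:
    """One recorded MCP tool invocation: name, start time, duration, error."""
    tool: str
    start: float
    duration_ms: float = 0.0
    error: Optional[str] = None

class MCPTraceRecorder:
    """Records each tool invocation within an MCP request, preserving
    call order so the sequence can be rendered as a waterfall."""
    def __init__(self):
        self.spans: list[ToolSpan] = []

    def record(self, tool_name, fn, *args, **kwargs):
        span = ToolSpan(tool=tool_name, start=time.perf_counter())
        try:
            return fn(*args, **kwargs)
        except Exception as exc:
            span.error = type(exc).__name__  # captured for error-rate KPIs
            raise
        finally:
            span.duration_ms = (time.perf_counter() - span.start) * 1000
            self.spans.append(span)

# Hypothetical MCP tools an agent might invoke for a prompt
def search_docs(query):
    return [f"doc matching '{query}'"]

def summarize(docs):
    return " / ".join(docs)

recorder = MCPTraceRecorder()
docs = recorder.record("search_docs", search_docs, "checkout latency")
summary = recorder.record("summarize", summarize, docs)

# Print the waterfall: tool sequence, per-step duration, status
for s in recorder.spans:
    print(f"{s.tool}: {s.duration_ms:.2f} ms [{s.error or 'ok'}]")
```

In a production setup, each span would instead be emitted to the observability backend and correlated with the surrounding APM transaction, which is what lets the tool-level timings line up against backend services and databases in a single trace.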