The rapid adoption of AI, particularly agentic AI systems, has added a new layer of complexity to application performance management. While Large Language Model (LLM) monitoring has gained traction, a critical visibility gap has emerged around the Model Context Protocol (MCP), a foundational standard for agentic AI interactions. Since its release, MCP has quickly become the gold standard for agentic AI, enabling intelligent agents to interact dynamically with a variety of tools and services. But while MCP simplifies AI integrations, it also introduces new observability challenges.
Today, we’re announcing groundbreaking support for MCP within our comprehensive AI Monitoring solution, seamlessly integrated with our industry-leading Application Performance Monitoring (APM).
The Observability Challenge with Agentic AI and MCP
Agentic AI applications, where AI agents dynamically interact with various tools and services, often rely on MCP servers to facilitate these interactions. However, these MCP servers have historically operated as "black boxes," obscuring critical insights into the performance and behavior of the AI layer. This lack of visibility presents significant challenges for both agent developers and MCP service providers:
- For Agent Developers: Understanding which tools an AI agent selects for a given prompt, the sequence of tool invocations, and the duration of each step has been a laborious, often manual, process. Pinpointing performance bottlenecks or error sources within the AI's decision-making and execution flow has been exceptionally difficult.
- For MCP Service Providers: Gaining insights into how their MCP services are being utilized, identifying performance bottlenecks within their infrastructure, or understanding tool effectiveness was a major hurdle. This often necessitated complex, custom instrumentation, adding significant operational overhead.
The result was a fragmented view of AI application performance, often requiring "screen-swiveling" between disparate monitoring tools and a significant drain on developer and operations teams' time.
Bridging the Gap: New Relic's MCP Integration
Our new MCP support directly addresses these challenges by providing deep, actionable insights into the entire lifecycle of an MCP request. This integration allows developers and service providers to:
- Gain Instant MCP Tracing Visibility:
- Automatically instrument and observe the full invocation lifecycle of an MCP request.
- Visualize the specific tools invoked by an AI agent, their call sequences, and execution durations through clear waterfall diagrams.
- Understand the decision-making process of the AI agent as it interacts with various services.
- Enable Proactive MCP Optimization:
- Analyze agent tool selection patterns for specific prompts, allowing for evaluation of tool choices and effectiveness.
- Track key performance indicators (KPIs) such as tool usage patterns, latency, and error rates associated with MCP interactions.
- Identify and optimize underperforming tools or inefficient agent strategies within the MCP service.
- Provide Intelligent AI Monitoring Context:
- Crucially, we are correlating MCP performance data directly with the broader application ecosystem. This means seamless correlation between AI interactions and the performance of backend services, databases, microservices, and message queues.
- This holistic view eliminates data silos and enables true end-to-end observability, allowing teams to quickly pinpoint the root cause of issues, whether it resides within the AI layer, the MCP service, or a traditional backend component.
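To make the KPIs above concrete, here is a minimal sketch of tracking tool usage counts, latency, and error rates around MCP tool calls in plain Python. The `MCPToolMetrics` class and its method names are illustrative inventions for this post, not the New Relic agent's API; the agent collects this kind of data automatically once MCP instrumentation is enabled.

```python
import time
from collections import defaultdict


class MCPToolMetrics:
    """Illustrative tracker for per-tool usage, latency, and error-rate KPIs."""

    def __init__(self):
        self.calls = defaultdict(int)
        self.errors = defaultdict(int)
        self.total_seconds = defaultdict(float)

    def record(self, tool_name, fn, *args, **kwargs):
        """Invoke an MCP tool callable and record its duration and outcome."""
        start = time.perf_counter()
        self.calls[tool_name] += 1
        try:
            return fn(*args, **kwargs)
        except Exception:
            self.errors[tool_name] += 1
            raise
        finally:
            self.total_seconds[tool_name] += time.perf_counter() - start

    def summary(self, tool_name):
        """Return usage count, error rate, and average latency for one tool."""
        calls = self.calls[tool_name]
        return {
            "calls": calls,
            "error_rate": self.errors[tool_name] / calls if calls else 0.0,
            "avg_latency_s": self.total_seconds[tool_name] / calls if calls else 0.0,
        }


metrics = MCPToolMetrics()

# One successful call and one failing call to a hypothetical "search" tool.
metrics.record("search", lambda q: q.upper(), "hello")
try:
    metrics.record("search", lambda q: 1 / 0, "boom")
except ZeroDivisionError:
    pass

print(metrics.summary("search")["calls"])       # 2
print(metrics.summary("search")["error_rate"])  # 0.5
```

In the managed version of this workflow, the same measurements appear as waterfall spans and aggregate metrics rather than a hand-rolled dictionary, correlated with the surrounding backend services.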
Next Steps
New Relic AI Monitoring with MCP support is now available in Python Agent version 10.13.0, with support for additional languages planned for future releases.
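Getting started looks roughly like the following. This is a sketch, not step-by-step instructions: the package pin reflects the version stated above, and the `ai_monitoring.enabled` setting and its environment-variable form should be verified against the current Python agent configuration documentation for your agent version.

```shell
# Upgrade to a Python agent version with MCP support (10.13.0 or later).
pip install --upgrade 'newrelic>=10.13.0'

# Enable AI monitoring, either via ai_monitoring.enabled = true in
# newrelic.ini or (assumed equivalent) the environment variable below.
export NEW_RELIC_AI_MONITORING_ENABLED=true
```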
Visit newrelic.com/platform/ai-monitoring for additional information.
The views expressed on this blog are those of the author and do not necessarily reflect the views of New Relic. Any solutions offered by the author are environment-specific and not part of the commercial solutions or support offered by New Relic. Please join us exclusively at the Explorers Hub (discuss.newrelic.com) for questions and support related to this blog post. This blog may contain links to content on third-party sites. By providing such links, New Relic does not adopt, guarantee, approve, or endorse the information, views, or products available on those sites.