Extend full-stack observability to machine learning with model performance monitoring

Today, New Relic is extending its observability experience with a new offering that helps artificial intelligence (AI) and machine learning (ML) teams break down visibility silos. This new capability gives AI/ML and DevOps teams one place to monitor and visualize critical signals such as recall, precision, and model accuracy alongside their apps and infrastructure. 

Start measuring your ML performance in minutes. In this video, see how to set up New Relic ML model performance monitoring for fast time-to-value of your AI and ML applications:

Bring your ML model data into New Relic One

AI/ML engineers and data scientists can now send model performance telemetry data into New Relic One and—with integrations to leading machine learning operations (MLOps) platforms—proactively monitor ML model issues in production. You can empower your data teams with full visibility, with custom dashboards and visualizations that can show you the performance of your ML investments in action.
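One lightweight way to send that telemetry, sketched below, is to post gauge metrics to New Relic's public Metric API. The model name, attribute keys, and metric values here are illustrative placeholders, not a prescribed schema:

```python
# Sketch: send model-quality metrics to the New Relic Metric API.
# The endpoint and payload shape follow New Relic's documented Metric API;
# MODEL_NAME and the attribute names are hypothetical examples.
import json
import time
import urllib.request

METRIC_API_URL = "https://metric-api.newrelic.com/metric/v1"
MODEL_NAME = "churn-classifier"  # hypothetical model name

def build_payload(accuracy: float, precision: float, recall: float) -> list:
    """Build a Metric API payload with gauge metrics for one model."""
    timestamp_ms = int(time.time() * 1000)
    common_attrs = {"model.name": MODEL_NAME, "environment": "production"}
    return [{
        "metrics": [
            {"name": f"ml.model.{name}", "type": "gauge", "value": value,
             "timestamp": timestamp_ms, "attributes": common_attrs}
            for name, value in [("accuracy", accuracy),
                                ("precision", precision),
                                ("recall", recall)]
        ]
    }]

def send(payload: list, api_key: str) -> None:
    """POST the payload; Api-Key is a New Relic ingest license key."""
    req = urllib.request.Request(
        METRIC_API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "Api-Key": api_key},
    )
    urllib.request.urlopen(req)  # raises on HTTP errors

payload = build_payload(accuracy=0.94, precision=0.91, recall=0.89)
print(payload[0]["metrics"][0]["name"])  # ml.model.accuracy
```

Once these metrics land in New Relic One, they can be queried and charted like any other telemetry.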

Complete visibility into ML-powered applications

Unlike ordinary software, AI and ML models are based on both code and the underlying data. Because the real world is constantly changing, models trained on static data can "drift" over time and become less accurate. Monitoring the performance of an ML model in production is essential to continue delivering relevant customer experiences.
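Drift like this can be quantified directly from the data. The sketch below uses the Population Stability Index (PSI), one common drift statistic that compares a production sample against the training distribution; the thresholds in the docstring are industry rules of thumb, not New Relic defaults:

```python
# Sketch: quantify data drift with the Population Stability Index (PSI).
# Bin edges are derived from the training (expected) sample.
import math

def psi(expected: list, actual: list, buckets: int = 10) -> float:
    """PSI between a training (expected) and production (actual) sample.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / buckets

    def bucket_fractions(sample):
        counts = [0] * buckets
        for x in sample:
            idx = min(int((x - lo) / width), buckets - 1) if width else 0
            counts[max(idx, 0)] += 1
        # floor each fraction at a tiny value to avoid log(0)
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = bucket_fractions(expected), bucket_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [0.1 * i for i in range(100)]        # training distribution
prod_same = train                            # identical: no drift
prod_shifted = [x + 3.0 for x in train]      # shifted: major drift
print(psi(train, prod_same))   # 0.0
```

Tracking a statistic like this per feature, per time window, is what makes drift visible before accuracy visibly degrades.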

By using New Relic One for your ML model performance monitoring, your development and data science teams can:

  • Bring your own ML data or integrate with data science platforms and monitor ML models and interdependencies with the rest of the application components, including infrastructure, to solve problems faster.
  • Create custom dashboards that build trust in model output and surface insights for improving model accuracy.
  • Apply predictive alerts to ML models from New Relic Alerts and Applied Intelligence to detect unusual changes and unknowns early before they impact customers.
  • Review ML model telemetry data for critical signals to maintain high-performing models.
  • Collaborate in a production environment and contextualize alerts, notifications, and incidents before they have an impact on the business.
  • Access data that supports data-driven decisions about where to invest in innovation, capacity planning, reliability, and customer experience.
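The predictive-alerting idea above can be illustrated with a small rolling-baseline detector. New Relic Alerts and Applied Intelligence evaluate conditions server-side; this is only a sketch of the underlying logic, with the window size and deviation threshold chosen arbitrarily:

```python
# Sketch: flag metric readings that deviate from a rolling baseline by more
# than k standard deviations -- the essence of an "unusual change" alert.
from collections import deque
from statistics import mean, stdev

def make_detector(window: int = 20, k: float = 3.0):
    history = deque(maxlen=window)

    def observe(value: float) -> bool:
        """Return True if value is anomalous versus the rolling baseline."""
        anomalous = False
        if len(history) >= 5:  # wait for a minimal baseline
            mu, sigma = mean(history), stdev(history)
            anomalous = sigma > 0 and abs(value - mu) > k * sigma
        history.append(value)
        return anomalous

    return observe

observe = make_detector()
readings = [0.94, 0.95, 0.93, 0.94, 0.95, 0.94, 0.95, 0.70]  # sudden drop
flags = [observe(r) for r in readings]
print(flags[-1])  # True: the 0.70 accuracy reading breaches the baseline
```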

"Monitoring is fast emerging as one of the biggest and most important aspects of MLOps, and I'm excited to see New Relic launch their AI observability platform. As companies expand into more complex use cases for AI/ML, full-stack ML application observability needs to be a key focus for any advanced team, and they need the right tools to keep track of their models as they make key decisions in production. At the AI Infrastructure Alliance, we're dedicated to bringing together the essential building blocks for the artificial intelligence applications of today and tomorrow, and we are happy to partner with New Relic on that mission."

Get instant value from machine learning model telemetry 

With 100 GB free per month and ready-made libraries, you can bring your own ML model inference and performance data directly from a Jupyter notebook or cloud service into New Relic in minutes, and obtain metrics such as summary statistics and feature and prediction distributions.
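As a rough sketch of the kind of summary a notebook cell might compute per inference batch before shipping it (the field names here are illustrative, not a required schema):

```python
# Sketch: descriptive statistics plus a coarse prediction-score histogram
# for one batch of inferences, assuming scores in [0, 1].
from statistics import mean, median

def summarize_predictions(scores: list, bins: int = 5) -> dict:
    """Summary stats and a fixed-width histogram over [0, 1]."""
    counts = [0] * bins
    for s in scores:
        counts[min(int(s * bins), bins - 1)] += 1
    return {
        "count": len(scores),
        "mean": round(mean(scores), 4),
        "median": round(median(scores), 4),
        "min": min(scores),
        "max": max(scores),
        "histogram": counts,  # counts per equal-width bin over [0, 1]
    }

batch = [0.05, 0.12, 0.55, 0.61, 0.97, 0.99]
summary = summarize_predictions(batch)
print(summary["histogram"])  # [2, 0, 1, 1, 2]
```

A summary like this, sent on every batch, is what lets a dashboard plot prediction distributions over time.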

In addition, New Relic’s open-source ecosystem offers flexible quickstarts so you can start getting value from your ML model data faster. A wide range of integrations with leading data science platforms like AWS SageMaker, DataRobot (Algorithmia), Aporia, Superwise, Comet, DAGsHub, Mona, and TruEra include pre-configured performance dashboards and other observability building blocks that give you instant visibility into your models. Getting value from your ML model data has never been easier with New Relic One.  

Inaccurate ML recommendations or predictions can cost a company millions. New Relic Model Performance Monitoring enables teams to measure ML model performance for maximum return on investments.

Get started with machine learning model performance monitoring

We’re committed to making observability a daily best practice for every engineer. With the launch of New Relic ML Model Performance Monitoring, we deliver a unified data observability platform that gives ML/AI and DevOps teams unprecedented visibility into the performance of their ML-based apps. With everything you need in one place, New Relic is expanding observability into the future.

All the available New Relic ML Model Performance Monitoring observability integrations can be found as part of the New Relic Instant Observability ecosystem, with more on the way. 

For more information on how to bring your ML model telemetry to New Relic One, check out our Python library and notebook example of an XGBoost model, including a step-by-step explanation of the integration.
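The integration pattern the notebook demonstrates can be sketched as wrapping a model's predict call so each inference is recorded alongside its inputs. A trivial stand-in model is used below so the example runs without XGBoost installed; in the real notebook the wrapped object is a trained XGBoost model, and the records go to New Relic rather than into a local list:

```python
# Sketch: wrap predict() to capture inference telemetry. The stand-in model
# and feature names are hypothetical; records is a local stand-in for
# sending telemetry to New Relic.
class MonitoredModel:
    def __init__(self, model):
        self.model = model
        self.records = []  # stand-in for a New Relic telemetry sink

    def predict(self, features: dict) -> float:
        prediction = self.model(features)
        self.records.append({"features": features, "prediction": prediction})
        return prediction

def toy_model(features: dict) -> float:
    """Hypothetical stand-in for a trained XGBoost model's predict."""
    return 1.0 if features["tenure_months"] < 6 else 0.0

monitored = MonitoredModel(toy_model)
monitored.predict({"tenure_months": 3})
monitored.predict({"tenure_months": 24})
print(len(monitored.records))  # 2
```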