
As organizations integrate AI into their tech stacks to improve efficiency and provide better customer experiences, a variety of tools are emerging for developing and deploying AI applications. However, bringing these applications from prototype to production is not without challenges. AI components like large language models and vector databases, while powerful, can be opaque and may lead to issues such as inaccurate or biased results, security vulnerabilities, and new, overwhelming volumes of telemetry data to analyze.

To help solve this problem, we announced New Relic AI monitoring (AIM). This solution offers end-to-end visibility into your AI-powered applications, empowering you to optimize performance, quality, and cost in your AI application development. With New Relic AI monitoring, you can adopt AI with confidence and harness its full potential to drive business growth and efficiency.

To extend observability into specialized tools and technologies, we have developed over 50 quickstart integrations to help provide out-of-the-box visibility into each layer of your AI tech stack, including:

  • Foundational models (for example, large language models or LLMs)
  • Orchestration frameworks
  • ML libraries
  • Model serving
  • Storage and registries
  • Infrastructure

Orchestration framework: LangChain

Orchestration frameworks like LangChain allow developers to chain together different components of an AI application, such as data processing, model invocation, and post-processing. This makes it easier to build and deploy AI applications that are modular, extensible, and scalable. 

LangChain is a popular orchestration framework because of its flexibility and ease of use. It provides a library of pre-built components that can be combined to create custom AI applications.

The New Relic LangChain quickstart integration provides a single view of all components of your AI application, including models, chains, and tools. The pre-built quickstart dashboard visualizes prediction times, token usage, and even breaks down the contents of prompts and responses. This allows you to track performance and throughput, identify bottlenecks, and optimize your application for efficiency. Moreover, the integration can automatically detect different models such as OpenAI and Hugging Face so you can compare and optimize your LLMs.
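The chaining pattern that frameworks like LangChain implement can be illustrated with a minimal, framework-free sketch. The `Chain` class, `preprocess`, `fake_llm`, and `postprocess` below are hypothetical stand-ins for illustration, not LangChain APIs; the per-step timings it records are the kind of data a monitoring integration would surface.

```python
import time

class Chain:
    """Minimal stand-in for an orchestration chain: runs steps in order
    and records per-step latency (hypothetical, not a LangChain API)."""

    def __init__(self, *steps):
        self.steps = steps
        self.timings = []

    def run(self, value):
        for step in self.steps:
            start = time.perf_counter()
            value = step(value)
            self.timings.append((step.__name__, time.perf_counter() - start))
        return value

def preprocess(text):
    # Data processing: normalize the prompt.
    return text.strip().lower()

def fake_llm(prompt):
    # Model invocation: a placeholder for a real LLM call.
    return f"echo: {prompt}"

def postprocess(response):
    # Post-processing: format the model output.
    return response.upper()

chain = Chain(preprocess, fake_llm, postprocess)
result = chain.run("  Hello, world  ")
print(result)  # ECHO: HELLO, WORLD
```

A real chain would swap `fake_llm` for an actual model call; the monitoring hooks stay the same, which is why an integration at the orchestration layer can see every component in one place.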

Foundational AI models: OpenAI, Amazon Bedrock, PaLM 2, Hugging Face

AI models are algorithms that have been trained on data to perform specific tasks, such as recognizing objects in images, translating languages, or generating text. OpenAI offers various popular generative AI models, as well as APIs that you can use to integrate this functionality into your applications. New Relic introduced the industry’s first OpenAI observability integration, which helps you monitor usage, analyze model performance, and optimize costs for your OpenAI applications. By adding just two lines of code, you can gain access to key performance metrics such as cost, requests, response time, and sample inputs and outputs.
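The cost side of that monitoring boils down to simple arithmetic over the token counts each API response reports. Here is a sketch of that calculation; the per-1K-token prices are illustrative placeholders, not current OpenAI pricing, and `request_cost` is a hypothetical helper, not part of any SDK.

```python
# Illustrative per-1K-token prices (placeholders, not real OpenAI pricing).
PRICE_PER_1K = {
    "prompt_tokens": 0.0015,
    "completion_tokens": 0.002,
}

def request_cost(usage):
    """Estimate the cost of one API call from its token usage,
    mirroring the shape of the `usage` object API responses include."""
    return sum(
        usage[kind] / 1000 * price
        for kind, price in PRICE_PER_1K.items()
    )

usage = {"prompt_tokens": 500, "completion_tokens": 200}
cost = request_cost(usage)
print(f"${cost:.4f}")
```

Summing this per request is how a dashboard turns raw token telemetry into a running cost metric you can alert on.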

New Relic also integrates with Amazon Bedrock, a fully managed AWS service that makes building and scaling generative AI applications more accessible by providing API access to foundation models from leading AI companies, including AI21 Labs, Anthropic, Cohere, and Stability AI. With the Amazon Bedrock integration, you can easily monitor the performance and usage of the Amazon Bedrock API and its connected LLMs.

Furthermore, if you chain multiple models together in LangChain, you can compare the performance of your LLMs in a single pre-built dashboard. This includes integrations with popular models and model-hosting platforms such as PaLM 2 and Hugging Face.

ML libraries: PyTorch, Keras, TensorFlow, scikit-learn

Machine learning libraries are an essential component of the AI stack, providing the tools developers need to build and train AI models. New Relic integrates with popular ML libraries such as PyTorch, Keras, TensorFlow, and scikit-learn to tackle model degradation by monitoring performance and health metrics such as inference latency, memory usage, and data drift. This enables early detection of issues and helps you identify when to optimize your models.
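Two of the metrics named above, inference latency and data drift, can be computed with nothing but the standard library. The sketch below is a conceptual illustration, not New Relic instrumentation: `monitor_inference` times a call, and `mean_drift` measures how far live feature values have shifted from the training baseline, in units of the training standard deviation.

```python
import statistics
import time

def monitor_inference(model, batch):
    """Time a single inference call -- the latency metric an
    ML-library integration would report."""
    start = time.perf_counter()
    output = model(batch)
    return output, time.perf_counter() - start

def mean_drift(training_values, live_values):
    """Crude data-drift signal: shift of the live feature mean,
    expressed in training standard deviations."""
    baseline = statistics.mean(training_values)
    spread = statistics.stdev(training_values)
    return abs(statistics.mean(live_values) - baseline) / spread

# A stand-in "model" that squares its inputs.
model = lambda xs: [x * x for x in xs]

output, latency = monitor_inference(model, [1, 2, 3])
drift = mean_drift([10, 11, 9, 10, 12], [14, 15, 13, 16, 14])
print(output, round(drift, 2))
```

In practice you would alert when the drift score crosses a threshold, which is the "early detection" the integrations are built around.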

Model serving: Amazon SageMaker, Azure Machine Learning

The model serving and deployment layer makes AI models available to users: it loads and runs the models and provides an API or other interface for interacting with them. Because developing AI applications is complex and resource-intensive, it calls for a centralized platform for managing the entire AI development lifecycle, from design to deployment.

Popular AI platforms that offer this functionality include Amazon SageMaker and Azure Machine Learning. The Amazon SageMaker and Azure Machine Learning quickstart integrations provide visibility into training jobs, endpoint invocations, and operational efficiency with pre-built dashboards and alerts. By monitoring key metrics like job executions, endpoint deployments, and CPU and GPU usage, you can ensure that your infrastructure adequately supports your AI projects while troubleshooting user experience issues.

Learn more about the Azure Machine Learning integration by reading this blog post.

Storage and registries: Pinecone, Weaviate, Milvus, FAISS

AI applications need to store and access large amounts of data. Vector databases are specialized databases that are designed to store and query high-dimensional data more efficiently with similarity search rather than exact matches. They’re often used for AI applications because they can reduce the cost and increase the speed of training and deploying AI models.
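The similarity search described above can be sketched in a few lines. This is a toy illustration of the idea, not how Pinecone, Weaviate, Milvus, or FAISS are implemented; the `index` dict and its 3-dimensional "embeddings" are hypothetical, and real vector databases index thousands of dimensions with approximate algorithms.

```python
import math

def cosine_similarity(a, b):
    """Angle-based similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest(query, index):
    """Return the stored key whose vector is most similar to `query` --
    the lookup a vector database performs at scale."""
    return max(index, key=lambda k: cosine_similarity(query, index[k]))

# Toy 3-dimensional "embeddings" (hypothetical values for illustration).
index = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "car": [0.0, 0.1, 0.95],
}
print(nearest([0.85, 0.15, 0.05], index))  # cat
```

Because every query is a ranking over similarity scores rather than an exact match, query latency and indexing performance become the metrics worth tracking, which is what the quickstart dashboards expose.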

To help monitor database health, New Relic provides pre-built quickstart dashboards for various data storage and registry solutions such as Pinecone, Weaviate, Milvus, and FAISS. In addition to key database metrics such as execution and response latency, requests, and disk space usage, they also allow you to track metrics specific to vector databases such as indexing performance. You can connect your data via chains in LangChain or through Pinecone’s Prometheus endpoint.

Infrastructure: AWS, Azure, Google Cloud Platform, Kubernetes

AI infrastructure is the foundation for developing and deploying AI applications. It includes powerful GPUs and CPUs to train and deploy AI models, as well as cloud computing platforms such as AWS, Azure, and Google Cloud Platform (GCP) that provide a scalable way to deploy AI applications. Because building, clustering, and deploying AI applications requires such compute-intensive workloads, it’s critical to have visibility into your infrastructure. This means being able to monitor your computing resources, such as GPU and CPU usage, as well as your storage and network resources.

New Relic provides flexible, dynamic monitoring of your entire infrastructure, from services running in the cloud or on dedicated hosts to containers running in orchestrated environments. You can connect the health and performance of all your hosts to application context, logs, and configuration changes.

New Relic offers a wide range of infrastructure monitoring solutions and integrations for AWS, Azure, Google Cloud Platform, and Kubernetes.

Working with the latest AI and machine learning technologies allows you to build amazing experiences. But like any other software experience, your monitoring strategy is key to keeping your user experiences running smoothly. With our integrations for AI and model performance monitoring, New Relic helps you gain visibility into the performance of every layer of your AI stack to easily identify performance, cost, and quality issues affecting AI applications.