What is Azure Content Safety?
Azure AI Content Safety detects harmful user-generated and AI-generated content in applications and services. Content Safety includes text and image APIs that allow you to detect harmful material.
New Relic Azure Content Safety quickstart features
A standard dashboard that tracks key indicators such as total calls, data in, success rate, and call counts for text and image moderation. It runs custom queries and visualizes the data immediately.
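As a rough sketch, the dashboard charts are built from NRQL queries over the Azure metrics New Relic ingests. The exact metric name (`azure.cognitiveservices.TotalCalls`) and attribute names (`ApiName`, `azure.resourceName`) below are assumptions for illustration; check your own metric data for the names your integration reports.

```sql
-- Hypothetical NRQL: chart call volume for a Content Safety resource over time.
-- Metric and attribute names are assumed, not confirmed by this quickstart.
SELECT sum(azure.cognitiveservices.TotalCalls)
FROM Metric
WHERE azure.resourceName = 'my-content-safety-resource'
FACET ApiName
TIMESERIES
```

Queries like this can be added as custom widgets alongside the standard dashboard.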
Why monitor Azure Content Safety with New Relic?
The New Relic Azure Content Safety monitoring quickstart empowers you to track the performance of Azure Content Safety through metrics including total calls, data in, success rate, total errors, and call counts for text and image moderation.
Our integration features a standard dashboard that provides interactive visualizations to explore your data, understand context, and gain valuable insights.
Start ingesting your Azure data today to get immediate access to our visualization dashboards and optimize your Azure service.