
In the evolving world of technology, serverless architecture coupled with AI presents unique opportunities and challenges. At New Relic, we are committed to empowering our customers to harness these advancements and fully leverage serverless capabilities for AI-driven use cases. We are excited to announce our latest innovation: support for instrumenting AWS Lambda response streaming functions, integrated with AI monitoring, delivering a suite of new benefits tailored for AI applications.

What is response streaming and why is it useful for AI applications?

Response streaming allows AWS Lambda functions to deliver outputs progressively as they are processed, rather than waiting to send a full batch of results all at once. This approach facilitates a real-time data flow, which is particularly beneficial for AI-driven applications that demand speed and instant insights.

AI thrives on rapid data processing and decision-making. With response streaming, your serverless AI applications can stream data in real time, reducing latency, processing time, and memory usage. This benefits scenarios such as immediate anomaly detection, dynamic performance tuning, real-time data classification, and continuous prediction updates, and ultimately leads to a more interactive user experience.

How can New Relic help?

As a leader in observability, we understand the intricacies involved in monitoring and optimizing serverless functions, especially those centered around AI tasks. The New Relic platform offers detailed insights and seamless integration to bring clarity and efficiency to your AI operations.

  • Real-time response-specific insights: Track and analyze data live as it is streamed through Lambda functions, using our ‘AI Responses’ feature integrated with serverless observability.
  • Debugging erroneous invocations: Gain instant visibility into each invocation for its full duration, helping you optimize memory allocation, troubleshoot errors and cold start times, and refine your AI models accordingly.
  • Response streaming metrics: Use the ‘Streamed outbound bytes’ and ‘Streamed outbound throughput’ metrics to adapt AI operations to streaming output volume and meet demand efficiently.