With AWS re:Invent 2025 behind us, it's clear that the future of cloud computing lies in agentic workflows and AI-driven operations. As the cloud evolves into a more agent-friendly ecosystem—enhancing everything from operational excellence to security practices through spec-driven development—many of us are left wondering, "Will my job be replaced by AI?" During the closing keynote of AWS re:Invent 2025, Dr. Werner Vogels argued that the mindset should instead be to ask, "Will AI make me obsolete?" To stay relevant amid AI and AI-based tools, it's important for engineers to keep upskilling as the technology changes.
In this blog post, we’ll explore some of the key announcements from AWS re:Invent 2025, covering everything from core cloud computing services to the newly available Agentic AI integrations.
Autonomous, always on-call engineer: AWS DevOps Agent
One of the biggest announcements is the public preview of the AWS DevOps Agent. Matt Garman, CEO of Amazon Web Services, introduced the agent and highlighted its integration with observability partners like New Relic. This new AI agent aims to redefine operational excellence for SRE, DevOps, and Platform Engineering teams. Acting as an autonomous, always-on-call engineer, it's designed to accelerate incident response, identify root causes, and proactively ensure system reliability.
By integrating the AWS DevOps Agent with New Relic, you unlock powerful observability capabilities that significantly reduce mean time to detection (MTTD) and mean time to resolution (MTTR). This integration leverages New Relic’s AI MCP Server to automate investigations and streamline root cause analysis. This empowers SRE and DevOps teams to move beyond time-consuming manual processes and resolve issues faster and more efficiently.
Learn more about it from the Resolve and prevent operational incidents with AWS DevOps Agent and New Relic blog post.
Accelerated Compute and AI Infrastructure
- Graviton5: AWS launched its fifth-generation Arm CPU, Graviton5, built for a wide array of cloud workloads. New instances powered by this chip offer up to 25% better performance than the previous generation, boosting price-performance for tasks like microservices and data processing. (Announcement)
- Trainium3 UltraServers: Powered by a new 3nm AI chip, Trn3 UltraServers deliver a significant leap in performance. They provide up to 4.4 times more computing power with greater energy efficiency, resulting in better cost-per-token economics. (Announcement)
- S3 Vectors: Amazon S3 now natively supports storing and querying vector embeddings. This feature is designed for AI applications like Retrieval-Augmented Generation (RAG) and can handle up to 2 billion vectors per index. It also offers substantial cost savings, potentially reducing expenses by up to 90% compared to specialized vector databases. A short sketch of writing and querying vectors follows this list. (Announcement)
- Amazon Nova 2 Model Family: AWS has unveiled the next generation of Nova models, introducing Nova 2 Lite, Nova 2 Pro, and a preview of Nova 2 Omni. These new models are equipped with multimodal reasoning capabilities. Additionally, AWS launched Nova 2 Sonic, which enables speech-to-speech functionality. (Announcement)
- IAM Policy Autopilot: An open-source MCP server that analyzes your code to auto-generate valid IAM policies, simplifying one of the trickier security best practices: least-privilege access. An example of the kind of scoped policy this targets also appears after this list. (Announcement)
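To make the S3 Vectors item above more concrete, here is a minimal Python sketch of writing and querying embeddings. It assumes the boto3 s3vectors client exposes put_vectors and query_vectors roughly as in the preview, and the bucket and index names are placeholders; treat the exact client, operation, and parameter names as assumptions to verify against the current SDK reference.

```python
import boto3

# Assumption: the boto3 "s3vectors" client and these operation/parameter names
# match the S3 Vectors preview; verify against the current SDK documentation.
s3vectors = boto3.client("s3vectors", region_name="us-east-1")

# Store a small batch of embeddings (e.g., produced by an embedding model)
# in a pre-created vector bucket and index (hypothetical names).
s3vectors.put_vectors(
    vectorBucketName="my-vector-bucket",   # placeholder bucket name
    indexName="docs-index",                # placeholder index name
    vectors=[
        {
            "key": "doc-001",
            "data": {"float32": [0.12, 0.85, 0.33, 0.07]},  # toy 4-dim embedding
            "metadata": {"source": "runbook", "service": "checkout"},
        },
    ],
)

# Query the index with the embedding of a user question, as a RAG retrieval step.
response = s3vectors.query_vectors(
    vectorBucketName="my-vector-bucket",
    indexName="docs-index",
    queryVector={"float32": [0.10, 0.80, 0.30, 0.05]},
    topK=3,
    returnMetadata=True,
    returnDistance=True,
)

# Assumption: results come back under a "vectors" key with distance and metadata.
for match in response.get("vectors", []):
    print(match["key"], match.get("distance"), match.get("metadata"))
```

In a RAG pipeline, the retrieved metadata and keys would then be used to fetch the matching documents and ground the model's answer.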
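And to illustrate the kind of output IAM Policy Autopilot aims for, here is a hedged sketch of a least-privilege policy for code that only reads from one S3 prefix and writes to one DynamoDB table. The bucket, table, region, and account ID are placeholders, and the exact policy Autopilot generates will depend on the API calls it finds in your code.

```python
import json

# Hypothetical least-privilege policy of the kind IAM Policy Autopilot targets:
# only the actions the analyzed code actually calls, scoped to placeholder
# resources (replace the bucket, table, region, and account ID with your own).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadReportObjects",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-reports-bucket/input/*",
        },
        {
            "Sid": "WriteOrderItems",
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/example-orders",
        },
    ],
}

print(json.dumps(policy, indent=2))
```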
Cloud Computing, Data and Cost
- AWS Lambda Managed Instances: This new feature allows customers to run Lambda functions on dedicated EC2 capacity while preserving the simple, serverless operational model. This hybrid approach provides access to specialized hardware and enables cost savings through EC2 pricing models like Savings Plans and Spot Instances, all without the overhead of direct instance management. AWS handles the underlying infrastructure, freeing developers to focus on their code. (Announcement)
- AWS Lambda Durable Functions: AWS introduced a new capability for building serverless applications that can reliably coordinate multiple steps over long periods—from seconds to a full year. This simplifies the development of complex, stateful workflows, such as long-running payment processes or data pipelines, by managing state automatically and eliminating the cost of idle compute time while waiting for external events or user input; a conceptual sketch of this pattern follows the list below. (Announcement)
- Amazon EKS Capabilities: AWS announced Amazon EKS Capabilities, which streamline Kubernetes development by providing a fully managed platform. These capabilities handle workload orchestration and cloud resource management, removing the need for infrastructure maintenance while ensuring enterprise-grade reliability and security. Platform teams can use declarative policies to define desired states, reducing the amount of custom code and cluster upkeep required. (Announcement)
- Increased S3 Object Size: Amazon S3 now supports single objects up to 50TB. This substantial increase simplifies operations by reducing the need to split large files and enables faster batch processing. (Announcement)
- Database Savings Plans: Database Savings Plans mark a major breakthrough for FinOps teams by providing a single, flexible commitment that spans multiple AWS database engines, including RDS, Aurora, DynamoDB, and ElastiCache. This streamlined approach simplifies both commitment forecasting and management, making it easier than ever to optimize costs across diverse database solutions. (Announcement)
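To show the shape of workflow durable functions are aimed at, here is a purely conceptual Python sketch of a checkpointed, multi-step payment flow. The step and wait_for_event helpers are hypothetical stand-ins, not the actual Lambda API; they only illustrate the pattern of persisting each step's result and suspending instead of paying for idle compute while waiting on an external event.

```python
# Conceptual sketch only: "step" and "wait_for_event" are hypothetical helpers
# illustrating the durable-workflow pattern, not the actual Lambda API.
import json
from typing import Any, Callable

CHECKPOINTS: dict[str, Any] = {}  # stands in for durable, managed workflow state


def step(name: str, fn: Callable[[], Any]) -> Any:
    """Run a step once and persist its result, so replays skip completed work."""
    if name not in CHECKPOINTS:
        CHECKPOINTS[name] = fn()
    return CHECKPOINTS[name]


def wait_for_event(name: str) -> Any:
    """Placeholder for suspending the workflow (no idle compute) until an
    external event, such as a payment provider callback, arrives."""
    return CHECKPOINTS.setdefault(name, {"status": "approved"})  # simulated event


def payment_workflow(order: dict) -> dict:
    auth = step("authorize", lambda: {"auth_id": f"auth-{order['id']}"})
    confirmation = wait_for_event("provider_callback")  # could take hours or days
    receipt = step("capture", lambda: {"captured": confirmation["status"] == "approved"})
    return {"order": order["id"], "auth": auth["auth_id"], **receipt}


if __name__ == "__main__":
    print(json.dumps(payment_workflow({"id": "1234"}), indent=2))
```

The key design point is that every completed step is checkpointed, so the workflow can be resumed after a long wait without re-executing earlier steps.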
Intelligent observability
During his session, The Future of Intelligent Observability, New Relic Chief Product Officer Brian Emerson discussed the evolution from traditional observability to intelligent observability, leveraging the powerful capabilities of New Relic's agentic integrations with Amazon Quick Suite and Amazon Q Business Index. This integration paves the way for a future where AI agents can autonomously identify and diagnose issues within a system. These agents will not only perform root cause analysis with unprecedented speed and accuracy but also take proactive actions, such as automatically scaling clusters to handle increased load or generating detailed Jira tickets for necessary patches, all before a human engineer needs to intervene.
What’s next?
With the latest updates from AWS re:Invent 2025, AI agents are transforming workflows, enabling greater efficiency and productivity. These advancements, combined with the newly launched core compute capabilities, offer powerful tools to take your operations to the next level.
The introduction of the AWS DevOps Agent, now integrated with New Relic, empowers teams with an autonomous on-call engineer capable of seamless incident triage and response. This innovation is designed to reduce the complexity of incident management and ensure your systems remain resilient. Explore these capabilities today and experience a new standard of operational excellence.
The views expressed in this blog are those of the author and do not necessarily reflect the views of New Relic. Any solutions offered by the author are environment-specific and are not part of New Relic's commercial solutions or support. Please join us exclusively at the Explorers Hub (discuss.newrelic.com) for questions and support related to this blog post. This blog may contain links to content on third-party sites. By providing such links, New Relic does not adopt, guarantee, approve, or endorse the information, views, or products available on those sites.