Before Kubernetes took over the world, cluster administrators, DevOps engineers, application developers, and operations teams had to perform many manual tasks in order to schedule, deploy, and manage their containerized applications. The rise of the Kubernetes container orchestration platform has altered many of these responsibilities.
Kubernetes makes it easy to deploy and operate applications in a microservice architecture. It does so by creating an abstraction layer on top of a group of hosts, so that development teams can deploy their applications and let Kubernetes manage them:
Controlling resource consumption by application or team
Evenly spreading application load across a host infrastructure
Automatically load balancing requests across the different instances of an application
Monitoring resource consumption against resource limits, automatically stopping applications that consume too many resources and restarting them
Moving an application instance from one host to another if there is a shortage of resources in a host, or if the host dies
Automatically leveraging additional resources made available when a new host is added to the cluster
Easily performing canary deployments and rollbacks
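Several of these controls, such as resource consumption limits and scheduling hints, are declared directly in a workload's manifest. As a hedged sketch (the `hello-app` name and image are placeholders, not from this guide), a Deployment might declare resource requests and limits like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app                # hypothetical application name
spec:
  replicas: 3                    # Kubernetes spreads these instances across hosts
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: example.com/hello-app:1.0   # placeholder image
          resources:
            requests:            # used by the scheduler to place the pod
              cpu: 250m
              memory: 128Mi
            limits:              # enforced at runtime; a container that
              cpu: 500m          # exceeds its memory limit is restarted
              memory: 256Mi
```

With requests and limits set, the scheduler places pods only on nodes with available capacity, and the kubelet restarts containers that exceed their memory limit, which is the behavior described in the list above.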
But such capabilities also give teams new things to worry about. For example:
There are a lot more layers to monitor.
The ephemeral and dynamic nature of Kubernetes makes it a lot more complex to troubleshoot.
Automatic scheduling of pods can cause capacity issues, especially if you’re not monitoring resource availability.
Until recently, monitoring your applications required aligning with your organization’s monitoring practices, installing language agents, instrumenting each app’s code, and redeploying each application.
In effect, while Kubernetes solves old problems, it can also create new ones. Specifically, adopting containers and container orchestration requires teams to rethink and adapt their monitoring strategies to account for the new infrastructure layers introduced in a distributed Kubernetes environment.
With that in mind, we designed this guide to highlight the fundamentals of what you need to know to effectively monitor Kubernetes deployments with New Relic One and our latest innovation, Auto-telemetry with Pixie. Pixie gives you instant Kubernetes observability without the need to manually instrument your code or install language agents. This guide outlines some best practices for monitoring Kubernetes in general, and provides detailed advice for how to do so with the New Relic One platform.
Whether you’re a Kubernetes cluster admin, an application developer, an infrastructure engineer, or a DevOps practitioner, by the end of this guide you will be able to use New Relic and Auto-telemetry with Pixie to get instant Kubernetes observability. As a result, you’ll know how to monitor the health and capacity of Kubernetes resources, debug applications running in your clusters, correlate events in Kubernetes with contextual insights to troubleshoot issues, and track end-user experience from your apps.