Google Kubernetes Engine (GKE) is Google's fully managed Kubernetes platform. Google has significant experience managing Kubernetes clusters—after all, the company invented it—and GKE (formerly known as Google Container Engine) was the pioneering cloud service for deploying clusters on the Google Cloud Platform (GCP). GKE has quickly gained popularity with users, as it’s designed to eliminate the need to install, manage, and operate your own Kubernetes clusters. It’s a preferred option for developers because it’s mature, easy to use, and packed with features like autoscaling, integrated logging and monitoring, and private container registries.
Companies use GKE for a variety of tasks, including creating or resizing container clusters, creating container pods, resizing application controllers, running jobs, managing services and load balancers, and upgrading container clusters.
GKE has a unique approach to Kubernetes: it promotes hybrid cloud models, ensuring portability across cloud and on-premises infrastructures. With no vendor lock-in, you’re free to take your applications out of GKE and run them anywhere Kubernetes is supported, even in your own data center. GKE also offers the ability to add monitoring tools—such as New Relic—into its ecosystem.
New Relic provides visibility into GKE
While GKE manages your Kubernetes infrastructure, New Relic provides application and infrastructure-centric views into your clusters so you can effectively troubleshoot, scale, and manage your dynamic environment. To operate a dynamic environment at scale, you must be able to:
Achieve immediate visibility into—and establish the relationships among—all your infrastructure entities
Gain historically accurate performance and status data
Drill down from the application to its supporting host infrastructure running in Kubernetes
Correlate entities with their metadata, such as application name, region, and deployed environment (for example, production, staging, or dev)
Monitoring dynamic environments can also help teams discover whether the root cause of an application performance issue is actually a misconfiguration in their clusters. No Kubernetes infrastructure issue should be unsolvable. When using New Relic monitoring capabilities alongside GKE, you can expect quicker resolutions when troubleshooting errors, better performance and consistency, and a more reliable means of containing the complexity that comes with running Kubernetes at scale. New Relic can help ensure your clusters are running as expected and quickly detect performance issues within your cluster before they ever reach your customers.
In the modern digital world, the shift to DevOps and containerized architectures helps businesses build and ship applications better and faster than ever before. Tools like Kubernetes make that speed possible.
Here are two examples of how New Relic enables application performance monitoring for workloads running on GKE:
1. Kubernetes enables rapid deployments of applications and is well suited for continuous integration and continuous deployment (CI/CD) practices. Because GKE’s supporting infrastructure is managed as code, a code change can lead to a pod failing to deploy. In such cases, when you need to identify the source of an issue, use New Relic’s understanding of Kubernetes’ native status to isolate applications from issues occurring in GKE and accelerate your time to resolution.
2. In infrastructures that don’t rely on containers and orchestration, if an application has a memory leak, memory usage will slowly increase until the application’s performance plummets. In Kubernetes, pods are given a memory limit, and Kubernetes will destroy them when they reach that limit. When this happens, your application’s performance will recover, but you may not have discovered the root cause of the leak—and you’re left with regular performance drops whenever pods consume too much memory. In New Relic, you can correlate resource usage to container restarts and accelerate troubleshooting, all through a single pane of glass.
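The restart behavior in the second example comes from Kubernetes resource limits set on a pod’s containers. As a minimal sketch (the names, image, and values here are illustrative, not prescriptive), a manifest that sets a memory request and limit looks like this:

```yaml
# Illustrative pod spec: if the container's memory usage exceeds the
# 256Mi limit, Kubernetes kills and restarts the container.
apiVersion: v1
kind: Pod
metadata:
  name: example-app          # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:1.0   # hypothetical image
    resources:
      requests:
        memory: "128Mi"      # amount the scheduler reserves for the container
      limits:
        memory: "256Mi"      # hard ceiling; exceeding it triggers an OOM kill
```

Each kill shows up as an incremented restart count on the pod, which is exactly the signal you can correlate with memory usage trends in New Relic to spot a leak.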
Google Kubernetes Engine integration: a lightweight alternative
In addition to Infrastructure’s Kubernetes on-host integration, New Relic also offers a “lightweight alternative” for monitoring Kubernetes clusters deployed on GCP. With the New Relic GKE integration, you can use Kubernetes v1.10.2 (or later), combined with Google Stackdriver Monitoring, to get visibility into containers, nodes, and pods. For example, you can monitor pod traffic and CPU and memory usage for containers and nodes.
While the New Relic Kubernetes on-host integration provides deeper metrics and inventory attributes, the GKE integration can pull in additional data that can’t be retrieved with the on-host integration, such as metrics about underlying specialized hardware accelerators and pod volume.
If you choose to use the GKE integration without the on-host integration, you won’t be able to take advantage of the multi-project and full-stack view in New Relic. But, if you don’t need robust Kubernetes monitoring, the GKE integration alone may suit your needs.
Get started with the GKE integration
Check out the full documentation for more details, but it takes just a few steps to get going with New Relic’s GKE integration:
1. Enable Google Stackdriver Monitoring in your GCP project.
2. Link your project to New Relic and enable GKE service monitoring through New Relic's UI.
3. Once you’ve activated the integration, use the default dashboards to view the most relevant metrics, and write more specific queries in New Relic Insights to build your own dashboards that combine metrics from other data sources, such as New Relic APM.
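As a sketch of the kind of Insights query you might write once the integration is reporting data, the NRQL below charts average container memory usage per pod. The event type and attribute names here are assumptions for illustration—verify the actual names against the GKE integration documentation for your account:

```sql
-- Illustrative NRQL: average container memory usage, faceted by pod,
-- over the last hour. Event/attribute names are assumptions to verify.
SELECT average(memoryUsedBytes)
FROM GcpKubernetesContainerSample
FACET podName
SINCE 1 hour ago TIMESERIES
```

Queries like this can sit alongside APM charts on a single custom dashboard, giving you application and cluster metrics in one view.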
Success is in sight
Simple to set up. Simple to use. Start seeing your data in minutes.