
Overview

Google Kubernetes Engine (GKE) is Google's fully managed Kubernetes platform. Google has significant experience managing Kubernetes clusters—after all, the company invented it—and GKE (formerly known as Google Container Engine) was the pioneering cloud service for deploying clusters on the Google Cloud Platform (GCP). GKE has quickly gained popularity with users, as it’s designed to eliminate the need to install, manage, and operate your own Kubernetes clusters. It’s a preferred option for developers because it’s mature, easy to use, and packed with features like autoscaling, integrated logging and monitoring, and private container registries.

Companies use GKE for a variety of reasons, including resizing application controllers, creating or resizing container clusters, creating container pods, running jobs, managing services and load balancers, and upgrading container clusters.

GKE has a unique approach to Kubernetes: it promotes hybrid cloud models, ensuring portability across cloud and on-premises infrastructures. With no vendor lock-in, you’re free to take your applications out of GKE and run them anywhere Kubernetes is supported, even in your own data center. GKE also lets you add monitoring tools, such as New Relic, into its ecosystem.

New Relic provides visibility into GKE

While GKE manages your Kubernetes infrastructure, New Relic provides application and infrastructure-centric views into your clusters so you can effectively troubleshoot, scale, and manage your dynamic environment. To operate a dynamic environment at scale, you must be able to:

  • Achieve immediate visibility into—and establish the relationships among—all your infrastructure entities
  • Gain historically accurate performance and status data
  • Drill down from the application to its supporting host infrastructure running in Kubernetes
  • Correlate entities with their metadata, such as application name, region, and deployed environment (for example, production, staging, or dev)

Monitoring dynamic environments can also help teams discover whether the root cause of an application performance issue is actually a misconfiguration in their clusters. No Kubernetes infrastructure issue should be unsolvable. When using New Relic monitoring capabilities alongside GKE, you can expect quicker resolutions when troubleshooting errors, better performance and consistency, and more reliable means to contain the complexity that comes with running Kubernetes at scale. New Relic can help ensure your clusters are running as expected and quickly detect performance issues within your cluster before they ever reach your customers.

Get started monitoring GKE

To start monitoring GKE, activate New Relic Infrastructure’s Kubernetes on-host integration.

The integration is available to all New Relic Infrastructure customers at the Pro level.

Before you install the integration on GKE, make sure you have the correct permissions:

1. Go to https://console.cloud.google.com/iam-admin/iam, find your username, and click edit.

2. Make sure you have permissions to create Roles and ClusterRoles. If you’re not sure, add the Kubernetes Engine Cluster Admin role, which will grant you sufficient permissions.

Note: If you can't edit your user role, ask the owner of your GCP project for the necessary permissions.
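
If you prefer to grant that role from the command line rather than the console, a sketch using the Cloud SDK looks like this (PROJECT_ID and YOUR_GCP_EMAIL are placeholders; granting the role requires project-level IAM admin rights, per the note above):

# Grant the Kubernetes Engine Cluster Admin role (roles/container.clusterAdmin) to your user.
gcloud projects add-iam-policy-binding PROJECT_ID --member="user:YOUR_GCP_EMAIL" --role="roles/container.clusterAdmin"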

3. Make sure you have a ClusterRoleBinding that grants your user those same permissions; if you don’t, create one:

kubectl create clusterrolebinding YOUR_USERNAME-cluster-admin-binding --clusterrole=cluster-admin --user=YOUR_GCP_EMAIL
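
As a quick optional check (not part of the official steps), you can ask the API server whether your user now has the required permissions; both commands should print "yes":

# Verify that your user can create cluster-scoped and namespaced roles.
kubectl auth can-i create clusterroles
kubectl auth can-i create roles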

4. Follow the standard instructions to install and configure the Kubernetes integration.
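
Those instructions are the source of truth and change over time; as one illustrative sketch, a Helm-based install of the newrelic/nri-bundle chart looks roughly like this (the chart name, repository URL, and parameters are assumptions to confirm against the current documentation; YOUR_LICENSE_KEY and YOUR_CLUSTER_NAME are placeholders):

# Add the New Relic Helm repository and install the Kubernetes integration bundle.
helm repo add newrelic https://helm-charts.newrelic.com
helm repo update
helm install newrelic-bundle newrelic/nri-bundle \
  --namespace newrelic --create-namespace \
  --set global.licenseKey=YOUR_LICENSE_KEY \
  --set global.cluster=YOUR_CLUSTER_NAME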

After you install the Kubernetes integration, you’ll have access to pre-built dashboards for immediate insight into your GKE environment:

New Relic dashboard displaying pre-built dashboards through Kubernetes integration
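
The data behind these dashboards is also directly queryable. As a rough sketch (the event and attribute names come from the on-host integration and may differ by version), a query like this charts CPU and memory usage per node:

FROM K8sNodeSample SELECT average(cpuUsedCores), average(memoryUsedBytes) FACET nodeName TIMESERIES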

For more information, see our blog post on how to Contain Complexity with New Relic’s Kubernetes Integration.

Monitoring applications running on GKE

In the modern digital world, the shift to DevOps and containerized architectures helps businesses build and ship applications better and faster than ever before. Tools like Kubernetes make that speed possible.

Here are two examples of how New Relic enables application performance monitoring for workloads running on GKE:

1. Kubernetes enables rapid deployments of applications and is well suited for continuous integration and continuous deployment (CI/CD) practices. Because GKE’s supporting infrastructure is managed as code, a code change can lead to a pod failing to deploy. In such cases, when you need to identify the source of an issue, use New Relic’s understanding of Kubernetes’ native status to separate application problems from issues occurring in GKE and accelerate your time to resolution.

New Relic dashboard displaying Kubernetes native status

2. In infrastructures that don’t rely on containers and orchestration, if an application has a memory leak, memory usage will slowly increase until the application’s performance plummets. In Kubernetes, pods are given a memory limit, and Kubernetes will destroy them when they reach that limit. When this happens, your application’s performance will recover, but you may not have discovered the root cause of the leak, and you’re left with regular performance drops whenever pods consume too much memory. In New Relic, you can correlate resource usage with container restarts and accelerate troubleshooting, all through a single pane of glass; sample queries for both scenarios follow below.

New Relic dashboard showing container restarts
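
As a hedged sketch of what these investigations look like in practice (the event and attribute names come from the on-host integration’s pod and container samples and may differ by version), queries like the following surface failing deployments and correlate restarts with memory usage.

Pods that are not in a Running state, grouped by status:

FROM K8sPodSample SELECT uniqueCount(podName) WHERE status != 'Running' FACET status

Container restarts alongside memory usage, per pod:

FROM K8sContainerSample SELECT max(restartCount), average(memoryUsedBytes) FACET podName TIMESERIES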

Google Kubernetes Engine integration: a lightweight alternative

In addition to Infrastructure’s Kubernetes on-host integration, New Relic also offers a “lightweight alternative” for monitoring Kubernetes clusters deployed on GCP. With the New Relic GKE integration, you can use Kubernetes v1.10.2 (or later), combined with Google Stackdriver Monitoring, to get visibility into containers, nodes, and pods. For example, you can monitor pod traffic and CPU and memory usage for containers and nodes.

New Relic dashboard displaying Kubernetes and Google Stackdriver monitoring

While the New Relic Kubernetes on-host integration provides deeper metrics and inventory attributes, the GKE integration can pull in additional data that can’t be retrieved with the on-host integration, such as metrics about underlying specialized hardware accelerators and pod volume.

If you choose to use the GKE integration without the on-host integration, you won’t be able to take advantage of the multi-project and full-stack view in New Relic. But, if you don’t need robust Kubernetes monitoring, the GKE integration alone may suit your needs.

Get started with the GKE integration

Check out the full documentation for more details, but it takes just a few steps to get going with New Relic’s GKE integration:

  1. Enable Google Stackdriver Monitoring in your GCP project (an example command follows this list).
  2. Link your project to New Relic and enable the GKE service monitoring through New Relic's UI.
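
For step 1, if the Monitoring API isn’t already enabled on your project, one way to turn it on from the command line is the following (this assumes the Cloud SDK is installed; PROJECT_ID is a placeholder):

# Enable the Stackdriver Monitoring API for the project.
gcloud services enable monitoring.googleapis.com --project PROJECT_ID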

Once you’ve activated the integration, use the default dashboards to view the most relevant metrics, and write more specific queries in New Relic Dashboards to build your own views that combine metrics from other data sources, such as New Relic APM 360.
