For software developers, DevOps engineers, and IT professionals, log data is a valuable and often irreplaceable source of troubleshooting information. Today, teams are swamped with more log data, from more sources, than ever before. And as the volume of log data scales, so do the challenges of collecting, managing, and analyzing it.

New Relic simplifies this with log management that is fast, reliable, and highly scalable, while also giving developers and operations teams deeper visibility into application and infrastructure performance data, such as events and errors. This reduces mean time to resolution (MTTR) and allows IT teams to quickly troubleshoot production incidents and connect their log data with the rest of their telemetry, like metrics, events, and traces.

New Relic makes it simple to gather and visualize your log data by using plugins that integrate with some of the most common open source logging tools, like Fluentd, Fluent Bit, Logstash, and Amazon CloudWatch (among others).

With New Relic logs in context, you can bring contextual data to the logging experience and correlate log data with other telemetry to reveal meaningful patterns and trends in your applications and infrastructure.

So, what is logs in context?

Logs in context adds metadata that links your log data with related data, like error or trace information in New Relic APM 360.

By bringing all of this data together in a single solution, you’ll more quickly get to the root cause of issues—narrowing down from all of your logs, to the exact log lines that you need to identify and resolve a problem.

For example, you can correlate log messages to a related error trace or distributed trace for a Java application.
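To make this concrete, here's a minimal, hypothetical sketch in plain Java (no New Relic libraries) of the kind of JSON log line that logs in context produces: the original message plus linking metadata fields such as trace.id and span.id. In practice, the agent and logging extension add these fields for you; the class and values below are made up for illustration.

```java
// Hypothetical sketch: what a logs-in-context log line looks like.
// In a real app, the New Relic agent and logging extension add the
// trace.id and span.id fields automatically; nothing here is their API.
public class LogLineSketch {
    // Wrap a plain log message in JSON with linking metadata.
    static String enrich(String message, String traceId, String spanId) {
        return String.format(
            "{\"message\":\"%s\",\"trace.id\":\"%s\",\"span.id\":\"%s\"}",
            message, traceId, spanId);
    }

    public static void main(String[] args) {
        // prints {"message":"Order processed","trace.id":"abc123","span.id":"def456"}
        System.out.println(enrich("Order processed", "abc123", "def456"));
    }
}
```

With metadata like this on every line, the Logs UI can match a log entry to the exact trace and span that produced it.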

In this blog post, I’ll explain how to set up logs in context for a Java application running in a Kubernetes cluster. We’ll achieve this in three steps:

  1. Install the New Relic Kubernetes integration
  2. Configure Kubernetes logging for New Relic
  3. Enable logs in context for your Java app


Step 1: Install the New Relic Kubernetes integration

New Relic’s Kubernetes integration gives you in-depth information about your cluster’s performance. It reports data and metadata about your nodes, namespaces, deployments, ReplicaSets, pods, clusters, and containers, so you can easily determine the source, scope, and impact of any problem.

In this example, my Kubernetes environment is stand-alone, but we also support several cloud-based Kubernetes platforms, including Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), and Red Hat OpenShift.

So, let’s get started:

  1. New Relic uses kube-state-metrics—a simple service that listens to the Kubernetes API server and generates metrics—to gather information about the state of Kubernetes objects. Use this command to install kube-state-metrics in your cluster:
    curl -L -o && unzip && kubectl apply -f kube-state-metrics-1.7.2/kubernetes
  2. Download the New Relic Kubernetes integration configuration file:
    curl -O
  3. In the env: section of the configuration file, add your New Relic license key and a cluster name to identify your Kubernetes cluster. Both values are required.
      - name: NRIA_LICENSE_KEY
        value: <YOUR_LICENSE_KEY>
      - name: CLUSTER_NAME
        value: <YOUR_CLUSTER_NAME>
  4. Load the DaemonSet onto your cluster:
    kubectl create -f newrelic-infrastructure-k8s-latest.yaml
  5. Go to one.newrelic.com, and select the Kubernetes cluster explorer launcher. This takes you to the Kubernetes cluster explorer. If you haven’t worked with it before, take a few minutes to explore; it’s well worth it.

The Kubernetes cluster explorer in New Relic One

Tip: If you’re having problems or can’t see your cluster, check out the Kubernetes integration documentation.

Step 2: Capture Kubernetes logs

Now we need to gather Kubernetes logs and send them to New Relic. We’ll do this with Fluent Bit and the New Relic output plugin. Here’s how to set it up:

  1. Clone or download the New Relic kubernetes-logging project from GitHub.
  2. In the new-relic-fluent-plugin.yml, edit the env: section to replace the placeholder value <LICENSE_KEY> with your New Relic license key.
    - name: LICENSE_KEY
      value: <YOUR_LICENSE_KEY>
  3. Load the logging plugin into your Kubernetes environment:
    kubectl apply -f .
  4. Go to one.newrelic.com, and select the Logs launcher. After a few moments, you should see Kubernetes log entries start to appear. If log entries from other sources are getting in the way, add plugin source: “kubernetes” to the query field to show entries from Kubernetes clusters only. (Congratulations! You’ve also just gained valuable experience with New Relic log management: queries are simple, results are fast, and there’s no specialized query language to learn.)

New Relic collects log data from your clusters

Tip: If you’re having problems or can’t see your logs, check out the full Kubernetes plugin for Logs documentation.
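Under the hood, the manifests in the kubernetes-logging project configure Fluent Bit with a New Relic output section. As a rough sketch (based on the newrelic-fluent-bit-output plugin’s configuration format; key names can vary between plugin versions), it looks like this:

```ini
[OUTPUT]
    # Ship all matched records to New Relic (sketch only; see the
    # kubernetes-logging project for the authoritative configuration)
    Name        newrelic
    Match       *
    licenseKey  <YOUR_LICENSE_KEY>
```

For the Kubernetes setup you normally don’t edit this by hand; the manifest’s env: section (step 2 above) supplies the license key.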

Step 3: Enable logs in context for your Java app

For this to work, New Relic’s instrumentation needs to inject some metadata that maps log entries to application activity. New Relic leverages your existing application log framework to do that. While this step involves a configuration change, you won’t have to change any application code.

In this example, my Java app uses the Log4j 2.x extension for logging, but New Relic supports other languages and logging frameworks as well.

  1. If you haven’t done so already, instrument your app with New Relic’s Java APM agent (version 5.6.0 or higher), and enable distributed tracing.
  2. Add the New Relic logs in context extension to your project. My project uses Gradle, so I’ve added the compile stanza to the dependencies section, as shown (the artifact version here is illustrative; use the latest release):
    dependencies {
        // New Relic logs-in-context extension for Log4j 2.x
        // (version shown is illustrative; check for the latest release)
        compile("com.newrelic.logging:log4j2:1.0-rc2")
    }
  3. Edit your logging configuration file (mine is called log4j2.xml) and add the packages attribute to the <Configuration> tag.
    <Configuration xmlns="" packages="com.newrelic.logging.log4j2">
  4. Still in the logging configuration XML file, add a <NewRelicLayout/> tag to one of your log appenders. In this case, we’ll use the console appender, because that’s the default method for aggregating logs in Kubernetes:
       <Console name="STDOUT" target="SYSTEM_OUT">
         <NewRelicLayout/>
       </Console>
  5. Set the log4j2.messageFactory system property to use the NewRelicMessageFactory. I did this by adding a custom parameter to the Java command line:
    -Dlog4j2.messageFactory=com.newrelic.logging.log4j2.NewRelicMessageFactory

    Tip: To help you understand what you need to do in these steps, check out my example build.gradle and log4j2.xml files.

  6. Redeploy the app into your Kubernetes environment.
  7. Wait a few minutes, and then look for log entries that have a trace.id or span.id attribute. For example, query for entries where the span.id attribute exists.

Tip: If you’re not seeing any traffic, run the app.

If you have log entries with those attributes, you’ll be able to drill down from application traces into logs.
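Putting steps 3 through 5 together, a minimal log4j2.xml might look something like this (a sketch assembled from the snippets above; your appenders, levels, and logger names will differ):

```xml
<Configuration packages="com.newrelic.logging.log4j2">
  <Appenders>
    <Console name="STDOUT" target="SYSTEM_OUT">
      <!-- NewRelicLayout emits JSON lines with the linking metadata -->
      <NewRelicLayout/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="STDOUT"/>
    </Root>
  </Loggers>
</Configuration>
```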

Using span.id attributes to examine logs in context for a Java application in New Relic

Here's how it looks when you drill down from application traces into logs:

FAQs about logs in context

How do I monitor logs in Kubernetes?

One great way to monitor logs in Kubernetes is to use New Relic log management to collect, aggregate, analyze, and visualize log data and metrics. It also integrates with Kubernetes-native logging agents like Fluentd.

How do you troubleshoot apps while running in Kubernetes?

The first step is to determine where the problem lies: is it in your pods, a service, or a replication controller? Doing this manually can take a lot of time. New Relic provides a faster approach: you can create custom queries, detect patterns, and eliminate manual digging. After identifying the problem, you can:

  • Debug pods: Check the current state of your pods and, depending on what you find, continue debugging from there.
  • Debug a service: Start by verifying that the service has endpoints.