Get started with OpenTelemetry

In this blog post, we go over the OpenTelemetry Collector, a powerful tool for building telemetry data pipelines that you’ll want to consider bringing to your environment. Then we show a Java application collecting instrumentation data aligned with the semantic conventions, and exporting it to the Collector. Finally, you'll learn what you can do with the data exported to your backend observability platform. We'll show how the data flows from the Collector into New Relic.

Before you begin, if you need a refresher, review OpenTelemetry, the core components of the OpenTelemetry open source project, the primary OpenTelemetry data sources (traces, metrics, and logs), and how to instrument simple Java apps with OpenTelemetry.

The OpenTelemetry Collector

Modern software environments often consist of a mixture of open source and proprietary components, instrumented with different technologies with varying levels of configurability. You can gain control in this kind of environment with the OpenTelemetry Collector, a highly configurable and extensible means of collecting, processing, and exporting data.

Do your components expose telemetry data in a variety of formats, like Prometheus, Jaeger, Kafka, and Zipkin? You can configure the Collector as a Gateway cluster to accept all of these formats (and more) and apply common processing before exporting to the backend observability platform of your choice. Want to send an extra copy of the data to a data warehouse? Add an exporter to your configuration, and if one doesn’t exist that fits your needs, you can write your own.
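To make this concrete, here's a sketch of a gateway-style Collector configuration that accepts several formats and exports over OTLP. The endpoint, scrape target, and API key placeholder are illustrative; adjust them for your environment.

```yaml
# Gateway Collector sketch: receive OTLP, Zipkin, Jaeger, and Prometheus data,
# batch it, and export everything to one OTLP backend.
receivers:
  otlp:
    protocols:
      grpc:
      http:
  zipkin:
  jaeger:
    protocols:
      grpc:
  prometheus:
    config:
      scrape_configs:
        - job_name: app-metrics          # hypothetical scrape job
          static_configs:
            - targets: ["localhost:8888"]

processors:
  batch:                                 # common processing applied to all data

exporters:
  otlp:
    endpoint: https://otlp.nr-data.net:4317
    headers:
      api-key: YOUR_NEW_RELIC_LICENSE_KEY   # placeholder

service:
  pipelines:
    traces:
      receivers: [otlp, zipkin, jaeger]
      processors: [batch]
      exporters: [otlp]
    metrics:
      receivers: [otlp, prometheus]
      processors: [batch]
      exporters: [otlp]
```

To send that extra copy to a data warehouse, you'd add a second entry under `exporters` and list it alongside `otlp` in the relevant pipeline.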

There are two main deployment methods for the OpenTelemetry Collector: agent and gateway.

OpenTelemetry: agent vs collector vs gateway

An agent is a program that runs on a host and collects telemetry data (like metrics and traces) from that host. The agent then forwards that data to a collector, which is another program that receives and processes the telemetry data and exports it to a backend for further analysis and visualization. Basically, an agent collects the data from the host and sends it to the collector to process and export.

The agent is an OpenTelemetry Collector instance running on the same host as the application you're collecting the telemetry data from. The agent then forwards this data to a gateway (one or more instances of the OpenTelemetry Collector that receive data from multiple agents). And then data is sent to configured backends.

Using the OpenTelemetry Collector as an agent

When you use the OpenTelemetry Collector as an agent, the Collector is deployed as an agent on every host, where it can scrape metrics about the system. Processes on the host can send telemetry data to the agent, which can enrich the data with host metadata before forwarding it on. Our demos here do not show this option, but it’s something to consider.
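For reference, an agent-mode configuration might look like the following sketch. The `hostmetrics` receiver and `resourcedetection` processor come from the contrib distribution, and the gateway address here is a hypothetical placeholder.

```yaml
# Agent-mode Collector sketch: scrape host metrics, enrich data with
# host metadata, and forward everything to a gateway Collector.
receivers:
  otlp:                        # local processes send their telemetry here
    protocols:
      grpc:
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu:
      memory:
      disk:

processors:
  resourcedetection:
    detectors: [system]        # attaches host metadata (hostname, OS, etc.)
  batch:

exporters:
  otlp:
    endpoint: gateway-collector:4317   # hypothetical gateway address
    tls:
      insecure: true                   # illustrative; use TLS in production

service:
  pipelines:
    metrics:
      receivers: [otlp, hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [otlp]
```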

Running the OpenTelemetry Collector locally with Docker

The next video demonstrates how to run the Collector locally with Docker (an example from newrelic-opentelemetry-examples). The Collector is configured to receive data through OpenTelemetry Protocol (OTLP) and export it to New Relic. It also shows a Java application exporting data to the Collector. You can adapt this type of setup and deploy it to any container-based environment. 
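If you want to try this yourself, a minimal local run looks roughly like the following. The config filename and image tag are assumptions; the OTLP ports (4317 for gRPC, 4318 for HTTP) are the Collector defaults.

```shell
# Run the core Collector image with a local config file mounted over
# the image's default config path, exposing the standard OTLP ports.
docker run --rm \
  -p 4317:4317 -p 4318:4318 \
  -v "$(pwd)/otel-collector-config.yaml:/etc/otelcol/config.yaml" \
  otel/opentelemetry-collector:latest
```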

For reference, opentelemetry-collector contains core code for the Collector, and opentelemetry-collector-contrib contains experimental and vendor-specific extensions to the Collector.

Semantic conventions

As discussed in part 2, the OpenTelemetry semantic conventions define common conventions for collecting instrumentation data for similar operations. For example, if two web servers written in Java and Python are each instrumented and follow the semantic conventions, they produce similar looking trace and metric data. This makes it possible for people using different platforms to derive more meaning from data and tailor improved experiences. 

There are a variety of semantic conventions currently defined for resources, traces, and metrics, with more to come as OpenTelemetry continues to mature. Note that the conventions are still experimental and subject to change.
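As an illustration, an instrumented HTTP server span carries attributes with conventional names and value formats, regardless of the language the server is written in. The values below are made up, and because the conventions are still evolving, attribute names may differ across versions:

```yaml
# Example attributes on a server span for an HTTP request,
# following the (experimental) HTTP semantic conventions.
http.method: GET
http.route: /api/orders/{id}      # low-cardinality route template, not the raw URL
http.status_code: 200
http.scheme: https
net.peer.name: client.example.com # hypothetical client host
```

Because both the Java and Python servers would emit these same attribute names, a backend can build one dashboard or alert condition that works for both.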

The following video shows a Java application instrumented with the OpenTelemetry Java agent JAR file. The agent does a good job of following the semantic conventions. When we run the application, you see how it sends data to the Collector aligned with the conventions.
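A typical launch of an application with the Java agent looks roughly like this. The jar paths, service name, and endpoint are assumptions for this sketch; the environment variable names are standard OpenTelemetry configuration options.

```shell
# Attach the OpenTelemetry Java agent at startup; it auto-instruments
# common frameworks and exports OTLP data to the local Collector.
export OTEL_SERVICE_NAME=demo-app
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
java -javaagent:./opentelemetry-javaagent.jar -jar demo-app.jar
```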

Visualizing data in New Relic

Next, let's look at the data in New Relic and see where the semantic conventions come into play.

After OpenTelemetry data is flowing into New Relic, it’s accessible across the entire New Relic platform, including querying, alerts, custom dashboards, and more. Trace data is ingested as Span objects, and metrics data is ingested as dimensional Metric objects. An entity is created to identify the component reporting the data. If you navigate to that entity in the New Relic Explorer, you see a dashboard that displays a variety of interesting signals. The dashboard relies on the semantic conventions, and the quality of the experience is better for applications that closely follow them.

The next video shows how to verify data is flowing using simple NRQL queries and how to use the entity explorer to navigate to the dashboard for the application. Finally, we look at the UI for a more complex microservices setup.
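For example, queries along these lines confirm that span data is arriving; the service name here is a hypothetical placeholder you'd replace with your own.

```sql
-- Count recent spans from a given service (service.name is a
-- resource attribute from the semantic conventions).
SELECT count(*) FROM Span
WHERE service.name = 'demo-app' SINCE 30 minutes ago

-- Break down average span duration by operation name over time.
SELECT average(duration.ms) FROM Span
WHERE service.name = 'demo-app' FACET name TIMESERIES
```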

In this blog series, we’ve gone from core concepts to working examples. We showed how to wire together Java applications with the OpenTelemetry Collector, export telemetry data to New Relic, and visualize it in New Relic. Now that you have completed this blog series on understanding OpenTelemetry, you can put what you’ve learned to work in your own environment. Hopefully, we have demystified OpenTelemetry—so you can leverage this powerful new set of tools to reach your observability goals!