This blog post is part of the Understand OpenTelemetry series.
Part 1 provided an overview of OpenTelemetry and explained why it is the future of instrumentation.
Part 2 explored some of the core components of the OpenTelemetry open source project.
Now in Part 3, we are focusing on the primary OpenTelemetry data sources.
There are three primary data sources in the OpenTelemetry project: traces, metrics, and logs. Let's take a look at each data source and the related application programming interfaces (APIs) and software development kits (SDKs).
An OpenTelemetry trace captures the details of a single request through a system and is made up of units of work called spans. A span represents a single operation within a trace: the work done by an individual service or component involved in a request as it flows through the system. Examples include an HTTP call or a database query.
A trace is a tree of spans. Each trace contains a root span, which typically describes the end-to-end latency of the entire request, and optionally one or more child spans for its sub-operations. Spans contain metadata such as the span name, start time, end time, and a set of key:value pair attributes.
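The span tree described above can be sketched as a plain data class. This is an illustrative model only, not the actual OpenTelemetry API; the field and class names are assumptions for the example:

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Span:
    # Illustrative sketch of a span's metadata, not the real OpenTelemetry API.
    name: str
    start_time: float = field(default_factory=time.time)
    end_time: Optional[float] = None
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def end(self):
        self.end_time = time.time()

# A trace is a tree of spans: a root span with child spans for sub-operations.
root = Span("GET /checkout")
db = Span("SELECT orders", attributes={"db.system": "postgresql"})
root.children.append(db)
db.end()
root.end()
```

The root span here would carry the end-to-end latency (`end_time - start_time`), while the child span covers just the database call.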
The following video demonstrates a simple example of how library authors use the Trace API. It also shows Context APIs for correlating domain-specific data across services, and how application developers use the Trace SDK.
The Trace API
Application or library developers need to add the Trace API as a dependency. The OpenTelemetry Trace API provides a tracer, instantiated by the tracerProvider, that you can use to create spans. Each span generated by a tracer is associated with the name and version of the library that generated it.
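To make the tracer/provider relationship concrete, here is a toy sketch of that flow. It mimics the shape of the API rather than implementing it; the class and attribute names are illustrative assumptions:

```python
# Toy sketch: a tracer provider hands out tracers, and every span a tracer
# creates is tagged with the instrumentation library's name and version.
class Tracer:
    def __init__(self, library_name, library_version):
        self.library_name = library_name
        self.library_version = library_version

    def start_span(self, name):
        # A real span would also carry timing and context; this sketch
        # keeps only the association with the originating library.
        return {
            "name": name,
            "instrumentation.library": self.library_name,
            "instrumentation.version": self.library_version,
        }

class TracerProvider:
    def get_tracer(self, name, version):
        return Tracer(name, version)

tracer = TracerProvider().get_tracer("my.http.client", "1.4.0")
span = tracer.start_span("GET /users")
```

Keying each tracer by library name and version is what lets a backend tell you which instrumentation produced a given span.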
The Context API
A span contains a span context, a set of globally unique identifiers (such as the traceID) representing the data required for moving trace information across service boundaries. You use the Context API to propagate the context to downstream services and create distributed traces. As explained in the video, OpenTelemetry supports different standards for propagating trace context, including W3C Trace Context, B3, and Jaeger. Services monitored by OpenTelemetry and by an observability platform like New Relic can then appear in the same distributed trace.
You can also use the context to associate metrics and logs with the trace. OpenTelemetry supports the W3C Baggage standard, so developers can capture arbitrary key:value pairs to enrich trace, metric, and log data.
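Of the propagation formats mentioned above, W3C Trace Context is the simplest to picture: the trace ID travels between services in a `traceparent` HTTP header. The sketch below builds and parses that header by hand (real propagation is handled by the SDK's propagators; the helper function names are assumptions):

```python
# Sketch: building and parsing a W3C Trace Context "traceparent" header,
# which carries the trace ID across service boundaries.
# Format: version-traceID-parentSpanID-flags
import secrets

def make_traceparent(trace_id=None, span_id=None, sampled=True):
    trace_id = trace_id or secrets.token_hex(16)   # 32 hex chars
    span_id = span_id or secrets.token_hex(8)      # 16 hex chars
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{span_id}-{flags}"

def parse_traceparent(header):
    version, trace_id, span_id, flags = header.split("-")
    return {"trace_id": trace_id, "span_id": span_id,
            "sampled": flags == "01"}

header = make_traceparent(trace_id="4bf92f3577b34da6a3ce929d0e0e4736",
                          span_id="00f067aa0ba902b7")
ctx = parse_traceparent(header)
```

Because every downstream service parses the same `trace_id` out of this header, all of their spans end up in one distributed trace.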
The Trace SDK
Application developers need to take a dependency on the OpenTelemetry Trace SDK. With it, they can configure a tracerProvider that suits their application's needs. This includes associating a resource, configuring sampling, and registering a pipeline of span processors and an exporter with the tracer provider.
- A resource is a collection of attributes that describe the environment the application is running in, such as the name of the service, the host the service is running on, or, in a Kubernetes environment, the node or pod name.
- Sampling enables you to control the noise and overhead introduced by instrumentation by reducing the number of traces collected and sent to the backend. The SDK provides a few ready-to-use samplers and developers can configure a sampler based on specific application needs. You’re not limited to the trace SDK—you can configure additional sampling (like tail-based sampling) with the OpenTelemetry collector.
- You can register a pipeline of span processors and an exporter to the tracer provider. Span processors are invoked in the order they are registered, and you can use span processors to filter or enrich spans with attributes. Use a batching processor to batch a collection of spans before using the exporter to send them to your backend observability platform.
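The three configuration steps above can be sketched end to end. This is a minimal stand-in, not the real SDK: the sampler, processor, and exporter names are illustrative assumptions, chosen to show the order in which the pieces run:

```python
# Sketch of the tracer provider configuration flow: a probabilistic sampler
# decides whether to keep a trace, span processors run in registration order,
# and an exporter ships the finished spans to a backend.
import random

def ratio_sampler(ratio):
    return lambda span: random.random() < ratio

def enrich_processor(span):
    # Example processor: enrich every span with a resource-style attribute.
    span.setdefault("attributes", {})["deployment.environment"] = "prod"
    return span

class ListExporter:
    # Stand-in for a backend exporter: just collects finished spans.
    def __init__(self):
        self.exported = []
    def export(self, batch):
        self.exported.extend(batch)

sampler = ratio_sampler(1.0)          # keep every trace in this example
processors = [enrich_processor]       # invoked in the order registered
exporter = ListExporter()

span = {"name": "GET /users"}
if sampler(span):
    for process in processors:
        span = process(span)
    exporter.export([span])           # a batching processor would buffer first
```

Dropping the sampler ratio below 1.0 is exactly how you trade completeness for lower overhead, as described above.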
As you can see, trace data contains detailed information about the individual requests made to your application, and, as discussed, it is often sampled.
OpenTelemetry metric data represents aggregated measurements: time-series data captured from measurements about a service at a specific point in time. Compared to trace data, metric data provides less granular information, but metrics are useful for indicating the availability and performance of your services. Examples of metrics include CPU and memory utilization, request duration, throughput, and error rate.
The following video shows an example of how developers use the Metric API to instrument their code, implementing a meter provider with the SDK so they can configure metric measurement separately from how an application is instrumented.
The Metric API
OpenTelemetry requires a meter provider to be initialized in order to create the instruments that generate metrics. The OpenTelemetry Metric API enables you to add metadata to your metrics in the form of attributes, which you can then use to facet your data. The Metric API has a meterProvider that you can use to configure metric collection with the Metric SDK. The Metric API also provides access to various types of instruments, which are used to capture specific measurements. For example, a counter is a value that is summed over time.
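The counter mentioned above is easy to sketch: it keeps one running sum per distinct attribute set, which is what makes faceting by attributes possible. This is an illustrative model, not the Metric API itself; the class and method names are assumptions:

```python
# Sketch: a counter instrument sums measurements over time, and each
# distinct attribute set gets its own running sum so you can facet the data.
from collections import defaultdict

class Counter:
    def __init__(self, name):
        self.name = name
        self.sums = defaultdict(int)  # one running sum per attribute set

    def add(self, value, attributes=None):
        key = tuple(sorted((attributes or {}).items()))
        self.sums[key] += value

requests = Counter("http.server.requests")
requests.add(1, {"http.method": "GET"})
requests.add(1, {"http.method": "GET"})
requests.add(1, {"http.method": "POST"})
```

Faceting this counter by `http.method` would now show two GET requests and one POST request.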
The OpenTelemetry Metric specification defines a number of instruments, and that set might change in the near term because the specification is still evolving. Java and Go have the most mature Metric APIs.
The Metric SDK
Just like the Trace SDK, the Metric SDK enables application developers to configure a meter provider for their specific applications. You can associate a resource, and you can register a pipeline of metric processors and an exporter.
You use metric processors for filtering or enriching metric attributes. The metric pipeline also includes an aggregator that indicates how metrics are aggregated and an exporter to send data to a backend observability platform.
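The metric pipeline described above can be sketched in a few lines. As with the earlier sketches, the function names and the dropped attribute are illustrative assumptions, not SDK APIs:

```python
# Sketch of the metric pipeline: raw measurements flow through a processor
# (filtering/enriching attributes), then an aggregator, then an exporter.
def drop_debug_attrs(measurement):
    # Example processor: filter out a noisy attribute before aggregation.
    measurement["attributes"].pop("debug.id", None)
    return measurement

def sum_aggregator(measurements):
    # Example aggregator: a sum aggregation over the collected values.
    return sum(m["value"] for m in measurements)

measurements = [
    {"value": 3, "attributes": {"route": "/users", "debug.id": "x1"}},
    {"value": 4, "attributes": {"route": "/users"}},
]
processed = [drop_debug_attrs(m) for m in measurements]
total = sum_aggregator(processed)  # an exporter would then ship this value
```

Swapping the aggregator (for example, a histogram instead of a sum) changes how the backend sees the metric without touching the instrumentation code.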
The final type of data in the OpenTelemetry project is log data, perhaps the simplest form of telemetry data. A log is a time-stamped text record, ideally in a structured format, that can be filtered by its attributes.
OpenTelemetry support for log data is still very early. There are two strategies you can take for using log data with OpenTelemetry:
- Implement exporters for a language's existing logging libraries, using extensions to correlate log data with the current trace and provide additional context.
- Use a log forwarder with the OpenTelemetry collector and export the log data to the backend observability platform.
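The first strategy hinges on the log record carrying the current trace and span IDs. Here is a sketch of such a structured, time-stamped record; the field names follow the spirit of the approach but are assumptions, not a fixed OpenTelemetry schema:

```python
# Sketch: a structured, time-stamped log record that carries the current
# trace and span IDs so a backend can correlate it with a distributed trace.
import json
import time

def make_log_record(message, severity, trace_id, span_id):
    return json.dumps({
        "timestamp": time.time(),
        "severity": severity,
        "body": message,
        "trace_id": trace_id,  # correlation fields linking log to trace
        "span_id": span_id,
    })

record = make_log_record("payment failed", "ERROR",
                         "4bf92f3577b34da6a3ce929d0e0e4736",
                         "00f067aa0ba902b7")
```

With the `trace_id` embedded, an observability platform can jump from this error log straight to the trace of the failing request.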
The following video shows architecture diagrams for these two approaches:
Now you have a basic understanding of how the three main data sources work in the OpenTelemetry open source project: traces, metrics, and logs. In next week’s blog post, we start working with OpenTelemetry data in New Relic.
- Sign up for New Relic’s free tier and start sending your OpenTelemetry data today.
- Check out Part 4 to get started instrumenting a Java application with OpenTelemetry!
- Ready to apply your knowledge and instrument a basic application in your preferred programming language? Go to the OpenTelemetry Masterclass.
The views expressed on this blog are those of the author and do not necessarily reflect the views of New Relic. Any solutions offered by the author are environment-specific and not part of the commercial solutions or support offered by New Relic. Please join us exclusively at the Explorers Hub (discuss.newrelic.com) for questions and support related to this blog post. This blog may contain links to content on third-party sites. By providing such links, New Relic does not adopt, guarantee, approve or endorse the information, views or products available on such sites.