
Web development used to be a simpler endeavor. It wasn't so long ago that a developer would insert a few lines of PHP into an HTML page and push it to the server:

<html>
  <head>
  </head>
  <body>
    <ul>
      <?php for ($i = 1; $i < 5; $i++) { ?>
        <li>
          My Brilliant Idea, Number: <?php echo $i; ?>
        </li>
      <?php } ?>
    </ul>
  </body>
</html>

What happened that so fundamentally changed the field of web development?

There are many factors that led to the state of web development as we know it today. Chief among them are the rising expectations that our web applications do more and more, and that they be available on every type of device, from laptops to smartphones to watches. These expectations drove a growth in the complexity of our tooling and the infrastructure of our applications, which, in turn, gave birth to containerization, distributed architectures, and Kubernetes to manage it all.

As that complexity, and the tooling to support it, keeps growing, so does the difficulty of obtaining telemetry data from our applications.

This timeline of the evolution of web development may sound familiar to you. It sure does to me. I wrote code for a living, and sometimes I even wrote tests to go along with that code! It used to be that I could write the code, maybe deploy it myself or hand it off to someone else to deploy, and that was the end of the picture as far as I was concerned.

However, something changed. I began to realize that I needed the kind of information telemetry data provides in order to do my job as well as I could. I wanted to be able to investigate a trace all the way through, to understand quickly what was wrong and which line of code to focus on. As my application transformed from a monolith into an increasingly distributed architecture, it became essential to know which microservice was impacting performance in order to deploy a proper fix.

At the same time, the landscape of telemetry tooling was daunting. The ecosystem was fragmented across competing protocols, each with its own set of API standards, SDKs, and collectors.

This is where the story of OpenTelemetry gets its start. 

In 2016 the Cloud Native Computing Foundation (CNCF) adopted OpenTracing, and two years later Google open-sourced OpenCensus. Both of these projects revolutionized the telemetry landscape by bringing forth vendor-neutral standards and APIs to trace across all the surfaces of an application, collect metrics, and send them to any backend of your choice.

These two projects had so much initial success in their distinct but overlapping areas of concern that in 2019, the CNCF formally merged them into one unified project called OpenTelemetry (OTel). This new umbrella project would provide engineers with a single point to integrate metrics and traces across all the surfaces of their applications and send that telemetry data to any backend they chose.

As engineers, we know that an integrated and unified approach to standards decreases complexity in our applications. We saw this with the movement towards Swagger and then OpenAPI in the API specifications space, and we see it now with OpenTelemetry in the observability space.

OpenTelemetry offers a unified API to collect and shape telemetry data from all the surfaces of your application. Using the OpenTelemetry Collector's OTLP receiver, you can ingest telemetry data and then export it to any backend of your choice. In between the receiver and the exporter, you can also shape the data with any number of processors. Popular processing use cases include grouping the data by an attribute and tagging metrics, spans, and traces with Kubernetes metadata.
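
To make that pipeline concrete, here is a minimal sketch of a Collector configuration in its native YAML. The batch and k8sattributes processors are illustrative choices (k8sattributes ships in the Collector's contrib distribution), and the endpoint and api-key header follow the New Relic settings shown later in this post:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:           # group telemetry into batches before export
  k8sattributes:   # tag spans and metrics with Kubernetes metadata

exporters:
  otlphttp:
    endpoint: https://otlp.nr-data.net:443
    headers:
      api-key: ${env:NEW_RELIC_LICENSE_KEY}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch, k8sattributes]
      exporters: [otlphttp]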

The promise of OTel is the end of learning a unique telemetry API specification every time you move to a new telemetry backend, and the end of reformatting all your data for every new observability provider. Perhaps most alluring, it removes the need to change dependencies or learn new tooling for every new vendor: the same OTel SDK works across supported vendors like New Relic, reducing your tooling tax and cognitive overhead.

For example, to add OpenTelemetry instrumentation to your Ruby on Rails application, you would use the Rails implementation of the Ruby SDK by adding and installing the gems and then configuring the SDK.
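
Assuming the gem names from the opentelemetry-ruby project (a sketch; pin versions as your project requires), the Gemfile additions would look like this:

gem 'opentelemetry-sdk'
gem 'opentelemetry-exporter-otlp'
gem 'opentelemetry-instrumentation-all'

After running bundle install, configure the SDK in an initializer such as config/initializers/opentelemetry.rb: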

require 'opentelemetry/sdk'
require 'opentelemetry/instrumentation/all'

OpenTelemetry::SDK.configure do |c|
  c.use_all # enable every instrumentation gem that is installed
end
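
Auto-instrumentation covers the common libraries, but you can also create spans of your own. Here is a small sketch using the SDK's tracer API (the tracer name, span name, and attribute are illustrative):

tracer = OpenTelemetry.tracer_provider.tracer('my-app')

tracer.in_span('checkout') do |span|
  span.set_attribute('cart.item_count', 3)
  # ... your business logic runs inside the span ...
end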

Want to get that valuable telemetry data into New Relic for analysis? Set the environment variable pointing the OTLP exporter to the New Relic endpoint:

export OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net:443

You will also want to include your New Relic license key in every OTel request so New Relic can associate the data with your account. You can do that by defining an environment variable with your license key:

export OTEL_EXPORTER_OTLP_HEADERS="api-key=your-new-relic-license-key"

That’s it. With those two environment variables, the telemetry data received and processed by your OTel receivers and processors flows into New Relic. That kind of reduction in tooling overhead can make a big difference in your work as an engineer. As your application grows in scope and complexity, your dependencies don't need to grow in either quantity or complexity along with it.