At New Relic, we're seeing a clear trend: more and more customers are enthusiastically adopting OpenTelemetry as their standard for observability. This adoption is rapidly expanding into serverless environments like Azure Functions. As this powerful combination becomes more common, setting it up correctly is key. This guide will walk you through exactly how to instrument your DotNet Azure Functions with OpenTelemetry to get complete, end-to-end visibility in New Relic.
Before we dive in, it helps to think of your function app as two separate parts. I found this concept a bit tricky at first, so hopefully this breakdown gets you up to speed quickly without spending hours (or even days) figuring it out:
- Azure Functions Host: This is the runtime that receives the trigger (e.g., an HTTP request), manages the lifecycle, and calls your code. It has its own built-in telemetry system, which traditionally defaults to its own Application Insights pipeline.
- .NET Isolated Worker process: This is your Program.cs and function code. It runs in a separate process, and the OpenTelemetry SDK you configure lives here.
Keeping this split in mind is important: OpenTelemetry has to be enabled in both parts for it to integrate fully with the Azure Functions environment.
Enable OpenTelemetry in the Functions host
According to Azure, this feature is currently in preview. At this time, only HTTP, Service Bus, and Event Hubs triggers are supported with OpenTelemetry output.
OpenTelemetry with Azure Functions enables the collection and export of telemetry data, including logs, metrics, and traces, from both the host process and the language-specific worker process. This data can be sent to any OpenTelemetry-compliant endpoint, like New Relic's OTLP-compliant edge endpoint.
By default, the Functions Host and your Worker process don't automatically share telemetry context. This will become a critical concept later on.
The telemetryMode setting is a flag in your host.json file that instructs the Azure Functions Host to change its behavior. When you set it to "OpenTelemetry", you are telling the host: stop using the default Application Insights telemetry pipeline and instead emit function invocation data (like start/stop times and trigger details) as OpenTelemetry signals.
Without this setting, your OpenTelemetry SDK in the worker is essentially "blind" to the function execution itself. You can always manually invoke your own trace entirely inside your worker code, but you'll be missing the main parent span that represents the function invocation, resulting in broken or incomplete traces.
How to Configure It
// host.json
{
  "version": "2.0",
  "telemetryMode": "OpenTelemetry"
}
Keep in mind: Because the host's built-in telemetry is abstracted away from your application code, the invocation-related root entry spans will always report telemetry.sdk.language as "dotnet" and, as of this writing, telemetry.sdk.version as "1.9.0", regardless of your actual function runtime.
For example, a Java Azure Function still displays telemetry.sdk.language = dotnet on those host-emitted spans.
Enable OpenTelemetry in your app
How you instrument your application's worker process with OpenTelemetry depends on which signals and instrumentations your app actually needs.
You can find various available instrumentations in the OpenTelemetry.io registry.
To get started, install these prerequisite packages: Microsoft.Azure.Functions.Worker.OpenTelemetry, OpenTelemetry.Extensions.Hosting, and OpenTelemetry.Exporter.OpenTelemetryProtocol.
For this demonstration, a basic HTTP client (such as System.Net.Http.HttpClient or System.Net.HttpWebRequest) will be used to illustrate how to collect metrics and traces for outgoing HTTP requests. You'll also need OpenTelemetry.Instrumentation.Http, in addition to any other OpenTelemetry packages specific to your own function's needs.
dotnet add package Microsoft.Azure.Functions.Worker.OpenTelemetry --version 1.1.0-preview6
dotnet add package OpenTelemetry.Extensions.Hosting
dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol
dotnet add package OpenTelemetry.Instrumentation.Http
Keep in mind that the way you configure OpenTelemetry depends on whether your project startup uses IHostBuilder or IHostApplicationBuilder, which was introduced in v2.x of the .NET isolated worker model extension.
This example assumes your app is using IHostApplicationBuilder:
// Program.cs
using Microsoft.Azure.Functions.Worker.Builder;
using Microsoft.Azure.Functions.Worker.OpenTelemetry;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using OpenTelemetry;
using OpenTelemetry.Trace;
var builder = FunctionsApplication.CreateBuilder(args);
builder.ConfigureFunctionsWebApplication();
builder.Services.AddOpenTelemetry()
    // Apply Azure Functions specific OpenTelemetry defaults
    .UseFunctionsWorkerDefaults()
    .WithTracing(traceBuilder => traceBuilder.AddHttpClientInstrumentation())
    .UseOtlpExporter();
var app = builder.Build();
app.Run();
This example assumes your app is using IHostBuilder, the more traditional way to configure and create a host:
// Program.cs
using Microsoft.Azure.Functions.Worker.OpenTelemetry;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using OpenTelemetry;
using OpenTelemetry.Trace;

var host = new HostBuilder()
    .ConfigureFunctionsWebApplication()
    .ConfigureServices(services =>
    {
        services
            .AddOpenTelemetry()
            // Apply Azure Functions specific OpenTelemetry defaults
            .UseFunctionsWorkerDefaults()
            .WithTracing(tracing => tracing.AddHttpClientInstrumentation())
            .UseOtlpExporter();
    })
    .Build();

host.Run();
But what about metrics, you ask?
The OpenTelemetry SDK will automatically collect a standard set of metrics for every outgoing HttpClient request once the HttpClient instrumentation is registered. These metrics are defined by the official OpenTelemetry semantic conventions and are incredibly useful for monitoring the performance and reliability of your outbound dependencies.
The primary metrics you get automatically are:
- http.client.request.duration (Histogram): The most common and important metric. It measures the time taken for each outgoing HTTP request, from the moment it's sent until the response is fully received.
- http.client.active_requests (UpDownCounter): A counter that goes up every time a new request is started and goes down when the request finishes.
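One caveat: the Program.cs examples above only register the HttpClient instrumentation with the tracing pipeline. Depending on your package versions, you may also need to register it with the metrics pipeline for these metrics to be exported. Here's a minimal sketch of that addition, using the IHostApplicationBuilder style; treat it as an assumption to verify against your own setup:

// Program.cs (tracing plus metrics; WithMetrics and its AddHttpClientInstrumentation
// come from OpenTelemetry.Extensions.Hosting and OpenTelemetry.Instrumentation.Http)
using Microsoft.Azure.Functions.Worker.Builder;
using Microsoft.Azure.Functions.Worker.OpenTelemetry;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using OpenTelemetry;
using OpenTelemetry.Metrics;
using OpenTelemetry.Trace;

var builder = FunctionsApplication.CreateBuilder(args);
builder.ConfigureFunctionsWebApplication();

builder.Services.AddOpenTelemetry()
    // Apply Azure Functions specific OpenTelemetry defaults
    .UseFunctionsWorkerDefaults()
    .WithTracing(tracing => tracing.AddHttpClientInstrumentation())
    // Also collect the HTTP client metrics listed above
    .WithMetrics(metrics => metrics.AddHttpClientInstrumentation())
    .UseOtlpExporter();

var app = builder.Build();
app.Run();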
Here's a cool tidbit: New Relic automatically normalizes these metrics to fit its APM semantic conventions, which comes with several benefits that won't cost you a dime!
- Enhanced APM UI Experience: Normalized data populates the standard New Relic APM UI, enabling consistent querying of APM data and golden metrics across all applications, regardless of whether they are instrumented with New Relic or OpenTelemetry.
- Non-destructive Process: This automated process is non-destructive, guaranteeing continued access to your original, unadulterated OpenTelemetry source data.
Learn more about this enhanced New Relic OTel experience.
Putting all of this together in a simple function:
// HttpTrigger.cs
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

namespace Newrelic.Function;

public class HttpTrigger
{
    private readonly ILogger<HttpTrigger> _logger;
    private static readonly HttpClient _httpClient = new HttpClient();

    public HttpTrigger(ILogger<HttpTrigger> logger)
    {
        _logger = logger;
    }

    [Function("HttpTrigger")]
    public async Task<IActionResult> Run([HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req)
    {
        try
        {
            await _httpClient.GetStringAsync("http://example.com");
            return new OkObjectResult("Welcome to Azure Functions! The external call was successful.");
        }
        catch (HttpRequestException ex)
        {
            return new BadRequestObjectResult($"An error occurred: {ex.Message}");
        }
    }
}
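If you also want spans for your own business logic (beyond the automatic invocation and HTTP client spans), you can create them with a standard System.Diagnostics.ActivitySource. Below is a minimal sketch, not part of the original example: the source name "Newrelic.Function.Custom" is an arbitrary placeholder, and it must also be registered in Program.cs via AddSource for its spans to be exported.

// HttpTrigger.cs (variation of the class above with a hypothetical custom span)
using System.Diagnostics;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Functions.Worker;

namespace Newrelic.Function;

public class HttpTrigger
{
    // Arbitrary source name; register it in Program.cs, e.g.
    // .WithTracing(t => t.AddSource("Newrelic.Function.Custom").AddHttpClientInstrumentation())
    private static readonly ActivitySource Source = new("Newrelic.Function.Custom");
    private static readonly HttpClient _httpClient = new HttpClient();

    [Function("HttpTrigger")]
    public async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req)
    {
        // Child span under the function invocation span emitted by the host/worker
        using var activity = Source.StartActivity("call-example-dot-com");
        activity?.SetTag("example.note", "custom attribute on the span");

        await _httpClient.GetStringAsync("http://example.com");
        return new OkObjectResult("Welcome to Azure Functions! The external call was successful.");
    }
}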
Configuring the data exporter
Regardless of whether "telemetryMode": "OpenTelemetry" is set in your host.json file, the actual destination for your telemetry data is controlled by specific environment variables. These settings tell both the host and the worker process where to send the telemetry data.
Configure the following application settings (in local.settings.json when running locally):
- OTEL_EXPORTER_OTLP_ENDPOINT: "https://otlp.nr-data.net:4317"
- OTEL_EXPORTER_OTLP_HEADERS: "api-key=YOUR_INGEST_KEY"
- OTEL_SERVICE_NAME: the name you want the service to appear under in New Relic
- OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE: "delta"
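For local development, these typically go in the Values section of local.settings.json. Here's a minimal sketch; the storage setting, worker runtime, service name, and ingest key are placeholders for your own values:

// local.settings.json (local development only; use application settings in Azure)
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "OTEL_EXPORTER_OTLP_ENDPOINT": "https://otlp.nr-data.net:4317",
    "OTEL_EXPORTER_OTLP_HEADERS": "api-key=YOUR_INGEST_KEY",
    "OTEL_SERVICE_NAME": "my-function-app",
    "OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE": "delta"
  }
}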
Important: APPLICATIONINSIGHTS_CONNECTION_STRING
Review all of your application settings and remove the default APPLICATIONINSIGHTS_CONNECTION_STRING environment variable if you only want to send OpenTelemetry data to the New Relic OTLP endpoint; otherwise, you may incur additional costs through Application Insights.
Sampling your data
Keep in mind that sampling is not enabled by default in the OpenTelemetry SDK. With classic Application Insights, adaptive sampling often defaults to around five requests per second. When configuring OpenTelemetry exporters directly, however, the sampling behavior is different and needs explicit configuration to suit your needs.
With “telemetryMode” set to “opentelemetry” in an Azure Function, sampling at the host level is controlled by the sampling decision made in your worker process. The Functions Host does not apply its own independent sampling rules; instead, it respects the sampling decision propagated from your OpenTelemetry SDK.
For further sampling configuration, use the application settings (local.settings.json when running locally). You can find the accepted values for OTEL_TRACES_SAMPLER and OTEL_TRACES_SAMPLER_ARG in the general SDK configuration documentation.
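For example, to keep roughly 25% of traces with a parent-based, ratio-based sampler, you could add the following to those same settings (a sketch; the sampler name comes from the OpenTelemetry SDK environment variable spec, and 0.25 is just an illustrative ratio):

// Added to the Values section of local.settings.json (or application settings in Azure)
"OTEL_TRACES_SAMPLER": "parentbased_traceidratio",
"OTEL_TRACES_SAMPLER_ARG": "0.25"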
What it all looks like
- New Relic distributed tracing
- Host and worker process logs automatically captured and contextualized with trace context (trace.id / span.id)
- Dynamic service map views and health
In Summary
- Program.cs contains your OTel service registrations and configuration for how your telemetry data is processed and sent.
- host.json configures how telemetry data (logs, metrics, and traces) generated by your function app is collected and exported on the Functions Host.
You’ll need both for a complete, end-to-end trace experience.
The views expressed on this blog are those of the author and do not necessarily reflect the views of New Relic. Any solutions offered by the author are environment-specific and not part of the commercial solutions or support offered by New Relic. Please join us exclusively at the Explorers Hub (discuss.newrelic.com) for questions and support related to this blog post. This blog may contain links to content on third-party sites. By providing such links, New Relic does not adopt, guarantee, approve, or endorse the information, views, or products available on such sites.