OpenTelemetry is a vendor-neutral standard for observability that provides a unified way to capture traces, metrics, and logs. LangWatch is fully compatible with OpenTelemetry, allowing you to use any OpenTelemetry-compatible library in any programming language to capture your LLM traces and send them to LangWatch. This guide shows you how to set up OpenTelemetry instrumentation in any language and configure it to export traces to LangWatch’s OTEL API endpoint.
Protip: want to get started even faster? Copy our llms.txt and ask an AI to do this integration.

Prerequisites

  • Obtain your LANGWATCH_API_KEY from the LangWatch dashboard
  • Install the OpenTelemetry SDK for your programming language

LangWatch OTEL API Endpoint

LangWatch provides a standard OpenTelemetry Protocol (OTLP) endpoint for receiving traces:
https://app.langwatch.ai/api/otel/v1/traces
This endpoint accepts OTLP over HTTP and gRPC protocols, making it compatible with all OpenTelemetry SDKs.
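For example, with the OpenTelemetry Java SDK the exporter only needs this endpoint URL and an Authorization header carrying your API key. This is a minimal fragment; the complete setup appears in the example further down.
// Sketch: point an OTLP/HTTP span exporter at LangWatch and authenticate with your API key
OtlpHttpSpanExporter exporter = OtlpHttpSpanExporter.builder()
    .setEndpoint("https://app.langwatch.ai/api/otel/v1/traces")
    .addHeader("Authorization", "Bearer " + System.getenv("LANGWATCH_API_KEY"))
    .build();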

General Setup Pattern

The setup follows this general pattern across all languages:
  1. Install OpenTelemetry SDK for your language
  2. Configure the OTLP exporter to point to LangWatch’s endpoint (in code, or via the standard OTLP environment variables; see the sketch after this list)
  3. Set up authentication using your API key
  4. Initialize the trace provider with the exporter
  5. Instrument your LLM calls using available instrumentation libraries
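
As an alternative to wiring the exporter in code, most OpenTelemetry SDKs can be configured entirely through the standard OTEL_* environment variables. Below is a hedged Java sketch of that route; it assumes you add the io.opentelemetry:opentelemetry-sdk-extension-autoconfigure dependency, and the variable values in the comments mirror the endpoint and Bearer header used in the full example below.
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.sdk.autoconfigure.AutoConfiguredOpenTelemetrySdk;

public class AutoConfiguredSetup {
    public static OpenTelemetry init() {
        // Picks up the standard OpenTelemetry environment variables, for example:
        //   OTEL_SERVICE_NAME=my-service
        //   OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
        //   OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://app.langwatch.ai/api/otel/v1/traces
        //   OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer <your LANGWATCH_API_KEY>
        return AutoConfiguredOpenTelemetrySdk.initialize().getOpenTelemetrySdk();
    }
}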

Language-Specific Examples

Step 1: Install OpenTelemetry

Add to your pom.xml:
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-sdk</artifactId>
    <version>1.32.0</version>
</dependency>
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-exporter-otlp</artifactId>
    <version>1.32.0</version>
</dependency>

Step 2: Configure the exporter

import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.trace.propagation.W3CTraceContextPropagator;
import io.opentelemetry.context.propagation.ContextPropagators;
import io.opentelemetry.exporter.otlp.http.trace.OtlpHttpSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.resources.Resource;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;

public class OpenTelemetryConfig {
    public static OpenTelemetry initOpenTelemetry() {
        OtlpHttpSpanExporter spanExporter = OtlpHttpSpanExporter.builder()
            .setEndpoint("https://app.langwatch.ai/api/otel/v1/traces")
            .addHeader("Authorization", "Bearer " + System.getenv("LANGWATCH_API_KEY"))
            .build();

        SdkTracerProvider sdkTracerProvider = SdkTracerProvider.builder()
            .addSpanProcessor(BatchSpanProcessor.builder(spanExporter).build())
            .setResource(Resource.getDefault().toBuilder()
                .put("service.name", "my-service") // how this service appears in LangWatch
                .build())
            .build();

        return OpenTelemetrySdk.builder()
            .setTracerProvider(sdkTracerProvider)
            .setPropagators(ContextPropagators.create(W3CTraceContextPropagator.getInstance()))
            .buildAndRegisterGlobal();
    }
}
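
The initializer only needs to run once at application startup; buildAndRegisterGlobal() registers the SDK as the global OpenTelemetry instance that the next step relies on. A minimal sketch follows (the Main class name is just for illustration):
public class Main {
    public static void main(String[] args) {
        // Register the global OpenTelemetry SDK once, before any LLM calls are made
        OpenTelemetryConfig.initOpenTelemetry();

        // ... run your application; spans are batched and exported to LangWatch
    }
}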

Step 3: Instrument your LLM calls

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Tracer;

public class LLMService {
    private final Tracer tracer = GlobalOpenTelemetry.getTracer("my-service");

    public void callLLM() {
        var span = tracer.spanBuilder("llm-call").startSpan();
        try (var scope = span.makeCurrent()) {
            // Your LLM call here
        } finally {
            span.end();
        }
    }
}

Available Instrumentation Libraries

LangWatch works with any OpenTelemetry-compatible instrumentation library. Here are some popular options:

Java Libraries

  • Spring AI: Spring AI provides built-in observability support for AI applications, including OpenTelemetry integration for tracing LLM calls and AI operations
  • OpenTelemetry Java SDK: Use OpenTelemetry Java SDK with custom spans

.NET Libraries

  • Azure Monitor OpenTelemetry: Azure Monitor OpenTelemetry provides comprehensive OpenTelemetry support for .NET applications, including automatic instrumentation and Azure-specific features
  • OpenTelemetry .NET SDK: Use OpenTelemetry .NET SDK with custom instrumentation

Manual Instrumentation

If no automatic instrumentation is available for your LLM provider, you can manually create spans:
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;

public class LLMService {
    private final Tracer tracer = GlobalOpenTelemetry.getTracer("my-service");

    public String callLLM(String prompt) {
        Span span = tracer.spanBuilder("llm-call").startSpan();
        
        try (var scope = span.makeCurrent()) {
            // Add relevant attributes
            span.setAttribute("llm.provider", "custom-provider");
            span.setAttribute("llm.model", "gpt-5-mini");
            span.setAttribute("llm.prompt", prompt);
            
            // Your LLM call here
            String response = yourLLMClient.generate(prompt);
            
            return response;
        } finally {
            span.end();
        }
    }
}
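
If the call can fail, it is worth recording the error on the span so failed LLM calls stand out in LangWatch. Below is a sketch of the same method with error handling added; recordException, setStatus, and StatusCode are standard OpenTelemetry API calls, and yourLLMClient remains a placeholder for your own client.
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.StatusCode;
import io.opentelemetry.api.trace.Tracer;

public class LLMService {
    private final Tracer tracer = GlobalOpenTelemetry.getTracer("my-service");

    public String callLLM(String prompt) {
        Span span = tracer.spanBuilder("llm-call").startSpan();

        try (var scope = span.makeCurrent()) {
            span.setAttribute("llm.prompt", prompt);

            // Your LLM call here
            return yourLLMClient.generate(prompt);
        } catch (Exception e) {
            // Mark the span as failed and attach the exception details
            span.recordException(e);
            span.setStatus(StatusCode.ERROR, "LLM call failed");
            throw e;
        } finally {
            span.end();
        }
    }
}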

Environment Variables

Set these environment variables for authentication:
export LANGWATCH_API_KEY="your-api-key-here"

Verification

After setting up your instrumentation, you can verify that traces are being sent to LangWatch by:
  1. Making a few LLM calls in your application
  2. Checking the LangWatch dashboard for incoming traces
  3. Looking for spans with your service name and LLM call details

Troubleshooting

Next Steps