Using LangWatch with OpenTelemetry
Learn how to integrate the LangWatch Python SDK with your existing OpenTelemetry setup.
The LangWatch Python SDK is built entirely on top of the robust OpenTelemetry (OTel) standard. This means seamless integration with existing OTel setups and interoperability with the wider OTel ecosystem.
LangWatch Spans are OpenTelemetry Spans
It’s important to understand that LangWatch traces and spans are standard OpenTelemetry traces and spans. LangWatch adds specific semantic attributes (like `langwatch.span.type`, `langwatch.inputs`, `langwatch.outputs`, `langwatch.metadata`) to these standard spans to power its observability features.
This foundation provides several benefits:
- Interoperability: Traces generated with LangWatch can be sent to any OTel-compatible backend (Jaeger, Tempo, Datadog, etc.) alongside your other application traces.
- Familiar API: If you’re already familiar with OpenTelemetry concepts and APIs, working with LangWatch’s manual instrumentation will feel natural.
- Leverage Existing Setup: LangWatch integrates smoothly with your existing OTel `TracerProvider` and instrumentation.
Perhaps the most significant advantage is that LangWatch seamlessly integrates with the vast ecosystem of standard OpenTelemetry auto-instrumentation libraries. This means you can easily combine LangWatch’s LLM-specific observability with insights from other parts of your application stack. For example, if you use `opentelemetry-instrumentation-celery`, traces initiated by LangWatch for an LLM task can automatically include spans generated within your Celery workers, giving you a complete end-to-end view of the request, including background processing, without any extra configuration.
Leverage the OpenTelemetry Ecosystem: Auto-Instrumentation
One of the most powerful benefits of LangWatch’s OpenTelemetry foundation is its automatic compatibility with the extensive ecosystem of OpenTelemetry auto-instrumentation libraries.
When you use standard OTel auto-instrumentation for libraries like web frameworks, databases, or task queues alongside LangWatch, you gain complete end-to-end visibility into your LLM application’s requests. Because LangWatch and these auto-instrumentors use the same underlying OpenTelemetry tracing system and context propagation mechanisms, spans generated across different parts of your application are automatically linked together into a single, unified trace.
This means you don’t need to manually stitch together observability data from your LLM interactions and the surrounding infrastructure. If LangWatch instruments an LLM call, and that call involves fetching data via an instrumented database client or triggering a background task via an instrumented queue, all those operations will appear as connected spans within the same trace view in LangWatch (and any other OTel backend you use).
Examples of Auto-Instrumentation Integration
Here are common scenarios where combining LangWatch with OTel auto-instrumentation provides significant value:
- Web Frameworks (FastAPI, Flask, Django): Using libraries like `opentelemetry-instrumentation-fastapi`, an incoming HTTP request automatically starts a trace. When your request handler calls a function instrumented with `@langwatch.trace` or `@langwatch.span`, those LangWatch spans become children of the incoming request span. You see the full request lifecycle, from web server entry to LLM processing and response generation.
- HTTP Clients (Requests, httpx, aiohttp): If your LLM application makes outbound API calls (e.g., to fetch external data, call a vector database API, or use a non-instrumented LLM provider via REST) using libraries instrumented by `opentelemetry-instrumentation-requests` or similar, these HTTP request spans will automatically appear within your LangWatch trace, showing the latency and success/failure of these external dependencies.
- Task Queues (Celery, RQ): When a request handled by your web server (and traced by LangWatch) enqueues a background job using `opentelemetry-instrumentation-celery`, the trace context is automatically propagated. The spans generated by the Celery worker processing that job will be linked to the original LangWatch trace, giving you visibility into asynchronous operations triggered by your LLM pipeline.
- Databases & ORMs (SQLAlchemy, Psycopg2, Django ORM): Using libraries like `opentelemetry-instrumentation-sqlalchemy`, any database queries executed during your LLM processing (e.g., for RAG retrieval, user data lookup, logging results) will appear as spans within the relevant LangWatch trace, pinpointing database interaction time and specific queries.
To enable this, simply ensure you have installed and configured the relevant OpenTelemetry auto-instrumentation libraries according to their documentation, typically involving an installation step (`pip install opentelemetry-instrumentation-<library>`) and sometimes an initialization step (like `CeleryInstrumentor().instrument()`). As long as they use the same (or the global) `TracerProvider` that LangWatch is configured with, the integration is automatic.
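For instance, a minimal sketch of enabling two such instrumentors alongside LangWatch (the exact set you enable depends on your stack):

```python
import langwatch
from opentelemetry.instrumentation.celery import CeleryInstrumentor
from opentelemetry.instrumentation.requests import RequestsInstrumentor

# Attaches the LangWatch exporter to the (global) TracerProvider.
langwatch.setup()

# Instrumentors installed via `pip install opentelemetry-instrumentation-requests`
# and `pip install opentelemetry-instrumentation-celery`. They use the same global
# TracerProvider, so their spans land in the same traces as your LangWatch spans.
RequestsInstrumentor().instrument()
CeleryInstrumentor().instrument()
```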
Example: Combining LangWatch, RAG, OpenAI, and Celery
Let’s illustrate this with a simplified example involving a web request that performs RAG, calls OpenAI, and triggers a background Celery task.
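The listing below is a condensed sketch of what such a pipeline could look like; the broker URL, model name, retrieval logic, and the `type="rag"` span type are illustrative placeholders:

```python
import langwatch
from celery import Celery
from openai import OpenAI
from opentelemetry.instrumentation.celery import CeleryInstrumentor

langwatch.setup()
CeleryInstrumentor().instrument()  # propagates trace context into Celery workers

celery_app = Celery("tasks", broker="redis://localhost:6379/0")  # placeholder broker
client = OpenAI()


@celery_app.task
def process_result_background(answer: str) -> None:
    # Runs in the Celery worker; its spans are linked to the originating trace.
    print(f"Post-processing answer: {answer}")


@langwatch.span(type="rag")
def retrieve_documents(query: str) -> list[str]:
    # Placeholder retrieval step; appears as a child span of handle_request.
    return ["Some retrieved context about " + query]


@langwatch.trace(name="handle_request")
def handle_request(user_query: str) -> str:
    # Capture OpenAI calls made with this client as child spans.
    langwatch.get_current_trace().autotrack_openai_calls(client)

    docs = retrieve_documents(user_query)
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": "Answer using this context:\n" + "\n".join(docs)},
            {"role": "user", "content": user_query},
        ],
    )
    answer = completion.choices[0].message.content

    # Enqueueing the background task is recorded as a span; context is propagated.
    process_result_background.delay(answer)
    return answer
```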
In this example:
- The `handle_request` function is the main trace.
- `retrieve_documents` is a child span created by LangWatch.
- The OpenAI call creates child spans (due to `autotrack_openai_calls`).
- The call to `process_result_background.delay` creates a span indicating the task was enqueued.
- Critically, `CeleryInstrumentor` automatically propagates the trace context, so when the Celery worker picks up the `process_result_background` task, its execution is linked as a child span (or spans, if the task itself creates more) under the original `handle_request` trace.
This gives you a unified view of the entire operation, from the initial request through LLM processing, RAG, and background task execution.
Integrating with langwatch.setup()
When you call `langwatch.setup()`, it intelligently interacts with your existing OpenTelemetry environment:
- Checks for Existing `TracerProvider`:
  - If you provide a `TracerProvider` instance via the `tracer_provider` argument in `langwatch.setup()` (as in the sketch below), LangWatch will use that specific provider.
  - If you don’t provide one, LangWatch checks if a global `TracerProvider` has already been set (e.g., by another library or your own OTel setup code).
  - If neither is found, LangWatch creates a new `TracerProvider`.
- Adding the LangWatch Exporter:
  - If LangWatch uses an existing `TracerProvider` (either provided via the argument or detected globally), it will add its own OTLP Span Exporter to that provider’s list of Span Processors. It does not remove existing processors or exporters.
  - If LangWatch creates a new `TracerProvider`, it configures it with the LangWatch OTLP Span Exporter.
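For example, to reuse a provider you have already configured (the endpoint and service name below are placeholders):

```python
import langwatch
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Your existing OTel setup, already exporting to another backend.
provider = TracerProvider(resource=Resource.create({"service.name": "my-llm-app"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://collector:4318/v1/traces"))
)
trace.set_tracer_provider(provider)

# LangWatch adds its own exporter to this provider; your existing processors stay in place.
langwatch.setup(tracer_provider=provider)
```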
Default Behavior: All Spans Go to LangWatch
A crucial point is that once `langwatch.setup()` runs and attaches its exporter to a `TracerProvider`, all spans managed by that provider will be exported to the LangWatch backend by default. This includes:
- Spans created using `@langwatch.trace` and `@langwatch.span`.
- Spans created manually using `langwatch.trace()` or `langwatch.span()` as context managers or via `span.end()`.
- Spans generated by standard OpenTelemetry auto-instrumentation libraries (e.g., `opentelemetry-instrumentation-requests`, `opentelemetry-instrumentation-fastapi`) if they are configured to use the same `TracerProvider`.
- Spans you create directly using the OpenTelemetry API (`tracer.start_as_current_span(...)`), as shown in the sketch below.
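For instance, even a span created purely through the OpenTelemetry API is exported (the tracer name, span name, and attribute below are arbitrary):

```python
from opentelemetry import trace

# Any tracer obtained from the provider LangWatch attached to will also
# have its spans exported to LangWatch.
tracer = trace.get_tracer("my-app")

with tracer.start_as_current_span("load-user-profile") as span:
    span.set_attribute("user.id", "user-123")
```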
While seeing all application traces can be useful, you might not want every single span sent to LangWatch, especially high-volume or low-value ones (like health checks or database pings).
Selectively Exporting Spans with span_exclude_rules
To control which spans are sent to LangWatch, use the `span_exclude_rules` argument during `langwatch.setup()`. This allows you to define rules to filter spans before they are exported to LangWatch, without affecting other exporters attached to the same `TracerProvider`.
Rules are defined using `SpanProcessingExcludeRule` objects.
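As a sketch of what such rules could look like (the import path and constructor field names here are assumptions; check the `SpanProcessingExcludeRule` definition referenced below for the exact signature):

```python
import langwatch
from langwatch import SpanProcessingExcludeRule  # import path may differ in your SDK version

langwatch.setup(
    span_exclude_rules=[
        # Drop health-check spans by exact name (span_name + exact_match,
        # per the fields and operations listed below).
        SpanProcessingExcludeRule(
            field_name="span_name",
            match_value="GET /health",
            match_operation="exact_match",
        ),
        # Drop everything emitted by a specific instrumentation library.
        SpanProcessingExcludeRule(
            field_name="library_name",
            match_value="opentelemetry.instrumentation.redis",
            match_operation="contains",
        ),
    ],
)
```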
Refer to the `SpanProcessingExcludeRule` definition for all available fields (`span_name`, `attribute`, `library_name`) and operations (`exact_match`, `contains`, `starts_with`, `ends_with`, `regex`).
Debugging with Console Exporter
When developing or troubleshooting your OpenTelemetry integration, it’s often helpful to see the spans being generated locally without sending them to a backend. The OpenTelemetry SDK provides a `ConsoleSpanExporter` for this purpose.
You can add it to your `TracerProvider` like this:
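```python
import langwatch
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# A minimal sketch: register the console exporter on the provider you hand to LangWatch.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))

# LangWatch still adds its own exporter to the same provider.
langwatch.setup(tracer_provider=provider)
```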
This will print all created spans to your console.
Accessing the OpenTelemetry Span API
Since LangWatch spans wrap standard OTel spans, the `LangWatchSpan` object (returned by `langwatch.span()` or accessed via `langwatch.get_current_span()`) directly exposes the standard OpenTelemetry `trace.Span` API methods. This allows you to interact with the span using familiar OTel functions when needed for advanced use cases or compatibility.
You don’t need to access a separate underlying object; just call the standard OTel methods directly on the `LangWatchSpan` instance:
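```python
import langwatch
from opentelemetry.trace import Status, StatusCode

# Sketch only: the span name, attributes, and the update() keyword are illustrative.
with langwatch.span(name="vector-search") as span:
    # Standard OpenTelemetry Span API, called directly on the LangWatchSpan:
    span.set_attribute("retrieval.top_k", 5)
    span.add_event("cache_miss", {"cache.key": "embeddings:query"})
    span.set_status(Status(StatusCode.OK))

    # LangWatch's own structured-data methods work on the same object:
    span.update(output="3 documents retrieved")
```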
This allows full flexibility, letting you use both LangWatch’s structured data methods (`update`, etc.) and the standard OpenTelemetry span manipulation methods on the same object.
Understanding ignore_global_tracer_provider_override_warning
If `langwatch.setup()` detects an existing global `TracerProvider` (one set via `opentelemetry.trace.set_tracer_provider()`) and you haven’t explicitly passed a `tracer_provider` argument, LangWatch will log a warning by default. The warning states that it found a global provider and will attach its exporter to it rather than replacing it.
This warning exists because replacing a globally configured provider can sometimes break assumptions made by other parts of your application or libraries. However, in many cases, attaching the LangWatch exporter to the existing global provider is exactly the desired behavior.
If you are intentionally running LangWatch alongside an existing global OpenTelemetry setup and want LangWatch to simply add its exporter to that setup, you can silence this warning by setting:
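```python
import langwatch

# Attach to the already-configured global TracerProvider without logging the warning.
langwatch.setup(
    ignore_global_tracer_provider_override_warning=True,
)
```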