OpenAI Agents SDK Instrumentation
Learn how to instrument OpenAI Agents with the LangWatch Python SDK
LangWatch allows you to monitor your OpenAI Agents by integrating with their tracing capabilities. Since OpenAI Agents manage their own execution flow, including LLM calls and tool usage, the direct `autotrack_openai_calls()` method used for the standard OpenAI client is not applicable here.
Instead, you can integrate LangWatch in one of two ways:
- Using OpenInference Instrumentation (Recommended): Leverage the `openinference-instrumentation-openai-agents` library, which provides OpenTelemetry-based instrumentation for OpenAI Agents. This is generally the simplest method.
- Alternative: Using OpenAI Agents’ Built-in Tracing with a Custom Processor: If you choose not to use OpenInference or have highly specific requirements, you can adapt the built-in tracing mechanism of the `openai-agents` SDK to forward trace data to LangWatch by implementing your own custom `TracingProcessor`.
This guide will walk you through both methods.
1. Using OpenInference Instrumentation for OpenAI Agents (Recommended)
The most straightforward way to integrate LangWatch with OpenAI Agents is by using the OpenInference instrumentation library specifically designed for it: `openinference-instrumentation-openai-agents`. This library is currently in an Alpha stage, so while ready for experimentation, it may undergo breaking changes.
This approach uses OpenTelemetry-based instrumentation and is generally recommended for ease of setup.
Installation
First, ensure you have the necessary packages installed:
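For example, with pip (the three package names below are the published PyPI distributions; pin versions as appropriate for your project):

```bash
pip install langwatch openai-agents openinference-instrumentation-openai-agents
```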
Integration via `langwatch.setup()`
You can pass an instance of the `OpenAIAgentsInstrumentor` from `openinference-instrumentation-openai-agents` to the `instrumentors` list in the `langwatch.setup()` call. LangWatch will then manage the lifecycle of this instrumentor.
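A minimal sketch of this wiring (it assumes your LangWatch API key is available via the `LANGWATCH_API_KEY` environment variable; the import path follows the OpenInference package convention):

```python
import langwatch
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

# Hand the instrumentor to langwatch.setup(); LangWatch manages its
# lifecycle and routes the resulting OpenTelemetry spans to your project.
langwatch.setup(instrumentors=[OpenAIAgentsInstrumentor()])
```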
The `OpenAIAgentsInstrumentor` is part of the `openinference-instrumentation-openai-agents` package. Always refer to its official documentation for the latest updates, especially as it’s in Alpha.
Direct Instrumentation
Alternatively, if you manage your OpenTelemetry `TracerProvider` more directly (e.g., if LangWatch is configured to use an existing global provider), you can use the instrumentor’s `instrument()` method. LangWatch will pick up the spans if its exporter is part of the active `TracerProvider`.
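For instance, a sketch under the assumption that LangWatch’s exporter is already registered on the active `TracerProvider`:

```python
import langwatch
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

# Configure LangWatch first so its exporter is attached to the active
# TracerProvider, then patch openai-agents globally.
langwatch.setup()
OpenAIAgentsInstrumentor().instrument()
```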
Key points for OpenInference instrumentation:
- It patches `openai-agents` activities globally once instrumented.
- Ensure `langwatch.setup()` is called so LangWatch’s OpenTelemetry exporter is active and configured.
- The `@langwatch.trace()` decorator on your calling function helps create a parent span under which the agent’s detailed operations will be nested (see the example below).
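Putting this together, a sketch of an instrumented agent run (the agent definition and prompt are purely illustrative; `Agent`, `Runner`, and `final_output` come from the `openai-agents` SDK):

```python
import langwatch
from agents import Agent, Runner
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

langwatch.setup(instrumentors=[OpenAIAgentsInstrumentor()])

@langwatch.trace()  # parent span for the whole agent run
def run_agent(question: str) -> str:
    agent = Agent(name="Assistant", instructions="You are a helpful assistant.")
    # The instrumentor records the agent's internal LLM calls and tool
    # usage as child spans nested under the trace opened above.
    result = Runner.run_sync(agent, question)
    return result.final_output

print(run_agent("What is the capital of France?"))
```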
2. Alternative: Using OpenAI Agents’ Built-in Tracing with a Custom Processor
If you prefer not to use the OpenInference instrumentor, or if you have highly specific tracing requirements not met by it, you can leverage the `openai-agents` SDK’s own built-in tracing system.
This involves creating a custom `TracingProcessor` that intercepts trace data from the `openai-agents` SDK and then uses the standard OpenTelemetry Python API to create OpenTelemetry spans. LangWatch will then ingest these OpenTelemetry spans, provided `langwatch.setup()` has been called.
Conceptual Outline for Your Custom Processor:
- Initialize LangWatch: Ensure `langwatch.setup()` is called in your application. This sets up LangWatch to receive OpenTelemetry data.
- Implement Your Custom `TracingProcessor`:
  - Following the `openai-agents` SDK documentation, create a class that implements their `TracingProcessor` interface (see their docs on Custom Tracing Processors and the API reference for `TracingProcessor`).
  - In your processor’s methods (e.g., `on_span_start`, `on_span_end`), you will receive `Trace` and `Span` objects from the `openai-agents` SDK.
  - You will then use the `opentelemetry-api` and `opentelemetry-sdk` (e.g., `opentelemetry.trace.get_tracer(__name__).start_span()`) to translate this information into OpenTelemetry spans, including their names, attributes, timings, and status. Consult the `openai-agents` documentation on Traces and spans for details on their data structures.
- Register Your Custom Processor: Use `agents.tracing.add_trace_processor(your_custom_processor)` or `agents.tracing.set_trace_processors([your_custom_processor])` as per the `openai-agents` SDK documentation (note the package installs as `openai-agents` but imports as `agents`). A sketch of such a processor follows this list.
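A heavily abridged sketch of the shape such a processor might take. The `TracingProcessor` method names match the `openai-agents` interface at the time of writing; the span-naming scheme and the use of `span.span_id` / `span.error` are assumptions to verify against the SDK’s API reference, and span parenting, timestamps, and `SpanData` attribute mapping are deliberately omitted:

```python
from opentelemetry import trace as otel_trace
from agents.tracing import TracingProcessor, add_trace_processor

class OTelForwardingProcessor(TracingProcessor):
    """Forwards openai-agents spans to OpenTelemetry (and thus to LangWatch)."""

    def __init__(self):
        self._tracer = otel_trace.get_tracer(__name__)
        self._live_spans = {}  # agents span_id -> open OTel span

    def on_trace_start(self, trace):
        pass  # optionally open a root span per agent trace

    def on_trace_end(self, trace):
        pass

    def on_span_start(self, span):
        # Mirror the agents SDK span with an OTel span. Naming it after the
        # SpanData subclass is an assumption; adapt to your conventions.
        otel_span = self._tracer.start_span(type(span.span_data).__name__)
        self._live_spans[span.span_id] = otel_span

    def on_span_end(self, span):
        otel_span = self._live_spans.pop(span.span_id, None)
        if otel_span is None:
            return
        if span.error:  # surface agent errors as OTel error status
            otel_span.set_status(otel_trace.Status(otel_trace.StatusCode.ERROR))
        otel_span.end()

    def shutdown(self):
        pass

    def force_flush(self):
        pass

add_trace_processor(OTelForwardingProcessor())
```

A production processor must additionally propagate parent/child relationships, carry over start and end timestamps, and map `SpanData` fields to OpenTelemetry attributes, as outlined under Implementation Guidance below.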
Implementation Guidance:
LangWatch does not provide a pre-built custom `TracingProcessor` for this purpose. The implementation of such a processor is your responsibility and should be based on the official `openai-agents` SDK documentation. This ensures your processor correctly interprets the agent’s trace data and remains compatible with `openai-agents` SDK updates.
- Key `openai-agents` documentation: see their guides on Custom Tracing Processors and on Traces and spans, referenced above.
Implementing a custom `TracingProcessor` is an advanced task that requires:
- A thorough understanding of both the `openai-agents` tracing internals and OpenTelemetry concepts and semantic conventions.
- Careful mapping of `openai-agents` `SpanData` types to OpenTelemetry attributes.
- Robust handling of span parenting, context propagation, and error states.
- Diligent maintenance to keep your processor aligned with any changes in the `openai-agents` SDK.

This approach offers maximum flexibility but comes with significant development and maintenance overhead.
Which Approach to Choose?
- OpenInference Instrumentation (Recommended):
  - Pros: Significantly simpler to set up and maintain. Relies on a community-supported library (`openinference-instrumentation-openai-agents`) designed for OpenTelemetry integration. Aligns well with standard OpenTelemetry practices.
  - Cons: As the `openinference-instrumentation-openai-agents` library is in Alpha, it may have breaking changes. You have less direct control over the exact span data compared to a fully custom processor.
- Custom `TracingProcessor` (Alternative for advanced needs):
  - Pros: Offers complete control over the transformation of trace data from `openai-agents` to OpenTelemetry. Allows for highly customized span data and behaviors.
  - Cons: Far more complex to implement correctly and maintain. Requires deep expertise in both `openai-agents` tracing and OpenTelemetry. You are responsible for adapting your processor to any changes in the `openai-agents` SDK.
For most users, the OpenInference instrumentation is the recommended path due to its simplicity and lower maintenance burden.
The custom `TracingProcessor` approach should generally be reserved for situations where the OpenInference instrumentor is unsuitable, or when you have highly specialized tracing requirements that demand direct manipulation of the agent’s trace data before converting it to OpenTelemetry spans.
Always refer to the latest documentation for `langwatch`, `openai-agents`, and `openinference-instrumentation-openai-agents` for the most up-to-date instructions and API details.