Learn how to instrument OpenAI Agents with the LangWatch Python SDK
The OpenAI Agents SDK ships with its own tracing system, so the `autotrack_openai_calls()` method used for the standard OpenAI client is not applicable here. Instead, you can integrate LangWatch in one of two ways:
1. OpenInference instrumentation: use the `openinference-instrumentation-openai-agents` library, which provides OpenTelemetry-based instrumentation for OpenAI Agents. This is generally the simplest and most straightforward method.
2. Custom `TracingProcessor`: hook into the `openai-agents` SDK to forward trace data to LangWatch by implementing your own custom `TracingProcessor`.

Note: `openinference-instrumentation-openai-agents` is currently in an Alpha stage, so while it is ready for experimentation, it may undergo breaking changes.
Approach 1: Using OpenInference instrumentation (recommended)

This approach uses OpenTelemetry-based instrumentation and is generally recommended for ease of setup.

Pass an instance of `OpenAIAgentsInstrumentor` from `openinference-instrumentation-openai-agents` to the `instrumentors` list in the `langwatch.setup()` call. LangWatch will then manage the lifecycle of this instrumentor.
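For example, a minimal setup might look like the following sketch. The agent name, instructions, and prompt are illustrative placeholders; check parameter names against the current `langwatch` and `openinference` documentation:

```python
import asyncio

import langwatch
from agents import Agent, Runner  # the openai-agents SDK imports as `agents`
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

# Let LangWatch manage the instrumentor's lifecycle.
langwatch.setup(instrumentors=[OpenAIAgentsInstrumentor()])

agent = Agent(name="ExampleAgent", instructions="You are a helpful assistant.")

@langwatch.trace(name="OpenAI Agent Run")  # parent span for the agent's operations
async def run_agent() -> str:
    result = await Runner.run(agent, "What is the capital of France?")
    return result.final_output

if __name__ == "__main__":
    print(asyncio.run(run_agent()))
```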
Note: `OpenAIAgentsInstrumentor` is part of the `openinference-instrumentation-openai-agents` package. Always refer to its official documentation for the latest updates, especially while it is in Alpha.

If you prefer to manage the OpenTelemetry `TracerProvider` more directly (e.g., if LangWatch is configured to use an existing global provider), you can use the instrumentor's `instrument()` method instead. LangWatch will pick up the spans as long as its exporter is part of the active `TracerProvider`.
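A minimal sketch of that variant, assuming the instrumentor's no-argument `instrument()` call attaches to the globally configured provider (the usual behavior of OpenInference instrumentors):

```python
import langwatch
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

# Configure LangWatch first so its exporter is attached to the
# active TracerProvider.
langwatch.setup()

# Instrument the openai-agents SDK globally; spans are exported through
# whichever TracerProvider is currently active.
OpenAIAgentsInstrumentor().instrument()
```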
Key points for this approach:

- The instrumentor captures `openai-agents` activity globally once instrumented.
- Ensure `langwatch.setup()` is called so that LangWatch's OpenTelemetry exporter is active and configured.
- Adding the `@langwatch.trace()` decorator to your calling function creates a parent span under which the agent's detailed operations are nested.

Approach 2: Using the `openai-agents` SDK's own built-in tracing system
This involves creating a custom `TracingProcessor` that intercepts trace data from the `openai-agents` SDK and then uses the standard OpenTelemetry Python API to create OpenTelemetry spans. LangWatch will ingest these spans, provided `langwatch.setup()` has been called.
Conceptual Outline for Your Custom Processor:
1. Ensure `langwatch.setup()` is called in your application. This sets up LangWatch to receive OpenTelemetry data.
2. Implement your custom `TracingProcessor`:
   - Following the `openai-agents` SDK documentation, create a class that implements their `TracingProcessor` interface (see their docs on Custom Tracing Processors and the API reference for `TracingProcessor`).
   - In its methods (e.g., `on_span_start`, `on_span_end`), you will receive `Trace` and `Span` objects from the `openai-agents` SDK.
   - Use the `opentelemetry-api` and `opentelemetry-sdk` (e.g., `opentelemetry.trace.get_tracer(__name__).start_span()`) to translate this information into OpenTelemetry spans, including their names, attributes, timings, and status. Consult the `openai-agents` documentation on traces and spans for details on their data structures.
3. Register your processor using `agents.tracing.add_trace_processor(your_custom_processor)` or `agents.tracing.set_trace_processors([your_custom_processor])`, as per the `openai-agents` SDK documentation (note that the `openai-agents` package imports as `agents`).

LangWatch does not provide a pre-built custom `TracingProcessor` for this purpose. The implementation of such a processor is your responsibility and should be based on the official `openai-agents` SDK documentation. This ensures your processor correctly interprets the agent's trace data and remains compatible with `openai-agents` SDK updates. A minimal sketch is shown below.
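The following is a deliberately simplified sketch, not a production implementation. It assumes the module paths (`agents.tracing.processor_interface.TracingProcessor`, `agents.tracing.add_trace_processor`) and the `SpanData.export()` helper match the current `openai-agents` SDK, and it ignores parent/child relationships and original timestamps:

```python
from typing import Any

import langwatch
from agents.tracing import add_trace_processor
from agents.tracing.processor_interface import TracingProcessor
from opentelemetry import trace as otel_trace

class OtelForwardingProcessor(TracingProcessor):
    """Re-emits finished openai-agents spans as OpenTelemetry spans."""

    def __init__(self) -> None:
        self._tracer = otel_trace.get_tracer("openai-agents-bridge")

    def on_trace_start(self, trace: Any) -> None:
        pass

    def on_trace_end(self, trace: Any) -> None:
        pass

    def on_span_start(self, span: Any) -> None:
        pass

    def on_span_end(self, span: Any) -> None:
        # span.span_data is one of the SDK's SpanData types (agent,
        # generation, function, ...); export() flattens it to a dict.
        data = span.span_data.export() or {}
        with self._tracer.start_as_current_span(data.get("type", "agent_span")) as otel_span:
            for key, value in data.items():
                if isinstance(value, (str, int, float, bool)):
                    otel_span.set_attribute(f"openai_agents.{key}", value)

    def shutdown(self) -> None:
        pass

    def force_flush(self) -> None:
        pass

langwatch.setup()  # LangWatch's exporter receives the spans created above
add_trace_processor(OtelForwardingProcessor())
```

A real processor would also propagate the agent trace's hierarchy, timestamps, and error status onto the OpenTelemetry spans instead of flattening each span in isolation.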
For implementation details, consult the official `openai-agents` documentation, in particular its guide to custom tracing processors and its API reference for traces and spans.
Implementing a custom `TracingProcessor` is an advanced task that requires:

- A deep understanding of both `openai-agents` tracing internals and OpenTelemetry concepts and semantic conventions.
- Careful mapping of the `openai-agents` `SpanData` types to OpenTelemetry attributes.
- Ongoing maintenance to keep the processor compatible with the `openai-agents` SDK.
This approach offers maximum flexibility but comes with significant development and maintenance overhead.

Choosing an approach:

- OpenInference instrumentation (recommended):
  - Pros: uses a dedicated library (`openinference-instrumentation-openai-agents`) designed for OpenTelemetry integration; aligns well with standard OpenTelemetry practices.
  - Cons: the `openinference-instrumentation-openai-agents` library is in Alpha and may have breaking changes; you have less direct control over the exact span data compared to a fully custom processor.
- Custom `TracingProcessor` (alternative for advanced needs):
  - Pros: offers complete control over the translation of trace data from `openai-agents` to OpenTelemetry; allows for highly customized span data and behaviors.
  - Cons: requires a deep understanding of both `openai-agents` tracing and OpenTelemetry; you are responsible for adapting your processor to any changes in the `openai-agents` SDK.

The custom `TracingProcessor` approach should generally be reserved for situations where the OpenInference instrumentor is unsuitable, or when you have highly specialized tracing requirements that demand direct manipulation of the agent's trace data before converting it to OpenTelemetry spans.
Always refer to the latest official documentation for `langwatch`, `openai-agents`, and `openinference-instrumentation-openai-agents` for the most up-to-date instructions and API details.