The `autotrack_openai_calls()` method used for the standard OpenAI client is not applicable here.
Instead, you can integrate LangWatch in one of two ways:
- Using OpenInference Instrumentation (Recommended): Leverage the `openinference-instrumentation-openai-agents` library, which provides OpenTelemetry-based instrumentation for OpenAI Agents. This is generally the simplest and most straightforward method.
- Alternative: Using OpenAI Agents’ Built-in Tracing with a Custom Processor: If you choose not to use OpenInference or have highly specific requirements, you can adapt the built-in tracing mechanism of the `openai-agents` SDK to forward trace data to LangWatch by implementing your own custom `TracingProcessor`.
1. Using OpenInference Instrumentation for OpenAI Agents (Recommended)
The most straightforward way to integrate LangWatch with OpenAI Agents is by using the OpenInference instrumentation library specifically designed for it: `openinference-instrumentation-openai-agents`. This library is currently in an Alpha stage, so while ready for experimentation, it may undergo breaking changes.
This approach uses OpenTelemetry-based instrumentation and is generally recommended for ease of setup.
Installation
First, ensure you have the necessary packages installed.

Integration via `langwatch.setup()`
You can pass an instance of the OpenAIAgentsInstrumentor from openinference-instrumentation-openai-agents to the instrumentors list in the langwatch.setup() call. LangWatch will then manage the lifecycle of this instrumentor.
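For instance, a minimal setup might look like this sketch (the import path follows the `openinference-instrumentation-openai-agents` package layout; verify it against the package’s own docs):

```python
# Assumes: pip install langwatch openai-agents openinference-instrumentation-openai-agents
# and a LANGWATCH_API_KEY available in the environment.
import langwatch
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

# LangWatch manages the instrumentor's lifecycle when it is passed here;
# all subsequent openai-agents activity is traced automatically.
langwatch.setup(instrumentors=[OpenAIAgentsInstrumentor()])
```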
The `OpenAIAgentsInstrumentor` is part of the `openinference-instrumentation-openai-agents` package. Always refer to its official documentation for the latest updates, especially as it’s in Alpha.

Direct Instrumentation
Alternatively, if you manage your OpenTelemetryTracerProvider more directly (e.g., if LangWatch is configured to use an existing global provider), you can use the instrumentor’s instrument() method. LangWatch will pick up the spans if its exporter is part of the active TracerProvider.
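A sketch of that direct route, assuming the `tracer_provider` keyword that OpenInference instrumentors commonly accept:

```python
import langwatch
from opentelemetry import trace
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

# LangWatch's exporter becomes part of the active TracerProvider.
langwatch.setup()

# Instrument openai-agents against that same provider so spans reach LangWatch.
OpenAIAgentsInstrumentor().instrument(tracer_provider=trace.get_tracer_provider())
```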
- It patches `openai-agents` activities globally once instrumented.
- Ensure `langwatch.setup()` is called so LangWatch’s OpenTelemetry exporter is active and configured.
- The `@langwatch.trace()` decorator on your calling function helps create a parent span under which the agent’s detailed operations will be nested.
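Putting these notes together, a traced entry point might look like this sketch (the `Agent`/`Runner` usage follows the `openai-agents` quickstart; it assumes the OpenInference instrumentor has already been registered via `langwatch.setup()` and that an `OPENAI_API_KEY` is set):

```python
import langwatch
from agents import Agent, Runner

langwatch.setup()  # instrumentors=[OpenAIAgentsInstrumentor()] registered as shown earlier

@langwatch.trace()  # parent span; the agent's detailed spans nest under it
def answer(question: str) -> str:
    agent = Agent(name="Assistant", instructions="Answer concisely.")
    result = Runner.run_sync(agent, question)
    return result.final_output

print(answer("What is OpenTelemetry?"))
```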
2. Alternative: Using OpenAI Agents’ Built-in Tracing with a Custom Processor
If you prefer not to use the OpenInference instrumentor, or if you have highly specific tracing requirements not met by it, you can leverage the `openai-agents` SDK’s own built-in tracing system.
This involves creating a custom TracingProcessor that intercepts trace data from the openai-agents SDK and then uses the standard OpenTelemetry Python API to create OpenTelemetry spans. LangWatch will then ingest these OpenTelemetry spans, provided langwatch.setup() has been called.
Conceptual Outline for Your Custom Processor:
- Initialize LangWatch: Ensure `langwatch.setup()` is called in your application. This sets up LangWatch to receive OpenTelemetry data.
- Implement Your Custom `TracingProcessor`:
  - Following the `openai-agents` SDK documentation, create a class that implements their `TracingProcessor` interface (see their docs on Custom Tracing Processors and the API reference for `TracingProcessor`).
  - In your processor’s methods (e.g., `on_span_start`, `on_span_end`), you will receive `Trace` and `Span` objects from the `openai-agents` SDK.
  - You will then use the `opentelemetry-api` and `opentelemetry-sdk` (e.g., `opentelemetry.trace.get_tracer(__name__).start_span()`) to translate this information into OpenTelemetry spans, including their names, attributes, timings, and status. Consult the `openai-agents` documentation on Traces and Spans for details on their data structures.
- Register Your Custom Processor: Use `openai_agents.tracing.add_trace_processor(your_custom_processor)` or `openai_agents.tracing.set_trace_processors([your_custom_processor])` as per the `openai-agents` SDK documentation.
LangWatch does not provide a pre-built `TracingProcessor` for this purpose. The implementation of such a processor is your responsibility and should be based on the official `openai-agents` SDK documentation. This ensures your processor correctly interprets the agent’s trace data and remains compatible with `openai-agents` SDK updates.
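As an illustration only, a skeleton of such a processor might look like this (import paths, the interface method names, and `SpanData.export()` are assumptions based on the `openai-agents` SDK and may differ across versions):

```python
# Sketch: mirror openai-agents spans into OpenTelemetry spans so that
# LangWatch (configured via langwatch.setup()) can ingest them.
from opentelemetry import trace as otel_trace
# Some docs refer to this module as openai_agents.tracing; check your SDK version.
from agents.tracing import TracingProcessor, add_trace_processor


class OtelForwardingProcessor(TracingProcessor):
    def __init__(self):
        self._tracer = otel_trace.get_tracer(__name__)
        self._otel_spans = {}  # agent span id -> live OTel span

    def on_trace_start(self, trace):
        pass  # optionally open a root span per agent trace here

    def on_trace_end(self, trace):
        pass

    def on_span_start(self, span):
        # Name the OTel span after the agent SpanData type (e.g. GenerationSpanData).
        otel_span = self._tracer.start_span(type(span.span_data).__name__)
        self._otel_spans[span.span_id] = otel_span

    def on_span_end(self, span):
        otel_span = self._otel_spans.pop(span.span_id, None)
        if otel_span is None:
            return
        # Map whichever SpanData fields you care about onto OTel attributes.
        exported = span.span_data.export() or {}
        for key, value in exported.items():
            if isinstance(value, (str, bool, int, float)):
                otel_span.set_attribute(f"agent.{key}", value)
        otel_span.end()

    def shutdown(self):
        pass

    def force_flush(self):
        pass


# Register it so the SDK feeds trace data through your processor.
add_trace_processor(OtelForwardingProcessor())
```

A production version would also need to handle span parenting and error states, which this sketch deliberately omits.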
- Key `openai-agents` documentation: their pages on Custom Tracing Processors and on Traces and Spans, referenced above.
Implementing a custom `TracingProcessor` is an advanced task that requires:
- A thorough understanding of both the `openai-agents` tracing internals and OpenTelemetry concepts and semantic conventions.
- Careful mapping of `openai-agents` `SpanData` types to OpenTelemetry attributes.
- Robust handling of span parenting, context propagation, and error states.
- Diligent maintenance to keep your processor aligned with any changes in the `openai-agents` SDK.

This approach offers maximum flexibility but comes with significant development and maintenance overhead.
Which Approach to Choose?

- OpenInference Instrumentation (Recommended):
  - Pros: Significantly simpler to set up and maintain. Relies on a community-supported library (`openinference-instrumentation-openai-agents`) designed for OpenTelemetry integration. Aligns well with standard OpenTelemetry practices.
  - Cons: As the `openinference-instrumentation-openai-agents` library is in Alpha, it may have breaking changes. You have less direct control over the exact span data compared to a fully custom processor.
- Custom `TracingProcessor` (Alternative for advanced needs):
  - Pros: Offers complete control over the transformation of trace data from `openai-agents` to OpenTelemetry. Allows for highly customized span data and behaviors.
  - Cons: Far more complex to implement correctly and maintain. Requires deep expertise in both `openai-agents` tracing and OpenTelemetry. You are responsible for adapting your processor to any changes in the `openai-agents` SDK.
The custom `TracingProcessor` approach should generally be reserved for situations where the OpenInference instrumentor is unsuitable, or when you have highly specialized tracing requirements that demand direct manipulation of the agent’s trace data before converting it to OpenTelemetry spans.
Always refer to the latest documentation for `langwatch`, `openai-agents`, and `openinference-instrumentation-openai-agents` for the most up-to-date instructions and API details.