# OpenAI Instrumentation

Learn how to instrument OpenAI API calls with the LangWatch Python SDK.
LangWatch offers robust integration with OpenAI, allowing you to capture detailed information about your LLM calls automatically. There are two primary approaches to instrumenting your OpenAI interactions:
- Using `autotrack_openai_calls()`: This method, part of the LangWatch SDK, dynamically patches your OpenAI client instance to capture calls made through it within a specific trace.
- Using community OpenTelemetry instrumentors: Leverage existing OpenTelemetry instrumentation libraries, such as those from OpenInference or OpenLLMetry. These can be integrated with LangWatch either by passing them to the `langwatch.setup()` function or by using their native `instrument()` methods if you're managing your OpenTelemetry setup more directly.
This guide will walk you through both methods.
## Using `autotrack_openai_calls()`

The `autotrack_openai_calls()` function provides a straightforward way to capture all OpenAI calls made with a specific client instance for the duration of the current trace.

You typically call this method on the trace object obtained via `langwatch.get_current_trace()`, inside a function decorated with `@langwatch.trace()`.
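Here is a minimal sketch of this pattern; the model name and prompt are placeholders:

```python
import langwatch
from openai import OpenAI

# Initialize LangWatch (reads LANGWATCH_API_KEY from the environment by default)
langwatch.setup()

client = OpenAI()

@langwatch.trace()
def generate_reply(user_prompt: str) -> str:
    # Patch this specific client instance so calls made through it
    # are captured as spans on the current trace.
    langwatch.get_current_trace().autotrack_openai_calls(client)

    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": user_prompt}],
    )
    return completion.choices[0].message.content

print(generate_reply("Tell me a joke about observability."))
```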
Key points for `autotrack_openai_calls()`:

- It must be called on an active trace object (e.g., one obtained via `langwatch.get_current_trace()`).
- It instruments a specific instance of the OpenAI client. If you have multiple clients, call it on each one you want to track.
## Using Community OpenTelemetry Instrumentors

If you prefer broader OpenTelemetry-based instrumentation, or are already using libraries like OpenInference or OpenLLMetry, LangWatch integrates with them seamlessly. These libraries provide instrumentors that automatically capture data from various LLM providers, including OpenAI.
There are two main ways to integrate these:
### 1. Via `langwatch.setup()`

You can pass an instance of the instrumentor (e.g., `OpenAIInstrumentor` from OpenInference or OpenLLMetry) to the `instrumentors` list in the `langwatch.setup()` call. LangWatch will then manage the lifecycle of this instrumentor.

Ensure you have the respective community instrumentation library installed (e.g., `pip install opentelemetry-instrumentation-openai` for OpenLLMetry, or `pip install openinference-instrumentation-openai` for OpenInference).
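As a sketch, assuming the OpenInference instrumentor is installed (the import path would differ for OpenLLMetry):

```python
import langwatch
from openai import OpenAI
from openinference.instrumentation.openai import OpenAIInstrumentor

# LangWatch manages the instrumentor's lifecycle; OpenAI calls from
# any client instance are captured globally from here on.
langwatch.setup(instrumentors=[OpenAIInstrumentor()])

client = OpenAI()

@langwatch.trace()
def generate_reply(user_prompt: str) -> str:
    # No autotrack call needed: the instrumentor captures this request.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": user_prompt}],
    )
    return completion.choices[0].message.content
```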
### 2. Direct Instrumentation

If you have an existing OpenTelemetry `TracerProvider` configured in your application (or if LangWatch is configured to use the global provider), you can call the community instrumentor's `instrument()` method directly. LangWatch will automatically pick up the spans these instrumentors generate, as long as its exporter is part of the active `TracerProvider`.
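A sketch of this approach, again assuming the OpenInference instrumentor; here you call `instrument()` yourself instead of handing the instrumentor to `langwatch.setup()`:

```python
import langwatch
from openai import OpenAI
from openinference.instrumentation.openai import OpenAIInstrumentor

# Set up LangWatch first so its exporter is registered on the active
# TracerProvider, then instrument OpenAI globally.
langwatch.setup()
OpenAIInstrumentor().instrument()

client = OpenAI()

# Every OpenAI request is now captured as a span and exported to LangWatch.
completion = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```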
Key points for community instrumentors:

- These instrumentors often patch OpenAI at a global level, meaning all OpenAI calls from any client instance will be captured once instrumented.
- If using `langwatch.setup(instrumentors=[...])`, LangWatch handles the setup.
- If instrumenting directly (e.g., `OpenAIInstrumentor().instrument()`), ensure that the `TracerProvider` used by the instrumentor is the same one LangWatch is exporting from. This usually means LangWatch is configured to use an existing global provider, or one you explicitly pass to `langwatch.setup()`.
## Which Approach to Choose?

- `autotrack_openai_calls()` is ideal for targeted instrumentation within specific traces, or when you want fine-grained control over which OpenAI client instances are tracked. It's the simpler option if you're not deeply invested in a separate OpenTelemetry setup.
- Community instrumentors are powerful if you're already using OpenTelemetry, want to capture OpenAI calls globally across your application, or need to instrument other libraries alongside OpenAI with a consistent OpenTelemetry approach. They provide a more holistic observability solution when you have multiple OpenTelemetry-instrumented components.

Choose the method that best fits your existing setup and instrumentation needs. Both approaches effectively send OpenAI call data to LangWatch for monitoring and analysis.