Learn how to instrument OpenAI API calls with the LangWatch Python SDK
This guide covers two main approaches:

- `autotrack_openai_calls()`: this method, part of the LangWatch SDK, dynamically patches your OpenAI client instance to capture calls made through it within a specific trace.
- Community OpenTelemetry instrumentors, such as OpenInference or OpenLLMetry: these can be passed to the `langwatch.setup()` function, or used via their native `instrument()` methods if you're managing your OpenTelemetry setup more directly.

## Using `autotrack_openai_calls()`

The `autotrack_openai_calls()` function provides a straightforward way to capture all OpenAI calls made with a specific client instance for the duration of the current trace.
You typically call this method on the trace object obtained via `langwatch.get_current_trace()`, inside a function decorated with `@langwatch.trace()`.
Key points for `autotrack_openai_calls()`:

- It must be called on the current trace object (e.g., obtained via `langwatch.get_current_trace()`).
- It patches a specific client instance, so only calls made through that instance are tracked.
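Here is a minimal sketch of the pattern (the function name and model are illustrative, and `langwatch.setup()` is assumed to read `LANGWATCH_API_KEY` from the environment):

```python
import langwatch
from openai import OpenAI

langwatch.setup()  # assumed to pick up LANGWATCH_API_KEY from the environment

client = OpenAI()

@langwatch.trace()
def answer_question(question: str) -> str:
    # Patch this specific client instance so every OpenAI call it makes
    # within the current trace is captured as a span.
    langwatch.get_current_trace().autotrack_openai_calls(client)

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```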
## Using Community OpenTelemetry Instrumentors

If you prefer community instrumentation libraries such as OpenInference or OpenLLMetry, LangWatch can seamlessly integrate with them. These libraries provide instrumentors that automatically capture data from various LLM providers, including OpenAI.
There are two main ways to integrate these:
**1. Passing instrumentors to `langwatch.setup()`:** You can pass an instance of the instrumentor (e.g., `OpenAIInstrumentor` from OpenInference or OpenLLMetry) to the `instrumentors` list in the `langwatch.setup()` call. LangWatch will then manage the lifecycle of this instrumentor. Make sure the relevant instrumentation package is installed first (e.g., `pip install opentelemetry-instrumentation-openai` for OpenLLMetry, or `pip install openinference-instrumentation-openai` for OpenInference).
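For example, with the OpenInference instrumentor (a sketch; the import path comes from the `openinference-instrumentation-openai` package, and the model name is illustrative):

```python
import langwatch
from openai import OpenAI
from openinference.instrumentation.openai import OpenAIInstrumentor

# LangWatch manages the instrumentor's lifecycle alongside its own setup.
langwatch.setup(instrumentors=[OpenAIInstrumentor()])

client = OpenAI()

@langwatch.trace()
def answer_question(question: str) -> str:
    # No per-client patching is needed: the instrumentor captures every
    # OpenAI call made anywhere in the process.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```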
**2. Using the instrumentor's native `instrument()` method:** If you have an existing OpenTelemetry `TracerProvider` configured in your application (or if LangWatch is configured to use the global provider), you can call the community instrumentor's `instrument()` method directly. LangWatch will automatically pick up the spans generated by these instrumentors as long as its exporter is part of the active `TracerProvider`.
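A minimal sketch of the direct approach, assuming LangWatch's exporter ends up on the active global `TracerProvider`:

```python
import langwatch
from openinference.instrumentation.openai import OpenAIInstrumentor

# Configure LangWatch; its exporter is assumed to be registered on the
# active global TracerProvider.
langwatch.setup()

# Instrument OpenAI directly. Spans are emitted through the active
# TracerProvider, so LangWatch picks them up automatically.
OpenAIInstrumentor().instrument()
```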
Key points for community instrumentors:

- When using `langwatch.setup(instrumentors=[...])`, LangWatch handles the setup.
- When instrumenting directly (e.g., `OpenAIInstrumentor().instrument()`), ensure that the `TracerProvider` used by the instrumentor is the same one LangWatch is exporting from. This usually means LangWatch is configured to use an existing global provider, or one you explicitly pass to `langwatch.setup()`.
## Which Approach to Choose?

- `autotrack_openai_calls()` is ideal for targeted instrumentation within specific traces, or when you want fine-grained control over which OpenAI client instances are tracked. It's also simpler if you're not deeply invested in a separate OpenTelemetry setup.
- Community instrumentors are a better fit for blanket, application-wide instrumentation, or when you're already managing your own OpenTelemetry setup.