LangWatch offers robust integration with OpenAI, allowing you to capture detailed information about your LLM calls automatically. There are two primary approaches to instrumenting your OpenAI interactions:

  1. Using autotrack_openai_calls(): This method, part of the LangWatch SDK, dynamically patches your OpenAI client instance to capture calls made through it within a specific trace.
  2. Using Community OpenTelemetry Instrumentors: Leverage existing OpenTelemetry instrumentation libraries like those from OpenInference or OpenLLMetry. These can be integrated with LangWatch by either passing them to the langwatch.setup() function or by using their native instrument() methods if you’re managing your OpenTelemetry setup more directly.

This guide will walk you through both methods.

Using autotrack_openai_calls()

The autotrack_openai_calls() function provides a straightforward way to capture all OpenAI calls made with a specific client instance for the duration of the current trace.

You typically call this method on the trace object obtained via langwatch.get_current_trace() inside a function decorated with @langwatch.trace().

import langwatch
from openai import OpenAI

# Ensure LANGWATCH_API_KEY is set in your environment, or pass api_key to langwatch.setup()
langwatch.setup()

# Initialize your OpenAI client
client = OpenAI()

@langwatch.trace(name="OpenAI Chat Completion")
async def get_openai_chat_response(user_prompt: str):
    # Get the current trace and enable autotracking for the 'client' instance
    langwatch.get_current_trace().autotrack_openai_calls(client)

    # All calls made with 'client' will now be automatically captured as spans
    response = client.chat.completions.create(
        model="gpt-4.1-nano",
        messages=[{"role": "user", "content": user_prompt}],
    )
    completion = response.choices[0].message.content
    return completion

async def main():
    user_query = "Tell me a joke about Python programming."
    response = await get_openai_chat_response(user_query)
    print(f"User: {user_query}")
    print(f"AI: {response}")

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

Key points for autotrack_openai_calls():

  • It must be called on an active trace object (e.g., obtained via langwatch.get_current_trace()).
  • It instruments a specific instance of the OpenAI client. If you have multiple clients, you’ll need to call it for each one you want to track, as shown in the sketch below.
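
For example, to track more than one client, register each instance on the trace. Here is a minimal sketch of that pattern (the fallback client’s base_url is a hypothetical placeholder):

import langwatch
from openai import OpenAI

langwatch.setup()

# Two separate client instances; each must be registered individually
primary_client = OpenAI()
fallback_client = OpenAI(base_url="https://llm-proxy.example.com/v1")  # hypothetical proxy endpoint

@langwatch.trace(name="Multi-Client Completion")
def ask_with_fallback(prompt: str):
    trace = langwatch.get_current_trace()
    trace.autotrack_openai_calls(primary_client)
    trace.autotrack_openai_calls(fallback_client)

    try:
        response = primary_client.chat.completions.create(
            model="gpt-4.1-nano",
            messages=[{"role": "user", "content": prompt}],
        )
    except Exception:
        # Calls made with the fallback client are captured as well
        response = fallback_client.chat.completions.create(
            model="gpt-4.1-nano",
            messages=[{"role": "user", "content": prompt}],
        )
    return response.choices[0].message.content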

Using Community OpenTelemetry Instrumentors

If you prefer to use broader OpenTelemetry-based instrumentation, or are already using libraries like OpenInference or OpenLLMetry, LangWatch can seamlessly integrate with them. These libraries provide instrumentors that automatically capture data from various LLM providers, including OpenAI.

There are two main ways to integrate these:

1. Via langwatch.setup()

You can pass an instance of the instrumentor (e.g., OpenAIInstrumentor from OpenInference or OpenLLMetry) to the instrumentors list in the langwatch.setup() call. LangWatch will then manage the lifecycle of this instrumentor.

import langwatch
from openai import OpenAI

# Example using OpenInference's OpenAIInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

# Initialize LangWatch with the OpenAIInstrumentor
langwatch.setup(
    instrumentors=[OpenAIInstrumentor()]
)

client = OpenAI()

@langwatch.trace(name="OpenAI Call with Community Instrumentor")
def generate_text_with_community_instrumentor(prompt: str):
    # No need to call autotrack explicitly; the community instrumentor captures OpenAI calls globally.
    response = client.chat.completions.create(
        model="gpt-4.1-nano",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    user_query = "Tell me a joke about Python programming."
    response = generate_text_with_community_instrumentor(user_query)
    print(f"User: {user_query}")
    print(f"AI: {response}")

Ensure you have the respective community instrumentation library installed (e.g., pip install openinference-instrumentation-openai for OpenInference, or pip install opentelemetry-instrumentation-openai for OpenLLMetry).

2. Direct Instrumentation

If you have an existing OpenTelemetry TracerProvider configured in your application (or if LangWatch is configured to use the global provider), you can use the community instrumentor’s instrument() method directly. LangWatch will automatically pick up the spans generated by these instrumentors as long as its exporter is part of the active TracerProvider.

import langwatch
from openai import OpenAI

from openinference.instrumentation.openai import OpenAIInstrumentor

langwatch.setup()
client = OpenAI()

# Instrument OpenAI directly using the community library
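# (by default, the instrumentor records spans via the globally registered TracerProvider)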
OpenAIInstrumentor().instrument()

@langwatch.trace(name="OpenAI Call with Direct Community Instrumentation")
def get_story_ending(beginning: str):
    response = client.chat.completions.create(
        model="gpt-4.1-nano",
        messages=[
            {"role": "system", "content": "You are a creative writer. Complete the story."},
            {"role": "user", "content": beginning}
        ]
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    story_start = "In a land of dragons and wizards, a young apprentice found a mysterious map..."
    ending = get_story_ending(story_start)
    print(f"Story Start: {story_start}")
    print(f"AI's Ending: {ending}")

Key points for community instrumentors:

  • These instrumentors often patch OpenAI at a global level, meaning all OpenAI calls from any client instance will be captured once instrumented.
  • If using langwatch.setup(instrumentors=[...]), LangWatch handles the setup.
  • If instrumenting directly (e.g., OpenAIInstrumentor().instrument()), ensure that the TracerProvider used by the instrumentor is the same one LangWatch is exporting from. This usually means LangWatch is configured to use an existing global provider, or one you explicitly pass to langwatch.setup(), as in the sketch below.
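
As a sketch of that last point, sharing a provider explicitly could look like the following. The tracer_provider parameter names are assumptions based on common OpenTelemetry conventions; check the documentation for your installed versions:

import langwatch
from opentelemetry.sdk.trace import TracerProvider
from openinference.instrumentation.openai import OpenAIInstrumentor

# Create (or reuse) your application's TracerProvider
provider = TracerProvider()

# Hand the same provider to LangWatch and to the instrumentor so that
# spans from both end up in the same export pipeline (the
# tracer_provider parameters here are assumptions; consult each
# library's docs)
langwatch.setup(tracer_provider=provider)
OpenAIInstrumentor().instrument(tracer_provider=provider)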

Which Approach to Choose?

  • autotrack_openai_calls() is ideal for targeted instrumentation within specific traces or when you want fine-grained control over which OpenAI client instances are tracked. It’s simpler if you’re not deeply invested in a separate OpenTelemetry setup.
  • Community Instrumentors are powerful if you’re already using OpenTelemetry, want to capture OpenAI calls globally across your application, or need to instrument other libraries alongside OpenAI with a consistent OpenTelemetry approach. They provide a more holistic observability solution if you have multiple OpenTelemetry-instrumented components.

Choose the method that best fits your existing setup and instrumentation needs. Both approaches effectively send OpenAI call data to LangWatch for monitoring and analysis.