Prerequisites
- Install the LangWatch SDK (see the commands below).
- Install Instructor AI and the OpenInference Instructor instrumentor.
- Set up your OpenAI API key: you’ll need to configure it in your environment.
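If you haven’t installed these yet, the steps typically look like the following; the OpenInference package name (`openinference-instrumentation-instructor`) is assumed from the OpenInference naming convention, so check PyPI if the install fails.

```bash
# Install the LangWatch SDK, Instructor AI, and the OpenInference Instructor
# instrumentor (package name assumed from the OpenInference naming convention)
pip install langwatch instructor openinference-instrumentation-instructor

# Make your API keys available in the environment
export OPENAI_API_KEY="sk-..."
export LANGWATCH_API_KEY="your-langwatch-api-key"
```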
Instrumentation with OpenInference
LangWatch supports seamless observability for Instructor AI using the OpenInference Instructor AI instrumentor. This dedicated instrumentor automatically captures traces from your Instructor AI calls and sends them to LangWatch.
Basic Setup (Automatic Tracing)
Here’s the simplest way to instrument your application:
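A minimal sketch of this setup, assuming the instrumentor is importable from `openinference.instrumentation.instructor` (the usual OpenInference layout) and using `gpt-4o-mini` purely as an example model:

```python
import instructor
import langwatch
from openai import OpenAI
from openinference.instrumentation.instructor import InstructorInstrumentor
from pydantic import BaseModel

# Initialize LangWatch and register the OpenInference Instructor instrumentor.
# From here on, Instructor AI calls are captured automatically.
langwatch.setup(instrumentors=[InstructorInstrumentor()])

# Patch the OpenAI client with Instructor to get structured outputs
client = instructor.from_openai(OpenAI())

class UserInfo(BaseModel):
    name: str
    age: int

# This call is traced by the instrumentor and the trace is sent to LangWatch
user_info = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=UserInfo,
    messages=[{"role": "user", "content": "John Doe is 32 years old."}],
)
print(user_info)
```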
Optional: Using Decorators for Additional Context
If you want to add additional context or metadata to your traces, you can optionally use the `@langwatch.trace()` decorator:
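A sketch reusing the `client` and `UserInfo` from the example above; the `name` argument is an assumption about the decorator’s options (a plain `@langwatch.trace()` works as well):

```python
import langwatch

@langwatch.trace(name="extract_user_info")  # `name` kwarg assumed; plain @langwatch.trace() also works
def extract_user_info(text: str) -> UserInfo:
    # The Instructor call inside is still captured automatically by the
    # OpenInference instrumentor; the decorator only wraps it in a named trace
    # so you can attach extra context in LangWatch.
    return client.chat.completions.create(
        model="gpt-4o-mini",
        response_model=UserInfo,
        messages=[{"role": "user", "content": text}],
    )

extract_user_info("Jane Smith is 28 years old.")
```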
How it Works
- `langwatch.setup()`: Initializes the LangWatch SDK, which includes setting up an OpenTelemetry trace exporter. This exporter is ready to receive spans from any OpenTelemetry-instrumented library in your application.
- `InstructorInstrumentor()`: The OpenInference instrumentor automatically patches Instructor AI operations to create OpenTelemetry spans, including:
  - Structured output generation
  - Model calls with response models
  - Validation and parsing
  - Error handling (see the sketch after this list)
- Instructor AI Integration: The dedicated Instructor AI instrumentor captures all Instructor AI operations (structured output generation, validation, etc.) as spans.
- Optional Decorators: You can optionally use `@langwatch.trace()` to add additional context and metadata to your traces, but it’s not required for basic functionality.
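For example, here is a sketch (reusing the `client` from the basic setup) where a Pydantic validator can reject the model’s output and Instructor retries the call; validation errors and retries are the kind of activity the instrumentor records as spans:

```python
from pydantic import BaseModel, field_validator

class Person(BaseModel):
    name: str
    age: int

    @field_validator("age")
    @classmethod
    def age_must_be_positive(cls, v: int) -> int:
        if v <= 0:
            raise ValueError("age must be a positive number")
        return v

# If validation fails, Instructor re-asks the model up to `max_retries` times;
# the resulting validation and retry activity shows up in the captured trace.
person = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=Person,
    max_retries=2,
    messages=[{"role": "user", "content": "Extract: Maria is 41 years old."}],
)
```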
Notes
- You do not need to set any OpenTelemetry environment variables or configure exporters manually; `langwatch.setup()` handles everything.
- You can combine Instructor AI instrumentation with other instrumentors (e.g., LangChain, DSPy) by adding them to the `instrumentors` list, as sketched below.
- The `@langwatch.trace()` decorator is optional; the OpenInference instrumentor will capture all Instructor AI activity automatically.
- For advanced configuration (custom attributes, endpoint, etc.), see the Python integration guide.
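A sketch of a combined setup, assuming the corresponding OpenInference packages are installed (the `openinference-instrumentation-langchain` and `openinference-instrumentation-dspy` package and class names are assumptions based on the OpenInference naming convention):

```python
import langwatch
from openinference.instrumentation.instructor import InstructorInstrumentor
from openinference.instrumentation.langchain import LangChainInstrumentor  # assumed import path
from openinference.instrumentation.dspy import DSPyInstrumentor  # assumed import path

# A single setup call can register several instrumentors at once
langwatch.setup(
    instrumentors=[
        InstructorInstrumentor(),
        LangChainInstrumentor(),
        DSPyInstrumentor(),
    ]
)
```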
Troubleshooting
- Make sure your `LANGWATCH_API_KEY` is set in the environment (see the quick check below).
- If you see no traces in LangWatch, check that the instrumentor is included in `langwatch.setup()` and that your Instructor AI code is being executed.
- Ensure you have the correct OpenAI API key set.
- Verify that your Pydantic models are properly defined and compatible with Instructor AI.
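A minimal sanity check for the environment, run in the same process that calls `langwatch.setup()`:

```python
import os

# Fail fast if either API key is missing from the environment
for key in ("LANGWATCH_API_KEY", "OPENAI_API_KEY"):
    if not os.getenv(key):
        raise RuntimeError(f"{key} is not set in the environment")
```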