Python Integration Guide
LangWatch Python SDK integration guide
Integrate LangWatch into your Python application to start observing your LLM interactions. This guide covers the setup and basic usage of the LangWatch Python SDK.
Get your LangWatch API Key
First, you need a LangWatch API key. Sign up at app.langwatch.ai and find your API key in your project settings. The SDK will automatically use the `LANGWATCH_API_KEY` environment variable if it is set.
Start Instrumenting
First, ensure you have the SDK installed:
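The SDK is published on PyPI as `langwatch`:

```bash
pip install langwatch
```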
Initialize LangWatch early in your application, typically where you configure services:
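A minimal initialization sketch, assuming the SDK's `setup()` entry point (its options are detailed in the Full Setup section below):

```python
import langwatch

# Reads LANGWATCH_API_KEY from the environment if no API key is passed.
langwatch.setup()
```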
If you have an existing OpenTelemetry setup in your application, please see the Already using OpenTelemetry? section below.
Capturing Messages
- Each message triggering your LLM pipeline as a whole is captured with a Trace.
- A Trace contains multiple Spans, which are the steps inside your pipeline.
- Traces can be grouped together on the LangWatch Dashboard by having the same `thread_id` in their metadata, making the individual messages become part of a conversation.
- It is also recommended to provide the `user_id` metadata to track user analytics, as shown in the sketch after this list.
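A sketch of attaching both fields when starting a trace (the `metadata` keyword on `@langwatch.trace()` is an assumption here, consistent with the customization shown in the next section):

```python
import langwatch

@langwatch.trace(metadata={"thread_id": "conv-42", "user_id": "user-123"})
def answer(message: str) -> str:
    ...
```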
Creating a Trace
To capture an end-to-end operation, like processing a user message, you can wrap the main function or entry point with the `@langwatch.trace()` decorator. This automatically creates a root span for the entire operation.
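A minimal sketch (the handler name and pipeline helper are illustrative):

```python
import langwatch

@langwatch.trace()
def handle_message(user_message: str) -> str:
    # Everything executed inside this call is recorded
    # under a single trace with a root span.
    reply = run_llm_pipeline(user_message)  # hypothetical helper
    return reply
```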
You can customize the trace name and add initial metadata if needed:
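For example, assuming `name` and `metadata` keyword arguments on the decorator:

```python
@langwatch.trace(
    name="HandleSupportMessage",  # display name for this trace
    metadata={"user_id": "user-123", "thread_id": "conv-42"},
)
def handle_message(user_message: str) -> str:
    ...
```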
Within a traced function, you can access the current trace context using `langwatch.get_current_trace()`.
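A sketch of updating the active trace from inside the handler, assuming the trace object exposes an `update()` method mirroring the span API described below:

```python
@langwatch.trace()
def handle_message(user_message: str) -> str:
    trace = langwatch.get_current_trace()
    trace.update(metadata={"user_id": "user-123"})  # assumed update() signature
    ...
```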
Capturing a Span
To instrument specific parts of your pipeline within a trace (like an LLM operation, RAG retrieval, or external API call), use the `@langwatch.span()` decorator.
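A sketch of a RAG retrieval step (assuming `"rag"` is one of the accepted span types):

```python
@langwatch.span(type="rag")
def retrieve_documents(query: str) -> list[str]:
    # The query is captured as the span's input and the
    # returned documents as its output.
    return search_index(query)  # hypothetical retriever
```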
The `@langwatch.span()` decorator automatically captures the decorated function’s arguments as the span’s `input` and its return value as the `output`. This behavior can be controlled via the `capture_input` and `capture_output` arguments (both default to `True`).
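For example, to keep sensitive arguments out of the captured data:

```python
@langwatch.span(capture_input=False)  # arguments are not recorded
def lookup_customer(email: str) -> dict:
    ...
```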
Spans created within a function decorated with `@langwatch.trace()` will automatically be nested under the main trace span. You can add additional `type`, `name`, `metadata`, and `events`, or override the automatic input/output using decorator arguments or the `update()` method on the span object obtained via `langwatch.get_current_span()`.
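A sketch combining both approaches (the metadata keys are illustrative):

```python
@langwatch.span(type="llm", name="Completion")
def call_model(prompt: str) -> str:
    span = langwatch.get_current_span()
    span.update(metadata={"model": "gpt-4o-mini"})  # enrich the span at runtime
    ...
```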
For detailed guidance on manually creating traces and spans using context managers or direct start/end calls, see the Manual Instrumentation Tutorial.
Full Setup
Options
- API key: Your LangWatch API key. If not provided, it uses the `LANGWATCH_API_KEY` environment variable.
- Endpoint URL: The LangWatch endpoint URL. Defaults to the `LANGWATCH_ENDPOINT` environment variable or `https://app.langwatch.ai`.
- Base attributes: A dictionary of attributes to add to all spans (e.g., service name, version). Automatically includes SDK name, version, and language.
- Instrumentors: A list of automatic instrumentors (e.g., `OpenAIInstrumentor`, `LangChainInstrumentor`) to capture data from supported libraries.
- Tracer provider: An existing OpenTelemetry `TracerProvider`. If provided, LangWatch will use it (adding its exporter) instead of creating a new one. If not provided, LangWatch checks the global provider or creates a new one.
- Debug: Enable debug logging for LangWatch. Defaults to `False`, or to `True` when the `LANGWATCH_DEBUG` environment variable is set to `"true"`.
- Disable sending: If `True`, disables sending traces to the LangWatch server. Useful for testing or development.
- Flush on exit: If `True` (the default), the tracer provider will attempt to flush all pending spans when the program exits via `atexit`.
- Span exclusion rules: If provided, the SDK will exclude spans from being exported to LangWatch based on the rules defined in the list (e.g., matching span names).
- Ignore global tracer provider warning: If `True`, suppresses the warning message logged when an existing global `TracerProvider` is detected and LangWatch attaches its exporter to it instead of overriding it.
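Putting it together, a full `setup()` call might look like the sketch below. All keyword names are assumptions inferred from the option descriptions above, not the authoritative signature; check the SDK reference for the exact names.

```python
import os
import langwatch

# NOTE: keyword names below are illustrative assumptions.
langwatch.setup(
    api_key=os.environ.get("LANGWATCH_API_KEY"),
    endpoint_url="https://app.langwatch.ai",
    base_attributes={"service.name": "my-chatbot", "service.version": "1.2.0"},
    debug=False,
    disable_sending=False,  # set to True in tests to keep traces local
    flush_on_exit=True,
)
```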
Integrations
LangWatch offers seamless integrations with a variety of popular Python libraries and frameworks. These integrations provide automatic instrumentation, capturing relevant data from your LLM applications with minimal setup.
Below is a list of currently supported integrations. Click on each to learn more about specific setup instructions and available features:
- AWS Bedrock
- Azure AI
- Crew AI
- DSPy
- Haystack
- LangChain
- LangGraph
- LiteLLM
- OpenAI
- OpenAI Agents
- OpenAI Azure
- Pydantic AI
- Other Frameworks
If you are using a library that is not listed here, you can still instrument your application manually. See the Manual Instrumentation Tutorial for more details. Since LangWatch is built on OpenTelemetry, it also supports any library or framework that integrates with OpenTelemetry. We are also continuously working on adding support for more integrations.