
Integrate LangWatch into your Python application to start observing your LLM interactions. This guide covers the setup and basic usage of the LangWatch Python SDK.

Pro tip: want to get started even faster? Copy our llms.txt and ask an AI to do this integration for you.

Get your LangWatch API Key

First, you need a LangWatch API key. Sign up at app.langwatch.ai and find your API key in your project settings. The SDK will automatically use the LANGWATCH_API_KEY environment variable if it is set.
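
If you prefer to configure the key in code instead, you can pass it to setup() directly (the value below is a placeholder; prefer the environment variable in production):

import langwatch

langwatch.setup(api_key="your-api-key")  # placeholder key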

Start Instrumenting

First, ensure you have the SDK installed:

pip install langwatch

Initialize LangWatch early in your application, typically where you configure services:

import langwatch

langwatch.setup()  # Picks up LANGWATCH_API_KEY from the environment

# Your application code...

If you have an existing OpenTelemetry setup in your application, please see the Already using OpenTelemetry? section below.

Capturing Messages

  • Each message that triggers your LLM pipeline is captured end to end as a Trace.
  • A Trace contains multiple Spans, which are the steps inside your pipeline.
    • A span can be an LLM call, a database query for a RAG retrieval, or a simple function transformation.
    • Different types of Spans capture different parameters.
    • Spans can be nested to capture the pipeline structure.
  • Traces can be grouped together on the LangWatch dashboard by giving them the same thread_id in their metadata, making the individual messages part of a conversation (see the example after this list).
    • It is also recommended to provide user_id metadata to track user analytics.
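
For example, a conversation turn could attach both IDs through the decorator's metadata argument (the ID values below are placeholders):

import langwatch

@langwatch.trace(metadata={"thread_id": "conv-123", "user_id": "user-456"})  # placeholder IDs
async def handle_message():
    # Every message handled with the same thread_id joins the same conversation
    pass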

Creating a Trace

To capture an end-to-end operation, like processing a user message, you can wrap the main function or entry point with the @langwatch.trace() decorator. This automatically creates a root span for the entire operation.

import langwatch
from openai import OpenAI

client = OpenAI()

@langwatch.trace()
async def handle_message():
    # This whole function execution is now a single trace
    langwatch.get_current_trace().autotrack_openai_calls(client) # Automatically capture OpenAI calls

    # ... rest of your message handling logic ...
    pass

You can customize the trace name and add initial metadata if needed:

@langwatch.trace(name="My Custom Trace Name", metadata={"foo": "bar"})
async def handle_message():
    # ...
    pass

Within a traced function, you can access the current trace context using langwatch.get_current_trace().
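
For example, you can attach metadata once it becomes known during the request. This is a minimal sketch assuming the trace object's update() method accepts a metadata argument, mirroring the span update() shown below:

import langwatch

@langwatch.trace()
async def handle_message():
    # Placeholder ID; e.g. set after authenticating the user
    langwatch.get_current_trace().update(metadata={"user_id": "user-456"})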

Capturing a Span

To instrument specific parts of your pipeline within a trace (like an LLM operation, RAG retrieval, or external API call), use the @langwatch.span() decorator.

import langwatch
from langwatch.types import RAGChunk

@langwatch.span(type="rag", name="RAG Document Retrieval") # Add type and custom name
def rag_retrieval(query: str):
    # ... logic to retrieve documents ...
    search_results = [
        {"id": "doc-1", "content": "..." },
        {"id": "doc-2", "content": "..." }
    ]

    # Add specific context data to the span
    langwatch.get_current_span().update(
        contexts=[
            RAGChunk(document_id=doc["id"], content=doc["content"])
            for doc in search_results
        ],
        retrieval_strategy="vector_search",
    )

    return search_results

@langwatch.trace()
async def handle_message(message: str):
    # ...
    retrieved_docs = rag_retrieval(message) # This call creates a nested span
    # ...

The @langwatch.span() decorator automatically captures the decorated function’s arguments as the span’s input and its return value as the output. This behavior can be controlled via the capture_input and capture_output arguments (both default to True).
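
For example, to keep a sensitive argument out of the recorded span input (a sketch using the capture_input flag described above; the function and field names are hypothetical):

import langwatch

@langwatch.span(name="Lookup Customer", capture_input=False)
def lookup_customer(customer_ssn: str):
    # The SSN argument is not recorded as the span's input;
    # the return value is still captured as the output.
    return {"customer_id": "cust-1"}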

Spans created within a function decorated with @langwatch.trace() will automatically be nested under the main trace span. You can add additional type, name, metadata, and events, or override the automatic input/output using decorator arguments or the update() method on the span object obtained via langwatch.get_current_span().
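
For instance, to override the automatically captured output with a redacted value (a sketch based on the update() method described above):

import langwatch

@langwatch.span()
def summarize(text: str) -> str:
    summary = text[:100]  # placeholder summarization logic
    # Replace the auto-captured output before returning
    langwatch.get_current_span().update(output="[redacted]")
    return summary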

For detailed guidance on manually creating traces and spans using context managers or direct start/end calls, see the Manual Instrumentation Tutorial.
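
As a rough sketch of the context-manager form (assuming langwatch.trace() and langwatch.span() can be used as context managers, as covered in that tutorial):

import langwatch

def process(query: str):
    with langwatch.trace(name="Process Query"):
        with langwatch.span(type="rag", name="Retrieve"):
            pass  # retrieval logic here
        with langwatch.span(name="Generate"):
            pass  # generation logic here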

Full Setup

import os

import langwatch
from langwatch.attributes import AttributeKey
from langwatch.domain import SpanProcessingExcludeRule

from community.instrumentors import OpenAIInstrumentor # Example instrumentor

from opentelemetry.sdk.trace import TracerProvider

# Example: Providing an existing TracerProvider
# existing_provider = TracerProvider()

# Example: Defining exclude rules
exclude_rules = [
    SpanProcessingExcludeRule(
      field_name=["span_name"],
      match_value="GET /health_check",
      match_operation="exact_match"
    ),
]

langwatch.setup(
    api_key=os.getenv("LANGWATCH_API_KEY"),
    endpoint_url="https://your-langwatch-instance.com", # Optional: Defaults to env var or cloud
    base_attributes={
      AttributeKey.ServiceName: "my-awesome-service",
      AttributeKey.ServiceVersion: "1.2.3",
      # Add other custom attributes here
    },
    instrumentors=[OpenAIInstrumentor()], # Optional: List of instrumentors that conform to the `Instrumentor` protocol
    # tracer_provider=existing_provider, # Optional: Provide your own TracerProvider
    debug=True, # Optional: Enable debug logging
    disable_sending=False, # Optional: Disable sending traces
    flush_on_exit=True, # Optional: Flush traces on exit (default: True)
    span_exclude_rules=exclude_rules, # Optional: Rules to exclude spans
    ignore_global_tracer_provider_override_warning=False # Optional: Silence warning if global provider exists
)

# Your application code...

Options

api_key
str | None

Your LangWatch API key. If not provided, it uses the LANGWATCH_API_KEY environment variable.

endpoint_url
str | None

The LangWatch endpoint URL. Defaults to the LANGWATCH_ENDPOINT environment variable or https://app.langwatch.ai.

base_attributes
dict[str, Any] | None

A dictionary of attributes to add to all spans (e.g., service name, version). Automatically includes SDK name, version, and language.

instrumentors
Sequence[Instrumentor] | None

A list of automatic instrumentors (e.g., OpenAIInstrumentor, LangChainInstrumentor) to capture data from supported libraries.

tracer_provider
TracerProvider | None

An existing OpenTelemetry TracerProvider. If provided, LangWatch will use it (adding its exporter) instead of creating a new one. If not provided, LangWatch checks the global provider or creates a new one.

debug
bool
default: False

Enable debug logging for LangWatch. Defaults to False; it is also enabled when the LANGWATCH_DEBUG environment variable is set to "true".

disable_sending
bool
default: False

If True, disables sending traces to the LangWatch server. Useful for testing or development.

flush_on_exit
bool
default: True

If True (the default), the tracer provider will attempt to flush all pending spans when the program exits via atexit.

span_exclude_rules
List[SpanProcessingExcludeRule] | None

If provided, the SDK will exclude spans from being exported to LangWatch based on the rules defined in the list (e.g., matching span names).

ignore_global_tracer_provider_override_warning
bool
default: False

If True, suppresses the warning message logged when an existing global TracerProvider is detected and LangWatch attaches its exporter to it instead of overriding it.
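
For example, a test suite might run the instrumented code paths without exporting anything by combining a few of these options (a minimal sketch; the helper name is hypothetical):

import langwatch

def configure_langwatch_for_tests():
    # Spans are still created, but nothing is sent to the LangWatch server
    langwatch.setup(disable_sending=True, flush_on_exit=False)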

Integrations

LangWatch offers seamless integrations with a variety of popular Python libraries and frameworks. These integrations provide automatic instrumentation, capturing relevant data from your LLM applications with minimal setup.

LangWatch currently provides dedicated integrations for popular libraries such as OpenAI and LangChain. See the integrations section of the LangWatch documentation for the full list, along with setup instructions and available features for each.

If you are using a library that is not listed here, you can still instrument your application manually. See the Manual Instrumentation Tutorial for more details. Since LangWatch is built on OpenTelemetry, it also supports any library or framework that integrates with OpenTelemetry. We are also continuously working on adding support for more integrations.