Pro tip: want to get started even faster? Copy our llms.txt and ask an AI to do this integration.
Get your LangWatch API Key
First, you need a LangWatch API key. Sign up at app.langwatch.ai and find your API key in your project settings. The SDK will automatically use the `LANGWATCH_API_KEY` environment variable if it is set.
Start Instrumenting
If you have an existing OpenTelemetry setup in your application, please see the Already using OpenTelemetry? section below.

First, ensure you have the SDK installed:
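A minimal sketch, assuming the SDK is published on PyPI under the package name `langwatch`:

```python
# Assumed install command: pip install langwatch

import langwatch

# Reads LANGWATCH_API_KEY from the environment when no api_key is passed.
langwatch.setup()
```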
Capturing Messages
- Each message triggering your LLM pipeline as a whole is captured with a Trace.
- A Trace contains multiple Spans, which are the steps inside your pipeline.
- Traces can be grouped together on the LangWatch Dashboard by having the same `thread_id` in their metadata, making the individual messages become part of a conversation.
- It is also recommended to provide the `user_id` metadata to track user analytics (see the sketch after this list).
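A minimal sketch of setting both fields; it assumes the trace object returned by `langwatch.get_current_trace()` (introduced in the next section) supports an `update(metadata=...)` call, mirroring the span-level `update()` described later:

```python
import langwatch

@langwatch.trace()
def reply(user_message: str, thread_id: str, user_id: str) -> str:
    # Traces sharing a thread_id are grouped into one conversation;
    # user_id powers per-user analytics.
    langwatch.get_current_trace().update(
        metadata={"thread_id": thread_id, "user_id": user_id}
    )
    return "..."  # your pipeline here
```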
Creating a Trace
To capture an end-to-end operation, like processing a user message, you can wrap the main function or entry point with the `@langwatch.trace()` decorator. This automatically creates a root span for the entire operation. Inside the decorated function, you can access the current trace object via `langwatch.get_current_trace()`.
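For example, a minimal sketch (the handler name and body are illustrative):

```python
import langwatch

langwatch.setup()

@langwatch.trace()  # creates the root span for this whole operation
def handle_message(user_message: str) -> str:
    # ... run your LLM pipeline here ...
    return "answer"
```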
Capturing a Span
To instrument specific parts of your pipeline within a trace (like an LLM operation, RAG retrieval, or external API call), use the `@langwatch.span()` decorator.
The `@langwatch.span()` decorator automatically captures the decorated function's arguments as the span's input and its return value as the output. This behavior can be controlled via the `capture_input` and `capture_output` arguments (both default to `True`). Spans created inside a function decorated with `@langwatch.trace()` will automatically be nested under the main trace span. You can add additional `type`, `name`, `metadata`, and `events`, or override the automatic input/output using decorator arguments or the `update()` method on the span object obtained via `langwatch.get_current_span()`.
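A short sketch, assuming a RAG retrieval step (the retriever body and document contents are placeholders):

```python
import langwatch

@langwatch.span(type="rag")  # nested under the current trace automatically
def retrieve(question: str) -> list[str]:
    # The argument is captured as the span input, the return value as output.
    return ["document snippet 1", "document snippet 2"]

@langwatch.trace()
def answer(question: str) -> str:
    docs = retrieve(question)
    # ... pass docs to the LLM and generate the final answer ...
    return "answer"
```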
For detailed guidance on manually creating traces and spans using context managers or direct start/end calls, see the Manual Instrumentation Tutorial.
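As a rough sketch of the context-manager form (argument names here are assumptions; the tutorial covers the exact signatures):

```python
import langwatch

def process(question: str) -> str:
    # Context managers give fine-grained control over span boundaries.
    with langwatch.trace(name="process-message"):
        with langwatch.span(type="llm", name="generate-answer") as span:
            answer = "..."  # call your LLM here
            span.update(output=answer)  # override the captured output
    return answer
```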
Full Setup
Options
- API key: Your LangWatch API key. If not provided, it uses the `LANGWATCH_API_KEY` environment variable.
- Endpoint URL: The LangWatch endpoint URL. Defaults to the `LANGWATCH_ENDPOINT` environment variable or https://app.langwatch.ai.
- Base attributes: A dictionary of attributes to add to all spans (e.g., service name, version). Automatically includes SDK name, version, and language.
- Instrumentors: A list of automatic instrumentors (e.g., `OpenAIInstrumentor`, `LangChainInstrumentor`) to capture data from supported libraries.
- Tracer provider: An existing OpenTelemetry `TracerProvider`. If provided, LangWatch will use it (adding its exporter) instead of creating a new one. If not provided, LangWatch checks the global provider or creates a new one.
- Debug: Enable debug logging for LangWatch. Defaults to `False`, or checks whether the `LANGWATCH_DEBUG` environment variable is set to `"true"`.
- Disable sending: If `True`, disables sending traces to the LangWatch server. Useful for testing or development.
- Flush on exit: If `True` (the default), the tracer provider will attempt to flush all pending spans when the program exits via `atexit`.
- Span exclusion rules: If provided, the SDK will exclude spans from being exported to LangWatch based on the rules defined in the list (e.g., matching span names).
- Ignore global provider warning: If `True`, suppresses the warning message logged when an existing global `TracerProvider` is detected and LangWatch attaches its exporter to it instead of overriding it.
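Putting these options together, here is a hedged sketch of a full setup call. The keyword names below (`api_key`, `endpoint_url`, `base_attributes`, `instrumentors`, `debug`) are assumptions inferred from the option descriptions above, so verify them against your installed SDK version:

```python
import langwatch
# Assumption: the instrumentor comes from the OpenInference package
# openinference-instrumentation-openai; any supported instrumentor works here.
from openinference.instrumentation.openai import OpenAIInstrumentor

langwatch.setup(
    api_key="your-api-key",                   # falls back to LANGWATCH_API_KEY
    endpoint_url="https://app.langwatch.ai",  # falls back to LANGWATCH_ENDPOINT
    base_attributes={"service.name": "my-bot", "service.version": "1.2.0"},
    instrumentors=[OpenAIInstrumentor()],     # automatic library instrumentation
    debug=False,                              # or set LANGWATCH_DEBUG=true
)
```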
Integrations
LangWatch offers seamless integrations with a variety of popular Python libraries and frameworks. These integrations provide automatic instrumentation, capturing relevant data from your LLM applications with minimal setup. Below is a list of currently supported integrations. Click on each to learn more about specific setup instructions and available features:
- Agno
- AWS Bedrock
- Azure AI
- Crew AI
- DSPy
- Haystack
- Langchain
- LangGraph
- LiteLLM
- OpenAI
- OpenAI Agents
- OpenAI Azure
- Pydantic AI
- Other Frameworks
Frequently Asked Questions
How do I track LLM costs and token usage?
LangWatch automatically captures cost and token data for most LLM providers. If you’re missing costs or token counts, our cost tracking tutorial covers troubleshooting steps, model cost configuration, and manual token tracking setup.
How do I capture RAG (Retrieval Augmented Generation) contexts?
To monitor your RAG pipelines and track retrieved documents, see our RAG capturing guide. This enables specialized RAG evaluators and analytics on document usage patterns.
How can I make the input and output of a trace more human-readable for easier conversation review?
Our input/output mapping guide shows how to properly structure chat messages, handle different data formats, and ensure your LLM conversations are captured correctly for analysis.
How do I add custom metadata and user information to traces?
Learn how to enrich your traces with user IDs, session data, and custom attributes in our metadata and attributes tutorial. This is essential for user analytics and filtering traces by custom criteria.
How can I capture a whole conversation?
To connect multiple traces into a conversation, you can use the `thread_id` metadata. See the metadata and attributes tutorial for more details.
How do I capture evaluations and guardrails tracing data?
Implement automated quality checks and safety measures with our evaluations and guardrails tutorial. Learn to create custom evaluators and integrate safety guardrails into your LLM workflows.
How can I manually instrument my application for more fine-grained control?
For custom frameworks or fine-grained control, our manual instrumentation guide covers creating traces and spans programmatically using context managers and direct API calls.
How do I integrate with existing OpenTelemetry setups?
LangWatch is OpenTelemetry-based, so it can be integrated seamlessly with any OpenTelemetry-compatible application. If you already use OpenTelemetry in your application, our OpenTelemetry integration tutorial explains how to configure LangWatch alongside existing telemetry infrastructure, including custom collectors and exporters.
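For instance, a sketch under the assumption that `langwatch.setup()` accepts your existing provider via a `tracer_provider` keyword, as the options above describe:

```python
import langwatch
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider

# Your application's existing OpenTelemetry provider (simplified).
provider = TracerProvider()
trace.set_tracer_provider(provider)

# LangWatch attaches its exporter to this provider instead of creating one.
langwatch.setup(tracer_provider=provider)
```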