Python SDK Reference
This page contains the low-level reference for the Python SDK components. For a guide on integrating LangWatch into your Python project, see the Python Integration Guide.
Trace
The trace is the basic unit of work in LangWatch. It is a collection of spans grouped together to form a single unit of work. You can create a trace in three ways:
```python
import langwatch

# As a decorator
@langwatch.trace()
def my_function():
    pass

# As a context manager
with langwatch.trace():
    pass

# As a function
trace = langwatch.trace()
```
All three ways create the same trace object, but for the last one you need to manually call `trace.deferred_send_spans()` or `trace.send_spans()` to send the spans to the LangWatch API.
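For example, a minimal sketch of the function form (the metadata values are illustrative placeholders):

```python
import langwatch

# Create the trace manually; unlike the decorator and context manager forms,
# this does not set the trace on the context
trace = langwatch.trace(metadata={"user_id": "user-123"})

# ... run your LLM pipeline here, creating spans from the trace ...

# Send the collected spans to the LangWatch API
trace.deferred_send_spans()
```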
The first two also set the trace on the current context, which you can retrieve with:
```python
trace = langwatch.get_current_trace()
```
Both on the trace creation function and on `.update()` you can set the `trace_id`, `metadata`, and `api_key` to be used by the trace (see the example after the table below):
| Parameter | Type | Description |
| --- | --- | --- |
| `trace_id` | `str` | The trace id to use for the trace. A random one is generated by default, but you can also pass your own to connect it with your internal message id if you have one. |
| `metadata` | `dict` | The object holding metadata for the trace; it contains the fields listed below. |
| `metadata.user_id` | `str` | The user id that is triggering the generation on your LLM pipeline. |
| `metadata.thread_id` | `str` | A thread id that can be used to virtually group together all the different traces in a single thread or workflow. |
| `metadata.labels` | `list[str]` | A list of labels to categorize the trace, which you can later use to filter on the LangWatch dashboard and to trigger evaluations and alerts. |
| `api_key` | `str` | The api key to use for the trace; can be set to override the `LANGWATCH_API_KEY` environment variable. |
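For example, a minimal sketch setting metadata at creation time and later via `.update()` (the ids and labels are illustrative placeholders):

```python
import langwatch

@langwatch.trace(metadata={"user_id": "user-123", "labels": ["summarization"]})
def summarize(text: str) -> str:
    # Attach metadata that is only known at runtime to the current trace
    langwatch.get_current_trace().update(metadata={"thread_id": "thread-456"})
    # ... your LLM pipeline here ...
    return "summary"
```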
Span
A span is a single unit of work in a trace; it is the smallest unit of work in LangWatch. Similar to traces, you can create it in three different ways:
```python
import langwatch

# As a decorator
@langwatch.span()
def my_function():
    pass

# As a context manager
with langwatch.span():
    pass

# As a function
span = langwatch.span()
```
All three ways create the same span object, but for the last one you need to manually end the span by calling `span.end()`, which may also take parameters for updating the span data:
```python
span.end(output="sunny")
```
The first two also set the span on the current context, which you can retrieve with:
```python
span = langwatch.get_current_span()
```
By default, when a span is created it becomes a child of the current span in context, but you can also explicitly create a child span from a trace or from another span by initiating it from the parent, for example:
```python
trace = langwatch.trace()  # or langwatch.get_current_trace()

# Direct child of the trace
span = trace.span(name="child")

# Child of another span, grandchild of the trace
subspan = span.span(name="grandchild")

subspan.end()
span.end()
trace.deferred_send_spans()
```
On the span creation function, as well as on the `.update()` and `.end()` methods, you can set the following span parameters (see the example after the table below):
| Parameter | Type | Description |
| --- | --- | --- |
| `span_id` | `str` | The span id to use for the span, a random one is generated by default. |
| `name` | `str` | The name of the span, automatically inferred from the function name when using the `@langwatch.span()` decorator. |
| `type` | `"span"` \| `"rag"` \| `"llm"` \| `"chain"` \| `"tool"` \| `"agent"` \| `"guardrail"` | The type of the span, defaults to `"span"`, with `rag` and `llm` spans allowing some extra parameters. |
| `parent` | `ContextSpan` | The parent span to use for the span; if not set, the current span in context is used as the parent. |
| `capture_input` | `bool` | Available only on the `@langwatch.span()` decorator: whether to capture the input of the function, defaults to `True`. |
| `capture_output` | `bool` | Available only on the `@langwatch.span()` decorator: whether to capture the output of the function, defaults to `True`. |
| `input` | `str` \| `list[ChatMessage]` \| `SpanInputOutput` | The span input; it can be a string, a list of OpenAI-compatible chat message dicts, or a `SpanInputOutput` object, which captures other generic types such as `{ "type": "json", "value": {...} }`. |
| `output` | `str` \| `list[ChatMessage]` \| `SpanInputOutput` | The span output; it can be a string, a list of OpenAI-compatible chat message dicts, or a `SpanInputOutput` object, which captures other generic types such as `{ "type": "json", "value": {...} }`. |
| `error` | `Exception` | The error that occurred during the function execution, if any. It is automatically captured with the `@langwatch.span()` decorator and context manager. |
| `timestamps` | `SpanTimestamps` | The timestamps of the span, tracked by default when using the `@langwatch.span()` decorator and context manager. |
| `timestamps.started_at` | `int` | The start time of the span in milliseconds; the current time is used by default when the span starts. |
| `timestamps.first_token_at` | `int` | The time when the first token was generated, in milliseconds, automatically tracked for streaming LLMs when using framework integrations. |
| `timestamps.finished_at` | `int` | The time when the span finished, in milliseconds; the current time is used by default when the span ends. |
| `contexts` | `list[str]` \| `list[RAGChunk]` | RAG only: The list of contexts retrieved by the RAG, manually captured to be used later as the context source for RAG evaluators. Check out the Capturing a RAG Span guide for more information. |
| `model` | `str` | LLM only: The model used for the LLM in the `"vendor/model"` format (e.g. `"openai/gpt-3.5-turbo"`), automatically captured when using framework integrations; otherwise it is important to set it manually for correct token and cost tracking. |
| `params` | `LLMSpanParams` | LLM only: The parameters used for the LLM call, automatically captured when using framework integrations. |
| `params.temperature` | `float` | LLM only: The temperature used for the LLM. |
| `params.stream` | `bool` | LLM only: Whether the LLM is streaming or not. |
| `params.tools` | `list[dict]` | LLM only: OpenAI-compatible tools list available to the LLM. |
| `params.tool_choice` | `str` | LLM only: The OpenAI-compatible `tool_choice` setting for the LLM. |
| `metrics` | `LLMSpanMetrics` | LLM only: The metrics of the LLM span, automatically captured when using framework integrations. |
| `metrics.prompt_tokens` | `int` | LLM only: The number of prompt tokens used by the LLM. |
| `metrics.completion_tokens` | `int` | LLM only: The number of completion tokens used by the LLM. |
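For example, a minimal sketch of an LLM span created with the function form and finished with `.end()` (the model, messages, and metric values are illustrative placeholders, and passing plain dicts where the table lists `LLMSpanParams` and `LLMSpanMetrics` is an assumption of this sketch):

```python
import langwatch

# Create an LLM span manually; type "llm" enables the LLM-specific parameters
span = langwatch.span(type="llm", name="answer_question")

span.update(
    model="openai/gpt-3.5-turbo",
    input=[{"role": "user", "content": "What is the weather today?"}],
    params={"temperature": 0.7, "stream": False},
)

# ... call your LLM here ...

# Ending the span can also set the remaining parameters
span.end(
    output="It is sunny today.",
    metrics={"prompt_tokens": 12, "completion_tokens": 7},
)
```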