This page contains the low-level reference for the Python SDK components. For a guide on integrating LangWatch into your Python project, see the Python Integration Guide.

Trace

The trace is the basic unit of work in LangWatch: a collection of spans grouped together to represent a single end-to-end operation. You can create a trace in three ways:

import langwatch

# As a decorator:
@langwatch.trace()
def my_function():
    pass


# As a context manager
with langwatch.trace():
    pass


# As a function
trace = langwatch.trace()

All three ways create the same trace object, but for the last one you need to manually call trace.deferred_send_spans() or trace.send_spans() to send the spans to the LangWatch API.
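For illustration, a minimal sketch of how the function form fits together (the span name is a placeholder):

import langwatch

trace = langwatch.trace()

# Create spans directly from the trace and do the work
span = trace.span(name="my-step")
span.end()

# Nothing is sent until you flush the trace explicitly
trace.deferred_send_spans()
# or: trace.send_spans()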

The first two also set the trace in the current context, which you can retrieve with:

trace = langwatch.get_current_trace()
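For example, inside a function decorated with @langwatch.trace() you can grab the trace from the context and update it. A sketch, where the function name and metadata values are placeholders:

import langwatch

@langwatch.trace()
def handle_message(user_id: str, message: str):
    # Retrieve the trace created by the decorator from the context
    trace = langwatch.get_current_trace()
    trace.update(metadata={"user_id": user_id})
    ...  # run your LLM pipeline here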

Both the trace creation function and .update() accept trace_id, metadata, and api_key parameters (see the example after the table):

| Parameter | Type | Description |
| --- | --- | --- |
| trace_id | str | The trace id to use for the trace. A random one is generated by default, but you can also pass your own to connect it with your internal message id if you have one. |
| metadata | dict | The object holding metadata for the trace; it contains the fields listed below. |
| metadata.user_id | str | The user id that is triggering the generation on your LLM pipeline. |
| metadata.thread_id | str | A thread id can be used to virtually group together all the different traces in a single thread or workflow. |
| metadata.labels | list[str] | A list of labels to categorize the trace, which you can later use to filter on the LangWatch dashboard and to trigger evaluations and alerts. |
| api_key | str | The api key to use for the trace; can be set to override the LANGWATCH_API_KEY environment variable. |
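As an illustration, a sketch passing these parameters at creation time and updating the trace later; all ids, labels, and the api key below are placeholder values:

import langwatch

with langwatch.trace(
    trace_id="my-internal-message-id-123",
    metadata={"labels": ["production", "chatbot"]},
    api_key="your-langwatch-api-key",
):
    # Attach more metadata once it becomes available
    langwatch.get_current_trace().update(
        metadata={"user_id": "user_1234", "thread_id": "thread_5678"}
    )
    ...  # run your LLM pipeline here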

Span

A span is the smallest unit of work in LangWatch, representing a single step within a trace. As with traces, you can create a span in three ways:

import langwatch

# As a decorator
@langwatch.span()
def my_function():
    pass

# As a context manager
with langwatch.span():
    pass

# As a function
span = langwatch.span()

All three ways create the same span object, but for the last one you need to manually end the span by calling span.end(), which can also take parameters to update the span data:

span.end(output="sunny")

The first two also set the span in the current context, which you can retrieve with:

span = langwatch.get_current_span()
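For example, inside a function decorated with @langwatch.span() you can enrich the span from the context. A sketch for a RAG step, where the function name and documents are placeholders:

import langwatch

@langwatch.span(type="rag")
def retrieve_documents(query: str) -> list[str]:
    documents = ["LangWatch traces group spans together.", "Spans can be nested."]  # placeholder retrieval
    # Record the retrieved contexts on the span created by the decorator
    langwatch.get_current_span().update(contexts=documents)
    return documents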

By default, when a span is created it becomes a child of the current span in context, but you can also explicitly create a child span from a trace or from another span by creating it from the parent, for example:

trace = langwatch.trace() # or langwatch.get_current_trace()

# Direct child of the trace
span = trace.span(name="child")

# Child of another span, grandchild of the trace
subspan = span.span(name="grandchild")

subspan.end()
span.end()

trace.deferred_send_spans()

The span creation function, .update(), and .end() all accept the following span parameters (see the LLM example after the table):

| Parameter | Type | Description |
| --- | --- | --- |
| span_id | str | The span id to use for the span; a random one is generated by default. |
| name | str | The name of the span, automatically inferred from the function when using the @langwatch.span() decorator. |
| type | "span" \| "rag" \| "llm" \| "chain" \| "tool" \| "agent" \| "guardrail" | The type of the span. Defaults to span, with rag and llm spans allowing some extra parameters. |
| parent | ContextSpan | The parent span to use for the span; if not set, the current span in context is used as the parent. |
| capture_input | bool | Available only on the @langwatch.span() decorator: whether to capture the input of the function. Defaults to True. |
| capture_output | bool | Available only on the @langwatch.span() decorator: whether to capture the output of the function. Defaults to True. |
| input | str \| list[ChatMessage] \| SpanInputOutput | The span input. It can be a string, a list of OpenAI-compatible chat message dicts, or a SpanInputOutput object, which captures other generic types such as { "type": "json", "value": {...} }. |
| output | str \| list[ChatMessage] \| SpanInputOutput | The span output. It can be a string, a list of OpenAI-compatible chat message dicts, or a SpanInputOutput object, which captures other generic types such as { "type": "json", "value": {...} }. |
| error | Exception | The error that occurred during the function execution, if any. It is automatically captured with the @langwatch.span() decorator and context manager. |
| timestamps | SpanTimestamps | The timestamps of the span, tracked by default when using the @langwatch.span() decorator and context manager. |
| timestamps.started_at | int | The start time of the span in milliseconds; the current time is used by default when the span starts. |
| timestamps.first_token_at | int | The time when the first token was generated, in milliseconds, automatically tracked for streaming LLMs when using framework integrations. |
| timestamps.finished_at | int | The time when the span finished, in milliseconds; the current time is used by default when the span ends. |
| contexts | list[str] \| list[RAGChunk] | RAG only: The list of contexts retrieved by the RAG, manually captured to be used later as the context source for RAG evaluators. Check out the Capturing a RAG Span guide for more information. |
| model | str | LLM only: The model used for the LLM in the "vendor/model" format (e.g. "openai/gpt-3.5-turbo"). Automatically captured when using framework integrations; otherwise it is important to set it manually for correct token and cost tracking. |
| params | LLMSpanParams | LLM only: The parameters used for the LLM call, automatically captured when using framework integrations. |
| params.temperature | float | LLM only: The temperature used for the LLM. |
| params.stream | bool | LLM only: Whether the LLM is streaming or not. |
| params.tools | list[dict] | LLM only: OpenAI-compatible tools list available to the LLM. |
| params.tool_choice | str | LLM only: The OpenAI-compatible tool_choice setting for the LLM. |
| metrics | LLMSpanMetrics | LLM only: The metrics of the LLM span, automatically captured when using framework integrations. |
| metrics.prompt_tokens | int | LLM only: The number of prompt tokens used by the LLM. |
| metrics.completion_tokens | int | LLM only: The number of completion tokens used by the LLM. |
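To illustrate the LLM-specific parameters, a sketch of manually recording an LLM call. The model, messages, and token counts are placeholders, and plain dicts are assumed where the SDK defines LLMSpanParams and LLMSpanMetrics; adjust to the actual types if needed.

import langwatch

with langwatch.trace():
    span = langwatch.span(
        type="llm",
        name="generate_answer",
        model="openai/gpt-3.5-turbo",
        input=[{"role": "user", "content": "What's the weather like today?"}],
        params={"temperature": 0.7, "stream": False},
    )
    # ... call the LLM here ...
    span.end(
        output=[{"role": "assistant", "content": "Sunny, around 22°C."}],
        metrics={"prompt_tokens": 12, "completion_tokens": 9},
    )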