Capturing and Mapping Inputs & Outputs
Learn how to control the capture and structure of input and output data for traces and spans with the LangWatch Python SDK.
Effectively capturing the inputs and outputs of your LLM application’s operations is crucial for observability. LangWatch provides flexible ways to manage this data, whether you prefer automatic capture or explicit control to map complex objects, format data, or redact sensitive information.
This tutorial covers how to:
- Understand automatic input/output capture.
- Explicitly set inputs and outputs for traces and spans.
- Dynamically update this data on active traces/spans.
- Handle different data formats, especially for chat messages.
Automatic Input and Output Capture
By default, when you use `@langwatch.trace()` or `@langwatch.span()` as decorators on functions, the SDK attempts to automatically capture:
- Inputs: The arguments passed to the decorated function.
- Outputs: The value returned by the decorated function.
This behavior can be controlled using the `capture_input` and `capture_output` boolean parameters.
Refer to the API reference for `@langwatch.trace()` and `@langwatch.span()` for more details on the `capture_input` and `capture_output` parameters.
Explicitly Setting Inputs and Outputs
You often need more control over what data is recorded. You can explicitly set inputs and outputs using the `input` and `output` parameters when initiating a trace or span, or by using the `update()` method on the respective objects.
This is useful for:
- Capturing only specific parts of complex objects.
- Formatting data in a more readable or structured way (e.g., as a list of `ChatMessage` objects).
- Redacting sensitive information before it’s sent to LangWatch.
- Providing inputs/outputs when not using decorators (e.g., with context managers for parts of a function).
At Initialization
When using `@langwatch.trace()` or `@langwatch.span()` (either as decorators or context managers), you can pass `input` and `output` arguments.
If you provide `input` or `output` directly, it overrides what might have been automatically captured for that field.
Dynamically Updating Inputs and Outputs
You can modify the input or output of an active trace or span using its `update()` method. This is particularly useful when the input/output data is determined or refined during the operation.
The `update()` method on `LangWatchTrace` and `LangWatchSpan` objects is versatile. See the reference for `LangWatchTrace` methods and `LangWatchSpan` methods.
Handling Different Data Formats
LangWatch can store various types of input and output data:
- Strings: Simple text.
- Dictionaries: Automatically serialized as JSON. This is useful for structured data.
- Lists of `ChatMessage` objects: The standard way to represent conversations for LLM interactions. This ensures proper display and analysis in the LangWatch UI.
Capturing Chat Messages
For LLM interactions, structure your inputs and outputs as a list of `ChatMessage` objects.
For the detailed structure of `ChatMessage`, `ToolCall`, and other related types, please refer to the Core Data Types section in the API Reference.
Use Cases and Best Practices
- Redacting Sensitive Information: If your function arguments or return values contain sensitive data (PII, API keys), disable automatic capture (`capture_input=False`, `capture_output=False`) and explicitly set sanitized versions using the `input`/`output` parameters or `update()`.
- Mapping Complex Objects: If your inputs/outputs are complex Python objects, map them to a dictionary or a simplified string representation for clearer display in LangWatch.
- Improving Readability: For long text inputs/outputs (e.g., full documents), consider capturing a summary or metadata instead of the entire content to reduce noise, unless the full content is essential for debugging or evaluation.
- Clearing Captured Data: You can set `input=None` or `output=None` via the `update()` method to remove previously captured (or auto-captured) data if it’s no longer relevant or was captured in error.
Conclusion
Controlling how inputs and outputs are captured in LangWatch allows you to tailor the observability data to your specific needs. By using automatic capture flags, explicit parameters, dynamic updates, and appropriate data formatting (especially `ChatMessage` for conversations), you can ensure that your traces provide clear, relevant, and secure insights into your LLM application’s behavior.