Strand Agents is a framework for building AI agents with a focus on simplicity and performance; see the official Strand Agents documentation for details. LangWatch can capture traces generated by Strand Agents through its built-in OpenTelemetry support. This guide shows you how to set it up.

Prerequisites

  1. Install LangWatch SDK:
    pip install langwatch
    
  2. Install Strand Agents and the OpenInference instrumentor:
    pip install strand-agents openinference-instrumentation-strand-agents
    
  3. Set up your LLM provider: You’ll need to configure your preferred LLM provider (OpenAI, Anthropic, etc.) with the appropriate API keys.
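Before going further, you can sanity-check that the required packages are importable. This is a minimal sketch; the module names below are assumptions inferred from the pip package names above, so adjust them if your installed distribution exposes different top-level modules.

```python
import importlib.util


def missing_modules(module_names):
    """Return the subset of module_names that cannot be imported."""
    missing = []
    for name in module_names:
        try:
            if importlib.util.find_spec(name) is None:
                missing.append(name)
        except ModuleNotFoundError:
            # A dotted name whose parent package is absent raises instead
            # of returning None, so treat that as missing too.
            missing.append(name)
    return missing


# Module names are assumptions based on the pip package names above.
required = ["langwatch", "strand_agents", "openinference.instrumentation.strand_agents"]
missing = missing_modules(required)
if missing:
    print(f"Missing packages: {missing} (install them before continuing)")
```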

Instrumentation with OpenInference

LangWatch supports seamless observability for Strand Agents using the OpenInference Strand Agents instrumentor. This approach automatically captures traces from your Strand Agents and sends them to LangWatch.

Basic Setup (Automatic Tracing)

Here’s the simplest way to instrument your application:
import langwatch
from strand_agents import Agent
from openinference.instrumentation.strand_agents import StrandAgentsInstrumentor
import os

# Initialize LangWatch with the Strand Agents instrumentor
langwatch.setup(
    instrumentors=[StrandAgentsInstrumentor()]
)

# Set up environment variables
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"

# Create your agent
agent = Agent(
    name="hello_agent",
    model="gpt-4o-mini",
    instruction="You are a helpful assistant. Always be friendly and concise.",
)

# Use the agent as usual—traces will be sent to LangWatch automatically
def run_agent_interaction(user_message: str):
    response = agent.run(user_message)
    return response

# Example usage
if __name__ == "__main__":
    user_prompt = "Hello! How are you today?"
    response = run_agent_interaction(user_prompt)
    print(f"User: {user_prompt}")
    print(f"Agent: {response}")

That’s it! All Strand Agents activity will now be traced and sent to your LangWatch dashboard automatically.

Optional: Using Decorators for Additional Context

If you want to attach extra context or metadata to your traces, use the @langwatch.trace() decorator:
import langwatch
from strand_agents import Agent
from openinference.instrumentation.strand_agents import StrandAgentsInstrumentor
import os

langwatch.setup(
    instrumentors=[StrandAgentsInstrumentor()]
)

# ... agent setup code ...

@langwatch.trace(name="Strand Agents Run")
def run_agent_interaction(user_message: str):
    # Update the current trace with additional metadata
    current_trace = langwatch.get_current_trace()
    if current_trace:
        current_trace.update(
            metadata={
                "user_id": "user_123",
                "session_id": "session_abc",
                "agent_name": "hello_agent",
                "model": "gpt-4o-mini"
            }
        )
    
    response = agent.run(user_message)
    return response

How it Works

  1. langwatch.setup(): Initializes the LangWatch SDK, which includes setting up an OpenTelemetry trace exporter. This exporter is ready to receive spans from any OpenTelemetry-instrumented library in your application.
  2. StrandAgentsInstrumentor(): The OpenInference instrumentor automatically patches Strand Agents components to create OpenTelemetry spans for their operations, including:
    • Agent initialization
    • Model calls
    • Tool executions
    • Response generation
  3. Optional Decorators: Use @langwatch.trace() to add extra context and metadata to your traces; it is not required for basic functionality.

With this setup, all agent interactions, model calls, and tool executions will be automatically traced and sent to LangWatch, providing comprehensive visibility into your Strand Agents-powered applications.

Notes

  • You do not need to set any OpenTelemetry environment variables or configure exporters manually—langwatch.setup() handles everything.
  • You can combine Strand Agents instrumentation with other instrumentors (e.g., OpenAI, LangChain) by adding them to the instrumentors list.
  • The @langwatch.trace() decorator is optional; the OpenInference instrumentor will capture all Strand Agents activity automatically.
  • For advanced configuration (custom attributes, endpoint, etc.), see the Python integration guide.
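Combining instrumentors might look like the following sketch. It assumes you have installed the corresponding OpenInference package for the second library (for example, openinference-instrumentation-openai for the OpenAI instrumentor shown here).

```python
import langwatch
from openinference.instrumentation.openai import OpenAIInstrumentor
from openinference.instrumentation.strand_agents import StrandAgentsInstrumentor

# A single setup() call can register several instrumentors at once;
# spans from each instrumented library are then exported to LangWatch together.
langwatch.setup(
    instrumentors=[
        StrandAgentsInstrumentor(),
        OpenAIInstrumentor(),
    ]
)
```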

Troubleshooting

  • Make sure your LANGWATCH_API_KEY is set in the environment.
  • If you see no traces in LangWatch, check that the instrumentor is included in langwatch.setup() and that your agent code is being executed.
  • Ensure you have the correct API keys set for your chosen LLM provider.
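The environment-variable checks above can be automated with a small helper. This is a sketch: LANGWATCH_API_KEY is the variable used in this guide, while the provider key names depend on which LLM provider you configured (OPENAI_API_KEY is shown as an example).

```python
import os


def missing_env_vars(required):
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]


# OPENAI_API_KEY is an example; substitute the key(s) your provider expects.
missing = missing_env_vars(["LANGWATCH_API_KEY", "OPENAI_API_KEY"])
if missing:
    print(f"Set these environment variables before running: {missing}")
```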

Interoperability with LangWatch SDK

You can use this integration together with the LangWatch Python SDK to add additional attributes to the trace:
import langwatch
from strand_agents import Agent
from openinference.instrumentation.strand_agents import StrandAgentsInstrumentor

langwatch.setup(
    instrumentors=[StrandAgentsInstrumentor()]
)

@langwatch.trace(name="Custom Strand Agents Application")
def my_custom_strand_agents_app(input_message: str):
    # Your Strand Agents code here
    agent = Agent(
        name="custom_agent",
        model="gpt-4o-mini",
        instruction="Your custom instructions",
    )
    
    # Update the current trace with additional metadata
    current_trace = langwatch.get_current_trace()
    if current_trace:
        current_trace.update(
            metadata={
                "user_id": "user_123",
                "session_id": "session_abc",
                "agent_name": "custom_agent",
                "model": "gpt-4o-mini"
            }
        )
    
    # Run your agent
    response = agent.run(input_message)
    
    return response

This approach combines the automatic tracing provided by the OpenInference instrumentor with the rich metadata and custom attributes available through the LangWatch SDK.