Strands Agents is a framework for building AI agents with a focus on simplicity and performance. For more details on Strands Agents, refer to the official Strands Agents documentation. LangWatch can capture traces generated by Strands Agents by leveraging its built-in OpenTelemetry support. This guide will show you how to set it up.

Prerequisites

  1. Install LangWatch SDK and Strands Agents:
    pip install langwatch strands-agents[otel] strands-agents-tools
    
  2. Set up your LLM provider: You’ll need to configure your preferred LLM provider (OpenAI, Anthropic, AWS Bedrock, etc.) with the appropriate API keys.

OpenTelemetry Setup Options

LangWatch supports three approaches for instrumenting Strands Agents with OpenTelemetry.

Option 1: Automatic Setup (Recommended)

This is the simplest approach, where LangWatch handles all OpenTelemetry setup:
import langwatch

# Initialize LangWatch - API key is set from environment variable automatically
langwatch.setup()
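
Once setup() has run, any Strands Agent you create is traced automatically. As a minimal sketch (assuming credentials for your model provider are configured; see Basic Agent Setup below for explicit model configuration):
import langwatch
from strands import Agent

langwatch.setup()

# With no explicit model, Strands falls back to its default provider;
# pass a configured model explicitly as shown in Basic Agent Setup below.
agent = Agent(name="quickstart-agent", system_prompt="You are a helpful AI assistant.")
agent("Hello!")  # this call is traced and exported to LangWatch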

Option 2: StrandsTelemetry with Custom Configuration

For more control over OpenTelemetry configuration, you can use StrandsTelemetry:
import os
from strands import Agent
from strands.telemetry import StrandsTelemetry
import langwatch

# Configure StrandsTelemetry with LangWatch endpoint
strands_telemetry = StrandsTelemetry()
strands_telemetry.setup_otlp_exporter(
    endpoint=f"{os.environ.get('LANGWATCH_ENDPOINT', 'https://app.langwatch.ai')}/api/otel/v1/traces",
    headers={"Authorization": "Bearer " + os.environ["LANGWATCH_API_KEY"]},
)

# Skip LangWatch OpenTelemetry setup since StrandsTelemetry handles it
langwatch.setup(skip_open_telemetry_setup=True)
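
While debugging, you can also print spans to the console alongside the OTLP exporter (assuming your strands-agents version exposes setup_console_exporter):
# Optional: also emit spans to stdout while debugging
strands_telemetry.setup_console_exporter()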

Option 3: Skip OpenTelemetry Setup (When Already Configured)

If OpenTelemetry is already configured by another component in your application (like FastAPI, Django, or another framework), you can skip LangWatch’s OpenTelemetry setup:
import langwatch

# Skip OpenTelemetry setup since it's already configured elsewhere
langwatch.setup(skip_open_telemetry_setup=True)
This is useful when:
  • Your backend or infrastructure framework already sets up OpenTelemetry
  • You have a custom OpenTelemetry configuration
  • Multiple components in your stack configure OpenTelemetry
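
For example, if your application owns the tracer provider, you can point an OTLP exporter at LangWatch yourself and then opt out of LangWatch's setup. A sketch, assuming the opentelemetry-sdk and opentelemetry-exporter-otlp-proto-http packages are installed:
import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
import langwatch

# Your application (not LangWatch) owns the tracer provider
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint=f"{os.environ.get('LANGWATCH_ENDPOINT', 'https://app.langwatch.ai')}/api/otel/v1/traces",
            headers={"Authorization": "Bearer " + os.environ["LANGWATCH_API_KEY"]},
        )
    )
)
trace.set_tracer_provider(provider)

# LangWatch skips its own OpenTelemetry setup; spans flow through your exporter
langwatch.setup(skip_open_telemetry_setup=True)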

Basic Agent Setup

Here’s a complete example showing how to create and instrument a Strands Agent:
import os

from strands import Agent
from strands.models.litellm import LiteLLMModel
import langwatch

# Initialize LangWatch
langwatch.setup()

class MyAgent:
    def __init__(self):
        # Configure the model using LiteLLM for provider flexibility
        self.model = LiteLLMModel(
            client_args={
                "api_key": os.getenv("OPENAI_API_KEY"),
            },
            model_id="openai/gpt-5-mini",
        )
        
        # Create the agent with tracing attributes
        self.agent = Agent(
            name="my-agent",
            model=self.model,
            system_prompt="You are a helpful AI assistant.",
            tools=[],  # Add your tools here
            trace_attributes={
                "custom.model_id": "openai/gpt-5-mini",
                "custom.example.attribute": "swift",
            },
        )

    def run(self, prompt: str):
        return self.agent(prompt)

# Use the agent
agent = MyAgent()
response = agent.run("Hello, how can you help me?")
print(response)

Integration with Web Frameworks

Chainlit Integration

Here’s how to integrate Strands Agents with Chainlit while maintaining full observability:
import os
from strands import Agent
from strands.models.litellm import LiteLLMModel
import langwatch
import chainlit.config as cl_config
import chainlit as cl
from dotenv import load_dotenv

load_dotenv()

# Chainlit has broken telemetry, so we need to disable it
cl_config.config.project.enable_telemetry = False

# Initialize LangWatch
langwatch.setup()

class KiteAgent:
    def __init__(self):
        self.model = LiteLLMModel(
            client_args={
                "api_key": os.getenv("OPENAI_API_KEY"),
            },
            model_id="openai/gpt-5-mini",
        )
        self.agent = Agent(
            name="kite-agent",
            model=self.model,
            system_prompt="You are a helpful AI assistant.",
            tools=[],
            trace_attributes={
                "custom.model_id": "openai/gpt-5-mini",
                "custom.example.attribute": "swift",
            },
        )

    def run(self, prompt: str):
        return self.agent(prompt)

@cl.on_message
@langwatch.trace()
async def main(message: cl.Message):
    msg = cl.Message(content="")

    # Update the current trace with additional metadata
    langwatch.get_current_trace().update(
        metadata={
            "custom.example.attribute": "swift",
        }
    )

    agent = KiteAgent()
    response = agent.run(message.content)

    await msg.stream_token(str(response))
    await msg.update()

Adding Custom Attributes and Metadata

You can add custom attributes to your traces in several ways:

Agent-Level Attributes

agent = Agent(
    name="my-agent",
    model=model,
    system_prompt="You are a helpful AI assistant.",
    trace_attributes={
        "custom.model_id": "openai/gpt-5-mini",
        "custom.environment": "production",
        "custom.service": "customer-support",
    },
)

Function-Level Metadata

@langwatch.trace(name="Handle Request")
def handle_request(input_message: str):
    langwatch.get_current_trace().update(
        metadata={
            "user_id": "user_001",
            "thread_id": "thead_001"
        }
    )

    return my_custom_strands_agents_app(input_message)

@langwatch.span(name="Handle")
def my_custom_strands_agents_app(input_message: str):
    # Update the current trace with additional metadata (illustrative key)
    current_trace = langwatch.get_current_trace()
    if current_trace:
        current_trace.update(metadata={"step": "agent_invocation"})

    # `agent` is assumed to be a Strands Agent defined elsewhere,
    # for example as in the Basic Agent Setup section above
    response = agent(input_message)
    return response
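
Calling the traced entrypoint then produces a single trace containing the nested "Handle" span:
# Each call produces one LangWatch trace with the "Handle" span inside it
result = handle_request("Hello, what can you do?")
print(result)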

How it Works

  1. LangWatch Setup: langwatch.setup() initializes the LangWatch SDK and sets up OpenTelemetry tracing.
  2. Model Configuration: Use LiteLLMModel for flexible provider support, or specific model classes like BedrockModel for AWS Bedrock (see the sketch after this list).
  3. Agent Creation: The Agent constructor accepts trace_attributes for consistent metadata across all traces.
  4. Automatic Tracing: All agent interactions, model calls, and tool executions are automatically traced and sent to LangWatch.
  5. Custom Metadata: Use @langwatch.trace() decorators and langwatch.get_current_trace().update() to add context-specific metadata.
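
As an illustration of point 2, a Bedrock-backed agent might look like this (a sketch; the model_id and region_name values are placeholders, not recommendations):
from strands import Agent
from strands.models import BedrockModel

# model_id and region_name are placeholders; substitute your own values
model = BedrockModel(
    model_id="anthropic.claude-3-5-sonnet-20241022-v2:0",
    region_name="us-west-2",
)
agent = Agent(name="bedrock-agent", model=model, system_prompt="You are a helpful AI assistant.")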

Environment Variables

Set up your environment variables in a .env file:
LANGWATCH_API_KEY=your-langwatch-api-key
OPENAI_API_KEY=your-openai-api-key
LANGWATCH_ENDPOINT=https://app.langwatch.ai  # Optional, defaults to this value
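
If you keep these in a .env file, load it before calling langwatch.setup(), as the Chainlit example above does with python-dotenv:
from dotenv import load_dotenv
import langwatch

load_dotenv()      # loads LANGWATCH_API_KEY, OPENAI_API_KEY, LANGWATCH_ENDPOINT from .env
langwatch.setup()  # reads LANGWATCH_API_KEY (and LANGWATCH_ENDPOINT, if set) from the environment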

Notes

  • The strands-agents[otel] package includes OpenTelemetry support out of the box.
  • The trace_attributes parameter allows you to add consistent metadata to all traces from a specific agent instance.
  • For advanced configuration, see the Python integration guide.

Troubleshooting

  • Make sure your LANGWATCH_API_KEY is set in the environment.
  • If you see no traces in LangWatch, check that the telemetry is properly configured and that your agent code is being executed.
  • If you aren’t using the LangWatch SDK’s automatic OpenTelemetry setup and traces are not showing, double-check the URL and path given to your OTLP exporter (the traces endpoint is /api/otel/v1/traces).

Next Steps

Once you have instrumented your code, you can manage, evaluate and debug your application:
  • View traces in the LangWatch dashboard
  • Add evaluation scores to your traces
  • Create custom dashboards for monitoring
  • Set up alerts for performance issues
  • Export data for further analysis

Learn More

For more detailed information, refer to the official Strands Agents documentation and the LangWatch Python integration guide.