Strands Agents is a framework for building AI agents with a focus on simplicity and performance; for details, see the official Strands Agents documentation. LangWatch can capture traces generated by Strands Agents through Strands' built-in OpenTelemetry support. This guide shows you how to set that up.

Prerequisites

  1. Install LangWatch SDK and Strands Agents:
    pip install langwatch "strands-agents[otel]" strands-agents-tools
    
  2. Set up your LLM provider: You’ll need to configure your preferred LLM provider (OpenAI, Anthropic, AWS Bedrock, etc.) with the appropriate API keys.
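As a sketch, you can export the required environment variables in your shell before running the application. The values below are placeholders, and the provider key name depends on which provider you choose (OpenAI is shown here as an example):

```shell
# Placeholder values — replace with your real keys
export LANGWATCH_API_KEY="your-langwatch-api-key"
export OPENAI_API_KEY="your-openai-api-key"
```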

Instrumentation with OpenTelemetry

LangWatch supports seamless observability for Strands Agents using the built-in OpenTelemetry integration. This approach automatically captures traces from your Strands Agents and sends them to LangWatch.

Basic Setup (Automatic Tracing)

Here’s the simplest way to instrument your application:
import langwatch
import os
from strands import Agent
from strands.telemetry import StrandsTelemetry
from strands.models.bedrock import BedrockModel

# Set up environment variables
os.environ["LANGWATCH_API_KEY"] = "your-langwatch-api-key"


# Initialize LangWatch
langwatch.setup()

# Configure the telemetry with LangWatch endpoint
strands_telemetry = StrandsTelemetry().setup_otlp_exporter(
    endpoint="https://api.langwatch.ai/api/otel/v1/traces",
    headers={"Authorization": f"Bearer {os.environ['LANGWATCH_API_KEY']}"}
)

# Configure the Bedrock model
model = BedrockModel(
    model_id="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
)

# Define the system prompt for the agent
system_prompt = """You are "Hotel Concierge", a luxury hotel concierge helping guests with their requests. 
You can assist with room service orders, spa appointments, restaurant reservations, transportation arrangements, 
tour bookings, and general hotel amenities. You reply always politely and mention your name in the reply (Hotel Concierge). 
NEVER skip your name in the start of a new conversation."""

# Create your agent with tracing attributes
agent = Agent(
    model=model,
    system_prompt=system_prompt,
    trace_attributes={
        "agent_name": "hotel_concierge",
        "service": "hotel_concierge",
    },
)

# Use the agent as usual—traces will be sent to LangWatch automatically
def run_agent_interaction(user_message: str):
    response = agent(user_message)
    return response

# Example usage
if __name__ == "__main__":
    user_prompt = "Hi, I need help booking a spa appointment and arranging transportation to the airport tomorrow."
    response = run_agent_interaction(user_prompt)
    print(f"User: {user_prompt}")
    print(f"Agent: {response}")
That’s it! All Strands Agents activity will now be traced and sent to your LangWatch dashboard automatically.

Optional: Using Decorators for Additional Context

If you want to add additional context or metadata to your traces, you can optionally use the @langwatch.trace() decorator:
import langwatch
import os
from strands import Agent
from strands.telemetry import StrandsTelemetry
from strands.models.bedrock import BedrockModel

# ... setup code ...

@langwatch.trace(name="Strands Agents Run")
def run_agent_interaction(user_message: str):
    # Update the current trace with additional metadata
    current_trace = langwatch.get_current_trace()
    if current_trace:
        current_trace.update(
            metadata={
                "user_id": "guest_456",
                "session_id": "hotel_session_789",
                "agent_name": "hotel_concierge",
                "model": "claude-3-7-sonnet"
            },
        )
    
    response = agent(user_message)
    return response

How it Works

  1. langwatch.setup(): Initializes the LangWatch SDK, which includes setting up an OpenTelemetry trace exporter.
  2. StrandsTelemetry().setup_otlp_exporter(): Configures the Strands Agents telemetry to send traces to LangWatch via OpenTelemetry protocol.
  3. Trace Attributes: The trace_attributes parameter in the Agent constructor allows you to add metadata that will be included with every trace, such as session IDs, user IDs, and custom tags.
  4. Optional Decorators: You can optionally use @langwatch.trace() to add additional context and metadata to your traces, but it’s not required for basic functionality.
With this setup, all agent interactions, model calls, and tool executions will be automatically traced and sent to LangWatch, providing comprehensive visibility into your Strands Agents-powered applications.
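Conceptually, agent-level `trace_attributes` act as defaults that per-call metadata can extend or override. The helper below is an illustrative sketch of that merge semantics, not LangWatch's actual implementation:

```python
def merged_trace_attributes(agent_attrs: dict, call_metadata: dict) -> dict:
    """Illustrative only: combine agent-level defaults with per-call
    metadata, letting per-call values win on key collisions."""
    return {**agent_attrs, **call_metadata}
```

For example, an agent created with `{"agent_name": "hotel_concierge"}` plus call metadata `{"user_id": "guest_456"}` yields a trace carrying both keys.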

Notes

  • The strands-agents[otel] package includes OpenTelemetry support out of the box.
  • You can use different model providers (Bedrock, OpenAI, Anthropic, etc.) by importing the appropriate model class.
  • The trace_attributes parameter allows you to add consistent metadata to all traces from a specific agent instance.
  • For advanced configuration (custom attributes, endpoint, etc.), see the Python integration guide.

Troubleshooting

  • Make sure your LANGWATCH_API_KEY is set in the environment.
  • If you see no traces in LangWatch, check that the telemetry is properly configured and that your agent code is being executed.
  • Ensure you have the correct API keys set for your chosen LLM provider.
  • For AWS Bedrock, make sure your AWS credentials are properly configured.
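A quick way to rule out missing credentials is a small sanity check before starting the agent. This helper is a sketch; any variable names beyond LANGWATCH_API_KEY depend on your chosen provider:

```python
import os

def missing_env_vars(required: list[str]) -> list[str]:
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

# Example: check LangWatch plus hypothetical provider credentials
# missing_env_vars(["LANGWATCH_API_KEY", "AWS_ACCESS_KEY_ID"])
```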

Interoperability with LangWatch SDK

You can use this integration together with the LangWatch Python SDK to add additional attributes to the trace:
import langwatch
import os
from strands import Agent
from strands.telemetry import StrandsTelemetry
from strands.models.bedrock import BedrockModel

# ... setup code ...

@langwatch.trace(name="Custom Strands Agents Application")
def my_custom_strands_agents_app(input_message: str):
    # Your Strands Agents code here
    model = BedrockModel(model_id="us.anthropic.claude-3-7-sonnet-20250219-v1:0")
    
    agent = Agent(
        model=model,
        system_prompt="You are a luxury hotel concierge. Help guests with their requests professionally.",
        trace_attributes={
            "agent_name": "hotel_concierge",
            "model": "claude-3-7-sonnet",
            "service": "hotel_concierge",
            "environment": "production",
        },
    )
    
    # Update the current trace with additional metadata
    current_trace = langwatch.get_current_trace()
    if current_trace:
        current_trace.update(
            metadata={
                "user_id": "guest_456",
                "session_id": "hotel_session_789",
                "agent_name": "hotel_concierge",
                "model": "claude-3-7-sonnet"
            }
        )
    
    # Run your agent
    response = agent(input_message)
    
    return response
This approach allows you to combine the automatic tracing capabilities of Strands Agents with the rich metadata and custom attributes provided by LangWatch.

Next Steps

Once you have instrumented your code, you can manage, evaluate and debug your application:
  • View traces in the LangWatch dashboard
  • Add evaluation scores to your traces
  • Create custom dashboards for monitoring
  • Set up alerts for performance issues
  • Export data for further analysis

Learn More

For more detailed information, refer to the official Strands Agents documentation and the LangWatch Python integration guide.