Prerequisites
- Install the LangWatch SDK and Strands Agents.
- Set up your LLM provider: You’ll need to configure your preferred LLM provider (OpenAI, Anthropic, AWS Bedrock, etc.) with the appropriate API keys.
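The install step above can be run with pip (`langwatch` is the LangWatch Python SDK; the `[otel]` extra of `strands-agents` is the one referenced in the Notes section below):

```shell
pip install langwatch 'strands-agents[otel]'
```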
OpenTelemetry Setup Options
LangWatch supports three approaches for instrumenting Strands Agents with OpenTelemetry.

Option 1: LangWatch SDK Only (Recommended)
This is the simplest approach: LangWatch handles all OpenTelemetry setup for you.

Option 2: StrandsTelemetry with Custom Configuration
For more control over the OpenTelemetry configuration, you can use StrandsTelemetry.

Option 3: Skip OpenTelemetry Setup (When Already Configured)
If OpenTelemetry is already configured by another component in your application (such as FastAPI, Django, or another framework), you can skip LangWatch’s OpenTelemetry setup. Use this option when:

- Your backend or infrastructure framework already sets up OpenTelemetry
- You have a custom OpenTelemetry configuration
- Multiple components in your stack configure OpenTelemetry
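The three options above can be sketched as follows. This is a hedged sketch: `StrandsTelemetry` comes from the `strands-agents[otel]` package, and its exporter-setup method name below is an assumption worth checking against the current Strands documentation.

```python
import langwatch

# Option 1 (recommended): the LangWatch SDK configures OpenTelemetry for you,
# reading LANGWATCH_API_KEY from the environment.
langwatch.setup()

# Option 2: drive OpenTelemetry yourself via StrandsTelemetry, e.g.:
#
#   from strands.telemetry import StrandsTelemetry
#   StrandsTelemetry().setup_otlp_exporter()  # method name is an assumption
#
# pointing the OTLP exporter at LangWatch via the standard
# OTEL_EXPORTER_OTLP_* environment variables.

# Option 3: OpenTelemetry is already configured by your framework
# (FastAPI, Django, ...) — skip the setup entirely and let traces flow
# through the existing global tracer provider.
```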
Basic Agent Setup
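A hedged sketch of a basic instrumented agent (the model identifier, attribute keys, and prompt are illustrative; `LiteLLMModel` is the provider-flexible model class described under How it Works below):

```python
import langwatch
from strands import Agent
from strands.models.litellm import LiteLLMModel

# Initialize LangWatch once at startup; this also sets up OpenTelemetry.
langwatch.setup()

# LiteLLMModel routes to any LiteLLM-supported provider (model id is illustrative).
model = LiteLLMModel(model_id="openai/gpt-4o-mini")

# trace_attributes are attached to every trace this agent produces.
agent = Agent(
    model=model,
    trace_attributes={"agent.name": "demo-agent"},
)

# All interactions, model calls, and tool executions are traced automatically.
response = agent("What is OpenTelemetry?")
print(response)
```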
A complete example creates the model, constructs the `Agent`, and invokes it; LangWatch traces every step automatically.

Integration with Web Frameworks
Chainlit Integration
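A hedged sketch of a Chainlit handler wrapping an agent call (the handler follows Chainlit’s standard `@cl.on_message` API; the agent construction is abbreviated — see Basic Agent Setup):

```python
import chainlit as cl
import langwatch
from strands import Agent

langwatch.setup()
agent = Agent()  # model configuration omitted; see Basic Agent Setup

@cl.on_message
@langwatch.trace()
async def on_message(message: cl.Message):
    # Run the agent on the user's message; the call is traced by LangWatch.
    result = agent(message.content)
    await cl.Message(content=str(result)).send()
```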
To integrate Strands Agents with Chainlit while maintaining full observability, decorate your Chainlit message handler with `@langwatch.trace()` and call the agent inside it.

Adding Custom Attributes and Metadata
You can add custom attributes to your traces in several ways:

Agent-Level Attributes
Function-Level Metadata
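At the function level, the `@langwatch.trace()` decorator and `langwatch.get_current_trace().update()` (both described under How it Works below) add context-specific metadata; the metadata values here are illustrative:

```python
import langwatch

@langwatch.trace()
def answer_question(question: str) -> str:
    # Attach context-specific metadata to the currently active trace.
    langwatch.get_current_trace().update(
        metadata={"user_id": "user-42", "labels": ["support"]}  # illustrative
    )
    return f"Answering: {question}"
```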
How it Works
- LangWatch Setup: `langwatch.setup()` initializes the LangWatch SDK and sets up OpenTelemetry tracing.
- Model Configuration: Use `LiteLLMModel` for flexible provider support, or a specific model class like `BedrockModel` for AWS Bedrock.
- Agent Creation: The `Agent` constructor accepts `trace_attributes` for consistent metadata across all traces.
- Automatic Tracing: All agent interactions, model calls, and tool executions are automatically traced and sent to LangWatch.
- Custom Metadata: Use `@langwatch.trace()` decorators and `langwatch.get_current_trace().update()` to add context-specific metadata.
Environment Variables
Set up your environment variables in a `.env` file:
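For example (`LANGWATCH_API_KEY` is the key the Troubleshooting section refers to; the provider key shown is illustrative):

```bash
LANGWATCH_API_KEY=your-langwatch-api-key
# Plus the key for your chosen LLM provider, e.g.:
OPENAI_API_KEY=your-openai-api-key
```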
Notes
- The `strands-agents[otel]` package includes OpenTelemetry support out of the box.
- The `trace_attributes` parameter allows you to add consistent metadata to all traces from a specific agent instance.
- For advanced configuration, see the Python integration guide.
Troubleshooting
- Make sure your `LANGWATCH_API_KEY` is set in the environment.
- If you see no traces in LangWatch, check that telemetry is properly configured and that your agent code is actually being executed.
- If you aren’t using the LangWatch SDK’s automatic OpenTelemetry setup and traces are not showing, double-check the URL and path configured for your OpenTelemetry exporter.
Next Steps
Once you have instrumented your code, you can manage, evaluate, and debug your application:

- View traces in the LangWatch dashboard
- Add evaluation scores to your traces
- Create custom dashboards for monitoring
- Set up alerts for performance issues
- Export data for further analysis