AWS Bedrock, accessed via the boto3 library, gives your application access to powerful foundation models. With the OpenInference Bedrock instrumentor, you can automatically capture OpenTelemetry traces for your Bedrock API calls. LangWatch, an OpenTelemetry-compatible observability platform, can seamlessly ingest these traces, providing insight into your LLM interactions.

This guide explains how to configure your Python application to send Bedrock traces to LangWatch.

Prerequisites

  1. Install LangWatch SDK:

    pip install langwatch
    
  2. Install Bedrock Instrumentation and Dependencies: You’ll need boto3 to interact with AWS Bedrock, and the OpenInference instrumentation library for Bedrock.

    pip install boto3 openinference-instrumentation-bedrock
    

    Note: openinference-instrumentation-bedrock will install necessary OpenTelemetry packages. Ensure your boto3 and botocore versions are compatible with the Bedrock features you intend to use (e.g., botocore >= 1.34.116 for the converse API).
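If you are unsure whether your environment meets that requirement, a quick runtime check can confirm it before you rely on the converse API. This is a minimal sketch using only the standard library; the `supports_converse` helper name is ours, not part of any SDK:

```python
from importlib.metadata import version, PackageNotFoundError

def supports_converse(min_version: str = "1.34.116") -> bool:
    """Return True if the installed botocore is new enough for the converse API."""
    try:
        installed = version("botocore")
    except PackageNotFoundError:
        return False  # botocore is not installed at all

    def numeric_parts(v: str) -> list:
        # Compare only the numeric dotted components (major.minor.patch)
        return [int(p) for p in v.split(".")[:3]]

    return numeric_parts(installed) >= numeric_parts(min_version)
```

Call `supports_converse()` at startup and fall back to `invoke_model` (or fail fast with a clear message) if it returns False.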

Instrumenting AWS Bedrock with LangWatch

The integration involves initializing LangWatch to set up the OpenTelemetry environment and then applying the Bedrock instrumentor.

Steps:

  1. Initialize LangWatch: Call langwatch.setup() at the beginning of your application. This configures the global OpenTelemetry SDK to export traces to LangWatch.
  2. Instrument Bedrock: Import BedrockInstrumentor and call its instrument() method. This will patch boto3 to automatically create spans for Bedrock client calls.
import langwatch
import boto3
import json
import os
import asyncio

# 1. Initialize LangWatch for OpenTelemetry trace export
langwatch.setup()

# 2. Instrument Boto3 for Bedrock
from openinference.instrumentation.bedrock import BedrockInstrumentor
BedrockInstrumentor().instrument()

# Global Bedrock client (initialize after instrumentation)
bedrock_runtime = None
try:
    aws_session = boto3.session.Session(
        region_name=os.environ.get("AWS_REGION_NAME") # Ensure region is set
    )
    bedrock_runtime = aws_session.client("bedrock-runtime")
except Exception as e:
    print(f"Error creating Bedrock client: {e}. Ensure AWS credentials and region are configured.")

@langwatch.span(name="Bedrock - Invoke Claude")
async def invoke_claude(prompt_text: str):
    if not bedrock_runtime:
        print("Bedrock client not initialized. Skipping API call.")
        return None

    current_span = langwatch.get_current_span()
    current_span.update(model_id="anthropic.claude-v2", action="invoke_model")

    try:
        body = json.dumps({
            "prompt": f"\n\nHuman: {prompt_text}\n\nAssistant:",
            "max_tokens_to_sample": 200
        })
        response = bedrock_runtime.invoke_model(modelId="anthropic.claude-v2", body=body)
        response_body = json.loads(response.get("body").read())
        completion = response_body.get("completion")
        current_span.update(outputs={"completion_preview": completion[:50] + "..." if completion else "N/A"})
        return completion
    except Exception as e:
        print(f"Error invoking model: {e}")
        if current_span:
            current_span.record_exception(e)
            current_span.set_status("error", str(e))
        raise

@langwatch.trace(name="Bedrock - Example Usage")
async def main():
    try:
        prompt = "Explain the concept of OpenTelemetry in one sentence."
        print(f"Invoking model with prompt: '{prompt}'")
        response = await invoke_claude(prompt)
        if response:
            print(f"Response from Claude: {response}")
    except Exception as e:
        print(f"An error occurred in main: {e}")

if __name__ == "__main__":
    asyncio.run(main())

Key points for this approach:

  • langwatch.setup(): Initializes the global OpenTelemetry environment configured for LangWatch. This must be called before any instrumented code is run.
  • BedrockInstrumentor().instrument(): This call patches the boto3 library. Any subsequent Bedrock calls made using a boto3.client("bedrock-runtime") will automatically generate OpenTelemetry spans.
  • @langwatch.trace(): Creates a parent trace in LangWatch. The automated Bedrock spans generated by OpenInference will be nested under this parent trace if the Bedrock calls are made within the decorated function. This provides a clear hierarchy for your operations.
  • API Versions: The example above uses the invoke_model API. The newer converse API requires botocore version 1.34.116 or newer.
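The converse API takes a structured message list rather than a raw prompt string. Below is a hedged sketch of the request shape; the model ID is illustrative, the `build_converse_request` helper is ours, and the commented-out call assumes AWS credentials and model access are configured:

```python
def build_converse_request(prompt_text: str,
                           model_id: str = "anthropic.claude-3-haiku-20240307-v1:0",
                           max_tokens: int = 200) -> dict:
    """Build the keyword arguments for bedrock_runtime.converse()."""
    return {
        "modelId": model_id,
        "messages": [
            # Each message has a role and a list of content blocks
            {"role": "user", "content": [{"text": prompt_text}]}
        ],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

# Example call (requires AWS credentials, region, and model access):
# import os, boto3
# client = boto3.client("bedrock-runtime", region_name=os.environ.get("AWS_REGION_NAME"))
# response = client.converse(**build_converse_request("Say hello in one word."))
# print(response["output"]["message"]["content"][0]["text"])
```

Because the instrumentor patches boto3 itself, converse calls made this way are traced the same as invoke_model calls, and nest under any active @langwatch.trace().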

By following these steps, your application’s interactions with AWS Bedrock will be traced, and the data will be sent to LangWatch for monitoring and analysis. This allows you to observe latencies, errors, and other metadata associated with your foundation model calls. For details on the specific attributes captured by the OpenInference Bedrock instrumentor, refer to the OpenInference Semantic Conventions and OpenInference’s own documentation.

Remember to replace placeholder values for AWS credentials and adapt the model IDs and prompts to your specific use case.
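A small startup check can catch missing configuration early. This sketch assumes LangWatch reads its API key from LANGWATCH_API_KEY (the SDK's default) and reuses the AWS_REGION_NAME variable from the example above; adjust the names if your setup differs:

```python
import os

def missing_env_vars(required=("LANGWATCH_API_KEY", "AWS_REGION_NAME"), env=None):
    """Return the names of required environment variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in required if not env.get(name)]

# Fail fast before calling langwatch.setup() or creating the Bedrock client:
# missing = missing_env_vars()
# if missing:
#     raise RuntimeError(f"Missing configuration: {', '.join(missing)}")
```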