AutoGen is a framework for building multi-agent systems with conversational AI. For more details on AutoGen, refer to the official AutoGen documentation. LangWatch can capture traces generated by AutoGen by leveraging OpenTelemetry instrumentation via the OpenInference instrumentor. This guide shows you how to set it up.

Prerequisites

  1. Install LangWatch SDK:
    pip install langwatch
    
  2. Install AutoGen and OpenInference instrumentor:
    pip install pyautogen openinference-instrumentation-autogen
    
  3. Set up your LLM provider: You’ll need to configure your preferred LLM provider (OpenAI, Anthropic, etc.) with the appropriate API keys.
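
For example, a minimal sketch of setting both keys in code (the values are placeholders; exporting them in your shell or CI environment is preferable to hardcoding them):
import os

os.environ["LANGWATCH_API_KEY"] = "your-langwatch-api-key"  # read by langwatch.setup()
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"        # read by your LLM provider config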

Instrumentation with OpenInference

LangWatch supports seamless observability for AutoGen using the OpenInference AutoGen instrumentor. This approach automatically captures traces from your AutoGen agents and sends them to LangWatch.

Basic Setup (Automatic Tracing)

Here’s the simplest way to instrument your application:
import langwatch
import autogen
from openinference.instrumentation.autogen import AutoGenInstrumentor
import os

# Initialize LangWatch with the AutoGen instrumentor
langwatch.setup(
    instrumentors=[AutoGenInstrumentor()]
)

# Set your OpenAI API key (placeholder value; prefer exporting it in your environment)
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"

# Configure your agents
config_list = [
    {
        "model": "gpt-4o-mini",
        "api_key": os.environ["OPENAI_API_KEY"],
    }
]

# Create your agents
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
    system_message="You are a helpful AI assistant."
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "workspace"},
    llm_config={"config_list": config_list},
)

# Use the agents as usual; traces will be sent to LangWatch automatically
def run_agent_conversation(user_message: str):
    user_proxy.initiate_chat(
        assistant,
        message=user_message
    )
    return "Conversation completed"

# Example usage
if __name__ == "__main__":
    user_prompt = "Write a Python function to calculate fibonacci numbers"
    result = run_agent_conversation(user_prompt)
    print(f"Result: {result}")
That’s it! All AutoGen agent interactions will now be traced and sent to your LangWatch dashboard automatically.

Optional: Using Decorators for Additional Context

If you want to add additional context or metadata to your traces, you can optionally use the @langwatch.trace() decorator:
import langwatch
import autogen
from openinference.instrumentation.autogen import AutoGenInstrumentor
import os

langwatch.setup(
    instrumentors=[AutoGenInstrumentor()]
)

# ... agent setup code ...

@langwatch.trace(name="AutoGen Multi-Agent Conversation")
def run_agent_conversation(user_message: str):
    # Update the current trace with additional metadata
    current_trace = langwatch.get_current_trace()
    if current_trace:
        current_trace.update(
            metadata={
                "user_id": "user_123",
                "session_id": "session_abc",
                "agent_count": 2,
                "model": "gpt-4o-mini"
            }
        )
    
    user_proxy.initiate_chat(
        assistant,
        message=user_message
    )
    return "Conversation completed"

How it Works

  1. langwatch.setup(): Initializes the LangWatch SDK, which includes setting up an OpenTelemetry trace exporter. This exporter is ready to receive spans from any OpenTelemetry-instrumented library in your application.
  2. AutoGenInstrumentor(): The OpenInference instrumentor automatically patches AutoGen components to create OpenTelemetry spans for their operations, including:
    • Agent initialization
    • Multi-agent conversations
    • LLM calls
    • Tool executions
    • Code execution
    • Message passing between agents
  3. Optional Decorators: You can optionally use @langwatch.trace() to add additional context and metadata to your traces, but it’s not required for basic functionality (see the sketch below).
With this setup, all agent interactions, conversations, model calls, and tool executions will be automatically traced and sent to LangWatch, providing comprehensive visibility into your AutoGen-powered applications.
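
To illustrate point 3, here’s a minimal sketch that nests a custom step inside the trace using the SDK’s @langwatch.span() decorator (assuming your SDK version provides it; the post_process helper is purely illustrative):
import langwatch

# ... agent setup as in the examples above ...

@langwatch.span(name="post-process reply")
def post_process(reply: str) -> str:
    # Work done here is recorded as a child span of the active trace
    return reply.strip()

@langwatch.trace(name="AutoGen Multi-Agent Conversation")
def run_agent_conversation(user_message: str):
    user_proxy.initiate_chat(assistant, message=user_message)
    return post_process("Conversation completed")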

Notes

  • You do not need to set any OpenTelemetry environment variables or configure exporters manually; langwatch.setup() handles everything.
  • You can combine AutoGen instrumentation with other instrumentors (e.g., OpenAI, LangChain) by adding them to the instrumentors list; see the example after this list.
  • The @langwatch.trace() decorator is optional; the OpenInference instrumentor will capture all AutoGen activity automatically.
  • For advanced configuration (custom attributes, endpoint, etc.), see the Python integration guide.
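
For example, a minimal sketch combining the AutoGen and OpenAI instrumentors (this assumes openinference-instrumentation-openai is also installed):
import langwatch
from openinference.instrumentation.autogen import AutoGenInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

# Both instrumentors feed their spans through the same LangWatch exporter
langwatch.setup(
    instrumentors=[AutoGenInstrumentor(), OpenAIInstrumentor()]
)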

Troubleshooting

  • Make sure your LANGWATCH_API_KEY is set in the environment; a sketch for passing the key explicitly follows this list.
  • If you see no traces in LangWatch, check that the instrumentor is included in langwatch.setup() and that your agent code is being executed.
  • Ensure you have the correct API keys set for your chosen LLM provider.
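
If the key isn’t being picked up from the environment, here is a sketch of passing it to setup() explicitly (this assumes your SDK version accepts an api_key argument):
import os
import langwatch
from openinference.instrumentation.autogen import AutoGenInstrumentor

langwatch.setup(
    api_key=os.environ["LANGWATCH_API_KEY"],  # assumed parameter; otherwise export the variable before startup
    instrumentors=[AutoGenInstrumentor()],
)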

Interoperability with LangWatch SDK

You can use this integration together with the LangWatch Python SDK to add additional attributes to the trace:
import langwatch
import autogen
from openinference.instrumentation.autogen import AutoGenInstrumentor
import os

langwatch.setup(
    instrumentors=[AutoGenInstrumentor()]
)

@langwatch.trace(name="Custom AutoGen Application")
def my_custom_autogen_app(input_message: str):
    # Your AutoGen code here
    config_list = [
        {
            "model": "gpt-4o-mini",
            "api_key": os.environ["OPENAI_API_KEY"],
        }
    ]
    
    assistant = autogen.AssistantAgent(
        name="assistant",
        llm_config={"config_list": config_list},
        system_message="You are a helpful AI assistant."
    )
    
    user_proxy = autogen.UserProxyAgent(
        name="user_proxy",
        human_input_mode="NEVER",
        max_consecutive_auto_reply=10,
        llm_config={"config_list": config_list},
    )
    
    # Update the current trace with additional metadata
    current_trace = langwatch.get_current_trace()
    if current_trace:
        current_trace.update(
            metadata={
                "user_id": "user_123",
                "session_id": "session_abc",
                "agent_count": 2,
                "model": "gpt-4o-mini"
            }
        )
    
    # Run your agents
    user_proxy.initiate_chat(
        assistant,
        message=input_message
    )
    
    return "Conversation completed"
This approach allows you to combine the automatic tracing provided by the OpenInference instrumentor with the rich metadata and custom attributes available through the LangWatch SDK.