PromptFlow is a development tool designed to streamline the entire development cycle of AI applications, from ideation and prototyping through testing and evaluation to production deployment and monitoring. For more details, refer to the official PromptFlow documentation. LangWatch can capture traces generated by PromptFlow by leveraging its built-in OpenTelemetry support. This guide shows you how to set it up.

Prerequisites

  1. Install LangWatch SDK:
    pip install langwatch
    
  2. Install PromptFlow and OpenInference instrumentor:
    pip install promptflow openinference-instrumentation-promptflow
    
  3. Set up your LLM provider: You’ll need to configure your preferred LLM provider (OpenAI, Anthropic, etc.) with the appropriate API keys; one way to do this for PromptFlow is sketched below.
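
For example, one way to configure OpenAI for PromptFlow is to register a connection programmatically. This is a minimal sketch assuming the promptflow SDK’s connection entities; the name open_ai_connection is a placeholder and must match whatever connection name your flow definition references:
from promptflow import PFClient
from promptflow.entities import OpenAIConnection

pf = PFClient()

# Register (or update) an OpenAI connection that flow nodes can reference by name
connection = OpenAIConnection(
    name="open_ai_connection",  # placeholder: must match the name used in your flow
    api_key="your-openai-api-key",
)
pf.connections.create_or_update(connection)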

Instrumentation with OpenInference

LangWatch supports seamless observability for PromptFlow using the OpenInference PromptFlow instrumentor. This approach automatically captures traces from your PromptFlow flows and sends them to LangWatch.

Basic Setup (Automatic Tracing)

Here’s the simplest way to instrument your application:
import langwatch
from promptflow import PFClient
from openinference.instrumentation.promptflow import PromptFlowInstrumentor
import os

# Initialize LangWatch with the PromptFlow instrumentor
langwatch.setup(
    instrumentors=[PromptFlowInstrumentor()]
)

# Set up environment variables
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"

# Initialize PromptFlow client
pf = PFClient()

# Use PromptFlow as usual—traces will be sent to LangWatch automatically
def run_promptflow_flow(flow_path: str, inputs: dict):
    # Run the flow once with the given inputs.
    # Note: pf.run() executes batch runs against a data file;
    # pf.test() is the call that accepts an inputs dict directly.
    result = pf.test(
        flow=flow_path,
        inputs=inputs
    )
    return result

# Example usage
if __name__ == "__main__":
    # Example flow path and inputs
    flow_path = "./my_flow"
    inputs = {
        "question": "What is the capital of France?",
        "context": "Geography information"
    }
    
    result = run_promptflow_flow(flow_path, inputs)
    print(f"Flow result: {result}")
That’s it! All PromptFlow operations will now be traced and sent to your LangWatch dashboard automatically.
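
For reference, the flow_path in the example points at a standard PromptFlow flow directory. A typical DAG flow looks roughly like this (file names other than flow.dag.yaml are illustrative):
my_flow/
├── flow.dag.yaml     # flow definition: inputs, outputs, and the node DAG
├── chat.jinja2       # prompt template referenced by an LLM node
└── requirements.txt  # optional extra Python dependencies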

Optional: Using Decorators for Additional Context

If you want to attach additional context or metadata to your traces, you can use the @langwatch.trace() decorator:
import langwatch
from promptflow import PFClient
from openinference.instrumentation.promptflow import PromptFlowInstrumentor
import os

langwatch.setup(
    instrumentors=[PromptFlowInstrumentor()]
)

# ... client setup code ...

@langwatch.trace(name="PromptFlow Flow Execution")
def run_promptflow_flow(flow_path: str, inputs: dict):
    # Update the current trace with additional metadata
    current_trace = langwatch.get_current_trace()
    if current_trace:
        current_trace.update(
            metadata={
                "user_id": "user_123",
                "session_id": "session_abc",
                "flow_path": flow_path,
                "input_count": len(inputs)
            }
        )
    
    # Run the flow once with the given inputs (pf.test, not pf.run, accepts an inputs dict)
    result = pf.test(
        flow=flow_path,
        inputs=inputs
    )
    return result

How it Works

  1. langwatch.setup(): Initializes the LangWatch SDK, which includes setting up an OpenTelemetry trace exporter. This exporter is ready to receive spans from any OpenTelemetry-instrumented library in your application.
  2. PromptFlowInstrumentor(): The OpenInference instrumentor automatically patches PromptFlow components to create OpenTelemetry spans for their operations, including:
    • Flow execution
    • Node execution
    • LLM calls
    • Tool executions
    • Data processing
    • Input/output handling
  3. Optional Decorators: You can optionally use @langwatch.trace() to add additional context and metadata to your traces, but it’s not required for basic functionality.
With this setup, all flow executions, node operations, model calls, and data processing will be automatically traced and sent to LangWatch, providing comprehensive visibility into your PromptFlow-powered applications.
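
Because everything runs through the same OpenTelemetry pipeline, you can also nest your own spans alongside the automatic PromptFlow ones. A minimal sketch, assuming the LangWatch SDK’s @langwatch.span() decorator; preprocess_inputs is a hypothetical helper:
import langwatch

@langwatch.span(name="Preprocess Inputs")
def preprocess_inputs(inputs: dict) -> dict:
    # Appears as a child span of the current trace when called
    # inside a traced or instrumented code path
    return {key: str(value).strip() for key, value in inputs.items()}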

Environment Variables

Make sure to set the following environment variables:
# For OpenAI
export OPENAI_API_KEY=your-openai-api-key

# For Anthropic
export ANTHROPIC_API_KEY=your-anthropic-api-key

# LangWatch API key
export LANGWATCH_API_KEY=your-langwatch-api-key
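
Alternatively, if you prefer not to rely on environment variables, the LangWatch API key can be passed directly to langwatch.setup():
import langwatch
from openinference.instrumentation.promptflow import PromptFlowInstrumentor

langwatch.setup(
    api_key="your-langwatch-api-key",  # instead of reading LANGWATCH_API_KEY
    instrumentors=[PromptFlowInstrumentor()],
)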

Supported Models

PromptFlow supports various LLM providers including:
  • OpenAI (GPT-4, GPT-3.5-turbo, etc.)
  • Anthropic (Claude models)
  • Local models (via Ollama, etc.)
  • Other providers supported by PromptFlow
All model interactions and flow executions will be automatically traced and captured by LangWatch.

Notes

  • You do not need to set any OpenTelemetry environment variables or configure exporters manually—langwatch.setup() handles everything.
  • You can combine PromptFlow instrumentation with other instrumentors (e.g., OpenAI, LangChain) by adding them to the instrumentors list; a sketch follows this list.
  • The @langwatch.trace() decorator is optional; the OpenInference instrumentor captures all PromptFlow activity automatically.
  • For advanced configuration (custom attributes, endpoint, etc.), see the Python integration guide.
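
For example, to trace both PromptFlow and direct OpenAI SDK calls in the same application (this assumes the openinference-instrumentation-openai package, which is a separate install):
import langwatch
from openinference.instrumentation.promptflow import PromptFlowInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

langwatch.setup(
    instrumentors=[
        PromptFlowInstrumentor(),  # flow- and node-level spans
        OpenAIInstrumentor(),      # spans for direct OpenAI SDK calls
    ]
)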

Troubleshooting

  • Make sure your LANGWATCH_API_KEY is set in the environment.
  • If you see no traces in LangWatch, check that the instrumentor is included in langwatch.setup() and that your PromptFlow code is actually being executed; a quick connectivity check is sketched after this list.
  • Ensure you have the correct API keys set for your chosen LLM provider.
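
If traces still don’t show up, a quick way to isolate the problem is to send a trivial trace first; if this one appears in the dashboard, the issue is on the PromptFlow side rather than with the LangWatch connection:
import langwatch

langwatch.setup()  # reads LANGWATCH_API_KEY from the environment

@langwatch.trace(name="Connectivity Check")
def ping() -> str:
    return "ok"

ping()  # a "Connectivity Check" trace should appear in the LangWatch dashboard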

Interoperability with LangWatch SDK

You can use this integration together with the LangWatch Python SDK to add additional attributes to the trace:
import langwatch
from promptflow import PFClient
from openinference.instrumentation.promptflow import PromptFlowInstrumentor

langwatch.setup(
    instrumentors=[PromptFlowInstrumentor()]
)

@langwatch.trace(name="Custom PromptFlow Application")
def my_custom_promptflow_app(flow_path: str, inputs: dict):
    # Your PromptFlow code here
    pf = PFClient()
    
    # Update the current trace with additional metadata
    current_trace = langwatch.get_current_trace()
    if current_trace:
        current_trace.update(
            metadata={
                "user_id": "user_123",
                "session_id": "session_abc",
                "flow_path": flow_path,
                "input_count": len(inputs)
            }
        )
    
    # Run your flow once with the given inputs (pf.test accepts an inputs dict;
    # pf.run is for batch runs against a data file)
    result = pf.test(
        flow=flow_path,
        inputs=inputs
    )
    
    return result
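
For example, calling the wrapper like any other function:
result = my_custom_promptflow_app(
    flow_path="./my_flow",
    inputs={"question": "What is the capital of France?"},
)
print(result)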
This approach combines the automatic tracing provided by the OpenInference instrumentor with the rich metadata and custom attributes available through the LangWatch SDK.