Semantic Kernel is a lightweight SDK for building AI agents that combine LLMs with external data sources and APIs. For more details, refer to the official Semantic Kernel documentation. LangWatch can capture traces generated by Semantic Kernel using OpenInference’s OpenAI instrumentation, and this guide shows you how to set it up.

Prerequisites

  1. Install LangWatch SDK:
    pip install langwatch
    
  2. Install Semantic Kernel and OpenInference instrumentor:
    pip install semantic-kernel openinference-instrumentation-openai
    
  3. Set up your API keys: Configure your OpenAI API key, plus the LangWatch API key used later in this guide, in your environment. For example:
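    export OPENAI_API_KEY="your-openai-api-key"
    export LANGWATCH_API_KEY="your-langwatch-api-key"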

Instrumentation with OpenInference

LangWatch supports observability for Semantic Kernel via the OpenInference OpenAI instrumentor, which captures the OpenAI calls your Semantic Kernel code makes and sends them to LangWatch as traces.

Basic Setup (Automatic Tracing)

Here’s the simplest way to instrument your application:
import langwatch
import asyncio
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from openinference.instrumentation.openai import OpenAIInstrumentor
import os

# Initialize LangWatch with the OpenAI instrumentor
langwatch.setup(
    instrumentors=[OpenAIInstrumentor()]
)

# Set your OpenAI API key (replace the placeholder, or set it in your shell instead)
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"

# Create a kernel
kernel = sk.Kernel()

# Add OpenAI chat completion service
kernel.add_service(
    OpenAIChatCompletion(
        service_id="chat-gpt",
        ai_model_id="gpt-4o-mini",
        api_key=os.environ["OPENAI_API_KEY"]
    )
)

# Create a chat function from a prompt template.
# Register it once at startup rather than on every call.
prompt = """You are a helpful assistant.
User: {{$input}}
Assistant: Let me help you with that."""

kernel.add_function(
    plugin_name="chat_plugin",
    function_name="chat",
    description="A helpful chat function",
    prompt=prompt,
)

# Use the kernel as usual—traces will be sent to LangWatch automatically
async def run_semantic_kernel_example(user_input: str):
    # Invoke the registered function; the underlying OpenAI call is traced
    result = await kernel.invoke(
        function_name="chat",
        plugin_name="chat_plugin",
        input=user_input,
    )
    return result

# Example usage
async def main():
    user_query = "What's the weather like in New York?"
    response = await run_semantic_kernel_example(user_query)
    print(f"Response: {response}")

if __name__ == "__main__":
    asyncio.run(main())
That’s it! Every OpenAI call Semantic Kernel makes will now be traced and sent to your LangWatch dashboard automatically.

Optional: Using Decorators for Additional Context

If you want to add additional context or metadata to your traces, you can optionally use the @langwatch.trace() decorator:
import langwatch
import asyncio
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from openinference.instrumentation.openai import OpenAIInstrumentor
import os

langwatch.setup(
    instrumentors=[OpenAIInstrumentor()]
)

kernel = sk.Kernel()
kernel.add_service(
    OpenAIChatCompletion(
        service_id="chat-gpt",
        ai_model_id="gpt-4o-mini",
        api_key=os.environ["OPENAI_API_KEY"]
    )
)

# Register the chat function once at startup
prompt = """You are a helpful assistant.
User: {{$input}}
Assistant: Let me help you with that."""

kernel.add_function(
    plugin_name="chat_plugin",
    function_name="chat",
    description="A helpful chat function",
    prompt=prompt,
)

@langwatch.trace(name="Semantic Kernel Chat Function")
async def chat_with_context(user_input: str):
    # Update the current trace with additional metadata
    current_trace = langwatch.get_current_trace()
    if current_trace:
        current_trace.update(
            metadata={
                "kernel_function": "chat",
                "model": "gpt-4o-mini",
                "input_length": len(user_input)
            }
        )

    # Invoke the registered function; the underlying OpenAI call is traced
    result = await kernel.invoke(
        function_name="chat",
        plugin_name="chat_plugin",
        input=user_input
    )
    return result

How it Works

  1. langwatch.setup(): Initializes the LangWatch SDK, which includes setting up an OpenTelemetry trace exporter. This exporter is ready to receive spans from any OpenTelemetry-instrumented library in your application.
  2. OpenAIInstrumentor(): The OpenInference instrumentor automatically patches the OpenAI client to create OpenTelemetry spans for its operations, including:
    • Chat completions (sync, async, and streaming)
    • Embeddings
    • Errors raised by the client
  3. Semantic Kernel Integration: Because Semantic Kernel’s OpenAI connector uses the OpenAI client under the hood, the instrumentor captures the model calls made by your kernel functions as spans.
  4. Optional Decorators: You can optionally use @langwatch.trace() to add additional context and metadata to your traces, but it’s not required for basic functionality.
With this setup, the OpenAI calls made by your Semantic Kernel function invocations will be traced and sent to LangWatch, providing visibility into your Semantic Kernel-powered applications.

Notes

  • You do not need to set any OpenTelemetry environment variables or configure exporters manually—langwatch.setup() handles everything.
  • You can combine Semantic Kernel instrumentation with other instrumentors (e.g., LangChain, DSPy) by adding them to the instrumentors list (see the sketch after this list).
  • The @langwatch.trace() decorator is optional: the OpenInference instrumentor captures the underlying OpenAI calls on its own.
  • For advanced configuration (custom attributes, endpoint, etc.), see the Python integration guide.
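As a sketch of the multi-instrumentor note above, the following registers the OpenAI instrumentor alongside OpenInference's LangChain instrumentor (this assumes the openinference-instrumentation-langchain package is installed):
import langwatch
from openinference.instrumentation.openai import OpenAIInstrumentor
from openinference.instrumentation.langchain import LangChainInstrumentor

langwatch.setup(
    instrumentors=[
        OpenAIInstrumentor(),     # covers Semantic Kernel's OpenAI calls
        LangChainInstrumentor(),  # covers LangChain, if you also use it
    ]
)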

Troubleshooting

  • Make sure your LANGWATCH_API_KEY is set in the environment (a sketch for failing fast on a missing key follows this list).
  • If you see no traces in LangWatch, check that the instrumentor is included in langwatch.setup() and that your Semantic Kernel code is being executed.
  • Ensure you have the correct OpenAI API key set.
  • Verify that your Semantic Kernel functions are properly defined and invoked.
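If credentials are the issue, one option is to pass the key to langwatch.setup() explicitly so a missing key fails fast. This is a minimal sketch; setup() also reads LANGWATCH_API_KEY from the environment on its own:
import os
import langwatch
from openinference.instrumentation.openai import OpenAIInstrumentor

langwatch.setup(
    api_key=os.environ["LANGWATCH_API_KEY"],  # raises KeyError if the key is unset
    instrumentors=[OpenAIInstrumentor()],
)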

Interoperability with LangWatch SDK

You can use this integration together with the LangWatch Python SDK to add additional attributes to the trace:
import langwatch
import asyncio
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from openinference.instrumentation.openai import OpenAIInstrumentor
import os

langwatch.setup(
    instrumentors=[OpenAIInstrumentor()]
)

kernel = sk.Kernel()
kernel.add_service(
    OpenAIChatCompletion(
        service_id="chat-gpt",
        ai_model_id="gpt-4o-mini",
        api_key=os.environ["OPENAI_API_KEY"]
    )
)

# Register the chat function once at startup
prompt = """You are a helpful assistant.
User: {{$input}}
Assistant: Let me help you with that."""

kernel.add_function(
    plugin_name="chat_plugin",
    function_name="chat",
    description="A helpful chat function",
    prompt=prompt,
)

@langwatch.trace(name="Semantic Kernel Pipeline")
async def run_kernel_pipeline(user_input: str):
    # Update the current trace with additional metadata
    current_trace = langwatch.get_current_trace()
    if current_trace:
        current_trace.update(
            metadata={
                "pipeline_type": "semantic_kernel",
                "model": "gpt-4o-mini",
                "input_length": len(user_input)
            }
        )

    # Invoke the registered function; the underlying OpenAI call is traced
    result = await kernel.invoke(
        function_name="chat",
        plugin_name="chat_plugin",
        input=user_input
    )
    return result
This approach lets you combine automatic tracing of Semantic Kernel’s model calls with the rich metadata and custom attributes provided by the LangWatch SDK.
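If you also want dedicated child spans for steps that don't call the model, the LangWatch SDK's @langwatch.span() decorator can wrap helper functions. A minimal sketch (the preprocess helper is hypothetical, and the imports from the example above are assumed):
@langwatch.span(name="Prompt preprocessing")
def preprocess(user_input: str) -> str:
    # Runs as its own span nested under the "Semantic Kernel Pipeline" trace
    return user_input.strip()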