The LangWatch MCP Server gives your AI coding assistant (Cursor, Claude Code, Codex, etc.) full access to all LangWatch documentation and features via the Model Context Protocol. With it, your assistant can:
  • Automatically instrument your code with LangWatch tracing for any framework (OpenAI, Agno, Mastra, DSPy, and more)
  • Create and manage prompts using LangWatch’s prompt management system
  • Set up evaluations to test and monitor your LLM outputs
  • Debug production issues by retrieving and analyzing traces from your dashboard
  • Add labels, metadata, and custom tracking following LangWatch best practices
Instead of manually reading docs and writing boilerplate code, just ask your AI assistant to instrument your codebase with LangWatch, and it will do it for you.

Setup

1. Get your LangWatch API key

Get your API key from the LangWatch dashboard.
2. Configure MCP in Cursor

  1. Open Cursor Settings
  2. Navigate to the MCP section in the sidebar
  3. Add the LangWatch MCP server:
{
  "mcpServers": {
    "langwatch": {
      "command": "npx",
      "args": ["-y", "@langwatch/mcp-server", "--apiKey=sk-lw-..."]
    }
  }
}
You can also set LANGWATCH_API_KEY as an environment variable instead of passing --apiKey in the command.
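For example, a configuration using the environment variable instead of the flag might look like this (the env field is part of standard MCP server configuration):

{
  "mcpServers": {
    "langwatch": {
      "command": "npx",
      "args": ["-y", "@langwatch/mcp-server"],
      "env": {
        "LANGWATCH_API_KEY": "sk-lw-..."
      }
    }
  }
}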
3. Start using it

Open Cursor Chat (Cmd/Ctrl + I) and ask your AI assistant to help with LangWatch tasks.

Usage Examples

Instrument Your Code with LangWatch

Simply ask your AI assistant to add LangWatch tracking to your existing code:
"Please instrument my code with LangWatch"
The AI assistant will:
  1. Fetch the relevant LangWatch documentation for your framework
  2. Add the necessary imports and setup code
  3. Wrap your functions with @langwatch.trace() decorators
  4. Configure automatic tracking for your LLM calls
  5. Add labels and metadata following best practices
Example transformation:
Before:
from openai import OpenAI

client = OpenAI()

def chat(message: str):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": message}]
    )
    return response.choices[0].message.content
After (automatically added by the AI assistant):
from openai import OpenAI
import langwatch

client = OpenAI()
# Initialize LangWatch (reads LANGWATCH_API_KEY from the environment)
langwatch.setup()

@langwatch.trace()  # creates a trace for each call to chat()
def chat(message: str):
    # Automatically capture every OpenAI call made with this client
    langwatch.get_current_trace().autotrack_openai_calls(client)
    # Attach labels so traces can be filtered in the dashboard
    langwatch.get_current_trace().update(
        metadata={"labels": ["document_parsing"]}
    )

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": message}]
    )
    return response.choices[0].message.content

Create Prompts with Prompt Management

Ask your AI assistant to set up prompt management:
Cursor Chat
"Create a prompt for my agents to parse PDFs using the prompts CLI"
The AI assistant will guide you through creating, versioning, and using prompts with LangWatch's Prompts CLI.
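For context, fetching a managed prompt from code might look roughly like the sketch below; the prompt handle "pdf-parser" is hypothetical, and langwatch.prompts.get / prompt.compile are assumptions to verify against the Prompt Management docs for your SDK version:

# Rough sketch: using a prompt managed in LangWatch at runtime.
# "pdf-parser" is a hypothetical prompt handle; prompts.get / compile
# are assumed APIs -- check the Prompt Management docs for your version.
import langwatch

langwatch.setup()

# Fetch the latest version of the prompt created via the Prompts CLI
prompt = langwatch.prompts.get("pdf-parser")

# Fill in template variables to get ready-to-send messages
messages = prompt.compile(document_text="...")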

Debug Production Issues

When you encounter an issue in production, ask your AI to investigate:
Cursor Chat
"Check the latest traces for errors and help me debug the issue"
The AI assistant will:
  1. Retrieve recent traces from your LangWatch dashboard
  2. Analyze the spans and identify problematic steps
  3. Suggest fixes based on the trace data
  4. Update your code with the fixes

Set Up Evaluations

Ask your AI assistant to add evaluations to your LLM outputs:
Cursor Chat
"Create a notebook to evaluate the faithfulness of my RAG pipeline using LangWatch's Evaluating via Code guide"
The AI assistant will create a notebook with the code needed to score faithfulness, following the Evaluating via Code guide.
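As a rough illustration of what such a notebook might contain (modeled on the Evaluating via Code guide; langwatch.evaluation.init, evaluation.loop, evaluation.run, and the "ragas/faithfulness" evaluator slug are assumptions to verify against the guide):

# Sketch of a faithfulness evaluation loop -- the evaluation API calls and
# evaluator slug below are assumptions; verify against Evaluating via Code.
import langwatch

langwatch.setup()
evaluation = langwatch.evaluation.init("rag-faithfulness")

examples = [
    {"question": "...", "contexts": ["..."], "answer": "..."},
]

for index, row in evaluation.loop(enumerate(examples)):
    # Score how faithful each answer is to its retrieved contexts
    evaluation.run(
        "ragas/faithfulness",
        index=index,
        data={
            "input": row["question"],
            "contexts": row["contexts"],
            "output": row["answer"],
        },
    )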

Advanced: Self-Building AI Agents

The LangWatch MCP server can also help AI agents instrument themselves while they are being built, enabling self-improving systems that track and debug their own behavior.

MCP Tools Reference

The MCP server provides the following tools that your AI assistant can use:

fetch_langwatch_docs

Fetches LangWatch documentation pages to understand how to implement features. Parameters:
  • url (optional): The full URL of a specific doc page. If not provided, fetches the docs index.

get_latest_traces

Retrieves the latest LLM traces from your LangWatch dashboard. Parameters:
  • pageOffset (optional): Page offset for pagination
  • daysBackToSearch (optional): Number of days back to search. Defaults to 1.

get_trace_by_id

Retrieves a specific trace by its ID for detailed debugging. Parameters:
  • id: The trace ID to retrieve

list_traces_by_user_id

Lists traces filtered by user ID. Parameters:
  • userId: The user ID to filter by
  • pageSize (optional): Number of traces per page
  • pageOffset (optional): Page offset for pagination
  • daysBackToSearch (optional): Number of days back to search

list_traces_by_thread_id

Lists traces filtered by thread/session ID. Parameters:
  • threadId: The thread/session ID to filter by
  • pageSize (optional): Number of traces per page
  • pageOffset (optional): Page offset for pagination
  • daysBackToSearch (optional): Number of days back to search
Your AI assistant will automatically choose the right tools based on your request. You don’t need to call these tools manually.
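Under the hood, the assistant invokes these tools over MCP's JSON-RPC protocol. For illustration, a tools/call request for get_latest_traces would look roughly like this (the parameter value is an example):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_latest_traces",
    "arguments": {
      "daysBackToSearch": 7
    }
  }
}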