- Set up agent testing with Scenario to test agent behavior through user simulations and edge cases
- Automatically instrument your code with LangWatch tracing for any framework (OpenAI, Agno, Mastra, DSPy, and more)
- Create and manage prompts using LangWatch’s prompt management system
- Set up evaluations to test and monitor your LLM outputs
- Add labels, metadata, and custom tracking following LangWatch best practices
Setup
1. Configure your MCP
The steps below are for Cursor; the process is similar in Claude Code and other editors.
- Open Cursor Settings
- Navigate to the Tools and MCP section in the sidebar
- Add the LangWatch MCP server:
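For example, in Cursor's mcp.json the server entry can look like the sketch below. The command and package name are assumptions here (a hypothetical @langwatch/mcp-server package run via npx); copy the exact entry from the LangWatch docs.

```json
{
  "mcpServers": {
    "langwatch": {
      "command": "npx",
      "args": ["-y", "@langwatch/mcp-server"]
    }
  }
}
```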
2. Start using it
Open your AI assistant chat (e.g., Cmd/Ctrl + I in Cursor, or Cmd/Ctrl + Shift + P > “Claude Code: Open Chat” in Claude Code) and ask it to help with LangWatch tasks.

Usage Examples
Write Agent Tests with Scenario
Simply ask your AI assistant to write scenario tests for your agents (see the sketch after this list). It will:
- Fetch the Scenario documentation and best practices
- Create test files with proper imports and setup
- Write scenario scripts that simulate user interactions
- Add verification logic to check agent behavior
- Include judge criteria to evaluate conversation quality
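For reference, here is a minimal sketch of such a test, assuming Scenario's Python API (scenario.run, scenario.AgentAdapter, scenario.UserSimulatorAgent, scenario.JudgeAgent) and a hypothetical my_agent function under test:

```python
import pytest
import scenario

# Model used by the simulated user and the judge
scenario.configure(default_model="openai/gpt-4.1-mini")


class MyAgentAdapter(scenario.AgentAdapter):
    """Adapts the agent under test to Scenario's interface."""

    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        # my_agent is a hypothetical function wrapping your real agent
        return my_agent(input.last_new_user_message_str())


@pytest.mark.asyncio  # requires pytest-asyncio
async def test_vegetarian_recipe_agent():
    result = await scenario.run(
        name="vegetarian recipe",
        description="A hungry user asks for a quick vegetarian dinner recipe.",
        agents=[
            MyAgentAdapter(),
            scenario.UserSimulatorAgent(),  # plays the user's side of the chat
            scenario.JudgeAgent(
                criteria=["The suggested recipe must be vegetarian"],
            ),
        ],
    )
    assert result.success
```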
Instrument Your Code with LangWatch
Simply ask your AI assistant to add LangWatch tracking to your existing code (see the sketch after this list). It will:
- Fetch the relevant LangWatch documentation for your framework
- Add the necessary imports and setup code
- Wrap your functions with @langwatch.trace() decorators
- Configure automatic tracking for your LLM calls
- Add labels and metadata following best practices
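For an OpenAI-based app, the instrumented code looks roughly like the sketch below (langwatch.setup() and autotrack_openai_calls() follow the LangWatch Python SDK; the function and model choice are illustrative):

```python
import langwatch
from openai import OpenAI

langwatch.setup()  # reads LANGWATCH_API_KEY from the environment
client = OpenAI()


@langwatch.trace()  # each call to this function becomes a trace in LangWatch
def answer_question(question: str) -> str:
    # Auto-capture every OpenAI call made within the current trace
    langwatch.get_current_trace().autotrack_openai_calls(client)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```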
Create Prompts with Prompt Management
Ask your AI assistant to set up prompt management; a sketch of what the resulting code can look like is below.
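A minimal sketch, assuming the SDK exposes a prompts.get API whose result can be compiled with template variables (the prompt handle, the variable, and the compile shape are all assumptions; verify against the prompt management docs):

```python
import langwatch

langwatch.setup()

# Fetch a managed prompt by its handle (assumed API shape, hypothetical handle)
prompt = langwatch.prompts.get("customer-support-bot")

# Fill in template variables to get ready-to-send messages (assumed API shape)
compiled = prompt.compile(customer_name="Ada")
print(compiled.messages)
```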
Set Up Evaluations
Ask your AI assistant to set up evaluation code for your LLM outputs (see the sketch after this list). It will:
- Fetch the relevant LangWatch evaluation documentation
- Create evaluation notebooks or scripts with proper setup
- Add evaluation metrics and criteria for your use case
- Include code to run evaluations following Evaluating via Code
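A minimal sketch of the Evaluating via Code pattern, assuming a langwatch.evaluation.init / loop / log API (the dataset and the my_agent function are hypothetical):

```python
import langwatch

langwatch.setup()

# A tiny in-memory dataset; in practice this would come from LangWatch or a file
dataset = [
    {"question": "Is the moon a planet?", "expected": "no"},
    {"question": "Is Mars a planet?", "expected": "yes"},
]

evaluation = langwatch.evaluation.init("my-agent-eval")  # assumed API shape

for index, row in evaluation.loop(enumerate(dataset)):
    answer = my_agent(row["question"])  # hypothetical agent under test
    evaluation.log(
        "exact_match",
        index=index,
        passed=answer.strip().lower() == row["expected"],
    )
```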
Advanced: Self-Building AI Agents
The LangWatch MCP can also help AI agents instrument themselves while they are being built, enabling self-improving systems that track and debug their own behavior.

MCP Tools Reference
The MCP server provides the following tools that your AI assistant can use:

fetch_langwatch_docs
Fetches LangWatch documentation pages to understand how to implement features.
Parameters:
- url (optional): The full URL of a specific doc page. If not provided, fetches the docs index.
fetch_scenario_docs
Fetches Scenario documentation pages to understand how to write agent tests.
Parameters:
- url (optional): The full URL of a specific doc page. If not provided, fetches the docs index.
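Under the hood these are standard MCP tool calls; a fetch_langwatch_docs invocation looks roughly like the following JSON-RPC request (the URL is a hypothetical example):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "fetch_langwatch_docs",
    "arguments": { "url": "https://docs.langwatch.ai/some-doc-page" }
  }
}
```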
Your AI assistant will automatically choose the right tools based on your request. You don’t need to call these tools manually.