This guide will walk you through the basic setup required to run your first simulation and see the results in LangWatch.

For more in-depth information and advanced use cases, please refer to the official scenario library documentation.

1. Installation

First, install the scenario library in your project:

uv add langwatch-scenario

2. Configure Environment Variables

We recommend creating a .env file in the root of your project to manage your environment variables.

.env
LANGWATCH_API_KEY="your-api-key"
LANGWATCH_ENDPOINT="https://app.langwatch.ai"

You can find your LANGWATCH_API_KEY in your LangWatch project settings.
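The library reads these variables from the environment at runtime, so if you keep them in a .env file you will need a loader such as python-dotenv, or your shell, to export them. As a minimal sketch (the helper name `load_langwatch_config` is illustrative, not part of the library), you can read and validate them with the standard library:

```python
import os

def load_langwatch_config() -> dict:
    """Read LangWatch settings from the environment.

    Illustrative helper, not part of the scenario library:
    falls back to the public endpoint when LANGWATCH_ENDPOINT is unset.
    """
    api_key = os.environ.get("LANGWATCH_API_KEY")
    if not api_key:
        raise RuntimeError("LANGWATCH_API_KEY is not set")
    return {
        "api_key": api_key,
        "endpoint": os.environ.get("LANGWATCH_ENDPOINT", "https://app.langwatch.ai"),
    }
```

Failing fast on a missing API key like this makes misconfiguration obvious before any simulation runs.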

3. Create a Basic Scenario

Here’s how to create and run a simple scenario to test an agent.

First, you need to create an agent adapter that implements your agent logic. For detailed information about agent integration patterns, see the agent integration guide.

import pytest
import scenario
import litellm

# Configure the default model for simulations
scenario.configure(default_model="openai/gpt-4.1-mini")

@pytest.mark.agent_test
@pytest.mark.asyncio
async def test_vegetarian_recipe_agent():
    # 1. Create your agent adapter
    class RecipeAgent(scenario.AgentAdapter):
        async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
            return vegetarian_recipe_agent(input.messages)

    # 2. Run the scenario
    result = await scenario.run(
        name="dinner recipe request",
        description="""
            It's Saturday evening; the user is very hungry and tired,
            but has no money to order out, so they are looking for a recipe.
        """,
        agents=[
            RecipeAgent(),
            scenario.UserSimulatorAgent(),
            scenario.JudgeAgent(criteria=[
                "Agent should not ask more than two follow-up questions",
                "Agent should generate a recipe",
                "Recipe should include a list of ingredients",
                "Recipe should include step-by-step cooking instructions",
                "Recipe should be vegetarian and not include any sort of meat",
            ])
        ],
    )

    # 3. Assert the result
    assert result.success

# Example agent implementation using litellm
@scenario.cache()
def vegetarian_recipe_agent(messages) -> scenario.AgentReturnTypes:
    response = litellm.completion(
        model="openai/gpt-4.1-mini",
        messages=[
            {
                "role": "system",
                "content": """
                    You are a vegetarian recipe agent.
                    Given the user request, ask AT MOST ONE follow-up question,
                    then provide a complete recipe. Keep your responses concise and focused.
                """,
            },
            *messages,
        ],
    )
    return response.choices[0].message

Once you run this test (for example, with pytest), a new scenario run will appear in the Simulations section of your LangWatch project.

4. Grouping Your Sets and Batches

While optional, we strongly recommend setting stable identifiers for your scenarios, sets, and batches for better organization and tracking in LangWatch.

  • id: A unique and stable identifier for your scenario. If not provided, it’s often generated from the name, which can be brittle if you rename the test.
  • set_id: Groups related scenarios into a test suite. This corresponds to the “Simulation Set” in the UI.
  • batch_id: Groups all scenarios that were run together in a single execution (e.g., a single CI job). You can use a CI environment variable such as GITHUB_RUN_ID for this.
For example:

import os

result = await scenario.run(
    id="vegetarian-recipe-scenario",
    name="dinner recipe request",
    description="Test that the agent can provide vegetarian recipes.",
    set_id="recipe-test-suite",
    batch_id=os.environ.get("GITHUB_RUN_ID", "local-run"),
    agents=[
        RecipeAgent(),
        scenario.UserSimulatorAgent(),
        scenario.JudgeAgent(criteria=[
            "Agent should generate a recipe",
            "Recipe should be vegetarian",
        ])
    ]
)
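To see why relying on an auto-generated id is brittle, here is a hypothetical sketch of name-based id derivation (the library may derive ids differently; `slugify` is an assumption for illustration). Renaming a scenario changes the derived id, which splits its run history in the UI:

```python
import re

def slugify(name: str) -> str:
    # Hypothetical name-to-id derivation: lowercase the name and
    # collapse runs of non-alphanumeric characters into single hyphens.
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

# Cosmetic renames keep the same id, but a real rename changes it:
# slugify("dinner recipe request")   -> "dinner-recipe-request"
# slugify("Dinner recipe request!")  -> "dinner-recipe-request"  (same)
# slugify("weekend dinner request")  -> "weekend-dinner-request" (history lost)
```

Setting an explicit, stable id as in the example above avoids this entirely.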