LangWatch helps you understand every user interaction (Thread), each individual AI task (Trace), and all the underlying steps (Span) involved. We’ve made getting started super smooth. Let’s get cracking.
1. Create your LangWatch account

First step: if you haven’t already, grab your LangWatch account. Head over to langwatch.ai, sign up, and get your API key ready.
2. Sign in with LangWatch

Open a terminal and run the following command to sign in with LangWatch:
npx langwatch login
This will add the LANGWATCH_API_KEY to your local .env file.
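If your project doesn’t load .env files automatically, a tiny loader makes the key visible to os.getenv later on. Here’s a minimal sketch assuming python-dotenv (not something the login command installs for you):
from dotenv import load_dotenv  # assumes: pip install python-dotenv
import os

load_dotenv()  # reads LANGWATCH_API_KEY from your local .env
assert os.getenv("LANGWATCH_API_KEY"), "LANGWATCH_API_KEY not found in the environment"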
3. Let LangWatch MCP do the rest for you (Optional)

Install the LangWatch MCP Server and ask your coding assistant (Cursor, Claude Code, Codex, etc.) to instrument your codebase with LangWatch, or keep following the steps below to instrument it manually.

Add the LangWatch MCP to your editor:
{
  "mcpServers": {
    "langwatch": {
      "command": "npx",
      "args": ["-y", "@langwatch/mcp-server"]
    }
  }
}
Then ask your coding assistant to instrument your codebase with LangWatch:
"Instrument my codebase with LangWatch"
4. Install the LangWatch SDK

We have official SDKs for Python and Node.js ready to go. If you’re using another language, our OpenTelemetry Integration Guide provides the details you need.
pip install langwatch
# or
uv add langwatch
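Using Node.js instead? The SDK is published on npm; assuming the package name langwatch, installation looks like this:
npm install langwatch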
5. Add LangWatch to your project

Time to connect LangWatch: initialize the SDK in your project and register an instrumentor for your LLM provider. Here’s how to set it up:
import os

import langwatch
from langwatch.instrumentors import OpenAIInstrumentor

langwatch.setup(
    api_key=os.getenv("LANGWATCH_API_KEY"),  # Your LangWatch API key, added to .env by `npx langwatch login`
    instrumentors=[OpenAIInstrumentor()],  # Add the instrumentor for your LLM provider
)
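Once setup() has run, calls made through the instrumented client are captured automatically. As a rough sketch, assuming you’re using the official OpenAI Python SDK with an OPENAI_API_KEY in your environment:
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Captured as a trace by the OpenAIInstrumentor registered above
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model, swap in whichever you use
    messages=[{"role": "user", "content": "Say hello from LangWatch!"}],
)
print(response.choices[0].message.content)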
6. Start observing!

You’re all set! Jump into your LangWatch dashboard to see your data flowing in. You’ll find Traces (individual AI tasks) and their detailed Spans (the steps within), all organized into Threads (complete user sessions). Start exploring and use User IDs or custom Labels to dive deeper!
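To get Threads and user-level filtering, attach identifiers to your traces from code. The sketch below is assumption-heavy: it presumes the Python SDK exposes a @langwatch.trace() decorator and metadata fields named user_id, thread_id, and labels, so double-check the SDK reference for the exact names:
import langwatch

@langwatch.trace()  # assumed decorator: wraps this function in a trace
def answer(question: str, user_id: str, thread_id: str) -> str:
    # Group this trace into a Thread and tie it to a user (field names assumed)
    langwatch.get_current_trace().update(
        metadata={"user_id": user_id, "thread_id": thread_id, "labels": ["quickstart"]}
    )
    return "..."  # your LLM call goes here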