Protip: want to get started even faster? Copy our llms.txt and ask an AI to do this integration for you.
Prerequisites
- Obtain your LANGWATCH_API_KEY from the LangWatch dashboard.
Installation
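A minimal install sketch, assuming you use npm and that the SDK is published as the langwatch package:

```bash
npm install langwatch
```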
Configuration
Ensure LANGWATCH_API_KEY is set, either as:
- An environment variable
- A client parameter
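For the environment variable approach, a minimal .env entry (the value shown is a placeholder for your own key):

```bash
LANGWATCH_API_KEY=your-api-key
```

Alternatively, the key can be passed as a client parameter. A sketch, assuming the SDK exposes a LangWatch client whose constructor accepts an apiKey option:

```typescript
import { LangWatch } from "langwatch";

// Hypothetical explicit configuration; the apiKey option is an
// assumption -- by default the SDK reads LANGWATCH_API_KEY from
// the environment instead.
const langwatch = new LangWatch({
  apiKey: process.env.LANGWATCH_API_KEY,
});
```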
Basic Concepts
- Each message that triggers your LLM pipeline is captured as a whole in a Trace.
- A Trace contains multiple Spans, which are the steps inside your pipeline.
- Traces can be grouped together on the LangWatch dashboard by setting the same thread_id in their metadata, making the individual messages part of a conversation.
- It is also recommended to provide the user_id metadata to track user analytics (see the sketch after this list).
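A sketch of passing both fields through the Vercel AI SDK's telemetry metadata. The assumption here is that LangWatch picks up thread_id and user_id keys from this metadata object; the model, prompt, and identifier values are placeholders:

```typescript
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const { text } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "Hello!",
  experimental_telemetry: {
    isEnabled: true,
    metadata: {
      thread_id: "conversation-123", // groups messages into one conversation
      user_id: "user-456",           // enables per-user analytics
    },
  },
});
```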
Installation
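A sketch of the packages this integration typically needs, assuming npm; the exact set is an assumption and depends on your model provider (here @ai-sdk/openai) and on whether you use @vercel/otel for the Next.js instrumentation shown below:

```bash
npm install ai @ai-sdk/openai langwatch @vercel/otel
```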
Usage
The LangWatch API key is configured by default via the LANGWATCH_API_KEY environment variable. Traces are captured whenever experimental_telemetry.isEnabled is set to true on your AI SDK calls. For Next.js applications, configure OpenTelemetry in your instrumentation.ts file using LangWatchExporter.
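A minimal sketch of enabling telemetry on an AI SDK call; the model choice and prompt are placeholders, and the provider import assumes the @ai-sdk/openai package:

```typescript
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

// Setting experimental_telemetry.isEnabled to true is what makes this
// call emit OpenTelemetry spans that LangWatch can collect.
const { text } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "What is LangWatch?",
  experimental_telemetry: { isEnabled: true },
});

console.log(text);
```

And a sketch of a Next.js instrumentation.ts, assuming LangWatchExporter is exported from the langwatch package and that @vercel/otel's registerOTel handles the OpenTelemetry setup:

```typescript
// instrumentation.ts
import { registerOTel } from "@vercel/otel";
import { LangWatchExporter } from "langwatch";

export function register() {
  registerOTel({
    serviceName: "my-next-app", // hypothetical service name
    // Sends the AI SDK's OpenTelemetry traces to LangWatch; the
    // exporter is assumed to read LANGWATCH_API_KEY from the environment.
    traceExporter: new LangWatchExporter(),
  });
}
```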
Related
- Capturing RAG - Learn how to capture RAG data from retrievers and tools
- Capturing Metadata and Attributes - Add custom metadata and attributes to your traces and spans
- Capturing Evaluations & Guardrails - Log evaluations and implement guardrails in your Vercel AI SDK applications