LangWatch TypeScript SDK
The LangWatch library is the easiest way to integrate your TypeScript application with LangWatch. Messages are synced in the background, so it doesn’t intercept or block your LLM calls.
Protip: want to get started even faster? Copy our llms.txt and ask an AI to do this integration for you.

Prerequisites

  • A LangWatch API key (you can get one from your LangWatch dashboard).
  • Node.js and a package manager such as npm.

Installation

npm install langwatch

Configuration

Ensure LANGWATCH_API_KEY is set:
.env
LANGWATCH_API_KEY='your_api_key_here'
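
Next.js loads .env automatically; on plain Node.js you can load it with a package like dotenv. A small sketch that also fails fast if the key is missing (using dotenv here is an assumption, not a LangWatch requirement):
import 'dotenv/config'; // loads .env on plain Node.js; Next.js does this automatically

if (!process.env.LANGWATCH_API_KEY) {
  throw new Error('LANGWATCH_API_KEY is not set, traces will not be exported');
}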

Basic Concepts

  • Each message triggering your LLM pipeline as a whole is captured with a Trace.
  • A Trace contains multiple Spans, which are the steps inside your pipeline.
    • A span can be an LLM call, a database query for a RAG retrieval, or a simple function transformation.
    • Different types of Spans capture different parameters.
    • Spans can be nested to capture the pipeline structure.
  • Traces can be grouped together on the LangWatch dashboard by setting the same thread_id in their metadata, making the individual messages part of a single conversation.
    • It is also recommended to provide the user_id metadata to track user analytics; see the sketch after this list.
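
For example, when creating a trace manually with the SDK client (outside the Vercel AI SDK integration below), the thread and user identifiers go into the trace metadata. A minimal sketch, assuming the SDK's manual client API (new LangWatch() and getTrace()) and camelCase metadata keys; check the SDK reference for the exact names in your version:
import { LangWatch } from 'langwatch';

const langwatch = new LangWatch(); // reads LANGWATCH_API_KEY from the environment

// Traces that share the same threadId are grouped into one conversation on the
// dashboard; userId powers the per-user analytics.
const trace = langwatch.getTrace({
  metadata: { threadId: 'mythread-123', userId: 'myuser-123' },
});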

Integration

The Vercel AI SDK supports tracing via the Next.js OpenTelemetry integration. By using the LangWatchExporter, you can automatically send those traces to LangWatch. First, install the necessary dependencies:
npm install @vercel/otel langwatch @opentelemetry/api-logs @opentelemetry/instrumentation @opentelemetry/sdk-logs
Then, set up OpenTelemetry for your application. If you are using the AI SDK with Next.js, follow the steps below; a Node.js variant is sketched afterwards.
You need to enable the instrumentationHook in your next.config.js file if you haven’t already:
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    instrumentationHook: true,
  },
};

module.exports = nextConfig;
Next, create a file named instrumentation.ts (or .js) in the root directory of your project (or inside the src folder, if you use one), with LangWatchExporter as the traceExporter:
import { registerOTel } from '@vercel/otel';
import { LangWatchExporter } from 'langwatch';

export function register() {
  registerOTel({
    serviceName: 'next-app',
    traceExporter: new LangWatchExporter({
      apiKey: process.env.LANGWATCH_API_KEY
    }),
  })
}
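(Read more about Next.js OpenTelemetry configuration in the official Next.js guide.)
If you are running the AI SDK on plain Node.js instead of Next.js, register the same exporter through the OpenTelemetry NodeSDK. A minimal sketch, assuming @opentelemetry/sdk-node is also installed:
import { NodeSDK } from '@opentelemetry/sdk-node';
import { LangWatchExporter } from 'langwatch';

const sdk = new NodeSDK({
  traceExporter: new LangWatchExporter({
    apiKey: process.env.LANGWATCH_API_KEY
  }),
});

// Start the SDK before making any AI SDK calls so their spans are exported to LangWatch
sdk.start();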
Finally, enable experimental_telemetry tracking on the AI SDK calls you want to trace:
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const result = await generateText({
  model: openai('gpt-5'),
  prompt: 'Explain why a chicken would make a terrible astronaut, be creative and humorous about it.',
  experimental_telemetry: {
    isEnabled: true,
    // optional metadata
    metadata: {
      "langwatch.user.id": "myuser-123",
      "langwatch.thread.id": "mythread-123",
    },
  },
});
That’s it! Your messages will now be visible on LangWatch.

Example Project

You can find a full example project with a more complex pipeline using the Vercel AI SDK and the LangWatch integration on our GitHub. For production Vercel AI SDK applications, combine the automatic tracing above with manual instrumentation and OpenTelemetry semantic conventions, so your spans carry consistent attributes for observability and better analytics.
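
For instance, since the LangWatchExporter plugs into the standard OpenTelemetry pipeline, spans you create yourself with @opentelemetry/api land in the same traces. A rough sketch; the span name, attribute keys, and the retrieveDocs helper are illustrative, not part of the LangWatch API:
import { trace } from '@opentelemetry/api';

const tracer = trace.getTracer('next-app');

// Hypothetical retrieval helper (replace with your own RAG lookup)
declare function retrieveDocs(query: string): Promise<string[]>;

// Wrap a pipeline step in its own span so it shows up nested inside the trace
export async function retrieveWithSpan(query: string) {
  return tracer.startActiveSpan('rag.retrieve', async (span) => {
    try {
      span.setAttribute('rag.query', query);
      const docs = await retrieveDocs(query);
      span.setAttribute('rag.documents.count', docs.length);
      return docs;
    } finally {
      span.end();
    }
  });
}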