Get started with LangWatch TypeScript SDK in under 5 minutes. This guide will walk you through setting up observability for your LLM applications, from basic tracing to advanced features.
Pro tip: want to get started even faster? Copy our llms.txt and ask an AI to do this integration for you.

Prerequisites

Before you start, make sure you have:
  • Node.js 18+ installed
  • A LangWatch account (sign up at app.langwatch.ai)
  • Your LangWatch API key from the dashboard
  • An OpenAI API key (for the LLM example)

Quick Start (5 minutes)

Step 1: Install Dependencies

npm install langwatch @opentelemetry/sdk-node @opentelemetry/context-async-hooks
npm install @ai-sdk/openai ai
The @ai-sdk/openai and ai packages are only required for the example in this guide; skip the second command if you only need the LangWatch SDK.

Step 2: Set Up API Keys

  1. LangWatch API Key:
    • Go to app.langwatch.ai and sign up
    • Create a new project
    • Copy your API key from the project settings
  2. OpenAI API Key:
    • Create one in your OpenAI dashboard (platform.openai.com)
  3. Set environment variables:
export LANGWATCH_API_KEY=your_langwatch_api_key_here
export OPENAI_API_KEY=your_openai_api_key_here

Step 3: Your First LLM Trace

Create a new file app.ts:
import { setupObservability } from "langwatch/observability/setup/node";
import { getLangWatchTracer } from "langwatch";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

// Setup LangWatch Observability (uses LANGWATCH_API_KEY by default)
await setupObservability({
  serviceName: "my-ai-laundry-startup",
});

// Create a tracer
const tracer = getLangWatchTracer("laundry-chatbot");

// Your first traced LLM interaction
async function askAI(question: string) {
  return await tracer.withActiveSpan("ask-ai", async (span) => {
    // Make the LLM call using Vercel AI SDK
    const response = await generateText({
      model: openai("gpt-5-mini"),
      prompt: question,
      maxTokens: 100,
      // The LangWatch SDK will automatically capture LLM data
      // input, output, metrics, etc.
      experimental_telemetry: { isEnabled: true },
    });

    return response.text;
  });
}

// Test it
const answer = await askAI("What is LangWatch?");
console.log("AI Response:", answer);
console.log("Check your LangWatch dashboard!");

Step 4: Run and See Results

npx tsx app.ts
Now visit your LangWatch dashboard - you should see your first trace! 🎉
What you’ll see: A trace named “ask-ai” with input/output data, timing, and status.

What Just Happened?

Let’s break down what we just set up:
  • Trace: The entire askAI function execution
  • Span: The individual operation within the trace
  • Input/Output: The data flowing through your function
  • Timing: How long each operation took
  • Status: Whether the operation succeeded
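
Automatic capture handled all of this for the generateText call, but you can also record input and output explicitly on a span. Here is a minimal sketch: span.setOutput() appears later in this guide, and span.setInput() is assumed to mirror it, so treat the exact helper names as assumptions.
import { getLangWatchTracer } from "langwatch";

const tracer = getLangWatchTracer("laundry-chatbot");

async function greet(name: string) {
  return await tracer.withActiveSpan("manual-io-example", async (span) => {
    // Assumed helper mirroring span.setOutput(); records the span's input explicitly.
    span.setInput(name);
    const greeting = `Hello, ${name}!`;
    span.setOutput(greeting);
    return greeting;
  });
}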

Core Concepts

Think of LangWatch like a debugger for your LLM applications:
  • Traces = Complete user interactions (e.g., “What’s the weather?”)
  • Spans = Individual steps within a trace (e.g., “LLM call”, “database query”)
  • Threads = Conversations (group related traces together)
  • Users = Individual users (for analytics)
For detailed explanations of all concepts, see our Concepts Guide.
For consistent observability across your application, learn about Semantic Conventions - standardized naming guidelines for attributes and metadata.

Integrations

LangWatch offers seamless integrations with many popular TypeScript libraries and frameworks. These integrations provide automatic instrumentation, capturing relevant data from your LLM applications with minimal setup.
For detailed integration guides, see our integration documentation. Each integration includes framework-specific examples and best practices.

Common Development Scenarios

Scenario 1: LLM Application

import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

// `tracer` is the LangWatch tracer created in the Quick Start via getLangWatchTracer()
async function chatWithAI(userMessage: string) {
  return await tracer.withActiveSpan("chat-with-ai", async (span) => {
    // Make the LLM call
    const response = await generateText({
      model: openai("gpt-5-mini"),
      prompt: userMessage,
      experimental_telemetry: { isEnabled: true }, // Auto-captures LLM data
    });

    return response.text;
  });
}

Scenario 2: RAG Application

// searchDocuments and generateAnswer are your own functions (hypothetical stubs below)
async function answerWithRAG(question: string) {
  return await tracer.withActiveSpan("rag-answer", async (span) => {
    // 1. Retrieve documents
    const docs = await tracer.withActiveSpan("retrieve-docs", async (retrieveSpan) => {
      retrieveSpan.setType("rag");
      const documents = await searchDocuments(question);
      
      // Record what documents were retrieved
      retrieveSpan.setRAGContexts(
        documents.map(doc => ({
          document_id: doc.id,
          chunk_id: doc.chunkId,
          content: doc.content
        }))
      );
      
      return documents;
    });
    
    // 2. Generate answer
    const answer = await generateAnswer(question, docs);
    
    span.setOutput(answer);

    return answer;
  });
}
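
The snippet above assumes searchDocuments and generateAnswer are your own application functions. A hypothetical stub, only to show the document shape setRAGContexts expects (the id, chunkId, and content field names are assumptions for this example, not SDK requirements):
interface RetrievedDocument {
  id: string;      // becomes document_id in setRAGContexts
  chunkId: string; // becomes chunk_id in setRAGContexts
  content: string; // the retrieved text itself
}

async function searchDocuments(question: string): Promise<RetrievedDocument[]> {
  // Replace with your vector store or search API call
  return [{ id: "doc-1", chunkId: "chunk-1", content: "LangWatch is an LLM observability platform." }];
}

async function generateAnswer(question: string, docs: RetrievedDocument[]): Promise<string> {
  // Replace with an LLM call, e.g. generateText() with the docs in the prompt
  return `Answer to "${question}" based on ${docs.length} document(s).`;
}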
For consistent attribute naming and TypeScript autocomplete support, see our Semantic Conventions guide. For advanced span management techniques, check out Manual Instrumentation.

Scenario 3: Conversation Threading

async function handleConversation(userId: string, threadId: string, message: string) {
  return await tracer.withActiveSpan("conversation-turn", {
    attributes: {
      "langwatch.user.id": userId,
      "langwatch.thread.id": threadId
    }
  }, async (span) => {
    // Your conversation logic here
    const response = await processMessage(message);
    
    span.setOutput(response);

    return response;
  });
}
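
A hypothetical caller, to show that reusing the same threadId across turns is what groups traces into one conversation in the dashboard:
import { randomUUID } from "node:crypto";

// One threadId per conversation; reuse it for every turn.
const threadId = randomUUID();
const userId = "user-123"; // however your app identifies users

await handleConversation(userId, threadId, "Hi, can you wash wool?");
await handleConversation(userId, threadId, "And how long does it take?");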

Configuration

Basic Configuration

import { setupObservability } from "langwatch/observability/setup/node";

const handle = await setupObservability({
  // Required: Your service name
  serviceName: "my-ai-service",
  
  // Optional: Custom API key (defaults to LANGWATCH_API_KEY env var)
  langwatch: {
    apiKey: process.env.LANGWATCH_API_KEY,
  },
  
  // Optional: Global attributes for all traces
  attributes: {
    "service.version": "1.0.0",
    "environment": process.env.NODE_ENV,
  }
});

Environment-Specific Setup

const handle = await setupObservability({
  serviceName: "my-laundry-startup",
  dataCapture: "all", // Capture full input/output data
  attributes: {
    "deployment.environment.name": process.env.NODE_ENV,
  }
});

Graceful Shutdown

The setupObservability function returns an ObservabilityHandle that provides a shutdown method for graceful cleanup. This ensures all pending traces are exported before your application terminates.

Automatic Shutdown

By default, LangWatch automatically handles shutdown when your application receives a SIGTERM signal:
// Automatic shutdown is enabled by default
const handle = await setupObservability({
  serviceName: "my-service",
  langwatch: {
    apiKey: process.env.LANGWATCH_API_KEY
  }
});

// No manual shutdown needed - handled automatically

Manual Shutdown

For environments where you can’t listen to SIGTERM or need custom shutdown logic, you can manually call the shutdown method:
const handle = await setupObservability({
  serviceName: "my-service",
  langwatch: {
    apiKey: process.env.LANGWATCH_API_KEY
  },
  advanced: {
    disableAutoShutdown: true, // Disable automatic SIGTERM handling
  }
});

// Manual shutdown when your application terminates
process.on('SIGTERM', async () => {
  console.log('Shutting down observability...');
  await handle.shutdown();
  console.log('Observability shutdown complete');
  process.exit(0);
});

// Force shutdown with timeout
process.on('SIGINT', async () => {
  console.log('Force shutdown...');
  await Promise.race([
    handle.shutdown(),
    new Promise(resolve => setTimeout(resolve, 5000))
  ]);
  process.exit(1);
});

What Happens During Shutdown

The shutdown process ensures data integrity:
  1. Flushes pending traces to the exporter
  2. Closes the trace exporter connection
  3. Shuts down the tracer provider
  4. Cleans up registered instrumentations
Always call shutdown() before your application exits to prevent data loss. The method is safe to call multiple times.
If you don’t call shutdown(), some traces may be lost when your application terminates abruptly.
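
For short-lived scripts like the app.ts from the Quick Start, a simple pattern is to flush in a finally block. A minimal sketch, using only the handle returned by setupObservability:
const handle = await setupObservability({ serviceName: "my-service" });

try {
  // ... run your traced workload ...
} finally {
  // Safe even if shutdown was already triggered elsewhere.
  await handle.shutdown();
}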

Development Workflow

Local Development

  1. Set up environment:
export LANGWATCH_API_KEY=your_key
export NODE_ENV=development
  2. Run your app:
npm run dev
  3. Check dashboard: Visit app.langwatch.ai to see traces

Debugging

Enable console logging for local development:
const handle = await setupObservability({
  serviceName: "my-service",
  langwatch: {
    apiKey: process.env.LANGWATCH_API_KEY,
  },
  debug: {
    consoleTracing: true,
    consoleLogging: true,
    logLevel: 'info' // Lower this to `debug` if you're debugging the LangWatch integration
  },
});

Troubleshooting

If traces don’t appear in your dashboard, double-check that LANGWATCH_API_KEY is set, that experimental_telemetry is enabled on your LLM calls, and that your application calls shutdown() before exiting. The debug options above can also help pinpoint integration issues.

Next Steps

Now that you have basic tracing working, explore the Concepts Guide, Semantic Conventions, Manual Instrumentation, and the integration documentation linked throughout this guide.
Start simple and add complexity gradually. You can always add more detailed tracing later as your application grows!