The LangWatch library is the easiest way to integrate your TypeScript application with LangWatch. Messages are synced in the background, so it doesn't intercept or block your LLM calls.
Protip: want to get started even faster? Copy our llms.txt and ask an AI to do this integration.

Prerequisites

  • A LangWatch API key (see Configuration below)
  • Node.js with a TypeScript project to instrument

Installation

npm install langwatch

Configuration

Ensure LANGWATCH_API_KEY is set:
.env
LANGWATCH_API_KEY='your_api_key_here'
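
If you keep the key in a .env file locally, make sure it is loaded before the SDK is initialized, for example with the dotenv package. A minimal sketch, assuming setupObservability (covered in the Integration section below) picks the key up from the environment as described above:
// Load .env first so LANGWATCH_API_KEY is available when observability is set up
import "dotenv/config";
import { setupObservability } from "langwatch/observability/node";

setupObservability();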

Basic Concepts

  • Each message that triggers your LLM pipeline as a whole is captured as a Trace.
  • A Trace contains multiple Spans, which are the steps inside your pipeline.
    • A span can be an LLM call, a database query for a RAG retrieval, or a simple function transformation.
    • Different types of Spans capture different parameters.
    • Spans can be nested to capture the pipeline structure.
  • Traces can be grouped together on the LangWatch Dashboard by giving them the same thread_id in their metadata, making the individual messages part of a conversation.
    • It is also recommended to provide the user_id metadata to track user analytics (see the example after this list).
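
For example, you can attach these metadata fields as attributes on your spans, using the tracer API introduced in the Integration section below. This is a minimal sketch: the attribute keys langwatch.thread_id and langwatch.user_id are assumptions here, so check the LangWatch metadata documentation for the exact keys your SDK version expects:
import { getLangWatchTracer } from "langwatch";

const tracer = getLangWatchTracer("my-service");

await tracer.withActiveSpan("handle-message", async (span) => {
  // Assumed attribute keys for conversation grouping and user analytics;
  // verify them against the LangWatch metadata documentation
  span.setAttributes({
    "langwatch.thread_id": "conversation-123",
    "langwatch.user_id": "user-456",
  });

  // ... run your LLM pipeline here ...
});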

Integration

Start by setting up observability and initializing the LangWatch tracer:
import { setupObservability } from "langwatch/observability/node";
import { getLangWatchTracer } from "langwatch";

// Setup observability first
setupObservability();

const tracer = getLangWatchTracer("my-service");
Then to capture your LLM calls, you can use the withActiveSpan method to create an LLM span with automatic lifecycle management:
import OpenAI, { AzureOpenAI } from "openai";

// Model to be used and messages that will be sent to the LLM
const model = "gpt-5-mini";
const messages: OpenAI.Chat.ChatCompletionMessageParam[] = [
  { role: "system", content: "You are a helpful assistant." },
  {
    role: "user",
    content: "Write a tweet-size vegetarian lasagna recipe for 4 people.",
  },
];

const openai = new AzureOpenAI({
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  apiVersion: "2024-02-01",
  endpoint: process.env.AZURE_OPENAI_ENDPOINT,
});

// Use withActiveSpan for automatic error handling and span cleanup
const result = await tracer.withActiveSpan("llm-call", async (span) => {
  // Set span type and input
  span.setType("llm");
  span.setInput("chat_messages", messages);
  span.setRequestModel(model);

  // Make the Azure OpenAI call
  const chatCompletion = await openai.chat.completions.create({
    messages: messages,
    model: model,
  });

  // Set output and metrics
  span.setOutput("chat_messages", [chatCompletion.choices[0]!.message]);
  span.setMetrics({
    promptTokens: chatCompletion.usage?.prompt_tokens,
    completionTokens: chatCompletion.usage?.completion_tokens,
  });

  return chatCompletion;
});
The withActiveSpan method automatically:
  • Creates the span with the specified name
  • Handles errors and sets the appropriate span status (see the error-handling example after this list)
  • Ends the span when the function completes
  • Returns the result of your async function
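
For example, if the callback throws, the error is recorded on the span before it is ended. A minimal sketch, assuming withActiveSpan re-throws after recording the error (the usual OpenTelemetry-style behavior) and reusing the tracer from the setup step:
try {
  await tracer.withActiveSpan("risky-step", async () => {
    // ... your code that may throw ...
    throw new Error("something went wrong inside the pipeline step");
  });
} catch (error) {
  // The span has already been ended with an error status by withActiveSpan
  console.error("pipeline step failed:", error);
}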

Community Auto-Instrumentation

For automatic instrumentation without manual span creation, you can use the OpenInference instrumentation for OpenAI, which also works with Azure OpenAI:
1. Install the OpenInference instrumentation

npm install @arizeai/openinference-instrumentation-openai
2. Register the instrumentation

import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";
import { setupObservability } from "langwatch/observability/node";

// Setup observability with the instrumentation
setupObservability({
  instrumentations: [new OpenAIInstrumentation()],
});
3. Use Azure OpenAI normally

import { AzureOpenAI } from "openai";

const openai = new AzureOpenAI({
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  apiVersion: "2024-02-01",
  endpoint: process.env.AZURE_OPENAI_ENDPOINT,
});

// This call will be automatically instrumented
const completion = await openai.chat.completions.create({
  model: "gpt-5-mini",
  messages: [{ role: "user", content: "Hello!" }],
});
The OpenInference instrumentation automatically captures:
  • Input messages and model configuration
  • Output responses and token usage
  • Error handling and status codes
  • Request/response timing
  • Azure-specific configuration (endpoint, API version)
When using auto-instrumentation, you may need to configure data capture settings to control what information is sent to LangWatch.
On short-lived environments like Lambdas or Serverless Functions, be sure to call await trace.sendSpans(); to wait for all pending requests to be sent before the runtime is destroyed.
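A minimal sketch, assuming trace is the LangWatchTrace you created for the request (the span helpers on it are shown in the sections below):
import { type LangWatchTrace } from "langwatch";

// Flush pending spans before the serverless runtime is frozen or destroyed
async function handleRequest(trace: LangWatchTrace): Promise<void> {
  // ... run your pipeline and record spans on `trace` ...

  await trace.sendSpans(); // wait until all pending spans have been delivered to LangWatch
}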

Capture a RAG Span

Apart from LLM spans, another commonly used type of span is the RAG span. It captures the contexts retrieved by a RAG step that will be used by the LLM, and enables a whole new set of RAG-based features and evaluations for RAG quality on LangWatch. To capture a RAG, simply start a RAG span inside the trace, giving it the input query being used:
const ragSpan = trace.startRAGSpan({
  name: "my-vectordb-retrieval", // optional
  input: { type: "text", value: "search query" },
});

// proceed to do the retrieval normally
Then, after doing the retrieval, you can end the RAG span with the contexts that were retrieved and will be used by the LLM:
ragSpan.end({
  contexts: [
    {
      documentId: "doc1",
      content: "document chunk 1",
    },
    {
      documentId: "doc2",
      content: "document chunk 2",
    },
  ],
});
On LangChain.js, RAG spans are captured automatically by the LangWatch callback when using LangChain Retrievers, with the document source used as the documentId.

Capture an arbitrary Span

You can also use generic spans to capture any type of operation, its inputs and outputs, for example for a function call:
// before the function starts
const span = trace.startSpan({
  name: "weather_function",
  input: {
    type: "json",
    value: {
      city: "Tokyo",
    },
  },
});

// ...after the function ends
span.end({
  output: {
    type: "json",
    value: {
      weather: "sunny",
    },
  },
});
You can also nest spans one inside the other, capturing your pipeline structure, for example:
const span = trace.startSpan({
  name: "pipeline",
});

const nestedSpan = span.startSpan({
  name: "nested_pipeline",
});

nestedSpan.end();

span.end();
Both LLM and RAG spans can also be nested like any arbitrary span.

Capturing Exceptions

To also capture when your code throws an exception, wrap it in a try/catch and update or end the span with the exception:
try {
  throw new Error("unexpected error");
} catch (error) {
  span.end({
    error: error,
  });
}

Capturing custom evaluation results

LangWatch Evaluators can run automatically on your traces, but if you have an in-house custom evaluator, you can also capture the evaluation results of your custom evaluator on the current trace or span by using the .addEvaluation method:
import { type LangWatchTrace } from "langwatch";

async function llmStep({ message, trace }: { message: string, trace: LangWatchTrace }): Promise<string> {
    const span = trace.startLLMSpan({ name: "llmStep" });

    // ... your existing code

    span.addEvaluation({
        name: "custom evaluation",
        passed: true,
        score: 0.5,
        label: "category_detected",
        details: "explanation of the evaluation results",
    });
}
The evaluation name is required and must be a string. The other fields are optional, but at least one of passed, score, or label must be provided.

For production Azure AI applications, combine manual instrumentation with OpenTelemetry semantic conventions for consistent observability and better analytics.
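As a starting point, you can attach the incubating OpenTelemetry GenAI semantic convention attributes to your manually created spans. A minimal sketch, assuming the LangWatch span exposes the standard OpenTelemetry setAttributes method:
import { getLangWatchTracer } from "langwatch";

const tracer = getLangWatchTracer("my-service");

await tracer.withActiveSpan("llm-call", async (span) => {
  span.setType("llm");

  // Incubating OpenTelemetry GenAI semantic convention attributes
  span.setAttributes({
    "gen_ai.operation.name": "chat",
    "gen_ai.request.model": "gpt-5-mini",
  });

  // ... make the Azure OpenAI call and set outputs/metrics as shown above ...
});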