LangWatch TypeScript SDK

The LangWatch library is the easiest way to integrate your TypeScript application with LangWatch. Messages are synced in the background, so it doesn’t intercept or block your LLM calls.

Protip: want to get started even faster? Copy our llms.txt and ask an AI to do this integration for you.

Prerequisites

  • A LangWatch API key, used as LANGWATCH_API_KEY below (see Configuration).

Installation

npm install langwatch
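
Or, with another package manager:

yarn add langwatch
pnpm add langwatch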

Configuration

Ensure LANGWATCH_API_KEY is set:

.env
LANGWATCH_API_KEY='your_api_key_here'
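
The client reads LANGWATCH_API_KEY from the environment. If your runtime does not load .env files automatically, one common approach is the dotenv package (an assumption about your setup, not a LangWatch requirement):

import 'dotenv/config'; // populates process.env from .env before the client is constructed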

Basic Concepts

  • Each message that triggers your LLM pipeline is captured as a whole in a Trace.
  • A Trace contains multiple Spans, which are the steps inside your pipeline.
    • A span can be an LLM call, a database query for a RAG retrieval, or a simple function transformation.
    • Different types of Spans capture different parameters.
    • Spans can be nested to capture the pipeline structure.
  • Traces can be grouped together on the LangWatch Dashboard by giving them the same thread_id in their metadata, making the individual messages part of a conversation.
    • It is also recommended to provide the user_id metadata to track user analytics.

Integration

Start by initializing the LangWatch client and creating a new trace to capture your chain:

import { LangWatch } from 'langwatch';

const langwatch = new LangWatch();

const trace = langwatch.getTrace({
  metadata: { threadId: "mythread-123", userId: "myuser-123" },
});

Then, to capture your LLM calls and all other chain steps, LangWatch provides a callback hook for LangChain.js that automatically tracks everything for you.

First, define your chain as you would normally do:

import { StringOutputParser } from '@langchain/core/output_parsers'
import { ChatPromptTemplate } from '@langchain/core/prompts'
import { ChatOpenAI } from '@langchain/openai'

const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'Translate the following from English into Italian'],
  ['human', '{input}']
])
const model = new ChatOpenAI({ model: 'gpt-3.5-turbo' })
const outputParser = new StringOutputParser()

const chain = prompt.pipe(model).pipe(outputParser)

Now, when calling your chain either with invoke or stream, pass in trace.getLangChainCallback() as one of the callbacks:

const stream = await chain.stream(
  { input: message },
  { callbacks: [trace.getLangChainCallback()] }
)
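
The non-streaming invoke call works the same way; for example:

const response = await chain.invoke(
  { input: message },
  { callbacks: [trace.getLangChainCallback()] }
)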

That’s it! The full trace with all spans for each chain step will be sent to LangWatch automatically in the background at periodic intervals. After capturing your first LLM Span, go to the LangWatch Dashboard; your message should be there!

On short-lived environments like Lambdas or Serverless Functions, be sure to call
await trace.sendSpans(); to wait for all pending requests to be sent before the runtime is destroyed.
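
For example, in a serverless handler (a minimal sketch; the handler shape below is just an illustration, not a LangWatch API):

export const handler = async (event: { message: string }) => {
  const trace = langwatch.getTrace();

  const response = await chain.invoke(
    { input: event.message },
    { callbacks: [trace.getLangChainCallback()] }
  );

  // flush all pending spans before the runtime is frozen or destroyed
  await trace.sendSpans();

  return response;
};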

Capture a RAG Span

Apart from LLM spans, another commonly used span type is the RAG span. It captures the contexts retrieved by a RAG step that will be used by the LLM, and enables a whole new set of RAG-based features on LangWatch, such as evaluations of RAG quality.

To capture a RAG step, simply start a RAG span inside the trace, giving it the input query being used:

const ragSpan = trace.startRAGSpan({
  name: "my-vectordb-retrieval", // optional
  input: { type: "text", value: "search query" },
});

// proceed to do the retrieval normally

Then, after doing the retrieval, you can end the RAG span with the contexts that were retrieved and will be used by the LLM:

ragSpan.end({
  contexts: [
    {
      documentId: "doc1",
      content: "document chunk 1",
    },
    {
      documentId: "doc2",
      content: "document chunk 2",
    },
  ],
});
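
Putting both halves together, a retrieval step could look like this (searchVectorDB and the document shape are hypothetical placeholders for your own retrieval code):

const query = "search query";

const ragSpan = trace.startRAGSpan({
  name: "my-vectordb-retrieval",
  input: { type: "text", value: query },
});

// hypothetical retrieval call; replace with your own vector DB client
const documents = await searchVectorDB(query);

ragSpan.end({
  contexts: documents.map((doc) => ({
    documentId: doc.id,
    content: doc.text,
  })),
});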

On LangChain.js, RAG spans are captured automatically by the LangWatch callback when using LangChain Retrievers, with each document's source used as the documentId.

Capture an arbitrary Span

You can also use generic spans to capture any other type of operation together with its inputs and outputs, for example a function call:

// before the function starts
const span = trace.startSpan({
  name: "weather_function",
  input: {
    type: "json",
    value: {
      city: "Tokyo",
    },
  },
});

// ...after the function ends
span.end({
  output: {
    type: "json",
    value: {
      weather: "sunny",
    },
  },
});

You can also nest spans one inside the other to capture your pipeline structure, for example:

const span = trace.startSpan({
  name: "pipeline",
});

const nestedSpan = span.startSpan({
  name: "nested_pipeline",
});

nestedSpan.end();

span.end();

Both LLM and RAG spans can also be nested like any arbitrary span.

Capturing Exceptions

To also capture when your code throws an exception, you can simply wrap your code in a try/catch, and update or end the span with the error:

try {
  throw new Error("unexpected error");
} catch (error) {
  span.end({
    error: error,
  });
}

Capturing custom evaluation results

LangWatch Evaluators can run automatically on your traces, but if you have an in-house custom evaluator, you can also capture its results on the current trace or span by using the .addEvaluation method:

import { type LangWatchTrace } from "langwatch";

async function llmStep({ message, trace }: { message: string, trace: LangWatchTrace }): Promise<string> {
    const span = trace.startLLMSpan({ name: "llmStep" });

    // ... your existing code

    span.addEvaluation({
        name: "custom evaluation",
        passed: true,
        score: 0.5,
        label: "category_detected",
        details: "explanation of the evaluation results",
    });

    span.end();
    return "..."; // return your LLM output here so the declared Promise<string> is satisfied
}

The evaluation name is required and must be a string. The other fields are optional, but at least one of passed, score or label must be provided.
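
Since evaluations can be attached to the trace as well as to a span, the same method is available on the trace object, for example:

trace.addEvaluation({
  name: "custom evaluation",
  passed: true,
});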