Setup
setupObservability()
Initializes the LangWatch observability system for Node.js environments, enabling data collection and tracing for your LLM application. This is typically the first function you’ll call when integrating LangWatch.
Basic Setup
```typescript
import { setupObservability } from "langwatch/observability/node";

setupObservability({
  langwatch: {
    apiKey: process.env.LANGWATCH_API_KEY,
    endpoint: process.env.LANGWATCH_ENDPOINT_URL
  },
  serviceName: "my-service"
});
```
options
SetupObservabilityOptions
default: "{}"
Configuration options for the LangWatch observability system.
Returns
An object containing a shutdown() method for graceful cleanup.
SetupObservabilityOptions
Configuration options for setting up LangWatch observability.
langwatch
Optional<LangWatchConfig | 'disabled'>
LangWatch configuration. Set to 'disabled' to completely disable LangWatch integration.
serviceName
Optional<string>
Name of the service being instrumented.
attributes
Optional<SemConvAttributes>
Global attributes added to all telemetry data.
dataCapture
Optional<DataCaptureOptions>
Configuration for automatic data capture. Can be "all", "input", "output", "none", or a configuration object.
spanProcessors
Optional<SpanProcessor[]>
Custom span processors for advanced trace processing.
debug
Optional<DebugOptions>
Debug and development options.
advanced
Optional<AdvancedOptions>
Advanced and potentially unsafe configuration options.
LangWatchConfig
Configuration for LangWatch integration.
apiKey
Optional<string>
default: "process.env.LANGWATCH_API_KEY"
LangWatch API key for authentication.
endpoint
Optional<string>
default: "https://api.langwatch.ai"
LangWatch endpoint URL for sending traces and logs.
processorType
Optional<'simple' | 'batch'>
default: "'simple'"
Type of span processor to use for the LangWatch exporter. 'simple' exports each span as soon as it ends; 'batch' buffers spans and exports them in groups, which is generally preferred in production.
DebugOptions
Debug and development options.
consoleTracing
Optional<boolean>
default: "false"
Enable console output for traces (debugging).
consoleLogging
Optional<boolean>
default: "false"
Enable console output for logs (debugging).
logLevel
Optional<'debug' | 'info' | 'warn' | 'error'>
default: "'warn'"
Log level for LangWatch SDK internal logging.
logger
Optional<Logger>
Custom logger for LangWatch SDK internal logging.
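Taken together, a verbose local-development configuration using these options might look like the following sketch. All option names come from the tables above; the values are illustrative:

```typescript
import { setupObservability } from "langwatch/observability/node";

// Verbose development setup: echo traces and logs to the console and raise
// the SDK's internal log level. Not recommended for production.
setupObservability({
  langwatch: { apiKey: process.env.LANGWATCH_API_KEY },
  serviceName: "my-service",
  debug: {
    consoleTracing: true,
    consoleLogging: true,
    logLevel: "debug"
  }
});
```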
ObservabilityHandle
Handle returned from observability setup.
shutdown
() => Promise<void>
Gracefully shuts down the observability system.
Tracing
getLangWatchTracer()
Returns a LangWatch tracer instance that provides enhanced tracing capabilities for LLM applications.
```typescript
import { getLangWatchTracer } from "langwatch";

const tracer = getLangWatchTracer("my-service", "1.0.0");
```
The name of the tracer/service.
version
Optional<string>
default: "undefined"
The version of the tracer/service.
Returns
A LangWatchTracer instance with enhanced methods for LLM observability.
getLangWatchTracerFromProvider()
Get a LangWatch tracer from a specific OpenTelemetry tracer provider.
```typescript
import { getLangWatchTracerFromProvider } from "langwatch/observability";

const tracer = getLangWatchTracerFromProvider(
  customTracerProvider,
  "my-service",
  "1.0.0"
);
```
The OpenTelemetry tracer provider to use.
The name of the tracer/service.
version
Optional<string>
default: "undefined"
The version of the tracer/service.
LangWatchTracer
The LangWatchTracer extends the standard OpenTelemetry Tracer with additional methods for LLM observability.
Methods
startSpan
(name: string, options?: SpanOptions, context?: Context) => LangWatchSpan
Starts a new LangWatchSpan without setting it on context. This method does NOT modify the current Context.
startActiveSpan
(name: string, fn: (span: LangWatchSpan) => T) => T
Starts a new LangWatchSpan and calls the given function, passing it the created span as the first argument. The new span is set on the context, and this context is activated for the duration of the function call.
withActiveSpan
(name: string, fn: (span: LangWatchSpan) => Promise<T> | T) => Promise<T>
Starts a new LangWatchSpan, runs the provided async function, and automatically handles error recording, status setting, and span ending. This is a safer and more ergonomic alternative to manually using try/catch/finally blocks.
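The error-handling contract of withActiveSpan can be sketched as follows. This is an illustration of the documented behavior using a minimal stand-in span type, not the SDK's actual implementation:

```typescript
// Minimal stand-in for the parts of a span this sketch touches.
interface SketchSpan {
  recordException(err: unknown): void;
  setStatus(status: { code: "ok" | "error" }): void;
  end(): void;
}

// Sketch of the withActiveSpan contract: the callback's result is returned,
// errors are recorded on the span and re-thrown, and the span always ends.
async function withActiveSpanSketch<T>(
  span: SketchSpan,
  fn: (span: SketchSpan) => Promise<T> | T
): Promise<T> {
  try {
    const result = await fn(span);
    span.setStatus({ code: "ok" });
    return result;
  } catch (err) {
    // The error is recorded and re-thrown, so callers still observe it.
    span.recordException(err);
    span.setStatus({ code: "error" });
    throw err;
  } finally {
    span.end(); // runs on both success and failure
  }
}
```

This is why withActiveSpan is preferred over manual startSpan plus try/catch/finally: the span cannot be leaked unended, even when the callback throws.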
createLangWatchSpan()
Creates a LangWatchSpan, which adds additional methods to an OpenTelemetry Span. You probably don’t need to use this directly, but it’s here for completeness.
```typescript
import { createLangWatchSpan } from "langwatch/observability";

const otelSpan = tracer.startSpan("llm-call");
const span = createLangWatchSpan(otelSpan);
span.setType("llm").setInput("Prompt").setOutput("Completion");
```
The OpenTelemetry Span to add LangWatch methods to.
Returns
A LangWatchSpan with additional methods for LLM/GenAI observability.
LangWatchSpan
The LangWatchSpan extends the standard OpenTelemetry Span with additional methods for LLM observability.
Span Configuration Methods
setType
(type: SpanType) => this
Set the type of the span (e.g., 'llm', 'rag', 'tool', etc.). This is used for downstream filtering and analytics.
setRequestModel
(model: string) => this
Set the request model name for the span. This is typically the model name sent in the API request (e.g., 'gpt-5', 'claude-3').
setResponseModel
(model: string) => this
Set the response model name for the span. This is the model name returned in the API response, if different from the request.
setRAGContexts
(ragContexts: LangWatchSpanRAGContext[]) => this
Set multiple RAG contexts for the span. Use this to record all retrieved documents/chunks used as context for a generation.
setRAGContext
(ragContext: LangWatchSpanRAGContext) => this
Set a single RAG context for the span. Use this if only one context was retrieved.
setMetrics
(metrics: LangWatchSpanMetrics) => this
Set the metrics for the span.
Set the selected prompt for the span. This will attach this prompt to the trace. If this is set on multiple spans, the last one will be used.
setInput
(input: unknown) => this
Record the input to the span with automatic type detection.
setInput
(type: InputOutputType, input: unknown) => this
Record the input to the span with explicit type control. Supports "text", "raw", "chat_messages", "list", "json", "guardrail_result", and "evaluation_result" types.
setOutput
(output: unknown) => this
Record the output from the span with automatic type detection.
setOutput
(type: InputOutputType, output: unknown) => this
Record the output from the span with explicit type control. Supports "text", "raw", "chat_messages", "list", "json", "guardrail_result", and "evaluation_result" types.
Client SDK
LangWatch
The main LangWatch client class that provides access to LangWatch services.
```typescript
import { LangWatch } from "langwatch";

const langwatch = new LangWatch({
  apiKey: process.env.LANGWATCH_API_KEY,
  endpoint: process.env.LANGWATCH_ENDPOINT_URL
});
```
options
LangWatchConstructorOptions
default: "{}"
Configuration options for the LangWatch client.
Properties
prompts
Access to prompt management functionality.
traces
Access to trace management functionality.
Prompt Management
langwatch.prompts.get()
Retrieves a prompt from the LangWatch platform.
```typescript
// Get a prompt without variables
const prompt = await langwatch.prompts.get("prompt-id");
```
The ID of the prompt to retrieve.
Returns
The prompt or compiled prompt object.
Throws an error if the specified prompt version is not found.
langwatch.prompts.create()
Creates a new prompt in the LangWatch platform.
```typescript
// Create a basic prompt
const prompt = await langwatch.prompts.create({
  handle: "customer-support-bot",
  name: "Customer Support Bot",
  prompt: "You are a helpful customer support assistant.",
  // ... any other prompt properties
});
```
options
CreatePromptOptions
required
Configuration options for creating the prompt.
Returns
The newly created prompt object.
langwatch.prompts.update()
Updates an existing prompt, creating a new version automatically.
```typescript
// Update prompt content
const updatedPrompt = await langwatch.prompts.update("customer-support-bot", {
  prompt: "You are a helpful and friendly customer support assistant.",
  // ... any other prompt properties
});
```
The handle (identifier) of the prompt to update.
options
UpdatePromptOptions
required
Configuration options for updating the prompt.
Returns
The updated prompt object (new version).
Each update operation creates a new version of the prompt. Previous versions are preserved for version control and rollback purposes.
langwatch.prompts.delete()
Deletes a prompt and all its versions from the LangWatch platform.
```typescript
// Delete a prompt
const result = await langwatch.prompts.delete("customer-support-bot");
```
The handle (identifier) of the prompt to delete.
Returns
Confirmation of the deletion operation.
This action is irreversible and will permanently remove the prompt and all its versions.
Prompt Compilation
prompt.compile()
Compiles a prompt template with provided variables, using lenient compilation that handles missing variables gracefully.
```typescript
// Compile a prompt with variables
const compiledPrompt = prompt.compile({
  name: "Alice",
  topic: "weather"
});
```
variables
Record<string, any>
required
Variables to substitute into the prompt template.
Returns
The compiled prompt with resolved variables and messages.
Lenient compilation will not throw errors for missing variables, making it suitable for dynamic content where some variables may be optional.
prompt.compileStrict()
Compiles a prompt template with strict variable validation, throwing an error if any required variables are missing.
```typescript
// Strict compilation with all required variables
const compiledPrompt = prompt.compileStrict({
  name: "Alice",
  topic: "weather"
});
```
variables
Record<string, any>
required
Variables to substitute into the prompt template. All template variables must be provided.
Returns
The compiled prompt with resolved variables and messages.
Throws an error if any template variables are missing or invalid.
Strict compilation will throw a PromptCompilationError if any template variables are missing, ensuring all required data is provided.
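The difference between compile() and compileStrict() can be illustrated with a toy {{variable}} template. This sketch only demonstrates the documented semantics; the real SDK compiles full prompt objects, and the exact lenient behavior for missing variables (shown here as leaving the placeholder in place) is an assumption:

```typescript
type Vars = Record<string, string | number | boolean | object | null>;

// Lenient sketch: missing variables do not throw. Here they are left
// untouched in the output (an assumption for illustration).
function compileLenientSketch(template: string, vars: Vars): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in vars ? String(vars[name]) : match
  );
}

// Strict sketch: any missing variable raises an error, mirroring the
// PromptCompilationError thrown by compileStrict().
function compileStrictSketch(template: string, vars: Vars): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, name: string) => {
    if (!(name in vars)) {
      throw new Error(`Missing template variable: ${name}`);
    }
    return String(vars[name]);
  });
}
```

Lenient compilation suits dynamic content where some variables are optional; strict compilation suits pipelines where a missing variable indicates a bug.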
Processors
FilterableBatchSpanProcessor
A span processor that filters spans before processing them.
```typescript
import { LangWatchExporter, FilterableBatchSpanProcessor } from "langwatch";

const processor = new FilterableBatchSpanProcessor(
  new LangWatchExporter(), // Uses environment variables
  [
    {
      fieldName: "span_name",
      matchValue: "health-check",
      matchOperation: "exact_match"
    }
  ]
);
```
The span exporter to use.
excludeRules
SpanProcessingExcludeRule[]
required
Rules to exclude spans from processing.
LangChain Integration
LangWatchCallbackHandler
A LangChain callback handler that automatically traces LangChain operations and integrates them with LangWatch.
```typescript
import { LangWatchCallbackHandler } from "langwatch/observability/instrumentation/langchain";
import { ChatOpenAI } from "@langchain/openai";

const handler = new LangWatchCallbackHandler();
const llm = new ChatOpenAI({
  callbacks: [handler]
});

// All operations will now be automatically traced
const response = await llm.invoke("Hello, world!");
```
The LangWatchCallbackHandler automatically:

- Creates spans for LLM calls, chains, tools, and retrievers
- Captures input/output data
- Sets appropriate span types and metadata
- Handles errors and status codes
- Integrates with the LangWatch tracing system
convertFromLangChainMessages
Utility function to convert LangChain messages to a format compatible with LangWatch GenAI events.
```typescript
import { convertFromLangChainMessages } from "langwatch/observability/instrumentation/langchain";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const messages = [
  new SystemMessage("You are a helpful assistant."),
  new HumanMessage("Hello!"),
];

const convertedMessages = convertFromLangChainMessages(messages);
// Use with span.setInput("chat_messages", convertedMessages)
```
Exporters
LangWatchExporter
A LangWatch exporter for sending traces to the LangWatch platform. Extends the OpenTelemetry OTLP HTTP trace exporter with proper authentication and metadata headers.
```typescript
import { LangWatchExporter } from "langwatch";

// Using environment variables/fallback configuration
const exporter = new LangWatchExporter();

// Using custom API key and endpoint
const exporter = new LangWatchExporter({
  apiKey: process.env.LANGWATCH_API_KEY,
  endpoint: process.env.LANGWATCH_ENDPOINT_URL
});
```
apiKey
Optional<string>
default: "process.env.LANGWATCH_API_KEY"
Optional API key for LangWatch authentication. If not provided, will use environment variables or fallback configuration.
endpoint
Optional<string>
default: "https://api.langwatch.ai"
Optional custom endpoint URL for LangWatch ingestion. If not provided, will use environment variables or fallback configuration.
LangWatchTraceExporter
A LangWatch trace exporter with configuration options.
```typescript
import { LangWatchTraceExporter } from "langwatch/observability";

const exporter = new LangWatchTraceExporter({
  apiKey: process.env.LANGWATCH_API_KEY,
  endpoint: process.env.LANGWATCH_ENDPOINT_URL
});
```
LangWatchLogsExporter
A LangWatch logs exporter with configuration options.
```typescript
import { LangWatchLogsExporter } from "langwatch/observability";

const exporter = new LangWatchLogsExporter({
  apiKey: process.env.LANGWATCH_API_KEY,
  endpoint: process.env.LANGWATCH_ENDPOINT_URL
});
```
Data Capture
DataCaptureOptions
Configuration for automatic data capture.
```typescript
// Simple mode
dataCapture: "all" | "input" | "output" | "none"

// Configuration object
dataCapture: {
  mode: "all" | "input" | "output" | "none"
}
```
DataCapturePresets
Predefined data capture configurations.
```typescript
import { DataCapturePresets } from "langwatch/observability";

// Use predefined configurations
dataCapture: DataCapturePresets.CAPTURE_ALL   // Captures everything
dataCapture: DataCapturePresets.CAPTURE_NONE  // Captures nothing
dataCapture: DataCapturePresets.INPUT_ONLY    // Captures only inputs
dataCapture: DataCapturePresets.OUTPUT_ONLY   // Captures only outputs
```
Logging
getLangWatchLogger()
Returns a LangWatch logger instance for structured logging.
```typescript
import { getLangWatchLogger } from "langwatch";

const logger = getLangWatchLogger("my-service");
```
getLangWatchLoggerFromProvider()
Get a LangWatch logger from a specific logger provider.
```typescript
import { getLangWatchLoggerFromProvider } from "langwatch/observability";

const logger = getLangWatchLoggerFromProvider(
  customLoggerProvider,
  "my-service"
);
```
ConsoleLogger
A console-based logger with configurable log levels and prefixes.
```typescript
import { logger } from "langwatch";

// Use a distinct variable name to avoid shadowing the imported namespace
const consoleLogger = new logger.ConsoleLogger({
  level: "info",
  prefix: "MyApp"
});

consoleLogger.info("Application started");
consoleLogger.warn("Deprecated feature used");
consoleLogger.error("An error occurred");
```
options
ConsoleLoggerOptions
default: "{ level: 'warn' }"
Logger configuration options.
NoOpLogger
A no-operation logger that discards all log messages.
```typescript
import { logger } from "langwatch";

const noopLogger = new logger.NoOpLogger();
// All log calls are ignored
```
CLI
The LangWatch CLI provides command-line tools for managing prompts and interacting with the LangWatch platform.
```bash
# Login to LangWatch
langwatch login

# Initialize a new prompts project
langwatch prompt init

# Create a new prompt
langwatch prompt create my-prompt

# Add a prompt from the registry
langwatch prompt add sentiment-analyzer

# List installed prompts
langwatch prompt list

# Sync prompts with the registry
langwatch prompt sync

# Remove a prompt
langwatch prompt remove my-prompt
```
Core Data Types
SpanType
Supported types of spans for LangWatch observability:
```typescript
type SpanType =
  | "span"
  | "llm"
  | "chain"
  | "tool"
  | "agent"
  | "guardrail"
  | "evaluation"
  | "rag"
  | "prompt"
  | "workflow"
  | "component"
  | "module"
  | "server"
  | "client"
  | "producer"
  | "consumer"
  | "task"
  | "unknown";
```
InputOutputType
Supported input/output types for span data:
```typescript
type InputOutputType =
  | "text"
  | "raw"
  | "chat_messages"
  | "list"
  | "json"
  | "guardrail_result"
  | "evaluation_result";
```
LangWatchSpanRAGContext
Context for a RAG (Retrieval-Augmented Generation) span.
document_id
string
required
Unique identifier for the source document.
chunk_id
string
required
Unique identifier for the chunk within the document.
content
string
required
The actual content of the chunk provided to the model.
LangWatchSpanMetrics
Metrics for a LangWatch span.
promptTokens
Optional<number>
The number of prompt tokens used.
completionTokens
Optional<number>
The number of completion tokens used.
SpanProcessingExcludeRule
Defines a rule to filter out spans before they are exported to LangWatch.
fieldName
'span_name'
required
The field of the span to match against. Currently, only "span_name" is supported.
matchValue
string
required
The value to match for the specified fieldName.
matchOperation
'includes' | 'exact_match' | 'starts_with' | 'ends_with'
required
The operation to use for matching.
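For illustration, a rule set combining several match operations might look like the following (the rule values here are hypothetical). It would be passed as the second argument to FilterableBatchSpanProcessor:

```typescript
// Hypothetical exclude rules: drop exact health-check spans, anything whose
// name starts with "internal.", and anything whose name ends with ".ping".
const excludeRules = [
  { fieldName: "span_name", matchValue: "health-check", matchOperation: "exact_match" },
  { fieldName: "span_name", matchValue: "internal.", matchOperation: "starts_with" },
  { fieldName: "span_name", matchValue: ".ping", matchOperation: "ends_with" },
];
```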
PromptResponse
The raw prompt response type extracted from the OpenAPI schema.
```typescript
type PromptResponse = NonNullable<
  paths["/api/prompts/{id}"]["get"]["responses"]["200"]["content"]["application/json"]
>;
```
Prompt
A prompt object retrieved from the LangWatch platform with compilation capabilities.
Unique identifier for the prompt.
Project identifier the prompt belongs to.
Organization identifier the prompt belongs to.
Optional handle/slug for the prompt.
scope
'ORGANIZATION' | 'PROJECT'
Scope of the prompt - either organization-wide or project-specific.
Model used for the prompt.
Array of message objects.
Response format configuration.
Input definitions for the prompt.
Output definitions for the prompt.
CompiledPrompt
A compiled prompt that extends Prompt with reference to the original template.
The original prompt object before compilation.
TemplateVariables
Template variables for prompt compilation.
```typescript
type TemplateVariables = Record<string, string | number | boolean | object | null>;
```
PromptCompilationError
Error thrown when prompt compilation fails.
The template that failed to compile.
The original compilation error.
LangWatchConstructorOptions
Configuration options for the LangWatch client.
apiKey
Optional<string>
Your LangWatch API key. Defaults to process.env.LANGWATCH_API_KEY.
endpoint
Optional<string>
The LangWatch endpoint URL. Defaults to process.env.LANGWATCH_ENDPOINT.
options
Optional<{ logger?: Logger }>
Additional options including custom logger.
Usage Examples
Basic Tracing
```typescript
import { setupObservability } from "langwatch/observability/node";
import { getLangWatchTracer } from "langwatch";

setupObservability({
  langwatch: {
    apiKey: process.env.LANGWATCH_API_KEY
  },
  serviceName: "my-service"
});

const tracer = getLangWatchTracer("my-service");

await tracer.withActiveSpan("process-request", async (span) => {
  span.setType("llm");
  span.setRequestModel("gpt-5");

  // Your LLM call here
  const response = await openai.chat.completions.create({
    model: "gpt-5",
    messages: [{ role: "user", content: "Hello!" }]
  });

  span.setOutput(response.choices[0].message.content);
  span.setMetrics({
    promptTokens: response.usage?.prompt_tokens,
    completionTokens: response.usage?.completion_tokens
  });
});
```
RAG Operations
```typescript
await tracer.withActiveSpan("rag-query", async (span) => {
  span.setType("rag");

  // Retrieve documents
  const documents = await vectorStore.similaritySearch("query", 5);

  // Set RAG contexts
  span.setRAGContexts(documents.map(doc => ({
    document_id: doc.metadata.id,
    chunk_id: doc.metadata.chunk_id,
    content: doc.pageContent
  })));

  // Generate response
  const response = await llm.generate([documents, "query"]);
  span.setOutput(response);
});
```
Using Semantic Conventions
```typescript
import { attributes, VAL_GEN_AI_SYSTEM_OPENAI } from "langwatch/observability";
import { getLangWatchTracer } from "langwatch";

const tracer = getLangWatchTracer("my-service");

await tracer.withActiveSpan("llm-call", async (span) => {
  // Use semantic convention attributes for consistency
  span.setType("llm");
  span.setAttribute("langwatch.streaming", false);

  // Set input/output with proper typing
  span.setInput("chat_messages", [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello!" }
  ]);

  // Set output
  span.setOutput("text", "Hello! How can I help you today?");
});
```
Prompt Management
```typescript
import { LangWatch } from "langwatch";

const langwatch = new LangWatch({
  apiKey: process.env.LANGWATCH_API_KEY
});

const prompt = await langwatch.prompts.get("customer-support");
const compiledPrompt = prompt.compile({
  name: "John Doe",
  product: "LangWatch"
});

console.log(compiledPrompt.content);
// Output: "Hello John Doe! How can I help you with LangWatch today?"
```
LangChain Integration
```typescript
import { LangWatchCallbackHandler } from "langwatch/observability/instrumentation/langchain";
import { ChatOpenAI } from "@langchain/openai";
import { setupObservability } from "langwatch/observability/node";

setupObservability();

const handler = new LangWatchCallbackHandler();
const llm = new ChatOpenAI({
  callbacks: [handler],
  model: "gpt-5-mini"
});

// All operations are automatically traced
const response = await llm.invoke("What is the capital of France?");
```
Custom Span Processing
```typescript
import { LangWatchExporter, FilterableBatchSpanProcessor } from "langwatch";
import { setupObservability } from "langwatch/observability/node";

const processor = new FilterableBatchSpanProcessor(
  new LangWatchExporter(),
  [
    {
      fieldName: "span_name",
      matchValue: "health-check",
      matchOperation: "exact_match"
    }
  ]
);

setupObservability({
  langwatch: "disabled", // Prevent double exporting to LangWatch
  spanProcessors: [processor]
});
```
Advanced Setup with Data Capture
```typescript
import { setupObservability } from "langwatch/observability/node";

setupObservability({
  langwatch: {
    apiKey: process.env.LANGWATCH_API_KEY,
    processorType: "batch"
  },
  serviceName: "my-service",
  attributes: {
    "service.version": "1.0.0",
    "deployment.environment.name": "production"
  },
  dataCapture: (context) => {
    // Don't capture sensitive data in production
    if (context.environment === "production" &&
        context.operationName.includes("password")) {
      return "none";
    }
    return "all";
  },
  debug: {
    consoleTracing: true,
    logLevel: "debug"
  }
});
```
Graceful Shutdown
```typescript
import { setupObservability } from "langwatch/observability/node";

const { shutdown } = setupObservability({
  langwatch: {
    apiKey: process.env.LANGWATCH_API_KEY
  }
});

// Graceful shutdown
process.on("SIGTERM", async () => {
  console.log("Shutting down observability...");
  await shutdown();
  console.log("Observability shutdown complete");
  process.exit(0);
});
```
Start with the Integration Guide for a quick setup, then refer to this API reference for detailed configuration options.