
LangWatch + n8n Integration

Integrate LangWatch with your n8n workflows to get comprehensive LLM observability, evaluation capabilities, and prompt management. This integration provides both automatic workflow instrumentation and powerful LangWatch nodes for building intelligent automation workflows.

Quick Start

1. Get your LangWatch API Key
   Sign up at app.langwatch.ai and get your API key from the project settings.

2. Install LangWatch Nodes
   To install into a local n8n instance:

   cd ~/.n8n/nodes

   npm i @langwatch/n8n-observability @langwatch/n8n-nodes-langwatch

   export EXTERNAL_HOOK_FILES=$(node -e "console.log(require.resolve('@langwatch/n8n-observability/hooks'))")
   export N8N_OTEL_SERVICE_NAME=my-n8n-instance-name
   export LANGWATCH_API_KEY=sk-lw-...

3. Set up Credentials
   In n8n, go to Settings → Credentials → New → LangWatch API and add your API key.

4. Start Building
   Add LangWatch nodes to your workflows and start building intelligent automation.

LangWatch Nodes

The LangWatch n8n nodes provide powerful capabilities for building intelligent workflows with evaluation, prompt management, and dataset processing.

Available Nodes

Node Types

Triggers:
  • Dataset Batch Trigger: Emits one item per dataset row sequentially
  • Dataset Row Trigger: Fetches single dataset rows with cursor management
Actions:
  • Evaluation: Runs evaluators and records results with multiple operation modes
  • Prompt: Retrieves and compiles prompts from LangWatch Prompt Manager

Dataset Batch Trigger

Process your datasets row by row with full experiment context for batch evaluations.
[Screenshot: Dataset Batch Trigger node configuration]
Key Features:
  • Sequential row processing with progress tracking
  • Experiment context initialization for batch evaluations
  • Flexible row selection (start/end, step size, limits)
  • Shuffle support with seed for randomized processing
Configuration Options:
  • Dataset: Slug or ID of your LangWatch dataset
  • Experiment: Enable experiment context with ID/name
  • Row Processing: Configure start row, end row, step size, and limits
  • Emit Interval: Control processing speed (milliseconds)
Output Fields:
  • entry - Your dataset row payload
  • row_number, row_id, datasetId, projectId - Row metadata
  • _progress - Processing progress information
  • _langwatch.dataset - Dataset context
  • _langwatch.experiment - Experiment context (when enabled)
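The row-selection options above (start/end, step size, limit, seeded shuffle) can be sketched in plain JavaScript. This is an illustrative model of the trigger's behavior, not the node's actual implementation; the function and option names are hypothetical:

```javascript
// Hypothetical sketch of the Dataset Batch Trigger's row selection:
// slice by start/end, apply step size, optionally shuffle with a seed,
// then cap at the limit and attach progress metadata to each item.
function selectRows(rows, { startRow = 0, endRow = rows.length, step = 1, limit = Infinity, shuffleSeed = null } = {}) {
  let selected = rows.slice(startRow, endRow).filter((_, i) => i % step === 0);
  if (shuffleSeed !== null) {
    // Small seeded PRNG (mulberry32) so shuffled runs are reproducible.
    let s = shuffleSeed >>> 0;
    const rand = () => {
      s = (s + 0x6d2b79f5) >>> 0;
      let t = Math.imul(s ^ (s >>> 15), 1 | s);
      t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
      return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
    };
    for (let i = selected.length - 1; i > 0; i--) {
      const j = Math.floor(rand() * (i + 1));
      [selected[i], selected[j]] = [selected[j], selected[i]];
    }
  }
  const total = Math.min(selected.length, limit);
  return selected.slice(0, limit).map((entry, i) => ({
    entry,                                  // the dataset row payload
    row_number: i,                          // row metadata
    _progress: { index: i + 1, total },     // processing progress
  }));
}

const items = selectRows([{ q: 'a' }, { q: 'b' }, { q: 'c' }, { q: 'd' }], { step: 2 });
// step 2 keeps every second row of the selected range
```

With `step: 2` over four rows, the sketch emits two items (`a` and `c`), each carrying `entry`, `row_number`, and `_progress` fields analogous to the output fields listed above.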

Dataset Row Trigger

Fetch individual dataset rows with internal cursor management for stepwise processing.
[Screenshot: Dataset Row Trigger node configuration]
Key Features:
  • Single row processing per execution
  • Internal cursor management
  • Reset progress capability
  • Shuffle rows with seed support
Use Cases:
  • Scheduled dataset processing
  • Step-by-step evaluation workflows
  • Incremental data processing
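A minimal sketch of the cursor behavior described above, assuming a simple stored index (the names are hypothetical; the real node manages its cursor internally):

```javascript
// Hypothetical model of the Dataset Row Trigger: each execution
// returns one row and advances a persisted cursor; reset() models
// the node's "reset progress" capability.
function makeRowTrigger(rows) {
  let cursor = 0; // in n8n this would live in persisted workflow static data
  return {
    next() {
      if (cursor >= rows.length) return { done: true };
      return { done: false, entry: rows[cursor], row_number: cursor++ };
    },
    reset() { cursor = 0; },
  };
}

const trigger = makeRowTrigger(['row A', 'row B']);
trigger.next(); // first execution returns 'row A'
```

On a schedule, each workflow run calls `next()` once, which is what makes the trigger suitable for stepwise and incremental processing.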

Evaluation Node

Run evaluators and record results with multiple operation modes for comprehensive evaluation workflows.
[Screenshot: Evaluation node configuration showing auto mode]
Key Parameters:
  • Run ID: Override or infer from _langwatch.batch.runId
  • Evaluator: Manual selection or dropdown of available evaluators
  • Evaluation Data: Input data for the evaluation
  • Guardrail Settings: Configure asGuardrail and failOnFail options
  • Dataset Output: Map results to dataset fields
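The interaction of the `asGuardrail` and `failOnFail` settings can be sketched as follows. The result field names (`passed`, `score`, `details`) are illustrative assumptions, not the node's documented output schema:

```javascript
// Hypothetical sketch of guardrail handling: when asGuardrail is set,
// a failing evaluation either halts the workflow (failOnFail) or just
// annotates the item and lets the workflow continue.
function handleEvaluation(result, { asGuardrail = false, failOnFail = false } = {}) {
  if (asGuardrail && result.passed === false) {
    if (failOnFail) {
      // Surfaces as a node error, stopping this execution path.
      throw new Error(`Guardrail failed: ${result.details ?? 'no details'}`);
    }
    return { passed: false, score: result.score };
  }
  return { passed: result.passed !== false, score: result.score };
}
```

In other words, `asGuardrail` alone records the failure; adding `failOnFail` turns a failed evaluation into a hard stop.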

Prompt Node

Retrieve and compile prompts from LangWatch Prompt Manager with variable substitution.
[Screenshot: Prompt node configuration interface]
Key Features:
  • Prompt selection by handle or ID
  • Version control (latest or specific version)
  • Variable compilation with multiple sources
  • Strict compilation mode for missing variables
Variable Sources:
  • Manual Variables: Define name/value pairs directly in the node configuration.
  • Input Data Variables: Read variable values from the node's incoming item data.
  • Mixed Mode: Combine manual variables with values from input data.
Configuration Options:
  • Prompt Selection: Manual (handle/ID) or dropdown selection
  • Version: Latest or specific version
  • Compile Prompt: Enable/disable variable substitution
  • Strict Compilation: Fail if required variables are missing
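Variable substitution with strict mode can be sketched like this, assuming `{{variable}}`-style placeholders (the function name and placeholder syntax are illustrative, not LangWatch's actual template format):

```javascript
// Hypothetical sketch of prompt compilation: substitute {{variable}}
// placeholders from a variables object. In strict mode a missing
// variable throws; otherwise the placeholder is left untouched.
function compilePrompt(template, variables, { strict = false } = {}) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name) => {
    if (name in variables) return String(variables[name]);
    if (strict) throw new Error(`Missing required variable: ${name}`);
    return match;
  });
}

compilePrompt('Summarize {{text}} in {{lang}}', { text: 'the report', lang: 'English' });
// → 'Summarize the report in English'
```

Strict compilation is useful in evaluation workflows where a silently unfilled placeholder would corrupt results.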

Workflow Observability

Automatically instrument your n8n workflows with OpenTelemetry to capture comprehensive observability data.
[Screenshot: n8n observability setup showing workflow instrumentation]
Note: Workflow observability is only available for self-hosted n8n instances, not n8n Cloud.

Features

  • Automatic Workflow Tracing: Capture complete workflow execution with spans for each node
  • Error Tracking: Automatic error capture and metadata collection
  • I/O Capture: Safe JSON input/output capture (toggleable)
  • Node Filtering: Include/exclude specific nodes from tracing
  • Flexible Deployment: Works with Docker, bare metal, or programmatic setup

Setup Options

  • Docker - Custom Image
  • Docker - Volume Mount
  • Bare Metal
  • Programmatic
The example below uses the Docker custom-image approach: create a custom n8n image with LangWatch observability pre-installed, then build and run it.
FROM n8nio/n8n:latest
USER root
WORKDIR /usr/local/lib/node_modules/n8n
RUN npm install @langwatch/n8n-observability
ENV EXTERNAL_HOOK_FILES=/usr/local/lib/node_modules/n8n/node_modules/@langwatch/n8n-observability/dist/hooks.cjs
USER node
docker build -t my-n8n-langwatch .
docker run -p 5678:5678 \
  -e LANGWATCH_API_KEY=your_api_key \
  -e N8N_OTEL_SERVICE_NAME=my-n8n \
  my-n8n-langwatch

Configuration

LANGWATCH_API_KEY (string, required)
  Your LangWatch API key. Get this from your LangWatch project settings.
N8N_OTEL_SERVICE_NAME (string, default: "n8n")
  Service name for your n8n instance in LangWatch.
N8N_OTEL_NODE_INCLUDE (string)
  Comma-separated list of node names/types to include in tracing. If not set, all nodes are traced.
N8N_OTEL_NODE_EXCLUDE (string)
  Comma-separated list of node names/types to exclude from tracing.
N8N_OTEL_CAPTURE_INPUT (boolean, default: true)
  Whether to capture node input data. Set to false to disable.
N8N_OTEL_CAPTURE_OUTPUT (boolean, default: true)
  Whether to capture node output data. Set to false to disable.
LW_DEBUG (boolean, default: false)
  Enable LangWatch SDK debug logging.
N8N_OTEL_DEBUG (boolean, default: false)
  Enable observability hook debugging and diagnostics.
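The include/exclude semantics above can be modeled in a few lines. This is a sketch of the documented behavior, not the hook's actual matching code (its real rules, e.g. around node types vs. names, may differ):

```javascript
// Illustrative sketch of N8N_OTEL_NODE_INCLUDE / N8N_OTEL_NODE_EXCLUDE:
// if an include list is set, only listed nodes are traced; the exclude
// list then removes nodes from whatever would otherwise be traced.
function shouldTrace(nodeName, env = process.env) {
  const parse = (v) => (v ? v.split(',').map((s) => s.trim()).filter(Boolean) : null);
  const include = parse(env.N8N_OTEL_NODE_INCLUDE);
  const exclude = parse(env.N8N_OTEL_NODE_EXCLUDE) ?? [];
  if (include && !include.includes(nodeName)) return false;
  return !exclude.includes(nodeName);
}

shouldTrace('HTTP Request', { N8N_OTEL_NODE_EXCLUDE: 'Set, NoOp' }); // → true
```

With neither variable set, everything is traced; exclusion always wins over inclusion for the same node.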

Verification

First, verify that the hooks module resolves:
node -e "console.log(require.resolve('@langwatch/n8n-observability/hooks'))"
Then, after starting n8n, look for this startup message in the logs:
[@langwatch/n8n-observability] observability ready and patches applied

Complete Integration Example

Here’s how to combine both LangWatch nodes and observability for a comprehensive evaluation workflow:
[Screenshot: Complete n8n workflow with LangWatch nodes and observability]
Workflow Steps:
  1. Dataset Batch Trigger - Process evaluation dataset
  2. Prompt Node - Retrieve and compile prompts with variables
  3. HTTP Request - Call your LLM API
  4. Evaluation Node - Run evaluators and record results
  5. Observability - Automatic tracing of all steps

LangWatch Concepts

For a complete understanding of LangWatch concepts like traces, spans, threads, and user IDs, see our Concepts Guide.
Key concepts for n8n integration:
  • Traces: Each n8n workflow execution becomes a trace in LangWatch
  • Spans: Individual nodes within a workflow become spans
  • Threads: Group related workflow executions using thread_id
  • User ID: Track which user triggered the workflow
  • Labels: Tag workflows for organization and filtering
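The mapping above can be pictured as a plain data structure: one trace per workflow execution, one span per executed node, with thread/user/label metadata attached. This is an illustrative model only, not the observability hook's API or LangWatch's wire format:

```javascript
// Illustrative model of how an n8n execution maps to LangWatch concepts:
// the execution becomes a trace; each node run becomes a span.
function buildTrace(executionId, nodeRuns, { threadId, userId, labels = [] } = {}) {
  return {
    traceId: `n8n-exec-${executionId}`,                     // one trace per execution
    metadata: { thread_id: threadId, user_id: userId, labels },
    spans: nodeRuns.map((run, i) => ({
      spanId: `span-${i}`,                                  // one span per node
      name: run.nodeName,
      input: run.input,
      output: run.output,
    })),
  };
}

const trace = buildTrace(42, [
  { nodeName: 'Dataset Batch Trigger', input: null, output: { entry: { q: 'hi' } } },
  { nodeName: 'Prompt', input: { q: 'hi' }, output: 'compiled prompt' },
], { threadId: 'thread-1', userId: 'user-7', labels: ['eval'] });
```

Grouping several executions under the same `thread_id` is what lets you follow a multi-run conversation or batch in the LangWatch UI.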

Troubleshooting

Nodes not appearing in n8n:
  • Ensure the package is installed: npm list @langwatch/n8n-nodes-langwatch
  • Restart n8n after installation
  • Check n8n logs for any loading errors
Observability not working:
  • Verify environment variables are set correctly
  • Check that the hook file path is correct
  • Look for the startup message in n8n logs
  • Ensure you’re using self-hosted n8n (not n8n Cloud)
Credential or connection issues:
  • Verify your API key is correct in LangWatch dashboard
  • Check the endpoint URL (should be https://app.langwatch.ai for cloud)
  • Test the connection in the credential settings
No traces appearing:
  • Check that workflows are actually executing
  • Verify the service name in LangWatch matches your configuration
  • Look for any error messages in n8n logs
  • Ensure your LangWatch project is active

Resources