LangWatch + n8n Integration
Integrate LangWatch with your n8n workflows to get comprehensive LLM observability, evaluation capabilities, and prompt management. This integration provides both automatic workflow instrumentation and powerful LangWatch nodes for building intelligent automation workflows.

- LangWatch Nodes: Add LangWatch nodes to your workflows for evaluation, prompts, and datasets
- Workflow Observability: Automatically trace your n8n workflows with OpenTelemetry instrumentation
Quick Start
1. Get your LangWatch API Key
   Sign up at app.langwatch.ai and get your API key from the project settings.
2. Install LangWatch Nodes
   To install the nodes into a local n8n instance:
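   A minimal sketch for a self-hosted instance, assuming n8n's default custom-nodes folder at ~/.n8n/nodes (you can also install community nodes from Settings → Community Nodes in the n8n UI):

```bash
# Install the LangWatch community nodes into a local/self-hosted n8n instance.
# ~/.n8n/nodes is n8n's default folder for manually installed community nodes;
# adjust the path if your instance uses a different user folder.
mkdir -p ~/.n8n/nodes && cd ~/.n8n/nodes
npm install @langwatch/n8n-nodes-langwatch

# Restart n8n so the new nodes are loaded.
```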
3. Set up Credentials
   In n8n, go to Settings → Credentials → New → LangWatch API and add your API key.
4. Start Building
   Add LangWatch nodes to your workflows and start building intelligent automation!
LangWatch Nodes
The LangWatch n8n nodes add evaluation, prompt management, and dataset processing capabilities to your workflows.

Available Nodes
- Dataset Batch Trigger: Process dataset rows sequentially with experiment context
- Dataset Row Trigger: Fetch single dataset rows with cursor management
- Evaluation: Run evaluators and record results with multiple modes
- Prompt: Retrieve and compile prompts from LangWatch Prompt Manager
Node Types

Triggers:
- Dataset Batch Trigger: Emits one item per dataset row sequentially
- Dataset Row Trigger: Fetches single dataset rows with cursor management

Actions:
- Evaluation: Runs evaluators and records results with multiple operation modes
- Prompt: Retrieves and compiles prompts from LangWatch Prompt Manager
Dataset Batch Trigger
Process your datasets row by row with full experiment context for batch evaluations.
Features:
- Sequential row processing with progress tracking
- Experiment context initialization for batch evaluations
- Flexible row selection (start/end, step size, limits)
- Shuffle support with seed for randomized processing
Configuration:
- Dataset: Slug or ID of your LangWatch dataset
- Experiment: Enable experiment context with ID/name
- Row Processing: Configure start row, end row, step size, and limits
- Emit Interval: Control processing speed (milliseconds)
Each emitted item contains:
- entry: Your dataset row payload
- row_number, row_id, datasetId, projectId: Row metadata
- _progress: Processing progress information
- _langwatch.dataset: Dataset context
- _langwatch.experiment: Experiment context (when enabled)
Dataset Row Trigger
Fetch individual dataset rows with internal cursor management for stepwise processing.
Features:
- Single row processing per execution
- Internal cursor management
- Reset progress capability
- Shuffle rows with seed support

Use cases:
- Scheduled dataset processing
- Step-by-step evaluation workflows
- Incremental data processing
Evaluation Node
Run evaluators and record results with multiple operation modes for comprehensive evaluation workflows.
Operation modes:
- Auto (Recommended): Automatically selects behavior based on available inputs and context
- Check If Evaluating
- Record Result
- Run and Record
- Set Outputs (Dataset)
Configuration:
- Run ID: Override or infer from _langwatch.batch.runId
- Evaluator: Manual selection or dropdown of available evaluators
- Evaluation Data: Input data for the evaluation
- Guardrail Settings: Configure asGuardrail and failOnFail options
- Dataset Output: Map results to dataset fields
Prompt Node
Retrieve and compile prompts from LangWatch Prompt Manager with variable substitution.
Features:
- Prompt selection by handle or ID
- Version control (latest or specific version)
- Variable compilation with multiple sources
- Strict compilation mode for missing variables
Variable sources:
- Manual Variables: Define name/value pairs directly in the node configuration
- Input Data Variables
- Mixed Mode
Configuration:
- Prompt Selection: Manual (handle/ID) or dropdown selection
- Version: Latest or specific version
- Compile Prompt: Enable/disable variable substitution
- Strict Compilation: Fail if required variables are missing
Workflow Observability
Automatically instrument your n8n workflows with OpenTelemetry to capture comprehensive observability data.
Workflow observability is only available for self-hosted n8n instances, not n8n Cloud.
Features
- Automatic Workflow Tracing: Capture complete workflow execution with spans for each node
- Error Tracking: Automatic error capture and metadata collection
- I/O Capture: Safe JSON input/output capture (toggleable)
- Node Filtering: Include/exclude specific nodes from tracing
- Flexible Deployment: Works with Docker, bare metal, or programmatic setup
Setup Options
- Docker - Custom Image: Create a custom n8n image with LangWatch observability pre-installed (see the sketch after this list)
- Docker - Volume Mount
- Bare Metal
- Programmatic
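A rough sketch of the custom-image approach only: the observability package name and hook file path below are placeholders, not published values. EXTERNAL_HOOK_FILES is n8n's own mechanism for loading backend hook files, and LANGWATCH_API_KEY is the standard LangWatch key variable.

```bash
# Hypothetical custom image that bakes a LangWatch hook file into n8n.
# Replace the placeholder package name and hook path with the real ones
# from the LangWatch documentation.
cat > Dockerfile <<'EOF'
FROM n8nio/n8n:latest
USER root
RUN npm install -g @langwatch/n8n-observability   # placeholder package name
USER node
EOF

docker build -t n8n-langwatch .

docker run -d --name n8n -p 5678:5678 \
  -e EXTERNAL_HOOK_FILES="/usr/local/lib/node_modules/@langwatch/n8n-observability/hooks.js" \
  -e LANGWATCH_API_KEY="<your-api-key>" \
  n8n-langwatch
```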
Configuration
The observability hook is configured through environment variables:
- API key: Your LangWatch API key. Get this from your LangWatch project settings.
- Service name: Service name for your n8n instance in LangWatch.
- Include list: Comma-separated list of node names/types to include in tracing. If not set, all nodes are traced.
- Exclude list: Comma-separated list of node names/types to exclude from tracing.
- Input capture: Whether to capture node input data. Set to false to disable.
- Output capture: Whether to capture node output data. Set to false to disable.
- SDK debug: Enable LangWatch SDK debug logging.
- Hook debug: Enable observability hook debugging and diagnostics.
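The exact variable names are not listed above. As a purely illustrative sketch for a bare-metal setup, LANGWATCH_API_KEY is the standard LangWatch variable and OTEL_SERVICE_NAME is the standard OpenTelemetry one (assumed to be honored here); the remaining names are hypothetical placeholders:

```bash
# Illustrative only: check the LangWatch docs for the variable names
# the observability hook actually reads.
export LANGWATCH_API_KEY="<your-api-key>"      # standard LangWatch SDK variable
export OTEL_SERVICE_NAME="n8n"                 # standard OTel service-name variable (assumed)
export LANGWATCH_CAPTURE_INPUT="false"         # hypothetical: disable node input capture
export LANGWATCH_DEBUG="true"                  # hypothetical: SDK debug logging
```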
Verification
Verify your observability setup is working:
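A minimal sketch, assuming a Docker deployment with a container named `n8n` (adjust for your environment):

```bash
# Look for the LangWatch observability startup message in the n8n logs
docker logs n8n 2>&1 | grep -i langwatch

# Then execute any workflow and confirm a corresponding trace appears
# in your LangWatch project dashboard.
```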
Complete Integration Example
Here’s how to combine both LangWatch nodes and observability for a comprehensive evaluation workflow:
- Dataset Batch Trigger - Process evaluation dataset
- Prompt Node - Retrieve and compile prompts with variables
- HTTP Request - Call your LLM API
- Evaluation Node - Run evaluators and record results
- Observability - Automatic tracing of all steps
LangWatch Concepts
For a complete understanding of LangWatch concepts like traces, spans, threads, and user IDs, see our Concepts Guide.
- Traces: Each n8n workflow execution becomes a trace in LangWatch
- Spans: Individual nodes within a workflow become spans
- Threads: Group related workflow executions using thread_id
- User ID: Track which user triggered the workflow
- Labels: Tag workflows for organization and filtering
Troubleshooting
Nodes not appearing in n8n
- Ensure the package is installed: npm list @langwatch/n8n-nodes-langwatch
- Restart n8n after installation
- Check n8n logs for any loading errors
Observability not working
- Verify environment variables are set correctly
- Check that the hook file path is correct
- Look for the startup message in n8n logs
- Ensure you’re using self-hosted n8n (not n8n Cloud)
Credentials not working
- Verify your API key is correct in LangWatch dashboard
- Check the endpoint URL (should be https://app.langwatch.ai for cloud)
- Test the connection in the credential settings
No traces appearing in LangWatch
- Check that workflows are actually executing
- Verify the service name in LangWatch matches your configuration
- Look for any error messages in n8n logs
- Ensure your LangWatch project is active