Protip: want to get started even faster? Copy our llms.txt and ask an AI to do this integration for you.
Prerequisites
Before you start, make sure you have:
- Node.js 18+ installed
- A LangWatch account (sign up at app.langwatch.ai)
- Your LangWatch API key from the dashboard
- An OpenAI API key (for the LLM example)
Quick Start (5 minutes)
Step 1: Install Dependencies
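Assuming you use npm (the ai and @ai-sdk/openai packages are only needed for the example below):

```bash
npm install langwatch ai @ai-sdk/openai
```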
The @ai-sdk/openai and ai packages are only required for the example in this guide. You can skip them if you’re only looking to install the LangWatch SDK.
Step 2: Set Up API Keys
- LangWatch API Key:
  - Go to app.langwatch.ai and sign up
  - Create a new project
  - Copy your API key from the project settings
- OpenAI API Key:
  - Get your API key from platform.openai.com
- Set environment variables:
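For example, in your shell (OPENAI_API_KEY is the variable the AI SDK reads; LANGWATCH_API_KEY is assumed to be picked up automatically by the LangWatch SDK):

```bash
export LANGWATCH_API_KEY="your-langwatch-api-key"
export OPENAI_API_KEY="your-openai-api-key"
```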
Step 3: Your First LLM Trace
Create a new file app.ts:
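A minimal sketch of app.ts. The setupObservability import path and the input/output attribute names are assumptions, so verify them against the API Reference and the Semantic Conventions guide; the tracing itself uses the standard OpenTelemetry API, which the LangWatch SDK builds on:

```typescript
import { setupObservability } from "langwatch/observability/node"; // assumed import path
import { trace } from "@opentelemetry/api";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Assumed to read LANGWATCH_API_KEY from the environment.
setupObservability();

const tracer = trace.getTracer("my-app");

async function greetUser(name: string): Promise<string> {
  // Wrap the whole operation in a span named "greet-user".
  return tracer.startActiveSpan("greet-user", async (span) => {
    try {
      const { text } = await generateText({
        model: openai("gpt-4o-mini"),
        prompt: `Write a one-line friendly greeting for ${name}.`,
      });
      // Attribute names are illustrative; see the Semantic Conventions guide.
      span.setAttribute("input.value", name);
      span.setAttribute("output.value", text);
      return text;
    } finally {
      span.end();
    }
  });
}

greetUser("Ada").then(console.log);
```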
Step 4: Run and See Results
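Run the file with a TypeScript runner of your choice, for example tsx:

```bash
npx tsx app.ts
```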
What you’ll see: A trace named “greet-user” with input/output data, timing, and status.
What Just Happened?
Let’s break down what we just set up:
- Trace: The entire greetUser function execution
- Span: The individual operation within the trace
- Input/Output: The data flowing through your function
- Timing: How long each operation took
- Status: Whether the operation succeeded
Core Concepts
Think of LangWatch like a debugger for your LLM applications:
- Traces = Complete user interactions (e.g., “What’s the weather?”)
- Spans = Individual steps within a trace (e.g., “LLM call”, “database query”)
- Threads = Conversations (group related traces together)
- Users = Individual users (for analytics)
For detailed explanations of all concepts, see our Concepts Guide.
For consistent observability across your application, learn about Semantic Conventions - standardized naming guidelines for attributes and metadata.
Integrations
LangWatch offers seamless integrations with many popular TypeScript libraries and frameworks. These integrations provide automatic instrumentation, capturing relevant data from your LLM applications with minimal setup. For the list of currently supported integrations, along with setup instructions and available features, see our integration documentation. Each integration guide includes framework-specific examples and best practices.
Common Development Scenarios
Scenario 1: LLM Application
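A sketch of a typical LLM endpoint, reusing the imports and tracer from app.ts above. The gen_ai.request.model attribute follows the OpenTelemetry GenAI conventions; output.value is illustrative:

```typescript
// Reuses the imports and tracer from app.ts above.
async function summarize(document: string): Promise<string> {
  return tracer.startActiveSpan("summarize-document", async (span) => {
    try {
      const { text } = await generateText({
        model: openai("gpt-4o-mini"),
        system: "Summarize the user's document in three sentences.",
        prompt: document,
      });
      // gen_ai.* follows OpenTelemetry GenAI conventions; output.value is illustrative.
      span.setAttribute("gen_ai.request.model", "gpt-4o-mini");
      span.setAttribute("output.value", text);
      return text;
    } finally {
      span.end();
    }
  });
}
```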
Scenario 2: RAG Application
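A sketch of a RAG pipeline with one span per stage, again reusing the quick-start setup. retrieveDocs is a hypothetical stand-in for your vector store query:

```typescript
// Hypothetical retriever; replace with your vector store query.
async function retrieveDocs(query: string): Promise<string[]> {
  return [`Example context for: ${query}`];
}

async function answerWithRag(question: string): Promise<string> {
  return tracer.startActiveSpan("rag-pipeline", async (parent) => {
    try {
      // Child span for the retrieval step.
      const docs = await tracer.startActiveSpan("retrieve-context", async (span) => {
        try {
          return await retrieveDocs(question);
        } finally {
          span.end();
        }
      });

      // The generation step: wrap it in its own span as in the quick start,
      // or let a framework integration instrument it automatically.
      const { text } = await generateText({
        model: openai("gpt-4o-mini"),
        system: `Answer using only this context:\n${docs.join("\n")}`,
        prompt: question,
      });
      return text;
    } finally {
      parent.end();
    }
  });
}
```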
For consistent attribute naming and TypeScript autocomplete support, see our Semantic Conventions guide. For advanced span management techniques, check out Manual Instrumentation.
Scenario 3: Conversation Threading
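A sketch of grouping traces into a conversation, reusing the quick-start setup. The langwatch.thread_id and langwatch.user_id attribute keys are assumptions, so verify them in the Semantic Conventions guide:

```typescript
// Reuses the imports and tracer from app.ts above.
async function handleChatTurn(threadId: string, userId: string, message: string) {
  return tracer.startActiveSpan("chat-turn", async (span) => {
    try {
      // Assumed attribute keys; verify them in the Semantic Conventions guide.
      span.setAttribute("langwatch.thread_id", threadId); // groups traces into one conversation
      span.setAttribute("langwatch.user_id", userId); // ties the trace to a user for analytics
      const { text } = await generateText({
        model: openai("gpt-4o-mini"),
        prompt: message,
      });
      return text;
    } finally {
      span.end();
    }
  });
}
```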
Configuration
Basic Configuration
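A sketch of explicit configuration. The option shape below is an assumption, so check the API Reference for the exact fields:

```typescript
// All option names below are assumptions; see the API Reference.
const handle = setupObservability({
  langwatch: {
    apiKey: process.env.LANGWATCH_API_KEY,
  },
  serviceName: "my-app",
});
```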
Environment-Specific Setup
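A sketch that switches the span processor by environment. processorType: 'batch' is referenced in the Troubleshooting section below; 'simple' and the option placement are assumptions:

```typescript
const isProd = process.env.NODE_ENV === "production";

const handle = setupObservability({
  // Option names assumed; see the API Reference.
  langwatch: { apiKey: process.env.LANGWATCH_API_KEY },
  // Batch span processing reduces export overhead in production.
  processorType: isProd ? "batch" : "simple",
});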
Graceful Shutdown
The setupObservability function returns an ObservabilityHandle that provides a shutdown method for graceful cleanup. This ensures all pending traces are exported before your application terminates.
Automatic Shutdown
By default, LangWatch automatically handles shutdown when your application receives a SIGTERM signal, so no extra code is required.
Manual Shutdown
For environments where you can’t listen to SIGTERM or need custom shutdown logic, you can manually call the shutdown method:
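A sketch using the handle returned by setupObservability, with SIGINT as an example of a custom hook:

```typescript
const handle = setupObservability();

// Custom shutdown hook, e.g. for platforms that emit their own signals.
process.once("SIGINT", async () => {
  await handle.shutdown(); // flushes pending traces before exit
  process.exit(0);
});
```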
What Happens During Shutdown
The shutdown process ensures data integrity:
- Flushes pending traces to the exporter
- Closes the trace exporter connection
- Shuts down the tracer provider
- Cleans up registered instrumentations
Always call shutdown() before your application exits to prevent data loss. The method is safe to call multiple times. If you don’t call shutdown(), some traces may be lost when your application terminates abruptly.
Development Workflow
Local Development
- Set up your environment and run your app (see the snippet below)
- Check dashboard: Visit app.langwatch.ai to see traces
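For example:

```bash
export LANGWATCH_API_KEY="your-langwatch-api-key"
npx tsx app.ts
```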
Debugging
Enable console logging for local development:
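A hypothetical sketch; whether setupObservability exposes a flag like this, check the API Reference and the Debugging and Troubleshooting guide:

```typescript
const handle = setupObservability({
  debug: true, // hypothetical flag: log spans to the console during development
});
```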
Troubleshooting
Common Issues
No traces appearing in dashboard
- Check your API key is correct
- Verify network connectivity to app.langwatch.ai
- Ensure setupObservability is called before any tracing
- Check browser console for errors
- See Debugging and Troubleshooting for detailed solutions
High memory usage
- Use batch processing: processorType: 'batch'
- Implement graceful shutdown
- Consider reducing data capture in production
Performance impact
- Tracer overhead is minimal (~1-2ms per span)
- Use module-level tracers (not function-level)
- Consider sampling in high-traffic scenarios
Getting Help
- Documentation: docs.langwatch.ai
- GitHub: github.com/langwatch/langwatch
- Discord: discord.gg/langwatch
Next Steps
Now that you have basic tracing working, explore:
- API Reference - Complete API documentation for the LangWatch TypeScript SDK
- Manual Instrumentation - Advanced span management and fine-grained control
- Semantic Conventions - Standardized naming guidelines for attributes and metadata
- Debugging and Troubleshooting - Debug tracing issues and optimize performance
- OpenTelemetry Migration - Migrate your existing OpenTelemetry setup to LangWatch
- Framework Integrations - Specific guides for OpenAI, LangChain, Azure, and more
Start simple and add complexity gradually. You can always add more detailed tracing later as your application grows!