Installation
For a quick start guide with step-by-step instructions, see the Go Integration Guide. For practical examples of creating traces and spans, see the Core Concepts section in the guide.
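The SDK is distributed as a standard Go module. Assuming the module path used by the import paths elsewhere in this reference, installation is the usual go get:

```bash
go get github.com/langwatch/langwatch/sdk-go
```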
Core SDK (langwatch)
This package contains the primary functions for setting up LangWatch and creating traces and spans.
Setup
Setup() initializes the LangWatch OpenTelemetry exporter and sets it as the global tracer provider. It should be called once when your application starts.
- ctx - Context for the setup operation
- shutdown - Function that should be deferred to ensure traces are flushed on exit
Always call the shutdown function to ensure traces are properly flushed when your application exits.
For a complete setup example with environment variables and error handling, see the Setup section in the integration guide.
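In outline, usage looks like the sketch below. The exact Setup signature (whether an error is returned alongside the shutdown function, and what the shutdown function accepts) is an assumption here; the integration guide has the authoritative version.

```go
package main

import (
	"context"
	"log"

	langwatch "github.com/langwatch/langwatch/sdk-go"
)

func main() {
	ctx := context.Background()

	// Assumed signature: Setup configures the LangWatch exporter (reading
	// LANGWATCH_API_KEY and optionally LANGWATCH_ENDPOINT from the environment),
	// registers it as the global tracer provider, and returns a shutdown function.
	shutdown, err := langwatch.Setup(ctx)
	if err != nil {
		log.Fatalf("failed to set up LangWatch: %v", err)
	}
	// Always defer shutdown so buffered traces are flushed before exit.
	defer shutdown(ctx)

	// ... application code ...
}
```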
Tracer
Tracer() retrieves a LangWatchTracer instance, which is a thin wrapper around an OpenTelemetry Tracer.
Parameter | Type | Description |
---|---|---|
instrumentationName | string | Name of the library or application being instrumented. |
opts | ...trace.TracerOption | Optional OpenTelemetry tracer options (e.g., trace.WithInstrumentationVersion). |
LangWatchTracer
The LangWatchTracer interface provides a Start method that mirrors OpenTelemetry’s but returns a LangWatchSpan.
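For instance (a sketch; the instrumentation and span names are arbitrary, and imports follow the setup example above):

```go
func handleRequest(ctx context.Context) {
	// Tracer mirrors the OpenTelemetry otel.Tracer helper, but the spans it
	// starts are LangWatchSpans.
	tracer := langwatch.Tracer("my-service")

	// Start has the same shape as trace.Tracer.Start and returns a LangWatchSpan.
	ctx, span := tracer.Start(ctx, "handle-request")
	defer span.End()

	// ctx now carries this span; pass it to downstream calls so their spans nest here.
	_ = ctx
}
```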
LangWatchSpan
The LangWatchSpan interface embeds the standard trace.Span and adds several helper methods for LangWatch-specific data; a combined usage sketch follows the method descriptions below.
Sets the span type for categorization in LangWatch. This enables specialized UI treatment and analytics.
Using span types is optional but highly recommended as it enables LangWatch to provide more tailored insights and visualizations.
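For example, marking a span as a direct LLM call (using one of the SpanType constants listed later on this page):

```go
func tracedLLMCall(ctx context.Context) {
	tracer := langwatch.Tracer("my-service")
	_, span := tracer.Start(ctx, "llm-call")
	defer span.End()

	// Categorize this span as an LLM call for LangWatch's UI and analytics.
	span.SetType(langwatch.SpanTypeLLM)
}
```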
Assigns a thread ID to group this trace with a conversation. Useful for multi-turn conversations.
All spans within the same trace will share the same thread ID, allowing you to group related interactions together.
Assigns a user ID to the trace for user-centric analytics and filtering.
Records a simple string as the span’s input. Ideal for user queries or simple text inputs.
Records a structured object (e.g., struct, map) as the span’s input, serialized to JSON. Use for complex request objects.
Records a simple string as the span’s output. Ideal for AI responses or simple text outputs.
Records a structured object as the span’s output, serialized to JSON. Use for complex response objects.
Sets the model identifier used for a request (e.g., an LLM call). This is the model you requested to use.
Sets the model identifier reported in a response. This is the actual model that processed your request.
The response model may differ from the request model, especially with OpenAI’s model updates.
Attaches a slice of retrieved context chunks for RAG analysis. This enables LangWatch to analyze the relevance and quality of retrieved documents.
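A combined sketch of the helpers above is shown below. SetType and the SpanType constants are confirmed elsewhere on this page; the remaining method names (SetThreadID, SetUserID, RecordInputString, RecordInput, SetRequestModel, SetResponseModel, RecordOutputString) are assumptions inferred from the descriptions, and the RAG-context helper is omitted because its exact name and chunk type are not shown here, so verify all of them against the package documentation.

```go
func answerQuestion(ctx context.Context, question string) {
	tracer := langwatch.Tracer("my-service")
	ctx, span := tracer.Start(ctx, "answer-question")
	defer span.End()

	// Categorization (confirmed API).
	span.SetType(langwatch.SpanTypeLLM)

	// Conversation grouping and user attribution (assumed method names).
	span.SetThreadID("thread-123")
	span.SetUserID("user-456")

	// Inputs: plain text or a structured, JSON-serialized object (assumed names).
	span.RecordInputString(question)
	span.RecordInput(map[string]any{"question": question, "locale": "en"})

	// Requested model vs. the model reported back by the provider (assumed names).
	span.SetRequestModel("gpt-4o-mini")
	span.SetResponseModel("gpt-4o-mini-2024-07-18")

	// Output as plain text (assumed name).
	span.RecordOutputString("The answer is 42.")

	_ = ctx // pass ctx to downstream calls so child spans nest under this one
}
```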
OpenAI Instrumentation
The github.com/langwatch/langwatch/sdk-go/instrumentation/openai package provides middleware for the official openai-go client.
For step-by-step instructions on setting up OpenAI instrumentation, see the OpenAI integration guide.
Middleware
Middleware() creates an openai.Middleware that automatically traces OpenAI API calls.
- instrumentationName - Name of your application or service
- opts - Optional configuration options

Options (...Option):
WithCaptureInput - Records the full input payload as a span attribute. This captures the complete request sent to the LLM.
Enabling input capture may include sensitive data in your traces. Ensure this aligns with your data privacy requirements.
WithCaptureOutput - Records the full response payload as a span attribute. For streams, this is the final accumulated response.
This is particularly useful for debugging and understanding what the LLM actually returned.
Sets the gen_ai.system attribute. Useful for identifying providers like "anthropic" or "azure". Defaults to "openai".
Specifies the trace.TracerProvider to use. Defaults to the global provider.
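A sketch of attaching the middleware to the openai-go client follows. WithCaptureInput and WithCaptureOutput appear in the attribute lists below; the otelopenai import alias, the no-argument option form, and the exact client constructor shape are assumptions that may vary with your openai-go version.

```go
import (
	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"

	otelopenai "github.com/langwatch/langwatch/sdk-go/instrumentation/openai"
)

func newTracedClient() {
	client := openai.NewClient(
		// Every request made through this client is traced automatically.
		option.WithMiddleware(otelopenai.Middleware("my-service",
			otelopenai.WithCaptureInput(),  // record full request payloads
			otelopenai.WithCaptureOutput(), // record full response payloads
		)),
	)
	_ = client // use client.Chat.Completions.New(...) as usual
}
```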
LangWatch Span Types
SpanType is a string constant used with span.SetType() to categorize spans in LangWatch for specialized UI treatment and analytics.
Constant | Description | Use Case |
---|---|---|
SpanTypeLLM | A call to a Large Language Model. | Direct LLM API calls, chat completions |
SpanTypeChain | A sequence of related operations or a sub-pipeline. | Multi-step processing, workflow orchestration |
SpanTypeTool | A call to an external tool or function. | Function calls, API integrations, database queries |
SpanTypeAgent | An autonomous agent’s operation or decision-making step. | Agent reasoning, decision points, planning |
SpanTypeRAG | An overarching RAG operation, often containing retrieval and LLM spans. | Complete RAG workflows |
SpanTypeRetrieval | The specific step of retrieving documents from a knowledge base. | Vector database queries, document search |
SpanTypeQuery | A generic database or API query. | SQL queries, REST API calls |
SpanTypeEmbedding | The specific step of generating embeddings. | Text embedding generation |
Using these span types is optional but highly recommended, as it enables LangWatch to provide more tailored insights and visualizations for your traces.
Collected Attributes
The OpenAI instrumentation automatically adds these attributes to spans:
Request Attributes
- gen_ai.system - AI system name (e.g., “openai”)
- gen_ai.request.model - Model used for the request
- gen_ai.request.temperature - Temperature parameter
- gen_ai.request.top_p - Top-p parameter
- gen_ai.request.top_k - Top-k parameter
- gen_ai.request.frequency_penalty - Frequency penalty
- gen_ai.request.presence_penalty - Presence penalty
- gen_ai.request.max_tokens - Maximum tokens
- langwatch.gen_ai.streaming - Boolean indicating streaming
- gen_ai.operation.name - Operation name (e.g., “completions”)
- langwatch.input.value - Input content (if WithCaptureInput enabled)
Response Attributes
- gen_ai.response.id - Response ID from the API
- gen_ai.response.model - Model that generated the response
- gen_ai.response.finish_reasons - Completion finish reasons
- gen_ai.usage.input_tokens - Number of input tokens used
- gen_ai.usage.output_tokens - Number of output tokens generated
- gen_ai.openai.response.system_fingerprint - OpenAI system fingerprint
- langwatch.output.value - Output content (if WithCaptureOutput enabled)
HTTP Attributes
Standard HTTP client attributes are also included:
- http.request.method - HTTP method
- url.path - Request path
- server.address - Server address
- http.response.status_code - HTTP status code
Request/Response Examples
Basic Chat Completion
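A sketch of a basic traced chat completion, assuming a client built with the middleware as shown earlier; the openai-go request field syntax differs slightly between SDK versions, so treat the params literal as illustrative.

```go
import (
	"context"
	"fmt"

	"github.com/openai/openai-go"
)

func basicChatCompletion(ctx context.Context, client openai.Client) error {
	// The middleware attached to the client records the gen_ai.* attributes
	// listed above for this call, including token usage and finish reasons.
	resp, err := client.Chat.Completions.New(ctx, openai.ChatCompletionNewParams{
		Model: openai.ChatModelGPT4o,
		Messages: []openai.ChatCompletionMessageParamUnion{
			openai.UserMessage("Say hello in one sentence."),
		},
	})
	if err != nil {
		return err
	}
	fmt.Println(resp.Choices[0].Message.Content)
	return nil
}
```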
RAG Pipeline
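A sketch of a RAG pipeline traced with nested, typed spans. The retrieve and generate helpers are hypothetical stand-ins for your vector search and LLM call; only Tracer, Start, SetType, and the SpanType constants from this page are relied on.

```go
func ragPipeline(ctx context.Context, question string) (string, error) {
	tracer := langwatch.Tracer("my-service")

	// Parent span for the whole RAG operation.
	ctx, ragSpan := tracer.Start(ctx, "rag-pipeline")
	defer ragSpan.End()
	ragSpan.SetType(langwatch.SpanTypeRAG)

	// Retrieval step gets its own typed child span.
	retrievalCtx, retrievalSpan := tracer.Start(ctx, "retrieve-documents")
	retrievalSpan.SetType(langwatch.SpanTypeRetrieval)
	docs, err := retrieve(retrievalCtx, question)
	retrievalSpan.End()
	if err != nil {
		return "", err
	}

	// Generation step is a typed LLM child span.
	llmCtx, llmSpan := tracer.Start(ctx, "generate-answer")
	llmSpan.SetType(langwatch.SpanTypeLLM)
	answer, err := generate(llmCtx, question, docs)
	llmSpan.End()
	return answer, err
}

// Hypothetical stand-ins for your retriever and LLM call.
func retrieve(ctx context.Context, q string) ([]string, error)              { return []string{"..."}, nil }
func generate(ctx context.Context, q string, docs []string) (string, error) { return "...", nil }
```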
Error Handling
All SDK methods handle errors gracefully. In case of failures:
- Serialization errors - Fallback to string representation
- Network errors - Logged but don’t interrupt application flow
- Invalid data - Sanitized or excluded from traces
Environment Variables
The SDK respects these environment variables:
- LANGWATCH_API_KEY - Your LangWatch API key (required)
- LANGWATCH_ENDPOINT - Custom LangWatch endpoint (optional)
- OTEL_* - Standard OpenTelemetry environment variables
Complete Example
Here’s a comprehensive example showing a complete RAG application with proper error handling and best practices:
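The condensed sketch below ties the pieces together under the same assumptions as the earlier snippets (assumed Setup signature, assumed option and constructor shapes, and an arbitrary prompt standing in for a real retrieval step).

```go
package main

import (
	"context"
	"fmt"
	"log"

	langwatch "github.com/langwatch/langwatch/sdk-go"
	otelopenai "github.com/langwatch/langwatch/sdk-go/instrumentation/openai"
	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
)

func main() {
	ctx := context.Background()

	// 1. Initialize LangWatch once at startup (assumed signature) and flush on exit.
	shutdown, err := langwatch.Setup(ctx)
	if err != nil {
		log.Fatalf("langwatch setup: %v", err)
	}
	defer shutdown(ctx)

	// 2. Build an OpenAI client with the tracing middleware.
	client := openai.NewClient(
		option.WithMiddleware(otelopenai.Middleware("rag-example",
			otelopenai.WithCaptureInput(),
			otelopenai.WithCaptureOutput(),
		)),
	)

	// 3. Trace the RAG workflow with a typed parent span.
	tracer := langwatch.Tracer("rag-example")
	ctx, span := tracer.Start(ctx, "rag-request")
	defer span.End()
	span.SetType(langwatch.SpanTypeRAG)

	resp, err := client.Chat.Completions.New(ctx, openai.ChatCompletionNewParams{
		Model: openai.ChatModelGPT4o,
		Messages: []openai.ChatCompletionMessageParamUnion{
			openai.UserMessage("Answer using the retrieved context: ..."),
		},
	})
	if err != nil {
		span.RecordError(err) // standard trace.Span error recording
		log.Printf("chat completion failed: %v", err)
		return
	}
	fmt.Println(resp.Choices[0].Message.Content)
}
```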
Version Compatibility
- Go Version: 1.19 or later
- OpenTelemetry: v1.24.0 or later
- OpenAI Go SDK: Latest version
Support
For additional help with common setup issues and troubleshooting tips, see the Troubleshooting section in the integration guide.