Using OpenInference Instrumentation
The recommended approach for instrumenting Google Vertex AI calls with LangWatch is to use the OpenInference instrumentation library, which provides comprehensive tracing for Vertex AI API calls.

What OpenInference Captures
The OpenInference Vertex AI instrumentation automatically captures:

- LLM Calls: All text generation, chat completion, and embedding requests
- Model Information: Model name, version, and configuration parameters
- Input/Output: Prompts, responses, and token usage
- Performance Metrics: Latency, token counts, and cost information
- Error Handling: Failed requests and error details
- Context Information: Session IDs, user IDs, and custom metadata
Installation and Setup
Prerequisites
1. Install the OpenInference Vertex AI instrumentor (commands for all three steps follow this list).
2. Install the LangWatch SDK.
3. Set up your Google Cloud credentials.
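A minimal sketch of these steps, assuming the packages are installed from PyPI under their published names and that you authenticate with the gcloud CLI (a service-account key via GOOGLE_APPLICATION_CREDENTIALS works as well):

```bash
# 1. Install the OpenInference Vertex AI instrumentor
pip install openinference-instrumentation-vertexai

# 2. Install the LangWatch SDK (the Vertex AI client SDK is assumed as well)
pip install langwatch google-cloud-aiplatform

# 3. Set up Google Cloud credentials (Application Default Credentials)
gcloud auth application-default login
```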
Basic Setup
There are two main ways to integrate OpenInference Vertex AI instrumentation with LangWatch:

1. Via langwatch.setup() (Recommended)
You can pass an instance of the VertexAIInstrumentor to the instrumentors list in the langwatch.setup() call. LangWatch will then manage the lifecycle of this instrumentor.
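A minimal sketch of this approach. It assumes your LangWatch API key is available in the environment (e.g. LANGWATCH_API_KEY); the project ID, location, and model name are placeholders to replace with your own:

```python
import langwatch
import vertexai
from openinference.instrumentation.vertexai import VertexAIInstrumentor
from vertexai.generative_models import GenerativeModel

# LangWatch manages the instrumentor's lifecycle and exports its spans.
langwatch.setup(instrumentors=[VertexAIInstrumentor()])

# Placeholder project and location: substitute your own values.
vertexai.init(project="your-gcp-project", location="us-central1")

# This call is now traced automatically: model, parameters, prompt,
# response, and token usage are captured as a span.
model = GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Say hello in one sentence.")
print(response.text)
```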
2. Direct Instrumentation
If you have an existing OpenTelemetry TracerProvider configured in your application, you can use the instrumentor’s instrument() method directly. LangWatch will automatically pick up the spans generated by these instrumentors as long as its exporter is part of the active TracerProvider.
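A sketch of the direct approach. It assumes langwatch.setup() attaches the LangWatch exporter to the active TracerProvider, per the note above; how you build and register your own provider will depend on your application:

```python
import langwatch
from openinference.instrumentation.vertexai import VertexAIInstrumentor
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider

# Your application's existing TracerProvider (simplified here).
provider = TracerProvider()
trace.set_tracer_provider(provider)

# Assumption: setup() registers LangWatch's span exporter on the
# active TracerProvider so it receives the instrumentor's spans.
langwatch.setup()

# Instrument Vertex AI directly against your provider.
VertexAIInstrumentor().instrument(tracer_provider=provider)
```

With this setup, Vertex AI spans flow through your own provider, so any other exporters you have configured continue to receive them alongside LangWatch.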
Which Approach to Choose?
- OpenInference Instrumentation is recommended for most use cases as it provides comprehensive, automatic instrumentation with minimal setup
- Direct OpenTelemetry Setup is useful when you need fine-grained control over the tracing configuration or are already using OpenTelemetry extensively