You can trace Google Gemini models with LangWatch by using the OpenAI-compatible endpoint provided by Google Cloud Vertex AI. This requires setting up an authenticated endpoint in your Google Cloud project that serves the Gemini model.
Before you begin, ensure you have configured the LangWatch SDK by following the Go setup guide.
This setup is more involved than other providers and requires Google Cloud authentication.
1. Enable Vertex AI
Ensure the Vertex AI API is enabled in your Google Cloud project.
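If the API is not yet enabled, you can do so with a single `gcloud` command (assuming the `gcloud` CLI is installed and configured for the right project):

```bash
# Enable the Vertex AI API for the currently configured project
gcloud services enable aiplatform.googleapis.com
```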
2. Get an Authentication Token
The Vertex AI API uses a Google Cloud access token, not a static API key. You must generate this token using the `gcloud` CLI. The token is short-lived (typically 1 hour) and needs to be refreshed.
```bash
# Log in to gcloud
gcloud auth login

# Get a token and set it as an environment variable
export GOOGLE_ACCESS_TOKEN=$(gcloud auth print-access-token)
```
In a production application, you should use a service account and the Google Cloud client libraries for Go to generate access tokens programmatically instead of calling `gcloud` directly, as sketched below.
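As one possible sketch of that approach, the snippet below uses `golang.org/x/oauth2/google` (a standard Google auth library, not part of the LangWatch SDK) to obtain a token from Application Default Credentials, such as a service account key or workload identity. The scope and error handling shown are illustrative assumptions:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"golang.org/x/oauth2/google"
)

func main() {
	ctx := context.Background()

	// Build a token source from Application Default Credentials.
	// The cloud-platform scope covers Vertex AI calls.
	ts, err := google.DefaultTokenSource(ctx, "https://www.googleapis.com/auth/cloud-platform")
	if err != nil {
		log.Fatalf("failed to create token source: %v", err)
	}

	// Token() returns a cached token and refreshes it automatically
	// when it is close to expiry.
	token, err := ts.Token()
	if err != nil {
		log.Fatalf("failed to fetch access token: %v", err)
	}

	fmt.Println(token.AccessToken)
}
```

The token source caches the token and refreshes it transparently as it nears expiry, which avoids the one-hour lifetime problem of a token exported once in the shell.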
3. Construct the Endpoint URL
Your Vertex AI endpoint URL will follow this format:

```
https://<region>-aiplatform.googleapis.com/v1/projects/<project-id>/locations/<region>/publishers/google/models/<model-name>
```

For example:

```
https://us-central1-aiplatform.googleapis.com/v1/projects/my-gcp-project/locations/us-central1/publishers/google/models/gemini-1.5-pro-001
```
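Since the region appears twice in the path and the segment ordering is easy to get wrong, a small helper can make construction less error-prone. `vertexBaseURL` below is a hypothetical name for illustration, not part of any SDK:

```go
package main

import "fmt"

// vertexBaseURL builds the Vertex AI publisher-model endpoint for a
// given project, region, and model name.
func vertexBaseURL(projectID, region, modelName string) string {
	return fmt.Sprintf(
		"https://%s-aiplatform.googleapis.com/v1/projects/%s/locations/%s/publishers/google/models/%s",
		region, projectID, region, modelName,
	)
}

func main() {
	fmt.Println(vertexBaseURL("my-gcp-project", "us-central1", "gemini-1.5-pro-001"))
}
```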
Example
Configure your `openai.Client` with the constructed Vertex AI URL and the temporary access token. The `gen_ai.system` attribute should be set to `"google"`.
```go
package main

import (
	"context"
	"log"
	"os"

	langwatch_openai "github.com/langwatch/langwatch-go/instrumentation/openai"
	"github.com/sashabaranov/go-openai"
)

func main() {
	ctx := context.Background()

	// Assumes LangWatch is already set up.

	// Your Google Cloud details
	projectID := "your-gcp-project-id"
	region := "us-central1"
	modelName := "gemini-1.5-pro-001" // Or another supported Gemini model

	// Construct the Vertex AI endpoint URL
	baseURL := "https://" + region + "-aiplatform.googleapis.com/v1/projects/" + projectID + "/locations/" + region + "/publishers/google/models/" + modelName

	// Authenticate with the short-lived access token from gcloud
	config := openai.DefaultConfig(os.Getenv("GOOGLE_ACCESS_TOKEN"))
	config.BaseURL = baseURL

	// Add the middleware, identifying the system as "google"
	config.HTTPClient = langwatch_openai.Instrument(
		config.HTTPClient,
		"my-gemini-app",
		langwatch_openai.WithGenAISystem("google"),
		langwatch_openai.WithCaptureInput(),
		langwatch_openai.WithCaptureOutput(),
	)

	client := openai.NewClientWithConfig(config)

	// Make a call to the Gemini model
	response, err := client.CreateChatCompletion(ctx, openai.ChatCompletionRequest{
		// The request is routed via the model name in the URL path;
		// setting the Model field as well is good practice
		Model: modelName,
		Messages: []openai.ChatCompletionMessage{
			{
				Role:    openai.ChatMessageRoleUser,
				Content: "Hello, Gemini! Explain the concept of multimodal models.",
			},
		},
	})
	if err != nil {
		log.Fatalf("Google Gemini API call failed: %v", err)
	}

	log.Printf("Response from Gemini: %s", response.Choices[0].Message.Content)
}
```
The model name is part of the URL itself. While the `Model` parameter in the request body is less critical, it's good practice to set it to the model you are targeting.
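For long-running services, reading a token from an environment variable at startup is fragile, since the token expires after roughly an hour. One possible variation, assuming the `Instrument` middleware can wrap any `*http.Client` as in the example above, is to let an OAuth2-aware client attach a fresh token to every request. This sketch replaces the `config` setup in the example and additionally imports `golang.org/x/oauth2` and `golang.org/x/oauth2/google`:

```go
// A sketch, not a documented LangWatch pattern: oauth2.NewClient returns an
// *http.Client whose transport injects and refreshes the Authorization header.
ts, err := google.DefaultTokenSource(ctx, "https://www.googleapis.com/auth/cloud-platform")
if err != nil {
	log.Fatalf("failed to create token source: %v", err)
}

config := openai.DefaultConfig("") // the real token is attached per-request above
config.BaseURL = baseURL
config.HTTPClient = langwatch_openai.Instrument(
	oauth2.NewClient(ctx, ts),
	"my-gemini-app",
	langwatch_openai.WithGenAISystem("google"),
	langwatch_openai.WithCaptureInput(),
	langwatch_openai.WithCaptureOutput(),
)
```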