LangWatch captures comprehensive traces from Java applications that use Spring AI when you export OpenTelemetry data to it. This guide covers only the minimal configuration you add to your Spring Boot app; for a complete, runnable example, see the full working example repository: Spring AI + LangWatch (OpenTelemetry) example.

Prerequisites

  • Java 17 or later
  • An OpenAI API key (if you use the OpenAI provider via Spring AI)
  • A LangWatch API key

Setup

1. Set required environment variables

Export your provider and LangWatch API keys as environment variables for your app.
export OPENAI_API_KEY="your-openai-api-key"
export LANGWATCH_API_KEY="your-langwatch-api-key"
In production, load these variables from your platform’s secret manager. Never store secrets in source control.
2. Configure the OpenTelemetry exporter to LangWatch

Configure OpenTelemetry and Spring AI in src/main/resources/application.yaml so your app captures traces and sends them directly to LangWatch.
application.yaml
spring.ai:
  chat:
    client:
      observations:
        log-prompt: true
    observations:
      log-prompt: true # Include prompt content in tracing (disabled by default for privacy)
      log-completion: true # Include completion content in tracing (disabled by default)
  openai:
    api-key: ${OPENAI_API_KEY}

management:
  tracing.enabled: true
  logging.export.enabled: true

otel:
  java:
    global-autoconfigure:
      enabled: true
  exporter:
    otlp:
      endpoint: "https://app.langwatch.ai/api/otel"
      protocol: "http/protobuf"
      headers:
        Authorization: "Bearer ${LANGWATCH_API_KEY}"
  traces:
    exporter: otlp
    sampler:
      ratio: 1.0
  logs.exporter: otlp
3. Start your Spring Boot application as usual

Run your application the way you normally do (IDE, Gradle, Maven, or a container). No special commands are required beyond your standard start procedure.
After your application handles AI calls via Spring AI, traces will appear in your LangWatch workspace.
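For reference, a call that produces such a trace can be as small as the controller below. This is an illustrative sketch rather than part of the required setup: the package, class, and endpoint names are assumptions, and it presumes the Spring AI OpenAI starter is on your classpath (the example repository shows a complete version).
ChatController.java
package com.example.demo;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Each request to /chat produces an HTTP span plus a Spring AI model span,
// both exported to LangWatch by the configuration above.
@RestController
public class ChatController {

    private final ChatClient chatClient;

    // ChatClient.Builder is auto-configured by the Spring AI starter.
    ChatController(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    @GetMapping("/chat")
    String chat(@RequestParam String message) {
        return chatClient.prompt()
                .user(message)
                .call()
                .content();
    }
}
Hitting the endpoint (for example, curl "http://localhost:8080/chat?message=Hello" on the default port) should then surface a new trace in LangWatch within a few seconds.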

What gets traced

  • HTTP requests handled by your Spring Boot application
  • AI model calls performed via Spring AI (e.g., OpenAI)
  • Prompt and completion content, when capture is enabled via the log-prompt and log-completion settings above
  • Performance metrics and errors/exceptions
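If you want to attach your own attributes to these spans, the OpenTelemetry API can be used directly. The sketch below is a hedged example, assuming opentelemetry-api is on your classpath (it normally arrives transitively with the OpenTelemetry setup); the class name and attribute key are illustrative.
TraceMetadata.java
package com.example.demo;

import io.opentelemetry.api.trace.Span;

// Tags whatever span is currently active (e.g. the HTTP or Spring AI span
// for the current request) with a custom attribute.
public final class TraceMetadata {

    private TraceMetadata() {}

    public static void tagCurrentSpan(String key, String value) {
        // Span.current() returns a no-op span when no trace is active,
        // so this is safe to call unconditionally.
        Span.current().setAttribute(key, value);
    }
}
Call it from code that runs inside a traced request, for example TraceMetadata.tagCurrentSpan("app.user_plan", "free-tier"); the attribute then appears on the active span in LangWatch.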

Monitoring

Once configured:
  • Visit your LangWatch dashboard to explore spans and AI-specific attributes
  • Analyze model performance, usage, and costs
  • Investigate failures with full trace context

Troubleshooting

  • Authorization header: Ensure Authorization: Bearer <your-langwatch-key> is set under otel.exporter.otlp.headers.
  • Endpoint URL: Confirm the endpoint is https://app.langwatch.ai/api/otel and protocol is http/protobuf.
  • Network egress: Verify your environment can reach LangWatch (egress/proxy/firewall settings).
  • Provider configuration: Ensure your Spring AI provider (e.g., OpenAI) is properly configured and invoked by your code.
  • Sampling: Check OpenTelemetry sampling configuration if you’ve customized it; overly aggressive sampling can drop spans.
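To verify the exporter wiring from inside the app, you can emit one manual span at startup; if it reaches LangWatch, the endpoint, headers, and exporter settings are correct. This is a sketch under the assumption that otel.java.global-autoconfigure.enabled: true (as configured above) makes the autoconfigured SDK available through GlobalOpenTelemetry; all names are illustrative.
ExporterSmokeTest.java
package com.example.demo;

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Emits a single span named "exporter-smoke-test" once the app starts.
@Configuration
public class ExporterSmokeTest {

    @Bean
    CommandLineRunner emitTestSpan() {
        return args -> {
            Span span = GlobalOpenTelemetry.getTracer("smoke-test")
                    .spanBuilder("exporter-smoke-test")
                    .startSpan();
            span.end(); // the OTLP exporter ships it asynchronously
        };
    }
}
Remove the bean once you have confirmed traces arrive.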
For a complete implementation showing controllers, Spring AI configuration, and OpenTelemetry setup, see the full working example repository.