# LangWatch

This is the full index of the LangWatch documentation. To answer the user's question, do not rely on this file alone: first explore the URLs that make sense, using the markdown navigation links below, to understand how to implement LangWatch and use specific features. Always navigate to docs links using the .md extension for better readability.

## Get Started

- [Introduction](https://docs.langwatch.ai/introduction.md): Welcome to LangWatch, the all-in-one [open-source](https://github.com/langwatch/langwatch) LLMOps platform.

### Self Hosting

- [Overview](https://docs.langwatch.ai/self-hosting/overview.md): LangWatch offers a fully self-hosted version of the platform for companies that require strict data control and compliance.
- [Docker Compose](https://docs.langwatch.ai/self-hosting/docker-compose.md): LangWatch is available as a Docker Compose setup for easy deployment on your local machine
- [Docker Images](https://docs.langwatch.ai/self-hosting/docker-images.md): Overview of LangWatch Docker images and their endpoints
- [Helm Chart](https://docs.langwatch.ai/self-hosting/helm.md): LangWatch is available as a Helm chart for easy deployment on Kubernetes
- [Monitoring](https://docs.langwatch.ai/self-hosting/grafana.md): Grafana/Prometheus setup for LangWatch
- [OnPrem](https://docs.langwatch.ai/self-hosting/onprem.md): LangWatch on-premises solution.

#### Hybrid Setup

- [Overview](https://docs.langwatch.ai/hybrid-setup/overview.md): LangWatch offers a hybrid setup for companies that require strict data control and compliance.
- [Elasticsearch](https://docs.langwatch.ai/hybrid-setup/elasticsearch.md): Elasticsearch Setup for LangWatch Hybrid Deployment
- [S3 Storage](https://docs.langwatch.ai/hybrid-setup/s3-storage.md): S3 Storage Setup for LangWatch Hybrid Deployment
- [Configuration](https://docs.langwatch.ai/self-hosting/env-variables.md): Complete list of environment variables for LangWatch self-hosting
- [SSO](https://docs.langwatch.ai/hybrid-setup/sso-setup-langwatch.md): SSO Setup for LangWatch

### Integrations

#### Azure AI

- [Azure AI Inference SDK Instrumentation](https://docs.langwatch.ai/integration/python/integrations/azure-ai.md): Learn how to instrument the Azure AI Inference Python SDK with LangWatch.
- [Azure OpenAI](https://docs.langwatch.ai/integration/typescript/integrations/azure.md): LangWatch Azure OpenAI integration guide
- [Azure OpenAI Integration](https://docs.langwatch.ai/integration/go/integrations/azure-openai.md): Learn how to instrument Azure OpenAI API calls in Go using the LangWatch SDK.

#### LangChain

- [LangChain Instrumentation](https://docs.langwatch.ai/integration/python/integrations/langchain.md): Learn how to instrument LangChain applications with the LangWatch Python SDK.
- [LangChain Instrumentation](https://docs.langwatch.ai/integration/typescript/integrations/langchain.md): Learn how to instrument LangChain applications with the LangWatch TypeScript SDK.

#### LangGraph

- [LangGraph Instrumentation](https://docs.langwatch.ai/integration/python/integrations/langgraph.md): Learn how to instrument LangGraph applications with the LangWatch Python SDK.
- [LangGraph Instrumentation](https://docs.langwatch.ai/integration/typescript/integrations/langgraph.md): Learn how to instrument LangGraph applications with the LangWatch TypeScript SDK.
#### OpenAI

- [OpenAI Instrumentation](https://docs.langwatch.ai/integration/python/integrations/open-ai.md): Learn how to instrument OpenAI API calls with the LangWatch Python SDK
- [OpenAI](https://docs.langwatch.ai/integration/typescript/integrations/open-ai.md): LangWatch OpenAI TypeScript integration guide
- [OpenAI Instrumentation](https://docs.langwatch.ai/integration/go/integrations/open-ai.md): Learn how to instrument OpenAI API calls with the LangWatch Go SDK using middleware.

#### Anthropic (Claude)

- [Anthropic Instrumentation](https://docs.langwatch.ai/integration/python/integrations/anthropic.md): Learn how to instrument Anthropic API calls with the LangWatch Python SDK
- [Anthropic (Claude) Integration](https://docs.langwatch.ai/integration/go/integrations/anthropic.md): Learn how to instrument Anthropic Claude API calls in Go using LangWatch.
- [Vercel AI SDK](https://docs.langwatch.ai/integration/typescript/integrations/vercel-ai-sdk.md): LangWatch Vercel AI SDK integration guide
- [Mastra](https://docs.langwatch.ai/integration/typescript/integrations/mastra.md): Learn how to integrate Mastra, a TypeScript agent framework, with LangWatch for observability and tracing.
- [Agno Instrumentation](https://docs.langwatch.ai/integration/python/integrations/agno.md): Learn how to instrument Agno agents and send traces to LangWatch using the Python SDK.
- [AutoGen Instrumentation](https://docs.langwatch.ai/integration/python/integrations/autogen.md): Learn how to instrument AutoGen applications with LangWatch.
- [AWS Bedrock Instrumentation](https://docs.langwatch.ai/integration/python/integrations/aws-bedrock.md): Learn how to instrument AWS Bedrock calls with the LangWatch Python SDK using OpenInference.
- [CrewAI](https://docs.langwatch.ai/integration/python/integrations/crew-ai.md): Learn how to instrument the CrewAI Python SDK with LangWatch.
- [DSPy Instrumentation](https://docs.langwatch.ai/integration/python/integrations/dspy.md): Learn how to instrument DSPy programs with the LangWatch Python SDK
- [Flowise Integration](https://docs.langwatch.ai/integration/flowise.md): Capture LLM traces and send them to LangWatch from Flowise
- [Google Agent Development Kit (ADK) Instrumentation](https://docs.langwatch.ai/integration/python/integrations/google-ai.md): Learn how to instrument Google Agent Development Kit (ADK) applications with LangWatch.
- [Google GenAI Instrumentation](https://docs.langwatch.ai/integration/python/integrations/google-genai.md): Learn how to instrument Google GenAI API calls with the LangWatch Python SDK
- [Google Gemini Integration](https://docs.langwatch.ai/integration/go/integrations/google-gemini.md): Learn how to instrument Google Gemini API calls in Go using the LangWatch SDK via a Vertex AI endpoint.
- [Groq Integration](https://docs.langwatch.ai/integration/go/integrations/groq.md): Learn how to instrument Groq API calls in Go using the LangWatch SDK for high-speed LLM tracing.
- [OpenRouter Integration](https://docs.langwatch.ai/integration/go/integrations/openrouter.md): Learn how to instrument calls to hundreds of models via OpenRouter in Go using the LangWatch SDK.
- [Ollama (Local Models) Integration](https://docs.langwatch.ai/integration/go/integrations/ollama.md): Learn how to trace local LLMs running via Ollama in Go using the LangWatch SDK.
- [Haystack Instrumentation](https://docs.langwatch.ai/integration/python/integrations/haystack.md): Learn how to instrument Haystack pipelines with LangWatch using community OpenTelemetry instrumentors.
- [Instructor AI Instrumentation](https://docs.langwatch.ai/integration/python/integrations/instructor.md): Learn how to instrument Instructor AI applications with LangWatch using OpenInference.
- [Langflow Integration](https://docs.langwatch.ai/integration/langflow.md): LangWatch is the best observability integration for Langflow
- [LlamaIndex Instrumentation](https://docs.langwatch.ai/integration/python/integrations/llamaindex.md): Learn how to instrument LlamaIndex applications with LangWatch.
- [LiteLLM Instrumentation](https://docs.langwatch.ai/integration/python/integrations/lite-llm.md): Learn how to instrument LiteLLM calls with the LangWatch Python SDK.
- [OpenAI Agents SDK Instrumentation](https://docs.langwatch.ai/integration/python/integrations/open-ai-agents.md): Learn how to instrument OpenAI Agents with the LangWatch Python SDK
- [PromptFlow Instrumentation](https://docs.langwatch.ai/integration/python/integrations/promptflow.md): Learn how to instrument PromptFlow applications with LangWatch.
- [PydanticAI Instrumentation](https://docs.langwatch.ai/integration/python/integrations/pydantic-ai.md): Learn how to instrument PydanticAI applications with the LangWatch Python SDK.
- [SmolAgents Instrumentation](https://docs.langwatch.ai/integration/python/integrations/smolagents.md): Learn how to instrument SmolAgents applications with LangWatch.
- [Strands Agents Instrumentation](https://docs.langwatch.ai/integration/python/integrations/strand-agents.md): Learn how to instrument Strands Agents applications with LangWatch.
- [Semantic Kernel Instrumentation](https://docs.langwatch.ai/integration/python/integrations/semantic-kernel.md): Learn how to instrument Semantic Kernel applications with LangWatch.
- [Google Vertex AI Instrumentation](https://docs.langwatch.ai/integration/python/integrations/vertex-ai.md): Learn how to instrument Google Vertex AI API calls with the LangWatch Python SDK using OpenInference
- [Other OpenTelemetry Instrumentors](https://docs.langwatch.ai/integration/python/integrations/other.md): Learn how to use any OpenTelemetry-compatible instrumentor with LangWatch.
### Cookbooks

- [Measuring RAG Performance](https://docs.langwatch.ai/cookbooks/build-a-simple-rag-app.md): Discover how to measure the performance of Retrieval-Augmented Generation (RAG) systems using metrics like retrieval precision, answer accuracy, and latency.
- [Optimizing Embeddings](https://docs.langwatch.ai/cookbooks/finetuning-embedding-models.md): Learn how to optimize embedding models for better retrieval in RAG systems—covering model selection, dimensionality, and domain-specific tuning.
- [Vector Search vs Hybrid Search using LanceDB](https://docs.langwatch.ai/cookbooks/vector-vs-hybrid-search.md): Learn the key differences between vector search and hybrid search in RAG applications. Use cases, performance tradeoffs, and when to choose each.
- [Evaluating Tool Selection](https://docs.langwatch.ai/cookbooks/tool-selection.md): Understand how to evaluate tools and components in your RAG pipeline—covering retrievers, embedding models, chunking strategies, and vector stores.
- [Finetuning Agents with GRPO](https://docs.langwatch.ai/cookbooks/finetuning-agents.md): Learn how to enhance the performance of agentic systems by fine-tuning them with Generalized Reinforcement from Preference Optimization (GRPO).
- [Multi-Turn Conversations](https://docs.langwatch.ai/cookbooks/evaluating-multi-turn-conversations.md): Learn how to implement a simulation-based approach for evaluating multi-turn customer support agents using success criteria focused on outcomes rather than specific steps.
## Agent Simulations

- [Introduction to Agent Testing](https://docs.langwatch.ai/agent-simulations/introduction.md)
- [Overview](https://docs.langwatch.ai/agent-simulations/overview.md)
- [Getting Started](https://docs.langwatch.ai/agent-simulations/getting-started.md)
- [Simulation Sets](https://docs.langwatch.ai/agent-simulations/set-overview.md)
- [Batch Runs](https://docs.langwatch.ai/agent-simulations/batch-runs.md)
- [Individual Run View](https://docs.langwatch.ai/agent-simulations/individual-run.md)

## LLM Observability

- [Overview](https://docs.langwatch.ai/integration/overview.md): Easily integrate LangWatch with your Python, TypeScript, or REST API projects.
- [Concepts](https://docs.langwatch.ai/concepts.md): LLM tracing and observability conceptual guide
- [Quick Start](https://docs.langwatch.ai/integration/quick-start.md)

### SDKs

#### Python

- [Python Integration Guide](https://docs.langwatch.ai/integration/python/guide.md): LangWatch Python SDK integration guide
- [Python SDK API Reference](https://docs.langwatch.ai/integration/python/reference.md): LangWatch Python SDK API reference

##### Advanced

- [Manual Instrumentation](https://docs.langwatch.ai/integration/python/tutorials/manual-instrumentation.md): Learn how to manually instrument your code with the LangWatch Python SDK
- [OpenTelemetry Migration](https://docs.langwatch.ai/integration/python/tutorials/open-telemetry.md): Learn how to integrate the LangWatch Python SDK with your existing OpenTelemetry setup.
#### TypeScript

- [TypeScript Integration Guide](https://docs.langwatch.ai/integration/typescript/guide.md): Get started with the LangWatch TypeScript SDK in 5 minutes
- [TypeScript SDK API Reference](https://docs.langwatch.ai/integration/typescript/reference.md): LangWatch TypeScript SDK API reference

##### Advanced

- [Debugging and Troubleshooting](https://docs.langwatch.ai/integration/typescript/tutorials/debugging-typescript.md): Debug LangWatch TypeScript SDK integration issues
- [Manual Instrumentation](https://docs.langwatch.ai/integration/typescript/tutorials/manual-instrumentation.md): Learn advanced manual span management techniques for fine-grained observability control
- [Semantic Conventions](https://docs.langwatch.ai/integration/typescript/tutorials/semantic-conventions.md): Learn about OpenTelemetry semantic conventions and LangWatch's custom attributes for consistent observability
- [OpenTelemetry Migration](https://docs.langwatch.ai/integration/typescript/tutorials/opentelemetry-migration.md): Migrate from OpenTelemetry to LangWatch while preserving all your custom configurations

#### Go

- [Go Integration Guide](https://docs.langwatch.ai/integration/go/guide.md): LangWatch Go SDK integration guide for setting up LLM observability and tracing.
- [Go SDK API Reference](https://docs.langwatch.ai/integration/go/reference.md): Complete API reference for the LangWatch Go SDK, including core functions, OpenAI instrumentation, and span types.

#### OpenTelemetry

- [OpenTelemetry Integration Guide](https://docs.langwatch.ai/integration/opentelemetry/guide.md): Use OpenTelemetry to capture LLM traces and send them to LangWatch from any programming language

### Tutorials

#### Capturing Inputs & Outputs

- [Capturing and Mapping Inputs & Outputs](https://docs.langwatch.ai/integration/python/tutorials/capturing-mapping-input-output.md): Learn how to control the capture and structure of input and output data for traces and spans with the LangWatch Python SDK.
- [Capturing and Mapping Inputs & Outputs](https://docs.langwatch.ai/integration/typescript/tutorials/capturing-input-output.md): Learn how to control the capture and structure of input and output data for traces and spans with the LangWatch TypeScript SDK.

#### Capturing RAG

- [Capturing RAG](https://docs.langwatch.ai/integration/python/tutorials/capturing-rag.md): Learn how to capture Retrieval Augmented Generation (RAG) data with LangWatch.
- [Capturing RAG](https://docs.langwatch.ai/integration/typescript/tutorials/capturing-rag.md): Learn how to capture Retrieval Augmented Generation (RAG) data with LangWatch.

#### Metadata & Attributes

- [Capturing Metadata and Attributes](https://docs.langwatch.ai/integration/python/tutorials/capturing-metadata.md): Learn how to enrich your traces and spans with custom metadata and attributes using the LangWatch Python SDK.
- [Capturing Metadata and Attributes](https://docs.langwatch.ai/integration/typescript/tutorials/capturing-metadata.md): Learn how to enrich your traces and spans with custom metadata and attributes using the LangWatch TypeScript SDK.

#### Tracking Costs

- [Tracking LLM Costs and Tokens](https://docs.langwatch.ai/integration/python/tutorials/tracking-llm-costs.md): Troubleshooting & adjusting cost tracking in LangWatch
- [Tracking LLM Costs and Tokens](https://docs.langwatch.ai/integration/typescript/tutorials/tracking-llm-costs.md): Troubleshooting & adjusting cost tracking in LangWatch
- [RAG Context Tracking](https://docs.langwatch.ai/integration/rags-context-tracking.md): Capture the RAG documents used in your LLM pipelines
- [Capturing Evaluations & Guardrails](https://docs.langwatch.ai/integration/python/tutorials/capturing-evaluations-guardrails.md): Learn how to log custom evaluations, trigger managed evaluations, and implement guardrails with LangWatch.
### User Events

- [Overview](https://docs.langwatch.ai/user-events/overview.md): Track user interactions with your LLM applications

#### Events

- [Thumbs Up/Down](https://docs.langwatch.ai/user-events/thumbs-up-down.md): Track user feedback on specific messages or interactions with your chatbot or LLM application
- [Waited To Finish Events](https://docs.langwatch.ai/user-events/waited-to-finish.md): Track if users leave before the LLM application finishes generating a response
- [Selected Text Events](https://docs.langwatch.ai/user-events/selected-text.md): Track when a user selects text generated by your LLM application
- [Custom Events](https://docs.langwatch.ai/user-events/custom.md): Track any user events with your LLM application, with textual or numeric metrics

### Monitoring & Alerts

- [Alerts and Triggers](https://docs.langwatch.ai/features/triggers.md): Be alerted when something goes wrong and trigger actions automatically
- [Exporting Analytics](https://docs.langwatch.ai/features/embedded-analytics.md): Build and integrate LangWatch graphs on your own systems and applications
- [Code Examples](https://docs.langwatch.ai/integration/code-examples.md): Examples of LangWatch integrated applications

## LLM Evaluation

- [LLM Evaluation Overview](https://docs.langwatch.ai/llm-evaluation/overview.md): Overview of LLM evaluation features in LangWatch
- [Evaluation Tracking API](https://docs.langwatch.ai/llm-evaluation/offline/code/evaluation-api.md): Evaluate and visualize your LLM evals with LangWatch

### Evaluation Wizard

- [How to evaluate that your LLM answers correctly](https://docs.langwatch.ai/llm-evaluation/offline/platform/answer-correctness.md): Measuring your LLM performance with Offline Evaluations
- [How to evaluate an LLM when you don't have defined answers](https://docs.langwatch.ai/llm-evaluation/offline/platform/llm-as-a-judge.md): Measuring your LLM performance using an LLM-as-a-judge

### Real-Time Evaluation

- [Setting up Real-Time Evaluations](https://docs.langwatch.ai/llm-evaluation/realtime/setup.md): How to set up Real-Time LLM Evaluations
- [Instrumenting Custom Evaluator](https://docs.langwatch.ai/evaluations/custom-evaluator-integration.md): Add your own evaluation results into LangWatch traces

### Built-in Evaluators

- [List of Evaluators](https://docs.langwatch.ai/llm-evaluation/list.md): Find the evaluator for your use case

### Datasets

- [Datasets](https://docs.langwatch.ai/datasets/overview.md): Create and manage datasets with LangWatch
- [Generating a dataset with AI](https://docs.langwatch.ai/datasets/ai-dataset-generation.md): Bootstrap your evaluations by generating sample data
- [Automatically build datasets from real-time traces](https://docs.langwatch.ai/datasets/automatically-from-traces.md): Continuously populate your datasets with incoming data from production
- [Annotations](https://docs.langwatch.ai/features/annotations.md): Collaborate with domain experts using annotations

## Prompt Management

- [Overview](https://docs.langwatch.ai/prompt-management/overview.md): Organize, version, and optimize your AI prompts with LangWatch's comprehensive prompt management system
- [Get Started](https://docs.langwatch.ai/prompt-management/getting-started.md): Create your first prompt and use it in your application
- [Data Model](https://docs.langwatch.ai/prompt-management/data-model.md): Understand the structure of prompts in LangWatch
- [Scope](https://docs.langwatch.ai/prompt-management/scope.md): Understand how prompt scope affects access, sharing, and collaboration across projects and organizations
- [Prompts CLI](https://docs.langwatch.ai/prompt-management/cli.md): Manage AI prompts as code with version control and dependency management

### Features

- [Version Control](https://docs.langwatch.ai/prompt-management/features/essential/version-control.md): Manage prompt versions and track changes over time
- [Analytics](https://docs.langwatch.ai/prompt-management/features/essential/analytics.md): Monitor prompt performance and usage with comprehensive analytics
- [GitHub Integration](https://docs.langwatch.ai/prompt-management/features/essential/github-integration.md): Version your prompts in GitHub repositories and automatically sync with LangWatch
- [Link to Traces](https://docs.langwatch.ai/prompt-management/features/advanced/link-to-traces.md): Connect prompts to execution traces for performance monitoring and analysis
- [Using Prompts in the Optimization Studio](https://docs.langwatch.ai/prompt-management/features/advanced/optimization-studio.md): Use prompts in the Optimization Studio to test and optimize your prompts
- [Guaranteed Availability](https://docs.langwatch.ai/prompt-management/features/advanced/guaranteed-availability.md): Ensure your prompts are always available, even in offline or air-gapped environments
- [A/B Testing](https://docs.langwatch.ai/prompt-management/features/advanced/a-b-testing.md): Implement A/B testing for your prompts using LangWatch's version control and analytics

## LLM Development

### Prompt Optimization Studio

- [Optimization Studio](https://docs.langwatch.ai/optimization-studio/overview.md): Create, evaluate, and optimize your LLM workflows
- [LLM Nodes](https://docs.langwatch.ai/optimization-studio/llm-nodes.md): Call LLMs from your workflows
- [Datasets](https://docs.langwatch.ai/optimization-studio/datasets.md): Define the data used for testing and optimization
- [Evaluating](https://docs.langwatch.ai/optimization-studio/evaluating.md): Measure the quality of your LLM workflows
- [Optimizing](https://docs.langwatch.ai/optimization-studio/optimizing.md): Find the best prompts with DSPy optimizers

### DSPy Visualization

- [DSPy Visualization Quickstart](https://docs.langwatch.ai/dspy-visualization/quickstart.md): Visualize your DSPy notebook experimentations to better track and debug the optimization process
- [Tracking Custom DSPy Optimizer](https://docs.langwatch.ai/dspy-visualization/custom-optimizer.md): Build custom DSPy optimizers and track them in LangWatch
- [RAG Visualization](https://docs.langwatch.ai/dspy-visualization/rag-visualization.md): Visualize your DSPy RAG optimization process in LangWatch
- [LangWatch MCP Server](https://docs.langwatch.ai/integration/mcp.md): Use an agent to debug your LLM applications and fix the issues for you

## API Endpoints

### Traces

- [Overview](https://docs.langwatch.ai/api-reference/traces/overview.md): A Trace is a collection of runs that are related to a single operation
- [Get trace details](https://docs.langwatch.ai/api-reference/traces/get-trace-details.md)
- [Search traces](https://docs.langwatch.ai/api-reference/traces/search-traces.md)
- [Create public path for single trace](https://docs.langwatch.ai/api-reference/traces/create-public-trace-path.md)
- [Delete an existing public path for a trace](https://docs.langwatch.ai/api-reference/traces/delete-public-trace-path.md)

### Prompts

- [Overview](https://docs.langwatch.ai/api-reference/prompts/overview.md): Prompts are used to manage and version your prompts
- [Get prompts](https://docs.langwatch.ai/api-reference/prompts/get-prompts.md)
- [Create prompt](https://docs.langwatch.ai/api-reference/prompts/create-prompt.md)
- [Get prompt](https://docs.langwatch.ai/api-reference/prompts/get-prompt.md)
- [Update prompt](https://docs.langwatch.ai/api-reference/prompts/update-prompt.md)
- [Delete prompt](https://docs.langwatch.ai/api-reference/prompts/delete-prompt.md)
- [Get prompt versions](https://docs.langwatch.ai/api-reference/prompts/get-prompt-versions.md)
- [Create prompt version](https://docs.langwatch.ai/api-reference/prompts/create-prompt-version.md)

### Annotations

- [Overview](https://docs.langwatch.ai/api-reference/annotations/overview.md): Annotations are used to annotate traces with additional information
- [Get annotations](https://docs.langwatch.ai/api-reference/annotations/get-annotation.md)
- [Get single annotation](https://docs.langwatch.ai/api-reference/annotations/get-single-annotation.md)
- [Delete single annotation](https://docs.langwatch.ai/api-reference/annotations/delete-annotation.md)
- [Patch single annotation](https://docs.langwatch.ai/api-reference/annotations/patch-annotation.md)
- [Get annotations for single trace](https://docs.langwatch.ai/api-reference/annotations/get-all-annotations-trace.md)
- [Create annotation for single trace](https://docs.langwatch.ai/api-reference/annotations/create-annotation-trace.md)

### Datasets

- [Add entries to a dataset](https://docs.langwatch.ai/api-reference/datasets/post-dataset-entries.md)

### Triggers

- [Create Slack trigger](https://docs.langwatch.ai/api-reference/triggers/create-slack-trigger.md)

### Scenarios

- [Overview](https://docs.langwatch.ai/api-reference/scenarios/overview.md)
- [Create Event](https://docs.langwatch.ai/api-reference/scenarios/create-event.md)

## Use Cases

- [Evaluating a RAG Chatbot for Technical Manuals](https://docs.langwatch.ai/use-cases/technical-rag.md): A developer guide for building reliable RAG systems for technical documentation using LangWatch
- [Evaluating an AI Coach with LLM-as-a-Judge](https://docs.langwatch.ai/use-cases/ai-coach.md): A developer guide for building reliable AI coaches using LangWatch
- [Evaluating Structured Data Extraction](https://docs.langwatch.ai/use-cases/structured-outputs.md): A developer guide for evaluating structured data extraction using LangWatch

## Support

- [Troubleshooting and Support](https://docs.langwatch.ai/support.md): Find help and support for LangWatch
- [Status Page](https://docs.langwatch.ai/status.md): Something wrong? Check our status page