By default, LangWatch automatically captures cost and token data for your LLM calls.
[Screenshot: LLM costs analytics graph]
If you don’t see costs being tracked, or they show up as $0, this guide will help you identify and fix issues when cost and token tracking is not working as expected.

Understanding Cost and Token Tracking

LangWatch calculates costs and tracks tokens by:
  1. Capturing model names in LLM spans to match against cost tables
  2. Recording token metrics (prompt_tokens, completion_tokens) in span data, or estimating them when the provider doesn’t return usage
  3. Mapping models to costs using the pricing table in Settings > Model Costs
When any of these components is missing, you may see missing or $0 costs and tokens.
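
For example, a span that LangWatch can price carries all three pieces together. The object below is an illustrative sketch of that shape, with field names chosen for illustration rather than the exact wire format:
// The three ingredients LangWatch needs on a single LLM span:
const pricedSpan = {
  type: "llm",
  model: "openai/gpt-5-mini", // 1. matched against Settings > Model Costs
  metrics: { promptTokens: 1200, completionTokens: 340 }, // 2. token counts (or estimates)
};
// 3. cost is derived by mapping the model to its per-token rates in the cost table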

Step 1: Verify LLM Span Data Capture

The most common issue is that your LLM spans aren’t capturing the required data: model name, inputs, outputs, and token metrics.

Check Your Current Spans

First, examine what data is being captured in your LLM spans. In the LangWatch dashboard:
  1. Navigate to a trace that should have cost/token data
  2. Click on the LLM span to inspect its details
  3. Look for these key fields:
    • Model: Should show the model identifier (e.g., openai/gpt-5)
    • Input/Output: Should contain the actual messages sent and received
    • Metrics: Should show prompt + completion tokens
[Screenshot: LLM span showing model, input/output, and token metrics]

Step 2: Fix Missing Model Information

If your spans don’t show model information, the integration framework you’re using might not be capturing it automatically.

Solution A: Use Framework Auto-tracking

LangWatch provides auto-tracking for popular frameworks that automatically captures all the necessary data for cost calculation. Check the Integrations menu in the sidebar to find specific setup instructions for your framework, which will show you how to properly configure automatic model and token tracking.

Solution B: Manually Set Model Information

If auto-tracking isn’t available for your framework, manually update the span with model information:
import { setupObservability } from "langwatch/observability/node";
import { getLangWatchTracer } from "langwatch";

// Set up observability before any LLM calls
setupObservability();

const tracer = getLangWatchTracer("cost-tracking-example");

async function customLLMCall(prompt: string): Promise<string> {
  return await tracer.withActiveSpan("CustomLLMCall", async (span) => {
    // Mark the span as an LLM type span
    span.setType("llm");
    span.setRequestModel("gpt-5-mini"); // Use the exact model identifier
    span.setInput("text", prompt);

    // Call your LLM provider (yourCustomLLMClient is a placeholder for your own client)
    const response = await yourCustomLLMClient.generate(prompt);

    // Set output and token metrics
    span.setOutput("text", response.text);
    span.setMetrics({
      promptTokens: response.usage.prompt_tokens,
      completionTokens: response.usage.completion_tokens,
    });

    return response.text;
  });
}
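
With this in place, calling the function records a span that carries everything cost calculation needs (yourCustomLLMClient stands in for your own client):
// Usage, inside an async context:
const answer = await customLLMCall("Summarize this conversation");
console.log(answer);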

Step 3: Configure Model Cost Mapping

If your model information is being captured but costs still show $0, you need to configure the cost mapping.

Check Existing Model Costs

  1. Go to Settings > Model Costs in your LangWatch dashboard
  2. Look for your model in the list
  3. Check if the regex pattern matches your model identifier
[Screenshot: Model Costs settings page showing cost configuration]

Add Custom Model Costs

If your model isn’t in the cost table, add it:
  1. Click “Add New Model” in Settings > Model Costs
  2. Configure the model entry:
    • Model Name: Descriptive name (e.g., “gpt-5-mini”)
    • Regex Match Rule: Pattern to match your model identifier (e.g., ^gpt-5-mini$)
    • Input Cost: Cost per input token (e.g., 0.0000004)
    • Output Cost: Cost per output token (e.g., 0.0000016)
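
Each call is then priced as promptTokens × Input Cost + completionTokens × Output Cost. A minimal sketch of that arithmetic with the example rates above (this mirrors the mapping, not LangWatch’s internal code):
// Per-call cost with the example rates above
const inputCostPerToken = 0.0000004; // $ per input token
const outputCostPerToken = 0.0000016; // $ per output token

function estimateCost(promptTokens: number, completionTokens: number): number {
  return promptTokens * inputCostPerToken + completionTokens * outputCostPerToken;
}

console.log(estimateCost(1000, 500)); // 0.0004 + 0.0008 = $0.0012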

Common Model Identifier Patterns

Make sure your regex patterns match how the model names appear in your spans:
Framework       Model Identifier Format   Regex Pattern
OpenAI SDK      gpt-5-mini                ^gpt-5-mini$
Azure OpenAI    gpt-5-mini                ^gpt-5-mini$
LangChain       openai/gpt-5-mini         ^openai/gpt-5-mini$
Custom          my-custom-model-v1        ^my-custom-model-v1$
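
To sanity-check a rule, you can test the regex against the identifier exactly as it appears in your spans:
// Quick check that a cost-table regex matches your span's model identifier
const rule = new RegExp("^openai/gpt-5-mini$");
console.log(rule.test("openai/gpt-5-mini")); // true -> cost will be applied
console.log(rule.test("gpt-5-mini")); // false -> cost stays $0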

Verification Checklist

After running your test, verify in the LangWatch dashboard that:
  • The trace appears in the dashboard
  • The LLM span shows the model name (e.g., gpt-5-mini)
  • Input and output are captured
  • Token metrics are present (prompt_tokens, completion_tokens)
  • Cost is calculated and displayed as a non-zero value

Common Issues and Solutions

Issue: Auto-tracking not working

Symptoms: Spans appear but without model or metrics data
Solutions:
  • Ensure setupObservability() is called before any LLM operations (see the ordering sketch after this list)
  • Check that the client instance being tracked is the same one making calls
  • Verify the integration is initialized correctly
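
A minimal sketch of the correct initialization order, using the LangChain client shown later in this guide:
import { setupObservability } from "langwatch/observability/node";
import { ChatOpenAI } from "@langchain/openai";

// Initialize observability first, so instrumentation is in place...
setupObservability();

// ...then construct the client that makes LLM calls. A client created
// before setupObservability() may emit spans without model or token data.
const llm = new ChatOpenAI({ modelName: "gpt-5-mini" });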

Issue: Custom models not calculating costs

Symptoms: Model name appears but cost remains $0
Solutions:
  • Check regex pattern in Model Costs settings
  • Ensure the pattern exactly matches your model identifier
  • Verify input and output costs are configured correctly

Issue: Token counts are 0 but model is captured

Symptoms: Model name is present but token metrics are missing
Solutions:
  • Manually set token metrics using span.setMetrics() if not automatically captured
  • Check if your LLM provider returns usage information
  • Ensure the integration is extracting token counts from responses

Issue: Framework with OpenTelemetry not capturing model data

Symptoms: You’re using a framework with an OpenTelemetry integration, but its spans arrive without model names or token counts
Solutions:
  • Set the model and token metrics on the span manually, as shown in Solution B of Step 2
  • Check whether your framework can be configured to emit model and usage data on its spans (see the sketch below)
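
If your framework’s spans flow through OpenTelemetry, one option is to enrich the active span yourself. A hedged sketch using the OpenTelemetry GenAI semantic-convention attribute names; whether these exact attributes are picked up for pricing depends on your setup:
import { trace } from "@opentelemetry/api";

// Attach model and usage data to the current span using the
// gen_ai.* semantic-convention attribute names (values are examples)
const activeSpan = trace.getActiveSpan();
activeSpan?.setAttributes({
  "gen_ai.request.model": "gpt-5-mini",
  "gen_ai.usage.input_tokens": 1200,
  "gen_ai.usage.output_tokens": 340,
});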

Advanced Examples

LangChain Integration

The LangWatchCallbackHandler automatically captures model information and token metrics:
import { setupObservability } from "langwatch/observability/node";
import { LangWatchCallbackHandler } from "langwatch/instrumentation/langchain";
import { ChatOpenAI } from "@langchain/openai";

setupObservability();

const llm = new ChatOpenAI({
  modelName: "gpt-5-mini",
  temperature: 0.7,
  callbacks: [new LangWatchCallbackHandler()],
});
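
With the handler attached, a normal invocation is all it takes; the callback records the model, messages, and token usage on the span:
// Inside an async context:
const result = await llm.invoke("Say hello in one short sentence.");
console.log(result.content);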

Manual Token Counting

If your LLM provider doesn’t return token counts:
import { setupObservability } from "langwatch/observability/node";
import { getLangWatchTracer } from "langwatch";

// Set up observability before any LLM calls
setupObservability();

const tracer = getLangWatchTracer("manual-token-counting");

async function llmWithManualTokenCounting(prompt: string): Promise<string> {
  return await tracer.withActiveSpan("LLMWithManualCounting", async (span) => {
    span.setType("llm");
    span.setRequestModel("custom-model-v1");
    span.setInput("text", prompt);

    const response = await yourCustomLLMClient.generate(prompt);

    // Manual token counting: rough heuristic of ~4 characters per token
    const estimatedPromptTokens = Math.ceil(prompt.length / 4);
    const estimatedCompletionTokens = Math.ceil(response.text.length / 4);

    span.setOutput("text", response.text);
    span.setMetrics({
      promptTokens: estimatedPromptTokens,
      completionTokens: estimatedCompletionTokens,
    });

    return response.text;
  });
}
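
The chars/4 heuristic is a rough approximation (about four characters per token for English text). For closer estimates you can swap in a tokenizer library; a sketch assuming js-tiktoken and its encodingForModel helper:
import { encodingForModel } from "js-tiktoken";

// Count tokens with the model's own encoding instead of the chars/4 heuristic
const enc = encodingForModel("gpt-4o"); // pick the closest supported model
const promptTokens = enc.encode("your prompt text here").length;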

Getting Help

If you’re still experiencing issues after following this guide:
  1. Check the LangWatch logs for any error messages
  2. Verify your API key and endpoint configuration
  3. Share a minimal reproduction with the specific framework you’re using
Cost and token tracking should work reliably once the model information and metrics are properly captured. Most issues stem from missing model identifiers or incorrect cost table configuration.