Learn how to create your first prompt in LangWatch and use it in your application with dynamic variables. This lets your team update prompts without code changes.

Get API keys

  1. Create a LangWatch account or set up self-hosted LangWatch
  2. Create new API credentials in your project settings
  3. Note your API key; the sketch below shows one way to configure the SDK with it
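A minimal configuration sketch, assuming setup() accepts an api_key argument and that the SDK otherwise falls back to a LANGWATCH_API_KEY environment variable (the examples further down call setup() with no arguments and rely on that fallback):
setup_langwatch.py
import langwatch

# Hypothetical key shown inline for illustration; in practice, set the
# LANGWATCH_API_KEY environment variable and call setup() with no
# arguments, as the examples below do.
langwatch.setup(api_key="your-api-key")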

Create a prompt

Use the LangWatch UI to create a new prompt or update an existing one.
  1. Navigate to your project dashboard
  2. Go to Prompt Management in the sidebar
  3. Click “Create New Prompt”
  4. Fill in the prompt details, including a handle (the examples below use customer-support-bot) and the prompt text with its variables, then save
(Screenshot: editing a prompt in the LangWatch UI)
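For reference, the prompt used in the examples below might be defined along these lines; the {{variable}} placeholder syntax and message layout are assumptions, so match whatever the prompt editor shows:
customer-support-bot (template sketch)
System: You are a customer support assistant helping {{user_name}} ({{user_email}}).
User: {{input}}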

Use the prompt

At runtime, you can fetch the latest version of your prompt from LangWatch using the prompt handle.
use_prompt.py
import langwatch
from litellm import completion

# Initialize LangWatch so the prompt can be fetched with your API key
langwatch.setup()

# Get the latest version of the prompt by its handle
prompt = langwatch.prompts.get("customer-support-bot")

# Compile prompt with variables
compiled_prompt = prompt.compile(
    user_name="John Doe",
    user_email="[email protected]",
    input="How do I reset my password?"
)

# Use with LiteLLM (unified interface to multiple providers)
response = completion(
    model=prompt.model,  # LiteLLM handles provider prefixes automatically
    messages=compiled_prompt.messages
)

print(response.choices[0].message.content)
You can link your prompt to LLM generation traces to track performance and see which prompt versions work best; see the Link to Traces page for details.
tracing.py
import langwatch
from litellm import completion

# Initialize LangWatch
langwatch.setup()

# Decorate the function so the generation is captured as a trace
@langwatch.trace()
def customer_support_generation():
    # Get the prompt (linked to the active trace automatically when an API key is configured)
    prompt = langwatch.prompts.get("customer-support-bot")

    # Compile prompt with variables
    compiled_prompt = prompt.compile(
        user_name="John Doe",
        user_email="[email protected]",
        input="I need help with my account"
    )

    # Use with LiteLLM (unified interface to multiple providers)
    response = completion(
        model=prompt.model,  # LiteLLM handles provider prefixes automatically
        messages=compiled_prompt.messages
    )

    return response.choices[0].message.content

# Call the function
result = customer_support_generation()
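Once this runs, the trace in your LangWatch dashboard shows the generation alongside the prompt version that produced it, which is what makes comparing prompt versions possible.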
