Create your first prompt and use it in your application
Learn how to create your first prompt in LangWatch and use it in your application with dynamic variables. This enables your team to update AI interactions without code changes.
At runtime, you can fetch the latest version of your prompt from LangWatch using the prompt handle.
use_prompt.py
```python
import langwatch
from litellm import completion

# Get the latest prompt by handle
prompt = langwatch.prompts.get("customer-support-bot")

# Compile prompt with variables
compiled_prompt = prompt.compile(
    user_name="John Doe",
    user_email="john@example.com",
    input="How do I reset my password?"
)

# Use with LiteLLM (unified interface to multiple providers)
response = completion(
    model=prompt.model,  # LiteLLM handles provider prefixes automatically
    messages=compiled_prompt.messages
)

print(response.choices[0].message.content)
```
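The compiled messages are plain chat-format dictionaries, so you can also pass them to a provider SDK directly instead of LiteLLM. Below is a minimal sketch using the OpenAI SDK; the prefix-stripping step assumes `prompt.model` carries a LiteLLM-style provider prefix (e.g. `openai/gpt-4o-mini`), which is an assumption based on the comment above, not documented LangWatch behavior:

```python
from openai import OpenAI

client = OpenAI()

# Assumption: prompt.model is provider-prefixed (e.g. "openai/gpt-4o-mini");
# the OpenAI SDK expects the bare model name, so strip the prefix.
model_name = prompt.model.split("/", 1)[-1]

response = client.chat.completions.create(
    model=model_name,
    messages=compiled_prompt.messages,  # assumed to be OpenAI-format message dicts
)
print(response.choices[0].message.content)
```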
You can link your prompt to LLM generation traces to track performance and see which prompt versions work best. For details, see the Link to Traces page.
tracing.py
```python
import langwatch
from litellm import completion

# Initialize LangWatch
langwatch.setup()

# Create a traced function
@langwatch.trace()
def customer_support_generation():
    # Get prompt (automatically linked to trace when API key is present)
    prompt = langwatch.prompts.get("customer-support-bot")

    # Compile prompt with variables
    compiled_prompt = prompt.compile(
        user_name="John Doe",
        user_email="john@example.com",
        input="I need help with my account"
    )

    # Use with LiteLLM (unified interface to multiple providers)
    response = completion(
        model=prompt.model,  # LiteLLM handles provider prefixes automatically
        messages=compiled_prompt.messages
    )

    return response.choices[0].message.content

# Call the function
result = customer_support_generation()
```
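Because the prompt is fetched over the network at call time, you may want a local fallback so the feature degrades gracefully if LangWatch is unreachable. Here is a minimal sketch; the `FALLBACK_MESSAGES` list, the fallback model name, and the broad exception handler are illustrative assumptions, not part of the SDK:

```python
import langwatch
from litellm import completion

# Hypothetical hardcoded fallback, used only when the fetch or compile fails
FALLBACK_MESSAGES = [
    {"role": "system", "content": "You are a helpful customer support agent."},
    {"role": "user", "content": "I need help with my account"},
]

try:
    prompt = langwatch.prompts.get("customer-support-bot")
    compiled_prompt = prompt.compile(
        user_name="John Doe",
        user_email="john@example.com",
        input="I need help with my account"
    )
    model, messages = prompt.model, compiled_prompt.messages
except Exception:
    # Assumption: any failure falls back to the static prompt and a fixed model
    model, messages = "openai/gpt-4o-mini", FALLBACK_MESSAGES

response = completion(model=model, messages=messages)
print(response.choices[0].message.content)
```

In production you would typically narrow the exception type and log the failure so you notice when the fallback is being used.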