Create your first prompt and use it in your application
Learn how to create your first prompt in LangWatch and use it in your application with dynamic variables. This enables your team to update AI interactions without code changes.
At runtime, you can fetch the latest version of your prompt from LangWatch using either the prompt handle or ID.
use_prompt.py
```python
import langwatch
from openai import OpenAI

# Get the latest prompt (can use handle or ID)
prompt = langwatch.prompt.get_prompt("customer-support-bot")  # by handle
# or
# prompt = langwatch.prompt.get_prompt("prompt_TrYXZLsiTJkn9N6PiZiae")  # by ID

# Format messages with variables
messages = prompt.format_messages(
    user_name="John Doe",
    user_email="[email protected]",
    input="How do I reset my password?",
)

# Use with OpenAI client
client = OpenAI()
completion = client.chat.completions.create(
    model=prompt.model.split("openai/")[1],  # Remove "openai/" prefix
    messages=messages,
)
print(completion.choices[0].message.content)
```
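The `prompt.model.split("openai/")[1]` expression above raises an `IndexError` if the model string ever lacks the `openai/` prefix. A small helper makes the stripping safe; this is a hypothetical utility of our own, not part of the LangWatch SDK:

```python
def strip_provider_prefix(model: str, provider: str = "openai") -> str:
    """Strip a leading 'provider/' prefix from a model string, if present.

    Hypothetical helper (not from the LangWatch SDK): model strings such as
    "openai/gpt-4o-mini" carry a provider prefix that the OpenAI client
    does not expect in its `model` parameter.
    """
    prefix = provider + "/"
    if model.startswith(prefix):
        return model[len(prefix):]
    # Already bare; return unchanged instead of raising
    return model

print(strip_provider_prefix("openai/gpt-4o-mini"))  # gpt-4o-mini
print(strip_provider_prefix("gpt-4o-mini"))         # gpt-4o-mini
```

You could then pass `model=strip_provider_prefix(prompt.model)` instead of splitting on the prefix directly.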
You can link your prompt to LLM generation traces to track performance and see which prompt versions work best. For detailed information about linking prompts to traces, see the Link to Traces page.
tracing.py
```python
import langwatch
from openai import OpenAI

# Initialize LangWatch
langwatch.setup()

# Create a traced function
@langwatch.trace()
def customer_support_generation():
    # Get prompt (automatically linked to the trace when an API key is present)
    prompt = langwatch.prompt.get_prompt("customer-support-bot")

    # Format messages with variables
    messages = prompt.format_messages(
        user_name="John Doe",
        user_email="[email protected]",
        input="I need help with my account",
    )

    # Use with OpenAI client
    client = OpenAI()
    completion = client.chat.completions.create(
        model=prompt.model.split("openai/")[1],  # Remove "openai/" prefix
        messages=messages,
    )
    return completion.choices[0].message.content

# Call the function
result = customer_support_generation()
```
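Conceptually, `format_messages` fills the prompt's dynamic variables (`user_name`, `user_email`, `input`) into the message templates stored in LangWatch. The sketch below is a minimal stand-in for that step, not the SDK's implementation, and the `{{variable}}` placeholder syntax is an assumption made for illustration:

```python
def render_messages(template_messages, **variables):
    """Minimal sketch of prompt-template rendering (NOT the LangWatch
    implementation): substitutes {{name}} placeholders in each message's
    content with the supplied variable values."""
    rendered = []
    for msg in template_messages:
        content = msg["content"]
        for name, value in variables.items():
            content = content.replace("{{" + name + "}}", str(value))
        rendered.append({"role": msg["role"], "content": content})
    return rendered

# Hypothetical template resembling what a managed prompt might store
template = [
    {"role": "system", "content": "You are a support agent helping {{user_name}}."},
    {"role": "user", "content": "{{input}}"},
]
messages = render_messages(
    template, user_name="John Doe", input="I need help with my account"
)
print(messages[0]["content"])  # You are a support agent helping John Doe.
```

Because the rendered output is a plain list of role/content dicts, it can be passed directly to `client.chat.completions.create(...)`, which is what makes server-managed prompts drop-in compatible with the OpenAI client.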