Prompt Playground
The Prompt Playground is an interactive workspace for editing, testing, and iterating on prompts with real-time AI assistance. Work with multiple prompts simultaneously in a browser-like tab interface, test variations with variables, and get instant feedback through conversation testing.
Key Features
- Multi-tab editing - Open and edit multiple prompts simultaneously in separate tabs
- Split view - Compare prompts side-by-side by splitting tabs into multiple windows
- AI-powered testing - Test prompts in real-time with an integrated chat interface
- Variable testing - Test prompts with different variable values without saving
- Trace integration - Load prompts from execution traces to iterate on real conversations
- Version history - Access and restore previous prompt versions directly from the editor
- API code generation - Generate code snippets for using prompts in your application
Getting started
Access the Playground
- Navigate to your project dashboard
- Click Prompt Management in the sidebar
- Select Prompt Playground (or navigate to /prompt-studio)
Open a prompt
Prompts are organized in the left sidebar by folder:
- Click any prompt in the sidebar to open it in a new tab
- Prompts are grouped by folder (e.g., customer-support/, agent/)
- Click the + button to create a new prompt
Edit prompts
Each prompt tab contains three main sections:
Header Bar
The header provides quick access to essential controls:
- Handle Editor - Edit the prompt handle (identifier)
- Model Selector - Choose the LLM model for this prompt
- API Snippet - Generate code snippets for TypeScript, Python, or REST API
- Version History - View and restore previous versions
- Save Button - Save changes (creates a new version)
Messages Editor
Edit your prompt messages directly in the messages editor:
- Add system, user, and assistant messages
- Use variables with the {{variable_name}} syntax (see the sketch after this list)
- Reorder messages by dragging
- Delete messages with the remove button

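To illustrate how {{variable_name}} placeholders behave, here is a minimal Python sketch of compiling a template with concrete values. The compile_template helper, the prompt text, and the values are illustrative assumptions, not the LangWatch implementation; the playground does this substitution for you.

```python
import re

def compile_template(template: str, variables: dict[str, str]) -> str:
    """Replace {{variable_name}} placeholders with concrete values (illustrative helper)."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: variables.get(m.group(1), m.group(0)), template)

# Hypothetical prompt message using the playground's variable syntax
system_prompt = "You are a support agent for {{company_name}}. Answer in {{language}}."

print(compile_template(system_prompt, {"company_name": "Acme", "language": "English"}))
# -> You are a support agent for Acme. Answer in English.
```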
Tabbed Sections
Each prompt tab includes three tabs for different aspects of prompt management:
Conversation Tab
Test your prompt interactively:
- Chat with the prompt using the current configuration
- See real-time responses as you edit
- Reset the conversation to start fresh
Variables Tab
Test your prompt with different variable values:
- Set values for all prompt variables
- Variables are automatically extracted from your prompt template
- Test multiple scenarios without saving changes
- Values persist while the tab is open

Settings Tab
Configure advanced prompt settings:
- Model Parameters - Temperature, max tokens, response format
- Input/Output Types - Define expected input and output schemas
- Scope - Set prompt visibility (PROJECT or ORGANIZATION)
- Metadata - Add descriptions and tags
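As a rough mental model of what these settings map to, here is a hypothetical Python dictionary. The field names and values are illustrative assumptions only; the actual structure is whatever the Settings tab saves for your prompt.

```python
# Hypothetical shape of a saved prompt configuration; field names are
# illustrative assumptions, not the exact LangWatch schema.
prompt_settings = {
    "model": "openai/gpt-4o-mini",           # chosen in the header's Model Selector
    "temperature": 0.2,                       # Model Parameters
    "max_tokens": 512,
    "response_format": {"type": "json_object"},
    "inputs": [{"identifier": "question", "type": "str"}],   # Input/Output Types
    "outputs": [{"identifier": "answer", "type": "str"}],
    "scope": "PROJECT",                       # PROJECT or ORGANIZATION
    "metadata": {"description": "FAQ answering prompt", "tags": ["support"]},
}
```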
Multi-tab workflow
Open multiple prompts
- Click prompts in the sidebar to open them in new tabs
- Each tab operates independently
- Switch between tabs by clicking the tab header
- Close tabs by clicking the × button
Split tabs
Compare prompts side-by-side:
- Click the split icon (columns icon) in the active tab
- The tab opens in a new window pane
- Drag tabs between panes to reorganize
- Each pane can contain multiple tabs

Drag tabs
- Drag tabs within the same window to reorder
- Drag tabs between windows to move them
- Drag tabs to create new windows
Import prompts from traces
Import prompts from execution traces to iterate on real conversations:
- Navigate to a trace in LangWatch
- Click on an LLM span
- Select “Open in Prompt Playground”
- The prompt configuration loads automatically with:
- System prompt from the trace
- Model and parameters
- Conversation history in the chat tab
Manage versions
View version history
- Click the version history icon in the header
- Browse all versions with timestamps
- See commit messages for each version
- Click a version to preview it
Restore versions
- Open version history
- Select the version you want to restore
- Click Restore to load it into the editor
- Make any additional changes and save
Restoring a version loads it into the editor but doesn’t create a new version
until you save.
Generate API code
Generate ready-to-use code snippets:
- Click the API Snippet button in the header
- Select your preferred language (TypeScript, Python, or REST)
- Copy the generated code
- Use it directly in your application
The generated code includes:
- Proper authentication headers
- Variable compilation
- Model configuration
- Error handling
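For orientation, here is a rough Python sketch of the kind of request such a snippet makes. The endpoint path, prompt handle, auth header, and environment variable name are assumptions for illustration; the snippet generated by the API Snippet button is the authoritative version for your project.

```python
import os
import requests

# Hypothetical endpoint, handle, and auth header; use the snippet generated
# by the API Snippet button for the real paths, fields, and auth scheme.
LANGWATCH_API_KEY = os.environ["LANGWATCH_API_KEY"]

response = requests.get(
    "https://app.langwatch.ai/api/prompts/customer-support/faq",  # hypothetical prompt handle
    headers={"X-Auth-Token": LANGWATCH_API_KEY},
    timeout=10,
)
response.raise_for_status()
prompt = response.json()
print(prompt)
```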
Best practices
Testing workflow
- Edit your prompt in the messages editor
- Set variables in the Variables tab
- Test in the Conversation tab
- Iterate based on results
- Save when satisfied
Comparison workflow
- Open the original prompt in one tab
- Create a new version or duplicate
- Split tabs to view side-by-side
- Test both versions with the same variables
- Compare results before saving
Trace-based iteration
- Load a problematic trace into the playground
- Review the conversation in the chat tab
- Identify issues with the prompt
- Edit and test improvements
- Save the improved version
Keyboard shortcuts
- Cmd/Ctrl + S - Save current prompt
- Cmd/Ctrl + W - Close current tab
- Tab - Navigate between form fields
Troubleshooting
Chat Not Responding
- Check that your prompt has a valid model selected
- Ensure variables are set if required
- Verify your API key has proper permissions
- Check browser console for errors
Changes Not Saving
- Ensure you have the prompts:edit permission
- Check that the prompt handle is valid
- Verify network connectivity
- Look for validation errors in the form
Tab Not Loading
- Refresh the page
- Check browser console for errors
- Ensure the prompt exists and is accessible
- Try opening the prompt from the sidebar again
Related Documentation
- Getting Started - Create your first prompt
- Version Control - Manage prompt versions
- Link to Traces - Connect prompts to execution traces
- Analytics - Monitor prompt performance