Getting Started
To begin working with LLM nodes, first create a new workflow by navigating to the workflows page and clicking “Create New Workflow.” You can choose from available templates, but for learning purposes, the blank template is a good starting point. After naming your workflow, the system automatically creates three basic blocks: an entry node, an LLM call node, and an end node.

Understanding the LLM Node (0:34)
The LLM node is where the actual language model interaction happens. Each node has configurable properties accessible through the right sidebar, including:
- LLM provider selection
- LLM instructions
- Input and output fields
- Few-shot demonstrations
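These properties are configured in the sidebar, but as a rough mental model you can think of each LLM node as a small bundle of settings. The sketch below is purely illustrative; the field names are assumptions, not the product’s actual schema:

```python
# Illustrative only: an LLM node's configurable properties as a plain dict.
llm_node = {
    "provider": "openai/gpt-4o-mini",              # LLM provider selection
    "instructions": "Classify the purchase.",      # LLM instructions
    "inputs": ["purchase", "amount"],              # input fields
    "outputs": ["category"],                       # output fields
    "demonstrations": [],                          # few-shot examples (empty for now)
}
```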
Configuring Input and Output Fields (1:31)
One of the most important aspects of the LLM node is how you configure its inputs and outputs. The field names are meaningful, as they’re passed directly to the LLM. You can:
- Add multiple input fields (such as “purchase” and “amount”)
- Create custom output fields for different types of responses
- Rename fields to better represent their purpose
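Because field names are passed directly to the model, descriptive names act as free context. A minimal sketch of how named input fields might end up in the message the model sees (the formatting here is an assumption for illustration):

```python
def build_user_message(fields: dict) -> str:
    # Each field name appears verbatim alongside its value, so a name
    # like "purchase" tells the model what the value represents.
    return "\n".join(f"{name}: {value}" for name, value in fields.items())

msg = build_user_message({"purchase": "coffee beans", "amount": "12.50"})
```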
Working with Datasets (2:10)
LLM nodes become particularly powerful when connected to datasets. Through the entry node, you can:
- Select and load your datasets
- Map dataset fields to LLM input fields
- Test your workflow using random samples from your dataset
- Connect multiple dataset fields to provide richer context to your LLM
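The mapping step above is done visually in the entry node, but conceptually it is just a lookup from LLM input fields to dataset columns. A hedged sketch (column and field names are invented for illustration):

```python
import random

# A toy dataset; column names need not match the LLM node's field names.
dataset = [
    {"description": "coffee beans", "price": "12.50"},
    {"description": "train ticket", "price": "34.00"},
]

# Map each LLM input field to the dataset column that feeds it.
field_map = {"purchase": "description", "amount": "price"}

def map_row(row: dict) -> dict:
    return {llm_field: row[column] for llm_field, column in field_map.items()}

# Testing with a random sample, as the entry node does.
sample = map_row(random.choice(dataset))
```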
Improving Results with Instructions (2:58)
To get better responses from your LLM, you can add specific instructions in the node properties. These instructions help guide the LLM’s behavior and can include:
- Expected output categories
- Format specifications
- Processing guidelines
- Context information
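Instructions of this kind typically travel as the system message, ahead of the user input. A minimal sketch, assuming a chat-style message format (the instruction text itself is only an example):

```python
# Example instructions combining categories, format, and guidelines.
instructions = (
    "You are a transaction classifier.\n"
    "- Output exactly one category: groceries, travel, or other.\n"
    "- Respond with the category name only, in lowercase."
)

messages = [
    {"role": "system", "content": instructions},
    {"role": "user", "content": "purchase: coffee beans\namount: 12.50"},
]
```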
Creating Complex Workflows (4:25)
You’re not limited to single LLM nodes. You can create sophisticated workflows by:
- Connecting multiple LLM nodes in sequence
- Passing outputs from one node as inputs to another
- Using different LLM models for different tasks
- Adjusting temperature and other parameters independently for each node
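Chaining works by feeding one node’s output into the next node’s input, with each node free to use its own model and parameters. A sketch of that pattern with a stub standing in for the real provider call (everything here is illustrative):

```python
def run_node(model: str, temperature: float, prompt: str, call_fn):
    # Each node carries its own model and temperature settings.
    return call_fn(model=model, temperature=temperature, prompt=prompt)

def fake_llm(model, temperature, prompt):
    # Stub standing in for a real provider call.
    return f"[{model}@{temperature}] {prompt.upper()}"

# First node summarizes; second node consumes that output with a
# different (hypothetical) model and a higher temperature.
summary = run_node("small-model", 0.0, "summarize: long review text", fake_llm)
verdict = run_node("large-model", 0.7, f"classify sentiment of: {summary}", fake_llm)
```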
Monitoring and Tracking (5:54)
Every LLM node execution is tracked in detail. You can:
- View the full execution trace in LangWatch trace monitoring
- Examine system prompts and user requests
- Track costs and performance metrics
- Analyze the complete message flow
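The trace monitoring handles this bookkeeping automatically; purely as a mental model, per-node cost tracking amounts to recording token counts and multiplying by a rate. The rate and numbers below are invented for illustration:

```python
trace = []

def record(node: str, model: str, prompt_tokens: int, completion_tokens: int,
           cost_per_1k: float = 0.001):
    # Hypothetical flat rate per 1k tokens; real pricing varies by model.
    cost = (prompt_tokens + completion_tokens) / 1000 * cost_per_1k
    trace.append({"node": node, "model": model, "cost": cost})
    return cost

record("classify", "gpt-4o-mini", 120, 8)
total_cost = sum(entry["cost"] for entry in trace)
```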
Using Demonstrations (6:58)
To improve your LLM’s performance, you can provide example cases through demonstrations. In the node properties, you can:
- Add input-output pairs as examples
- Save demonstrations for reuse
- Test how different examples affect results
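Under the hood, few-shot demonstrations are commonly interleaved into the conversation as example user/assistant turns before the real input. A sketch assuming a chat-style format (the pairs themselves are invented):

```python
# Input-output pairs saved as demonstrations.
demonstrations = [
    {"purchase": "coffee beans", "amount": "12.50", "category": "groceries"},
    {"purchase": "train ticket", "amount": "34.00", "category": "travel"},
]

def to_messages(demos: list) -> list:
    # Each demo becomes a user turn (inputs) followed by an
    # assistant turn (the expected output).
    msgs = []
    for d in demos:
        msgs.append({"role": "user",
                     "content": f"purchase: {d['purchase']}\namount: {d['amount']}"})
        msgs.append({"role": "assistant", "content": d["category"]})
    return msgs

few_shot = to_messages(demonstrations)
```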
Custom LLM Providers (7:27)
You’re not limited to default LLM providers. You can set up custom providers by:
- Accessing the “Configure available model” settings
- Enabling custom settings
- Adding your API keys
- Configuring custom or fine-tuned models
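These settings live in the UI dialog, but the shape of a custom provider entry is roughly an endpoint, a key, and a model name. A hypothetical sketch (the URL, env var, and model name are placeholders, not real settings):

```python
import os

# Hypothetical custom provider entry; keep API keys in environment
# variables rather than hard-coding them in source or config files.
custom_provider = {
    "base_url": "https://llm.example.internal/v1",
    "api_key": os.environ.get("MY_LLM_API_KEY", ""),
    "model": "my-finetuned-model-v2",   # custom or fine-tuned model
}
```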