To begin working with LLM nodes, first create a new workflow by navigating to the workflows page and clicking “Create New Workflow.” You can choose from available templates, but for learning purposes, the blank template is a good starting point. After naming your workflow, the system automatically creates three basic blocks: an entry node, an LLM call node, and an end node.
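Under the hood, a workflow like this is just a small directed graph. The snippet below is purely illustrative (the field names are hypothetical, not the studio's actual schema) and shows what the three auto-created blocks amount to:

```python
# Hypothetical sketch of the auto-created three-block workflow.
# Field names are illustrative only, not the studio's real schema.
workflow = {
    "name": "my-first-workflow",
    "nodes": [
        {"id": "entry", "type": "entry"},  # receives the workflow input
        {"id": "llm_call", "type": "llm"},  # runs the language model
        {"id": "end", "type": "end"},      # returns the final output
    ],
    "edges": [
        ("entry", "llm_call"),  # input flows into the LLM node
        ("llm_call", "end"),    # LLM output flows to the end node
    ],
}
```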
The LLM node is where the actual language model interaction happens. Each node has configurable properties accessible through the right sidebar (sketched conceptually after this list), including:
- LLM provider selection
- LLM instructions
- Input and output fields
- Few-shot demonstrations
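Conceptually, those sidebar properties fit together roughly like this. The structure below is a hedged illustration, not the studio's real configuration format:

```python
# Illustrative only: how the sidebar properties of an LLM node relate.
llm_node = {
    "provider": "openai/gpt-4o-mini",         # LLM provider selection
    "instructions": "Classify the expense.",  # guides the model's behavior
    "inputs": ["purchase", "amount"],         # named fields passed to the LLM
    "outputs": ["category"],                  # fields the LLM must produce
    "demonstrations": [                       # few-shot examples
        {"purchase": "Uber ride", "amount": "23.50", "category": "travel"},
    ],
}
```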
You can quickly test an LLM node by using the “Run with manual input” option, which allows you to input test queries and see immediate results. The system will show you both the cost and duration of each execution.
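The studio reports these numbers for you, but the idea is easy to reproduce by hand: time the call and derive cost from the token usage returned on the response. A minimal sketch with the OpenAI Python SDK (the per-token prices here are placeholders; check your provider's current pricing):

```python
import time

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

start = time.perf_counter()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Categorize: Uber ride, $23.50"}],
)
duration = time.perf_counter() - start

# Placeholder prices per 1M tokens; substitute your provider's actual rates.
PROMPT_PRICE, COMPLETION_PRICE = 0.15, 0.60
usage = response.usage
cost = (
    usage.prompt_tokens * PROMPT_PRICE
    + usage.completion_tokens * COMPLETION_PRICE
) / 1_000_000

print(f"duration: {duration:.2f}s, cost: ${cost:.6f}")
```

Token counts come back on the response itself, so the cost estimate needs no extra API calls.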
One of the most important aspects of the LLM node is how you configure its inputs and outputs. The field names are meaningful, as they are passed directly to the LLM. As sketched after this list, you can:
- Add multiple input fields (such as ‘purchase’ and ‘amount’)
- Create custom output fields for different types of responses
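One common way such named fields reach the model is by templating them into the prompt, with the field names acting as labels the LLM can read. A hypothetical illustration of that mechanism:

```python
# Hypothetical: named input fields rendered into a prompt the LLM sees.
inputs = {"purchase": "Uber ride", "amount": "23.50"}
outputs = ["category", "justification"]

prompt = "\n".join(f"{name}: {value}" for name, value in inputs.items())
prompt += "\n\nRespond with the fields: " + ", ".join(outputs)

print(prompt)
# purchase: Uber ride
# amount: 23.50
#
# Respond with the fields: category, justification
```

This is why descriptive field names tend to produce better results than generic ones like ‘input1’: the model actually reads them.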
To get better responses from your LLM, you can add specific instructions in the node properties. These instructions help guide the LLM’s behavior.
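Instructions like these typically travel as a system message alongside the user input; whether the studio sends them exactly this way is an assumption, but the sketch below shows the general mechanism with the OpenAI Python SDK:

```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The node's instructions play the role of the system message.
        {
            "role": "system",
            "content": "You are an expense classifier. "
                       "Answer with a single category word.",
        },
        {"role": "user", "content": "purchase: Uber ride\namount: 23.50"},
    ],
)
print(response.choices[0].message.content)
```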
You’re not limited to the default LLM providers. You can set up a custom provider (sketched after these steps) by:
1. Accessing the “Configure available model” settings
2. Enabling custom settings
3. Adding your API keys
4. Configuring custom or fine-tuned models
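In practice, the last two steps often amount to pointing at an OpenAI-compatible endpoint with a base URL and an API key (more on this below). A minimal sketch with the OpenAI Python SDK, where the endpoint URL and model name are hypothetical:

```python
from openai import OpenAI

# Hypothetical self-hosted, OpenAI-compatible endpoint (e.g. a Llama server).
client = OpenAI(
    base_url="http://localhost:8000/v1",  # your server's URL
    api_key="YOUR_API_KEY",               # whatever key your server expects
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # hypothetical model name on your server
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```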
Apart from the main providers (OpenAI, Anthropic, Google, Groq, etc.), the system also supports any OpenAI-compatible API, for example a self-hosted Llama model.

This is just the beginning of the Optimization Studio. The LLM node serves as the foundation for more advanced features like image processing, evaluation, and automatic optimization, which are covered in the next tutorials.