Select an appropriate large language model (LLM) for your AI agent’s needs
Create and manage API credentials for secure connections
Find and use your API key from a provider like OpenAI
Compare popular language models for different tasks
Configure your agent’s model and credentials in the n8n platform
Understand how account billing works with API-based AI models
Setting up the “brain” of your AI agent means choosing a language model and connecting it securely so your agent can process and generate language. This lesson guides you through attaching a powerful LLM, such as OpenAI’s GPT models, Anthropic’s Claude, or Google’s Gemini, to your agent within the n8n workflow builder. Language models differ in their strengths: for example, Claude works well for writing-heavy tasks, while Gemini is strong with coding-related requests. Which one to use depends on your project goals and available providers.
To access these models, you need to securely enter API keys, the secret tokens that let your agent authenticate with the model provider. The lesson shows where to get these keys, how to create and store them in n8n, and how to choose a specific model from the available list. This skill is necessary for anyone looking to build an AI agent, as it unlocks the agent’s ability to understand prompts, return results, and provide smart outputs rather than basic automation. Whether you’re building customer support bots, automation helpers, or content assistants, connecting the brain is an essential step before you can give your agent a personality or custom instructions.
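As a minimal sketch of the credential-handling idea outside n8n (the helper names here are illustrative, not part of n8n or any provider SDK), the key point is that the API key lives in an environment variable or credential store, never in your workflow code:

```python
import os


def load_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Read the provider API key from an environment variable.

    Keeping the key in the environment (or in n8n's credential store)
    keeps it out of source code and version control.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set; create a key in your provider's "
            "dashboard and export it before running the agent."
        )
    return key


def build_chat_request(model: str, prompt: str, api_key: str) -> dict:
    """Assemble the parts of a chat-completion-style HTTP request.

    The URL and payload shape mirror OpenAI's chat completions API;
    other providers differ in detail but follow the same pattern.
    """
    return {
        "url": "https://api.openai.com/v1/chat/completions",
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

In n8n itself you never write this by hand: the credential entry you create plays the role of `load_api_key`, and the model node builds the request for you.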
If you’re looking to add real intelligence to your AI workflow, this lesson is for you. It’s relevant whether you’re building customer support bots, automation helpers, or content assistants.
Setting up the language model “brain” is a foundational step that comes right after defining your agent’s basic structure and role. Without connecting an LLM, your agent can’t process conversations or generate meaningful replies. For example, once connected, your support bot can answer customer questions with real context. Or, in a content workflow, the agent can draft emails or articles based on prompts. This connection ensures every interaction or automation step benefits from advanced, up-to-date language understanding and generation—serving as the backbone for all later customization and behavior you’ll define for your agent, like its personality or specific task focus.
Previously, making an agent “smart” often required manual prompt engineering or pre-programmed responses without context. By connecting your agent to an LLM with API credentials, you unlock advanced natural language processing and save hours of manual coding and content writing. Across support, writing, or automation tasks, the difference is clear: the old way meant rigid, predictable outputs; now you get flexible, high-quality responses that adapt to each situation or query.
For instance, using OpenAI’s GPT-4o mini as your agent’s brain lets you reliably answer complex questions or automate summarization at scale, for just a small fee per request. The setup also supports switching models based on the task—so you can use one brain for speedy answers and another for longer, creative responses, all without rewriting your workflows. This flexibility and quality boost leads to more productive agents and less manual oversight.
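The task-based model switching described above can be pictured as a simple routing table (the task names, model names, and mapping below are illustrative assumptions for this sketch, not an n8n API or a recommendation):

```python
# Illustrative task-to-model routing; the mapping is an assumption
# for this sketch, not a recommendation or part of n8n.
MODEL_FOR_TASK = {
    "quick_answer": "gpt-4o-mini",      # fast, low cost per request
    "writing": "claude-3-5-sonnet",     # writing-heavy tasks
    "coding": "gemini-1.5-pro",         # coding-related requests
}

DEFAULT_MODEL = "gpt-4o-mini"


def pick_model(task_type: str) -> str:
    """Return the model configured for this task type, or a default."""
    return MODEL_FOR_TASK.get(task_type, DEFAULT_MODEL)
```

In n8n the same effect comes from attaching different model nodes (or swapping the model dropdown) per branch of your workflow, so no workflow logic needs rewriting when you change brains.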
Practice Exercise
To try this skill in a real setting, use a test n8n instance: create and store your API credentials, attach a model to your agent, then send the same prompt through two or more different models and compare the responses.
After testing, reflect: How does the agent’s answer quality change with different models? Which model best matches your workflow’s needs? This comparison will help clarify which brain is best for your larger projects.
This lesson is an essential milestone in your Agent Build sequence: connecting your agent to a language model gives it its core intelligence. Up until now, you’ve prepared the structure of your agent. Coming up, you’ll be customizing your agent’s behavior, instructions, or personality. Each step adds another layer of capability. To learn how to make your agent act and communicate exactly as you want, continue to the next lesson or explore more topics in the course.