Provider Configuration
How to configure Anthropic, OpenAI, Gemini, Groq, OpenRouter, and Ollama with AI Agent Flow.
AI Agent Flow supports six LLM providers, both cloud and local. Configuration is handled interactively via aiagentflow init, which writes a .aiagentflow/config.json file in your project directory.
aiagentflow init
The wizard walks you through selecting providers, entering API keys, assigning models per agent role, and setting workflow preferences.
Run aiagentflow doctor after setup to verify that all providers can connect and to list available models.
Anthropic (Recommended)
Claude Sonnet 4 is the recommended model for architecture and coding tasks due to its strong reasoning and large context window.
Get your API key at console.anthropic.com.
Default model: claude-sonnet-4-20250514
OpenAI
Compatible with GPT-4o, GPT-4o-mini, and o-series models.
Get your API key at platform.openai.com.
Default model: gpt-4o-mini
Google Gemini
Gemini's large context window makes it excellent for analyzing large codebases.
Get your API key at aistudio.google.com.
Default model: gemini-2.0-flash
Groq
Groq provides extremely fast inference on open-source models. The free tier is generous, making it good for testing and rapid iteration.
Get your API key at console.groq.com.
Default model: llama-3.3-70b-versatile
OpenRouter
OpenRouter proxies 100+ models through a single OpenAI-compatible API. Many models are available for free, with no credit card required.
Get your API key at openrouter.ai.
Default model: meta-llama/llama-3.1-8b-instruct:free
Append :free to any model ID on OpenRouter to use the free version, e.g. google/gemma-3-12b-it:free.
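Because OpenRouter exposes an OpenAI-compatible endpoint, you can sanity-check a free model directly, independently of AI Agent Flow. A sketch, assuming you have exported your key as OPENROUTER_API_KEY:

```shell
# Direct call to OpenRouter's OpenAI-compatible chat completions endpoint.
# OPENROUTER_API_KEY is assumed to be set in your environment.
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/llama-3.1-8b-instruct:free",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```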
Ollama (Local First)
For maximum privacy, run AI Agent Flow entirely offline using local models via Ollama.
- Install Ollama
- Pull a model: ollama pull llama3.2
- Select Ollama during aiagentflow init; no API key is needed.
Default model: llama3.2:latest
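The steps above can be run end to end as follows. This sketch assumes Linux/macOS and Ollama's published install script; review any script before piping it to a shell:

```shell
# 1. Install Ollama (official install script from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# 2. Pull a local model
ollama pull llama3.2

# 3. Run the setup wizard and choose Ollama; no API key is needed
aiagentflow init
```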
Assigning Models Per Agent
During aiagentflow init you can assign different providers and models to each of the six agent roles (Architect, Coder, Reviewer, Tester, Fixer, Judge). This lets you use a powerful model for the Architect while using a faster/cheaper one for repetitive tasks like the Fixer.
The configuration is saved to .aiagentflow/config.json in your project root.
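As a rough illustration, a generated .aiagentflow/config.json might look like the sketch below. The wizard produces the real file, so the field names here are assumptions, not the tool's documented schema; only the provider and model IDs come from this page:

```json
{
  "providers": {
    "anthropic": { "apiKey": "sk-ant-..." },
    "groq": { "apiKey": "gsk-..." }
  },
  "agents": {
    "architect": { "provider": "anthropic", "model": "claude-sonnet-4-20250514" },
    "coder": { "provider": "anthropic", "model": "claude-sonnet-4-20250514" },
    "reviewer": { "provider": "groq", "model": "llama-3.3-70b-versatile" },
    "tester": { "provider": "groq", "model": "llama-3.3-70b-versatile" },
    "fixer": { "provider": "groq", "model": "llama-3.3-70b-versatile" },
    "judge": { "provider": "anthropic", "model": "claude-sonnet-4-20250514" }
  }
}
```

This split illustrates the pattern described above: a stronger (and costlier) model for the Architect and Judge, a fast inexpensive one for repetitive roles like the Fixer.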