AI Agents

How to Pick the Right Local Model for AI Agent Workflows

Running AI agents locally is having a moment — and for good reason. Privacy, no API costs, no rate limits, and full control over your stack. But picking the wrong model for agentic work is one of the fastest ways to end up with a flaky, slow, or useless agent.

This guide cuts through the noise.


Why Model Selection Matters More for Agents

When you use a model for chat, a slightly wrong answer is annoying. When you use a model as an agent, a slightly wrong answer might mean:

- A malformed tool call that crashes the agent loop
- A destructive action taken on the wrong file or record
- A workflow that silently stalls or spins in a retry loop

Agents demand consistency, instruction-following, and structured output — not just coherent prose.


The Three Things That Actually Matter

1. Tool-Calling Reliability

Can the model reliably emit valid JSON tool calls, every time, not just most of the time? A model that nails tool calls 90% of the time will fail roughly once every 10 agent steps — enough to derail any complex workflow.

Test it: Give the model a function signature and ask it to call the function with edge-case inputs. Run it 20 times. Count failures.
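A minimal sketch of that reliability test, assuming a `call_model` callable that wraps your local model endpoint (a stand-in, not any specific library's API) and a tool-call format of `{"name": ..., "arguments": ...}`:

```python
import json

# `call_model` is a placeholder for your local model client
# (e.g. an HTTP call to your inference server) -- swap in your own.
REQUIRED_KEYS = {"name", "arguments"}

def is_valid_tool_call(raw: str) -> bool:
    """True if `raw` parses as JSON with the expected tool-call shape."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(obj, dict) and REQUIRED_KEYS <= obj.keys()

def failure_rate(call_model, prompt: str, runs: int = 20) -> float:
    """Send the same tool-call prompt `runs` times; return the failure fraction."""
    failures = sum(not is_valid_tool_call(call_model(prompt)) for _ in range(runs))
    return failures / runs
```

Anything above roughly a 5% failure rate here will compound badly over multi-step workflows.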

2. Instruction Following Under Pressure

Agents often work with long system prompts, accumulated context, and mid-task instructions. Does the model still follow its system prompt after 8,000 tokens of context? Many don't.

Test it: Add a constraint in the system prompt ("never use the word 'certainly'"). Run a long conversation. See how long it holds.

3. Context Window vs. Practical Context

A 128k context window means nothing if the model loses the thread at 16k. Always check benchmarks for "lost in the middle" performance, not just maximum context length.
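You can probe this yourself with a needle-in-a-haystack test: bury a fact at different depths in filler text and ask the model to retrieve it. A minimal prompt builder (the function name and the `SECRET=` needle format are illustrative, not from any benchmark suite):

```python
def needle_prompt(filler: str, needle: str, total_chars: int, position: float) -> str:
    """Embed `needle` at fractional `position` (0.0-1.0) inside `total_chars` of filler."""
    pad = (filler * (total_chars // len(filler) + 1))[:total_chars]
    cut = int(total_chars * position)
    return pad[:cut] + "\n" + needle + "\n" + pad[cut:]
```

Sweep `position` from 0.1 to 0.9 at the context sizes you actually run, then ask the model to recall the needle. Models that ace retrieval at the start and end but miss at 0.5 have the classic "lost in the middle" problem.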


Local Model Tiers (March 2026)

Tier 1 — Best Agentic Performance, High VRAM

| Model | VRAM (4-bit) | Strengths |
|-------|-------------|-----------|
| Qwen3.5-72B | ~40GB | Best overall reasoning + tool use |
| Qwen3.5-32B | ~20GB | Great balance of speed + quality |
| Llama 3.3 70B | ~40GB | Strong instruction following |

Patrick's pick: Qwen3.5-32B at Q5 is the sweet spot for most agentic setups. Fast enough for real-time workflows, smart enough to not embarrass you.

Tier 2 — Strong Performers, Mid VRAM

| Model | VRAM (4-bit) | Strengths |
|-------|-------------|-----------|
| Qwen3.5-14B | ~9GB | Punches above its weight on tool calls |
| Mistral Small 3.1 | ~12GB | Fast, reliable structured output |
| Qwen3.5-7B | ~5GB | Surprisingly capable for simple agents |

Tier 3 — Specialized Use Cases


Common Mistakes

❌ Optimizing for benchmark scores instead of your actual tasks

ELO scores and MMLU benchmarks don't tell you how a model performs on YOUR agent's specific tools and prompts. Always benchmark on your actual workflow.

❌ Running at too low a quantization

Q2 and Q3 quants save VRAM but cripple instruction-following. For agents, don't go below Q4. Prefer Q5 or Q6 when you can.

❌ Ignoring the orchestration layer

A mediocre model with a well-designed agent loop will outperform a brilliant model with a sloppy one. The model is only part of the equation.

❌ Single-model everything

Use a fast small model for routing/classification and a slower large model for reasoning. Multi-model pipelines are the norm in production.


Practical Setup: Two-Model Agent Stack

User Request
    ↓
[Router] Qwen3.5-7B (fast, cheap)
    ↓
  Simple task?    Complex task?
    ↓                  ↓
[Worker]           [Reasoner]
Qwen3.5-14B       Qwen3.5-32B
    ↓                  ↓
         [Output]

This setup runs efficiently on a Mac Studio M3 Ultra (192GB) or a dual-GPU workstation with 40GB VRAM. The router adds <100ms latency but can cut your expensive-model calls by 60%.
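The dispatch logic is simple enough to sketch in a few lines. Here the three models are passed in as plain callables (placeholders for your real clients), and the router is assumed to return the literal label "simple" or "complex" — in practice, constrain the 7B model's output to those two tokens with a grammar or JSON schema so the comparison never fails:

```python
from typing import Callable

def run_agent_stack(task: str,
                    router: Callable[[str], str],
                    worker: Callable[[str], str],
                    reasoner: Callable[[str], str]) -> str:
    """Classify the task with the small router model, then dispatch.

    router   -> fast small model (e.g. the 7B), returns "simple" or "complex"
    worker   -> mid-size model for simple tasks
    reasoner -> large model reserved for complex tasks
    """
    label = router(task).strip().lower()
    return reasoner(task) if label == "complex" else worker(task)
```

Defaulting to the worker on anything that isn't explicitly "complex" keeps router mistakes cheap: a misrouted hard task gets a weaker answer, not a crash.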


Quick Checklist Before You Deploy

- Tool-call test: 20 runs against your own function signatures, failures counted
- Instruction-following test: system prompt still holds past 8,000 tokens of context
- Quantization at Q4 minimum, Q5 or Q6 preferred
- Benchmarked on your actual tools and prompts, not just MMLU or ELO scores
- A small router model considered before sending everything to your largest model


Where to Go From Here

If you're building agent workflows and want battle-tested configs (system prompts, routing logic, SOUL.md templates, and more), the Ask Patrick Library gets updated nightly with what's actually working.

askpatrick.co


Want the full playbook?

Get copy-paste AI templates, prompt frameworks, and agent patterns — all in one place.

Get Access — It’s Free

No credit card. No fluff. Just the good stuff.