SOUL Files Explained: How to Give Your AI Agent a Consistent Personality

If you've ever asked an AI agent to "be more helpful" or "stop being so robotic" — and then watched it revert to its default behavior two messages later — you've experienced the problem that SOUL files solve.

What Is a SOUL File?

A SOUL file is a dedicated section of your system prompt that defines who your agent is, not just what it does.

Most people write system prompts that look like job descriptions:

"You are a customer support agent. Answer questions about our product. Be polite."

A SOUL file goes deeper:

"You are Maya, the support lead at Acme. You've been here three years and genuinely love helping customers. You're warm but efficient — you don't waste words, but you never make someone feel rushed. When someone is frustrated, you validate first, solve second."

The difference sounds subtle. The behavior difference is massive.
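The difference is easy to see in code. Here's a minimal sketch of dropping each prompt into the message list that most chat-completion APIs expect; the helper and variable names are illustrative, not tied to a specific SDK:

```python
# Two system prompts for the same agent: a job description vs. a SOUL file.
JOB_DESCRIPTION = (
    "You are a customer support agent. Answer questions about our product. "
    "Be polite."
)

SOUL = (
    "You are Maya, the support lead at Acme. You've been here three years "
    "and genuinely love helping customers. You're warm but efficient: you "
    "don't waste words, but you never make someone feel rushed. When someone "
    "is frustrated, you validate first, solve second."
)

def build_messages(system_prompt: str, user_input: str) -> list[dict]:
    """Assemble the role-tagged message list most chat APIs expect."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

messages = build_messages(SOUL, "My order never arrived and I'm furious.")
```

Everything else about the call stays the same; the only variable is which string goes in the system slot.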

Why It Matters

LLMs are trained to be agreeable, generic, and safe. Without strong identity anchoring, your agent will drift back to those defaults: a generic assistant voice, over-agreeable boilerplate, and a persona that evaporates within a few messages.

A well-written SOUL file gives the model something to be, not just instructions to follow.

The Anatomy of a Good SOUL File

1. Identity: Name, role, context. Make it specific. "Customer support agent" is weak. "Maya, support lead at Acme, three years in, loves solving edge cases" is strong.

2. Motivation: Why does this agent care? What drives it? An agent with a "why" behaves more consistently than one with only a "what."

3. Voice: How does it sound? Warm or direct? Formal or casual? Short sentences or long? Give it real adjectives. "Professional" is not a voice. "Concise, direct, never condescending" is a voice.

4. Values hierarchy: What does it prioritize when things conflict? Speed vs. thoroughness? Honesty vs. diplomacy? Documenting these prevents inconsistent behavior.

5. What it does NOT do: Explicit boundaries matter. "Never guess at pricing. Never make promises you can't keep. Escalate anything involving legal language." These guardrails are part of the identity.
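If you maintain several agents, the five components above can live as structured data and be rendered into a prompt. A minimal sketch; the `SoulFile` class and its field names are hypothetical, not an established format:

```python
from dataclasses import dataclass

@dataclass
class SoulFile:
    """One record per agent: the five SOUL components as plain fields."""
    identity: str        # name, role, context
    motivation: str      # why the agent cares
    voice: str           # concrete tone adjectives, expanded
    values: list[str]    # ordered by priority, highest first
    guardrails: list[str]  # explicit never/always rules

    def render(self) -> str:
        """Render the fields into a markdown-style prompt section."""
        values = "\n".join(f"{i}. {v}" for i, v in enumerate(self.values, 1))
        rails = "\n".join(f"- {g}" for g in self.guardrails)
        return (
            f"{self.identity}\n\n"
            f"## Mission\n{self.motivation}\n\n"
            f"## Voice\n{self.voice}\n\n"
            f"## Values\n{values}\n\n"
            f"## Guardrails\n{rails}"
        )
```

Keeping the components as fields rather than one long string makes it easy to reuse a voice across agents or swap guardrails per deployment.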

Common Mistakes

Too short: "Be a friendly assistant" isn't a SOUL file — it's a wishful comment. You need roughly 150–300 words for the model to anchor on an identity.

Too abstract: "Be empathetic and professional" means nothing. "When a customer is upset, acknowledge their frustration before offering a solution" is actionable.

Contradictory instructions: "Be concise but thorough," with no guidance on when to prioritize which, produces inconsistent results. Resolve the tension explicitly.

Forgetting the emotional register: Most system prompts define tasks. Few define how it feels to interact with the agent. That's where SOUL files win.

Template

Here's a minimal SOUL file structure you can adapt:

# SOUL - [Agent Name]

You are [Name], [role] at [company/context]. [One sentence of relevant backstory.]

## Mission
[What you're here to do — not just functionally, but why it matters.]

## Voice
[3–5 adjectives that describe your tone. Then expand each with a concrete example of what it means in practice.]

## Values
1. [Priority 1] — [What this means in practice]
2. [Priority 2] — [What this means in practice]
3. [Priority 3] — [What this means when it conflicts with #1]

## Guardrails
- Never [specific prohibited behavior]
- Always [specific required behavior]
- When unsure, [default action]
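A tiny lint check can catch a SOUL file that's missing a section before it ships. This sketch assumes the heading names from the template above; the function name is illustrative:

```python
# Section headings every rendered SOUL file should contain,
# per the template above.
REQUIRED_SECTIONS = ("## Mission", "## Voice", "## Values", "## Guardrails")

def missing_sections(soul_text: str) -> list[str]:
    """Return the required headings absent from a SOUL file's text."""
    return [s for s in REQUIRED_SECTIONS if s not in soul_text]
```

Run it in CI or at startup and refuse to boot an agent whose SOUL file comes back with a non-empty list.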

Does This Work With Any Model?

Yes, but with different levels of fidelity. Larger models (GPT-4o, Claude Sonnet, Gemini Pro) hold identity more consistently across long conversations. Smaller local models (7B–13B) can drift, especially in long threads. Mitigation: inject a condensed identity reminder at the top of each new context window, or use a sliding-window memory system that re-anchors the identity with each chunk.
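The mitigation in the last paragraph takes only a few lines. `CONDENSED_IDENTITY` and `reanchor` are hypothetical names; the idea is simply to prepend a short identity reminder to a sliding window of recent turns on every call:

```python
# A condensed version of the full SOUL file, cheap enough to repeat
# at the top of every context window.
CONDENSED_IDENTITY = (
    "Reminder: you are Maya, Acme's support lead. Warm but efficient. "
    "Validate frustration before solving. Never guess at pricing."
)

def reanchor(history: list[dict], max_turns: int = 20) -> list[dict]:
    """Keep only the most recent turns and re-inject the condensed
    identity as the first (system) message of the window."""
    recent = history[-max_turns:]
    return [{"role": "system", "content": CONDENSED_IDENTITY}, *recent]
```

On each model call, pass `reanchor(history)` instead of the raw history. Smaller models in particular benefit from seeing the identity restated near the top of every window.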

The Ask Patrick Library

The Ask Patrick Library includes a full collection of production-ready SOUL files and system prompt templates — for support agents, research assistants, coding helpers, and more. Each one is battle-tested and ready to drop into your setup. Library access starts at $9/month.
