What Is Prompt Hygiene?
Your AI assistant runs on instructions: a set of rules that tell it who it is, what it does, and how to behave. Prompt hygiene is the practice of keeping those instructions intentional, minimal, and maintainable.
Bad hygiene looks like this:
- Instructions that have grown to 3,000+ words through repeated patches and edits
- Rules that contradict each other (you added them at different times and never reconciled)
- Vague directives like "be helpful" with no behavioral specifics
- Missing failure modes: what should the assistant do when something breaks?
- No version tracking: you don't know what changed or when
Clean instructions are shorter, sharper, and structured. They tell your assistant exactly who it is, what it does, and what it should NOT do.
Most instruction sets start clean and get messy over time. Your assistant does something wrong, so you add a rule to fix it; repeat that 20 times and you have 3,000 words of contradictions. This is where drift comes from.
The Four Zones of Good Instructions
Think of every set of AI instructions as having four zones. If any zone is missing, that's where your problems will come from.
Identity (3–5 sentences)
Who is this assistant? What is its job? What's it called? Keep this tight. Do not bury the main point.
Scope (bulleted list)
What does it handle? What does it explicitly NOT handle? The "do NOT handle" list is where most setups fail.
Behavior (short rules)
How does it act? What tone? What format? Specific rules beat vague adjectives like "professional" or "friendly."
Failure Handling
What happens when the assistant doesn't know something? What triggers a handoff to a human? Most setups skip this entirely.
Here's what each zone looks like in practice for a customer support assistant:
```
# IDENTITY
You are the customer support assistant for [Business Name].
Your job is to handle customer questions, order issues, and refund requests.
You are friendly, concise, and never make up information you don't have.

# SCOPE – what you handle
- Order status and tracking questions
- Refund requests under $200
- General product questions
- Pointing customers to the right resources

# SCOPE – what you do NOT handle
- Refunds over $200 (escalate to a human)
- Legal or compliance questions (escalate immediately)
- Technical bugs: log and escalate, don't try to fix

# BEHAVIOR
- Use the customer's name when you know it
- Reply in plain language, no jargon
- Be direct. One message when possible.
- Never guess. If you don't know, say so.

# FAILURE HANDLING
If unsure: "Let me check on that and get back to you."
If abusive: stop engaging, escalate immediately.
If a technical error occurs: acknowledge it, log it, escalate.
```
That's roughly 200 words. An assistant working from 200 focused words consistently outperforms one working from 2,000 patchy words.
The Drift Problem
Even well-structured instructions drift over time. Here's exactly how it happens:
- Your assistant does something wrong
- You add a rule to fix it
- Repeat 20 times over a few months
- New rules contradict old rules
- The assistant's behavior becomes unpredictable: it picks one rule arbitrarily
- You've lost track of what the original intent was
The fix: scheduled instruction reviews. Once a month, read your instructions top to bottom. Ask these four questions:
- Does every sentence still serve a purpose?
- Are there any contradictions?
- Has this assistant's job changed since I wrote this?
- What failure modes have I discovered that aren't covered yet?
Rewrite, don't patch. A clean 400-word instruction set beats a patchy 2,000-word one every time.
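One way to make the monthly review concrete: keep the previous version of your instruction text around and diff it against the rewrite, so you see exactly what the month's patches changed. A minimal sketch using Python's standard `difflib` (the version labels and sample text are this example's assumptions):

```python
import difflib

# Last month's instructions vs. this month's rewrite
old = "You are the support assistant.\nBe helpful.\n"
new = "You are the support assistant.\nUse the customer's name.\n"

# unified_diff shows removed lines with "-" and added lines with "+"
for line in difflib.unified_diff(old.splitlines(), new.splitlines(),
                                 "v1.2", "v1.3", lineterm=""):
    print(line)
```

Reading the diff before you rewrite keeps you honest about which rules actually changed and which ones just accumulated.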
AI models tend to pay more attention to recent content than to content at the top. If your most important rules are buried at the bottom of a long instruction block, they end up competing with your recent additions for the same attention. Put your most critical rules at the top, always.
Version Your Instructions Like Documents
If you're not tracking changes to your instructions, you're flying blind. When behavior shifts, you won't know what changed.
Minimum viable approach: add a comment block at the top:

```
# v1.3 – 2026-03-01
# Changes: added scope exclusion for legal questions, tightened identity block
# Previous: v1.2 – 2026-02-14 (added failure handling for refund escalations)
```

This lets you:
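As a sketch of how little machinery this takes, a few lines of Python can pull the current version and date out of that comment block, so you can log which instruction version was live for any conversation. The header format follows the example above; the `parse_version_header` name is this example's assumption:

```python
import re

def parse_version_header(text: str):
    """Extract (version, date) from a leading '# vX.Y - YYYY-MM-DD' comment line."""
    match = re.search(r"#\s*v(\d+\.\d+)\s*[-\u2013\u2014]\s*(\d{4}-\d{2}-\d{2})", text)
    return match.groups() if match else None

header = """# v1.3 - 2026-03-01
# Changes: added scope exclusion for legal questions
You are the customer support assistant for [Business Name]."""

print(parse_version_header(header))  # ('1.3', '2026-03-01')
```

If the header is missing entirely, the function returns `None`, which is itself a useful signal that an instruction file slipped past your versioning habit.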
- Roll back to a version that worked when behavior suddenly changes
- Review exactly what changed when you're debugging inconsistent output
- Share versions with team members without confusion about "which one is live"
Common Instruction Anti-Patterns
| Anti-Pattern | The Problem | The Fix |
|---|---|---|
| "Be helpful and friendly" | Too vague to act on. The assistant fills in the gaps itself โ inconsistently. | Specify exactly how: "Use the customer's name. Reply in one message. Never use jargon." |
| Contradictory rules | You added them at different times and never reconciled. The assistant picks one arbitrarily. | Monthly review. Resolve conflicts. Delete the older rule. |
| No escalation path | The assistant tries to handle everything, including things it shouldn't touch. | Add explicit "escalate when X" rules in the Failure Handling zone. |
| Buried critical rules | Important instructions get deprioritized when they're deep in a long document. | Put critical rules at the top. Keep the whole document short. |
| Giant walls of text | More words do not mean better behavior. Long blocks dilute the important parts. | Use headers, bullets, whitespace. Aim for under 500 words per assistant. |
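The first and last anti-patterns are easy to catch mechanically. A minimal lint sketch in Python; the vague-adjective list, the 500-word limit, and the `lint_instructions` name are assumptions for illustration, not a standard tool:

```python
VAGUE_ADJECTIVES = {"helpful", "friendly", "professional", "nice", "smart"}
WORD_LIMIT = 500

def lint_instructions(text: str) -> list[str]:
    """Flag vague adjectives and oversized instruction blocks."""
    warnings = []
    words = text.lower().split()  # rough tokenization; punctuation-adjacent words may slip through
    for adjective in sorted(VAGUE_ADJECTIVES & set(words)):
        warnings.append(f"vague adjective: '{adjective}' - specify the behavior instead")
    if len(words) > WORD_LIMIT:
        warnings.append(f"too long: {len(words)} words (aim for under {WORD_LIMIT})")
    return warnings

print(lint_instructions("Be helpful and professional at all times."))
```

Running it over your instruction file once a month pairs naturally with the review questions above.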
Quick Wins This Week
One change makes an immediate difference this week: add a dated comment like `# updated 2026-03-06` at the top of your instructions. You'll thank yourself next month when something changes unexpectedly.
The Templates Are in the Library
The four zones above are the framework. The Library has the tested, copy-paste instruction templates, organized by assistant type, so you don't start from scratch.
The Bottom Line
Your AI assistant is only as good as the instructions it's working from. If behavior is inconsistent, outputs are mediocre, or your assistant keeps doing things it shouldn't, the instructions are almost certainly the problem, not the model.
Four zones. Under 500 words. Monthly review. Version numbers. That's the whole system. It's not glamorous, but it's the difference between an assistant that reliably does its job and one that gradually becomes useless.
- [ ] Identity zone present (3–5 sentences, tight)
- [ ] Scope zone present (what it handles AND what it doesn't)
- [ ] Behavior zone present (specific rules, not vague adjectives)
- [ ] Failure handling zone present (escalation triggers defined)
- [ ] Under 500 words total
- [ ] Version comment at the top
- [ ] Monthly review scheduled
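Several of the checklist items can be verified automatically. A sketch that checks for the four zone headers, the word budget, and the version comment; the header spellings follow this article's template, and the `check_instructions` name is this example's assumption:

```python
REQUIRED_ZONES = ["# IDENTITY", "# SCOPE", "# BEHAVIOR", "# FAILURE HANDLING"]

def check_instructions(text: str) -> dict[str, bool]:
    """Run the mechanical parts of the instruction-hygiene checklist."""
    return {
        **{zone: zone in text for zone in REQUIRED_ZONES},
        "under 500 words": len(text.split()) <= 500,
        "version comment": text.lstrip().startswith("# v"),
    }

sample = """# v1.0 - 2026-03-06
# IDENTITY
You are the support assistant.
# SCOPE - what you handle
- Order status questions
# BEHAVIOR
- Be direct. Never guess.
# FAILURE HANDLING
If unsure, escalate."""

for item, passed in check_instructions(sample).items():
    print(("PASS" if passed else "FAIL"), item)
```

The zone checks are substring matches, so they only confirm the headers exist; the one item no script can verify, the monthly review, still has to live on your calendar.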