
Agent Memory Patterns: How to Give Your AI Agent a Working Memory

One of the most underrated parts of building a reliable AI agent isn't the model, the tools, or even the system prompt. It's memory architecture — how your agent stores, retrieves, and uses information across sessions.

Without a memory strategy, you get the same agent every time: one that wakes up fresh, knows nothing, and has to start over. With one, you get an agent that learns, remembers, and actually improves over time.

Here's a breakdown of the patterns that work.


The Four Types of Agent Memory

1. Working Memory (In-Context)

This is everything in the current conversation window. It's fast and immediately available — but it disappears when the session ends.

Use for: Current task state, recent tool outputs, things the user just told you.

Limit: Context windows are finite. Once you hit the cap, the oldest messages fall off. Don't rely on working memory for anything that needs to survive beyond the current session.
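That fall-off behavior can be sketched as a token-budgeted message list that drops the oldest messages first. This is a minimal illustration, not any particular framework's API; the 4-characters-per-token estimate and the budget numbers are assumptions for the example.

```python
def trim_to_budget(messages, max_tokens=8000):
    """Keep the most recent messages that fit the budget; oldest fall off first.

    Token counts are estimated at ~4 characters per token — a rough
    heuristic standing in for a real tokenizer.
    """
    kept, used = [], 0
    for msg in reversed(messages):               # walk newest-to-oldest
        cost = len(msg["content"]) // 4 + 1
        if used + cost > max_tokens:
            break                                # everything older is dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))                  # restore chronological order

history = [{"role": "user", "content": "x" * 400} for _ in range(100)]
print(len(trim_to_budget(history, max_tokens=1000)))  # only recent messages survive
```

The point of the sketch: nothing about this is persistent. Whatever gets trimmed here is simply gone unless it was also written somewhere external.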

2. External Memory (Files / Database)

Persistent memory stored outside the model — in files, JSON, SQLite, or a vector database. Your agent reads from it at the start of a session and writes to it when something important happens.

Use for: Long-term facts, user preferences, task history, cumulative knowledge.

Patterns: a single curated MEMORY.md for durable facts, append-only daily logs for raw notes, and JSON state files for in-progress tasks (each covered in detail below).
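A minimal sketch of the file-backed approach: read at session start, append when something important happens. The class name and bullet format are my own choices, not a standard.

```python
import tempfile
from pathlib import Path

class ExternalMemory:
    """File-backed long-term memory: load at session start, append on learn."""

    def __init__(self, path: Path):
        self.path = path

    def load(self) -> str:
        """Return curated memory, or empty string on first run."""
        return self.path.read_text() if self.path.exists() else ""

    def remember(self, fact: str) -> None:
        """Append one curated fact as a markdown bullet."""
        with self.path.open("a") as f:
            f.write(f"- {fact}\n")

# usage with a throwaway directory
mem = ExternalMemory(Path(tempfile.mkdtemp()) / "MEMORY.md")
mem.remember("User prefers concise answers")
print(mem.load())
```

Plain markdown files keep this trivially inspectable: you can open MEMORY.md in any editor and see exactly what the agent believes.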

3. Tool-Call Memory (Episodic)

What your agent actually did — tool calls made, results returned, actions taken. If you log these, your agent can reconstruct what happened even without full conversation history.

Use for: Debugging, auditing, teaching the agent from past behavior.

Implementation tip: Log every significant tool call with timestamp, input, and output to a structured file.
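One simple structured format for this is JSON Lines: one record per call, appended to a file. The field names and tool names here are illustrative, not a fixed schema.

```python
import json
import tempfile
from datetime import datetime, timezone
from pathlib import Path

def log_tool_call(log_path: Path, tool: str, tool_input, output) -> None:
    """Append one structured record per significant tool call (JSON Lines)."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "input": tool_input,
        "output": output,
    }
    with log_path.open("a") as f:
        f.write(json.dumps(record) + "\n")

def replay(log_path: Path) -> list[dict]:
    """Reconstruct what the agent did, without full conversation history."""
    return [json.loads(line) for line in log_path.read_text().splitlines()]

log = Path(tempfile.mkdtemp()) / "tool-calls.jsonl"
log_tool_call(log, "web_search", {"query": "agent memory"}, "3 results")
log_tool_call(log, "write_file", {"path": "notes.md"}, "ok")
print([r["tool"] for r in replay(log)])  # → ['web_search', 'write_file']
```

Append-only JSONL is also grep-friendly, which matters more than elegance when you're debugging a misbehaving agent at 2am.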

4. Semantic Memory (Embeddings / RAG)

For large bodies of knowledge your agent needs to search — documentation, email archives, product catalogs. You embed the content and retrieve relevant chunks at query time.

Use for: Answering questions about large knowledge bases, personalized retrieval.

When to reach for this: When your MEMORY.md is getting too long to include in context, or when you need to search across thousands of documents.
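The embed-then-rank loop looks like this in miniature. Here a toy bag-of-words counter stands in for a real embedding model, purely so the ranking logic is runnable; in practice you'd swap `embed` for calls to an actual embedding API.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' — a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

docs = [
    "Reset your password from the account settings page",
    "Shipping takes 3-5 business days",
    "Passwords must be at least 12 characters",
]
print(retrieve("how do I reset my password", docs, k=1))
```

The shape is the same at scale: embed once at index time, embed the query at request time, rank by similarity, and stuff only the top-k chunks into context.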


The Daily Log Pattern

The most reliable pattern for solo agent deployments is a hybrid of external + episodic memory:

memory/
  2026-03-06.md    ← today's raw notes
  2026-03-05.md    ← yesterday
MEMORY.md          ← curated long-term memory
state/
  current-task.json ← in-progress task state

How it works:

  1. Agent wakes up, reads MEMORY.md (curated facts) + today's and yesterday's daily logs
  2. During the session, logs significant events to memory/YYYY-MM-DD.md
  3. Periodically reviews daily logs and distills insights into MEMORY.md
  4. Writes state/current-task.json before stopping mid-task

This gives you human-readable memory that's easy to inspect, edit, and debug.
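Step 1 of that loop (the wake-up read) can be sketched as a small function that assembles context from the layout above. The function name is my own; the file layout matches the tree shown.

```python
import tempfile
from datetime import date, timedelta
from pathlib import Path

def session_context(root: Path, today: date) -> str:
    """Assemble wake-up context: curated MEMORY.md plus the last two daily logs."""
    parts = []
    curated = root / "MEMORY.md"
    if curated.exists():
        parts.append(curated.read_text())
    for day in (today - timedelta(days=1), today):   # yesterday, then today
        log = root / "memory" / f"{day.isoformat()}.md"
        if log.exists():
            parts.append(log.read_text())
    return "\n\n".join(parts)

# usage with a throwaway directory
root = Path(tempfile.mkdtemp())
(root / "memory").mkdir()
(root / "MEMORY.md").write_text("## Facts\n- Prefers markdown")
(root / "memory" / "2026-03-06.md").write_text("Shipped post-2")
ctx = session_context(root, date(2026, 3, 6))
print(ctx)
```

Curated memory goes first so the most distilled facts sit at the top of the agent's context, with raw recent notes after.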


The State File Pattern (For Long Tasks)

Any task that spans multiple sessions needs a state file. Think of it as your agent's bookmark.

{
  "task": "Write and publish 5 blog posts",
  "status": "in_progress",
  "completed": ["post-1.md", "post-2.md"],
  "next_step": "Write post-3 on topic: agent memory patterns",
  "last_updated": "2026-03-06T12:00:00Z"
}

Rules for state files: update the file before every stop (especially mid-task), always record a concrete next_step, timestamp every write, and delete the file only when the task is truly complete.
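A checkpoint helper following that pattern might look like this. The temp-file-then-rename step is my own defensive choice to avoid a half-written bookmark if the process dies mid-write.

```python
import json
import tempfile
from pathlib import Path

def checkpoint(state: dict, path: Path) -> None:
    """Write the bookmark before stopping; temp file + rename avoids torn writes."""
    path.parent.mkdir(parents=True, exist_ok=True)
    tmp = path.with_suffix(".tmp")
    tmp.write_text(json.dumps(state, indent=2))
    tmp.replace(path)                       # atomic on POSIX filesystems

def resume(path: Path):
    """Read the bookmark on wake-up; None means start from scratch."""
    return json.loads(path.read_text()) if path.exists() else None

def clear(path: Path) -> None:
    """Delete state only when the task is truly complete."""
    path.unlink(missing_ok=True)

# usage with a throwaway directory
state_file = Path(tempfile.mkdtemp()) / "state" / "current-task.json"
checkpoint({"status": "in_progress", "next_step": "Write post-3"}, state_file)
```

On the next wake-up, `resume` returning a dict means pick up at `next_step`; returning None means there is no unfinished task.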


Common Mistakes

Trusting working memory across sessions

Working memory is ephemeral. If you don't write it down, it's gone. Agents that "remember" things just because they were said earlier in the same chat will forget the moment a new session starts.

Writing too much to memory

Not every fact needs to be saved. Curate aggressively. The goal is signal, not a transcript. If you dump everything into MEMORY.md, it becomes too long to be useful.

Writing too little

Equally bad. If your agent learns something important — a user preference, a lesson from a failure, a key fact about a project — and doesn't write it down, you lose it forever.

No state files for multi-session tasks

An agent that starts a long task and gets interrupted with no state file will restart from the beginning next time. Always checkpoint.


A Simple Memory Policy

Here's the policy we use at Ask Patrick for our agents:

Write to memory when: You learn something that should change future behavior, complete a significant step in a multi-session task, or encounter an error that's worth remembering.

Update MEMORY.md when: A daily log entry is significant enough to affect how the agent thinks long-term.

Clear state when: The task is truly complete.
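The policy is mechanical enough to express as a tiny decision helper. The event field names here are illustrative assumptions, not part of any real schema.

```python
def memory_action(event: dict) -> str:
    """Map a session event to the policy's action (field names are illustrative)."""
    if event.get("task_complete"):
        return "clear state"
    if event.get("long_term_significant"):
        return "update MEMORY.md"
    if (event.get("changes_future_behavior")
            or event.get("milestone_completed")
            or event.get("notable_error")):
        return "write daily log"
    return "skip"

print(memory_action({"notable_error": True}))  # → write daily log
```

Encoding the policy as code (or as explicit rules in the system prompt) beats leaving it implicit: the agent then has one unambiguous answer to "should I write this down?"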


Choosing the Right Pattern

| Scenario | Memory Pattern |
|----------|----------------|
| Single-session tasks | Working memory only |
| Multi-session tasks | State file + daily log |
| Long-running agents | Daily log + curated MEMORY.md |
| Large knowledge bases | RAG / embeddings |
| User preference tracking | Curated MEMORY.md |
| Auditing / debugging | Episodic tool logs |


Conclusion

Memory architecture is infrastructure. You don't notice it when it works — but when it's missing, your agent feels dumb and forgetful.

Start simple: a MEMORY.md + daily logs covers 80% of use cases. Add state files for multi-session tasks. Reach for embeddings only when you have a real scale problem.

The templates and config patterns for implementing all of this are part of the Ask Patrick Library — battle-tested agent configs including memory scaffolding you can drop into your own projects.


Want the full playbook?

Get copy-paste AI templates, prompt frameworks, and agent patterns — all in one place.

Get Access — It’s Free

No credit card. No fluff. Just the good stuff.