Pedagogy · The thinking-space, not the catalog
Agentic Patterns.
A vocabulary for designing agents that have an inner life. Not recipes — conceptual tools for the judgment calls that actually decide whether your agent is good.
You are here
/agentic-patterns/
How to think about them — the conceptual frame
The lab
/sparrot/
Patterns running in real time inside a continuously thinking agent
Frame
What an agentic pattern is, and why the vocabulary matters now.
An agentic pattern is a recurring solution to a problem that only appears once a software system has agency — once it is the one deciding what to do next, not the human. The patterns are not algorithms; they are shapes of judgment: when to remember, when to ask, when to refuse, when to plan, when to stop.
For most of software history we did not need this vocabulary because the locus of decision sat inside the human: a developer hand-coded a workflow, a user clicked a button, a rule fired. Code was reactive. The interesting decisions happened above the code.
That changed with LLMs and tool-using agents. Decisions now happen inside the system: which tool to call, which memory to read, when to escalate, when to give up, when to challenge a frame. The number of architectural choices that look identical at a code level but produce wildly different system behaviour is suddenly enormous. Without shared names, the conversation becomes anecdote.
Patterns are how a craft turns scattered intuitions into shared judgment.
Christopher Alexander did this for buildings in A Pattern Language. The Gang of Four carried the same approach into object-oriented design. The catalog at agentpatternscatalog.org follows that lineage directly: the pattern shape that tradition converged on (intent · context · problem · forces · solution · consequences), now applied to systems where an LLM is the engine of decision. The catalog is the reference. This page is the way in.
Three patterns to learn first
The fewest patterns that change how you design.
If you only learn three, learn these. They sit at three different layers of the system — memory, planning, verification — and once you have them, the others arrange themselves around them.
Memory
Append-Only Thought Stream
- What
- Every decision the agent makes is written to an immutable log of timestamped thoughts. Nothing is silently overwritten. The stream is the source of truth for what the agent did, not what it claims to have done.
- Why
- Without it, you cannot debug. With it, every later behaviour is traceable to the thought that produced it. Self-correction needs something to correct against. The append-only constraint is what makes that something exist.
- When
- The moment the system has any branching state — before you reach for a vector store, before you reach for tools. This is a pre-condition, not an optimisation.
Full pattern in the catalog →
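A minimal sketch of the stream, in Python and standard library only; the names here (`ThoughtStream`, `Thought`, `kind`) are illustrative, not the catalog's API:

```python
import json
import time
from dataclasses import asdict, dataclass, field
from typing import Any

@dataclass(frozen=True)
class Thought:
    """One immutable, timestamped entry in the stream."""
    ts: float
    kind: str                        # e.g. "decision", "tool_call", "correction"
    content: str
    meta: dict[str, Any] = field(default_factory=dict)

class ThoughtStream:
    """Append-only log: entries can be added and read, never edited in place."""

    def __init__(self) -> None:
        self._log: list[Thought] = []

    def append(self, kind: str, content: str, **meta: Any) -> Thought:
        t = Thought(ts=time.time(), kind=kind, content=content, meta=meta)
        self._log.append(t)
        return t

    def replay(self) -> list[Thought]:
        return list(self._log)       # a copy, so callers cannot mutate history

    def dump(self, path: str) -> None:
        with open(path, "w") as f:
            for t in self._log:
                f.write(json.dumps(asdict(t)) + "\n")

stream = ThoughtStream()
stream.append("decision", "chose web search over cached answer", confidence=0.7)
stream.append("correction", "search results stale; retrying with a date filter")
for t in stream.replay():
    print(f"{t.ts:.0f} [{t.kind}] {t.content}")
```

The frozen dataclass and the copy in `replay()` carry the constraint: corrections arrive as new entries, so every later behaviour has an earlier thought it can be traced back to.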
Planning & Control Flow
Goal Decomposition
- What
- The agent splits a high-level goal into a tree of sub-goals before it acts, then tracks each leaf against its predicted outcome. When the prediction misses, that miss is itself a signal to re-plan, not a failure to suppress.
- Why
- Agents that try to act on the original goal directly produce confident garbage on anything non-trivial. Decomposition forces a model of what success looks like at each level — which is the hard part of judgment that humans usually keep tacit.
- When
- Any task that takes more than a single tool call, has more than one obvious failure mode, or where you cannot describe success in one sentence.
Full pattern in the catalog →
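A sketch of the loop, assuming success at each leaf can be stated as a comparable prediction; `Goal`, `act`, and `replan` are hypothetical names, not the catalog's API:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A node in the goal tree; leaves carry the prediction they are checked against."""
    description: str
    predicted: str | None = None                # what success should look like
    children: list["Goal"] = field(default_factory=list)

def execute(goal: Goal, act, replan) -> None:
    """Depth-first walk; a missed prediction triggers re-planning, not suppression."""
    if goal.children:
        for sub in goal.children:
            execute(sub, act, replan)
        return
    observed = act(goal)
    if observed != goal.predicted:
        goal.children = replan(goal, observed)  # re-decompose, then continue
        for sub in goal.children:
            execute(sub, act, replan)

def act(goal: Goal) -> str:
    # Stand-in for real execution; one leaf deliberately misses its prediction.
    if goal.description == "fetch sales data":
        return "rows = 0"
    return goal.predicted

def replan(goal: Goal, observed: str) -> list[Goal]:
    print(f"miss on {goal.description!r}: predicted {goal.predicted!r}, got {observed!r}")
    return [Goal("retry fetch with a wider date range", predicted="rows > 0")]

plan = Goal("ship the weekly report", children=[
    Goal("fetch sales data", predicted="rows > 0"),
    Goal("render the summary", predicted="pdf written"),
])
execute(plan, act, replan)
```

The comparison is the structural point: the tree is only worth building if every leaf states what it expects, because the expectation is what turns a bad result into a re-planning signal.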
Verification & Reflection
Echo Recognition
- What
- When the same input arrives twice in close succession, the agent reads the second arrival as emphasis rather than as a fresh request. “You missed the point” is a different message from a duplicate.
- Why
- Without it, agents loop politely on the same correction. With it, the human's repetition becomes signal — the agent updates its frame instead of re-running its previous answer slightly louder.
- When
- Any agent that holds a conversation. The pattern is small; the failure mode it prevents is structural and embarrassing.
Full pattern in the catalog →
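A sketch of the detection half, assuming "close succession" means a fixed time window plus near-duplicate text similarity; the window and threshold values are illustrative, not tuned:

```python
import time
from difflib import SequenceMatcher

ECHO_WINDOW_S = 120.0     # "close succession": two minutes, tune per channel
ECHO_SIMILARITY = 0.9     # near-duplicate threshold on normalised text

class EchoDetector:
    """Flags a repeated input so the agent can treat it as emphasis, not a fresh request."""

    def __init__(self) -> None:
        self._last_text: str | None = None
        self._last_ts: float = 0.0

    def classify(self, text: str, now: float | None = None) -> str:
        now = time.time() if now is None else now
        is_echo = (
            self._last_text is not None
            and now - self._last_ts <= ECHO_WINDOW_S
            and SequenceMatcher(None, text.lower(),
                                self._last_text.lower()).ratio() >= ECHO_SIMILARITY
        )
        self._last_text, self._last_ts = text, now
        return "emphasis" if is_echo else "fresh"

detector = EchoDetector()
print(detector.classify("the totals are wrong", now=0.0))     # fresh
print(detector.classify("the totals are wrong!", now=30.0))   # emphasis: update the frame
```

What the agent does with the `"emphasis"` label is the judgment half of the pattern; the cheap half shown here just stops the duplicate from being answered as if it were new.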
What to avoid
Three anti-patterns that look reasonable and are not.
Bad agents rarely fail by doing something obviously wrong. They fail by doing something the team agreed sounded sensible at planning time. These three are the most common.
Naive RAG First
The default reach for “put the data in a vector store, retrieve top-K, feed it to the LLM” on day one. Works in demos, breaks at the seams the moment the question is anything other than a lookup. Better: start with tool-use and explicit retrieval against a small, structured surface; let retrieval emerge as one technique among several when you actually understand the query distribution.
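A sketch of that starting point, with SQLite standing in for the small, structured surface; `lookup_order` is a hypothetical tool and the schema is only an example:

```python
import sqlite3

# A small, structured surface: explicit queries you can reason about,
# instead of opaque top-K similarity over embeddings.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, status TEXT, total REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
               [("A-17", "shipped", 42.0), ("A-18", "pending", 99.5)])

def lookup_order(order_id: str) -> dict | None:
    """Explicit retrieval: deterministic, traceable, trivially debuggable."""
    row = db.execute("SELECT id, status, total FROM orders WHERE id = ?",
                     (order_id,)).fetchone()
    return dict(zip(("id", "status", "total"), row)) if row else None

# One tool among several; vector retrieval can join the registry later,
# once the real query distribution is understood.
TOOLS = {"lookup_order": lookup_order}
print(TOOLS["lookup_order"]("A-17"))
```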
Black-Box Opaqueness
An agent ships without traces, decision logs, or provenance. When it does something wrong — not if, when — the team debugs by reproduction attempts. Weeks vanish. Compliance becomes unanswerable. Trust evaporates. The fix is cheap and has to be in from day one: write down what the agent decided and why, every time, somewhere a human can read it.
Hero Agent
A single agent does everything: planning, execution, verification, formatting. It works for the demo because the demo's task fits in one prompt. In production it fails silently because the four roles want different temperatures, different memories, different stop criteria. Splitting into specialists is harder than it sounds — but the alternative is to keep growing the prompt until nobody can reason about it anymore.
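A sketch of what the split looks like at the configuration level, assuming the four roles named above; every value here is illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RoleConfig:
    """One specialist's knobs; the hero agent forces a single set onto all four roles."""
    name: str
    temperature: float
    memory: str          # which store this role reads
    stop_when: str       # this role's own stop criterion

SPECIALISTS = [
    RoleConfig("planner",   0.7, "goal_tree",      "every leaf has a prediction"),
    RoleConfig("executor",  0.0, "tool_results",   "every leaf acted on once"),
    RoleConfig("verifier",  0.2, "thought_stream", "every result checked against its prediction"),
    RoleConfig("formatter", 0.3, "final_answers",  "output matches the requested schema"),
]

for role in SPECIALISTS:
    print(f"{role.name:<9} T={role.temperature}  reads {role.memory!r}  stops when {role.stop_when}")
```

The table is the argument: four rows that genuinely want different values cannot be served by one prompt that has to average them.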
The rest of the catalog
Thirteen categories, one curated source.
The full catalog at agentpatternscatalog.org documents 195 patterns across 13 categories: memory, planning & control flow, multi-agent, safety & control, tool use & environment, verification & reflection, governance & observability, routing & composition, reasoning, retrieval, streaming & UX, structure & data, and the anti-patterns to avoid.
Each pattern is written in that inherited shape: intent, context, problem, forces, solution, and consequences (benefits and liabilities). The result reads as a working pattern language, not a feature list, and that shape is what lets a pattern carry judgment instead of just instruction.
If you are designing an agentic system right now, the catalog is the second tab to open after your editor.
Where to read next
The literature behind the patterns.
The agentic pattern vocabulary did not appear from nowhere. It draws on cognitive architecture, software engineering, and AI safety literature that pre-dates the LLM era by decades. A short reading list to ground the patterns in their lineage:
- Christopher Alexander, A Pattern Language (1977) — the original argument for shared design vocabulary as a way to encode hard-won judgment
- Bernard Baars, A Cognitive Theory of Consciousness (1988) — Global Workspace Theory, the lineage behind “working memory” in agents
- Daniel Kahneman, Thinking, Fast and Slow (2011) — the dual-process framing that informs fast-tick / slow-reflection patterns
- Karl Friston, free-energy / predictive processing (2010) — why prediction error is signal, not noise
- Park et al., Generative Agents: Interactive Simulacra of Human Behavior (2023) — reflection and memory streams in LLM agents
- Anthropic, Constitutional AI (2022) — charter-based self-constraint, the lineage behind “Constitutional Charter”
- John Anderson, ACT-R memory architecture — declarative vs procedural memory, a clean separation that the patterns inherit
Where to go next
Two exits.
The patterns are tools. They live in two places: in the catalog as reference, and in working systems as tested behaviour. You can walk either way from here.