There is a quiet failure mode spreading through every team that has been using ChatGPT or Claude for more than a few months. Matt Huggins at Every named it this week: context rot.
The symptoms are subtle. Your AI assistant starts hedging more. Tone shifts inconsistently across outputs. Responses that used to feel crisp now arrive padded with qualifications. You assume the model got worse, or that your prompts are stale. The actual problem is usually in the memory layer -- and it compounds silently every week you keep adding to it.
What Context Rot Actually Is
Every modern AI assistant with persistent memory -- ChatGPT's Memory feature, Claude's Projects, custom GPTs with instruction blocks -- faces the same structural challenge: it has to honor all stored context simultaneously.
In the first month, this feels powerful. You tell it your preferred writing tone, your industry, your audience. The model adapts. Outputs improve.
By month three, the memory layer looks like a decade-old junk drawer. You have preferences that contradict each other because your needs evolved. "Keep responses concise" sits next to a later note that says "include more supporting evidence." "Write for non-technical founders" coexists with a newer instruction to "include code examples." The model does not rank or timestamp these -- it tries to satisfy all of them at once.
The result is responses that feel slightly off: too hedged, inconsistently formatted, or calibrated to a version of your workflow that no longer exists. This is context rot.
Myth vs. Operator Reality
Myth: "The more context I give the AI, the better it understands me."
Reality: More context only helps if it is accurate and internally consistent. Stale or contradictory preferences actively compete with each other during inference. The model cannot know which instruction is current -- it treats a three-month-old preference and yesterday's instruction as equally authoritative.
Myth: "If my outputs feel off, I need a better prompt."
Reality: A well-crafted prompt cannot override contaminated system-level memory. The model allocates attention across all active context, including the junk you forgot was in there. You can write the perfect prompt and still get a degraded result because the memory layer is pulling in the opposite direction.
Myth: "AI memory is a set-and-forget productivity win."
Reality: AI memory is a garden, not a filing cabinet. It requires periodic weeding. Teams that ignore this accumulate debt that compounds -- each new instruction lands on top of older, conflicting ones, and the model has to split its attention across all of it.
The 30-Minute Context Audit
This is the fix. Run it now, then schedule it monthly.
Step 1: Export everything (10 minutes)
In ChatGPT, go to Settings > Personalization > Manage Memory and copy out every stored memory. In Claude Projects, pull your custom instructions. In any custom GPT, open the instruction block. Paste all of it into a plain text document.
You will likely be surprised at what is in there -- notes from workflows you abandoned, preferences you have since reversed, and instructions that directly contradict each other.
Step 2: Edit for consistency (15 minutes)
Read through the full export and apply three filters:
- Stale: Does this reflect how I actually work today? If not, delete it.
- Duplicate: Is there a newer version of this preference? Delete the older one.
- Contradictory: Do any two instructions conflict? Keep the current one, delete the rest.
What remains should be a tight, internally consistent set of preferences -- typically 5 to 10 sentences, not 50.
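The three filters above amount to a simple rule: one instruction per topic, newest wins. As a minimal sketch (the entry data, topic tags, and dates here are hypothetical -- tagging each exported memory with a topic is the human part of the audit; no real export format looks like this):

```python
from datetime import date

# Hypothetical export: each memory entry hand-tagged with a topic
# while you read through it during Step 2.
entries = [
    {"topic": "length",   "added": date(2024, 1, 5),  "text": "Keep responses concise."},
    {"topic": "length",   "added": date(2024, 3, 12), "text": "Include more supporting evidence."},
    {"topic": "audience", "added": date(2024, 2, 1),  "text": "Write for non-technical founders."},
]

def audit(entries):
    """Keep only the newest instruction per topic; report what was superseded."""
    newest, dropped = {}, []
    for e in sorted(entries, key=lambda e: e["added"]):
        if e["topic"] in newest:
            dropped.append(newest[e["topic"]])  # older, now-contradicted entry
        newest[e["topic"]] = e
    return list(newest.values()), dropped

kept, dropped = audit(entries)
# "Keep responses concise." is dropped: it shares a topic with the newer
# "Include more supporting evidence." instruction.
```

The point of the sketch is the shape of the decision, not automation: duplicates and contradictions are both just "two entries on one topic," and the fix is always to keep the current one.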
Step 3: Reset and reload (5 minutes)
Clear all stored memory in the tool. Do not selectively delete -- the old instructions are entangled. Start fresh with only your edited, cleaned set. Then run a sample task you do frequently and compare the output to what you were getting before the audit.
Most teams see a noticeable improvement on the first try.
Why This Is a Team Problem, Not Just a Personal One
If you are using shared Claude Projects or custom GPTs across a team, context rot scales with headcount. Every team member who adds an instruction or tweaks the system prompt is potentially introducing a contradiction. No one is reviewing the full context state.
The operational fix is to designate one person as the "context owner" for shared AI tools -- someone responsible for running the monthly audit and maintaining a versioned record of current instructions. Treat it like maintaining a configuration file: meaningful changes go through review, not casual edits.
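If the reviewed instructions live in a versioned file, drift detection is a diff. A minimal sketch using only the standard library (the filenames and the idea of pasting a "live export" out of the tool are assumptions, not a real integration):

```python
import difflib

# Hypothetical layout: "instructions.md (reviewed)" is the versioned source
# of truth; "live export" is what the context owner pasted out of the tool
# during this month's audit.
def drift_report(versioned: str, live: str) -> list[str]:
    """Return a unified diff; an empty list means no drift since last review."""
    return list(difflib.unified_diff(
        versioned.splitlines(), live.splitlines(),
        fromfile="instructions.md (reviewed)", tofile="live export",
        lineterm=""))

versioned = "Keep responses concise.\nWrite for non-technical founders."
live = ("Keep responses concise.\nInclude code examples.\n"
        "Write for non-technical founders.")

for line in drift_report(versioned, live):
    print(line)  # the "+Include code examples." line is the casual edit to review
```

Run on a schedule, a non-empty report is the review trigger: someone changed the shared context without going through the versioned file.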
For teams building on Anthropic's API or OpenAI's Assistants API, this discipline is even more important. Instruction creep in a production assistant silently degrades every user interaction without triggering any error state. There is no alert. There is no log entry. Output quality just drifts.
The Counter-Intuitive Principle
The teams getting the most value from AI assistants tend to have less in their memory layers, not more. They store precise, current, non-overlapping preferences -- and they revisit them regularly.
If your AI outputs have been feeling slightly off lately, do not reach for a better prompt. Open the memory panel first. There is a good chance the problem has been sitting there for months.
Related: Stop Chatting, Start Delegating | The 3 Endpoint Decisions That Change Agent Rollouts
