That makes sense, and I agree that for a single agent using skills well, Claude’s native context handling has gotten much better.
This wasn't mentioned in the first post, but the use case we’re focused on isn’t really “Claude forgetting,” but context living beyond a single agent or tool. Even if Claude remembers well within a session, that context is still owned by that agent instance.
The friction shows up when you switch tools or models (Claude → Codex / Cursor / etc.), run multiple agents in parallel, or want context created in one place to be reused elsewhere without re-establishing it.
In those cases, the problem isn’t forgetting so much as fragmentation. If someone is happy with one agent and one tool, there are plenty of memory solutions to choose from. The value of an external memory network you can plug into any model or agent shows up once context needs to move across tools and people.
I mostly agree with this: if the goal were “better persistent memory inside Claude Code,” that wouldn’t be very interesting.
For a single agent and a single tool, keeping project specs and decisions in markdown and explicitly pointing the model at them works well. We do that too.
What we’re focused on is a different boundary: memory that isn’t owned by a specific agent or tool.
Once you start switching between tools (Claude, Codex, Cursor, etc.), or running multiple agents in parallel, markdown stops being “the memory” and becomes a coordination mechanism you have to keep in sync manually. Context created in one place doesn’t naturally flow to another, and you end up re-establishing state rather than accumulating it.
That’s why we’re not thinking about this as “improving Claude Code.” We’re interested in the layer above that: a shared, external memory that can be plugged into any model or tool, that any agent can read from or write to, and that can be selectively shared with collaborators. Context created in Claude can be reused in Codex, Manus, Cursor, or a collaborator’s agents, and vice versa.
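To make the idea concrete, here is a minimal sketch of the kind of interface we have in mind. All names and shapes here are hypothetical and illustrative, not an actual API: a tool-agnostic store where any agent can write context and any permitted agent can read it back.

```python
from dataclasses import dataclass, field
import time

@dataclass
class MemoryEntry:
    key: str        # e.g. "auth/decision"
    value: str
    author: str     # which agent/tool wrote it ("claude", "codex", ...)
    readers: set    # who may read it; "*" means anyone
    ts: float = field(default_factory=time.time)

class SharedMemory:
    """Illustrative tool-agnostic store any agent can read from or write to."""
    def __init__(self):
        self._entries: dict[str, MemoryEntry] = {}

    def write(self, key, value, author, readers=("*",)):
        self._entries[key] = MemoryEntry(key, value, author, set(readers))

    def read(self, key, agent):
        entry = self._entries.get(key)
        if entry and ("*" in entry.readers or agent in entry.readers):
            return entry.value
        return None  # missing, or not shared with this agent

# Context written while working in one tool...
mem = SharedMemory()
mem.write("auth/decision", "use JWT, 15-min expiry", author="claude")
# ...is readable later from a different tool, without re-explaining it.
print(mem.read("auth/decision", agent="codex"))
```

The `readers` set is the hand-wavy part: in practice the interesting design questions are exactly there, i.e. how scoping and selective sharing work across people as well as agents.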
If you’re already happy using one agent in one tool with markdown, you probably don’t need this. The value shows up once agents are treated as interchangeable workers and context needs to move across tools and people without being re-explained each time.
You’re right that reading the same markdown file is trivial; that’s not the hard part.
Where it stopped being trivial for us was once multiple agents were working at the same time. For example, one agent is deciding on an architecture while another is already generating code. A constraint changes mid-way. With a flat file, both agents can read it, but you’re relying on humans as the coordination layer: deciding which docs are authoritative, when plans are superseded, which tickets are still valid, and how context should be scoped for a given agent.
This gets harder once context is shared across tools or collaborators’ agents. You start running into questions like who can read vs. update which parts of context, how to share only relevant decisions, how agents discover what matters without scanning a growing pile of files, and how updates propagate without state drifting apart.
You can build conventions around this with files, and for many workflows that works well. But once multiple agents are updating state asynchronously, the complexity shifts from storage to coordination. That boundary, sharing and coordinating evolving context across many agents and tools, is what we’re focused on and what an external memory network can address.
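One concrete piece of that coordination problem is supersession: when a constraint changes midway, agents need the current authoritative decision, not a pile of files of uncertain freshness. A toy sketch (hypothetical names, and deliberately simplistic about conflicts) of versioned entries with an explicit supersedes link:

```python
import itertools

_ids = itertools.count(1)

class DecisionLog:
    """Toy log where each new decision on a topic supersedes the previous one."""
    def __init__(self):
        self.entries = {}   # id -> {"topic", "text", "supersedes"}
        self.current = {}   # topic -> id of the authoritative entry

    def record(self, topic, text):
        eid = next(_ids)
        self.entries[eid] = {
            "topic": topic,
            "text": text,
            "supersedes": self.current.get(topic),  # link to the old decision
        }
        self.current[topic] = eid  # newest write is authoritative for the topic
        return eid

    def authoritative(self, topic):
        eid = self.current.get(topic)
        return self.entries[eid]["text"] if eid else None

log = DecisionLog()
log.record("db", "use Postgres")
log.record("db", "use Postgres, but managed (RDS)")  # constraint changed midway
print(log.authoritative("db"))  # agents read only the current decision
```

With flat files, that “which version is authoritative” judgment is exactly what falls back on humans; the supersedes chain also keeps history around for auditing why a decision changed.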
If you’ve found ways to push that boundary further with files alone, I’d genuinely be curious. This still feels like an open design space.
A lot of the discussion here is about memory inside a single tool, which makes sense.
I’m curious how people think about portability: e.g. letting Claude Code retrieve context that was created while using Codex, Manus, or Cursor, or sharing specific parts of that context with other people or agents.
At that point, log parsing and summaries become per-tool views of state rather than shared state. Do people think a shared external memory layer is overkill here, or a necessary step once you have multiple agents/tools in play?