
gdad

Karma: 11

Created: 2026-01-15

Recent Activity

  • Building in the same space (Maximem Vity, cross-LLM memory as a Chrome extension and OpenClaw plugin). The SMTP analogy really resonates with me. I wrote a comparison a few weeks ago of how ChatGPT, Claude, and OpenClaw actually implement memory under the hood [1], and the architectures are so different that interop feels almost accidental when it works.

    One thing I keep running into: the hard part isn't storage or retrieval. It's qualification: deciding what's worth remembering from a conversation vs. what's throwaway context. ChatGPT takes the "summarize the last 15 chats" approach, Claude does on-demand search, and both have real failure modes. We went with a semantic graph that tries to capture relationships between memories (your preference for serverless connects to your AWS project connects to your cost constraints) rather than flat key-value pairs. Still iterating on it, honestly.

    Re: Claude's memory-import launch yesterday, I think the timing validates the category, but the approach is fundamentally migration, not sync. You import once, and from that moment your contexts diverge again. Anyone using 3+ tools daily (which is basically everyone I talk to) is back to fragmented memory within a week.

    Curious about your retrieval approach: are you doing hybrid search or pure semantic?

    [1] https://www.maximem.ai/blog/ai-apps-memory
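    The memories-as-a-graph idea (typed links between memories instead of flat key-value pairs) could be sketched roughly like this. This is a hypothetical illustration, not Maximem Vity's actual implementation; the class and relation names are invented for the example:

    ```python
    # Toy semantic memory graph: memories are nodes, typed edges capture
    # relationships, and retrieval can follow links outward from a seed memory
    # instead of matching isolated key-value entries.
    from collections import defaultdict

    class MemoryGraph:
        def __init__(self):
            self.memories = {}              # memory id -> memory text
            self.edges = defaultdict(list)  # memory id -> [(related id, relation)]

        def add(self, mem_id, text):
            self.memories[mem_id] = text

        def link(self, a, b, relation):
            # Store the relation in both directions so traversal works either way.
            self.edges[a].append((b, relation))
            self.edges[b].append((a, relation))

        def neighborhood(self, mem_id, depth=1):
            """Collect all memories reachable within `depth` hops of a seed."""
            seen, frontier = {mem_id}, [mem_id]
            for _ in range(depth):
                frontier = [n for m in frontier
                            for n, _ in self.edges[m] if n not in seen]
                seen.update(frontier)
            return [self.memories[m] for m in seen]

    g = MemoryGraph()
    g.add("pref", "prefers serverless deployments")
    g.add("proj", "migrating an API to AWS Lambda")
    g.add("cost", "hard monthly cloud budget")
    g.link("pref", "proj", "motivates")
    g.link("proj", "cost", "constrained_by")
    # Two hops from the serverless preference reaches the cost constraint too.
    print(g.neighborhood("pref", depth=2))
    ```

    The point of the traversal is exactly the chain described above: a query that matches the serverless preference can pull in the AWS project and the cost constraint as related context, which a flat store would miss.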

  • Author here. I have been learning about how ChatGPT, Claude, and OpenClaw handle persistent memory: what actually gets stored, how it is retrieved, etc.

    Quick summary of the findings:

    - ChatGPT pre-computes lightweight summaries of your ~15 most recent conversations and injects them into every prompt. No vector search, no RAG. Simpler than I expected.

    - Claude takes an on-demand approach; it has search tools it can invoke to query your past conversations, but it only fires them when it judges the context is relevant. More flexible, less consistent.

    - OpenClaw stores memory as plain Markdown on your local machine with hybrid search (semantic + BM25, 70/30 weighting). Fully transparent, but single-platform.

    Full disclosure: I'm building in this space (Maximem Vity — a private, secure vault for cross-LLM memory). The comparison stands on its own, but that context motivated the research.

    Happy to discuss the architectural differences or answer questions.
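    For anyone curious what the 70/30 hybrid weighting mentioned for OpenClaw means in practice, here is a minimal sketch of score fusion. The scoring values are made up for illustration, and this is not OpenClaw's code; it just shows how a semantic (embedding-similarity) score and a lexical BM25 score can be blended per candidate, assuming both are normalized to [0, 1] first:

    ```python
    # Hybrid retrieval scoring: blend a semantic similarity score with a
    # lexical BM25 score at a fixed 70/30 weighting, then rank candidates.
    def hybrid_score(semantic, bm25, w_semantic=0.7, w_bm25=0.3):
        # Both inputs are assumed pre-normalized to the [0, 1] range.
        return w_semantic * semantic + w_bm25 * bm25

    # (semantic score, bm25 score) per candidate memory -- toy numbers.
    candidates = {
        "note_a": (0.92, 0.40),  # strong semantic match, weak keyword overlap
        "note_b": (0.55, 0.95),  # weak semantic match, strong keyword overlap
    }
    ranked = sorted(candidates,
                    key=lambda k: hybrid_score(*candidates[k]),
                    reverse=True)
    print(ranked)  # note_a scores 0.764, note_b scores 0.670
    ```

    With the semantic side weighted at 0.7, a conceptually similar memory beats a keyword-heavy one unless the lexical signal is overwhelming, which matches the "semantic-first" framing in the summary above.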

  • 1 point | 1 comment | www.maximem.ai

    How ChatGPT, Claude, and OpenClaw Remember You and Why it is Not Enough

HackerNews