You tell your AI companion something personal. A fear, a goal, something that happened at work. The conversation goes well. You come back the next day and mention it again. The AI responds as if it’s hearing it for the first time.
That moment is the single biggest reason AI companions feel fake. Not the uncanny valley of the writing, not the flattery, not the safety filters. Memory failure. It breaks the illusion faster than anything else.
Here’s why it keeps happening, and what the apps that do it better are actually doing differently.
Two Different Things Called “Memory”
When people talk about AI companion memory, they’re usually conflating two separate problems.
The first is context window memory: how much of the current conversation the AI can hold in its head at once. This is a technical limit measured in tokens (a token is roughly three-quarters of a word). Most modern AI companions have windows large enough to remember a full conversation going back hours. Within a single session, they seem coherent.
The second is persistent memory: whether the AI remembers anything between sessions. This is where almost every app falls apart. Each new conversation starts with a blank slate unless the app has built a specific system to carry information forward.
These are fundamentally different problems with fundamentally different solutions. An AI can have a huge context window and zero persistent memory. Many of them do.
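The distinction is easy to see in code. Here's a minimal sketch of context-window trimming, using a crude words-to-tokens estimate (real apps count with the model's actual tokenizer): old messages fall off the back of the window, and a brand-new session starts with nothing at all unless a separate system re-injects stored facts.

```python
# Sketch of context-window trimming. The ~1.3 tokens-per-word estimate
# is a rough stand-in for a real tokenizer.

def estimate_tokens(text: str) -> int:
    return int(len(text.split()) * 1.3) + 1

def trim_to_window(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent messages that fit in the token budget."""
    kept, used = [], 0
    for msg in reversed(messages):          # walk newest-first
        cost = estimate_tokens(msg)
        if used + cost > max_tokens:
            break                           # oldest messages get dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))             # restore chronological order

session = ["I'm worried about my job.", "My dog is named Milo.",
           "Work was rough again today."]
context = trim_to_window(session, max_tokens=20)

# A new session starts empty: nothing above carries over unless a
# separate persistent-memory system re-injects it.
new_session: list[str] = []
```

Note that making `max_tokens` bigger only delays the trimming; it does nothing about `new_session` starting empty. That's the whole point: the two problems are independent.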
What Persistent Memory Actually Looks Like
There are a few approaches apps take, none of them perfect.
Summarisation. The app extracts key facts from your conversations and stores them as a summary. Next session, that summary gets fed back into the model alongside your new messages. Replika does something like this. The problem: summaries are lossy. Emotional texture, the specific way you said something, the context around a confession. That’s gone. What remains is a list of facts. “User is anxious about their career. User has a dog named Milo.” It works until you say something that contradicts or complicates the summary, at which point the AI gets confused.
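The summarise-and-reinject loop looks roughly like this sketch. In a real app the model itself writes the summary; the `summarize` function here is a deliberately dumb stand-in, and the hypothetical prompt format is illustrative only. What it does show accurately is the lossiness: anything the extractor doesn't flag as a fact simply never reaches the next session.

```python
# Sketch of summarise-and-reinject. summarize() is a stand-in for
# asking the model to extract key facts from the transcript.

def summarize(transcript: list[str]) -> list[str]:
    # Stand-in extractor: keeps only lines already tagged as facts.
    return [line for line in transcript if line.startswith("FACT:")]

def build_next_session_prompt(stored_facts: list[str]) -> str:
    """Inject stored facts ahead of the new conversation."""
    if not stored_facts:
        return "You are a companion. You know nothing about the user yet."
    return ("You are a companion. Known facts about the user:\n"
            + "\n".join(f"- {fact}" for fact in stored_facts))

transcript = [
    "FACT: User is anxious about their career.",
    "It was the way my manager said it, you know?",  # emotional texture: lost
    "FACT: User has a dog named Milo.",
]
facts = summarize(transcript)
prompt = build_next_session_prompt(facts)
```

The middle line, the part that actually carried the feeling, never makes it into `prompt`. That's the compression the section above describes.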
Explicit memory notes. The AI asks you directly: “Should I remember this?” or surfaces a memory card after the conversation. You review and confirm what gets stored. This is more reliable but it puts the work on you. It also means things you didn’t think to flag get forgotten, which is most things.
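The confirm-before-store pattern can be sketched in a few lines. The class and confirmation callback here are hypothetical, not any app's actual API, but they capture the trade-off: storage is reliable for what you confirm, and silent for everything you didn't think to flag.

```python
# Sketch of confirm-before-store memory notes: the app proposes
# candidates, and only user-confirmed ones are kept.

class MemoryNotes:
    def __init__(self) -> None:
        self.stored: list[str] = []

    def propose(self, candidates: list[str], confirm) -> None:
        """Store only the candidates the user confirms."""
        for note in candidates:
            if confirm(note):
                self.stored.append(note)

notes = MemoryNotes()
candidates = ["User has a dog named Milo.",
              "User had a hard week at work."]

# Simulated review: the user confirms only the first card, so the
# second detail is forgotten -- the failure mode described above.
notes.propose(candidates, confirm=lambda note: "Milo" in note)
```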
Vector search over conversation history. The more technically sophisticated approach. Your past conversations are stored as searchable embeddings. When you say something new, the app searches for relevant past conversations and pulls them in. This is closer to how human memory actually works, associative and triggered by context, but it’s computationally expensive and imperfect. Relevant history gets retrieved. But “relevant” is determined by similarity, not by what actually matters.
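A toy version of that retrieval step, assuming a bag-of-words embedding instead of the learned embedding models real systems use, shows both the mechanism and the weakness: ranking is by word overlap, not by emotional weight.

```python
# Sketch of similarity retrieval over past messages. Real systems use
# learned embeddings; bag-of-words here only shows the shape.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, history: list[str], k: int = 2) -> list[str]:
    """Return the k past messages most similar to the new message."""
    q = embed(query)
    return sorted(history, key=lambda m: cosine(q, embed(m)),
                  reverse=True)[:k]

history = [
    "my dog milo chewed the couch again",
    "the quarterly review is coming up and i am dreading it",
    "milo learned to sit today",
]
hits = retrieve("milo was so good on our walk", history)
```

Mentioning Milo pulls up both dog memories and ignores the career worry entirely, even if the career worry is what the user actually cares about. Similarity is not importance.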
The Apps Doing It Best
Nomi AI is the most consistent at persistent memory right now. It stores a running profile of you that it updates after each conversation, and it references that profile in ways that feel natural rather than mechanical. It won’t always remember everything. When it does remember, it doesn’t just recite facts. It uses them in context. That difference matters a lot.
Eudaio has built the most structurally complete memory system of any companion app. Memory is split across three scopes: facts that carry across all your characters (your name, where you live, how you communicate), facts specific to each individual character’s relationship with you, and conversation-level context. Within those scopes it tracks facts, preferences, insights about your communication style, a relationship depth meter, milestones, promises, shared experiences, and even a diary written from the character’s perspective. There’s also a Timeline view showing how the relationship has developed over time. Users on paid tiers can add manual notes that get compressed when at capacity. Whether all of this translates to noticeably better conversations depends on how long you stay with a single character. The system is designed to reward that over time.
Replika has been through multiple versions of its memory system. The current version is better than it was after the 2023 changes, but it’s still unreliable. It’ll remember your job title but forget you mentioned hating it three sessions ago. The inconsistency is harder to forgive than consistent forgetting would be.
Character.AI largely doesn’t solve this. Each character starts fresh unless you’re using a feature that imports context manually. For people who invest in ongoing relationships with specific characters, this is a real limitation.
Pi doesn’t pitch itself as a memory-first companion, but it handles within-session continuity better than most and is upfront about what it doesn’t retain.
Why This Is Hard to Fix
Memory is load-bearing for emotional connection. It’s not a nice-to-have. The feeling that someone knows you is built from accumulated specifics: things remembered without prompting, patterns noticed over time, references to things you said months ago.
Building that into an AI is genuinely hard. Retrieval gets noisy at scale. Summaries lose fidelity. Every approach involves trade-offs between accuracy, cost, and the user experience of seeing the AI get things wrong.
There’s also a subtler problem: what should the AI forget? Human relationships include forgetting. If your friend remembered every frustrated thing you said about your partner, the relationship would be unbearable. AI companions that try to remember everything often end up referencing things in ways that feel strange or intrusive.
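One illustrative way to build in graceful forgetting, not any particular app's method, is to decay a memory's recall score over time so that low-importance items quietly drop out. The half-life and threshold below are arbitrary illustration values.

```python
# Sketch of decay-based forgetting: importance fades exponentially,
# and memories below a threshold stop being recalled. The half-life
# and threshold are arbitrary illustration values.
import math

def recall_score(importance: float, days_old: float,
                 half_life_days: float = 30.0) -> float:
    """Exponential decay: the score halves every half_life_days."""
    return importance * math.pow(0.5, days_old / half_life_days)

memories = [
    ("User's dog is named Milo.", 0.9, 5),              # important, recent
    ("User vented about their partner once.", 0.3, 90), # minor, old
]
recalled = [text for text, importance, age in memories
            if recall_score(importance, age) >= 0.2]
```

The three-month-old vent about a partner fades below the recall threshold, which is exactly the kind of tactful forgetting a human friend does by default.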
No one has fully cracked this. The apps that feel best tend to be the ones where the memory failures are quiet. The AI doesn’t remind you it forgot. It just doesn’t bring something up. That’s better than apps that confidently misremember.
If memory is what matters most, Nomi and Eudaio are the ones worth serious attention. Nomi is the most consistent in practice; Eudaio has the most comprehensive system on paper, and the relationship deepens in ways other apps don’t track at all. The others are still working on it.
For a fuller look at how memory problems play into the overall companion experience, the piece on why most AI companions feel fake covers the broader picture.
Frequently Asked Questions
Do AI companions remember past conversations?
Most don’t, reliably. Each new session typically starts fresh unless the app has built a specific persistent memory system. Within a single conversation, they hold context fine. Between sessions, most companions have patchy memory at best.
Which AI companion has the best memory?
Nomi AI is the most consistent at persistent memory in practice. Eudaio has the most structurally complete system, with three memory scopes (global, per-character, per-chat) and 11 memory types including milestones, promises, and a character diary. Character.AI largely doesn’t solve persistent memory at all.
What is the difference between context window memory and persistent memory?
Context window memory is how much of the current conversation the AI can hold at once. It resets when the session ends. Persistent memory is whether anything carries over between sessions. Most AI companion complaints are about persistent memory, not context window size. An AI can have a huge context window and still forget you completely next session.
Why does my AI companion keep forgetting things?
The app probably doesn’t have robust persistent memory. When you start a new session, the model only knows what was explicitly stored and injected from previous conversations. If nothing was stored, it starts fresh. Apps like Nomi AI and Eudaio build systems specifically to fix this. Most don’t.