Product · January 18, 2026 · 12 min read

One Memory, Every AI: How Memory Passport Works

You do not get a new identity at every airport gate. Why should you get a new memory with every AI? Memory Passport makes your AI memory portable across ChatGPT, Claude, Gemini, and any MCP-compatible agent.

[Hero graphic: Memory Passport — Background, Factual, Event, and Skill memories flowing to Claude, ChatGPT, Gemini, and agents. One Memory, Every AI.]

The Problem: Digital Amnesia

Every time you switch between AI platforms, you start from zero. You have spent three months teaching Claude your coding preferences, your project context, and your communication style. Then you try ChatGPT for a creative writing task — and it has no idea who you are. You explain your background again. You re-state your preferences again. You rebuild context from scratch.

This is not a minor inconvenience. It is a fundamental failure of the current AI ecosystem. Each platform maintains its own siloed memory (if it maintains any at all), and none of them can talk to each other. Your AI identity is fragmented across a dozen platforms, each holding a partial, incomplete picture of who you are.

The average knowledge worker uses 3-4 AI tools regularly. Each tool knows a fraction of the user's context. The result is a paradox: the more AI tools you use, the smaller the slice of your context any one of them holds. Memory Passport solves this by making your AI memory portable — a single, unified memory that travels with you across every platform.

The Passport Analogy

Think about your physical passport. When you arrive at an airport in Tokyo, you do not need to prove your identity from scratch. Your passport carries your verified identity across borders. The immigration officer in Tokyo trusts the same document that the officer in Paris trusted last month. You are you — everywhere.

Memory Passport works the same way for your AI identity. It is a portable, encrypted memory container that carries your preferences, context, interaction history, and personal model across AI platforms. When you connect Memory Passport to a new AI system, it does not need to learn you from scratch. It reads your passport and immediately understands your background, your preferences, and your communication style.

The analogy extends to privacy. Just as you control when to show your passport and to whom, Memory Passport gives you granular control over which memories are shared with which platforms. Your medical history might be relevant to a healthcare AI but irrelevant (and private) to a coding assistant. Memory Passport lets you selectively share — not all or nothing.

How Memory Passport Works

Memory Passport consists of three components: a portable memory format that standardizes how memories are structured, platform adapters that translate between the portable format and each AI platform's native memory system, and a sync protocol that keeps memories consistent across platforms.

When you interact with an AI platform connected to Memory Passport, the flow is straightforward. New memories generated during the conversation are extracted, typed, and stored in the portable format. When you switch to a different platform, the adapter for that platform reads your portable memories and injects relevant context. Changes made on any platform sync back to the central store.
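The three components and the flow between them can be sketched in miniature. This is an illustrative model, not the Memory Passport API — the class names (`PortableMemory`, `SyncStore`, `PlatformAdapter`) and the verbatim "extraction" are assumptions standing in for the real, model-driven pipeline:

```python
from dataclasses import dataclass

@dataclass
class PortableMemory:
    memory_id: str
    memory_type: str  # one of the six MemoryLake types
    content: str

class SyncStore:
    """Central store shared by all adapters (sync protocol elided here)."""
    def __init__(self):
        self.memories: list[PortableMemory] = []
    def add(self, memory: PortableMemory):
        self.memories.append(memory)

class PlatformAdapter:
    """Translates between the portable format and one platform's context."""
    def __init__(self, platform: str, store: SyncStore):
        self.platform, self.store = platform, store
    def extract(self, text: str, memory_id: str, memory_type: str):
        # Real extraction is model-driven; this sketch stores text verbatim.
        self.store.add(PortableMemory(memory_id, memory_type, text))
    def inject(self) -> str:
        # Format stored memories as context for this platform.
        return "\n".join(f"[{m.memory_type}] {m.content}" for m in self.store.memories)

store = SyncStore()
claude = PlatformAdapter("claude", store)
chatgpt = PlatformAdapter("chatgpt", store)
claude.extract("User prefers Rust over Python", "m1", "Factual")
print(chatgpt.inject())  # a memory captured via Claude is visible to ChatGPT
```

The point of the sketch is the shape: extraction writes to one shared store, and every adapter's injection reads from it.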

The entire system is built on MemoryLake's memory infrastructure — meaning every memory has full typing (Background, Factual, Event, Conversation, Reflection, Skill), temporal indexing, conflict detection, and provenance tracking. This is not a simple key-value clipboard. It is a complete memory system that happens to be portable.

The Portable Memory Format

The portable memory format is an open specification designed for interoperability. Each memory entry includes: a unique identifier, a memory type (one of six categories), the memory content in structured text, temporal metadata (creation time, last accessed, last modified), provenance information (source platform, source conversation, confidence score), and relationship links to other memories.
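A single entry might serialize roughly as follows. The field names here are illustrative, not the published specification — what matters is that every field class listed above (identity, type, content, temporal metadata, provenance, links) appears:

```python
import json

# Hypothetical serialization of one portable memory entry.
entry = {
    "id": "mem_8f3a",
    "type": "Factual",  # one of the six memory types
    "content": "User's primary language is Rust (switched from Python).",
    "temporal": {
        "created_at": "2026-01-10T09:14:00Z",
        "last_accessed": "2026-01-17T16:02:00Z",
        "last_modified": "2026-01-10T09:14:00Z",
    },
    "provenance": {
        "source_platform": "claude",
        "source_conversation": "conv_19b2",
        "confidence": 0.92,
    },
    "links": ["mem_77c1"],  # relationship links to other memories
}
print(json.dumps(entry, indent=2))
```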

The format is intentionally verbose — it carries more information than any single platform needs. This is by design. When a platform adapter reads a memory, it extracts the subset of information that the platform can use and discards the rest. When the platform generates new memories, the adapter enriches them with the metadata that the portable format requires.

The format supports encryption at the memory level, not just the container level. Individual memories can be encrypted with different keys, enabling selective sharing scenarios where some memories are readable by all platforms and others are restricted to specific platforms or use cases.

Platform Adapters

Platform adapters are the translation layer between Memory Passport and each AI platform. Each adapter understands the platform's native memory format, API constraints, and context window limitations. The Claude adapter, for instance, knows how to format memories as system prompt context in a way that Claude processes optimally. The ChatGPT adapter formats the same memories differently, optimizing for OpenAI's specific behavior patterns.

Adapters are not simple format converters. They implement intelligent context selection — given a user's complete memory store, the adapter selects the memories most relevant to the current conversation and formats them to fit within the platform's context budget. A user with 10,000 memories does not dump all of them into every conversation. The adapter selects the 50-100 most relevant, balancing recency, frequency of access, and semantic relevance to the current topic.
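A minimal version of that selection step might look like the following. The weights, the exponential recency decay, and the word-overlap relevance proxy are all placeholder assumptions (production systems would use embedding similarity and real token counts):

```python
import math
import time

def score(memory, query_terms, now, w_recency=0.4, w_freq=0.2, w_rel=0.4):
    """Blend recency, access frequency, and topical relevance into one score."""
    age_days = (now - memory["last_accessed"]) / 86400
    recency = math.exp(-age_days / 30)            # decays over ~a month
    freq = min(memory["access_count"] / 50, 1.0)  # saturating frequency signal
    terms = set(memory["content"].lower().split())
    rel = len(terms & query_terms) / max(len(query_terms), 1)
    return w_recency * recency + w_freq * freq + w_rel * rel

def select(memories, query, budget_tokens, now=None):
    """Rank all memories, then greedily keep the top ones that fit the budget."""
    now = now or time.time()
    q = set(query.lower().split())
    ranked = sorted(memories, key=lambda m: score(m, q, now), reverse=True)
    out, used = [], 0
    for m in ranked:
        cost = len(m["content"].split())  # crude stand-in for a token count
        if used + cost > budget_tokens:
            continue
        out.append(m)
        used += cost
    return out
```

Given a 10,000-memory store, `select` is the piece that turns it into the 50-100 entries that actually enter the conversation.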

The adapter system is extensible through the Model Context Protocol (MCP). Any AI platform that supports MCP can integrate with Memory Passport through a standard MCP server. This means Memory Passport works not just with the major platforms but with any custom agent, IDE extension, or tool that speaks MCP.

Sync Protocol

The sync protocol ensures that memories remain consistent across platforms. When you tell Claude that you have switched from Python to Rust, that update should be reflected when you next use ChatGPT. Without sync, platforms would have contradictory memories about you, and the whole point of portability would be undermined.

The protocol uses an event-sourced architecture. Every memory operation (create, update, delete) is recorded as an immutable event. Each platform adapter maintains a cursor indicating the last event it has processed. When you connect to a platform, the adapter pulls all events since its last sync and applies them locally. Conflict resolution follows MemoryLake's three-level conflict detection: if two platforms have modified the same memory, the system detects the conflict and applies resolution rules.
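The log-plus-cursor mechanism can be sketched directly (names are illustrative; conflict resolution is elided):

```python
from dataclasses import dataclass

@dataclass
class Event:
    seq: int         # monotonically increasing position in the log
    op: str          # "create" | "update" | "delete"
    memory_id: str
    payload: str

class EventLog:
    """Append-only, ordered record of every memory operation."""
    def __init__(self):
        self.events: list[Event] = []
    def append(self, op, memory_id, payload=""):
        self.events.append(Event(len(self.events) + 1, op, memory_id, payload))
    def since(self, cursor):
        return [e for e in self.events if e.seq > cursor]

class AdapterReplica:
    """One platform's local view, caught up by advancing its cursor."""
    def __init__(self, log):
        self.log, self.cursor, self.state = log, 0, {}
    def sync(self):
        for e in self.log.since(self.cursor):
            if e.op == "delete":
                self.state.pop(e.memory_id, None)
            else:
                self.state[e.memory_id] = e.payload
            self.cursor = e.seq

log = EventLog()
claude, chatgpt = AdapterReplica(log), AdapterReplica(log)
log.append("create", "m1", "Primary language: Python")
claude.sync()
log.append("update", "m1", "Primary language: Rust")
chatgpt.sync()               # catches up on both events in one pass
print(chatgpt.state["m1"])   # -> Primary language: Rust
```

Because the log is append-only and each replica tracks its own cursor, a platform that has been disconnected for months simply replays everything past its cursor — which is exactly the resilience property described below.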

Sync is designed for resilience. If a platform is offline or the user has not connected to it in months, the sync catches up automatically when reconnected. There is no "sync window" or data loss from extended disconnection. The event log provides a complete, ordered history of every memory change across all platforms.

Privacy by Design

Memory Passport is built on a fundamental principle: you own your memories. No platform, no provider, and no third party can access your memory data without your explicit permission. This is not a policy choice — it is an architectural guarantee enforced by end-to-end encryption with triple-party key management.

The encryption architecture works as follows: your memories are encrypted with a key that is split across three parties — you (the user), MemoryLake (the infrastructure provider), and an independent key escrow service. No single party can decrypt your memories alone. You must actively authorize access, and you can revoke access at any time.

This architecture means that even if MemoryLake's servers were compromised, the attacker would gain access to encrypted blobs — not readable memories. The encryption keys never exist in a single location, and the decryption process requires active participation from at least two of the three parties.
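The "any two of three parties" property is the classic threshold secret sharing guarantee. Here is an illustrative 2-of-3 Shamir sharing over a prime field — a sketch of the principle only; whether Memory Passport uses Shamir specifically is an assumption, and a real deployment would rely on a vetted cryptographic library:

```python
import secrets

P = 2**127 - 1  # prime field modulus (illustratively small for a sketch)

def split(secret):
    """Shares of f(x) = secret + a1*x; any 2 points recover a degree-1 line."""
    a1 = secrets.randbelow(P)
    return [(x, (secret + a1 * x) % P) for x in (1, 2, 3)]

def combine(share_a, share_b):
    """Lagrange interpolation at x = 0 from any two shares."""
    (x1, y1), (x2, y2) = share_a, share_b
    inv = pow(x2 - x1, -1, P)
    # f(0) = (y1*x2 - y2*x1) / (x2 - x1)  (mod P)
    return ((y1 * x2 - y2 * x1) % P) * inv % P

key = 123456789
user, memorylake, escrow = split(key)
assert combine(user, escrow) == key        # any two parties suffice
assert combine(memorylake, user) == key
# A single share reveals nothing: every possible key is equally consistent with it.
```

One share in isolation is a single point on a random line, so a lone compromised party — MemoryLake included — learns nothing about the key.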

Selective Memory Sharing

Not all memories should go everywhere. Your medical history is relevant to a healthcare AI but should not be visible to a coding assistant. Your company's proprietary project details should not leak to a general-purpose chatbot. Memory Passport implements granular sharing controls at the memory level.

You can configure sharing rules based on memory type (share Factual memories but not Conversation memories), memory topic (share work-related memories but not personal ones), platform identity (share everything with Claude but only preferences with ChatGPT), or custom tags that you assign to individual memories.
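Evaluating those rules might look like the following first-match sketch, combining all four rule kinds (type, topic, platform, tag). The rule schema and default-deny policy are illustrative assumptions:

```python
def is_shareable(memory, platform, rules):
    """Return the first matching rule's decision; deny by default."""
    for rule in rules:
        if rule.get("platform") not in (None, platform):
            continue  # rule scoped to a different platform
        if rule.get("type") not in (None, memory["type"]):
            continue
        if rule.get("topic") not in (None, memory.get("topic")):
            continue
        if rule.get("tag") is not None and rule["tag"] not in memory.get("tags", []):
            continue
        return rule["allow"]
    return False

rules = [
    {"type": "Conversation", "allow": False},                 # never share raw chats
    {"topic": "medical", "platform": "healthbot", "allow": True},
    {"topic": "medical", "allow": False},                     # medical: healthbot only
    {"allow": True},                                          # everything else: share
]
mem = {"type": "Factual", "topic": "medical", "tags": []}
print(is_shareable(mem, "healthbot", rules))   # True
print(is_shareable(mem, "coding-ide", rules))  # False
```

In the real system the outcome of this decision gates decryption itself, per the cryptographic enforcement described next, rather than acting as a mere access-control check.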

Sharing rules are enforced cryptographically, not just through access controls. Memories that are restricted from a platform are never decrypted for that platform's adapter. The platform cannot see that restricted memories exist, let alone access their content. This is a stronger guarantee than typical access control lists, which can be bypassed through administrative access.

Use Case: Personal Continuity

The most immediate use case is personal AI continuity. A user who has built up context with one AI platform should not lose that context when they try another. Memory Passport makes platform switching frictionless — your preferences, communication style, and personal context travel with you.

Consider a concrete scenario: you have been using Claude for work tasks and ChatGPT for creative writing. With Memory Passport, both platforms know your writing style preferences. Claude knows you prefer concise, technical responses because that is your general preference — but ChatGPT also knows to be more expressive in creative contexts because Memory Passport carries that contextual nuance.

Personal continuity also means continuity across devices. Your memory is not locked to a specific browser session, app installation, or device. Whether you are on your laptop, your phone, or a colleague's computer, your AI memory is available through Memory Passport.

Use Case: Professional Workflows

Professional knowledge workers often use different AI tools for different tasks: a coding assistant in their IDE, a research assistant for literature review, a writing assistant for documentation, and a general assistant for ad-hoc queries. Without Memory Passport, each tool operates in isolation. The coding assistant does not know what the research assistant learned. The writing assistant does not know what code was just written.

Memory Passport enables a connected workflow where each tool builds on the context accumulated by others. The research assistant finds a relevant paper, and the coding assistant immediately knows to incorporate that paper's approach. The coding assistant implements a new function, and the documentation assistant can reference it without being told.

This cross-tool continuity is particularly valuable for complex, multi-day projects where context accumulates over time. Without Memory Passport, each tool session starts with "let me explain the project again." With it, every tool picks up exactly where the workflow left off.

Use Case: Team Knowledge

Memory Passport also supports team-level shared memories. A team can maintain a shared memory space — project decisions, architectural choices, customer context — that is accessible to every team member's AI tools. When a new engineer joins the team, their AI assistant instantly has access to the team's accumulated knowledge.

Shared team memories coexist with individual memories. Your personal preferences remain private; the team's project decisions are shared. When you interact with an AI tool, Memory Passport merges your personal context with the team context, providing a complete picture without exposing private information to the team.

Team memories follow the same typing, versioning, and provenance rules as individual memories. When a team decision changes, the memory is updated with full audit trail. When two team members provide conflicting information, the conflict detection system flags it for resolution.

Technical Architecture

The technical architecture of Memory Passport consists of four layers. The storage layer uses MemoryLake's core memory infrastructure — typed memories, dual indexes, conflict detection. The encryption layer provides end-to-end encryption with triple-party key management. The sync layer implements event-sourced cross-platform synchronization. The adapter layer provides platform-specific integrations.

The system is designed for low latency. When you start a conversation with an AI platform, the adapter needs to retrieve and inject relevant memories within the platform's time-to-first-token budget — typically under 500 milliseconds. This is achieved through aggressive caching at the adapter level, pre-computed relevance rankings, and a fast sync protocol that minimizes round-trips.

The architecture supports both cloud-hosted and on-premise deployments. For organizations with strict data residency requirements, the entire Memory Passport stack can run within their infrastructure, with memory data never leaving their network boundary.

Conclusion

Memory Passport represents a fundamental shift in how we think about AI memory. Instead of each platform maintaining its own partial, siloed view of you, Memory Passport creates a single, unified memory that is yours — portable, private, and persistent across every AI platform you use.

The passport analogy is apt: just as your physical passport gives you a consistent identity across countries, Memory Passport gives you a consistent identity across AI platforms. You are no longer a stranger every time you try a new tool. You are you — with all the context, preferences, and history that makes your AI interactions genuinely personal.

We are building toward a future where the question is not "which AI platform has the best memory?" but "which AI platform makes the best use of my memory?" Memory Passport decouples memory from platform, giving users the freedom to choose tools based on capability rather than lock-in — and giving AI platforms the complete context they need to truly serve each user.
