Switching AI assistants has always had a hidden tax: you lose everything the model learned about how you work. Your coding style, your writing tone, your preferences for how information gets structured -- it all evaporates the moment you open a new tab on a competitor's platform.
Anthropic shipped a direct answer to that on March 1st. Claude's Import Memory feature reduces the migration from months of re-training to a single copy-paste sequence. The technical switching cost is now close to zero.
That raises a harder question: if the friction is gone, why would anyone stay?
How the Two-Step Transfer Works
The process is almost aggressively simple. Visit claude.com/import-memory and Anthropic hands you a prompt. Paste that prompt into your current AI assistant -- ChatGPT, Gemini, Copilot, Grok, any platform that stores user memories -- and it outputs a structured block containing everything it knows about you. Copy that block, paste it into Claude's memory settings, and Claude processes it into its own memory system.
No file exports. No JSON parsing. No API credentials. The whole operation takes under a minute.
For ChatGPT users specifically, there's a second path: navigate to Settings > Personalization > Manage Memories, copy the entries directly, and paste them into Claude. Either route works.
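Anthropic hasn't published the export schema, so the exact shape of the structured block is an open question. Based on the description above, though, it's reasonable to picture something like the following -- every entry here is invented for illustration:

```
## Memory export (hypothetical format)
- Writing style: concise; prefers bullet summaries over long paragraphs
- Code: Python, type hints, pytest for tests
- Project context: maintains an internal analytics dashboard ("Atlas")
- Instruction: always ask before restructuring a document
```

Whatever the real format looks like, the key property is that it's plain text: readable, editable before you paste it, and not tied to either platform's internal storage.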
The feature is available on all paid Claude plans -- Pro ($20/month), Max, Team, and Enterprise. Memories are encrypted, not used for model training, and exportable at any time. Anthropic is positioning this explicitly as an anti-lock-in play.
Google is reportedly testing a similar "Import AI Chats" feature for Gemini, but the current implementation only transfers chat history, not saved memories. That's a meaningful gap: chat logs are searchable archives, but they don't actively shape how the model responds to you. Memories do.
What the Import Actually Moves
Understanding what transfers -- and what doesn't -- matters before you treat this as a full migration.
What moves cleanly:
- User preferences (writing style, response length, formality level)
- Project context you've previously shared
- Recurring topics and terminology
- Explicit instructions you've given the model

What doesn't move:
- Conversation history
- Custom GPTs or Claude Projects with their own system prompts
- Any context that lives inside a specific workflow integration -- your Slack-connected assistant or the GPT embedded in your company's internal tool

And critically, the implicit calibration that comes from hundreds of interactions isn't fully portable. The model has learned your rhythms in ways that don't surface cleanly as discrete memory entries.
Think of it as moving your address book, not your entire inbox. The foundation transfers; the accumulated texture does not.
Where Memory Portability Breaks Down
There are three scenarios where this feature matters less than the headline suggests.
Integrated tooling. If your team runs ChatGPT through a Microsoft 365 Copilot license or uses a custom GPT wired into internal systems, the memory import is irrelevant. The lock-in isn't your personal preferences -- it's the infrastructure. Switching means renegotiating contracts and rebuilding integrations, not clicking a button.
Team vs. individual. Import Memory is a personal account feature. A 50-person team where individuals have each built months of model context doesn't have a one-click solution. Each person migrates independently, and the organizational memory (shared project context, team-level instructions) doesn't port at all.
Agentic workflows. If you're running Claude or ChatGPT inside automated pipelines -- scheduled tasks, multi-step agents, API calls -- personal memory isn't even in the picture. Those systems are configured through system prompts and tool definitions, not user preferences. Platform choice there is an engineering decision, not a UX one.
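To make concrete why personal memory never enters the picture, here's a minimal sketch of how an agentic pipeline's context is typically assembled. The names (`AGENT_CONFIG`, `fetch_merged_prs`, `build_request`) are illustrative, not any vendor's API:

```python
# Illustrative sketch: in an automated pipeline, everything the model
# "knows" is assembled per-call from code and configuration -- there is
# no personal memory store to migrate. All names here are hypothetical.

AGENT_CONFIG = {
    # The system prompt and tool schema fully define the agent's behavior.
    "system": "You are a release-notes bot. Summarize merged pull requests.",
    "tools": [
        {
            "name": "fetch_merged_prs",
            "description": "Return pull requests merged since a given date.",
            "input_schema": {
                "type": "object",
                "properties": {"since": {"type": "string"}},
                "required": ["since"],
            },
        }
    ],
}

def build_request(task: str) -> dict:
    """Assemble one API call. Every field comes from config or code --
    nothing is drawn from a user's saved preferences."""
    return {
        "system": AGENT_CONFIG["system"],
        "tools": AGENT_CONFIG["tools"],
        "messages": [{"role": "user", "content": task}],
    }
```

Porting a system like this means adapting `build_request` to a different provider's request shape and tool format -- an engineering task that the memory import doesn't touch.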
A Framework for Deciding When to Switch
This feature makes the migration calculus cleaner for individual power users. It doesn't change much for teams or developers.
Switch now if:
- You're on a personal paid plan and primarily use the AI for writing, research, or knowledge work
- Your current platform is raising prices or you have philosophical concerns about how it handles data
- You've been meaning to try Claude but kept deferring because of the re-onboarding cost
Stay or evaluate further if:
- Your team is mid-deployment on a company-wide contract
- You rely on custom GPTs, Copilot integrations, or workflow connectors that don't have Claude equivalents
- Your use case is primarily agentic -- the memory feature doesn't affect your stack
Worth testing in parallel if:
- You're evaluating platforms for a future procurement decision
- You want to compare Claude and ChatGPT on the same underlying context to see which responds better to your actual work patterns
One thing worth noting: Anthropic's choice to launch this during a week when thousands of users are publicly protesting OpenAI's military contract isn't coincidental. The feature is good on its merits. The timing is also clearly strategic. Neither of those things cancels the other out.
The real signal isn't that Anthropic built a migration tool. It's that they're confident enough in Claude's quality to make switching as easy as possible and let the platform compete on performance alone. If that confidence is warranted, the import feature is the start of a longer shift in how AI platform loyalty gets established -- not through accumulated friction, but through actual daily utility.
