A new research paper accepted at the ACM Web Conference 2026 should make every small business pause before treating ChatGPT like a harmless productivity assistant.
In "The Algorithmic Self-Portrait: Deconstructing Memory in ChatGPT," researchers studied 80 real ChatGPT users who donated their full conversation histories through legal data access requests. They found 2,050 stored memories. Users had explicitly asked ChatGPT to remember only 84 of them.
That means 96% of the stored memories, 1,966 of the 2,050, were created automatically.
The bigger issue is not just volume. It is the type of information being stored. According to the paper, 52% of stored memories contained psychological insights such as beliefs, motivations, fears, or patterns in how a person thinks; 28% contained personal data that could fall under GDPR protection; and 35% of participants had health information stored, including symptoms, conditions, or medications.
For small and mid-sized businesses, that shifts the conversation. This is no longer just about whether employees paste sensitive text into a chatbot. It is also about whether the system is building a quiet profile from everyday work conversations, then using that profile in future interactions.
Why this matters for SMBs
Many SMBs use ChatGPT informally before they ever create an AI policy. A founder uses it to draft a proposal. A sales rep asks for help with objections from a specific account. An office manager summarizes an HR issue. A customer support lead pastes a complaint thread and asks for a response.
Each of those moments can expose more than the user intended.
A memory feature that silently stores patterns can accumulate business context over time:
- details about pricing pressure from specific customers
- internal team tensions or employee performance concerns
- health or leave information mentioned in HR-related prompts
- client names, industries, budgets, or legal concerns
- the decision style and risk tolerance of leadership
That profile does not just sit there. The paper argues that memory shapes future responses across conversations. In plain English, ChatGPT may respond to a user based partly on an internal profile the user never meant to create.
For a business, that creates two separate risks.
First, there is a privacy and compliance risk. If employees discuss personal data, medical details, or client information, those details may persist longer than expected.
Second, there is a governance risk. If an AI system is adapting responses based on inferred beliefs, habits, or vulnerabilities, you have less visibility into why one employee sees one answer and another sees something else.
The practical business question
Most SMBs do not need to panic. They do need to stop assuming that "I did not click remember" means "nothing was remembered."
That assumption is now dead.
If your team uses ChatGPT for real work, the right question is this: What information could the system infer and retain from normal usage, and would we be comfortable if that shaped future outputs?
That applies to confidential business information and to personal information about employees and clients.
What SMBs should do now
Here is the short list.
1. Review stored memories on ChatGPT accounts used for work
If your team is using individual ChatGPT accounts for work, ask them to review what is stored in memory settings (in ChatGPT this typically lives under Settings, then Personalization, then Memory, though the exact menu location can change). Do not treat this as a one-time cleanup. Make it part of routine AI hygiene.
Look for:
- customer names or account details
- internal project context
- personal preferences that reveal decision patterns
- health, HR, or employment-related information
- anything that would be awkward in discovery, audit, or a client security review
2. Decide whether memory should be enabled at all
For many SMB use cases, the answer should be no.
Memory can be helpful for personal productivity. It is much harder to justify in workflows involving sales, operations, HR, finance, legal matters, or client service. If the convenience is marginal and the data exposure is real, disable it.
If you keep it enabled for a narrow use case, define that use case clearly.
3. Set rules for what never goes into general-purpose AI tools
Your team should not have to guess.
Create a simple written policy that prohibits entering the following (a lightweight automated backstop is sketched below):
- employee health or leave information
- payroll, tax, or banking details
- client confidential information
- contracts, legal disputes, or regulated data
- performance issues tied to identifiable people
A short list that people actually follow beats a long policy nobody reads.
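Policy text alone will not catch every slip. For teams that want a technical backstop on top of the written rules, here is a minimal sketch of a pre-submission check. The category names and regex patterns are illustrative assumptions, not a vetted rule set; a real deployment would use a dedicated DLP tool tuned to your business.

```python
import re

# Illustrative patterns only: these category names and keywords are
# assumptions for this sketch, not a vetted rule set. Regex will miss
# plenty and produce false positives; treat hits as a prompt for human
# review, not a verdict.
BLOCKED_PATTERNS = {
    "health or leave": re.compile(r"\b(diagnosis|medical leave|prescription|sick note)\b", re.I),
    "payroll or banking": re.compile(r"\b(payroll|salary|IBAN|routing number)\b", re.I),
    "legal or regulated": re.compile(r"\b(NDA|lawsuit|settlement|social security)\b", re.I),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the policy categories a prompt appears to touch."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    hits = flag_sensitive("Draft a reply about Jane's medical leave and salary review.")
    if hits:
        # Block the send, or route the prompt to a human for review.
        print("Prompt touches off-limits categories:", ", ".join(hits))
```

Even a crude check like this turns "be careful" into a concrete speed bump, which is often enough to make someone pause and rephrase before hitting send.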
4. Separate experimentation from production work
A lot of risk comes from casual use. Someone tries ChatGPT for "just one thing" and ends up feeding it real business context.
Use approved accounts, approved tools, and approved workflows. If possible, keep public consumer AI tools away from sensitive internal tasks and move those use cases into platforms with clearer admin controls, retention terms, and data handling commitments.
5. Train employees on inference risk, not just copy-paste risk
Most AI training tells employees not to paste secrets into a prompt. That is necessary, but incomplete.
They also need to understand that repeated small disclosures can create a profile. A dozen harmless-looking prompts can reveal a lot when combined.
That is the lesson from this paper.
6. Document your AI data hygiene process
If a client asks how you handle AI risk, "we told people to be careful" is not enough.
Document who can use which tools, whether memory is allowed, how stored memories are reviewed, and what categories of data are off limits. That gives you a better operational baseline and a better answer during security questionnaires.
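If it helps to keep that baseline auditable, the written policy can be mirrored in a small machine-readable register. The sketch below is purely illustrative; the field names and values are hypothetical examples, not a standard schema.

```python
# Hypothetical example: field names and values are assumptions for this
# sketch, not a standard schema. Store it alongside the written policy
# and review it on the same cadence as the stored memories themselves.
AI_USAGE_REGISTER = {
    "approved_tools": ["company-managed ChatGPT workspace"],
    "memory_enabled": False,
    "memory_review_cadence_days": 30,
    "off_limits_data": [
        "employee health or leave information",
        "payroll, tax, or banking details",
        "client confidential information",
        "contracts, legal disputes, or regulated data",
        "performance issues tied to identifiable people",
    ],
    "policy_owner": "designated AI lead",
}
```

A plain-text version of the same fields works just as well. The point is that the answers live in one place when a security questionnaire arrives.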
The bottom line
ChatGPT memory is not just a convenience feature. Based on this research, it can function like a silent profiling layer.
For SMBs, the right response is not fear. It is control.
Review what is stored. Disable memory where it does not belong. Keep sensitive business and personal data out of general-purpose AI chats. Treat AI memory settings as part of your security posture, not a cosmetic preference.
If you want help turning that into a practical AI usage policy for your business, contact Barista Labs.
