Which AI Assistants Train on Your Inputs by Default in 2026
Anthropic, OpenAI, Google, Perplexity, and Mistral handle user inputs differently. Here is what each vendor's privacy policy actually says about training and retention.
Why the default matters
Every consumer AI subscription's terms of service control whether the vendor can use your conversations to train future models. The default settings differ across vendors and shift over time. The single most useful action is to check Settings → Data Controls (or its equivalent) on the day you sign up, and again periodically, since defaults can change with policy revisions.
Anthropic Claude
Per anthropic.com/legal/privacy, consumer Claude Pro and Max accounts do not train models on user inputs by default. Anthropic API traffic is also not used for training by default. Enterprise terms vary by contract. The default-off posture is the strictest among the major US-based assistants.
OpenAI ChatGPT
Per openai.com/policies/privacy-policy, consumer ChatGPT accounts (Free, Plus, Pro) use inputs for model training by default. The opt-out is documented at Settings → Data Controls → 'Improve the model for everyone'. ChatGPT Team and Enterprise accounts default to no training. Individual users must actively opt out, the inverse of Anthropic's default-off posture.
Google Gemini
Per gemini.google/policy-guidelines, Google Gemini consumer activity may be used to improve Google's services. The Web & App Activity setting on the user's Google account controls retention; pausing it limits storage, but how that interacts with training data varies by account type and Workspace context. Review the policy directly for the specific tier you use.
Perplexity and Mistral
Per perplexity.ai/hub/legal/privacy-policy, Perplexity does not train models on user data by default on the consumer plan. Per mistral.ai/legal/privacy-policy, Mistral's Le Chat data handling is documented in its privacy policy; review it before assuming a default. Because Mistral is headquartered in Paris, GDPR applies to it directly, a structural difference from its US-based competitors.
Practical advice
For sensitive prompts (legal, medical, internal-business): use a paid tier on a vendor whose default is no-training (Claude Pro, Perplexity Pro), or a Team / Enterprise tier from any vendor (those default to no-training across the board). For everyday use: the privacy difference is small in practice; pick by capability and price. Never paste passwords, API keys, or customer-identifiable PII into any consumer AI tier.
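The last rule above (never paste passwords, API keys, or PII) can be partially automated with a pre-send check. A minimal sketch in Python, using a handful of illustrative regex patterns; the pattern set and labels here are assumptions for demonstration, not an exhaustive scanner, and real tooling (secret scanners such as gitleaks or truffleHog) ships far larger rule sets:

```python
import re

# Illustrative patterns only -- a real scanner needs many more rules
# and will still miss free-form PII.
SECRET_PATTERNS = {
    "API key (sk- prefix)": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
    "Private key block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def find_secrets(prompt: str) -> list[str]:
    """Return labels of secret-like patterns found in the prompt text."""
    return [label for label, pattern in SECRET_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    risky = "My key is sk-abcdefghijklmnopqrstuv, reach me at user@example.com"
    hits = find_secrets(risky)
    if hits:
        print("Refusing to send; found:", ", ".join(hits))
```

Wiring a check like this in front of any API-based workflow costs a few milliseconds per prompt and catches the most mechanical leaks; it is no substitute for the tier-selection advice above.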
Sources
Anthropic privacy: anthropic.com/legal/privacy. OpenAI privacy: openai.com/policies/privacy-policy. Google AI policy: gemini.google/policy-guidelines. Perplexity privacy: perplexity.ai/hub/legal/privacy-policy. Mistral privacy: mistral.ai/legal/privacy-policy. All URLs accessed 2026-04-30.