How to Opt Out of Claude Model Training (Anthropic)

Opt out of AI services' use of your personal information as future training data.

Anthropic gives consumer Claude users a clear choice: allow Claude to learn from your conversations, or keep your data out of future model training. The setting lives in Privacy Settings and takes effect on new and resumed conversations going forward. This 2026 guide explains how to find it, what it actually covers, and what it cannot undo.

Why this matters. Consumer AI products use chats, posts, and account data to train models by default. Once your information is included in a training run, it cannot be removed retroactively. The earlier you opt out, the more of your data stays out. Priwall by mePrism tracks every major AI opt-out so you can act before, not after.

Why opt out of Claude model training?

Consumer Claude products — Free, Pro, and Max — may use your chat and coding-session content to train and improve future models. If you discuss health concerns, financial plans, work projects, or personal situations with Claude, that content can be included in training data unless you opt out. Anthropic states the opt-out applies to new and resumed conversations after you save the setting; conversations already in a training run that has started are not pulled back. Turning it off early is the only way to keep more of your data out.

Quick facts

Service type: Generative AI assistant (consumer plans)
Data used by default: Chat and coding-session content
Opt-out method: Settings → Privacy Settings toggle
Applies to: Free, Pro, and Max consumer accounts
Excluded by default: Claude for Work, API, Claude for Government

How to opt out of Claude model training

  1. Sign in to Claude. Go to claude.ai and sign in to your Free, Pro, or Max account.
  2. Open your account menu. Click your profile or account icon, typically in the top-right or sidebar.
  3. Open Settings. Choose Settings from the account menu.
  4. Navigate to Privacy Settings. Within Settings, find the Privacy Settings section.
  5. Locate the model training option. Find the toggle that controls whether your conversations may be used to train Anthropic's models.
  6. Turn it off and save. Switch the toggle off. If prompted, confirm the change. The setting applies to new and resumed conversations going forward.

Steps current as of 2026; if Anthropic has changed this flow, check its current privacy or settings page.

Tired of chasing AI privacy settings?

Priwall by mePrism finds your AI opt-out controls across major platforms, helps you turn them off, and monitors for changes — so a single policy update doesn't silently re-expose your data.

Skip the manual work →

What to expect after opting out

Once you save, any new chat or coding session — and any conversation you resume — is excluded from Anthropic's future training. Conversations that were already part of a training run that had begun before you opted out remain included; this is standard across major AI platforms. You can switch the setting back on at any time, and you can also delete individual conversations from your history independently of this control.

Frequently asked questions

Does opting out delete my conversation history?

No. Your past conversations stay in your account. The toggle only controls whether new and resumed conversations are used for training.

Does this affect Claude for Work or the API?

No. Claude for Work (team and business plans), the API, and Claude for Government are not used for model training by default.

If I opt out later, is past data removed from training?

No. Anthropic states the change is forward-looking — conversations already used or in a training run in progress are not pulled out.

Can I turn it back on?

Yes. The toggle can be switched on or off at any time in Privacy Settings.

How is this different from deleting a conversation?

Deleting removes the chat from your account view. Opting out controls whether your conversations contribute to training. Use both for stronger control.

"Tracking AI opt-outs across a dozen services was driving me crazy. Priwall by mePrism handles them in one place — and tells me when something changes. Worth every dollar."

— David L., Chicago, IL, mePrism customer

See pricing →

