🌐
Anthropic
anthropic.com › news › updates-to-our-consumer-terms
Updates to Consumer Terms and Privacy Policy
Today, we're rolling out updates to our Consumer Terms and Privacy Policy that will help us deliver even more capable, useful AI models. We're now giving users the choice to allow their data to be used to improve Claude and strengthen our safeguards against harmful usage like scams and abuse.
🌐
Reddit
reddit.com › r/claudeai › new privacy and tos explained by claude
r/ClaudeAI on Reddit: New privacy and TOS explained by Claude
August 28, 2025

Hi there,

I had Claude check the changes that come into force on September 28th.

Please note: Claude can make mistakes. Verify the changes yourself before accepting.

Here is Claude's analysis, evaluation, and tips:

Critical Changes in Anthropic's Terms of Service & Privacy Policy: May 2025 vs. September 2025 Analysis

MOST CRITICAL CHANGE: Fundamental Shift in Model Training Policy

OLD POLICY (May 2025): "We will not train our models on any Materials that are not publicly available, except in two circumstances: (1) If you provide Feedback to us, or (2) If your Materials are flagged for trust and safety review"

NEW POLICY (September 2025): "We may use Materials to provide, maintain, and improve the Services and to develop other products and services, including training our models, unless you opt out of training through your account settings. Even if you opt out, we will use Materials for model training when: (1) you provide Feedback to us regarding any Materials, or (2) your Materials are flagged for safety review"

ASSESSMENT: This is a massive privacy regression. Anthropic now defaults to using ALL your conversations for training unless you explicitly opt out. This fundamentally changes their data usage model from opt-in to opt-out.

CHANGE 2: New Financial Services Restriction

NEW ADDITION (September 2025): "To rely upon the Services, the Materials, or the Actions to buy or sell securities or to provide or receive advice about securities, commodities, derivatives, or other financial products or services, as Anthropic is not a broker-dealer or a registered investment adviser"

ASSESSMENT: New legal liability protection for Anthropic, restricting legitimate use cases for users.

CHANGE 3: Expanded Data Collection

OLD POLICY (May 2025): Listed device information, IP address, identifiers

NEW POLICY (September 2025): Added: "device location" and expanded "Technical Information" definitions

ASSESSMENT: More invasive data collection with location tracking now explicitly mentioned.

CHANGE 4: Enhanced Surveillance Language

NEW ADDITION (September 2025): Explicit statements that flagged content will be used for "AI safety research" and to "advance AI safety research"

ASSESSMENT: Broader justification for monitoring and analyzing user conversations.

CHANGE 5: Increased User Liability for Actions

The new version emphasizes more strongly that users are fully liable for all "Actions" Claude takes on their behalf, with more explicit warnings about responsibility.

ASSESSMENT: Anthropic is shifting more legal risk to users while reducing their own liability.

CHANGE 6: Weakened Transparency Obligations

NEW VERSION includes more exceptions for when Anthropic does NOT need to inform users about account suspensions or content removal.

ASSESSMENT: Less accountability and transparency in enforcement actions.

CHANGE 7: Expanded Research Definition

The definition of "research" has been broadened to include the "societal impact of AI models", which is open to very broad interpretation.

ASSESSMENT: Gives Anthropic wider latitude to use user data for various research purposes.

CHANGE 8: New Non-User Privacy Policy Reference

The new policy references a separate "Non-User Privacy Policy" for data obtained from third parties.

ASSESSMENT: Indicates expanded data collection from sources beyond direct users.

CHANGE 9: Modified Cancellation Rights

NEW RESTRICTION: "The 14-day cancellation period will not reset if you change subscription tiers or cancel and then resubscribe"

NEW RESTRICTION: "If you are not paying under a subscription billing model, you do not have a right to a refund of unused tokens"

ASSESSMENT: Reduced consumer protections and refund rights.

CHANGE 10: New Data Portability Rights (Positive)

NEW ADDITION: Complete new section about data switching and portability rights.

ASSESSMENT: This is actually a positive change, giving users more control over their data and ability to switch providers.

OVERALL ASSESSMENT:

The changes represent a significant shift away from privacy-protective practices toward a more typical Big Tech approach:

  • Default data harvesting for AI training

  • Expanded surveillance and monitoring

  • Reduced user protections

  • Increased corporate liability shields

  • More invasive data collection

This is particularly concerning from a company that positions itself as focused on "AI Safety" and responsible AI development.

RECOMMENDATIONS:

  1. DO NOT accept the new terms until September 28, 2025 (use the full grace period)

  2. IMMEDIATELY check your account settings for the new training opt-out option when it becomes available

  3. Review and adjust ALL privacy settings before accepting new terms

  4. Consider alternative AI services as backup options (OpenAI, Google, others)

  5. Be more cautious about sensitive information in conversations

  6. Document your current conversation history if you want to preserve it

  7. Consider the implications for any business or professional use cases

The direction is clearly toward more data collection and less user privacy protection, which represents a concerning departure from Anthropic's stated principles.

🌐
Claude
support.claude.com › en › collections › 4078534-privacy-and-legal
Privacy and Legal | Claude Help Center
Updates to our Acceptable Use Policy (now “Usage Policy”), Consumer Terms of Service, and Privacy Policy · Consumer Terms of Service Updates · Terms of Service Updates · Official Anthropic marketing email addresses · Reporting, Blocking, and Removing Content from Claude · Does Anthropic crawl data from the web, and how can site owners block the crawler? Online Safety Contacts · Claude 4 Invite Sweepstakes Official Rules · Designated Point of Contact for Users in the EU · Can I use my Outputs to train an AI model?
🌐
Data Studios
datastudios.org › post › claude-data-retention-policies-storage-rules-and-compliance-overview
Claude: data retention policies, storage rules, and compliance overview
September 3, 2025 - Default retention is 30 days across most Claude products, but API logs shrink to 7 days starting 15 September 2025. Anthropic does not use customer data for training unless there is explicit opt-in consent.
🌐
Skywork
skywork.ai › home › security & privacy in claude desktop: what you need to know
Security & Privacy in Claude Desktop: Essential FAQ Answers
October 27, 2025 - See Claude Help Center — Custom ... user data: Anthropic states personal data for Claude.ai users is encrypted in transit (TLS) and at rest, with restricted employee access under strict controls....
🌐
Claude
privacy.claude.com › en › articles › 10458704-how-does-anthropic-protect-the-personal-data-of-claude-users
How does Anthropic protect the personal data of Claude users? | Anthropic Privacy Center
At Anthropic, we're committed to protecting your privacy and securing your data. Here's how we keep your information safe: Encryption: Your data is automatically encrypted both while in transit, and stored (at rest). Limited Access: By default, Anthropic employees cannot access your conversations ...
🌐
Wesurance
wesurance.io › post › understanding-data-privacy-in-ai-does-gpt-3-5-or-claude-3-5-use-your-data-for-training
Data Privacy in AI: Do GPT-3.5 or Claude 3.5 Use Your Data for Training?
Explore how OpenAI's GPT-3.5 and Anthropic's Claude 3.5 manage user data privacy. Learn that interactions are not automatically used for training unless opted in. Discover best practices for businesses to ensure data security while leveraging AI capabilities.
🌐
Tactiq
tactiq.io › learn › is-claude-ai-safe
Is Claude AI Safe? Security Measures You Need to Know
June 12, 2025 - Short-term Data Retention: User data is retained for a maximum of 90 days for system functionality and user convenience, after which it is automatically deleted. No Unauthorized Data Use: Anthropic commits to not using data from user interactions to train the AI models unless explicit consent is provided. Privacy Policies: Claude AI adheres to rigorous privacy policies and complies with data protection regulations ensuring user information is handled responsibly...
🌐
AMST Legal
amstlegal.com › home › anthropic’s claude ai updates – impact on privacy & confidentiality
Anthropic's Claude AI Updates - Impact on Privacy & Confidentiality | AMST Legal
September 25, 2025 - Most critically, the Consumer Terms grant Anthropic permission to retain and use data. They establish the legal foundation for the 5-year retention period. However, they defer implementation details to the Privacy Policy. Claude AI will train on all data, except if you opt out or in case of business accounts
🌐
Privacy International
privacyinternational.org › guide-step › 5677 › claude-settings-and-good-practices
Claude: Settings and good practices | Privacy International
We have expressed concerns about the lawfulness of using data collected online for this purpose and there are demonstrable risks of this data being reproduced by AI chatbots without user consent or knowledge (called “regurgitation”). To limit the risk of personal data leaking out, we suggest preventing the content of your chats from being used for further training. On the mobile app: Go to your profile by clicking the menu in the top left corner and tapping on your profile in the bottom left corner. Navigate to Privacy and turn off the “You can help improve Claude” setting.
🌐
Claude
claude.ai › settings › data-privacy-controls
Claude
Talk with Claude, an AI assistant from Anthropic
🌐
Smithstephen
smithstephen.com › p › claude-flips-the-privacy-default
Claude flips the privacy default, opt out before September 28
September 3, 2025 - The new policy takes effect on September 28, and the default choice is set to “yes” through a large Accept button with a small toggle that is initially set to On. I don’t like it.
🌐
Bitdefender
bitdefender.com › en-au › blog › hotforsecurity › anthropic-shifts-privacy-stance-lets-users-share-data-for-ai-training
Anthropic Shifts Privacy Stance, Lets Users Share Data for AI Training
In an update to its consumer terms and privacy policy, Anthropic explained that the move aims to make models more effective and secure. Unlike industry peers that have historically collected user data by default, Anthropic is framing this as a voluntary contribution. The adjustment applies only to individual consumer plans, leaving commercial users untouched. Claude for Work, Claude Gov, Claude for Education, and API usage through Amazon Bedrock and Google’s Cloud Vertex AI will keep the existing privacy protections.
🌐
Reddit
reddit.com › r/claudeai › does claude api (direct) keep your data private?
r/ClaudeAI on Reddit: Does Claude API (direct) keep your data private?
May 3, 2024 - They may share personal information ... policies and technical measures, prioritizes user data privacy and takes steps to ensure that personal and sensitive information is protected....
🌐
Claude Ai
claudeai.wiki › privacy-policy
Privacy Policy – Claude Ai
At Claude AI, accessible from https://claudeai.uk, one of our main priorities is the privacy of our visitors. This Privacy Policy document contains types of information that is collected and recorded by Claude AI and how we use it.
🌐
Claude AI Hub
claudeaihub.com › privacy-policy
Privacy Policy | Claude AI Hub
January 10, 2025 - Our website address is: https://claudeaihub.com. When visitors leave comments on the site we collect the data shown in the comments form, and also the visitor’s IP address and browser user agent string to help spam detection. An anonymized string created from your email address (also called a hash) may be provided to the Gravatar service to see if you are using it. The Gravatar service privacy policy ...
🌐
Medium
medium.com › @michael_79773 › ai-assistant-privacy-what-claude-chatgpt-and-gemini-users-should-now-7d3f5cae9e5d
AI Assistant Privacy: What Claude, ChatGPT, and Gemini Users Should Know | by Michael Alexander Riegler | Medium
June 26, 2024 - It is a good practice to periodically review your settings and the latest privacy policy updates for the services you use. If you have concerns, do not hesitate to reach out to the service providers or use their feedback mechanisms. In addition you can also ask for all data stored about you to check from time to time. ... In my opinion in terms of policy clarity, Anthropic (Claude) offers the clearest and most concise policy, with specific information about data usage in model training.