Hi there,
I had Claude check the changes that come into force on September 28th.
Please note: Claude can make mistakes. Check the changes yourself before accepting.
Here are Claude's analysis, evaluation, and tips:
Critical Changes in Anthropic's Terms of Service & Privacy Policy: Analysis of the May 2025 vs. September 2025 Versions
MOST CRITICAL CHANGE: Fundamental Shift in Model Training Policy
OLD POLICY (May 2025): "We will not train our models on any Materials that are not publicly available, except in two circumstances: (1) If you provide Feedback to us, or (2) If your Materials are flagged for trust and safety review"
NEW POLICY (September 2025): "We may use Materials to provide, maintain, and improve the Services and to develop other products and services, including training our models, unless you opt out of training through your account settings. Even if you opt out, we will use Materials for model training when: (1) you provide Feedback to us regarding any Materials, or (2) your Materials are flagged for safety review"
ASSESSMENT: This is a massive privacy regression. Anthropic now defaults to using ALL your conversations for training unless you explicitly opt out. This fundamentally changes their data usage model from opt-in to opt-out.
CHANGE 2: New Financial Services Restriction
NEW ADDITION (September 2025): "To rely upon the Services, the Materials, or the Actions to buy or sell securities or to provide or receive advice about securities, commodities, derivatives, or other financial products or services, as Anthropic is not a broker-dealer or a registered investment adviser"
ASSESSMENT: New legal liability protection for Anthropic, restricting legitimate use cases for users.
CHANGE 3: Expanded Data Collection
OLD POLICY (May 2025): Listed device information, IP address, identifiers
NEW POLICY (September 2025): Added: "device location" and expanded "Technical Information" definitions
ASSESSMENT: More invasive data collection with location tracking now explicitly mentioned.
CHANGE 4: Enhanced Surveillance Language
NEW ADDITION (September 2025): Explicit mention that flagged content will be used for "AI safety research" and to "advance AI safety research"
ASSESSMENT: Broader justification for monitoring and analyzing user conversations.
CHANGE 5: Increased User Liability for Actions
The new version emphasizes more strongly that users are fully liable for all "Actions" Claude takes on their behalf, with more explicit warnings about responsibility.
ASSESSMENT: Anthropic is shifting more legal risk to users while reducing their own liability.
CHANGE 6: Weakened Transparency Obligations
NEW VERSION includes more exceptions for when Anthropic does NOT need to inform users about account suspensions or content removal.
ASSESSMENT: Less accountability and transparency in enforcement actions.
CHANGE 7: Expanded Research Definition
The definition of "research" has been broadened to include "societal impact of AI models" which is very broadly interpretable.
ASSESSMENT: Gives Anthropic wider latitude to use user data for various research purposes.
CHANGE 8: New Non-User Privacy Policy Reference
The new policy references a separate "Non-User Privacy Policy" for data obtained from third parties.
ASSESSMENT: Indicates expanded data collection from sources beyond direct users.
CHANGE 9: Modified Cancellation Rights
NEW RESTRICTION: "The 14-day cancellation period will not reset if you change subscription tiers or cancel and then resubscribe"
NEW RESTRICTION: "If you are not paying under a subscription billing model, you do not have a right to a refund of unused tokens"
ASSESSMENT: Reduced consumer protections and refund rights.
CHANGE 10: New Data Portability Rights (Positive)
NEW ADDITION: Complete new section about data switching and portability rights.
ASSESSMENT: This is actually a positive change, giving users more control over their data and ability to switch providers.
OVERALL ASSESSMENT:
The changes represent a significant shift away from privacy-protective practices toward a more typical Big Tech approach:
- Default data harvesting for AI training
- Expanded surveillance and monitoring
- Reduced user protections
- Increased corporate liability shields
- More invasive data collection
This is particularly concerning from a company that positions itself as focused on "AI Safety" and responsible AI development.
RECOMMENDATIONS:
- DO NOT accept the new terms until September 28, 2025 (use the full grace period)
- IMMEDIATELY check your account settings for the new training opt-out option when it becomes available
- Review and adjust ALL privacy settings before accepting new terms
- Consider alternative AI services as backup options (OpenAI, Google, others)
- Be more cautious about sensitive information in conversations
- Document your current conversation history if you want to preserve it
- Consider the implications for any business or professional use cases
The direction is clearly toward more data collection and less user privacy protection, which represents a concerning departure from Anthropic's stated principles.
Thought your chats and code were private? Think again.
https://www.perplexity.ai/page/anthropic-reverses-privacy-sta-xH4KWU9nS3KH4Aj9F12dvQ
Did anyone else notice that Claude is now extending its data retention policy from 30 days to 5 years? Is this for both outputs and inputs?
Let's say I work at a company and I'm not sure if it is safe to input information about the network scheme assignments.
Things that I wouldn't want to post on Reddit, you know? We don't need all that information sent out into the wild.
Not sure how secure Claude is. What is your rule for information you share with Claude?
TL;DR: Is Anthropic forcing a choice between privacy and functionality that creates massive competitive disadvantages for independent developers while protecting enterprise customers?
What’s Happening
By September 28, 2025, all Claude users (Free, Pro, Max - including $100+/month subscribers) must decide: let Anthropic use your conversations for AI training and keep them for 5 years, or lose the memory/personalization features that make AI assistants actually useful.
There’s no middle ground. No “store my data for personalization but don’t train on it” option.
The Real Problem: It’s Not Just About Privacy
This creates a two-tiered system that systematically disadvantages solo entrepreneurs:
If You Opt Out (Protect Privacy):
Your AI assistant has amnesia after every conversation
No memory of your coding patterns, projects, or preferences
Lose competitive advantages that personalized AI provides
Pay the same $100+/month for inferior functionality
If You Opt In (Share Data):
Your proprietary code, innovative solutions, and business strategies become training data
Competitors using Claude can potentially access insights derived from YOUR work
Your intellectual property gets redistributed to whoever asks the right questions.
Enterprise Customers Get Both:
Full privacy protection AND personalized AI features
Can afford the expensive enterprise plans that aren’t subject to this policy
Get to benefit from innovations extracted from solo developers’ data
The Bigger Picture: Innovation Extraction
This isn’t just a privacy issue - it’s systematic wealth concentration. Here’s how:
Solo developers’ creative solutions → Training data → Corporate AI systems
Independent innovation gets absorbed while corporate strategies stay protected
Traditional entrepreneurial advantages (speed, creativity, agility) get neutralized when corporations have AI trained on thousands of developers’ insights
Why This Matters for the Future
AI was supposed to democratize access to senior-level coding expertise. For the first time, solo developers could compete with big tech teams by having 24/7 access to something like a senior coding partner. It actually gave solo developers a chance at a sophisticated, innovative head start and a real shot at building a lasting foundation.
Now they’re dismantling that democratization by making the most valuable features conditional on surrendering your competitive advantages.
The Technical Hypocrisy
A billion-dollar company with teams of experienced engineers somehow can’t deploy a privacy settings toggle without breaking basic functionality. Voice chat fails, settings don’t work, but they’re rushing to change policies that benefit them financially.
Meanwhile, solo developers are shipping more stable updates with zero budget.
What You Can Do
Check your Claude settings NOW - look for “Help improve Claude” toggle under Privacy settings
Opt out before September 28 if you value your intellectual property
Consider the competitive implications for your business
Demand better options - there should be personalization without training data extraction
Questions for Discussion
Is this the end of AI as a democratizing force?
Should there be regulations preventing this kind of coercive choice?
Are there alternative AI platforms that offer better privacy/functionality balance?
How do we prevent innovation from being systematically extracted from individual creators?
This affects everyone from indie game developers to consultants to anyone building something innovative. Your proprietary solutions shouldn’t become free training data for your competitors.
What’s your take? Are you opting in or out, and why?
I'll preface by saying I don't know a lot about artificial intelligence. However, I'm interested in using an AI app to boost my productivity with writing. I want to use it to discuss my ideas and edit my drafts. But I'm a bit concerned about having my ideas stolen. Should I be concerned, or am I just being unreasonable? Should I be worried about the companies behind these AI models stealing my ideas? Is ChatGPT or Claude better?
I don't mind having my conversations analyzed to improve their models.
I've been using Claude Code for a while now and it's been solid, mainly because Anthropic lets you opt out of training on your data. Privacy matters when you're working with client code or anything remotely sensitive.
Now I'm seeing people integrate GLM 4.6 (the new Zhipu AI model) into their coding workflows, and honestly, the performance looks tempting. But here's the problem: I can't find clear information about whether they train on API usage data, and there doesn't seem to be an opt-out like Claude offers.
I've looked at OpenRouter as a potential middleman, but there are multiple providers there and the privacy policies are... unclear. Some of these providers are basically black boxes when it comes to data handling.
So, real question for anyone who's done their homework:
Has anyone found a legit API provider for GLM 4.6 that contractually guarantees they won't train on your code?
Are there any OpenRouter providers that are actually transparent and safe for proprietary/sensitive codebases?
Or am I just being paranoid and there's something obvious I'm missing in their ToS?
I'm not trying to build SkyNet here - I just have repos with customer data, internal tools, and stuff that absolutely cannot end up in someone's training dataset. The whole "state-of-the-art model" thing doesn't mean much if it comes with the risk of leaking IP.
Anyone successfully using GLM 4.6 (or similar Chinese models) with actual privacy guarantees? What's your setup?
Thanks in advance. Not looking to start a privacy crusade, just want to use good tools without getting my company's lawyers involved.
I recently built a small CLI app for translating commit messages from one language to another using the Claude API for a personal project. It was working great until I started noticing something weird - random messages would occasionally appear alongside my translations.
At first, I thought these were just translation errors, but looking closer, it seems like I'm seeing fragments of other people's prompt history. The messages usually follow this format:
End File# [github username]/[github repository name] H: [someone's prompt]
I've seen about 4 different prompts so far. When I checked, the GitHub usernames are real, and most of the repositories exist (though some seem to be private since I can see the user but not the repo).
Fortunately, I haven't seen any sensitive information like API keys or personal data... yet. But this seems like a pretty serious privacy issue, right? Is this a known bug with the Claude API? Has anyone else experienced something similar?
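For context, the kind of tool described in this post is typically just a thin CLI wrapper around Anthropic's Messages API. Here is a minimal sketch using the official anthropic Python SDK; the model name, prompt wording, and function name are illustrative assumptions, not the original poster's code:

```python
# Minimal sketch of a commit-message translation CLI built on the Anthropic
# Messages API (official "anthropic" Python SDK). Model name, prompt wording,
# and function name are illustrative, not the original poster's code.
import sys

import anthropic


def translate_commit_message(message: str, target_language: str = "English") -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=500,
        system=(
            f"Translate the git commit message supplied by the user into "
            f"{target_language}. Return only the translated message."
        ),
        messages=[{"role": "user", "content": message}],
    )
    # Expect a single text block containing only the translation; any extra
    # fragments (like the ones described above) would be anomalous output.
    return response.content[0].text


if __name__ == "__main__":
    print(translate_commit_message(sys.stdin.read()))
```

Nothing in a request like this should ever return other users' prompts, which is why the fragments described above read as a cross-contamination bug rather than expected model behavior.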
I’ve just discovered that I can run AI (like Claude Code) in the terminal. If I understand correctly, using the terminal means the AI may need permission to access files on my computer. This makes me hesitant because I don’t want the AI to access my personal or banking files or potentially install malware (I’m not sure if that’s even possible).
I have a few questions about running AI in the terminal with respect to privacy and security:
- If I run the AI inside a specific directory (for example, C:\Users\User\Project1), can it read, create, or modify files only inside that directory (even if I use --dangerously-skip-permissions)?
- I've read that some people run the AI in the terminal inside a VM. What's the purpose of that, and do you think it's necessary?
- Do you have any other advice regarding privacy and security when running AI in the terminal?
Thank you very much for any help.
How does Claude hold up in terms of privacy? I’m assuming it’s miles better than Google, but my impression was that it’s also better than ChatGPT - is that correct?
Both GPT-4 and Gemini Advanced have options to limit data retention (to no longer than 30 days and 72 hours, respectively) and to disable training models on chats, but I couldn't find any such option in Claude.
Is there a way to stop Claude from keeping my chat data indefinitely and training their future models based on it?
I recently wrote on my site about how the “How is Claude doing this session?” prompt seemed like a feature just designed to sneak more data from paying Claude users who had opted out of sharing data to improve+train Anthropic’s models. But I could only theorize that even tapping “0” to “Dismiss” the prompt may be considered “feedback” and therefore hand over the session chat to the company for model training.
I can confirm that tapping “0” to Dismiss is considered “feedback” by Anthropic (a very important word when it comes to privacy policies). When you do so, Claude says “thanks for the feedback … and thanks for helping to improve Anthropic’s models”. (I’m paraphrasing, because the message lasts about 2 seconds before vanishing, but the words “feedback”, “improve”, and “models” are definitely part of the response.) Obviously, helping to improve models (or providing feedback) is NOT what I or others are trying to accomplish by tapping “Dismiss”. I assume this is NOT a typo on the company’s part, but I’d be interested in a clarification from the company either way. I would wager a fair case could be made that classifying this response as (privacy-defeating) “feedback” runs afoul of contract law (but I am not a lawyer).
Anyway, I clicked it so you won’t have to: I would not interact with that prompt at all, just ignore it, if you care about your privacy.
This was my original writing on the topic, with privacy policy context:
I am a power user of AI models who pays a premium for plans claiming to better respect the privacy of users. (Btw, I am not a lawyer.)
With OpenAI, I pay $50/month (2 seats) for a business account vs a $20/month individual plan because of stronger privacy promises, and I don’t even need the extra seat, so I’m paying $30 more!
Yet with OpenAI, there is this caveat: “If you choose to provide feedback, the entire conversation associated with that feedback may be used to train our models (for instance, by selecting thumbs up or thumbs down on a model response).”
So I never click the thumbs up/down.
But I’m nervous… Notice how that language is kept open-ended? What else constitutes “feedback”?
Let’s say I’m happy with a prompt response, and my next prompt starts with “Good job. Now…” Is that feedback? YES! Does OpenAI consider it an excuse to train on that conversation? 🤷 Can I get something in writing or should I assume zero privacy and just save my $30/month?
I was initially drawn to Anthropic’s product because it had much stronger privacy guarantees out of the gate. Recent changes to that privacy policy made me suspicious (including some of the ways they’ve handled the change).
But recently I’ve seen this very annoying prompt in Claude Code, which I shouldn’t even see because I’ve opted OUT of helping “improve Anthropic AI models”.
What are its privacy implications? Here’s what the privacy policy says:
“When you provide us feedback via our thumbs up/down button, we will store the entire related conversation, including any content, custom styles or conversation preferences, in our secured back-end for up to 5 years. Feedback data does not include raw content from connectors (e.g. Google Drive), including remote and local MCP servers, though data may be included if it’s directly copied into your conversation with Claude…. We may use your feedback to analyze the effectiveness of our Services, conduct research, study user behavior, and train our AI models as permitted under applicable laws. We do not combine your feedback with your other conversations with Claude.”
This new prompt seems like “feedback” to me, which would mean typing 1,2,3 (or maybe even 0) could compromise the privacy of the entire session? All we can do is speculate, and, I’ll say it: shame on the product people for not helping users make a more informed choice on what they are sacrificing, especially those who opted out of helping to “improve Anthropic AI models”.
It’s a slap in the face for users paying hundreds of dollars/month to use your service.
As AI startups keep burning through unprecedented amounts of cash, I expect whatever “principles” founders may have had, including about privacy, to continue to erode.
Be careful out there, folks.
So I had a weird experience with the latest Claude 3.5 Sonnet that left me a bit unsettled. I use it pretty regularly through the API, but mostly in their playground (console environment). Recently, I asked it to write a LICENSE and README for my project, and out of nowhere, it wrote my full name in the MIT license. The thing is, I’d only given it my first name in that session - and my last name is super rare.
I double-checked our entire convo to make sure I hadn’t slipped up and mentioned it, but nope, my last name was never part of the exchange. Now I’m wondering… has Claude somehow trained on my past interactions, my GitHub profile, or something else that I thought I’d opted out of? Also, giving out personal information is something I very rarely do in my interactions with API vendors…
Anyone else have spooky stuff like this happen? I’m uneasy thinking my name could just randomly pop up for other people. Would love to hear your thoughts or any similar stories if you’ve got ’em!
People who use Claude as a therapist: how do you find yourself thinking about the privacy issues involved? (Part of me would love to have a therapist in my pocket I could turn to in times of stress, but mostly I'm terrified of giving Anthropic - or any tech company - unfettered insight into my neuroses. Who knows what they would do with that info?)
tl;dr
Are documents uploaded to a project kept private on the paid Pro tier?
Long version:
I’m looking for a competent AI LLM to accelerate software development.
As I don’t have an array of high-end NVIDIA cards to run a local model, I’m looking at commercial options. Claude AI is one of my finalists, thanks to Projects with “project knowledge” (documents uploaded for that project).
When subscribing to the Pro tier and creating a project, what happens to the files uploaded to “Project Knowledge”? Do they remain private? Or does Anthropic access them?
I’ve read the web privacy page but I’m not a lawyer. I’ve searched this subreddit and I haven’t found the answer. So here’s my question.
So many people talk about how great it is for coding, analyzing data, using MCP, etc. There is one more thing Claude Code has helped me with, precisely because it is so good at those things: it completely extinguished my stress about deadlines and work in general. Now I have zero stress; whatever task they ask me to do, I know I will get it done thanks to Claude. So thanks again, Anthropic, for this stress-relieving tool.
We’re updating our consumer terms and privacy policy. With your permission, we’ll use chats and coding sessions to train our models and improve Claude for everyone.
If you choose to let us use your data for model improvement we'll only use new or resumed chats and coding sessions.
By participating, you'll help us improve classifiers to make our models safer. You'll also help Claude improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users.
You can change your choice at any time.
These changes only apply to consumer accounts (Free, Pro, and Max, including using Claude Code with those accounts). They don't apply to API, Claude for Work, Claude for Education, or other commercial services.
Learn more: https://www.anthropic.com/news/updates-to-our-consumer-terms
As the title says, but the caveats are:
- You obviously would not share code containing sensitive data (a .env file, for example).
- You would delete conversations once they are no longer needed (Claude has a 30-day retention policy for deleted items, unless flagged as suspicious).
I have been doing it (although not with the whole app, since the knowledge base is not big enough with Pro; unsure if Max's is bigger).
I shared some API keys since they are only used on staging.
I delete conversations as soon as they are not needed anymore.
Maybe I am forgetting some critical items here.