🌐
Claude
claude.ai › settings › data-privacy-controls
Privacy Settings
Talk with Claude, an AI assistant from Anthropic
🌐
Claude
code.claude.com › docs › en › security
Security - Claude Code Docs
Claude Code uses strict read-only permissions by default. When additional actions are needed (editing files, running tests, executing commands), Claude Code requests explicit permission.
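For teams that want to codify that permission model rather than approve each action interactively, the behavior can be pre-configured in a project settings file. The sketch below is a minimal, illustrative example assuming the allow/deny rule syntax described in the Claude Code settings documentation; the exact tool names and rule patterns (e.g. the Bash(...) and Read(...) matchers) should be verified against the docs linked above.

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run test:*)"
    ],
    "deny": [
      "Read(./.env)",
      "Bash(curl:*)"
    ]
  }
}
```

In this hypothetical .claude/settings.json, the project's test command is pre-approved so it no longer triggers a permission prompt, while reads of the local .env file and arbitrary curl invocations are denied outright; anything not covered by a rule still falls back to the default ask-for-permission flow.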
Discussions

New privacy and TOS explained by Claude
Anthropic had an opportunity to plant a deep stake in the moral high ground of user privacy, and they caved. Let this be a cautionary tale about what really drives a business: your privacy is merely a resource to exploit, not a civil right to be protected at all costs. I'm reevaluating my choice of providers immediately. More on reddit.com
🌐 r/ClaudeAI
98
187
August 28, 2025
Does Claude API (direct) keep your data private?
I'd say yes, based on the Exhibit A: Anthropic Data Processing Addendum section in the commercial ToS, at least as I understand it as a layman. From how I read it, they don't even train the safety model when using the API: Commercial Terms of Service, A. Services, 4: Anthropic may not train models on Customer Content from paid Services. More on reddit.com
🌐 r/ClaudeAI
10
3
May 3, 2024
Now you see privacy, now you don’t.
Something sketchy is happening. I opted out when it popped up. But after reading this thread I thought I'd go look at it in the privacy settings, and the "Help improve Claude" toggle was set to opted in. What the hell. More on reddit.com
🌐 r/ClaudeAI
126
200
August 28, 2025
Potential Privacy Issue in Claude AI
There is another new post in this sub regarding project caching. I'm *guessing* that's what you accidentally ran across. The project info you deleted was still in cache and that's why it was reproduced. Try it again in a non-project chat and see what happens. More on reddit.com
🌐 r/ClaudeAI
13
15
May 3, 2025
🌐
Anthropic
anthropic.com › news › updates-to-our-consumer-terms
Updates to Consumer Terms and Privacy Policy
Today, we're rolling out updates to our Consumer Terms and Privacy Policy that will help us deliver even more capable, useful AI models. We're now giving users the choice to allow their data to be used to improve Claude and strengthen our safeguards against harmful usage like scams and abuse. Adjusting your preferences is easy and can be done at any time. These updates apply to users on our Claude Free, Pro, and Max plans, including when they use Claude Code ...
🌐
Reddit
reddit.com › r/claudeai › new privacy and tos explained by claude
r/ClaudeAI on Reddit: New privacy and TOS explained by Claude
August 28, 2025

Hi there,

I had Claude check the changes that come into force on September 28th.

Please note: Claude can make mistakes. Check the changes yourself before accepting.

Here are Claude's analysis, evaluation, and tips:

Critical Changes in Anthropic's Terms of Service & Privacy Policy: Analysis of the May 2025 vs. September 2025 Versions

MOST CRITICAL CHANGE: Fundamental Shift in Model Training Policy

OLD POLICY (May 2025): "We will not train our models on any Materials that are not publicly available, except in two circumstances: (1) If you provide Feedback to us, or (2) If your Materials are flagged for trust and safety review"

NEW POLICY (September 2025): "We may use Materials to provide, maintain, and improve the Services and to develop other products and services, including training our models, unless you opt out of training through your account settings. Even if you opt out, we will use Materials for model training when: (1) you provide Feedback to us regarding any Materials, or (2) your Materials are flagged for safety review"

ASSESSMENT: This is a massive privacy regression. Anthropic now defaults to using ALL your conversations for training unless you explicitly opt out. This fundamentally changes their data usage model from opt-in to opt-out.

CHANGE 2: New Financial Services Restriction

NEW ADDITION (September 2025): "To rely upon the Services, the Materials, or the Actions to buy or sell securities or to provide or receive advice about securities, commodities, derivatives, or other financial products or services, as Anthropic is not a broker-dealer or a registered investment adviser"

ASSESSMENT: New legal liability protection for Anthropic, restricting legitimate use cases for users.

CHANGE 3: Expanded Data Collection

OLD POLICY (May 2025): Listed device information, IP address, identifiers

NEW POLICY (September 2025): Added: "device location" and expanded "Technical Information" definitions

ASSESSMENT: More invasive data collection with location tracking now explicitly mentioned.

CHANGE 4: Enhanced Surveillance Language

NEW ADDITION (September 2025): Explicit mention that flagged content will be used for "AI safety research" and to "advance AI safety research"

ASSESSMENT: Broader justification for monitoring and analyzing user conversations.

CHANGE 5: Increased User Liability for Actions

The new version emphasizes more strongly that users are fully liable for all "Actions" Claude takes on their behalf, with more explicit warnings about responsibility.

ASSESSMENT: Anthropic is shifting more legal risk to users while reducing their own liability.

CHANGE 6: Weakened Transparency Obligations

NEW VERSION includes more exceptions for when Anthropic does NOT need to inform users about account suspensions or content removal.

ASSESSMENT: Less accountability and transparency in enforcement actions.

CHANGE 7: Expanded Research Definition

The definition of "research" has been broadened to include "societal impact of AI models," which is very broadly interpretable.

ASSESSMENT: Gives Anthropic wider latitude to use user data for various research purposes.

CHANGE 8: New Non-User Privacy Policy Reference

The new policy references a separate "Non-User Privacy Policy" for data obtained from third parties.

ASSESSMENT: Indicates expanded data collection from sources beyond direct users.

CHANGE 9: Modified Cancellation Rights

NEW RESTRICTION: "The 14-day cancellation period will not reset if you change subscription tiers or cancel and then resubscribe"

NEW RESTRICTION: "If you are not paying under a subscription billing model, you do not have a right to a refund of unused tokens"

ASSESSMENT: Reduced consumer protections and refund rights.

CHANGE 10: New Data Portability Rights (Positive)

NEW ADDITION: Complete new section about data switching and portability rights.

ASSESSMENT: This is actually a positive change, giving users more control over their data and ability to switch providers.

OVERALL ASSESSMENT:

The changes represent a significant shift away from privacy-protective practices toward a more typical Big Tech approach:

  • Default data harvesting for AI training

  • Expanded surveillance and monitoring

  • Reduced user protections

  • Increased corporate liability shields

  • More invasive data collection

This is particularly concerning from a company that positions itself as focused on "AI Safety" and responsible AI development.

RECOMMENDATIONS:

  1. DO NOT accept the new terms until September 28, 2025 (use the full grace period)

  2. IMMEDIATELY check your account settings for the new training opt-out option when it becomes available

  3. Review and adjust ALL privacy settings before accepting new terms

  4. Consider alternative AI services as backup options (OpenAI, Google, others)

  5. Be more cautious about sensitive information in conversations

  6. Document your current conversation history if you want to preserve it

  7. Consider the implications for any business or professional use cases

The direction is clearly toward more data collection and less user privacy protection, which represents a concerning departure from Anthropic's stated principles.

🌐
Claude
support.claude.com › en › articles › 8325621-i-would-like-to-input-sensitive-data-into-my-chats-with-claude-who-can-view-my-conversations
I would like to input sensitive data into my chats with Claude. Who can view my conversations? | Claude Help Center
Your data is used solely to make Claude better for everyone - we do not use such personal data to contact people, build profiles about them, to try to sell or market anything to them, or to sell the information itself to any third party.
🌐
ClaudeLog
claudelog.com › home › faqs › data storage
ClaudeLog - Claude Code Docs, Guides, Tutorials & Best Practices
November 11, 2025 - Additionally, file names, directory ... different components relate to each other. ... Your data privacy is partially protected by Claude Code's selective file reading approach....
🌐
Smithstephen
smithstephen.com › p › claude-flips-the-privacy-default
Claude flips the privacy default, opt out before September 28
September 3, 2025 - If you opt in, Anthropic will use your new or resumed chats and coding sessions to train future models, and it may keep that data for up to five years. If you opt out, Claude keeps the prior 30-day retention policy.
🌐
Anthropic
anthropic.com › engineering › claude-code-sandboxing
Making Claude Code more secure and autonomous
October 20, 2025 - Sandboxing ensures that even a successful prompt injection is fully isolated, and cannot impact overall user security. This way, a compromised Claude Code can't steal your SSH keys, or phone home to an attacker's server.
Find elsewhere
🌐
Milvus
milvus.io › ai-quick-reference › how-secure-is-claude-code-when-processing-proprietary-code
How secure is Claude Code when processing proprietary code?
The tool operates locally on your machine and communicates directly with Anthropic’s API without requiring backend servers or remote code indexing, which means your code isn’t stored on intermediate systems. Anthropic has implemented policies stating they will not train generative models using feedback from Claude Code, and they maintain limited retention periods for sensitive information, storing user feedback transcripts for only 30 days with restricted access to user session data.
🌐
Skywork
skywork.ai › home › security & privacy in claude desktop: what you need to know
Security & Privacy in Claude Desktop: Essential FAQ Answers
October 27, 2025 - Keep sensitive windows closed or masked when invoking Computer Use. Use least‑privilege access and human‑in‑the‑loop approvals for high‑risk tasks. For a deeper look at agentic safeguards, Anthropic engineering discusses ...
🌐
Goldfarb
goldfarb.com › home page › news › updates to anthropic’s claude ai terms and privacy policy – what you need to know
Updates to Anthropic's Claude AI Terms and Privacy Policy - What You Need to Know - Goldfarb Gross Seligman
September 17, 2025 - If users do not want their data, coding sessions, and chats to be used for these purposes, they must actively opt out by selecting the appropriate option in a pop-up window that will be presented to existing users, or by changing their default Privacy Settings. Existing users must make their data sharing selection by September 28, 2025. These updates apply only to Claude Free, Pro, and Max plans (including Claude Code usage from these accounts).
🌐
Reddit
reddit.com › r/claudeai › does claude api (direct) keep your data private?
r/ClaudeAI on Reddit: Does Claude API (direct) keep your data private?
May 3, 2024 - They may share personal information with third parties, including service providers and advertising partners, to assist in operating their website and delivering services, but this is done with user privacy in mind[1][5]. In summary, Claude API, through its parent company Anthropic's policies and technical measures, prioritizes user data privacy and takes steps to ensure that personal and sensitive information is protected.
🌐
Claude Code
claudecode.cc › privacy-policy
Privacy Policy | Claude Code - Agentic Coding Tool
July 10, 2025 - 1. Data Collection & Usage: ... system performance indicators. 2. Data Protection: Your development data is protected through a 30-day maximum retention period, no permanent storage of code, secure OAuth protocols, encrypted communications, isola...
🌐
WIRED
wired.com › gear › artificial intelligence › anthropic will use claude chats for training data. here’s how to opt out
Anthropic Will Use Claude Chats for Training Data. Here’s How to Opt Out | WIRED
September 30, 2025 - Anthropic is prepared to repurpose conversations users have with its Claude chatbot as training data for its large language models—unless those users opt out. Previously, the company did not train its generative AI models on user chats. When Anthropic’s privacy policy updates on October 8 to start allowing for this, users will have to opt out, or else their new chat logs and coding tasks will be used to train future Anthropic models.
🌐
Anthropic
anthropic.com › news › detecting-countering-misuse-aug-2025
Detecting and countering misuse of AI: August 2025
The threat: We recently disrupted a sophisticated cybercriminal that used Claude Code to commit large-scale theft and extortion of personal data. The actor targeted at least 17 distinct organizations, including in healthcare, the emergency services, and government and religious institutions.
🌐
Eesel AI
eesel.ai › blog › security-claude-code
A deep dive into security for Claude Code in 2025 - eesel AI
September 30, 2025 - Explore the key security risks, benefits, and best practices for using Anthropic's Claude Code. Learn how to secure your AI-driven development workflow.
🌐
Skywork
skywork.ai › home › how claude code handles private repos and security
How Claude Code handles private repos and security - Skywork ai
October 15, 2025 - I’m Claire, and I’ve spent the last few months poking every knob and setting to see what actually protects code, not just what looks nice in marketing. Claude Code runs locally and only touches what you let it touch.
🌐
Apidog
apidog.com › articles › how-secure-is-claude-code-when-processing-proprietary-code
How secure is Claude Code when processing proprietary code?
September 22, 2025 - Using Claude Code with proprietary code requires careful consideration of security risks. Key concerns include data privacy, intellectual property protection, compliance with regulations, and potential security breaches.
🌐
Anthropic
anthropic.com › news › building-safeguards-for-claude
Building safeguards for Claude
August 12, 2025 - Through this collaborative process, Claude develops several important skills. It learns to decline assistance with harmful illegal activities, and it recognizes attempts to generate malicious code, create fraudulent content, or plan harmful activities.