My interpretation of that clause is they don’t want you using the Claude Code SDK as an API endpoint for customer use. For example: don’t host “translate4u.com” on a VPS and let your service use your Claude Code credentials to provide the service. However, using the API for this = totally fine. Answer from solaza on reddit.com
Claude
code.claude.com › docs › en › legal-and-compliance
Legal and compliance - Claude Code Docs
The BAA will be applicable to that customer’s API traffic flowing through Claude Code. You can find more information in the Anthropic Trust Center and Transparency Hub. Anthropic manages our security program through HackerOne. Use this form to report vulnerabilities. © Anthropic PBC. All rights reserved. Use is subject to applicable Anthropic Terms of Service.
Anthropic
anthropic.com › news › updates-to-our-consumer-terms
Updates to Consumer Terms and Privacy Policy
If you’re an existing user, you have until October 8, 2025 to accept the updated Consumer Terms and make your decision. If you choose to accept the new policies now, they will go into effect immediately. These updates will apply only to new or resumed chats and coding sessions. After October 8, you’ll need to make your selection on the model training setting in order to continue using Claude.
Discussions

no commercial use with claude code and pro/max plan?
My interpretation of that clause is they don’t want you using the Claude Code SDK as an API endpoint for customer use. For example: don’t host “translate4u.com” on a VPS and let your service use your Claude Code credentials to provide the service. However, using the API for this = totally fine. More on reddit.com
r/ClaudeAI · June 24, 2025
New privacy and TOS explained by Claude
Anthropic had an opportunity to plant a deep stake in the moral high ground of user privacy, and they caved. Let this be a cautionary tale about what really drives a business: your privacy is merely a resource to exploit, not a civil right to be protected at all costs. I’m reevaluating my choice of providers immediately. More on reddit.com
r/ClaudeAI · August 28, 2025
I have received a new email about Anthropic’s updated terms and conditions; below is a summarised assessment of the new terms and conditions (I used Claude AI for this summary)
Hi, we're the ethical AI company - hope you don't mind if we sell your personal info to whomever we want. Oh hi, us again - anything you upload here can be used how we want, and we'll also be training Claude with your work. Sorry about that. Sorry, us again - we're also going the Facebook/Cambridge Analytica route and building a deep profile on you. This is the end for me and Claude. I don't understand how they feel they need to do this. This is almost to the level of Microsoft Passport in 2001/2 - they said that any info that went through Windows was co-owned by MS, and they could sell it if they wanted. It lasted about 48h before the entire tech media raked them over the coals for it, and MS killed their terms. I'm not sticking around to find out if Anthropic will change - I'm out. More on reddit.com
r/ClaudeAI · November 17, 2023
[deleted by user]
Um, did you read what you linked? Even section 11 is just a no-guarantees, no-warranties clause. I don't see that sentence, or anything mentioning non-commercial use; perhaps don't use AI to read or summarize, or at the very least double-check. Now, what do the terms actually state about commercial usage? Don't use it to do illegal stuff or exploit people, and don't use it to build a competing product or reverse engineer their model. Also, Claude Code is explicitly under their Commercial TOS. More on reddit.com
r/ClaudeAI · March 21, 2025
Reddit
reddit.com › r/claudeai › new privacy and tos explained by claude
r/ClaudeAI on Reddit: New privacy and TOS explained by Claude
August 28, 2025

Hi there,

I had Claude check the changes that come into force on September 28th.

Please note: Claude can make mistakes. Check the changes yourself before accepting.

Here is Claude's analysis, evaluation and tips:

Analysis of Critical Changes in Anthropic's Terms of Service & Privacy Policy: May 2025 vs. September 2025 Versions

MOST CRITICAL CHANGE: Fundamental Shift in Model Training Policy

OLD POLICY (May 2025): "We will not train our models on any Materials that are not publicly available, except in two circumstances: (1) If you provide Feedback to us, or (2) If your Materials are flagged for trust and safety review"

NEW POLICY (September 2025): "We may use Materials to provide, maintain, and improve the Services and to develop other products and services, including training our models, unless you opt out of training through your account settings. Even if you opt out, we will use Materials for model training when: (1) you provide Feedback to us regarding any Materials, or (2) your Materials are flagged for safety review"

ASSESSMENT: This is a massive privacy regression. Anthropic now defaults to using ALL your conversations for training unless you explicitly opt out. This fundamentally changes their data usage model from opt-in to opt-out.

CHANGE 2: New Financial Services Restriction

NEW ADDITION (September 2025): "To rely upon the Services, the Materials, or the Actions to buy or sell securities or to provide or receive advice about securities, commodities, derivatives, or other financial products or services, as Anthropic is not a broker-dealer or a registered investment adviser"

ASSESSMENT: New legal liability protection for Anthropic, restricting legitimate use cases for users.

CHANGE 3: Expanded Data Collection

OLD POLICY (May 2025): Listed device information, IP address, identifiers

NEW POLICY (September 2025): Added: "device location" and expanded "Technical Information" definitions

ASSESSMENT: More invasive data collection with location tracking now explicitly mentioned.

CHANGE 4: Enhanced Surveillance Language

NEW ADDITION (September 2025): Explicit mention that flagged content will be used for "AI safety research" and "advance AI safety research"

ASSESSMENT: Broader justification for monitoring and analyzing user conversations.

CHANGE 5: Increased User Liability for Actions

The new version emphasizes more strongly that users are fully liable for all "Actions" Claude takes on their behalf, with more explicit warnings about responsibility.

ASSESSMENT: Anthropic is shifting more legal risk to users while reducing their own liability.

CHANGE 6: Weakened Transparency Obligations

NEW VERSION includes more exceptions for when Anthropic does NOT need to inform users about account suspensions or content removal.

ASSESSMENT: Less accountability and transparency in enforcement actions.

CHANGE 7: Expanded Research Definition

The definition of "research" has been broadened to include the "societal impact of AI models," which is open to very broad interpretation.

ASSESSMENT: Gives Anthropic wider latitude to use user data for various research purposes.

CHANGE 8: New Non-User Privacy Policy Reference

The new policy references a separate "Non-User Privacy Policy" for data obtained from third parties.

ASSESSMENT: Indicates expanded data collection from sources beyond direct users.

CHANGE 9: Modified Cancellation Rights

NEW RESTRICTION: "The 14-day cancellation period will not reset if you change subscription tiers or cancel and then resubscribe"

NEW RESTRICTION: "If you are not paying under a subscription billing model, you do not have a right to a refund of unused tokens"

ASSESSMENT: Reduced consumer protections and refund rights.

CHANGE 10: New Data Portability Rights (Positive)

NEW ADDITION: Complete new section about data switching and portability rights.

ASSESSMENT: This is actually a positive change, giving users more control over their data and ability to switch providers.

OVERALL ASSESSMENT:

The changes represent a significant shift away from privacy-protective practices toward a more typical Big Tech approach:

  • Default data harvesting for AI training

  • Expanded surveillance and monitoring

  • Reduced user protections

  • Increased corporate liability shields

  • More invasive data collection

This is particularly concerning from a company that positions itself as focused on "AI Safety" and responsible AI development.

RECOMMENDATIONS:

  1. DO NOT accept the new terms until September 28, 2025 (use the full grace period)

  2. IMMEDIATELY check your account settings for the new training opt-out option when it becomes available

  3. Review and adjust ALL privacy settings before accepting new terms

  4. Consider alternative AI services as backup options (OpenAI, Google, others)

  5. Be more cautious about sensitive information in conversations

  6. Document your current conversation history if you want to preserve it

  7. Consider the implications for any business or professional use cases

The direction is clearly toward more data collection and less user privacy protection, which represents a concerning departure from Anthropic's stated principles.

Claude
claude.ai
Claude
Talk with Claude, an AI assistant from Anthropic
Claudecode
claudecode.io › terms-of-service
Claude Code - Terms of Service | ClaudeCode.io
You agree to defend, indemnify, and hold harmless Claude Code, its affiliates, and their respective officers, directors, employees, and agents from and against any claims, liabilities, damages, losses, and expenses, including reasonable attorneys' fees, arising from or in any way connected with your use of our website or services or violation of these Terms.
npm
npmjs.com › package › @anthropic-ai › claude-code
@anthropic-ai/claude-code - npm
2 days ago - We have implemented several safeguards to protect your data, including limited retention periods for sensitive information and restricted access to user session data. For full details, please review our Commercial Terms of Service ...
npm install @anthropic-ai/claude-code
Published: Dec 20, 2025
Version: 2.0.75
Author: Anthropic
ClaudeLog
claudelog.com › home › legal › terms of service
Terms of Service | ClaudeLog
August 26, 2025 - ClaudeLog Terms of Service covering website usage, content rights, user responsibilities, and legal terms for using our Claude Code documentation.
Anthropic
anthropic.com › news › usage-policy-update
Usage Policy Update
August 15, 2025 - We’ve heard from users that this blanket approach also limited legitimate use of Claude for policy research, civic education, and political writing. We're now tailoring our restrictions to specifically prohibit use cases that are deceptive or disruptive to democratic processes, or involve voter and campaign targeting.
Reddit
reddit.com › r/claudeai › no commercial use with claude code and pro/max plan?
r/ClaudeAI on Reddit: no commercial use with claude code and pro/max plan?
June 24, 2025

Non-commercial use only. You agree that you will not use our Services for any commercial or business purposes and we and our Providers have no liability to you for any loss of profit, loss of business, business interruption, or loss of business opportunity.

seen here: https://www.anthropic.com/legal/consumer-terms

Can someone shed light on this? There are tons of people (here as well) who use the Max plan + Claude Code.

Claude
privacy.claude.com › en › articles › 9264813-consumer-terms-of-service-updates
Consumer Terms of Service Updates | Anthropic Privacy Center
This support article covers the changes to our Consumer Terms of Service that take effect May 1, 2024.
Claude Code
claudecode.cc › tos
Terms and Conditions | Claude Code - Agentic Coding Tool
July 10, 2025 - Termination. We may suspend service if you violate these terms, misuse the service, exceed rate limits, generate malicious code, share credentials, or attempt unauthorized access. 9. Updates and Changes. We may update the service, modify these terms, enhance features, add capabilities, improve security, optimize performance, or change APIs. 10. Support. We provide documentation, bug reporting via /bug, email support ([email protected]), response within 2 hours for critical issues, and regular updates and fixes. 11. Disclaimer. Claude Code is in research preview, subject to improvements, provided without warranty, not guaranteed 100% accurate, dependent on your input, and limited by current technology. By using Claude Code, you agree to these terms.
Anthropic
anthropic.com › news › expanded-legal-protections-api-improvements
Expanded legal protections and improvements to our API
December 19, 2023 - We are introducing new, simplified Commercial Terms of Service with an expanded copyright indemnity, as well as an improved developer experience with our beta Messages API. Customers will now enjoy increased protection and peace of mind as they build with Claude, as well as a more streamlined ...
Goldfarb
goldfarb.com › home page › news › updates to anthropic’s claude ai terms and privacy policy – what you need to know
Updates to Anthropic's Claude AI Terms and Privacy Policy - What You Need to Know - Goldfarb Gross Seligman
September 17, 2025 - These updates apply only to Claude Free, Pro, and Max plans (including Claude Code usage from these accounts). They do not affect services under Commercial Terms, including Claude for Work, which includes Anthropic’s Team and Enterprise plans, Anthropic API, Amazon Bedrock, or Google Cloud’s Vertex API, Claude Gov and Claude for Education.
Claude
privacy.claude.com › en › articles › 9190861-terms-of-service-updates
Terms of Service Updates | Anthropic Privacy Center
April 15, 2024 - We updated the definition of “Providers” to include our affiliates, licensors, distributors, and service providers. We also clarified that our Providers are intended third-party beneficiaries of specified disclaimers and limitations of liability. Software updates. We clarified our terms about software updates, including that we may offer automatic updates to our software to ensure our users have access to the latest version.
Anthropic
anthropic.com › news › claude-code-on-the-web
Claude Code on the web
November 12, 2025 - Git interactions are handled through a secure proxy service that ensures Claude can only access authorized repositories—helping keep your code and credentials protected throughout the entire workflow.
Anthropic
anthropic.com › news › claude-code-on-team-and-enterprise
Claude Code and new admin controls for business plans
Team and Enterprise plan admins can now upgrade to premium seats with Claude Code—and take advantage of flexible pricing with extra usage options.