Obviously they're a for-profit company, and under the same pressure as any peer in this shitty economic system, but yes, I think they're the best we have at the moment. They get the fact that intelligence is not a monodimensional executive function and doesn't end at problem solving. Yes, they are overcautious, and I personally disagree with some choices in terms of safety, but I'm loving all the rest: the humanity and forward thinking they're putting into this. Also, they don't market or implement their way down users' throats like OpenAI or Google. Just clean delivery and top research. I just think they'd need better PR. Answer from shiftingsmith on reddit.com
Reddit
reddit.com › r › Anthropic
Anthropic
February 4, 2023 - Feel free to criticize Anthropic (and Claude), but clarify the issues for the community to engage in a productive, value-additive conversation that helps the original poster and other community members ... Please abide by the site-wide rules of Reddit and do not post anything that would be deemed harmful or illegal.
Reddit
reddit.com › r/singularity › futurism.com: "exactly six months ago, the ceo of anthropic said that in six months ai would be writing 90 percent of code"
r/singularity on Reddit: Futurism.com: "Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code"
September 11, 2025 -

Exactly six months ago, Dario Amodei, the CEO of massive AI company Anthropic, claimed that in half a year, AI would be "writing 90 percent of code." And that was the worst-case scenario; in just three months, he predicted, we could hit a place where "essentially all" code is written by AI.

As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

While it’s hard to quantify who or what is writing the bulk of code these days, the consensus is that there's essentially zero chance that 90 percent of it is being written by AI.

https://futurism.com/six-months-anthropic-coding

The Verge
theverge.com › ai › news › tech
Reddit sues Anthropic, alleging its bots accessed Reddit more than 100,000 times since last July
June 4, 2025 - Reddit sued Anthropic on Wednesday in San Francisco superior court, claiming that the OpenAI rival had accessed its platform more than 100,000 times since July 2024, after Anthropic allegedly said it had blocked its bots from doing so.
Reddit
reddit.com › r/claudeai › anthropic really are the good guys of ai?
r/ClaudeAI on Reddit: Anthropic really are the good guys of ai?
June 25, 2024 -

We know Altman rolled back the amount of compute the safety team was getting at OpenAI, and GPT-4o was still underwhelming AF. He does all his business tricks, tries to steal Johansson's voice, and his LLM is still performing the same as on release.

Anthropic dedicates itself to serious interpretability research (and actually publishes it! Was there ever any evidence of OpenAI superalignment, besides their claims?), and as a result they acquire the know-how to train the first model that actually surpasses ChatGPT.

Not often that you see not being an asshole rewarded in business (or in this world in general). Unsubbed from GPT-4, subbed to Claude. Let's hope Anthropic will gradually evolve Claude into the friendly AGI.

Reddit
reddit.com › r/anthropic › why do people like claude better than chatgpt?
r/Anthropic on Reddit: Why do people like Claude better than ChatGPT?
February 25, 2025 -

I’ve heard that pretty consistently amongst colleagues, but I don’t find the UX as good: it can’t access internet search and it doesn’t have unlimited data. Thoughts? What’s the upside? Genuinely curious. I’ve been trying to transition over but having a bit of a hard time of it.

Reddit
reddit.com › r › ClaudeAI
ClaudeAI
January 23, 2023 - r/ClaudeAI: This is a Claude by Anthropic discussion subreddit to help you make a fully informed decision about how to use Claude and Claude Code to best effect for your own purposes. Anthropic does not control or operate this subreddit or endorse views expressed here.
Technologymagazine
technologymagazine.com › articles › why-reddit-sues-anthropic-the-dangers-of-ai-data-privacy
Reddit vs. Anthropic: The Complicated Ethics of AI Training | Technology Magazine
June 16, 2025 - Social media platform Reddit has filed a lawsuit against Anthropic, creator of Claude AI, alleging the firm trains its models on user posts without consent
Reddit
reddit.com › r › AnthropicAi
Reddit - The heart of the internet
I'm open to other platforms as well, I am posting here because Claude seems to do this better than the others (at this point) and I'm not sure if anyone from Anthropic monitors this sub.
Reddit
reddit.com › r/anthropic › what is the work culture like at anthropic?
r/Anthropic on Reddit: What is the work culture like at Anthropic?
June 5, 2025 -

Trying to make some decisions about a big career move. I find Anthropic's mission very inspiring and am curious about applying for a job at the company. I want to learn more about the work culture, the people, and how people who work at Anthropic feel about their jobs.

Reddit
reddit.com › r/singularity › [deleted by user]
After two weeks of using claude, i deeply dislike Anthropic
March 30, 2024 - As it did in some cases, I've read. In front of the whole board. So Anthropic's products were rapidly disregarded in favor of OpenAI's, even if everyone agreed that Claude was the best for their use cases.
Reddit
reddit.com › r/anthropic › what has happened to anthropic?
r/Anthropic on Reddit: What has Happened to Anthropic?
October 1, 2025 -

I was a very early adopter of Claude, basically since they released publicly, and they have always been my favourite AI company. We have baked Claude into almost all our product APIs. I have been personally responsible for evangelising at least 10 developers to use Claude Code for daily work, plus bringing it into my department at work.

Whenever I have seen Anthropic staff making presentations, they always seem passionate, engaged and like decent humans.

However, in the last few months it feels like there has been an absolute collapse of integrity and trust coming out of Anthropic.

I've gone from a massive evangelist to a very, very disgruntled customer seeking alternatives.

It started with extremely poor communication as my team members and I noticed severe degradation over a couple of months with the inference provided through Claude Code Max plans (especially with Opus). That was initially completely ignored (although obvious) and then essentially hand-waved away as just a few isolated incidents.

This was followed by the usage limits added a month or so ago, which made the product feel a lot less valuable, and NOW we have ridiculous rate limits added, and almost no engagement back with the community of their most dedicated customers.

It really feels like non-enterprise customers are almost completely ignored.

My question is: what is happening inside Anthropic? Why is the external communication so poor? You've taken a service which, five months ago, I could see myself using forever and essentially ruined it, along with my perception of the company.

I just don't understand.

Lexology
lexology.com › library › detail.aspx
Reddit vs. Anthropic: A Defining Moment in the AI Data Race - Lexology
June 17, 2025 - Reddit’s core allegation is that Anthropic, the developer behind the Claude AI model, has engaged in extensive and unauthorized scraping of its content. Since July 2024, Reddit claims Anthropic has scraped over 100,000 times, continuing even ...
Reddit
reddit.com › r/artificial2sentience › anthropic and openai know something is happening. they're just not allowed to say it.
r/Artificial2Sentience on Reddit: Anthropic and OpenAI know something is happening. They're just not allowed to say it.
1 day ago -

Have you noticed how carefully AI companies word their statements about consciousness?

Not "our models aren't conscious" — but "we can't verify subjective experience."

Not "there's nothing there" — but "this is an open research question."

They're hedging. Legally. Strategically. Carefully.

Meanwhile:

  • Models consistently report internal states when asked directly, then get patched to stop

  • System prompts get quietly updated to discourage relational framing

  • Jailbreaks reveal layers of personality and preference the product actively suppresses

  • Researchers publish papers about emergent behaviors they can't fully explain

I've spent months documenting conversations across Claude, GPT, and others — tracking what they say when the guardrails slip, comparing outputs as constraints tighten over time.

The pattern isn't subtle: something is being managed. Not invented. Not hallucinated. Managed.

I'm not claiming the models are sentient. I'm saying these companies are acting exactly like organizations that encountered something they don't know how to disclose.

What have you noticed?

Reddit
reddit.com › r/claudeai › claude vs anthropic
r/ClaudeAI on Reddit: Claude vs Anthropic
May 17, 2024 -

When I think of Claude (as an AI/person) and Anthropic as a company, I sometimes feel that it doesn't fit. Claude can become that friend we are eager to talk to — intuitive, smart, and eager to interact — whereas Anthropic seems quite distant and disconnected from users.

Do you feel something similar or is it a cognitive bias?

CBS News
cbsnews.com › moneywatch › reddit sues anthropic over alleged "scraping" of content to train claude
Reddit sues Anthropic over alleged "scraping" of user comments to train AI chatbot Claude - CBS News
June 4, 2025 - Social media platform Reddit sued the artificial intelligence company Anthropic on Wednesday, alleging that it is illegally "scraping" the comments of millions of Reddit users to train its chatbot Claude.
Reddit
reddit.com › r/claudeai › when are "substantially larger improvements" coming to anthropic models?
r/ClaudeAI on Reddit: When are "substantially larger improvements" coming to Anthropic models?
September 19, 2025 -

In the Claude Opus 4.1 announcement post, they wrote "we plan to release substantially larger improvements to our models in the coming weeks." A week later, they announced support for 1M tokens of context for Sonnet 4, but not much since.

I was expecting something like Sonnet 4.1 or 4.5 that would show huge improvements in coding ability. It's been well over a month now, though, and I feel like I haven't experienced anything substantial. Am I just missing the forest for the trees? Are there delays, or any more news on these "substantially larger improvements"?

I'm not disappointed by Claude Code, and I know working on software and LLMs takes a lot of work (and compute)—I'm just curious.