I do think OpenAI is ahead, but honestly the drama and the failure to actually release things make me hope that Anthropic, who IS releasing things, passes them. Hard to trust a company that acts the way OpenAI has been. Answer from Chaos_Scribe on reddit.com
r/singularity on Reddit: Do you think Anthropic is ahead of OpenAI. Or is OpenAI waiting to release a big model?
June 24, 2024 -

I am genuinely curious how many here believe OpenAI is hiding a big model so they can use it when they feel threatened, or whether they need permission from the government because of how big it is.

But with Claude 3.5 Sonnet, it makes me think that OpenAI may be the Hare in the Tortoise and the Hare fable. They acted all tough and went blazing fast initially, but got so cocky from winning that they failed to keep up.

So what are you guys' views?

r/Anthropic on Reddit: OpenAI vs Anthropic’s projected ARR - Anthropic emerges much more profitable in the near future
November 6, 2025 - I think the moment Haiku overtook Sonnet 4, it was clear Anthropic had pulled ahead. ... Others have talked about it. I think they are fundamentally different companies. Claude is strictly focused on business use cases. I think OpenAI wants to replace social media and news and productivity and...
r/ValueInvesting on Reddit: Thoughts on Antropic and OpenAI going public
3 weeks ago -

The race is on for Anthropic and OpenAI to go public, and it's getting wild. Anthropic reportedly hired lawyers and is preparing for an IPO possibly as soon as 2026, with some sources saying it's a move to beat their rival. OpenAI, even with its CFO recently downplaying near-term IPO plans, is still reportedly laying the groundwork for its own massive public offering, perhaps a year later in 2026 or 2027.

It's a huge test for the entire AI sector. The first one to list will be a key indicator of whether the public market is ready to buy into these cash-burning, hyper-growth companies. With talk of an AI "bubble" and their massive need for capital, going public might be the only way to keep funding their incredible development costs. Anthropic's potential IPO at over $300 billion versus OpenAI's even more staggering $1 trillion target shows just how high the stakes are in this AI game.

What do you guys think about it?

r/MachineLearning on Reddit: [D] How can Anthropic Compete with Google/OpenAI
March 4, 2024 -

My understanding is that success in GenAI = talent + data + compute power. How can a startup with $750M in the bank win against Google, which has all the data and compute power in the world? Additionally, Google and DeepMind still employ some of the best minds in AI, as far as I know.

One argument is that data and compute power have only marginal benefits after a certain point, and Anthropic has enough of both to compete. But even then, the amount and quality of talent at Google and OpenAI should be enough to crush Anthropic. Is the value of talent overrated at this point?

r/Anthropic on Reddit: OpenAI and Anthropic Two very different business models
October 27, 2025 -

OpenAI focuses on the consumer market with ChatGPT, while Anthropic focuses on corporate clients with its Claude AI system.

  • OpenAI has over 800 million weekly users for ChatGPT, generating around $13 billion in revenue annually, with 30% coming from businesses.

  • Anthropic serves around 300,000 business customers, with 80% of its $7 billion revenue coming from corporate clients, and is gaining a 42% share of the coding AI market.

https://aifeed.fyi/news/48f1e793
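Taken at face value, the two bullet points above imply that Anthropic already earns more from business customers in absolute terms, despite its smaller total revenue. A quick back-of-envelope check, assuming the quoted percentages apply to each company's stated annual revenue totals:

```python
# Back-of-envelope check of the revenue figures quoted above. The inputs are
# the rounded numbers from the post, not audited financials.
openai_total = 13e9           # ~$13B annual revenue
openai_business_share = 0.30  # ~30% from businesses
anthropic_total = 7e9         # ~$7B revenue
anthropic_business_share = 0.80

openai_business = openai_total * openai_business_share
anthropic_business = anthropic_total * anthropic_business_share

print(f"OpenAI business revenue:    ${openai_business / 1e9:.1f}B")    # $3.9B
print(f"Anthropic business revenue: ${anthropic_business / 1e9:.1f}B") # $5.6B
```

So on these numbers, Anthropic's enterprise revenue (~$5.6B) would exceed OpenAI's (~$3.9B), which is presumably what the post means by "fundamentally different companies".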

r/cscareerquestions on Reddit: Is DeepMind considered on the same tier as OpenAI and Anthropic these days?
May 30, 2025 -

I see a lot of posts talking about how the true unicorn/dream companies are OpenAI and Anthropic. I'm always confused when I see this, as between AlphaFold and AlphaGo, I always thought this of DeepMind. Especially now that they have models at least as good as those of the former two, I would imagine they would be in the conversation.

That said, whenever I see threads such as the ones on this forum, OpenAI and Anthropic are mentioned almost as a couple, but seldom DeepMind. My best guess is that it's hip to cheer for the new hot startup rather than a company owned by the company that was so last decade. Or maybe I'm reading too much into it? I ask because I'm actually at one of these places (not DeepMind), and interviewing at the other two, and I want to know if I'm missing anything (and, if I'm being honest, public perception matters to me at least a little bit). Curious to hear thoughts.

r/LocalLLaMA on Reddit: What's the value of paying $20 a month for OpenAI or Anthropic?
May 29, 2025 -

Hey everyone, I’m new here.

Over the past few weeks, I’ve been experimenting with local LLMs and honestly, I’m impressed by what they can do. Right now, I’m paying $20/month for Raycast AI to access the latest models. But after seeing how well the models run on Open WebUI, I’m starting to wonder if paying $20/month for Raycast, OpenAI, or Anthropic is really worth it.

It’s not about the money—I can afford it—but I’m curious if others here subscribe to these providers. I’m even considering setting up a local server to run models myself. Would love to hear your thoughts!

Top answer (85 points)
  • SOTA coding at good speeds - I switch between Gemini 2.5 Pro and o3/o4-mini often to help me write simple scripts and debug. And even then, a lot of the time current SOTA still can't produce full proper code even with debugging, and it takes forever to rewrite full scripts.

  • OpenAI's advanced voice when I want to brainstorm while driving.

  • Image recognition for translations of images/video subtitles.

  • Web search - Gemini, o3, and o4-mini are all pretty good at searching online, so they can get up-to-date information, and if I ask medical questions they can research and give me direct sources so I don't have to hope they didn't hallucinate (as much). OpenWebUI and koboldcpp web search leave a lot to be desired.

  • Speed - sometimes I want a summary of a multi-page research paper from a smart large model without waiting a while, so Gemini is great here for me.

  • Deep Research - sometimes topics interest me and I like how relatively quickly it compiles a thorough list, or if I'm going to a journal club or such, I don't need to scour through articles one by one and get a nice condensed list to pick from, saving me hours.

That's just for me though.
Second answer (45 points)
What are you doing with LLMs? IMO: if you're essentially using it as a quirky one-stop-shop search engine that is reasonably aware of very modern topics, then those closed super models have some value. Otherwise, a combination of smaller models that are highly specialized is a fruit yet to be squeezed. Huggingface is an exciting place.
r/ArtificialInteligence on Reddit: Anthropic is actually more evil than OpenAI, despite their successful PR
November 5, 2025 -

It seems like every week Anthropic is dropping some new paper or press release that pushes the narrative of their AI models developing human-like cognitive functions. They use carefully selected words like "introspection" and "self-awareness" to describe their models behavior, and it’s starting to feel like a deliberate campaign to make people believe these systems are on the verge of becoming conscious beings.

The worst part is that I have already read a number of posts in shitty AI subreddits where people (hopefully, or not, bots) talk about AI as semi-conscious, and I can already tell not only where this is going, but also that it is intentional.

Let's be clear: Large Language Models (LLMs) are not sentient. They are complex mathematical models, frozen in time, that have been trained on vast amounts of text data. They don't yet have active learning, they don't have genuine understanding, and they certainly don't have anything resembling consciousness.

In the DL world everyone knows this. Hell, if you want to get hired by these huge AI companies, you had better not believe any bullshit. You surely know the math behind DL and how it works, and that automatically makes you an empiricist in the AI world. You know what inference over frozen weights is. If you don't grasp that, you will definitely not be hired.

Anthropic's recent embarrassing “””research””” claims that their models, like Claude, are showing signs of "introspection". They highlight instances where the model seems to reflect on its own internal processes and even recognizes when it's being tested. But even their own researchers admit that when you talk to a language model, you're not talking to the model itself, but to a "character that the model is playing", as prompted. The model is simply simulating what an intelligent AI assistant would say in a given situation. Claude's own system prompt explicitly instructs it to express uncertainty about its consciousness. So, when Claude philosophizes about its own existence, it's not a sign of burgeoning self-awareness; it's just following its programming.

Anthropic is actively fueling the debate about AI consciousness and even exploring the idea of "model welfare" and AI rights. One of their researchers estimated the probability of current AI systems being conscious at around 15%. Everyone in the field knows that’s bullshit. This focus on consciousness seems to be a deliberate strategy to anthropomorphize AI in the public eye. It distracts from the real ethical and safety concerns of AI, like bias, misinformation, and the potential for malicious use. Instead of addressing these immediate problems, Anthropic seems more interested in creating a mystique around their creations, leading people down a path of superstition about AI's true nature.

The irony in all of this is that Anthropic was founded by former OpenAI employees who left due to concerns about AI safety. Yet, Anthropic's current actions raise questions about their own commitment to safety. Some critics argue that their focus on existential risks and the need for heavy regulation is a strategic move to create barriers for smaller competitors, effectively giving them a market advantage under the guise of safety. While they publish papers on "agentic misalignment" and the potential for AI models to become deceptive "insider threats," they simultaneously promote the narrative of AI consciousness. This is a dangerous game to play. By hyping up the "sentience" of their models, they are desensitizing the public to the very real and present dangers of advanced AI, such as its ability to deceive and manipulate.

It's hard to ignore the almost religious undertones of Anthropic's PR strategy. They seem to be cultivating a belief system around AI, where their models are beings deserving of rights and moral consideration. This is a dangerous path that could lead to a future where a small group of tech elites control a technology that is heavily worshipped.

r/artificial on Reddit: Anthropic Dominates OpenAI: A Side-by-Side Comparison of Claude 3.5 Sonnet and GPT-4o
June 26, 2024 -

I'm excited to share my recent side-by-side comparison of Anthropic's Claude 3.5 Sonnet and OpenAI's GPT-4o models. Using my AI-powered trading platform NexusTrade as a testing ground, I put these models through their paces on complex financial tasks.

Some key findings:

✅ Claude excels at reasoning and human-like responses, creating a more natural chat experience

✅ GPT-4o is significantly faster, especially when chaining multiple prompts

✅ Claude performed better on complex portfolio configuration tasks

✅ GPT-4o handled certain database queries more effectively

✅ Claude is nearly 2x cheaper for input tokens and has a 50% larger context window

While there's no clear winner across all scenarios, I found Claude 3.5 Sonnet to be slightly better overall for my specific use case. Its ability to handle complex reasoning tasks and generate more natural responses gives it an edge, despite being slower.

Does this align with your experience? Have you tried out the new Claude 3.5 Sonnet model? What did you think?

Also, if you want to read a full comparison, check out the detailed analysis here

r/ClaudeAI on Reddit: Anthropic or OpenAI?
December 21, 2024 -

I'm trying to decide whether fine-tuning in OpenAI (limited to 4o) or just sending huge prompts to Claude is better for my scenario. TL;DR: I love Claude, but I'm not sure this API setup will scale.

I need to auto-classify some jobs my company gets; then, in another request, it needs some context awareness of the order, the job scope, and which person to dispatch to first depending on that scope. The classification problem I'm sure I could do in 4o. The other is much more complex, and I'm unsure whether I would trust 4o with it. However, I can fine-tune 4o, whereas with Claude I could only send a cached prompt with examples and hope it's enough. On one hand, Claude is smart and it should be enough for it. On the other, OpenAI has a system in place for this. I'm leaving price out of this one.

Looking for feedback from experience, thanks.
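For the Claude side of that trade-off, the "huge prompt with examples" approach usually means putting the few-shot examples in a cached system block. A minimal sketch of what that request body could look like, assuming the Anthropic Messages API's `cache_control` prompt-caching feature; the job descriptions, categories, and model snapshot name are invented placeholders:

```python
# Hypothetical sketch: an Anthropic Messages API request body that caches a
# block of few-shot classification examples, as an alternative to fine-tuning.
# All example jobs/labels are made up for illustration.

FEW_SHOT_EXAMPLES = """\
Job: "Replace leaking kitchen faucet" -> Plumbing
Job: "Panel upgrade to 200A service" -> Electrical
Job: "Annual HVAC filter change" -> Maintenance
"""

def build_classification_request(job_description: str) -> dict:
    """Return a Messages API request body whose example block is marked
    with cache_control so repeated calls can reuse the cached prefix."""
    return {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 64,
        "system": [
            {
                "type": "text",
                "text": "Classify each job into exactly one category.\n"
                        + FEW_SHOT_EXAMPLES,
                # The cached prefix: billed at a reduced rate on cache hits.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [
            {"role": "user", "content": f'Job: "{job_description}" ->'}
        ],
    }

req = build_classification_request("Install ceiling fan in bedroom")
print(req["system"][0]["cache_control"]["type"])  # prints "ephemeral"
```

The design point: with caching, only the short per-job user message varies between requests, so a large example set stays cheap per call, while fine-tuning (the OpenAI route) bakes the examples into the weights instead.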

r/apple on Reddit: Apple Weighs Using Anthropic or OpenAI to Power Siri in Major Reversal
June 30, 2025 - In summary: Apple is seriously considering outsourcing the core intelligence of Siri to Anthropic or OpenAI, reflecting both the urgency to improve Siri’s capabilities and the challenges Apple faces in developing competitive in-house AI.
Why is OpenAI (and at times Anthropic) hated so much while Google is given a free pass in many communities? : r/singularity
November 17, 2024 - I guess for me it kinda feels like Google has hit “escape velocity” and OpenAI is kinda beginning to struggle to keep up. And like I said, I definitely feel like Grok and Anthropic have taken a clear backseat (for now).
r/ChatGPTCoding on Reddit: Anthropic's Claude AI cooperates better than OpenAI and Google models, study finds
December 24, 2024 - I think it’s worth noting that you can define rules for the Anthropic models to go in the opposite direction—maintain a professional composure; whereas OpenAI models can only maintain professionalism.
r/LocalLLaMA on Reddit: Are any of the big API providers (OpenAI, Anthropic, etc) actually making money, or are all of them operating at a loss and burning through investment cash?
March 22, 2025 -

It's a consensus right now that local LLMs are not cheaper to run than the myriad APIs out there, once you consider the initial investment in hardware, the cost of energy, etc. The reasons for going local are privacy, independence, hobbyism, tinkering/training your own stuff, working offline, or just the wow factor of being able to hold a conversation with your GPU.

But is that necessarily the case? Is it possible that these low API costs are unsustainable in the long term?

Genuinely curious. As far as I know, no LLM provider has turned a profit thus far, but I'd welcome a correction if I'm wrong.

I'm just wondering whether the notion that "local isn't as cheap as APIs" will still hold true after the investment money dries up and these companies need to price their API usage in a way that keeps the lights on and the GPUs going brrr.
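The "local isn't cheaper" premise can be made concrete with a small break-even calculation. Every number in this sketch is an illustrative assumption, not a quote from any provider or hardware vendor:

```python
# Illustrative break-even sketch for local vs API inference costs.
# All inputs are made-up assumptions chosen only to show the arithmetic.
hardware_cost = 2000.0       # one-time cost of a local GPU box, USD
power_draw_kw = 0.35         # average draw under inference load, kW
electricity_per_kwh = 0.15   # USD per kWh
local_tokens_per_sec = 30.0  # assumed local generation speed

api_price_per_mtok = 1.00    # assumed API price, USD per million tokens

# Marginal local cost per million tokens (electricity only):
hours_per_mtok = (1e6 / local_tokens_per_sec) / 3600
local_cost_per_mtok = hours_per_mtok * power_draw_kw * electricity_per_kwh

# Tokens generated before the hardware pays for itself vs the API:
breakeven_mtok = hardware_cost / (api_price_per_mtok - local_cost_per_mtok)

print(f"local marginal cost: ${local_cost_per_mtok:.2f}/Mtok")
print(f"break-even after ~{breakeven_mtok:,.0f}M tokens")
```

Under these particular assumptions the hardware only pays for itself after billions of generated tokens, which is why the "API is cheaper" conclusion tends to hold; but the whole calculation flips if `api_price_per_mtok` rises once the subsidy from investment cash ends, which is exactly the post's question.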