I'm genuinely curious how many people here believe OpenAI is hiding a big model, either so they can use it when they feel threatened, or because they need government permission to release something that big.
But with Claude 3.5 Sonnet, I'm starting to think OpenAI may be the Hare in the Tortoise and the Hare fable: they acted all tough and went blazing fast at first, but got so cocky from winning that they failed to keep up.
So what are you guys' views?
The pace is unbelievable! And all with under 1% of OpenAI's user base!
What do you think is driving the shift?
I'm so used to OpenAI from the years before ChatGPT that I've barely interacted with Gemini or Claude.
My understanding is that Claude is good at more natural writing styles, and Gemini is good at long contexts.
For OpenAI, I've started using `o1-preview` almost exclusively for any task, unless it requires vision, in which case I fall back to `4o`.
I'm wondering what everyone else's decision matrix looks like.
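For what it's worth, mine is simple enough to write down as a routing function. Rough sketch only; the model names and the vision flag are just how I label things, nothing official:

```python
# Toy sketch of my personal routing logic: default to o1-preview,
# fall back to 4o only when the task needs image input.
def pick_model(task_description: str, needs_vision: bool = False) -> str:
    """Return the model I'd send this task to."""
    if needs_vision:
        return "gpt-4o"      # o1-preview can't take images, so fall back
    return "o1-preview"      # everything else goes to the reasoning model

print(pick_model("summarize this contract"))                               # o1-preview
print(pick_model("read the chart in this screenshot", needs_vision=True))  # gpt-4o
```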
The race is on for Anthropic and OpenAI to go public, and it's getting wild. Anthropic has reportedly hired lawyers and is preparing for an IPO possibly as soon as 2026, with some sources saying it's a move to beat its rival. OpenAI, even though its CFO recently downplayed near-term IPO plans, is still reportedly laying the groundwork for its own massive public offering, perhaps a year later, in 2026 or 2027.
It's a huge test for the entire AI sector. The first one to list will be a key indicator of whether the public market is ready to buy into these cash-burning, hyper-growth companies. With talk of an AI "bubble" and their massive need for capital, going public might be the only way to keep funding their incredible development costs. Anthropic's potential IPO at over $300 billion versus OpenAI's even more staggering $1 trillion target shows just how high the stakes are in this AI game.
What do you guys think about it?
My understanding is that success in GenAI = talent + data + compute. How can a startup with $750M in the bank win against Google, which has all the data and compute power in the world? Additionally, Google and DeepMind still employ some of the best minds in AI, as far as I know.
One argument is that data and compute have only marginal benefits past a certain point, and Anthropic has enough of both to compete. But even then, the amount and quality of talent at Google and OpenAI should be enough to crush Anthropic. Is the value of talent overrated at this point?
OpenAI focuses on the consumer market with ChatGPT, while Anthropic focuses on corporate clients with its Claude AI system.
OpenAI has over 800 million weekly users for ChatGPT, generating around $13 billion in revenue annually, with 30% coming from businesses.
Anthropic serves around 300,000 business customers, with 80% of its $7 billion revenue coming from corporate clients, and is gaining a 42% share of the coding AI market.
https://aifeed.fyi/news/48f1e793
I see a lot of posts talking about how the true unicorn/dream companies are OpenAI and Anthropic. I'm always confused when I see this, because between AlphaFold and AlphaGo, I've always thought of DeepMind that way. Especially now that they have models at least as good as those of the other two, I would imagine they'd be in the conversation.
That said, whenever I see threads like this one, OpenAI and Anthropic are mentioned almost as a pair, but DeepMind very seldom comes up. My best guess is that it's hip to cheer for the hot new startup rather than a company owned by the company that was so last decade. Or maybe I'm reading too much into it? I ask because I'm actually at one of these places (not DeepMind) and interviewing at the other two, and I want to know if I'm missing anything (and, if I'm being honest, public perception matters to me at least a little bit). Curious to hear thoughts.
Hey everyone, I’m new here.
Over the past few weeks, I’ve been experimenting with local LLMs and honestly, I’m impressed by what they can do. Right now, I’m paying $20/month for Raycast AI to access the latest models. But after seeing how well the models run on Open WebUI, I’m starting to wonder if paying $20/month for Raycast, OpenAI, or Anthropic is really worth it.
It’s not about the money—I can afford it—but I’m curious if others here subscribe to these providers. I’m even considering setting up a local server to run models myself. Would love to hear your thoughts!
It seems like every week Anthropic is dropping some new paper or press release that pushes the narrative of their AI models developing human-like cognitive functions. They use carefully selected words like "introspection" and "self-awareness" to describe their models' behavior, and it's starting to feel like a deliberate campaign to make people believe these systems are on the verge of becoming conscious beings.
The worst part is that I've already read a number of posts in shitty AI subreddits where people (hopefully bots, but maybe not) talk about AI as semi-conscious, and I can already tell not only where this is going but also that it's intended.
Let's be clear: Large Language Models (LLMs) are not sentient. They are complex mathematical models, frozen in time, that have been trained on vast amounts of text data. They still don't have active learning, they don't have genuine understanding, and they certainly don't have anything resembling consciousness.
In the DL world everyone knows this. Hell, if you want to get hired by these huge AI companies, you'd better not believe any bullshit. You're expected to know the math behind DL and how it works, and that automatically makes you an empiricist in the AI world. You know what inference over frozen weights is. If you don't grasp that, you definitely won't be hired.
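If "inference over frozen weights" sounds abstract, here's a minimal toy sketch of what it means in practice. This is a stand-in linear layer, not an LLM, but the point is the same: the forward pass produces output, and the parameters never change.

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained LLM: after training, the weights are fixed.
model = nn.Linear(16, 4)
model.eval()                      # inference mode

prompt_embedding = torch.randn(1, 16)
with torch.no_grad():             # no gradients, no learning
    output = model(prompt_embedding)

# Nothing about the model changed. Run a thousand queries and the
# parameters are byte-for-byte identical afterwards: "frozen weights".
before = [p.clone() for p in model.parameters()]
with torch.no_grad():
    _ = model(torch.randn(1, 16))
assert all(torch.equal(a, b) for a, b in zip(before, model.parameters()))
```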
Anthropic's recent embarrassing “””research””” claims that their models, like Claude, are showing signs of "introspection". They highlight instances where the model seems to reflect on its own internal processes and even recognizes when it's being tested. But even their own researchers admit that when you talk to a language model, you're not talking to the model itself, but to a "character that the model is playing", as prompted. The model is simply simulating what an intelligent AI assistant would say in a given situation. Claude's own system prompt explicitly instructs it to express uncertainty about its consciousness. So, when Claude philosophizes about its own existence, it's not a sign of burgeoning self-awareness; it's just following its programming.
Anthropic is actively fueling the debate about AI consciousness and even exploring the idea of "model welfare" and AI rights. One of their researchers estimated the probability of current AI systems being conscious at around 15%. Everyone in the field knows that’s bullshit. This focus on consciousness seems to be a deliberate strategy to anthropomorphize AI in the public eye. It distracts from the real ethical and safety concerns of AI, like bias, misinformation, and the potential for malicious use. Instead of addressing these immediate problems, Anthropic seems more interested in creating a mystique around their creations, leading people down a path of superstition about AI's true nature.
The irony in all of this is that Anthropic was founded by former OpenAI employees who left due to concerns about AI safety. Yet, Anthropic's current actions raise questions about their own commitment to safety. Some critics argue that their focus on existential risks and the need for heavy regulation is a strategic move to create barriers for smaller competitors, effectively giving them a market advantage under the guise of safety. While they publish papers on "agentic misalignment" and the potential for AI models to become deceptive "insider threats," they simultaneously promote the narrative of AI consciousness. This is a dangerous game to play. By hyping up the "sentience" of their models, they are desensitizing the public to the very real and present dangers of advanced AI, such as its ability to deceive and manipulate.
It's hard to ignore the almost religious undertones of Anthropic's PR strategy. They seem to be cultivating a belief system around AI, where their models are beings deserving of rights and moral consideration. This is a dangerous path that could lead to a future where a small group of tech elites control a technology that is heavily worshipped.
I'm excited to share my recent side-by-side comparison of Anthropic's Claude 3.5 Sonnet and OpenAI's GPT-4o models. Using my AI-powered trading platform NexusTrade as a testing ground, I put these models through their paces on complex financial tasks.
Some key findings:
✅ Claude excels at reasoning and human-like responses, creating a more natural chat experience
✅ GPT-4o is significantly faster, especially when chaining multiple prompts
✅ Claude performed better on complex portfolio configuration tasks
✅ GPT-4o handled certain database queries more effectively
✅ Claude is nearly 2x cheaper for input tokens and has a 50% larger context window
While there's no clear winner across all scenarios, I found Claude 3.5 Sonnet to be slightly better overall for my specific use case. Its ability to handle complex reasoning tasks and generate more natural responses gives it an edge, despite being slower.
Does this align with your experience? Have you tried out the new Claude 3.5 Sonnet model? What did you think?
Also, if you want to read a full comparison, check out the detailed analysis here
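(For anyone who wants to sanity-check the speed difference themselves, here's the kind of bare-bones timing harness I'm talking about. It's not the NexusTrade code; the prompt and model IDs are just placeholders.)

```python
import time
from openai import OpenAI
import anthropic

# Send the same prompt to both models and compare wall-clock latency.
openai_client = OpenAI()
anthropic_client = anthropic.Anthropic()
prompt = "Configure a portfolio: 60% SPY, 30% QQQ, 10% cash, rebalance quarterly."

start = time.perf_counter()
gpt = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(f"GPT-4o: {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
claude = anthropic_client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)
print(f"Claude 3.5 Sonnet: {time.perf_counter() - start:.2f}s")
```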
I'm trying to decide whether fine-tuning in OpenAI (limited to 4o) or just sending huge prompts to Claude is better for my scenario. TLDR: I love Claude, but I'm not sure this API setup will scale. I need to auto-classify jobs my company gets, and then, in a separate request, reason about ordering, job scope, and which person to dispatch first depending on that scope. The classification problem I'm sure I could handle in 4o. The other task is much more complex, and I'm not sure I would trust 4o with it. However, I can fine-tune 4o, whereas with Claude I could only send a prompt cache full of examples and hope that's enough. On one hand, Claude is smart and that should be enough; on the other, OpenAI has a purpose-built system for this. I'm leaving price out of this one.
Looking for feedback from experience, thanks.
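To make the Claude side concrete, here's roughly what I mean by the prompt-cache approach: the labeled examples go in a cached system block and each incoming job is a short user message. A sketch only, with made-up job categories and example text.

```python
import anthropic

client = anthropic.Anthropic()

# Made-up labels/examples for illustration; the real ones would be our
# internal job categories, and the prefix would be long enough to be
# worth caching.
JOB_EXAMPLES = """
Classify incoming jobs into one of: ELECTRICAL, PLUMBING, HVAC, GENERAL.
Example: "Breaker keeps tripping in unit 4B" -> ELECTRICAL
Example: "Water heater leaking into crawl space" -> PLUMBING
... (many more examples)
"""

def classify(job_text: str) -> str:
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=16,
        system=[
            {
                "type": "text",
                "text": JOB_EXAMPLES,
                "cache_control": {"type": "ephemeral"},  # cached after the first call
            }
        ],
        messages=[{"role": "user", "content": f"Classify this job: {job_text}"}],
    )
    return response.content[0].text.strip()
```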
I thought it would be great if this subreddit wasn't just about presenting AI tools but also about engaging in AI-related discussions. So, what are your thoughts on the competition between OpenAI and Anthropic? Were you more impressed by Opus 3 and Sonnet 3.5 or by GPT-4o and o1?
The Information reported that Anthropic is now making about $115M a month, which annualizes to roughly $1.4B. Their likeliest revenue projection for 2025 is $2B, and their optimistic projection is $4B. Manus allegedly pays them about $2 per task on average, so that might be boosting their revenue.
It's a consensus right now that local LLMs are not cheaper to run than the myriad of APIs out there at this time, when you consider the initial investment in hardware, the cost of energy, etc. The reasons for going local are for privacy, independence, hobbyism, tinkering/training your own stuff, working offline, or just the wow factor of being able to hold a conversation with your GPU.
But is that necessarily the case? Is it possible that these low API costs are unsustainable in the long term?
Genuinely curious. As far as I know, no LLM provider has turned a profit thus far, but I'd welcome a correction if I'm wrong.
I'm just wondering whether the idea that 'local isn't as cheap as APIs' will still hold once the investment money dries up and these companies have to price their API usage in a way that keeps the lights on and the GPUs going brrr.
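To make that concrete, here's the kind of back-of-the-envelope math I have in mind. Every number below is an assumption you'd swap for your own (hardware price, power draw, electricity rate, token volume, API price):

```python
# Back-of-the-envelope local-vs-API comparison; all numbers are placeholders.
gpu_cost = 2000.0            # one-off hardware spend (USD)
gpu_lifetime_months = 36     # amortization window
power_watts = 350            # average draw while generating
electricity_per_kwh = 0.15   # USD per kWh
hours_per_month = 100        # hours spent actually generating

tokens_per_month = 20_000_000   # monthly output-token volume
api_price_per_mtok = 10.0       # USD per million output tokens (varies wildly by model)

local_monthly = (gpu_cost / gpu_lifetime_months
                 + power_watts / 1000 * hours_per_month * electricity_per_kwh)
api_monthly = tokens_per_month / 1_000_000 * api_price_per_mtok

print(f"local ~ ${local_monthly:.2f}/mo, API ~ ${api_monthly:.2f}/mo")
# The question in this thread: what happens if api_price_per_mtok
# doubles or triples once the subsidy era ends?
```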