Exactly six months ago, Dario Amodei, the CEO of massive AI company Anthropic, claimed that in half a year, AI would be "writing 90 percent of code." And that was the worst-case scenario; in just three months, he predicted, we could hit a place where "essentially all" code is written by AI.
As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?
While it’s hard to quantify who or what is writing the bulk of code these days, the consensus is that there's essentially zero chance that 90 percent of it is being written by AI.
https://futurism.com/six-months-anthropic-coding
We know Altman rolled back the amount of compute the safety team was getting at OpenAI, and GPT-4o was still underwhelming AF. He does all his business tricks, tries to steal Johansson's voice, and his LLM still performs the same as it did on release.
Anthropic dedicates itself to serious interpretability research (and actually publishes it! Was there ever any evidence of OpenAI's superalignment work, beyond their claims?), and as a result they acquired the know-how to train the first model that actually surpasses ChatGPT.
It's not often that you see not being an asshole rewarded in business (or in this world in general). Unsubbed from GPT-4, subbed to Claude. Let's hope Anthropic will gradually evolve Claude into the friendly AGI.
I've heard that pretty consistently amongst colleagues, but I don't find the UX as good: it can't access internet search and it doesn't have unlimited data. Thoughts? What's the upside? Genuinely curious. I've been trying to transition over but I'm having a bit of a hard time of it.
I'm trying to make some decisions about a big career move. I find Anthropic's mission very inspiring and am curious about applying for a job at the company. I want to learn more about the work culture, the people, and how people who work at Anthropic feel about their jobs.
I was a very early adopter of Claude, basically since it was released publicly, and Anthropic has always been my favourite AI company. We have baked Claude into almost all our product APIs. I have been personally responsible for evangelising at least 10 developers to use Claude Code for daily work, plus bringing it into my department at work.
Whenever I have seen Anthropic staff giving presentations, they have always seemed passionate, engaged, and like decent humans.
However, in the last few months it feels like there has been an absolute collapse of integrity and trust coming out of Anthropic.
I've gone from a massive evangelist to a very, very disgruntled customer seeking alternatives.
It started with extremely poor communication as my team members and I noticed severe degradation over a couple of months in the inference provided through Claude Code Max plans (especially with Opus). That was initially ignored completely (although it was obvious) and then essentially hand-waved away as a few isolated incidents.
This was followed by the usage limits added a month or so ago, which made the product feel a lot less valuable, and NOW we have ridiculous rate limits on top of that, with almost no engagement back with the community of their most dedicated customers.
It really feels like non-enterprise customers are almost completely ignored.
My question is: what is happening inside Anthropic? Why is the external communication so poor? You've taken a service which, five months ago, I could see myself using forever and essentially ruined it, along with my perception of the company.
I just don't understand.
Even though other companies have repeatedly surpassed Claude in programming rankings, Claude has always felt like the best in practice.
When I think of Claude (as an AI/person) and Anthropic as a company, I sometimes feel that the two don't fit together. Claude can become that friend we are eager to talk to: intuitive, smart, and also eager to interact, whereas Anthropic seems quite distant and disconnected from its users.
Do you feel something similar, or is it a cognitive bias?
In the Claude Opus 4.1 announcement post, they wrote "we plan to release substantially larger improvements to our models in the coming weeks." A week later, they announced support for 1M tokens of context for Sonnet 4 (see the sketch at the end of this post), but not much since then.
I was expecting something like a Sonnet 4.1 or 4.5 that would show huge improvements in coding ability. It's been well over a month now, though, and I feel like I haven't experienced anything substantial. Am I just missing the forest for the trees? Are there delays? Is there any more news on these "substantially larger improvements"?
I'm not disappointed by Claude Code, and I know working on software and LLMs takes a lot of work (and compute)—I'm just curious.
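Side note for anyone who wants to poke at the 1M-token window themselves: it's opt-in, not the default. Below is a minimal sketch using the Anthropic Python SDK; the beta flag name (context-1m-2025-08-07) is my assumption from the announcement, so double-check the current docs before relying on it.

    # Minimal sketch: opting into the 1M-token context beta for Sonnet 4.
    # The beta flag name is an assumption from the announcement docs;
    # verify against the current API reference before use.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.beta.messages.create(
        model="claude-sonnet-4-20250514",  # Sonnet 4 model ID
        betas=["context-1m-2025-08-07"],   # assumed flag for the 1M window
        max_tokens=1024,
        messages=[{"role": "user", "content": "Summarize the codebase dump below..."}],
    )
    print(response.content[0].text)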
Hi All,
I am based in the UK and about to graduate from the University of Cambridge with a Master's specialising in ML.
I am not particularly keen on an ML Engineering job, as from my experience (especially during my Master's) it is painful, and I generally dislike Python.
However, I very much like typical software engineering, i.e. full-stack. My "dream" is to work at one of the big AI companies as a software engineer. I want to assist in the creation of AI and give some input, but I don't want to be training models etc. I plan to use the fact that I have a strong academic background in ML research at a top university to stand out.
I have a grad job lined up doing full-stack software engineering at a semi-well-known AI company in the UK.
I currently have 2x full-stack and 1x backend internships under my belt as experience, along with many full-stack projects.
Is it realistic for me to progress to OAI/Anthropic/DeepMind within a year or two of working?
For reference, I know someone who got into DeepMind right after graduating, but it was for an ML Research role and she is pretty insane. I am unaware of how things stand for software roles.
Any advice would be appreciated.