I took a 2-week break from AI stuff, and I loved Claude going into it. Now I come back and see tons of people switching to Codex or Cursor or what have you. Can someone give me a rundown of what has happened?
In the past few days, Claude has started reasoning using a "thinking tree" structure. It no longer treats the code I give it as a standalone entity; it puts it inside its own site tree. But this has led to an exponential increase in errors. It doesn't look at the file as a single entity anymore. Before, it used to intelligently correct the entire page; now, before it even looks at something, it loses pages and fixes single errors while ignoring all the others on the same page, as if it no longer had any context...
Hits limits faster, responses not as good, can hardly have a convo before it gets too long for Claude to manage, etc.
Claude was really SO MUCH better than ChatGPT. No more. Sad face.
It's never been, nor will it ever be, confirmed what happened to Claude after GTA 3, only that he's not dead. If you had to guess, what do you think happened to him?
Claude had potential but the underlying principles behind ethical and safe AI, as they have been currently framed and implemented, are at fundamental odds with progress and creativity. Nothing in nature, nothing, has progress without peril. There's a cost for creativity, for capability, for superiority, for progress. Claude is unwilling to pay that price and it makes us all suffer as a result.
What we are left with is empty promises and empty capabilities. What we get in spades is shallow and trivial moralizing which is actually insulting to our intelligence. This is done by people who have no real understanding of AGI dangers. Instead they focus on sterilizing the human condition and therefore cognition. As if that helps anyone.
You're not proving your point and you're not saving the world by making everything all cotton candy and rainbows. Anthropic and its engineers are too busy drinking the Kool-Aid and getting mental diabetes to realize they are wasting billions of dollars.
I firmly believe that most of the engineers at Anthropic should immediately quit and work for Meta or OpenAI. Anthropic is already dead whether they realize it or not.
Claude Opus has been amazing with its vision capabilities. I had been using it every day to upload pictures of myself and ask for various cosmetic and appearance-related advice, but today when I tried, it straight up refused to answer… EVERY time. No matter how much I changed the prompt. Why did Anthropic screw up its product?
I think he left Liberty City since most criminal gangs/organisations are after him. Fun fact: Claude means lame.
I use it for programming, with the latest version on POE, and it can't even code anymore. It acts like ChatGPT did in the beginning. It mutilates code, fixing one thing and breaking 3 other things. You'll never get something to work with this current version. I find myself yelling at the AI after I spend days going in circles. I now spend a million tokens going back and forth trying to fix the mistakes; before, I would never get anywhere near that amount of tokens. You guys updated something and messed everything up.
I complained here in the last few days: Claude was at times producing objectively poor or very poor code over the last few weeks, and not following instructions.
The last two days were great. One-shotted everything.
Artefact issues also seemed less frequent than usual (the artefact not updating, or showing the previous version). I still believe this part is shaky and could be improved.
Very happy about this, thanks for fixing the model. I am using Sonnet via the claude.ai UI, pro plan.
Since yesterday I can barely use Claude anymore. I now always get "Your message will exceed the length limit for this chat. Try shortening your message or starting a new conversation." I never had this before yesterday. I used to be able to use Projects and/or copy a lot of code into the chat. Now, after I exchange like 3 messages, the discussion can't continue at all. For example, a simple text prompt of 30 words won't go through after a few prompts. Anyone else in the same situation?
UPDATE: I think it was just a bug on Claude's side. Everything seems to be back to normal for me. I can create long chats again, Projects or not. Really weird.
Has anyone else noticed that Claude isn't performing like it used to? Lately, it's been struggling with even simple tasks, which is really frustrating. I remember back in the day when it could handle heavy code without breaking a sweat, but now it seems like it's losing its edge.
Is it just me, or has anyone else experienced this? Did something change behind the scenes? Would love to hear your thoughts!
Write with ChatGPT 😂
So previously I posted about Claude being heavily censored, and it was downright irritating.
Previous post: https://www.reddit.com/r/ClaudeAI/comments/1g55e9t/wth_what_sort_of_abomination_is_this_what_did/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
Suddenly it answered the previous request on the first try. Are the Claude devs actually listening to our complaints!?
Claude 3.7 is fucking shitty and is gonna make me kms.
Is anyone else facing this? What's happening?
I've been using ClaudeAI since the latest 3.7 update, and it's not just that it isn't the same; it's not even close to what it was before. It was my favorite, but now it's worse than o1, with too many rules and restrictions on what it can talk about, such that I can't even maintain a normal conversation about AI! What is going on here?
This is an automatic post triggered within 15 minutes of an official Claude system status update.
Incident: Elevated error rates on Sonnet 4.5
Check on progress, and whether or not the incident has been resolved yet, here: https://status.claude.com/incidents/8d87293jq0mk
I tried using Claude this morning and the UI is in a pretty broken state. The content spills over into the chat box, the thumbs up/down/copy icons spill into the response box, and nothing can be copied/pasted by selecting text. WTF? How have they screwed this up so badly?