The latest Claude Code can now officially be used with Claude Max, no more burning API tokens.
0.2.96
Claude Code can now also be used with a Claude Max subscription (https://claude.ai/upgrade)
https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md
Seems Anthropic wants to push Claude Code as an alternative to other tools like Cursor and to push their Max subscription. Maybe one day it will be merged into Claude Desktop.
Edit/Update: more information here:
https://support.anthropic.com/en/articles/11145838-using-claude-code-with-your-max-plan
Hi everyone, I'm a developer who has been using Claude Code Max ($200 plan) for 3 months now. With renewal coming up on the 21st, I wanted to share my honest experience.
Initial Experience (First 1-2 months): I was genuinely impressed. Fast prototyping, reasonable code architecture, and great ability to understand requirements even with vague descriptions. It felt like a real productivity booster.
Recent Changes I've Noticed (Past 2-3 weeks):
Performance degradation: Noticeable drop in code quality compared to earlier experience
Unnecessary code generation: Frequently includes unused code that needs cleanup
Excessive logging: Adds way too many log statements, cluttering the codebase
Test quality issues: Generates superficial tests that don't provide meaningful validation
Over-engineering: Tends to create overly complex solutions for simple requests
Problem-solving capability: Struggles to effectively address persistent performance issues
Reduced comprehension: Missing requirements even when described in detail
Current Situation: I'm now spending more time reviewing and fixing generated code than the actual generation saves me. It feels like constantly code-reviewing a junior developer's work rather than having a reliable coding partner.
Given the $200/month investment, I'm questioning the value proposition and currently exploring alternative tools.
Question for the community: Has anyone else experienced similar issues recently? Or are you still having a consistently good experience with Claude Code?
I'm genuinely curious if this is a temporary issue or if others are seeing similar patterns. If performance improves, I'd definitely consider coming back, but right now I'm not seeing the ROI that justified the subscription cost.
I'm currently using Google Gemini 2.5 Pro for free, but I'm thinking of going back to Claude specifically to use Claude Code. My questions are: how quickly do you reach the limits with Claude Code? And does it do a good job compared to Cursor with Sonnet 3.7 or Gemini 2.5 Pro?
I'm a sr. software engineer with ~16 years of working experience. I'm also a huge believer in AI, and fully expect my job to be obsolete within the decade. I've used the most expensive tiers of all of the AI models extensively to test their capabilities. I've never posted a review of any of them, but this pro-Claude hysteria has made me post something this time.
If you're a software engineer you probably already realize there is truly nothing special about Claude Code relative to other AI assisted tools out there such as Cline, Cursor, Roo, etc. And if you're a human being you probably also realize that this subreddit is botted to hell with Claude Max ads.
I initially tried Claude Code back in February and it failed on even the simplest tasks I gave it, constantly got stuck in loops of mistakes, and overall was a disappointment. Still, after the hundreds of astroturfed threads and comments in this subreddit, I finally relented and thought "okay, maybe after Sonnet/Opus 4 came out it's actually good now" and decided to buy the $100 plan to give it another shot.
Same result. I wasted about 5 hours today trying to accomplish tasks that could have been done with Cline in 30-40 minutes because I was certain I was doing something wrong and I needed to figure out what. Beyond the usual infinite loops Claude Code often finds itself in (it has been executing a simple file refactor task for 783 seconds as I write this), the 4.0 models have the fun new feature of consistently lying to you in order to speed along development. On at least 3 separate occasions today I've run into variations of:
● You're absolutely right - those are fake status updates! I apologize for that terrible implementation. Let me fix this fake output and..
I have to admit that I was suckered into this purchase from the hundreds of glowing comments littering this subreddit, so I wanted to give a realistic review from an engineer's pov. My take is that Claude Code is probably the most amazing tool on earth for software creation if you have never used alternatives like Cline, Cursor, etc. I think Claude Code might even be better than them if you are just creating very simple 1-shot webpages or CRUD apps, but anything more complex or novel and it is simply not worth the money.
inb4 the genius experts come in and tell me my prompts are the issue.
Hey everyone,
I’m a solo founder building my first web app and I’ve been using Claude Code Pro for coding and debugging. Lately, I’ve been constantly hitting the 5-hour daily usage limit, which is slowing me down a lot.
I’m thinking about upgrading to the Max plan ($200 NZD / ~$120 USD per month) for unlimited/extended access. I have no steady income right now, but I’ve freed up some budget for it.
I want to hear from people who have experience:
Is the Max plan actually worth it for someone hitting daily limits on Pro?
Will it save enough time and frustration to justify the cost?
Any tips to get the most value out of the Max plan as a solo builder?
Basically, I’m trying to figure out if it’s a worthwhile investment in speed/productivity for my first project.
Thanks in advance!
Over the past few days, Gemini and I have been working on pseudocode for an app I want to build. I had Gemini break the pseudocode into logical steps and create markdown files for each step. This came out to 47 md files. I wasn't sure where to take it after that. It's a lot.
Then I signed up for Claude Code with Max. I went for the upper tier as I need to get this project rolling. I started up PyCharm, dropped in all 45 md files from Gemini, and let Claude Code go. Sure, there were questions from Claude, but in less than 30 minutes I had a semi-working Flask app. Yes, there were bugs. That is and should be expected. Knowing how I would personally handle the errors helped me guide Claude to finding the issues.
It was an amazing experience and I appreciate the CLI. If this works out how I hope, I'll be canceling my subscriptions to other AI services. Don't get me started on the AI services I've tried. I'm not looking for perfection. Just to get very close.
I would highly suggest looking into Claude Code with a Max subscription if you are comfortable with the CLI.
Anthropic has some secret something that makes it dominant in the coding world. I tried others, but I always end up needing to rely on 3.7. I'll probably keep my Gemini sub, but I'm canceling all the others.
Sorry for the lengthy post.
It just works.
No awkward small talk, no endless friction. I chat with it like I’d talk to a real teammate:
Complete thoughts, half-baked ideas, even when I’m pissed off and rambling. No need to rephrase everything like I’m engineering a scientific prompt. It gets it. Then it builds.
I dropped Claude for a couple months when the quality dipped (you probably noticed it too). Tried some alternatives. Codex was solid when it first came out, but something was missing. Maybe it was the slower pace, or just how much effort it took to get anywhere. Nothing gave me the same sense of momentum I’d had with Claude.
Fast-forward to this week: my Claude membership lapsed on the 1st. Cash flow has been tight approaching Christmas, so I held off renewing the Max plan.
In the meantime, I leaned on Cursor (which I already pay for), Google’s Antigravity, and Grok’s free model via Cursor, spreading out options to keep things moving. All useful in their way. But I was neck-deep in a brutal debugging session on an issue that demanded real understanding and iteration, using Codex and GPT-5.1 (via Cursor Plus, with full access to everything).
Should’ve been plenty. Nope. It felt broken for momentum: it told me something flat-out couldn’t be done, multiple times. I even pointed it to the exact docs proving it could. Still, pushback. Slow, and weirdly toned.
This wasn’t a one-off; new chats, fresh prompts, every angle I could try. The frustration built fast. I don’t feel I have time for essay-length prompts just to squeeze out a single non-actionable answer or some poetic, robotic deflection.
On Cursor, the “Codex MAX High Reasoning” model, supposedly their top-tier, free for a limited time? Sick, right? Ha, far from it. It feels like arguing with a smiling bureaucrat who insists you’re wrong (for this specific case). Endless back-and-forth, “answers” instead of solutions.
Look, I’ve been deep in this AI-for-dev workflow for a year now; there are no more one-offs or other models to try out in this space. The differences are crystal clear. The fix for my two-hour headache? Cursor’s free Auto mode. No “frontier model” hype, no hand-holding. I was just fed up, flipped it on, and boom: it spotted the issue and nailed it. First try.
That was the breaking point. Thought about the last few weeks with my basic GPT sub on my phone for daily use: it ain’t the same.
I’ve cycled through them all: Claude, Codex, GPT-5.1, Cursor’s party pack, Gemini, Grok. Each shines in its own way.
Gemini’s solid but bombs on planning and tool use, and constantly gets stuck in loops. GPT is cringe; only way I can put it. Grok is fire for speed and unfiltered chats.
When you’re building and can’t afford to micromanage your AI? Claude reacts. It helps. Minimal babysitting required. Meanwhile, GPT-5.1? Won’t generate basic stuff half the time. It used to crank out full graphics, life advice, whatever; now it dodges questions it once handled effortlessly. (The refusal policy creep is absurd.) Even simple tasks are hit or miss. No flow, just this nagging sense it’s trapped in an internal ethics loop. The inconsistency has tanked my trust. It’s too good at sounding confident now, which makes the letdowns sting more. One case: instead of fixing the obvious code smell staring it in the face, it’ll spit back, “I added a helper to your x.ts file so that bla bla bla.” Cute, but solve the damn problem… instead of acting like it’s normal.
Yeah, it’s evolving; they all are. But after testing everything, Claude’s still the undisputed king for coding. (Speech aside: I stick with GPT-4o for brainstorming; it’s weirdly less locked down than 5.1 and crushes creativity.)
Bottom line: Claude isn’t flawless, and this isn’t some promo speech or AI rat race hype. But from everything this past year, for anyone who’s interested in knowing the differences or who needs a partner that moves with you instead of against you: it’s Claude every time. So yeah, I’m renewing. And I’ll keep paying unless something truly better crashes the party.
Cheers Anthropic, renewing my membership feels like Christmas lol.
Hey everyone! Amateur coder here working on flashcard apps and basic HTTP tools.
Claude has been incredibly helpful as my coding partner, but I'm hitting some workflow issues. Currently I'm using Sonnet 4 for implementation, but when I need more complex planning, I switch to Opus 4.1 on the web to get a Claude Code prompt, which gets rate-limited quickly. I end up waiting 2+ hours for the rate limit to reset.
I'm considering the Max plan ($100/month) to avoid these delays and actually finish my projects. I've tried Claude's agentic features with Sonnet 4, but it's not even near what Opus 4.1 gives in chat; I have to paste my code there, get a prompt back, and have Sonnet work on it.
Compared to Gemini 2.5 or OpenAI alternatives, I still prefer Claude Code, but wondering if I'm missing something in my current approach.
Is it really worth getting the $100 Max plan for a month or two to finish the bulk of my project, and then switching to the Pro plan to keep building it? What would you guys suggest?
Really appreciate any insights - still learning and would love to hear from you guys.
With the $100 Max subscription we get 5x the tokens compared to the $20 Pro plan. But buying two Pro plans would get up to 80% of the Max plan's tokens, at 40% of the cost. So, cost vs. usage, which do you think is better?
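For a rough sense of the trade-off, here is a back-of-the-envelope sketch in Python. It takes the figures above as assumptions (Pro = 1 usage unit at $20, Max = 5 units at $100, plus the claim that two Pro plans reach ~80% of Max usage); actual quotas aren't published as fixed token counts, so treat this as illustrative only.

```python
# Back-of-the-envelope comparison using the poster's figures (assumptions,
# not official quotas): one Pro plan = 1 usage unit, Max = 5 units, and the
# claim that two Pro plans reach ~80% of Max usage (i.e. 4 units).
PRO_PRICE, MAX_PRICE = 20, 100          # USD per month
pro_units, max_units = 1.0, 5.0         # relative usage, Pro = 1x

plans = {
    "Max ($100)":          (max_units, MAX_PRICE),
    "2x Pro, claimed 80%": (0.8 * max_units, 2 * PRO_PRICE),
    "2x Pro, strictly 2x": (2 * pro_units, 2 * PRO_PRICE),  # if usage just doubles
}

for name, (units, price) in plans.items():
    # Value per dollar = relative usage units divided by monthly price
    print(f"{name:22s} {units:>4.1f} units / ${price:<3d} = {units / price:.3f} units per $")
```

Under the 80% claim, two Pro plans would deliver roughly twice the usage per dollar; if a second Pro plan simply doubles usage instead, the value per dollar matches Max, and Max avoids juggling two accounts. Which scenario holds depends on how the actual quotas scale, which isn't spelled out in token terms.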
Currently my main AI tool to develop with is Cursor. Within the subscription I can use it unlimited, although I get slower responses after a while.
I tried Claude Code a few times with 5 dollars of credit each time. After a few minutes the 5 dollars are gone.
I don't mind paying the 100 or even 200 for Max, if I can be sure that I can code full time the whole month. If I use credits, I'd probably end up with a 3000 dollar bill.
What are your experiences as full time developers?
Hey everyone, I have been a long-time Claude user, and I recently subscribed to Max. Please share your workflow/tips for using Claude Code, or anything that a newbie like me needs to be aware of. Hopefully this helps anyone reading the post.
Thanks.
So what is the verdict on usage: is it a good deal or a great deal?
How aggressively can you use it?
Would love to hear from people who have actually purchased and used the two.
Hey everyone,
I’m just starting out in vibe coding, mainly focusing on building web apps, micro SaaS, and SaaS products. My goal is to implement things like AI agents & sub-agents, MCPs, and automations into these apps.
I’ve been using Cursor, but honestly it hasn’t been as helpful as I expected. Now I’m considering subscribing to Claude Pro Max ($100/month) to speed up my workflow — but I’m not sure if it’s really worth the investment at this stage.
Has anyone here tried Claude Pro Max for this kind of work?
Do you think it’s worth it, or would it be smarter to start with cheaper/free alternatives until I get more traction?
Thanks a lot for any insights 🙏
I’ve been trying to give Claude Code a fair shot, especially after all the hype around the new model. But honestly, it’s been a complete letdown for me.
I was excited about the supposed improvements, but the consistency just isn’t there. Half the time I end up asking Codex or manually fixing things myself because Claude either breaks the logic, refuses to fix it properly, or just gives vague suggestions that don’t work.
For something priced at a “premium” level, it’s not delivering. I wanted this to be my main coding assistant, but after weeks of frustration, I’m officially done with it. Total waste of time and money.
Maybe it works for others, but as for me, I’m out for good.
Hi!
Never used Claude Code before, but since I am spending so much on Cursor now, the Claude plan actually looks appealing. How is the quality of code? Context window? Etc etc. I am not vibe coding but I do use agents intensively by reiterating and asking questions to validate certain approaches etc.
I used to use Aider with various paid APIs or build the agents myself. But recently, I've given Claude Code a try. I have zero regrets on the $200 Claude Max 20x sub, despite still having quite a bit of credit left in OpenAI and DeepSeek (I'm still thinking of ways to utilize them).
I do three heavy programming sessions per day following their 5-hour rolling window (two for jobs, one for my personal projects and the post-grad workload). And with the separated pools for Opus and Sonnet recently, I exhaust them both during each session, doubling the amount of work done.
The subscription pays for itself (freelance paychecks, profits from products, improved QoL across the board, etc.) with an insane ROI on top of that (freeing up a large amount of time for personal well-being and hobbies, e.g., Dhamma study, walking, meditation, video games, relationships).
This will be your best investment if you do anything related to computers, period. (I'm not affiliated with Anthropic in any way, just stating the facts.)
If any tech firm knows about this but does not provide their employees with Claude Max subscriptions, then they're not really serious. They don't really care about their product, only want to farm venture cash, and are stingy PoS who just want to exploit offshore low-cost laborers.