Hey everyone! Amateur coder here working on flashcard apps and basic HTTP tools.
Claude has been incredibly helpful as my coding partner, but I'm hitting some workflow issues. I'm currently using Sonnet 4 for implementation, but when I need more complex planning I switch to Opus 4.1 on the web to generate a Claude Code prompt, and that gets rate-limited quickly. I end up waiting 2+ hours for limits to reset.
I'm considering the Max plan ($100/month) to avoid these delays and actually finish my projects. I've tried Claude's agentic features with Sonnet 4, but it's nowhere near what Opus 4.1 gives in chat. Like, I have to paste my code there, get a prompt, and then have Sonnet work on it.
Compared to Gemini 2.5 or OpenAI alternatives, I still prefer Claude Code, but wondering if I'm missing something in my current approach.
Is it really worth getting the $100 Max plan for a month or two to finish building my projects, then dropping back to the Pro plan? What would you guys suggest?
Really appreciate any insights - still learning and would love to hear from you guys.
I'm a sr. software engineer with ~16 years of experience. I'm also a huge believer in AI, and fully expect my job to be obsolete within the decade. I've used all of the most expensive tiers of all of the AI models extensively to test their capabilities. I've never posted a review of any of them, but this pro-Claude hysteria has made me post something this time.
If you're a software engineer you probably already realize there is truly nothing special about Claude Code relative to other AI assisted tools out there such as Cline, Cursor, Roo, etc. And if you're a human being you probably also realize that this subreddit is botted to hell with Claude Max ads.
I initially tried Claude Code back in February and it failed on even the simplest tasks I gave it, constantly got stuck in loops of mistakes, and overall was a disappointment. Still, after the hundreds of astroturfed threads and comments in this subreddit I finally relented and thought "okay, maybe after Sonnet/Opus 4 came out it's actually good now" and decided to buy the $100 plan to give it another shot.
Same result. I wasted about 5 hours today trying to accomplish tasks that could have been done with Cline in 30-40 minutes because I was certain I was doing something wrong and I needed to figure out what. Beyond the usual infinite loops Claude Code often finds itself in (it has been executing a simple file refactor task for 783 seconds as I write this), the 4.0 models have the fun new feature of consistently lying to you in order to speed along development. On at least 3 separate occasions today I've run into variations of:
● You're absolutely right - those are fake status updates! I apologize for that terrible implementation. Let me fix this fake output and..
I have to admit that I was suckered into this purchase from the hundreds of glowing comments littering this subreddit, so I wanted to give a realistic review from an engineer's pov. My take is that Claude Code is probably the most amazing tool on earth for software creation if you have never used alternatives like Cline, Cursor, etc. I think Claude Code might even be better than them if you are just creating very simple 1-shot webpages or CRUD apps, but anything more complex or novel and it is simply not worth the money.
inb4 the genius experts come in and tell me my prompts are the issue.
Hi everyone, I'm a developer who has been using Claude Code Max ($200 plan) for 3 months now. With renewal coming up on the 21st, I wanted to share my honest experience.
Initial Experience (First 1-2 months): I was genuinely impressed. Fast prototyping, reasonable code architecture, and great ability to understand requirements even with vague descriptions. It felt like a real productivity booster.
Recent Changes I've Noticed (Past 2-3 weeks):
- Performance degradation: Noticeable drop in code quality compared to my earlier experience
- Unnecessary code generation: Frequently includes unused code that needs cleanup
- Excessive logging: Adds way too many log statements, cluttering the codebase
- Test quality issues: Generates superficial tests that don't provide meaningful validation
- Over-engineering: Tends to create overly complex solutions for simple requests
- Problem-solving capability: Struggles to effectively address persistent performance issues
- Reduced comprehension: Misses requirements even when they're described in detail
Current Situation: I'm now spending more time reviewing and fixing generated code than the actual generation saves me. It feels like constantly code-reviewing a junior developer's work rather than having a reliable coding partner.
Given the $200/month investment, I'm questioning the value proposition and currently exploring alternative tools.
Question for the community: Has anyone else experienced similar issues recently? Or are you still having a consistently good experience with Claude Code?
I'm genuinely curious if this is a temporary issue or if others are seeing similar patterns. If performance improves, I'd definitely consider coming back, but right now I'm not seeing the ROI that justified the subscription cost.
I've mostly been using Chinese models, Copilot, and Cursor up to this point, but decided I would bite the bullet and try Claude Code with Claude Max as people say it performs better than other tools, even other tools using Claude models.
I was wondering if there is a way to get the most out of Claude. I already have some stuff set up like superclaude, spec-kit, and BMAD. I'm wondering if there is anything else I should know about. I haven't played with hooks yet and am curious what people use them for.
Built this today. Claude code for both doing the data analysis from raw docs and building the interface to make it useful. Will be open-sourcing this soon.
Has anyone tried it?
I use cursor.ai heavily, and frankly speaking, lately I've been using repomix (an npm package that packs code into XML files) to wrap parts of my code and paste it into Google AI Studio.
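For anyone unfamiliar with that workflow: the core idea is just concatenating source files into one XML-ish blob you can paste into a chat window. Here's a minimal Python sketch of that idea (the tag names and structure here are my own illustration, not repomix's exact output format):

```python
from pathlib import Path
from xml.sax.saxutils import escape

def pack_files(paths):
    """Wrap each file's contents in a <file path="..."> tag,
    roughly what repomix-style packing produces."""
    parts = ["<repository>"]
    for p in paths:
        text = Path(p).read_text(encoding="utf-8")
        # escape() handles <, >, & so code doesn't break the XML wrapper
        parts.append(f'<file path="{escape(str(p))}">\n{escape(text)}\n</file>')
    parts.append("</repository>")
    return "\n".join(parts)

# Pack a couple of source files, then paste the result into the model's chat:
# print(pack_files(["src/app.py", "src/utils.py"]))
```

The real repomix CLI does this for a whole repo with ignore rules and token counts, but the output you paste into AI Studio is conceptually the same thing.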
I'm currently using Google Gemini 2.5 Pro for free, but I'm thinking of going back to Claude specifically to use Claude Code. My questions are: how quickly do you reach the limits with Claude Code? Does it do a good job compared to Cursor with Sonnet 3.7 or Gemini 2.5 Pro?
I'd be using the Claude CLI for personal and work projects. The "Pro" plan works just fine for me, but I'm wondering if it can speed up my coding even more after reading all the posts praising it. What do you guys think? Thanks!
P.S. I never tried any Opus models so not sure what to expect anyway.
I have been using Claude since it became available in Canada. I have been working on a project that has several conversations - basically because I would have to start new conversations when current one got too long. I have basically the same 4 files that I update in the project knowledge repository (uses around 60% of the repository's limit). They are code files (3 Python scripts and a notebook - maybe 320kb total for all 4). Whenever I make changes to the code, I'll remove the old one and transfer the new one to the repository so Claude is always reviewing the most recent version.
Today I decided to upgrade to the Max plan to increase my usage with Claude (longer conversations?). I removed the scripts and reloaded the updated versions so Claude is again reviewing the most recent code. No sooner had I added the files than I got a message: "This conversation has reached its maximum length." I didn't even get a chance to start the conversation; I can't, because of this length limit.
This is shoddy customer service - actually, it's worse than that, but I am trying to be polite. I have reached out for a refund because this level of service is completely unacceptable. If you are considering an upgrade - DON'T! Save your money, or buy a plan with a competing AI. If this is the level of customer service Anthropic has decided is acceptable, they will not be around much longer.
Sorry to talk about this topic again.
But I've noticed the rate limits are much closer to the API costs now. I'm on Max $200. For power users: how much usage are you getting from Max $100/$200 compared to the actual API cost?
Hey everyone,
I’m a solo founder building my first web app and I’ve been using Claude Code Pro for coding and debugging. Lately, I’ve been constantly hitting the 5-hour daily usage limit, which is slowing me down a lot.
I’m thinking about upgrading to the Max plan ($200 NZD / ~$120 USD per month) for unlimited/extended access. I have no steady income right now, but I’ve freed up some budget.
I want to hear from people who have experience:
Is the Max plan actually worth it for someone hitting daily limits on Pro?
Will it save enough time and frustration to justify the cost?
Any tips to get the most value out of the Max plan as a solo builder?
Basically, I’m trying to figure out if it’s a worthwhile investment in speed/productivity for my first project.
Thanks in advance!
The latest Claude Code is now officially allowed to be used with Claude Max, so no more burning API tokens.
0.2.96
Claude Code can now also be used with a Claude Max subscription (https://claude.ai/upgrade)
https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md
Seems Anthropic wants to push Claude Code as an alternative to other tools like Cursor, and push their Max subscription. Maybe one day it'll be merged into Claude Desktop.
Edit/Update: more information here:
https://support.anthropic.com/en/articles/11145838-using-claude-code-with-your-max-plan
Hi, so I’ve been using Claude Code on the Pro plan for about 3 months and the experience has been great. I’m curious whether it’s worth upgrading from the $20 plan to the $100 plan. I mainly do Svelte projects for internal CRUD apps. I’m thinking about upgrading to the $100 plan for 1 month, but before that I wanted to hear other opinions. Thanks
Looking for help with this decision. On pro ($20/mo) plan, I hit the limits pretty easily. On ($100) Max, I never have. Weekly usage I maybe get to 50%.
Should I switch to usage-based? Do I need to be on the pro plan to use the api key?
Edit: thanks for all the replies. Seems pretty obvious that keeping a subscription (pro or max) is the way to go. ccusage was also helpful (I'm way past $500 for the month!).
I have been using Claude Code for a long time, practically from the beginning, and it has completely changed the way I use AI. I don't know much about code, but since AI is doing well with programming, I started creating a couple of applications, at first to automate things for myself and then to streamline things at home. Claude Code, Sonnet 4, and Opus helped me a lot to develop technical skills, and thanks to them I have things like automatic opening and closing of blinds and alerts when the smoke detectors trigger. Home lab and smart home is a big area of activity and possibility.
Although there were sometimes limits, I used Opus and Sonnet intensively. I didn't complain too much because I'd hit the limits at most an hour before the next 5-hour session. Things started to break down when weekly limits were introduced. Limits fluctuated terribly: sometimes it was better (though not like before weekly limits were introduced), sometimes so bad that a 5-hour session's limit ran out after 1 hour. My plan didn't change, and neither did the way I use it. The last 2 weeks have been tragic, because after about 3 days I used up the entire weekly limit. If the Anthropic team says it isn't changing the limits, then to me that's a plain lie; it's impossible that with the same habits and the same kind of usage the limits would change so drastically.
I'll get to the main point, so as not to write too much. I've been testing Codex for a week on the usual $20 plan.
For 4 days I used Codex similarly to how I used Claude, and only on the 4th day did I hit a limit. And not with the cheapest model available; I usually used the better ones. Codex has its downsides, but they can all be worked around, and it can be set up to achieve accuracy similar to Claude; in some cases Codex does better.
I know that OpenAI is probably losing a lot of money on this, and I know that it probably won't last very long, but even if they make it 2 or 3 times worse it will still be better than Claude, which can lock you out of a $200 plan after 1 day. ChatGPT's $20 plan, and even more so the $200 plan, is worth the money, unlike Claude, which was great in the beginning and has since deteriorated.
Anthropic is going the way of Cursor, and that's not a good way, because Cursor blatantly scams people, changes limits every other day, and purposely worsens model performance through its layer just to make it cheaper.
At this point I am switching from Claude to Codex, and I will gladly pay them $200 if necessary rather than pay Claude $200, since Anthropic doesn't seem to care about its users.
And all because of the stupid decision to cap weekly usage. It would have been enough to permanently ban those who ran the limits 24 hours a day all week and overtaxed the resources, and give honest users full freedom. But of course, because of some idiots who bragged here and made videos of Claude working alone 24 hours a day, Anthropic had to add a weekly limit. As far as I'm concerned, they seized the moment to limit access for everyone because maintenance was too expensive, and that was just an excuse to implement limits.
Sonnet 4.5 will not save the situation, and if it goes on like this, OpenAI will garner more users than Anthropic. Personally, I feel cheated: I pay this much and hit the limit after 1 day, with no notice that the limits were changing.
And if not OpenAI, there are Chinese models to choose from at a good price, or even for free.
Time to wake up and be competitive.
I'm back from a month-long hiatus from my Claude Max5 subscription, and I recently re-subscribed to the Pro plan to test Opus 4.5.
At first, I laughed at the comments here saying that one Opus 4.5 prompt and your 5-hour limit is gone, until I literally experienced it. Now I've upgraded my plan to Max5, and the usage limit difference is HUUUUUUUUUUUUGE compared to the Pro plan. It is not just 5x. So I feel like the Pro plan (which should be renamed to just "Plus", because there's nothing pro about it) is really just for testing the model, and Anthropic will force you to upgrade to Max.
Right now, I've been coding in 2 sessions simultaneously, continuously using the opusplan model, and I'm only at 57% of the 5-hour limit, which resets in 1 hour.
Anyhow,
Opus 4.5 is great, the limit is higher. I'm happy but my wallet hurts. Lol
Hey everyone,
I’m just starting out in vibe coding, mainly focusing on building web apps, micro SaaS, and SaaS products. My goal is to implement things like AI agents & sub-agents, MCPs, and automations into these apps.
I’ve been using Cursor, but honestly it hasn’t been as helpful as I expected. Now I’m considering subscribing to Claude Pro Max ($100/month) to speed up my workflow — but I’m not sure if it’s really worth the investment at this stage.
Has anyone here tried Claude Pro Max for this kind of work?
Do you think it’s worth it, or would it be smarter to start with cheaper/free alternatives until I get more traction?
Thanks a lot for any insights 🙏
I've been using Claude since 3.5 through the API and Cursor and switched to Claude Code with the $200 max plan once they released it.
It was great and completely worth it, but now I'm not sure if it's still worth it and the main reasons are the following:
Claude is very good at agentic tooling but it's not as smart as GPT Codex for example. I find Codex to be very smart and many times it can fix issues that Claude can't but it's not optimal for daily use because it's very slow.
Now we have more models that work very similarly to Claude like GLM and MiniMax M2 so I tried the coding plan for GLM and it works very well. It's not as good as Claude to be honest but combining it with other models like Codex, Kimi2, etc. can make it very good.
There's no flagship model anymore. Opus is mostly useless because of how expensive it is, and it's not actually even smarter than Codex.
So probably GLM coding plan + Codex + Kimi2 thinking (and soon Gemini 3) is a better combo, and will also be much cheaper?