Built this today: Claude Code for both doing the data analysis from raw docs and building the interface to make it useful. Will be open-sourcing this soon.
Hey everyone! Amateur coder here working on flashcard apps and basic HTTP tools.
Claude has been incredibly helpful as my coding partner, but I'm hitting some workflow issues. Currently I use Sonnet 4 for implementation, but when I need more complex planning, I switch to Opus 4.1 on the web to generate a Claude Code prompt, and that gets rate-limited quickly. I end up waiting 2+ hours for limits to reset.
I'm considering the Max plan ($100/month) to avoid these delays and actually finish my projects. I've tried Claude's agentic features with Sonnet 4, but it's nowhere near what Opus 4.1 gives me in chat. So I end up pasting my code into the web chat, getting a prompt back, and having Sonnet work from that.
Compared to Gemini 2.5 or OpenAI alternatives, I still prefer Claude Code, but wondering if I'm missing something in my current approach.
Is it really worth getting the $100 Max plan for a month or two to finish building my project, then dropping back to Pro afterwards? What would you guys suggest?
Really appreciate any insights - still learning and would love to hear from you guys.
The latest Claude Code is now officially allowed to be used with Claude Max, no more burning API tokens.
0.2.96
Claude Code can now also be used with a Claude Max subscription (https://claude.ai/upgrade)
https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md
Seems Anthropic wants to push Claude Code as an alternative to tools like Cursor, and push their Max subscription along with it. Maybe one day it'll be merged into Claude Desktop.
Edit/Update: more information here:
https://support.anthropic.com/en/articles/11145838-using-claude-code-with-your-max-plan
Sorry to bring up this topic again.
But I've noticed the rate limits now track much closer to the actual API costs. I'm on Max $200. For power users: how much usage are you getting out of Max $100/$200 compared to what it would cost on the API?
Hi everyone, I'm a developer who has been using Claude Code Max ($200 plan) for 3 months now. With renewal coming up on the 21st, I wanted to share my honest experience.
Initial Experience (First 1-2 months): I was genuinely impressed. Fast prototyping, reasonable code architecture, and great ability to understand requirements even with vague descriptions. It felt like a real productivity booster.
Recent Changes I've Noticed (Past 2-3 weeks):
Performance degradation: Noticeable drop in code quality compared to earlier experience
Unnecessary code generation: Frequently includes unused code that needs cleanup
Excessive logging: Adds way too many log statements, cluttering the codebase
Test quality issues: Generates superficial tests that don't provide meaningful validation
Over-engineering: Tends to create overly complex solutions for simple requests
Problem-solving capability: Struggles to effectively address persistent performance issues
Reduced comprehension: Missing requirements even when described in detail
Current Situation: I'm now spending more time reviewing and fixing generated code than the actual generation saves me. It feels like constantly code-reviewing a junior developer's work rather than having a reliable coding partner.
Given the $200/month investment, I'm questioning the value proposition and currently exploring alternative tools.
Question for the community: Has anyone else experienced similar issues recently? Or are you still having a consistently good experience with Claude Code?
I'm genuinely curious if this is a temporary issue or if others are seeing similar patterns. If performance improves, I'd definitely consider coming back, but right now I'm not seeing the ROI that justified the subscription cost.
Hey everyone,
I’m a solo founder building my first web app and I’ve been using Claude Code Pro for coding and debugging. Lately, I’ve been constantly hitting the 5-hour daily usage limit, which is slowing me down a lot.
I’m thinking about upgrading to the Max plan ($200 NZD / ~$120 USD per month) for unlimited/extended access. I have no steady income right now, but I’ve freed up some budget.
I want to hear from people who have experience:
Is the Max plan actually worth it for someone hitting daily limits on Pro?
Will it save enough time and frustration to justify the cost?
Any tips to get the most value out of the Max plan as a solo builder?
Basically, I’m trying to figure out if it’s a worthwhile investment in speed/productivity for my first project.
Thanks in advance!
Hey guys, I'm contemplating buying the $100 per month Max plan, but I'm just confused about a few details.
When they say "Send approximately 50-200 prompts with Claude Code every 5 hours", does the number of messages you can send depend on the amount of traffic Anthropic is getting at the moment, or on the complexity of each prompt?
I have read in a few Reddit threads that some people have experienced lower context limits with Max as opposed to PAYG (where they weren't hitting the context limit anywhere near as fast for the same project). Have you guys experienced this yourself? If so, is this only a problem with the $100/mo or does it exist in the $200/mo plan as well?
Also, just to make extra sure, the 50 - 200 prompts every 5 hours don't include prompts Claude sends to sub agents or prompts it sends itself when thinking right?
Thanks, appreciate it
I've been using Claude since 3.5 through the API and Cursor and switched to Claude Code with the $200 max plan once they released it.
It was great and completely worth it, but now I'm not sure if it's still worth it and the main reasons are the following:
Claude is very good at agentic tooling, but it's not as smart as OpenAI's Codex, for example. I find Codex to be very smart, and many times it can fix issues that Claude can't, but it's not optimal for daily use because it's very slow.
Now we have more models that work very similarly to Claude like GLM and MiniMax M2 so I tried the coding plan for GLM and it works very well. It's not as good as Claude to be honest but combining it with other models like Codex, Kimi2, etc. can make it very good.
There's no flagship model anymore. Opus is mostly useless because of how expensive it is and actually it's not even smarter than Codex.
So probably GLM coding plan + Codex + Kimi2 thinking and soon Gemini 3 is a better combo and will also be much cheaper?
I've mostly been using Chinese models, Copilot, and Cursor up to this point, but decided I would bite the bullet and try Claude Code with Claude Max as people say it performs better than other tools, even other tools using Claude models.
I was wondering if there's a way to get the most out of Claude. I already have some things set up like superclaude, spec-kit, and BMAD. I'm wondering if there's anything else I should know about. I haven't played with hooks yet and am curious what people use them for.
With the way Claude Code has been heading lately, I figured I'd throw some thoughts (rant?) into the mix of discussions going around here. First off, I'll get this out of the way... I think everyone should still be using the 20x Max plan if they still see enough value to warrant the $200/mo cost. If the answer is yes, then keep it until that's no longer true, simple as that.
I guess my larger point is that we can all see the writing on the wall here...first we get random, unpublished restrictions in the existing $200/mo plan, now there are rumors of potential weekly caps. It's not headed in the best direction and I think there's a world where they introduce a $500/mo 40x plan or something wild.
I think many people (correctly) assumed that offering the $200/mo plan was a loss leader meant to drive adoption, which it definitely has. That said, I think it's important we don't tie every single one of our workflows directly to CC and "depend" on it to produce work, similar to a vendor lock-in situation of sorts. It'll be that much more painful if you need to fully switch later.
So here are some random thoughts I've had after trying things out, hopefully they're clear and resonate a bit otherwise I'll have to rewrite it all using AI (...just joking):
Now is the time to be experimenting with different workflows, not when the rug gets pulled from under you. Another great benefit of experimenting now is that you can directly compare output results from new workflows with your existing Claude Code ones to see how well they work / can work.
Opus gets all the love, but truthfully Sonnet is really not that bad if you take some time to prompt correctly and put in even a little bit of effort. Opus just makes it easy to be lazy with our prompts because it works so well. Ex: using `ultrathink` with a well thought out prompt on Sonnet will absolutely surprise you; the results are typically great. Going down this path can quickly mean you may not need the $200/mo plan if you're leveraging Sonnet with more explicit prompting (plus it's a good thing to practice anyway...). Worth a shot imo.
Try other tools. I'm not talking Cursor, we've all been (or are) there...that's a whole different rant. I'm talking things like Gemini CLI or even open source Grok CLIs that are gaining traction. They may not be great yet, but again, it gets you trying other options and workflows. Plus with the rate of change happening, one of those tools may be the new leader in a months time. Gemini CLI is already getting better reviews from when it first launched, as an example.
Try other models entirely. Tools like OpenRouter make it easy to connect other models even within your Claude Code workflow if you don't want to switch entirely from how you work currently. One good example gaining traction lately is Qwen3. You can also just use Qwen3-Coder directly if you don't want to set up OpenRouter. Point is... try out new models. They might not be perfect yet or even all that equivalent, but it gets you ahead of the game and more aware of what's out there.
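For anyone wanting to try the router route: Claude Code reads a few environment variables for pointing at an Anthropic-compatible endpoint. A minimal sketch; the proxy URL, token, and model name below are placeholders, and the exact variables honored can vary by Claude Code version, so check the docs first:

```shell
# Point Claude Code at an Anthropic-compatible proxy/router instead of
# the default Anthropic endpoint. All values here are placeholders.
export ANTHROPIC_BASE_URL="https://my-router.example.com"
export ANTHROPIC_AUTH_TOKEN="sk-placeholder-token"
export ANTHROPIC_MODEL="qwen3-coder"   # whatever model the router serves
# then launch as usual:  claude
```

Unsetting the variables drops you back to the stock setup, so it's easy to A/B the same task against different models.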
Anyway, this turned into a bit of a ramble, but my overall tl;dr point is: don't get stagnant in your workflows, things change quickly. How you're developing and producing code today may look 100% different a month from now, and that's fine. You're better off experimenting and staying ahead than trying to play catch-up later.
I ramble a lot about workflows and experiments on X if that interests you as well, or if you just generally want to connect because you're doing the same.
I have been using Claude Code for a long time, practically from the beginning, and it has completely changed the way I use AI. I don't know much about code, but since AI is doing well with programming, I started creating a couple of applications, at first to automate things for myself and then to streamline things at home. Claude Code, Sonnet 4, and Opus helped me a lot to develop technical skills, and thanks to them I have things like automatic opening and closing of blinds and alerts when the smoke detectors go off. Home lab and smart home are a big area of activity and possibility for me.
Although I sometimes hit limits, I used Opus and Sonnet intensively. I didn't complain much, because I'd reach the limits at most an hour before the next 5-hour session. Things started to break down when weekly limits were introduced. The limits fluctuated terribly: sometimes it was better (though never like before the weekly limits), and sometimes it was so bad that a 5-hour session's limit was exhausted after 1 hour. Neither my plan nor the way I use it changed. The last 2 weeks have been tragic: after about 3 days I had used up the entire weekly limit. If the Anthropic team says the limits haven't changed, then to me that is a plain lie; with the same habits and the same usage patterns, the limits shouldn't change so drastically.
I'll get to the main point, so as not to write too much. I've been testing Codex for a week on the usual $20 plan.
For 4 days I used Codex about as heavily as I'd used Claude, and only on the 4th day did I hit a limit. And that wasn't on the cheapest model available; I mostly used the better ones. Codex has its downsides, but they can all be worked around, and it can be set up to achieve accuracy similar to Claude's; in some cases Codex does better.
I know OpenAI is probably losing a lot of money on this, and it probably won't last very long, but even if they make the limits 2 or 3 times worse, it will still be better than Claude, which on a $200 plan can cut off access after a single day. ChatGPT's $20 plan, and even more so the $200 plan, is worth the money, unlike Claude, which was great in the beginning and has since deteriorated.
Anthropic is going the way of Cursor, and that's not a good way, because Cursor blatantly scams people, changes limits every other day, and deliberately worsens model performance through its own layer just to cut costs.
At this point I am switching from Claude to Codex, and I will gladly pay them $200 if necessary rather than pay $200 to Claude, which doesn't seem to care about its users.
And all because of the stupid weekly-cap decision. It would have been enough to permanently ban those who ran the limits 24 hours a day all week and overtaxed the resources, and give honest users full freedom. But of course, because of some idiots who bragged here and made videos of Claude working on its own 24 hours a day, Anthropic had to add a weekly limit. As far as I'm concerned, they seized the moment to limit access for everyone because maintenance was too expensive, and the abuse was just an excuse to implement limits.
Sonnet 4.5 will not save the situation, and if this continues, OpenAI will attract more users than Anthropic. Personally, I feel cheated, because I pay so much and hit the limit after a single day, without any notice that the limits were changing.
And if not OpenAI, there are Chinese models to choose from at a good price, or even for free.
Time to wake up and be competitive.
I’m running Claude Code with the Max $200 plan. I used to be able to run a single window for roughly the whole five hours before running out of context. But for the past 2 days, I’ve only gotten about an hour, and then I have to wait 4. My plan hasn’t changed. It’s not an especially large codebase. I’m not doing anything crazy.
Is there some cache that needs to be cleared, or something I should make sure is not in my Claude.md file? Tips/hints/suggestions? At 1 hour out of every 5 this is unusable. :-(
UPDATE: it was a misconfigured hook. When I removed it, everything returned to normal. (Phew!) Lots of useful suggestions in the thread — thanks all!
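For anyone hitting something similar: Claude Code hooks live under the `hooks` key in `.claude/settings.json`, and a hook whose command produces a lot of output, or fires on every tool call, can quietly burn context each time. A minimal sketch of the documented shape; the matcher and script path here are made up for illustration:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "./scripts/lint-changed.sh" }
        ]
      }
    ]
  }
}
```

If usage suddenly craters, temporarily emptying this section is a quick way to rule hooks in or out as the cause.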
Over the past few days me and Gemini have been working on pseudocode for an app I want to do. I had Gemini break the pseudocode in logical steps and create markdown files for each step. This came out to be 47 md files. I wasn't sure where to take this after that. It's a lot.
Then I signed up for Claude code with Max. I went for the upper tier as I need to get this project rolling. I started up pycharm, dropped all 45 md files from gemini and let Claude Code go. Sure, there were questions from Claude, but in less than 30 mins I had a semi-working flask app. Yes, there were bugs. This is and should be expected. Knowing how I would handle the errors personally helped me to guide Claude to finding the issue.
It was an amazing experience and I appreciate the CLI. If this works out how I hope, I'll be canceling my subscriptions to other AI services. Don't get me started on the AI services I've tried. I'm not looking for perfection. Just to get very close.
I would highly suggest looking into Claude code with a max subscription if you are comfortable with the CLI.
Anthropic has some secret something that makes it dominant in the coding world. I tried others, but always need to rely on 3.7. I'll probably keep my gemini sub but I'm canceling all others.
Sorry for the lengthy post.
I'm a sr. software engineer with ~16 years working experience. I'm also a huge believer in AI, and fully expect my job to be obsolete within the decade. I've used all of the most expensive tiers of all of the AI models extensively to test their capabilities. I've never posted a review of any of them but this pro-Claude hysteria has made me post something this time.
If you're a software engineer you probably already realize there is truly nothing special about Claude Code relative to other AI assisted tools out there such as Cline, Cursor, Roo, etc. And if you're a human being you probably also realize that this subreddit is botted to hell with Claude Max ads.
I initially tried Claude Code back in February and it failed on even the simplest tasks I gave it, constantly got stuck in loops of mistakes, and overall was a disappointment. Still, after the hundreds of astroturfed threads and comments in this subreddit I finally relented and thought "okay maybe after Sonnet/Opus 4 came out its actually good now" and decided to buy the $100 plan to give it another shot.
Same result. I wasted about 5 hours today trying to accomplish tasks that could have been done with Cline in 30-40 minutes because I was certain I was doing something wrong and I needed to figure out what. Beyond the usual infinite loops Claude Code often finds itself in (it has been executing a simple file refactor task for 783 seconds as I write this), the 4.0 models have the fun new feature of consistently lying to you in order to speed along development. On at least 3 separate occasions today I've run into variations of:
● You're absolutely right - those are fake status updates! I apologize for that terrible implementation. Let me fix this fake output and..
I have to admit that I was suckered into this purchase from the hundreds of glowing comments littering this subreddit, so I wanted to give a realistic review from an engineer's pov. My take is that Claude Code is probably the most amazing tool on earth for software creation if you have never used alternatives like Cline, Cursor, etc. I think Claude Code might even be better than them if you are just creating very simple 1-shot webpages or CRUD apps, but anything more complex or novel and it is simply not worth the money.
inb4 the genius experts come in and tell me my prompts are the issue.
It just works.
No awkward small talk, no endless friction. I chat with it like I'd talk to a real teammate.
Complete thoughts, half-baked ideas, even when I’m pissed off and rambling. No need to rephrase everything like I’m engineering a scientific prompt. It gets it. Then it builds.
I dropped Claude for a couple months when the quality dipped (you probably noticed it too). Tried some alternatives. Codex was solid when it first came out, but something was missing. Maybe it was the slower pace, or just how much effort it took to get anywhere. Nothing gave me the same sense of momentum I’d had with Claude.
Fast-forward to this week: my Claude membership lapsed on the 1st. Cash flow has been tight approaching Christmas, so I held off renewing the Max plan.
In the meantime, I leaned on Cursor (which I already pay for), Google's Antigravity, and Grok's free model via Cursor, spreading out my options to keep things moving. All useful in their way. But I was neck-deep in a brutal debugging session on an issue that demanded real understanding and iteration, using Codex and GPT-5.1 (via Cursor Plus, with full access to everything).
Should've been plenty. Nope. It killed my momentum: it told me something flat-out couldn't be done, multiple times. I even pointed it to the exact docs proving it could. Still, pushback. Slow, and weirdly toned.
This wasn't a one-off; new chats, fresh prompts, every angle I could try. The frustration built fast. I don't have time for essay-length prompts just to squeeze out a single non-actionable answer or some poetic, robotic deflection.
On Cursor, the "Codex MAX High Reasoning" model, supposedly their top tier and free for a limited time? Sick, right? Ha, far from it. It feels like arguing with a smiling bureaucrat who insists you're wrong (in this specific case). Endless back-and-forth, "answers" instead of solutions.
Look, I've been deep in this AI-for-dev workflow for a year now; there are no more one-offs or other models left to try in this space. The differences are crystal clear. The fix for my two-hour headache? Cursor's free Auto mode. No "frontier model" hype, no hand-holding. I was just fed up, flipped it on, and boom: it spotted the issue and nailed it. First try.
That was the breaking point. Thought about the last few weeks with my basic GPT sub on my phone for daily use: it ain’t the same.
I’ve cycled through them all: Claude, Codex, GPT-5.1, Cursor’s party pack, Gemini, Grok. Each shines in their own way.
Gemini's solid but bombs on planning and tool use, and gets stuck in loops constantly. GPT is cringe; that's the only way I can put it. Grok is fire for speed and unfiltered chats.
When you're building and can't afford to micromanage your AI? Claude reacts. It helps. Minimal babysitting required. Meanwhile, GPT-5.1? It won't generate basic stuff half the time. It used to crank out full graphics, life advice, whatever; now it dodges questions it once handled effortlessly. (The refusal policy creep is absurd.) Even simple tasks are hit or miss. No flow, just this nagging sense it's trapped in an internal ethics loop. The inconsistency has tanked my trust. It's too good at sounding confident now, which makes the letdowns sting more. One case: instead of fixing the obvious code smell staring it in the face, it'll spit back, "I added a helper to your x.ts file so that bla bla bla." Cute, but solve the damn problem instead of acting like this is normal.
Yeah, it’s evolving, they all are. but after testing everything, Claude’s still the undisputed king for coding. (Speech aside: I stick with GPT-4o for brainstorming; it’s weirdly less locked down than 5.1 and crushes creativity.)
Bottom line: Claude isn't flawless, and this isn't some promo speech or AI rat race hype. But from everything this past year, for anyone who's interested in the differences, or who needs a partner that moves with you instead of against you: it's Claude every time. So yeah, I'm renewing. And I'll keep paying unless something truly better crashes the party.
Cheers Anthropic, renewing my membership feels like Christmas lol.
I’ve been leveraging Sonnet 4 on the Pro plan for the past few months and have been thoroughly impressed by how much I’ve been able to achieve with it. During this time, I’ve also built my own MCP with specialized sub-agents: an Investigator/Planner, Executor, Tester, and a Deployment & Monitoring Agent. It all runs via API with built-in context and memory handling to gracefully resume when limits are exceeded.
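The "gracefully resume when limits are exceeded" part of a setup like this usually boils down to retry-with-exponential-backoff around each API call. A generic sketch; the function name and delays are my own illustration, not taken from the poster's project:

```shell
# Generic retry-with-exponential-backoff wrapper for any command that
# can fail transiently (e.g. an API call hitting a rate limit).
# The attempt count and delays are illustrative.
retry_with_backoff() {
  max_attempts=5
  delay=1
  attempt=1
  while [ "$attempt" -le "$max_attempts" ]; do
    if "$@"; then
      return 0                  # command succeeded
    fi
    echo "attempt $attempt failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))        # exponential backoff: 1, 2, 4, 8...
    attempt=$((attempt + 1))
  done
  return 1                      # gave up after max_attempts
}
```

Usage is just `retry_with_backoff some_command arg1 arg2`; a fuller version would also persist in-flight state so a resumed run can pick up where it left off.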
I plan to open-source this project once I add a few more features.
Now I'm considering upgrading to the Max plan. I also have the Claude Code CLI, which lets me experiment with prompts to simulate sub-agent workflows, and a claude.md with JSON to add context and memory. Is it worth making the jump? My idea is to use Opus 4 specifically as a Tester and Monitoring Agent to leverage its higher reasoning capabilities, while continuing to rely on Sonnet for everything else.
Would love to hear thoughts or experiences from others who’ve tried a similar setup.
I've been using Claude Code for two months and had never hit the limit. But yesterday it stopped working and gave me a 4-day cooldown. If the limit resets every 5 hours, why a 4-day cooldown? I tried usage-based pricing, and it charged $10 in 10 minutes. Is there something wrong with the new update of Claude Code?
Has anyone tried it?
I use cursor.ai heavily, and frankly, lately I've been using repomix (an npm package that packs code into XML files) to wrap parts of my code and paste them into Google AI Studio.
I’ve been trying to give Claude Code a fair shot, especially after all the hype around the new model. But honestly, it’s been a complete letdown for me.
I was excited about the supposed improvements, but the consistency just isn’t there. Half the time I end up asking Codex or manually fixing things myself because Claude either breaks the logic, refuses to fix it properly, or just gives vague suggestions that don’t work.
For something priced at a “premium” level, it’s not delivering. I wanted this to be my main coding assistant, but after weeks of frustration, I’m officially done with it. Total waste of time and money.
Maybe it works for others, but for me, I'm out for good.