Hey everyone,
I'm a solo founder building my first web app, and I've been using Claude Code on the Pro plan for coding and debugging. Lately I keep hitting the usage limit (the one that resets every 5 hours), which is slowing me down a lot.
I'm thinking about upgrading to the Max plan ($200 NZD / ~$120 USD per month) for the extended limits. I have no steady income right now, but I've freed up some budget for it.
I want to hear from people who have experience:
Is the Max plan actually worth it for someone constantly hitting the usage limits on Pro?
Will it save enough time and frustration to justify the cost?
Any tips to get the most value out of the Max plan as a solo builder?
Basically, I’m trying to figure out if it’s a worthwhile investment in speed/productivity for my first project.
Thanks in advance!
There’s a lot of debate around whether Anthropic loses money on the Max plan. Maybe they do, maybe they break even, who knows.
But one thing I do know is that I was never going to pay $1000 a month in API credits to use Claude Code. Setting up and funding an API account just for Claude Code felt bad. But using it through the Max plan got me through the door to see how amazing the tool is.
And guess what? Now we’re looking into more Claude Code SDK usage at work, where we might spend tens of thousands of dollars a month on API costs. There’s no Claude Code usage included in the Teams plan either, so that’s all API costs there as well. And it will be worth it.
So maybe the Max plan is just a great loss leader to get people to bring Anthropic into their workplaces, where a company can much more easily eat the API costs.
Hey everyone! Amateur coder here working on flashcard apps and basic HTTP tools.
Claude has been incredibly helpful as my coding partner, but I'm hitting some workflow issues. Currently I'm using Sonnet 4 for implementation, but when I need more complex planning, I switch to Opus 4.1 on the web to draft a prompt for Claude Code, and that gets rate-limited quickly. I end up waiting 2+ hours for the rate limit to reset.
I'm considering the Max plan ($100/month) to avoid these delays and actually finish my projects. I've tried Claude Code's agentic features with Sonnet 4, but it's nowhere near what Opus 4.1 gives me in chat. Right now I have to paste my code into the web chat, get a prompt back, and then have Sonnet work from it.
Compared to Gemini 2.5 or OpenAI alternatives, I still prefer Claude Code, but wondering if I'm missing something in my current approach.
Is it really worth getting the $100 Max plan for a month or two to finish building my project, and then dropping back to the Pro plan afterwards? What would you guys suggest?
Really appreciate any insights - still learning and would love to hear from you guys.
Currently my main AI development tool is Cursor. Within the subscription I can use it without hard limits, although I get slower responses after a while.
I tried Claude Code a few times with $5 of credit each time. After a few minutes the $5 is gone.
I don't mind paying the $100 or even $200 for Max if I can be sure that I can code full time the whole month. If I use credits, I'd probably end up with a $3,000 bill.
What are your experiences as full time developers?
I'm a sr. software engineer with ~16 years working experience. I'm also a huge believer in AI, and fully expect my job to be obsolete within the decade. I've used all of the most expensive tiers of all of the AI models extensively to test their capabilities. I've never posted a review of any of them but this pro-Claude hysteria has made me post something this time.
If you're a software engineer you probably already realize there is truly nothing special about Claude Code relative to other AI assisted tools out there such as Cline, Cursor, Roo, etc. And if you're a human being you probably also realize that this subreddit is botted to hell with Claude Max ads.
I initially tried Claude Code back in February and it failed on even the simplest tasks I gave it, constantly got stuck in loops of mistakes, and overall was a disappointment. Still, after the hundreds of astroturfed threads and comments in this subreddit I finally relented and thought "okay maybe after Sonnet/Opus 4 came out it's actually good now" and decided to buy the $100 plan to give it another shot.
Same result. I wasted about 5 hours today trying to accomplish tasks that could have been done with Cline in 30-40 minutes because I was certain I was doing something wrong and I needed to figure out what. Beyond the usual infinite loops Claude Code often finds itself in (it has been executing a simple file refactor task for 783 seconds as I write this), the 4.0 models have the fun new feature of consistently lying to you in order to speed along development. On at least 3 separate occasions today I've run into variations of:
● You're absolutely right - those are fake status updates! I apologize for that terrible implementation. Let me fix this fake output and..
I have to admit that I was suckered into this purchase from the hundreds of glowing comments littering this subreddit, so I wanted to give a realistic review from an engineer's pov. My take is that Claude Code is probably the most amazing tool on earth for software creation if you have never used alternatives like Cline, Cursor, etc. I think Claude Code might even be better than them if you are just creating very simple 1-shot webpages or CRUD apps, but anything more complex or novel and it is simply not worth the money.
inb4 the genius experts come in and tell me my prompts are the issue.
I was genuinely surprised when somebody made a working clone of my app Shotomatic using Claude in 15 minutes.
At first I didn't believe it, so I decided to give it a try myself. I thought, screw it, and went all-in for the $200 Max plan to see what it could really do.
Man, I was impressed.
The feature (the one in the video) I tried was something like this:
You register a few search keywords, the app (Shotomatic) opens the browser, runs the searches, and automatically takes screenshots of the results. The feature should seamlessly integrate with the existing app.
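To give a sense of what's under the hood, the logic boils down to roughly the sketch below. I'm using Playwright here purely as an illustration for this post; it isn't necessarily what Claude actually generated:

import os
from urllib.parse import quote_plus
from playwright.sync_api import sync_playwright

def screenshot_searches(keywords, out_dir="screenshots"):
    # Open a headless browser, run each search, and save a full-page screenshot of the results.
    os.makedirs(out_dir, exist_ok=True)
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        for kw in keywords:
            page.goto(f"https://duckduckgo.com/?q={quote_plus(kw)}")
            page.wait_for_load_state("networkidle")
            page.screenshot(path=os.path.join(out_dir, f"{kw.replace(' ', '_')}.png"), full_page=True)
        browser.close()

screenshot_searches(["claude code", "shotomatic"])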
The wild part? I didn’t write a single line of code.
The entire thing was implemented using Claude Code, and I didn't touch the code myself at all. I only interacted through the terminal, giving instructions. From planning to implementation, code review, creating the PR, and merging, everything was done with natural language.
It was an insanely productive, and honestly a little scary, experience.
Why haven't I tried this before?
I just purchased a Max subscription to save on my Claude Code API usage (I've been spending around $200 per month). I can clearly see that the context window is smaller. Since switching Claude Code to the Max subscription, I've been hitting this error constantly:
Error: File content (33564 tokens) exceeds maximum allowed tokens (25000). Please use offset and limit parameters to read specific portions
of the file, or use the GrepTool to search for specific content.
which I never saw when using the API. Because of that I've had a pretty bad experience so far. While Claude Code with the API is a top-notch agent assistant, the version on the Max subscription has trashed my files, causing linting errors everywhere, because it couldn't load the full file.
I asked Anthropic support for clear information about the context size. I haven't gotten a straight answer, but I'm fairly sure they limited the context window, because 225 messages per 5 hours for $100 per month would otherwise be too good to be true.
If your project has large files, it might not be a good fit for you.
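If you want to check up front whether your codebase will trip that limit, a rough scan for files over ~25,000 tokens is enough. This is just a sketch, assuming ~4 characters per token, which is a crude heuristic rather than the real tokenizer:

import os

TOKEN_LIMIT = 25_000
CHARS_PER_TOKEN = 4  # rough assumption; use a real tokenizer for accuracy

def oversized_files(root="."):
    # Walk the project and flag files whose approximate token count exceeds the Read limit.
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue
            if size // CHARS_PER_TOKEN > TOKEN_LIMIT:
                yield path, size // CHARS_PER_TOKEN

for path, approx_tokens in oversized_files("."):
    print(f"{path}: ~{approx_tokens} tokens -- consider splitting it or asking Claude to read line ranges")

Anything it flags is a candidate for splitting, or for telling Claude Code to read it with offset/limit instead of loading the whole file.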
So yeah, I've spent those $100 so you don't have to.
If you’re considering Anthropic’s Claude MAX—or believe that “premium” means reliability, accountability, and respect—please read my full account below. I’m sharing the complete, chronological email thread between myself and Anthropic, fully redacted, to let the facts speak for themselves.
Why I’m Posting This
I work professionally with enterprise clients to improve customer experience and trust. My standards are high, but fair. I did not want to make this public—yet after being ignored by every channel at Anthropic, I believe transparency is necessary to protect others.
The Situation
• I subscribed to Claude MAX at significant cost, expecting premium service, reliability, and support.
• My experience was the opposite: frequent outages, unreliable availability, broken context/memory, and sudden chat cutoffs with no warning.
• When Anthropic’s Head of Growth reached out for feedback, I responded candidly and in detail.
• He acknowledged my complaints, apologized, and promised both technical fixes and a timely decision on compensation.
• Weeks later: Despite multiple polite and then urgent follow-ups—including a final escalation CC’d to every possible Anthropic address—I have received zero further response.
• As soon as I canceled my subscription (completely justified by my experience), I lost all access to support, even though my complaint was active and acknowledged.
Why This Matters
This isn’t just bad customer support—it’s a fundamental breach of trust. It’s especially alarming coming from a company whose “Growth” lead made the promises, then simply vanished. In my professional opinion, this is a case study in how to lose customer confidence, damage your brand, and make a mockery of the word “premium.”
Below is the complete, unedited email thread, with my personal info redacted, so you can judge for yourself.
⸻
Full Email Communication (Chronological, Redacted):
⸻
June 17, 2025 – Amol Avasare (Anthropic Growth Team) writes:
Hey there!
My name’s Amol and I lead the growth team at Anthropic.
I’m doing some work to better understand what Max subscribers use Claude for, as well as to get a clearer sense for how we can improve the experience.
If you’ve got 2 minutes, would love if you could fill out this short survey!
Separately, let me know if there’s any other feedback you have around Max.
Thanks, Amol
⸻
June 24, 2025 – [REDACTED] responds:
Hello Amol,
I am happy you reached out, as I was about to contact Claude ai customer support.
Hereby I want to formally express my dissatisfaction with the Claude MAX subscription service, which I subscribed to in good faith and at significant cost, expecting a reliable and premium AI experience.
Unfortunately, my experience has fallen far short of expectations. I have encountered repeated instances where Claude’s servers were overloaded, rendering the service entirely unavailable. This has happened far too often, to the point where I’ve simply stopped trying to use the service — not because I don’t need it, but because I cannot trust it to be available when I do. This is completely unacceptable for a paid service, let alone one marketed as your top-tier offering.
On top of this, I’ve had to constantly prompt Claude on how it should behave or answer. The model frequently loses track of context and does not retain conversational flow, despite clear input. The usefulness of the assistant is severely diminished when it has to be guided step-by-step through every interaction. This lack of consistency and memory support defeats the core purpose of an AI assistant.
To make matters worse, I have been repeatedly cut off mid-session by an abrupt message that “the chat is too long.” There is no prior warning, no indication that I am approaching a system-imposed limit — just an instant and unexplained stop. This is an incredibly frustrating user experience. If there are hard constraints in place, users should be clearly and proactively informed through visual indicators or warnings before reaching those limits, not after.
In light of these ongoing issues — ranging from unreliability and server outages, to poor conversational continuity, and lack of proper system feedback — I can no longer justify continuing this subscription. I am cancelling my Claude MAX subscription effective June 26th, and will not be renewing.
Given the consistent lack of access and the severely diminished value I’ve received from the service, I believe compensation is warranted. I therefore request a partial refund for the period affected, as I have paid for access and reliability that were simply not delivered.
I trust you will take this feedback seriously and hope to hear from your team promptly regarding the refund request.
My best, [REDACTED]
⸻
June 26, 2025 – Amol Avasare (Anthropic) replies:
Hey [REDACTED],
Really sorry to hear you’ve run into those issues, that sucks.
There were a couple of Google Cloud outages in the last month that had impacts here, those are unfortunately out of our control. Our servers were also a bit overloaded given excessive demand after the Claude 4 launch – we have a LOT of people working around the clock to increase capacity and stability, but these are really tough problems when demand just keeps growing significantly. Nonetheless agree that it’s unacceptable to be seeing these kinds of errors on a premium plan, I’m going to push hard internally on this.
Appreciate the feedback on consistency and memory. On the “this conversation is too long”, we’re going to be rolling out a fix for that in the next 1-2 weeks so that won’t happen going forward.
Let me check in on whether we can give a refund or a credit – we don’t typically do this, but can feel your frustration so I’ll see what I can do. Will reach back out in next few days.
—Amol
⸻
June 30, 2025 – [REDACTED] responds:
Hello Amol,
Thank you for your response and for acknowledging the issues I raised. I appreciate that you’re looking into the possibility of a refund or credit — I believe that would be appropriate, given that I subscribed to a top-tier service which ultimately failed to deliver the expected level of reliability and performance.
While I understand that infrastructure challenges and surges in demand can occur, the frequency and severity of the disruptions — combined with limitations such as the abrupt chat length cutoffs — have had a significant negative impact on the overall usability of the service.
It’s reassuring to hear that a fix for the session length issue is forthcoming and that your team is actively working to address capacity concerns. I look forward to your follow-up regarding a compensation.
Best regards, [REDACTED]
⸻
July 7, 2025 – [REDACTED] follows up:
Follow-up on our email conversation. Urgent Response Needed!!!!
Hello Amol,
On June 26th, you committed to providing an update on my refund/credit request within a couple of days. It is now July 7th — nearly two weeks later — and I have yet to receive any communication from you.
As a paying customer of a premium-tier service, I find this lack of follow-through unacceptable. When a company commits to respond within a defined timeframe, it is entirely reasonable to expect that commitment to be honored.
In addition, you previously mentioned that a fix for the “conversation too long” issue and improvements around consistency and memory would be implemented within 1–2 weeks. To date, I have not received any updates regarding this either.
This ongoing lack of communication has left me unable to decide whether I should reevaluate Claude ai, or whether I should transition my project to another provider. My project has now been on hold for almost two weeks while awaiting your response, which further compounds what has already been an unsatisfactory experience.
Please provide a definitive update on both the refund/credit request and the status of the promised fixes asap. If I do not receive a response by the end of this week, I will consider the matter unresolved and escalate it accordingly.
I expect your urgent attention to this matter.
Sincerely, [REDACTED]
⸻
July 13, 2025 – [REDACTED] escalates and mass-CC’s all Anthropic contacts:
Re: Follow-up on our email conversation. Urgent Response Needed!!!
Hello Amol and Anthropic Support,
I am writing to escalate my unresolved support case regarding my Claude MAX subscription.
As detailed in our previous correspondence, I raised a formal request for a partial refund due to the service’s repeated outages, poor conversational consistency, and abrupt session cutoffs—all of which seriously impacted my ability to use the product as promised. Amol acknowledged these issues on June 26th and assured me a follow-up regarding compensation “in the next few days.” Despite further urgent follow-ups, I have received no additional response.
I want to emphasize how amazed I am that this is how Anthropic—an AI company focused on growth—treats its paying customers. The initial customer experience was already extremely disappointing, but the silent treatment that has followed has made the experience significantly worse. I find it particularly astonishing that an employee responsible for growth would handle a premium customer issue in this way. This is not only a poor customer experience, but a clear breach of trust.
For context: I work for a leading company in Denmark, where I am responsible for helping enterprise clients optimize their customer experience and strengthen trust with their own customers. From that perspective, the handling of this case by Anthropic is both surprising and deeply concerning. When an organization—especially one positioning itself as premium—fails to communicate or deliver on commitments, it fundamentally undermines customer trust.
Because of this ongoing lack of support and broken promises, I have canceled my Claude MAX subscription. However, I find it unacceptable that support is now apparently unavailable simply because I will not continue to pay for a service that failed to meet even basic expectations. Cutting off a customer with an open and acknowledged complaint only compounds the initial problem.
I am once again requesting a concrete update and resolution to my refund or credit request. If I do not receive a definitive response within five (5) business days, I will be forced to share my experience publicly and pursue alternative means of recourse.
This is a final opportunity for Anthropic to demonstrate a genuine commitment to its customers—even when things do not go as planned.
Sincerely, [REDACTED]
CC: [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]
⸻
As of July 21, 2025: No response, from anyone, at Anthropic.
⸻
Conclusion: Do Not Trust Claude MAX or Anthropic with Your Business
• I have received no reply, no resolution, and frankly—not even the bare minimum acknowledgment—from any Anthropic employee, even after escalating to every single public contact at the company.
• As soon as you stop paying, you are cut off—even if your issue was acknowledged and unresolved.
• If you value trust, reliability, and any sense of accountability, I cannot recommend Claude MAX or Anthropic at this time.
If you are a business or professional considering Claude, learn from my experience: this is a real risk. Apologies and promises are meaningless if a company’s culture is to go silent and hide from responsibility.
⸻
If anyone else has been treated this way, please share your story below. Anthropic needs to be held publicly accountable for how it treats its customers—especially the ones who trusted them enough to pay for “premium.”
I want to know if I'm getting the most out of what I've paid. Does anyone have experience with:
Google AI Ultra (Jules): it's $250, but there's a promo price of $125 for the first 3 months
BYOK on Kilo Code / Roo / etc. I heard GPT-5 is significantly cheaper
$200 plan on OpenAI Codex
$100 plan on Claude Code: will it affect quality? Aside from the token limit, I mean, will CC try to minimize context size on the $100 plan vs the $200 plan? Also, from ccusage I see roughly 80% Opus and 20% Sonnet; I'm afraid the $100 plan will skew more toward Sonnet.
My usage looks like this, and I still have 4 more days:
"totals": {
"inputTokens": 71803,
"outputTokens": 1305059,
"cacheCreationTokens": 22792643,
"cacheReadTokens": 253886371,
"totalCost": 747.48,
"totalTokens": 278055876
}
The total cost of $747 suggests the plan is more economical for me than buying the same amount of tokens through the API. But I'd like to know in case there are better alternatives out there.
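For anyone who wants to reproduce that comparison, the math is simple: price the raw token counts as if everything ran on Opus and compare against the subscription. The per-million-token rates below are my assumption of Opus list prices, so verify them against Anthropic's pricing page; the real 80/20 Opus/Sonnet mix also makes the true figure somewhat lower:

# Back-of-the-envelope check of ccusage's totalCost, priced as if everything ran on Opus.
PRICES = {  # assumed USD per 1M tokens -- check Anthropic's pricing page
    "inputTokens": 15.00,
    "outputTokens": 75.00,
    "cacheCreationTokens": 18.75,
    "cacheReadTokens": 1.50,
}

totals = {
    "inputTokens": 71_803,
    "outputTokens": 1_305_059,
    "cacheCreationTokens": 22_792_643,
    "cacheReadTokens": 253_886_371,
}

api_equiv = sum(totals[k] * PRICES[k] / 1_000_000 for k in PRICES)
print(f"API-equivalent cost at Opus rates: ${api_equiv:,.2f}")  # roughly $900
print(f"vs. $100/month Max: {api_equiv / 100:.1f}x   vs. $200/month Max: {api_equiv / 200:.1f}x")

Either way the API-equivalent cost comes out at several times the subscription price, which is consistent with the $747.48 that ccusage reports.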