This Megathread is a continuation of the discussion of your thoughts, concerns and suggestions about the changes involving the Weekly Usage Limits implemented alongside the recent Claude 4.5 release. Please help us keep all your feedback in one place so we can prepare a report for Anthropic's consideration about readers' suggestions, complaints and feedback. This also helps us to free the feed for other discussion. For discussion about recent Claude performance and bug reports, please use the Weekly Performance Megathread instead.
Please try to be as constructive as possible and include as much evidence as possible. Be sure to include what plan you are on. Feel free to link out to images.
Recent related Anthropic announcement: https://www.reddit.com/r/ClaudeAI/comments/1ntq8tv/introducing_claude_usage_limit_meter/
Original Anthropic announcement here: https://www.reddit.com/r/ClaudeAI/comments/1mbo1sb/updating_rate_limits_for_claude_subscription/
Anthropic's update on usage limits post here: https://www.reddit.com/r/ClaudeAI/comments/1nvnafs/update_on_usage_limits/
Last week's Megathread: https://www.reddit.com/r/ClaudeAI/comments/1nu9wew/usage_limits_discussion_megathread_beginning_sep/
Megathread's response to Anthropic's usage limits update post here:
https://www.reddit.com/r/ClaudeAI/comments/1o1wn34/megathreads_response_to_anthropics_post_update_on/
I'm running Claude Code with the Max $200 plan. I used to be able to run a single window for roughly the whole five hours before running out of context. But for the past 2 days, I've only gotten about an hour, and then I have to wait 4. My plan hasn't changed. It's not an especially large codebase. I'm not doing anything crazy.
Is there some cache that needs to be cleared, or something I should make sure is not in my Claude.md file? Tips/hints/suggestions? At 1 hour out of every 5 this is unusable. :-(
UPDATE: it was a misconfigured hook. When I removed it, everything returned to normal. (Phew!) Lots of useful suggestions in the thread, thanks all!
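If you suspect something similar, a quick first check is to look for hook definitions in your Claude Code settings. This is only a rough sketch, assuming the standard settings locations (user-level ~/.claude/settings.json plus project-level .claude/settings.json and .claude/settings.local.json); adjust the paths if yours differ.

    # Rough check for hook definitions that could fire on every tool call
    # and inflate token usage; paths assume the standard Claude Code settings files.
    for f in ~/.claude/settings.json .claude/settings.json .claude/settings.local.json; do
      [ -f "$f" ] && echo "== $f ==" && grep -n -A 5 '"hooks"' "$f"
    done

Anything listed there runs alongside your normal prompts, so a hook whose output gets fed back into the session can quietly add a lot of context per turn.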
I have been using Claude Code for a long time, practically since it was released, and it has completely changed the way I use AI. I don't know much about code, but since AI handles programming well, I started building a couple of applications to automate things for myself and streamline things at home. Claude Code, Sonnet 4 and Opus helped me a lot to develop technical skills, and thanks to them I have things like automatic opening and closing of blinds and alerts when the smoke detectors trigger; home lab and smart home work is a big area of activity and possibility.
Although I sometimes hit limits, I used Opus and Sonnet intensively. I didn't complain much, because the limits were reached at most an hour before the next 5-hour session. Things started to break down when weekly limits were introduced. Limits fluctuated terribly: sometimes it was better (though never like before weekly limits), sometimes it was so bad that a 5-hour session's limit was used up after 1 hour. My plan didn't change, and neither did the way I use it. The last 2 weeks have been awful, because after about 3 days I used up the entire weekly limit. If the Anthropic team says it is not changing the limits, then to me that is a plain lie; it is impossible to keep the same habits and use it the same way and still have the limits change this drastically.
I'll get to the main point, so as not to write too much. I've been testing Codex for a week on the usual $20 plan.
For 4 days I used Codex much the same way I used Claude Code, and only on the 4th day did I hit a limit. And not on the cheapest model available; I usually used the better ones. Codex has its downsides, but they can all be worked around and configured to reach accuracy similar to Claude's, and in some cases Codex does better.
I know that OpenAI is probably losing a lot of money on this, and it probably won't last very long, but even if they make the limits 2 or 3 times worse, it will still be better than Claude, which can cut off access after 1 day even on a $200 plan. ChatGPT's $20 plan, and even more so its $200 plan, is worth the money, unlike Claude, which was great in the beginning and has now deteriorated.
Anthropic is going the way of Cursor, and that's not a good way: Cursor blatantly scams people, changes limits from one day to the next, and deliberately degrades model performance through its own layer just to cut costs.
At this point I am switching from Claude to Codex, and if necessary I will gladly pay them $200 rather than pay $200 to Claude, which doesn't seem to care about its users.
And all because of the stupid decision to introduce weekly caps. It would have been enough to permanently ban those who ran it 24 hours a day all week and overtaxed the resources, and give honest users full freedom. But of course, because of some idiots who bragged here and posted videos of Claude working on its own 24 hours a day, Anthropic had to impose a weekly limit. As far as I'm concerned, they seized the moment to restrict access for everyone because it was too expensive to run, and that was just an excuse to implement limits.
Sonnet 4.5 will not save the situation, and if it goes on like this, OpenAI will gather more users than Anthropic. Personally, I feel cheated because I pay so much and hit the limit after only 1 day, without any notice that the limits were changing.
And if not OpenAI, there are Chinese models to choose from at a good price, or even for free.
Time to wake up and be competitive.
This Megathread is to discuss your thoughts, concerns and suggestions about the changes involving the Weekly Usage Limits. Please help us keep them all in one place so we can prepare a report for Anthropic's consideration about readers' feedback. This also helps us to free the feed for other discussion.
Announcement details here: https://www.reddit.com/r/ClaudeAI/comments/1mbo1sb/updating_rate_limits_for_claude_subscription/
UPDATE (August 6): Usage Limits Discussion Report now available: https://www.reddit.com/r/ClaudeAI/comments/1mj0eyf/usage_limits_megathread_discussion_report_july_28/
UPDATE: I switched to Claude Code CLI and the token consumption is now way more reasonable.
After I hit the same frustrating wall with Claude Desktop + the filesystem MCP, someone recommended trying Claude Code instead.
What changed:
No need for the filesystem MCP; Claude Code reads and writes files directly on your computer (see the sketch after this list)
Same tasks
3-5x less token consumption on average
No more random "max length" errors on brand new chats
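For reference, this is roughly what the Claude Desktop + filesystem MCP setup being replaced typically looks like; the config path shown is the macOS default and the allowed directory is just a placeholder.

    # Typical Claude Desktop filesystem-MCP entry (macOS default config path;
    # the allowed directory is a placeholder). Claude Code needs none of this
    # because it reads and writes files directly.
    cat "$HOME/Library/Application Support/Claude/claude_desktop_config.json"
    # {
    #   "mcpServers": {
    #     "filesystem": {
    #       "command": "npx",
    #       "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
    #     }
    #   }
    # }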
The paradox: MCP is the reason I chose Claude in the first place. The ability to connect to filesystems, databases, Notion, etc. is too powerful to ignore but the token management makes it almost unusable for real work.
If Anthropic fixes MCP integration and token optimization, they'll easily dominate the market.
MCP is revolutionary, the model is brilliant, but the UX is holding it back.
Anthropic is sitting on a goldmine!! Fix the token management and Claude becomes the undisputed #1.
-------------------------------------------------------------------
ORIGINAL POST
I'm on Claude Pro, and honestly, in 20 years of using paid software, I've never been this frustrated.
The model itself is absolutely brilliant but using Claude is just a p*** in the a**.
Here's what happened:
I opened a brand-new chat inside a folder (the folder has a short instruction and 3 small chats).
Sent one single request asking Claude to analyze a README through the MCP filesystem.
Claude reads the environment variables, then instantly throws: "Claude hits the maximum length for this conversation."
Like... what?!
Brand new chat
Claude Sonnet
30% session usage
20% of my weekly limit. And it just dies.
Is the folder context included in the token count?
Or are the MCP env vars blowing the context window? Because this behavior makes absolutely no sense.
The model is extraordinary, but the user experience is pure madness.
How can a Pro user hit a max length after one request? This shouldn't even be possible.
Anyone else seeing this nonsense?
Currently my main AI tool to develop with is Cursor. Within the subscription I can use it without limits, although I get slower responses after a while.
I tried Claude Code a few times with $5 of credit each time. After a few minutes the $5 is gone.
I don't mind paying the $100 or even $200 for Max, if I can be sure that I can code full time the whole month. If I use credits, I'd probably end up with a $3000 bill.
What are your experiences as full time developers?
We've just reset weekly limits for all Claude users on paid plans.
We've seen members of this community hitting their weekly usage limits more quickly than they might have expected. This is driven by usage of Opus 4.1, which can cause you to hit the limits much faster than Sonnet 4.5.
To help during this transition, we've reset weekly limits for all paid Claude users.
Our latest model, Sonnet 4.5, is now our best coding model and comes with much higher limits than Opus 4.1. We recommend switching over from Opus if you want more usage. You will also get even better performance from Sonnet 4.5 by turning on "extended thinking" mode. In Claude Code, just use the Tab key to toggle this mode on.
We appreciate that some of you have a strong affinity for our Opus models (we do too!). So we've added the ability to purchase extra usage if you're subscribed to the Max 20x plan. We'll put together more guidance on choosing between our models in the coming weeks.
We value this community's feedback. Please keep it coming; we want our models and products to work well for you.
Hey everyone,
Not sure what's going on, but starting today I'm suddenly hitting my usage limits after only a few non-coding-related prompts (like 3-4). This has never happened before.
I didn't change my plan, my workflow, or the size of my prompts. I'm using Claude Code normally, and out of nowhere it tells me I'm at my limit and blocks further use.
A couple of things I'm trying to figure out:
Is this happening to anyone else today specifically?
Did Anthropic quietly change the quota calculations?
Could it be a bug or rate-limit miscount?
Is there any workaround people have found? Logging out, switching networks, switching countries, etc.?
It's super frustrating because I literally can't get any work done with only a few prompts before getting locked out.
If anyone has info or experienced the same thing today, please let me know.
Thanks!
I'm thinking of upgrading to the $20/month plan specifically to use Claude Code. Are you guys hitting the limit constantly? Just trying to figure out if it's usable for a full workday or if I'll get capped immediately.
What are the exact weekly and 5-hour usage limits (in coding hours and request counts) for Claude Code when comparing the Anthropic Max 20x plan vs. a Team Premium seat? I know both have the same 200k token context per request, but I'm looking for concrete numbers on how many coding hours and prompts per window / per week are actually allowed.
Latest Workarounds Report: https://www.reddit.com/r/ClaudeAI/wiki/latestworkaroundreport
Full record of past Megathreads and Reports: https://www.reddit.com/r/ClaudeAI/wiki/megathreads/
Why a Performance, Usage Limits and Bugs Discussion Megathread?
This Megathread should make it easier for everyone to see what others are experiencing at any time by collecting all experiences. Most importantly, this will allow the subreddit to provide you a comprehensive periodic AI-generated summary report of all performance and bug issues and experiences, maximally informative to everybody. See the previous period's performance and workarounds report here https://www.reddit.com/r/ClaudeAI/wiki/latestworkaroundreport
It will also free up space on the main feed to make more visible the interesting insights and constructions of those using Claude productively.
What Can I Post on this Megathread?
Use this thread to voice all your experiences (positive and negative) as well as observations regarding the current performance of Claude. This includes any discussion, questions, experiences and speculations of quota, limits, context window size, downtime, price, subscription issues, general gripes, why you are quitting, Anthropic's motives, and comparative performance with other competitors.
So What are the Rules For Contributing Here?
All the same as for the main feed (especially keep the discussion on the technology)
Give evidence of your performance issues and experiences wherever relevant. Include prompts and responses, platform you used, time it occurred. In other words, be helpful to others.
The AI performance analysis will ignore comments that don't appear credible to it or are too vague.
All other subreddit rules apply.
Do I Have to Post All Performance Issues Here and Not in the Main Feed?
Yes. This helps us track performance issues, workarounds and sentiment and keeps the feed free from event-related post floods.
I've been using Claude Code for two months so far and have never hit the limit. But yesterday it stopped working and gave me a cooldown of 4 days. If the limit resets every 5 hours, why a 4-day cooldown? I tried usage-based pricing, and it charged $10 in 10 minutes. Is there something wrong with the new update of Claude Code?
Latest Performance Report: https://www.reddit.com/r/ClaudeAI/comments/1n4o701/claude_performance_report_with_workarounds_august/
Full record of past Megathreads and Reports: https://www.reddit.com/r/ClaudeAI/wiki/megathreads/
I'm back from a month-long hiatus from my Claude Max 5x subscription, and I recently re-subscribed to the Pro plan to test Opus 4.5.
At first I laughed at the comments here saying that a single Opus 4.5 prompt can burn through your entire 5-hour limit, until I literally experienced it. Now I've upgraded my plan to Max 5x, and the usage limit difference is HUUUUUUUUUUUUGE compared to the Pro plan. It is not just 5x. So I feel like the Pro plan (it should be renamed to just "Plus", because there's nothing pro about it) is really just for testing the model, and Anthropic will force you to upgrade to Max.
Right now I've been coding continuously in 2 simultaneous sessions using the opusplan model, and I'm only at 57% of the 5-hour limit, which resets in 1 hour.
Anyhow,
Opus 4.5 is great, the limit is higher. I'm happy but my wallet hurts. Lol
TL;DR: Built shell scripts that let you instantly switch between your Claude Pro/Max subscription and API keys from AWS, Google Cloud, Azure, and Anthropic. When you hit rate limits mid-task, one command and you're back to work. Bonus: easy access to Opus 4.5 on any provider without needing a Max subscription. GitHub repo
You know the feeling. You're deep in a debugging session, finally making progress, and then:
"Rate limit exceeded. Try again in 4 hours 23 minutes."
Your momentum dies. Your context is gone. You're stuck.
The Pro plan gives you maybe 10-15 good prompts per 5-hour window. Even Max caps around 40. For anyone shipping fast, that's not enough.
The obvious solution: use API keys and pay per token. But switching between subscription and API mode in Claude Code is painful: manually editing config files, managing multiple providers, remembering different model IDs. Most of us just... wait for the timer to reset.
So I built something to fix this.
What it does
Simple shell scripts that make switching instant:
    # Working with Claude Pro, hit rate limit
    claude
    # "Rate limit exceeded. Try again in 4 hours."

    # Immediately continue with AWS (keeps your conversation)
    claude-aws --resume

    # Or switch to Haiku for speed, Opus 4.5 for complex reasoning
    claude-aws --haiku --resume
    claude-vertex --opus --resume

    # Back to Pro when limits reset
    claude --resume
The --resume flag picks up your last conversation exactly where you left off. No lost context.
Supports four providers:
claude-aws: AWS Bedrock
claude-vertex: Google Cloud Vertex AI
claude-azure: Microsoft Foundry on Azure
claude-apikey: Anthropic API directly
Each command is session-scoped and non-destructive. Your original config is automatically restored when you exit.
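In case you're wondering how a wrapper like this can stay session-scoped, here's a minimal sketch (not the repo's actual code), assuming the documented CLAUDE_CODE_USE_BEDROCK environment switch; the profile and region values are placeholders. Running everything in a subshell means the provider settings vanish as soon as the command exits, which is the whole "non-destructive" trick.

    # Minimal sketch, not claude-switcher's real implementation.
    # Runs Claude Code against AWS Bedrock in a subshell so no global config changes.
    claude_aws_sketch() {
      (
        export CLAUDE_CODE_USE_BEDROCK=1                 # documented Claude Code switch for Bedrock
        export AWS_PROFILE="${AWS_PROFILE:-my-profile}"  # placeholder profile
        export AWS_REGION="${AWS_REGION:-us-west-2}"     # placeholder region
        claude "$@"                                      # e.g. claude_aws_sketch --resume
      )                                                  # subshell exits, env vars are gone
    }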
Why this matters
1. Maximize your subscription value
Your $20 or $200/mo is incredibly cost-effective when you can actually use it. Rate limits mean you're paying for capacity you can't access. This lets you use your subscription fully, then seamlessly switch to API when you hit limits.
2. Easy Opus 4.5 access (just released today!)
Claude Pro users: You can now easily switch to Opus 4.5 on any provider when you need its power, without needing a Max subscription ($200/mo). For example, just claude-aws --opus or claude-vertex --opus.
3. Use your cloud credits
If you have AWS, Google Cloud, or Azure credits, these work with Claude Code, but setting it up is complex. This makes it simple: add credentials once, switch with one command.
Quick setup
    git clone https://github.com/andisearch/claude-switcher.git
    cd claude-switcher
    ./setup.sh
Then edit ~/.claude-switcher/secrets.sh:
    # AWS
    export AWS_PROFILE="my-profile"
    export AWS_REGION="us-west-2"

    # Google Cloud
    export ANTHROPIC_VERTEX_PROJECT_ID="my-project"
    export CLOUD_ML_REGION="global"

    # Anthropic API
    export ANTHROPIC_API_KEY="sk-ant-..."

    # Azure
    export ANTHROPIC_FOUNDRY_API_KEY="my-key"
    export ANTHROPIC_FOUNDRY_RESOURCE="my-resource"
That's it. Now you can switch between any provider with one command.
Utilities included
claude-status: Shows current auth mode and configuration
claude-sessions: Lists all active sessions with provider, model, uptime
My workflow
1. Start with claude (Pro subscription)
2. Hit limit: claude-aws --resume (AWS credits)
3. Need faster responses: claude-aws --haiku --resume
4. Complex architecture: claude-vertex --opus --resume
5. Limits reset: back to claude --resume
This has saved me hundreds in API costs while keeping me productive. More importantly, it removed the friction that killed my momentum.
GitHub: github.com/andisearch/claude-switcher
Built this for myself at Andi (we're building an AI search engine) because we needed it to keep shipping. Figured it might help others too.
If it's useful, a star would be appreciated. Questions or issues, happy to help in comments.