I pay for Pro and find the usage limits almost intolerably low, especially when having Claude write code for me. I'd been thinking about going with a Team plan for increased usage, but here you're saying that it's bad even for Teams. The free version must be basically unusable. ChatGPT doesn't do as good a job, but at least I can actually use it all day long, essentially.
— gordonf23 on reddit.com
r/ClaudeAI on Reddit: Anyone else struggling with Claude's usage limits? (July 5, 2024)

Hey,

Is anyone else finding the usage limits on Claude's team plan a bit restrictive? My team and I keep bumping into the low thresholds, and it's really starting to hold us back. We've noticed that GPT-4 Teams allows for way more tokens and larger file sizes, which makes things much easier.

The thing is, Claude gives us better results on all metrics when we stay inside its limits.

Also, it would be awesome if Claude could support MongoDB vector search. This feature would really boost our projects and make a huge difference in our workflow.

Anyone else in the same boat? Would love to hear your thoughts or any tips you might have. And if anyone from Claude is listening, please consider these requests to help us out!

Thanks!

r/ClaudeAI on Reddit: Claude’s unreasonable message limitations, even for Pro! (September 15, 2024)

Claude has this 45-messages-per-5-hours limit for Pro subs as well. Is there any way to get around it?

Claude has 3 models and I have been mostly using Sonnet. From my initial observations, these limits apply to all the models at once.

I.e., if I exhaust the limit with Sonnet, does that also restrict me from using Opus and Haiku? Is there any way to get around it?

I can also use API keys if there’s a really trusted integrator, but any help?

Update on documentation: from what I’ve seen so far, the docs don’t give a prominent notice about the limitations; they mention that a limit exists, but there is only a very vague mention of its dynamic nature.

Edit (18 July, 2025):

Claude has silently tightened the limits of Claude Code; people are repeatedly facing this issue: "Invalid model. Claude Pro users are not currently able to use Opus 4 in Claude Code". See also https://github.com/anthropics/claude-code/issues/3566

Make no mistake, I love Claude to the core. I was probably among the mid-early adopters of Claude. I love the Artifact generation more than anything. But these limitations are really bad. Some power users are really happy on the Claude Max plan because they were able to get it to work precisely. I think this is more to do with prompt engineering and context engineering. I hope sooner or later Claude can really be as accessible as ChatGPT is nowadays.

Edit ( 7 sept, 2025):

The fact that this post is still getting so much attention is a testament to Claude not listening to its users. I love Claude and Claude Code too much, and I am a fan of Anthropic adding new features. Unfortunately, Claude Code also hits the “Compacting conversation” stage too quickly, for me at least. The limits are honestly a little better now, but the cooldown period is painful.

r/ClaudeAI on Reddit: Colada for Claude – Get past your Claude limits using your own API key (January 2, 2025)

Tired of running into Claude.ai's daily limits? Try using Colada for Claude.

I actually enjoy Claude.ai's interface and artifacts implementation. I didn't want to lift and shift over to another LLM tool. So, to get past the daily limits, I decided to build a simple Chrome extension which continues conversations using your own Anthropic API key.

In short:

• Get past Claude.ai conversation limits (click on the pineapple emoji to activate Colada)
• Bring your own Anthropic API key (requests made directly from your machine)
• Preserve conversation context (scraped from the DOM)
• Simple, lightweight implementation (everything stored locally)

It's a $4.99 one-time purchase - just use code "REDDIT" at checkout.

Let me know what you think. Open to any and all feedback.

Chrome Extension URL: https://chromewebstore.google.com/detail/colada-for-claude/pfgmdmgnpdgbifhbhcjjaihddhnepppj

r/ClaudeAI on Reddit: How I Work With Claude All Day Without Limit Problems (January 9, 2025)

First of all, I want to be clear here that I'm not claiming limits don't exist. I was getting bitten by them constantly back in the October timeframe. I think they've gotten better since then but I've also changed my workflow a lot and I wanted to share with everyone what my workflow is in hopes that it can help some people who are struggling.

I'm a software developer and I spend basically all day, every day in a chat with Claude using their Mac desktop interface as I do my work. Help with code generation and debugging mostly, but also thinking through designs and the like. I can't remember the last time I got limited by Claude (though, for whatever it's worth, when it does happen it tends to be late in the workday, Pacific time).

  1. I only work in text files. I think a lot of the issues people are having come from working directly in PDFs. If you need to do that, these techniques may not help you much.

  2. I don't use projects. I only attach exactly the files that will be needed for the task at hand.

  3. Work hard to keep context short! I started doing this not because of limits but because I felt the quality dropped off as the context lengthened. But it had the side effect of keeping me away from the limits. This makes sense if you think about it. I have no idea what the actual token limit is, but let's say it's 1M tokens in 3 hours. If you've got one long-running chat with 100k tokens in it, that gives you 10 exchanges. But if you can cut that down to 10k tokens, you've got 100, and if you can cut it back to 1k tokens you've got 1000.

  4. Start over frequently! I limit myself generally to a single task. Writing a function. Debugging a particular error. As soon as the task is done, I start a new chat. I'll frequently have scores of individual chats in any given day.

  5. Don't ever correct Claude! Don't say "no, don't do it that way." Instead, edit your original prompt. Add "Don't do it this way" to the prompt and regenerate. I've had to regenerate two or three times to get what I want. By doing this, you keep the context short, and long-context exchanges are how you eat up your token limit.
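
The budget arithmetic in point 3 can be sketched directly. Note the 1M-token/3-hour budget is my hypothetical, not a published limit:

```python
# A sketch of the budget arithmetic from point 3. The 1M-token budget
# is an assumed, illustrative number, not Anthropic's actual limit.
BUDGET = 1_000_000  # assumed tokens per 3-hour window

def exchanges_within(budget: int, tokens_per_exchange: int) -> int:
    """How many exchanges fit if every exchange resends the whole chat."""
    return budget // tokens_per_exchange

print(exchanges_within(BUDGET, 100_000))  # 10 exchanges in a long chat
print(exchanges_within(BUDGET, 10_000))   # 100 in a trimmed chat
print(exchanges_within(BUDGET, 1_000))    # 1000 in a minimal chat
```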

Anyway, hope this helps someone. If you've got other tips, I'd love to hear about them!

r/claude on Reddit: The only method I've found to bypass the 5-hour limit (September 9, 2025)

Okay guys, hear me out 😅

You know that annoying moment when you're coding with Claude and BAM - "5 hour limit reached" right in the middle of debugging? Yeah, it sucks.

So I discovered the limit works on a rolling 5-hour window from your LAST message, not your first.

My hack:

  • 3 hours before I need Claude, I start sending random msgs every hour

  • Just quick stuff like "hey" or "give me a fun fact"

  • Takes 30 seconds lol

Result: When I actually start working, I'm already at hour 4 of the limit, so I get hour 5 PLUS a fresh 5-hour window = way more uninterrupted coding time 🎯

Is it janky? Yes. Does it work? ABSOLUTELY.

Anyone else doing this or am I just being extra? 😂 Drop your limit hacks below!
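
For what it's worth, the timing claim can be written out. This assumes the poster's mental model of the window (a 5-hour window you can "pre-start" with throwaway messages), which contradicts other reports that the window is simply anchored to your first message, so treat it as folklore, not documented behavior:

```python
from datetime import timedelta

# The poster's model (an assumption, not documented behavior):
# a 5-hour window that you "pre-start" with throwaway pings.
WINDOW = timedelta(hours=5)

prewarm = timedelta(hours=4)       # pings sent before real work starts ("hour 4")
leftover = WINDOW - prewarm        # old window still open when work begins
uninterrupted = leftover + WINDOW  # leftover time plus a fresh window after reset
print(uninterrupted)               # 6:00:00
```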

Claude usage limits : r/ClaudeAI (September 15, 2024)

How to Resolve This Issue: Split ... for each of the smaller files. Working Around the Limit: For future uploads, consider breaking down large datasets into smaller, more manageable chunks. ...
r/ClaudeAI on Reddit: Best way to avoid Claude Pro session limits without spending $100/month? (1 week ago)

I have Claude Pro ($20/month) and consistently run into the per-session usage limits when using Claude Code (CLI tool). I'll max out my current session and have to wait for the window to reset, even though I often end up using only 20-40% of my overall weekly allowance.

My budget is around $30/month total. Is there a better solution than Pro + occasional overage purchases?

Options I'm considering:

  • Paying for extra usage when I hit limits (but it feels inefficient)

  • Switching to API pay-as-you-go for Claude Code specifically

  • Upgrading to a higher tier (but $100/month seems excessive for my usage)

For those who use Claude Code heavily in bursts but inconsistently week-to-week - what's your setup?

r/ClaudeAI on Reddit: Usage Limits Discussion Megathread - beginning October 8, 2025 (October 9, 2025)

This Megathread is a continuation of the discussion of your thoughts, concerns and suggestions about the changes involving the Weekly Usage Limits implemented alongside the recent Claude 4.5 release. Please help us keep all your feedback in one place so we can prepare a report for Anthropic's consideration about readers' suggestions, complaints and feedback. This also helps us to free the feed for other discussion. For discussion about recent Claude performance and bug reports, please use the Weekly Performance Megathread instead.

Please try to be as constructive as possible and include as much evidence as possible. Be sure to include what plan you are on. Feel free to link out to images.

Recent related Anthropic announcement : https://www.reddit.com/r/ClaudeAI/comments/1ntq8tv/introducing_claude_usage_limit_meter/

Original Anthropic announcement here: https://www.reddit.com/r/ClaudeAI/comments/1mbo1sb/updating_rate_limits_for_claude_subscription/

Anthropic's update on usage limits post here : https://www.reddit.com/r/ClaudeAI/comments/1nvnafs/update_on_usage_limits/

Last week's Megathread: https://www.reddit.com/r/ClaudeAI/comments/1nu9wew/usage_limits_discussion_megathread_beginning_sep/


Megathread's response to Anthropic's usage limits update post here:

https://www.reddit.com/r/ClaudeAI/comments/1o1wn34/megathreads_response_to_anthropics_post_update_on/

r/ClaudeAI on Reddit: Usage limits and you - How they work, and how to get the most out of Claude.ai (January 9, 2025)

Here's the TL;DR up front:

  • The usage limits are based on token amounts.

  • Disable any features you don't need (artifacts, analysis tool etc) to save tokens.

  • Start new chats once you get past 32k tokens to be safe, 40-50k if you want to push it!

  • Get the (disclaimer: mine) usage tracker extension for Firefox and Chrome to track how many messages you have left, and how long the chat is. It correctly handles everything listed here, and developing it is how I figured out everything.

Ground rules/assumptions

Alright, let's start with some ground rules/assumptions - these are from what I and other people have observed (plus the stats from the extension), so I'm fairly confident in most of these. If you have experiences that don't match up, install the extension, try to get some measurements, and write below.

  1. The limits don't change based on the time of day. The only thing that seems to happen is that free users get bumped down to Sonnet, and Pro users get defaulted onto Concise responses. But I have yet to get any data that the limits themselves change.

  2. There are three separate limits, and reset times - one for each model "class". We'll be looking at Sonnet in all the following examples.

  3. I am assuming that the "cost" scales linearly with the number of tokens. This is the same behavior the API exhibits, so I'm pretty confident.

  4. The reset times are always the same - five hours after the hour of your first message. You send the first at 5:45, the reset is at 5:00+5 hrs = 10:00.
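
Rule 4 can be encoded in a small helper. This just restates the observation above; actual behavior may differ:

```python
from datetime import datetime, timedelta

def reset_time(first_message: datetime) -> datetime:
    """Truncate to the hour of the first message, then add five hours."""
    top_of_hour = first_message.replace(minute=0, second=0, microsecond=0)
    return top_of_hour + timedelta(hours=5)

# First message at 5:45 -> reset at 5:00 + 5 hrs = 10:00
print(reset_time(datetime(2025, 1, 9, 5, 45)))  # 2025-01-09 10:00:00
```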

What is "the limit", anyway?

This one has a pretty clear cut answer. There is no message limit.

Think of each message as having a "cost" associated with it, depending on how many tokens you're consuming (we'll go over what influences this number in a later section).

For Sonnet on the Pro plan, I've estimated the limit to be around 1.5/1.6 million tokens. Team seems to be 1.5x that, Enterprise 4.5x or something.

A small practical example

Before we continue, it's worth looking at a small, basic example.

Let's assume you have no special features enabled, and it's a fresh chat. We will also assume that every message you send is 500 tokens, and that every response from Sonnet is 1k tokens, to make the math easier.

The first message you send - it'll cost you 500+1k = 1.5k tokens. Pretty small compared to 1.5 million, right? Let's keep going.

Second message - it'll cost you 1.5k+500+1k = 3k tokens. Double already.

Third message: 3k+500+1k = 4.5k tokens.

That's just three messages, without any attachments, and already we're at 1.5k+3k+4.5k = 9k tokens.

The more we continue, the faster this builds up. By the tenth message, you'll be using up 15k tokens of your cap EACH MESSAGE.
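
Under the same assumptions (500-token prompts, 1k-token replies, full history resent each turn), the growth can be tabulated. These numbers just mirror the worked example and are illustrative only:

```python
PROMPT, REPLY = 500, 1_000  # assumed message sizes from the example

def message_cost(n: int) -> int:
    """Cost of the n-th message: the whole history plus the new exchange."""
    history = (n - 1) * (PROMPT + REPLY)
    return history + PROMPT + REPLY

costs = [message_cost(n) for n in range(1, 11)]
print(costs[:3])       # [1500, 3000, 4500]
print(sum(costs[:3]))  # 9000 tokens used after three messages
print(costs[9])        # 15000 tokens for the tenth message alone
```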

And this was without any attachments. Let's get into the details, now.

What counts against that limit?

Many, many things. Let's start with the obvious ones.

Your chat history, your style, your custom preferences

This is all pretty basic stuff, as all of this is just text. It counts for however many tokens long it is. You upload a file that's 5k tokens long, that's 5k tokens.

The system prompt(s)

The base system prompt

This is the system prompt that's listed on Anthropic's docs. Around 3.2k tokens in length. So every message starts with a baseline cost of 3.2k.

The feature-specific system prompts

This one is a HUGE gotcha. Each feature you enable, especially artifacts, incurs a cost.

This is because Anthropic has to include a bunch of instructions to "teach" the model how to use that feature.

The ones that are particularly relevant are:

  • Artifacts, coming in at a hefty 8.4k tokens

  • Analysis tool, at 2.2k

  • Enabling your "preferences" under the style, at 800 (plus the length of the preferences themselves)

  • Any MCPs, as those also need to define the available tools. The more MCPs, the more cost.

Custom styles actually don't incur any penalty, as the explanation for styles is part of the base system prompt.

This builds up fast - with everything enabled, you're spending roughly 14-15k tokens EACH MESSAGE on system prompts alone!

Attachments

Text attachments - Code, text, etc. (Except CSVs with the Analysis Tool enabled)

These ones are pretty simple - they just cost however many tokens long the file is. File is 10k tokens, it'll cost 10k. Simple as.

CSVs with the Analysis Tool enabled

These actually don't cost anything - the model can only access their data via the Analysis Tool.

Images

High quality images cost around 1200-1500 tokens each. Lower quality ones cost less. They can never cost more than 1600, as any bigger images get downscaled.

PDFs

This is another BIG gotcha. In order to allow the model to "see" any graphs included in the PDF, each page is provided both as text, and as an image!

This means that in addition to the cost of the text in the PDF, you have to factor in the cost of the image.

Anthropic's docs estimate each PDF page as costing between 1,500-3,000 tokens in text alone, plus the image cost we mentioned above. So at the upper end, you can estimate around 3,000-4,500 per page! A 10-page PDF will end up costing you 30k-45k tokens!
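
A rough cost estimator using the per-page figures above. All numbers are the estimates quoted in this post, not exact costs:

```python
TEXT_PER_PAGE = (1_500, 3_000)  # tokens per page of text, per the docs estimate
IMAGE_PER_PAGE = 1_500          # assumed cost of each page's rendered image

def pdf_cost_range(pages: int) -> tuple[int, int]:
    """Estimated (low, high) token cost of uploading a PDF in a chat."""
    low = pages * (TEXT_PER_PAGE[0] + IMAGE_PER_PAGE)
    high = pages * (TEXT_PER_PAGE[1] + IMAGE_PER_PAGE)
    return low, high

print(pdf_cost_range(10))  # (30000, 45000)
```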

That's great and all... but how do I get more usage?

In short - include only what the model absolutely needs to know.

  • Do you not care about the images in your PDFs? Convert them to markdown, or upload them as project knowledge (there, the images aren't processed).

  • Do you really need to give it your entire codebase every time? Probably not. Only give it what it needs, and a general overview of the rest.

  • Has the chat gotten over 40-50k? Start a new one, summarizing what you've done so far! Update all your code, and provide it the new version.

  • Keep your chats short, and single-purpose. Does your offhand question about some library really need to be asked in the already long chat? Probably not.

  • Don't waste messages! If the AI gets something wrong, go back and edit your prompt, instead of telling it that it got it wrong. Otherwise, you will keep that "wrong" version in your history, and it will sit there eating up more tokens! (Credit to u/the_quark for reminding me about this one)

  • If you use projects, be very VERY careful about how much information you include in project knowledge, as that will be added to every message, in every chat! Keep it as low as you can, maybe just a general overview! (As above, credit to u/the_quark)

r/ClaudeAI on Reddit: Easiest way to use Claude API to avoid daily limits (April 13, 2024)

I switched from GPT-4 to now mostly using Claude 3, but the daily message limits are annoying. Also, I really like the ChatGPT interface better. Claude is missing features like a button to STOP when it’s streaming a long response that is bad.

I use this project to get my own front-end for Claude 3. It uses the API so I have no daily limit to messages, and I get to use a chat interface that has all the features ChatGPT has. Even if you’re not very technical, it’s easy to set up. https://github.com/allyourbot/hostedgpt

It supports both GPT-4 and Claude 3, so it lets you do really cool things: if you’re having a conversation with one of them and you don’t like its response, you click the “re-generate” button and can switch assistants mid-conversation. This animated GIF demonstrates it. The other day I asked it a technical question, switched assistants, and finally got the right answer! https://p425.p0.n0.cdn.zight.com/items/ApuPWXb6/c413f8dc-519f-4f5f-b250-45a7688b99d1.gif

It’s also really nice to have access to both GPT-4 and Claude 3 without committing $20 per month to either. I just get charged for what I use. For example, OpenAI released a new version of the GPT-4 model last week, and I’ve been going back to it to see how much they improved it. (I don’t have a well-formed opinion yet.)

r/ClaudeAI on Reddit: Discovered: How to bypass Claude Code conversation limits by manipulating session logs (September 18, 2025)

TL;DR: git init in ~/.claude/, delete old log lines (skip line 1), restart Claude Code = infinite conversation

⚠️ Use at your own risk - always backup with git first

Found an interesting workaround when hitting Claude Code conversation limits. The session logs can be edited to continue conversations indefinitely.

The Discovery: Claude Code stores conversation history in log files. When you hit the conversation limit, you can actually delete the beginning of the log file and continue the conversation.

Steps:

  1. Setup git backup (CRITICAL)

    cd ~/.claude/
    git init
    git add .
    git commit -m "backup before log manipulation"
  2. Find your session ID

    • In Claude Code, type /session

    • Copy the session ID

  3. Locate the session log

    cd ~/.claude/
    # Find your session file using the ID
  4. Edit the session file

    • Open in VSCode (Cmd+P to quick open if on Mac)

    • IMPORTANT: Disable word wrap (Opt+Z for Mac) for clarity

    • DO NOT touch the first line

    • Delete lines from the beginning (after line 1) to free up space

  5. Restart the conversation

    • Close Claude Code

    • Reopen Claude Code

    • Continue sending messages - the conversation continues!

Why this works: The conversation limit is based on the total size of the session log. By removing old messages from the beginning (keeping the header intact), you free up space for new messages.
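
The manual steps above amount to something like the following. This is a hedged sketch: the session files are JSONL somewhere under ~/.claude/ (the path shown in the usage comment is a placeholder, and the exact layout may differ between versions), so back up with git first:

```python
from pathlib import Path

def trim_session_log(path: Path, keep_last: int) -> None:
    """Keep the header (line 1) plus the most recent `keep_last` lines."""
    lines = path.read_text().splitlines(keepends=True)
    if len(lines) <= keep_last + 1:
        return  # nothing to trim
    trimmed = [lines[0]] + lines[-keep_last:]  # never touch the first line
    path.write_text("".join(trimmed))

# Usage (after committing a git backup!). The path is hypothetical:
# trim_session_log(Path.home() / ".claude" / "<session-id>.jsonl", keep_last=200)
```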

Risks:

  • Loss of context from deleted messages

  • Potential data corruption if done incorrectly

  • That's why git backup is ESSENTIAL

Pro tip: When context changes significantly, it's better to just start a new conversation. But if you're stuck and need to continue, this is your escape hatch.


Found this while debugging session issues. Use responsibly!

I also tried a different solution for this, but it's not as good as expected for now: @yemreak/claude-compact

r/ClaudeAI on Reddit: Claude Pro Usage limits are such a joke (January 30, 2025)

What the fuck - I get to use it for like 1 hour before I'm kicked off for 4 whole hours. I am dying to pay for a premium version like ChatGPT Pro - or I could pay on a usage basis too.

I'd even rather have all my usage limit in one block so I can just do something else with my time vs. coming back every 4 hours for 45 mins.

Claude is still clearly superior vs. all the others, even O1 and deepseek, especially for non-coding work/thought partner work.

It kills me that I have superintelligence on demand but I'm cut off from it like 80% of the time - this is how people feel when their electricity is cut off I imagine.

r/LocalLLaMA on Reddit: I'm tired of claude limits, what's the best alternative? (cloud based or local llm) (3 weeks ago)

Hello everyone I hope y'all having a great day.

I've been using Claude Code since it was released, but I'm tired of the usage limits it has even when paying for a subscription.

I'm asking here since most of you have a great knowledge on what's the best and efficient way to run AI be it online with API or running a local LLM.

I'm asking: what's the best way to actually run Claude at cheap rates while getting the best out of it, without those ridiculous usage limits?

Or is there any other model that gives super similar or higher results for "coding" related activities but at the same time super cheap?

Or any of you recommend running my own local llm? which are your recommendations about this?

I currently have a GTX 1650 SUPER and 16GB RAM - I know it's super funny lol, but that's just to let you know my current specs, so you can recommend whether I should buy something for local use or just deploy a local AI onto a "custom AI hosting" service and use the API.

I know there are a lot of questions, but I think you get my idea. I want to get started using the "tricks" that some of you use in order to run AI at the highest performance and the lowest rate.

Looking forward to hearing your ideas, recommendations, or guidance!

Thanks a lot in advance, and I wish y'all a wonderful day :D

r/ClaudeAI on Reddit: Hitting claude limits almost immediately. It's useless now (January 5, 2025)

Recently, I'm having to compress 2MB files as hard as possible to even get more than 4 messages into a chat. It seems like Claude is functionally useless for anything other than as an alternative to Google.
I will literally hit the limit if I attach more than 3 files to a chat. What is going on?
I'm cancelling my subscription and moving back to OpenAI, even though I hate its guts.

For context, I'm a software engineering student, and this particular chat contained, I kid you not, three messages and a single 175KB file; on the 4th message I was trying to attach two 2MB PDF files, using 3.5 Sonnet. Even after compressing the files down to 600KB it STILL won't work, even with a SINGLE file.

I'm getting "Your message will exceed the length limit, make a new chat". It's so damn awful.

EDIT: So it turns out that Claude is absolute TRASH at PDFs, wasting all my tokens and capacity trying to process the company logo that appears on each of the 90 pages in the PDF. After fiddling around I finally got a different message, specifying something like "this message exceeds image limits". What a shame.

EDIT 2: People don't seem to understand that Claude advertises file uploads of 20 files, 30MB max EACH. Hitting the limit with a 600kb file should not be possible and is an enormous oversight

What is going on with the Usage Limits on Claude? : r/ClaudeAI (August 26, 2025)

I used to be able to edit for a full five-hour block and rarely hit the limit, but now if I really sit down and hammer out the words, it limits me. I just try to schedule my writing blocks around a rollover so I have 2 blocks to use (if I can). But it's more annoying than just using it when I want/can/am motivated to write. ... Next month I'm trying Codex; I've been using Copilot chat (with GPT-5) in VS Code, and the complaint I have is that it's slow, but it works very well. ... I’m on the $20/month plan as well, and often I can only get in 5 messages max on Claude Opus 4.1 before I reach the limit.
r/ClaudeAI on Reddit: 🤬 Pro user here - “Claude hits the maximum length” after ONE message. This is insane. (November 3, 2025)

UPDATE: I switched to Claude Code CLI and the token consumption is now way more reasonable.

After hitting the same frustrating wall with Claude Desktop + MCP filesystem, someone recommended trying Claude Code instead.

What changed:

  • No need for the filesystem MCP; Claude Code reads/writes directly from your computer

  • Same tasks

  • 3–5x less token consumption on average

  • No more random "max length" errors on brand new chats

The paradox: MCP is the reason I chose Claude in the first place. The ability to connect to filesystems, databases, Notion, etc. is too powerful to ignore but the token management makes it almost unusable for real work.

If Anthropic fixes MCP integration and token optimization, they’ll easily dominate the market.

MCP is revolutionary, the model is brilliant, but the UX is holding it back.

Anthropic is sitting on a goldmine! Fix the token management and Claude becomes the undisputed #1.

-------------------------------------------------------------------

ORIGINAL POST

I’m on Claude Pro, and honestly, in 20 years of using paid software, I’ve never been this frustrated.
The model itself is absolutely brilliant but using Claude is just a p*** in the a**.

Here’s what happened:

  • I opened a brand-new chat inside a folder (the folder has a short instruction and 3 small chats).

  • Sent one single request asking Claude to analyze a README through the MCP filesystem.

  • Claude reads the environment variables, then instantly throws: “Claude hits the maximum length for this conversation.”

Like… what?!

  • Brand new chat

  • Claude Sonnet

  • 30% session usage

  • 20% of my weekly limit. And it just dies.

Is the folder context included in the token count?
Or are the MCP env vars blowing up the context window? Because this behavior makes absolutely no sense.

The model is extraordinary, but the user experience is pure madness.
How can a Pro user hit a max length after one request? This shouldn’t even be possible.

Anyone else seeing this nonsense?

r/ClaudeAI on Reddit: Mini guide on how to manage your usage limits more effectively (April 19, 2025)

I mainly use Claude for programming. I am subbed to Claude Pro and use Claude Sonnet daily in my development workflow (personal and work), and throughout my experience it has been really rare for me to hit usage limits - the last time was back on 27th March. I will share my experience of how I manage to avoid hitting limits, unlike most other people.

Please read and follow my tips before posting another complaint about usage limits.

1. Claude is not a continuous conversational LLM unlike ChatGPT

Unlike ChatGPT, it is not meant for chatting continuously in the same conversation. ChatGPT has something I call "overflowing context": ChatGPT will forget messages from the start of the chat the more messages you send. To put it simply, after you have sent 10 messages, on the 11th message ChatGPT will forget the 1st message you sent it; on the 12th, the 2nd. The larger your chat context, the more it forgets.
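
The "overflowing context" behavior described here can be sketched as a simple sliding window. This just models the description above, not any vendor's actual implementation:

```python
from collections import deque

class SlidingContext:
    """Drop the oldest messages once the window is full."""
    def __init__(self, max_messages: int):
        self.window = deque(maxlen=max_messages)

    def add(self, message: str) -> None:
        self.window.append(message)  # the oldest message falls off automatically

ctx = SlidingContext(max_messages=10)
for i in range(1, 12):          # send 11 messages
    ctx.add(f"message {i}")
print(list(ctx.window)[0])      # "message 2" - message 1 has been forgotten
```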

2. Don't do everything at once, break down your task into smaller ones and work your way up

Almost all of my chats with Claude have only 4-5 messages. That is enough to complete nearly all of my work; more than 9 out of 10 of my chats follow this 4-5 message rule. For example, focus on implementing one module at a time, and if your module is complex, one function at a time.

3. Edit your messages instead of following up

Got an unsatisfactory answer? More than 90% of the time it is because your questions/tasks are vague, so edit your previous message to be more specific. Following up means you send the entire conversation history to Claude again, which consumes more usage tokens than editing your message. "Prompt engineering" is just a buzzword for structuring a clear and concise question. Knowing how to ask better questions and give clearer tasks will yield better results.

4. For Pro / Max users, don't use Project context, use MCP

Some people will argue with me about this, but honestly I have not found a way to use it effectively for its intended purpose, so I suggest not uploading files to the project context if you want to manage your usage limits effectively. What I do with Projects is just separate my work projects and instructions.

For example Project A is for brand A that uses TS node, Project B is for brand B that uses Python. If you want to have context for specific projects, your only choice is MCP. This is an example of my workflow with MCP

MCP workflow

Hope this helps

r/ClaudeAI on Reddit: Update on Usage Limits (October 1, 2025)

We've just reset weekly limits for all Claude users on paid plans.

We've seen members of this community hitting their weekly usage limits more quickly than they might have expected. This is driven by usage of Opus 4.1, which can cause you to hit the limits much faster than Sonnet 4.5.

To help during this transition, we've reset weekly limits for all paid Claude users.

Our latest model, Sonnet 4.5, is now our best coding model and comes with much higher limits than Opus 4.1. We recommend switching your usage over from Opus if you want more usage. You will also get even better performance from Sonnet 4.5 by turning on "extended thinking" mode. In Claude Code, just use the Tab key to toggle this mode on.

We appreciate that some of you have a strong affinity for our Opus models (we do too!). So we've added the ability to purchase extra usage if you're subscribed to the Max 20x plan. We’ll put together more guidance on choosing between our models in the coming weeks.

We value this community’s feedback. Please keep it coming – we want our models and products to work well for you.

r/ClaudeAI on Reddit: Megathread for Claude Performance and Usage Limits Discussion - Starting August 31 (August 31, 2025)

Latest Performance Report: https://www.reddit.com/r/ClaudeAI/comments/1n4o701/claude_performance_report_with_workarounds_august/

Full record of past Megathreads and Reports : https://www.reddit.com/r/ClaudeAI/wiki/megathreads/


Why a Performance Discussion Megathread?

This Megathread should make it easier for everyone to see what others are experiencing at any time by collecting all experiences. Most importantly, this will allow the subreddit to provide you a comprehensive periodic AI-generated summary report of all performance issues and experiences, maximally informative to everybody. See the previous period's performance report here https://www.reddit.com/r/ClaudeAI/comments/1n4o701/claude_performance_report_with_workarounds_august/

It will also free up space on the main feed to make more visible the interesting insights and constructions of those using Claude productively.

What Can I Post on this Megathread?

Use this thread to voice all your experiences (positive and negative) as well as observations regarding the current performance of Claude. This includes any discussion, questions, experiences and speculations of quota, limits, context window size, downtime, price, subscription issues, general gripes, why you are quitting, Anthropic's motives, and comparative performance with other competitors.

So What are the Rules For Contributing Here?

All the same as for the main feed (especially keep the discussion on the technology)

  • Give evidence of your performance issues and experiences wherever relevant. Include prompts and responses, the platform you used, and the time it occurred. In other words, be helpful to others.

  • The AI performance analysis will ignore comments that don't appear credible to it or are too vague.

  • All other subreddit rules apply.

Do I Have to Post All Performance Issues Here and Not in the Main Feed?

Yes. This helps us track performance issues, workarounds and sentiment and keeps the feed free from event-related post floods.