I will mainly use it alongside Cursor. I love Claude's models, especially for anything that requires tool use; it's so good and seamless.
I just want to know what I'll actually get and whether it will be good enough. I can get by with 40-50 messages every 5 hours; that's really good for me, especially alongside Cursor.
And before you say "just get the $100 one, this one won't do anything": I can't afford it.
i'm CONSIDERING upgrading to the $100-200 max subscription just because of how costly using claude is on cursor (and it's far worse through that damned api; i still can't believe designing a new glassmorphic card cost $4[?!?!] for one prompt).
i kind of want to test around with claude code more. i've used it in the past (wasn't extremely impressed, but keep in mind i'm building mostly simple webapps and whatnot). when i'm doing something more novel, a massive amount of context or manual programming is generally still required, even if it's just integrating an API the AI isn't familiar with.
don't send a firebomb through my window, but i kind of like gemini, and the huge context window for their cli/vscode extension is awesome. i haven't hit any usage caps; afaik it's free, but i also pay for gemini and it's super cheap.
i do this partially for a living, so i don't mind paying for a good tool, but i don't want to throw $200 at something if my cursor, windsurf, and gemini subscriptions will work fine. i'm generally doing this 8-12+ hours a day, so if it's that large of a step up, i'm game.
main question: has anyone actually tried using the normal $20 subscription for claude code? will i get anything out of it beyond seeing whether paying for claude is right for me?
i despise burning money and buying stuff i won't use, and i'm not super keen on feeding into anthropic's greed…
please, if you've tried the different pricing tiers, give me an example of how much sonnet and opus usage you get on each
The last time I was a premium member of Claude was June 2024, but I'm interested in returning. Is it really worth it? Are the paid models stronger, or is the difference only in usage limits?
On the $100 Max subscription you get 5x the tokens of the $20 Pro plan. But buying two Pro plans would get you up to 80% of the Max plan's tokens, at 40% of the cost. So on cost vs. usage, which do you think is better?
I absolutely love Claude Code and have been a Max subscriber for a while. Regardless, the buzz around the new weekly limit and release made me curious whether Claude's $200/month Max subscription was actually a good deal compared to paying for API usage, so I built a network instrumentation tool to capture and analyze my actual Claude Code usage.
Methodology:
- Captured network logs over 1% of my weekly rate limit (I'm still early in my weekly reset, so I didn't want to spend too much)
- Used Sonnet only for this instrumentation, since I don't see a difference between Sonnet 4.5 and Opus 4.1
- Analyzed token usage and calculated costs using official pricing
- Projected monthly costs at full usage
The Results, for 1% of weekly limit:
- 299 total API requests
- 176 Sonnet requests (164K tokens + 13.2M cache reads)
- 123 Haiku requests (50K tokens - mostly internal operations)
- Total cost: $8.43
That works out to around $840/week with Sonnet, which I believe isn't even half the previous limit.
Monthly projection (full usage):
- Claude API: $3,650/month
- OpenAI API (GPT-5 + mini): $1,715/month
Key Findings
Claude Max is 18.3x cheaper than paying for Claude API directly
GPT-5 is 2.1x cheaper than Claude API at the token level
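For anyone checking the math, the projection is just linear scaling of the sampled slice. A minimal sketch (the $8.43 sample cost is the measured number above; the 4.33 weeks-per-month average is my assumption):

```python
# Scale a measured 1% slice of the weekly rate limit up to a month.
SAMPLE_COST = 8.43       # USD measured over the sampled window (from the logs)
SAMPLE_FRACTION = 0.01   # fraction of the weekly limit the sample covered
WEEKS_PER_MONTH = 4.33   # assumed average weeks per month

weekly_cost = SAMPLE_COST / SAMPLE_FRACTION    # ~$843/week at full usage
monthly_cost = weekly_cost * WEEKS_PER_MONTH   # ~$3,650/month
max_multiple = monthly_cost / 200.0            # vs. the $200 Max plan: ~18.3x
```

The same scaling is where the 18.3x figure comes from: roughly $3,650 of equivalent API usage against a $200 subscription.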
TL;DR: Is this still a good deal? If Claude is still the best model for coding, I would say yes. But compared to ChatGPT Pro subscription, the weekly limit hits hard. Will I keep my Claude subscription for now? Yes. Will that change soon if Anthropic still isn't transparent and doesn't improve their pricing? Of course.
Interesting Notes
- Haiku is used internally by Claude Code for things like title generation and topic detection - not user-facing responses
- Cache reads are HUGE (13.2M tokens for Sonnet) and significantly impact costs
If you're curious, I open-sourced the entire analysis here: https://github.com/AgiFlow/claude-instrument
--- Edited: Published a separate post on how I use Claude Code. It's part of why I like Sonnet 4.5, which is amazing when it comes to instruction following.
i love gemini cli and still use it as well, but man claude code is really nice. i can ADHDmaxx my side projects and spin up research experiments so easily now
I hear that on the $100 Max subscription Sonnet is almost limitless for Claude Code, but has anyone actually tried the $20 Pro subscription and run into limits? How long would it take to get rate-limited on a medium/large Laravel/React app if I use Sonnet semi-regularly? Assuming I give it just the right files for each job, I need to know whether Sonnet on the Pro subscription is really worth it, or whether I should go for the Max subscription.
Thanks!
Today I saw an article about Claude Code and noticed they've added Claude Code to the Pro plan. But you only get 10-40 prompts every 5 hours. What do you guys think?
Hi everyone, I'm a developer who has been using Claude Code Max ($200 plan) for 3 months now. With renewal coming up on the 21st, I wanted to share my honest experience.
Initial Experience (First 1-2 months): I was genuinely impressed. Fast prototyping, reasonable code architecture, and great ability to understand requirements even with vague descriptions. It felt like a real productivity booster.
Recent Changes I've Noticed (Past 2-3 weeks):
Performance degradation: Noticeable drop in code quality compared to earlier experience
Unnecessary code generation: Frequently includes unused code that needs cleanup
Excessive logging: Adds way too many log statements, cluttering the codebase
Test quality issues: Generates superficial tests that don't provide meaningful validation
Over-engineering: Tends to create overly complex solutions for simple requests
Problem-solving capability: Struggles to effectively address persistent performance issues
Reduced comprehension: Missing requirements even when described in detail
Current Situation: I'm now spending more time reviewing and fixing generated code than the actual generation saves me. It feels like constantly code-reviewing a junior developer's work rather than having a reliable coding partner.
Given the $200/month investment, I'm questioning the value proposition and currently exploring alternative tools.
Question for the community: Has anyone else experienced similar issues recently? Or are you still having a consistently good experience with Claude Code?
I'm genuinely curious if this is a temporary issue or if others are seeing similar patterns. If performance improves, I'd definitely consider coming back, but right now I'm not seeing the ROI that justified the subscription cost.
I know, I probably shouldn't say anything, because this is absolutely subsidized launch pricing to drive up interest, and I'm going to jinx it and they'll eventually slow down the gravy train, but damn. I saw someone else post about breaking even on their $20 in 2 days and thought I might as well share my own experience: I broke even on day 1. I've actually only been rate-limited once, for about an hour and a half on that first day, when I burned $30 in equivalent API use.
I'm a heavy Roo Code user via API and get everything for free at work, so I generally look for the right tool for the job more than anything else. While I still think Roo modes shine in places Claude Code hasn't quite nailed yet, it's a very solid product. In my own time I'd been going more Gemini-heavy in Roo, because Sonnet struggles with big context, and I have mad love for that beautiful month of free 2.5 Pro exp... I was willing to overlook a lot of the 05-06 flaws. The jury is still out on 06-05, but I decided to give the $20 plan a shot and see if Claude Code would cut my API bills, and damn, it did almost immediately. First day was 06/06; 06/01 through 06/05 were on my direct Anthropic API. This is not an ad. It's good shit, and you might as well get some VC-funded discount Claude Code usage while it's still out there.
I'm currently on Claude API pay-per-usage and spend around $60 + VAT per month. Claude recently started supporting Claude Code on the Pro subscription; is anyone using Claude Code with Pro, and how suitable would it be in my scenario? Claude says the Pro plan suits light coding tasks of less than 1,000 lines of code, in which case I couldn't use it, since I'm working with a React JS front end and a PHP backend. It's a big project, but Claude only needs a small slice of context at a time, not the entire project, since I'm making progressive changes. I also don't work continuously for long stretches (2 hours max), so Pro might suit me. So: has anyone here switched from usage-based billing to the Pro subscription?
If yes, what's the token limit on the subscription? If not, how much would it cost to bundle both?
So what is the verdict on usage, is it a good deal or great deal?
How aggressively can you use it?
Would love to hear from people who have actually purchased and used the two.
I see people suggesting an API subscription instead of the normal web subscription. Can you please tell us the benefits?
It just works.
No awkward small talk, no endless friction. I chat with it like I'd talk to a real teammate.
Complete thoughts, half-baked ideas, even when I’m pissed off and rambling. No need to rephrase everything like I’m engineering a scientific prompt. It gets it. Then it builds.
I dropped Claude for a couple months when the quality dipped (you probably noticed it too). Tried some alternatives. Codex was solid when it first came out, but something was missing. Maybe it was the slower pace, or just how much effort it took to get anywhere. Nothing gave me the same sense of momentum I’d had with Claude.
Fast-forward to this week: my Claude membership lapsed on the 1st. Cash flow has been tight in the run-up to Christmas, so I held off renewing the Max plan.
In the meantime, I leaned on Cursor (which I already pay for), Google's Antigravity, and Grok's free model via Cursor, spreading out my options to keep things moving. All useful in their way. But I was neck-deep in a brutal debugging session on an issue that demanded real understanding and iteration, using Codex and GPT-5.1 (via Cursor Plus, with full access to everything).
Should've been plenty. Nope. It killed all momentum: it told me something flat-out couldn't be done, multiple times. I even pointed it to the exact docs proving it could. Still pushback. Slow, and weirdly toned.
This wasn't a one-off: new chats, fresh prompts, every angle I could try. The frustration built fast. I don't have time for essay-length prompts just to squeeze out a single non-actionable answer and some poetic, robotic deflection.
On Cursor, the "Codex MAX High Reasoning" model, supposedly their top tier, is free for a limited time. Sick, right? Ha, far from it. It feels like arguing with a smiling bureaucrat who insists you're wrong (in this specific case, at least). Endless back-and-forth, "answers" instead of solutions.
Look, I've been deep in this AI-for-dev workflow for a year now. There are no more one-offs or other models left to try in this space. The differences are crystal clear. The fix for my two-hour headache? Cursor's free Auto mode. No "frontier model" hype, no hand-holding. I was just fed up, flipped it on, and boom: it spotted the issue and nailed it. First try.
That was the breaking point. Thought about the last few weeks with my basic GPT sub on my phone for daily use: it ain’t the same.
I've cycled through them all: Claude, Codex, GPT-5.1, Cursor's party pack, Gemini, Grok. Each shines in its own way.
Gemini's solid but bombs on planning and tool use, and constantly gets stuck in loops. GPT is cringe; that's the only way I can put it. Grok is fire for speed and unfiltered chats.
When you're building and can't afford to micromanage your AI? Claude reacts. It helps. Minimal babysitting required. Meanwhile, GPT-5.1? It won't generate basic stuff half the time. It used to crank out full graphics, life advice, whatever; now it dodges questions it once handled effortlessly. (The refusal policy creep is absurd.) Even simple tasks are hit or miss. No flow, just this nagging sense it's trapped in an internal ethics loop. The inconsistency has tanked my trust, and it's too good at sounding confident now, which makes the letdowns sting more. One case: instead of fixing the obvious code smell staring it in the face, it'll spit back, "I added a helper to your x.ts file so that bla bla bla." Cute, but solve the damn problem instead of acting like that's normal.
Yeah, it's evolving; they all are. But after testing everything, Claude is still the undisputed king for coding. (Speech aside: I stick with GPT-4o for brainstorming; it's weirdly less locked down than 5.1 and crushes creativity.)
Bottom line: Claude isn't flawless, and this isn't some promo speech or AI rat-race hype. But from everything I've seen this past year, for anyone who's interested in the differences or needs a partner that moves with you instead of against you, it's Claude every time. So yeah, I'm renewing. And I'll keep paying unless something truly better crashes the party.
Cheers Anthropic, renewing my membership feels like Christmas lol.
Problem
I've seen a number of posts from people asking for bigger hourly/weekly limits for Claude Code or Codex.
$20 isn't enough, and $200 is 10x as much, with limits they'd never use. There's no middle option.
Meanwhile, there's a very simple solution, and it's even better than the $100 plan they're asking for.
Solution
Just subscribe to both Anthropic $20 plan and OpenAI $20 plan.
And subscribe to the Google $20 plan as well once Gemini 3 is out, so you can use Gemini CLI.
That's still only $60, well below the $100 you're willing to pay.
It's not just cheaper: you also get access to the best coding models in the world, from the best AI companies in the world.
Claude gets stuck on a task and can't solve it? Instead of yelling about model degradation, bring in GPT-5 Codex to solve it. When GPT-5 gets stuck, switch back to Claude. Works every time.
You won't be limited to models from a single company.
What? You don't want to manage both `CLAUDE.md` and `AGENTS.md` files? Create symlink between them.
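The symlink trick can be sketched like this (a minimal, hypothetical example using a temp directory as a stand-in for your repo root; in a real project you'd just run the equivalent of `ln -s CLAUDE.md AGENTS.md` once at the root and commit the link):

```python
import os
import tempfile

# Stand-in for your project root; in practice this is your repo.
repo = tempfile.mkdtemp()
claude_md = os.path.join(repo, "CLAUDE.md")
agents_md = os.path.join(repo, "AGENTS.md")

# Keep one real instructions file...
with open(claude_md, "w") as f:
    f.write("# Shared agent instructions\n")

# ...and expose it under the second name. A relative link target
# keeps the repo portable across machines.
os.symlink("CLAUDE.md", agents_md)

# Both names now resolve to the same content; edit only CLAUDE.md.
same_content = open(agents_md).read() == open(claude_md).read()
```

From then on, Claude Code reads `CLAUDE.md` and Codex reads `AGENTS.md`, but there's only one file to maintain.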
Yes, limits used to be a problem for me too, but not anymore, and I'm very curious what Gemini 3 will bring to the table. Hopefully it will be available in Gemini CLI under the $20 plan.