I posted yesterday about how the recent wave of random Codex quota resets (4 times in 10 days) was arbitrarily pushing back our official renewal dates.
Well, the plot thickens. I was coding away just now and noticed my plan got hit with another reset.
But here's the kicker: this time my next renewal date isn't just pushed back a few days. The entire cycle seems to have changed. Instead of my usual 1-week reset schedule, my dashboard now shows a 2-week limit window. (From the previous post you can see my reset was supposed to be Mar 16; now it says Mar 17, but the window is 2 weeks.)
There have been too many resets, and it's getting confusing.
Is this just a gnarly UI bug from all the recent backend churn, or did they silently change the quota cycle for everyone? What are your dashboards showing today?
If you were running low on your weekly quota, check again - OpenAI reset it early. Multiple people confirmed it on r/codex too.
Caught it live on my quota tracker: usage went from 30% to 0% well before the scheduled reset.
Built an open-source tool to track these things across providers: https://github.com/onllm-dev/onwatch
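For what it's worth, the detection logic for catching a reset like that can be very simple. Here's a minimal sketch, not the actual onwatch implementation: it just flags a large sudden drop in consumed quota that happens before the scheduled reset time. The function name, field names, and 10-point threshold are all assumptions for illustration.

```python
# Sketch: flag an early quota reset from consecutive usage readings.
# "Usage" is the percent of the quota window consumed; a big sudden drop
# (e.g. 30% -> 0%) before the scheduled reset time suggests the provider
# reset the window early. Threshold and signature are illustrative only.

from datetime import datetime

def looks_like_early_reset(prev_used_pct: float, curr_used_pct: float,
                           now: datetime, scheduled_reset: datetime,
                           drop_threshold: float = 10.0) -> bool:
    """True if usage dropped sharply before the scheduled reset time."""
    dropped = prev_used_pct - curr_used_pct >= drop_threshold
    early = now < scheduled_reset
    return dropped and early

# Mirrors the observation above: 30% -> 0% days before the scheduled reset.
now = datetime(2026, 3, 12, 9, 0)
scheduled = datetime(2026, 3, 16, 0, 0)
print(looks_like_early_reset(30.0, 0.0, now, scheduled))  # prints True
```

Polling a usage number every few minutes and running a check like this is enough to catch these resets as they happen, without needing any provider-specific API beyond whatever exposes the usage percentage.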
I hit my weekly limit after about 10 hours of usage on Plus, running 5.2 high.
I don't know why they call it "weekly" exactly.
Yeah, "we didn't change anything," yet I went from 24% remaining back to 100% again.
I've been using a different AI coding tool on a $200/month plan for a while now. Generally I use around 50-60% of my weekly limit, so I'm a fairly active but not extreme user.
I've been hearing a lot of good things about Codex lately and I'm really interested in giving it a serious try. Before I make the switch though, I wanted to understand the limits better.
For those of you on the Pro plan ($200/mo) - how does Codex handle the rate limits in practice? The official docs say 300-1,500 messages per 5 hours, but that's a pretty wide range. What does real-world usage look like for someone doing regular feature development and bug fixing?
Also - is the $20/mo Plus plan actually enough for regular coding work, or do you hit the limits too quickly and end up needing Pro anyway? Would love to hear from people on both plans.
Debating whether to get several Plus accounts or just a single Pro. I usually exhaust my 5h limit after 2.5-3 hours (I usually work with one CLI, rarely more than one), and by the second day I've already run out of my weekly limit. So technically even 2-3 accounts would be fine, although logging in and out of accounts all the time is horrible UX. But I'll probably start using several CLIs if I actually end up with much higher limits.
I can't find any info regarding a weekly limit for the Pro plan.
Lately I've been getting some useful things done with Codex, and I'm interested in subagents, so I'm making the switch this month to have some more fun.
On switching just now, the reset date stayed the same but the remaining weekly quota increased. From the numbers, Pro gives roughly 8x Plus's quota. I think that's not bad, given it's also a bit faster and I get access to Pro in chat.
This is the saddest way for them to introduce this... One prompt was literally 5% of my weekly usage, and the prompt literally failed. Realistically, you can expect 10 halfway-working outputs with this. As a paying Plus user. Per week. This is such a joke, and it's just sad... Please make this somewhat realistic. I'm looking for alternatives now, although I really liked Codex. The only other option they offer is another 40€ for another 1000. I don't need 1000, but 10 is a joke. At least offer a smaller increment.
Did anyone even think this through? And apparently, cloud prompts consume 2-4x more of the limit. How about explaining that before introducing the limits? This is a really horrible way to roll out these new limits...
They said it's done and fixed https://github.com/openai/codex/issues/13568#event-23526129171
But something doesn't feel right; maybe it's the review, or maybe it's 5.4. I never use xhigh either, it's always high or medium. No 2x speed, no extra context.
EDIT: It seems it's not just me. I just found this issue posted: https://github.com/openai/codex/issues/14593 so if anyone can share, please do.