Usage limits feel terrible compared to Codex
Claude Code vs OpenAI Codex?
Is it just me, or is OpenAI Codex 5.2 better than Claude Code now?
Codex Rate Limits Discussion Thread
Can I use both Codex and Claude Code together?
Why are Claude Code usage limits getting worse?
Is Codex still better than Claude Code in 2026?
I’ve been using Codex extensively for my software development job, but decided to try Claude Code today with a Claude Pro subscription.
I tested it on a common task that I have to do. In a working folder I have 11 repositories, and each owns some part of a product I work on. I gave it a task to explain the purpose of the repositories and the connections between them. Then I gave it a task to fix a deployment (add missing code) in one of the repositories.
Claude gave a better description of the repositories, but nothing too crazy or deep. The code it suggested for debugging was the same. But Claude literally used 30% of my 5-hour usage while Codex used only 3%.
Is there anything I'm doing incorrectly, or are the limits for Claude actually that low?
EDIT: This is only asking about Opus 4.5 on Max 20x versus OpenAI's smartest model—so GPT-5.2-Codex set to Extra High or GPT-5.2 set to Extra High, whichever one is smarter.
TLDR: How do usage limits in the Codex harness with OpenAI's smartest model option compare to using Opus 4.5 in CC?
I'm looking into the expensive subscriptions. Can anyone with recent experience (e.g. the past few weeks) using both harnesses clarify some things?
- Usage limits. Does Max 20x or Codex Pro have higher limits with their respective smartest model?
- "Better"? When people say "Codex is better than CC", I never know if they're talking about the Codex harness with...
  - GPT-5.2-Codex
  - GPT-5.1-Codex-Max (which OpenAI says is their "best model for agentic coding" as of 2 weeks ago)
  - GPT-5.2
  - GPT-5.2 Pro
  - Any one of these with "high" vs "extra high"
Is it just me, or are you also noticing that Codex 5.2 (High Thinking) gives much better output?
I had to debug three issues. Opus 4.5 used 50% of the session usage. Nothing was fixed.
I switched to Codex 5.2 (High Thinking). It fixed all three bugs in one shot.
I also use Claude Code for my local non-code work. Codex 5.2 has been beating Claude for the last few days.
Gemini 3 Pro is giving the worst responses; they're neither acceptable nor accurate at all. I don't know what happened. It was probably at its best when it launched. Now its responses feel even worse than 2.0 Flash.
I'm currently using OpenAI Plus ($20/mo) with the VSCode Codex plugin that taps into my ChatGPT Plus subscription. I get:
- 5-hour rolling limit
- Weekly rolling limit
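For anyone unclear on what "rolling" means here: usage falls off continuously as old requests age out of the trailing window, rather than everything resetting at a fixed time. A minimal sketch of that mechanism, assuming a simple request-count budget (the real subscription limits are undocumented and almost certainly token-weighted, so the numbers here are purely illustrative):

```python
from collections import deque

class RollingLimit:
    """Sliding-window rate limiter: allow at most `budget` requests
    in any trailing `window_s` seconds. Illustrative only -- the real
    plan limits are opaque and likely weighted by tokens, not counts."""

    def __init__(self, budget: int, window_s: float):
        self.budget = budget
        self.window_s = window_s
        self.times = deque()  # timestamps of accepted requests

    def allow(self, now: float) -> bool:
        # Drop requests that have aged out of the trailing window.
        while self.times and now - self.times[0] >= self.window_s:
            self.times.popleft()
        if len(self.times) < self.budget:
            self.times.append(now)
            return True
        return False

# With a hypothetical 40-prompt / 5-hour window, burning all 40 at t=0
# blocks you until the oldest prompt ages out -- 5 hours after *it* was
# sent, not 5 hours after you hit the cap.
lim = RollingLimit(budget=40, window_s=5 * 3600)
assert all(lim.allow(0.0) for _ in range(40))  # 40 prompts at once
assert not lim.allow(1.0)                      # 41st is rejected
assert lim.allow(5 * 3600.0)                   # window has rolled over
```

The practical upshot of a rolling window is that capacity comes back gradually in the same pattern you spent it, which is why a heavy burst early in a session feels so punishing.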
I prefer Sonnet's code quality over GPT models, but I'm trying to understand the value proposition of switching to Anthropic Pro + Claude Code.
My questions:
- With Claude Code on the Pro plan ($20/mo), I understand you get ~10-40 prompts every 5 hours shared across all Claude usage. Is this right?
- If I burn through 40 prompts in one heavy coding session, am I locked out for just 5 hours, or longer?
- For those who've used both: is the code quality improvement worth the significantly more restrictive usage limits compared to OpenAI Plus + the VSCode Codex plugin?
Am I missing something about how Claude Code usage works that makes it more competitive with what I currently have?
I had a bad experience with Cursor where I blasted through my entire monthly token limit in one day of coding, and I'm trying to avoid a similar situation. It's kind of hard to believe that OpenAI is just giving me free compute; what am I not seeing here?
Cheers!