I am/was a huge fan of Claude Code and found it the absolute best implementation of gen AI in coding until the last 1-2 weeks. I'm not sure what has happened; the quality is generally still very high, but the usage limits have become beyond a joke, basically unusably restrictive.
I can code on GPT 5.3 Extra High for hours on end without a single thing getting in my way but I can give Claude one reasonably complex prompt and by the time it is done, I have used about 50-70% of my 5h limit. Two prompts and I'm done, 2 days and that's it for the week.
Am I the only one who has noticed an absolutely huge difference in what you can get done within your subscription tier lately?
I'm almost done with my subscriptions for Google and Cursor, and I'm looking for a new main AI code model. I was debating between Claude and Codex.
I saw that Codex has improved a lot recently, and I want to know what I should do. Specifically:
- Is Codex Max good enough now for creating full apps?
- How are the usage limits, assuming I go for the most capable plan?
- How has it actually changed or improved over time between version 5, 5.1, and Max?
My usage: I create apps that require excellent frontend work and good connections between APIs and different pieces, especially for Salesforce.
Any advice is appreciated.
Like the title says, I've been a Claude Code user and a fan of it, but after hearing the founder of OpenClaw say that he used Codex and preferred it, I decided to try it myself and was pleasantly surprised by the experience. I'm just wondering if there are other reasons you all like Codex over other AI coding tools, since I'm still new to it. Any personal favorite features are much appreciated <3
What do y'all think of the new Codex app?
So I have been coding with Claude Code (Max 5x) using the VScode extension, and honestly it seems to handle codebases below a certain size really well.
I saw a good amount of positive reviews about Codex, so I used my Plus plan and started using Codex extension in VScode on Windows.
I do not know if I've set it up wrongly or I'm using it wrongly, but Codex seems just "blah". I've tried gpt-5 and gpt-5-codex on medium, and it did a couple of things out of place, even though I stayed on one topic AND was using less than 50% of my tokens. It duplicated elements on the page instead of updating them, deleted entire files instead of editing them, changed styles and functionality I did not ask it to touch, and wiped out data I had stored locally for testing (again, unasked). It also simply took too much time and needed me to approve actions for the session a seemingly endless number of times.
While I am not new to these tools (I've used CC and GitHub Copilot previously), I recognise that CC and Codex are different and have their own strengths and weaknesses. Claude was impressive (until the recent frustrating limits) and could tackle significant tasks on its own, though it had days when it would forget too many things or introduce too many bugs, and other, better days.
I'm not trying to criticise anyone's setup; I want to learn. Since I have not yet found Codex's strengths, I suspect I'm doing something wrong. Does anyone have tips for me, or examples of how you've used Codex well?
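If the endless approval prompts are part of the friction, the Codex CLI can be configured to ask less often. A minimal sketch, assuming a current Codex CLI that reads `~/.codex/config.toml`; the exact option names and accepted values may differ between versions, so verify them against `codex --help` or the official docs before copying:

```toml
# ~/.codex/config.toml (illustrative settings; verify against your CLI version)
model = "gpt-5-codex"

# Ask for approval only when the agent requests escalated access,
# instead of prompting for every command.
approval_policy = "on-request"

# Let the agent edit files inside the workspace without per-edit prompts,
# while keeping everything outside the repo read-only.
sandbox_mode = "workspace-write"
```

Loosening the sandbox trades safety for fewer interruptions, so it's worth starting with the stricter defaults and relaxing only the settings that actually get in your way.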
I've been using the skidrow-codex website to check the latest Codex releases, but the site got shut down. Does anyone know the official Codex site?
Steven Heidel, who works on APIs at OpenAI, revealed that the new drag-and-drop Agent Builder, which was recently released, was built end-to-end in just under six weeks. “Thanks to Codex writing 80% of the PRs.”
“It’s difficult to overstate how important Codex has been to our team’s ability to ship new products,” said Heidel.
I don't know what you all are seeing in Codex, but if Claude Code was magical, Codex really makes me feel uncomfortable and stupid, almost hating vibe coding all of a sudden. If skill issue is a thing, I never had one with CC, but Codex is really bad for non-coders. I'm already planning to refund/cancel the GPT Pro subscription I bought today to run full testing, and I'll keep my CC, crossing my fingers that it stays decent and that Anthropic fixes the limits.
I loved Claude Code so much that I even introduced it to normie entrepreneurs to bring vibe coding to their companies, and they are loving it. I would never suggest anyone "normal" use Codex as it is today.
- While I understand a bit of development, Claude Code made me speedrun 20x my knowledge every day I used it. Codex doesn't say anything about what it's going to do, and its output is very unpleasant to read: one block of often confusing and underspecified final reports.
- Zero steering, while I have no idea what it's doing. Where Claude was hammering a nail with a few misses, Codex is swinging an electric hammer while I'm blindfolded. I can't learn, can't tell whether my question was even right; I just have to wait for the final outcome.
- Slow. The reasoning might be decent, but it's also very slow. When it doesn't get it or overthinks, it's frustrating. It takes a long time to one-shot, sometimes in the wrong direction.
- Zero creative understanding. I've literally struggled and lost time in new sessions giving commands like "merge" that Claude clearly understands, and getting absurd outcomes like "merged all your repository into one txt document, here you have it". It misses about 1-2 out of 10 commands.
- No plan mode: man, I hate not planning. Over the past weeks, before things got a bit rough, I was having sessions where CC and I planned for 40 minutes and then it executed everything in 10. Codex just doesn't have that: one shot, adapt, one shot, adapt.
- No resume: for someone who vibe-coded from the beach using a cellphone/iPad/Mac against a Hetzner server, not having resume capability is a big struggle. Yes, I used to dread compaction, but I could continue for days on a five-times-compacted conversation, with multiple running at a time, and it was a joy.
- UI/UX is very bad overall. I don't like how it talks, how it processes requests, how long steering takes, or how it doesn't teach me anything along the way.
More and more thoughts are growing in me, but this is the experience of someone having spent 16 hours a day in Claude Code for the past weeks and who tried Codex for the past 24 hours with huge frustration and disappointment.
What's your experience trying out Codex for real? Am I the only one who really dislikes it, or is it genuinely a skill issue of having to step up where CC was forgiving and welcoming?
Get started with Codex, OpenAI's coding agent, in this step-by-step onboarding walkthrough. You'll learn how to install Codex, set up the CLI and VS Code extension, configure your workflow, and use Agents.md and prompting patterns to write, review, and reason across a real codebase.
This video covers:
Installing Codex (CLI + IDE)
Setting up a repo and getting your first runs working
Writing a great Agents.md (patterns + best practices)
Configuring Codex for your environment
Prompting patterns for more consistent results
Tips for using Codex in the CLI and IDE
Advanced workflows: headless mode + SDK
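Since the walkthrough covers writing an Agents.md: it's just a markdown file of project instructions the agent reads at the start of a session. A minimal illustrative sketch; every project detail, path, and command below is made up for the example, so substitute your own:

```markdown
# Agents.md (illustrative example; all paths and commands are hypothetical)

## Project overview
- TypeScript monorepo: app code in `apps/`, shared libraries in `packages/`.

## Conventions
- Run `npm test` and `npm run lint` before declaring a task done.
- Keep diffs small and focused; do not reformat unrelated files.

## Boundaries
- Never edit files under `migrations/` without asking first.
```

Short, concrete rules like these tend to work better than long prose, since the file is re-read on every run and competes with the rest of the context.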
Source: OpenAI YouTube channel
I was a Claude fanboy! So biased! I would do anything to code with Claude Code. I don't know why, but I had the impression that GPT was generic and boring to code with, ever since the GPT-5.1 release, which was the worst model IMO.
So two days ago I noticed they are giving a free month trial, and I figured, "okay, I'll give it a shot."
And right now I'm so amazed by GPT-5.3 Codex. Bro, wtf? I've spent the two days since working on a very big plan in my Android app, and it is delivering flawlessly. It completes big phases in one go, and the results are insanely good.
I tried this plan with Gemini 3.1 and Opus 4.6 in Antigravity (a different IDE) and reverted my files two or three times because they kept breaking my functions and files during implementation.
I just feel so happy and grateful, haha; it's like I found a gem. I needed this so badly! It's a time saver, and it always delivers the task with zero compilation errors or bugs. And the plan I'm doing is insanely complicated. Wow 😲
Edit: I never let GPT do anything UI-related, because I know Claude is superior in that area.
There's a wide consensus on reddit (or at least it appears to me that way) that Claude is superior. I'm trying to piece together why this is so.
Let's compare the latest models that were each released within minutes of each other - Codex 5.3 xhigh vs Opus 4.6. I have a plus plan on both - the 20 usd/mo one - so I regularly use both and compare them against each other.
In my observation, I've noticed that:
- While Claude is faster, it runs into usage limits MUCH quicker.
- Performance overall is comparable. Codex 5.3 xhigh just runs until it's satisfied it has done the job correctly.
- For very long usage episodes, the drawback of xhigh is that the earlier context winds up pruned. I haven't experimented much with using high instead of xhigh for these occasions.
- Both models are great at one-shotting tasks. However, Codex 5.3 xhigh seems to have a minor edge in aligning with my app's best practices, because of its tendency to explore as much as it thinks it needs. I use the same claude.md/agents.md file for both. Opus 4.6 seems more interested in finishing the task ASAP, and while it generally does a great job, occasionally I need to tell it something like "please tweak your implementation to follow the structure of this other similar implementation from another service".
I'm working on a fairly complex app (both backend and frontend), and in my experience the faster speed of Claude, while nice, isn't anywhere close to enough by itself to make it superior to Codex. Overall, performance carries the most weight, and it's not clear to me that Claude edges ahead there.
Interested to hear from others who've compared both. I'm not sure if there's something I could be doing differently to better use either Claude or Codex.
Hey folks, Codex was just announced in ChatGPT, and it seems great. I am a Software Dev and it can really accelerate my projects.
I’ve been a pro user, but switched to Plus as it didn’t feel like there was enough benefit. Now, it feels like Codex is making it worth it again.
I know it’s coming to Plus later on, but inevitably there’ll be restrictions. For someone like me, whose career is coding, $200 a month feels very justified.
What do you think?
On a project with 200k lines of code, so a medium to large project, it thought for 20 minutes but worked better than a mid-level developer would have. It added a feature of medium complexity, added tests covering edge cases I wouldn't have thought of, wrote complete documentation (something developers don't really do because they don't like it)... and the API worked right away. It also integrated with the existing modules very well.
If you still think AI is weak at development, try Codex.
PS: I know I'm going to get downvotes from skeptics, but that doesn't change reality - if you don't advance towards senior, you're going to have a hard time in the coming years.
PS 2: The company is a product company (4 well-known products in total), 200 developers, 800+ employees on 4 continents, we have a contract with OpenAI, all developers have received access. No developers have been fired, but junior and mid-level positions have been reduced quite a bit. On the other hand, 150 support employees were fired and replaced by AI.
PS 3: I see a lot of fear, otherwise I don't understand where all the hate for AI comes from. I think it was the same when cars appeared. You can stay in denial as long as you want, but it's naive to ignore the progress made so far and that it's increasingly used by companies. Even if it didn't progress at all and the AI bubble burst, as long as it saves time, companies will still produce more with a smaller number of developers.
PS 4: I can't say the product, but you might be using it. The code is from a new version of a smaller product of the 4, but it's still used by > 1 million customers.
To be clear, the AI didn't write all 200k lines of code; if you didn't understand even that, then sure, you can panic about losing your job. But it added, in 20 minutes, a feature that would have taken someone a few days.
I'm building a language learning platform mostly with Claude Code, though I do use Gemini CLI and ChatGPT for some things. But CC is the main developer. Today I wanted to test Codex and wow, I'm loving it. Compared to CC, it is much more moderate: when you ask it to refactor something or modify the UI of a feature, it does exactly what you asked. It doesn't go overboard, it doesn't do things you didn't ask for, and it works incrementally, so you can always ask it to go one step further. Everything I've had it do so far has gone smoothly, without getting stuck in a loop, and even the design side is very good. I asked it to re-design an admin feature and give me 5 designs, and I loved all of them. If you haven't tried it, give it a try. It's a great addition to your AI team!
I thought I would sub to the $200 plan and pass `gpt-5-pro` as the model, but Codex said that it is an unsupported model.
Major question: if I just use Codex with `gpt-5`, do I expect the GPT Pro stuff to kick in and blow my mind away?
Of course I need to be smart with my prompts and what I'm asking it to do.
For context, I work with backends and frontends and devops, what is the craziest thing you have made Pro and Codex do for you recently with GPT-5?