🌐
Nxcode
nxcode.io › home › resources › news › claude code vs codex cli 2026: which terminal ai coding agent wins?
Claude Code vs Codex CLI 2026: Which Terminal AI Coding Agent Wins? | NxCode
2 weeks ago - Codex CLI for autonomous tasks, DevOps, and cost-sensitive workflows. March 2026 — Terminal-based AI coding agents have become the default tool for serious developers. The two dominant players — Anthropic's Claude Code and OpenAI's Codex CLI — both operate from the command line, both handle multi-file edits autonomously, and both promise to transform how you write software.
🌐
Zenvanriel
zenvanriel.com › ai-engineer-blog › claude-code-vs-openai-codex-cli-comparison
Claude Code vs OpenAI Codex Mastery-Driven CLI Comparison
2 days ago - Claude Code treats every request like a collaboration. The CLI encourages context-rich prompts, retrieval of multiple files, and incremental improvements. It often asks clarifying questions before executing a change, keeping you in the loop ...
Discussions

A few thoughts on Codex CLI vs. Claude Code
I’m starting to really like Codex CLI with GPT-5. It took me some time to get the settings right, but now it’s working quite well. Claude can go off the rails easily and often, and can also be lazy and cheat. But GPT-5 seems to be well balanced, not going too crazy in either direction. I wish there was a $100 plan like Claude’s.
🌐 r/ClaudeAI
128
196
August 18, 2025
People also ask

Is Claude Code better than Codex CLI for coding?
Claude Code produces higher quality code (67% win rate in blind tests) and scores 80.9% on SWE-bench Verified. However, Codex CLI leads Terminal-Bench 2.0 at 77.3% and is 4x more token-efficient. Claude Code excels at complex refactors and frontend work, while Codex CLI is better for DevOps and autonomous tasks.
🌐
nxcode.io
nxcode.io › home › resources › news › claude code vs codex cli 2026: which terminal ai coding agent wins?
Claude Code vs Codex CLI 2026: Which Terminal AI Coding Agent Wins?
Can I use Claude Code and Codex CLI together?
Yes, many developers use a hybrid workflow. Claude Code handles architecture design, complex features, and frontend/UI tasks where code quality matters most. Codex CLI handles code review, security scanning, autonomous implementation, and DevOps tasks where speed and efficiency matter more.
Which is cheaper, Claude Code or Codex CLI?
Both start at $20/month. Claude Code Pro gives ~44,000 tokens per 5-hour window, which runs out quickly on complex tasks. Codex CLI with ChatGPT Plus gives 33-168 messages depending on model, and is 4x more token-efficient. For budget-conscious developers, Codex CLI offers better value at the $20 tier.
🌐
DataCamp
datacamp.com › blog › codex-vs-claude-code
Codex vs. Claude Code: AI Coding Assistants Compared | DataCamp
March 4, 2026 - Explore what’s new in Claude Code 2.1 by running a set of focused experiments on an existing project repository within CLI and web workflows. ... Learn to use OpenAI Codex CLI to build a website and deploy a machine learning model with a custom user interface using a single command.
🌐
Northflank
northflank.com › blog › claude-code-vs-openai-codex
Claude Code vs OpenAI Codex: which is better in 2026? | Blog — Northflank
... The major difference between ... is this: Claude Code emphasizes a developer-in-the-loop, local workflow using the terminal, while OpenAI's Codex agent is designed for both local and autonomous, cloud-based task delegation that can handle ...
🌐
Medium
bytebridge.medium.com › opencode-vs-claude-code-vs-openai-codex-a-comprehensive-comparison-of-ai-coding-assistants-bd5078437c01
OpenCode vs Claude Code vs OpenAI Codex: A Comprehensive Comparison of AI Coding Assistants | by ByteBridge | Medium
February 5, 2026 - OpenCode is not far behind; in fact, one author notes OpenCode “has more features: sub-agents, custom hooks, lots of configuration” and generally is very similar to Claude Code in capability.
🌐
Builder.io
builder.io › blog › codex-vs-claude-code
Codex vs Claude Code: which is the better AI coding agent?
September 28, 2025 - Claude Code in particular has closed a lot of the gap since this article was first published — better UX, a VS Code extension, a web IDE, and a more polished desktop app. If you prefer Claude Code or Cursor, I completely respect that.
🌐
Reddit
reddit.com › r/chatgptcoding › codex cli vs claude code (adding features to a 500k codebase)
r/ChatGPTCoding on Reddit: Codex CLI vs Claude Code (adding features to a 500k codebase)
September 5, 2025 -

I've been testing OpenAI's Codex CLI vs Claude Code in a 500k codebase with a React Vite frontend, an ASP.NET 9 API, and a MySQL DB hosted on Azure. My takeaways from my use cases (or watch them in the YT video linked in the comments):

- Boy oh boy, Codex CLI has caught up BIG time with GPT-5 High Reasoning; I even preferred it to Claude Code in some implementations

- Codex uses GPT-5 MUCH better than other AI coding tools like Cursor do

- Vid: https://youtu.be/MBhG5__15b0

- Codex was lacking a simple YOLO mode when I tested. You had to acknowledge not running in a sandbox AND allow it to never ask for approvals, which is a bit annoying, but you can just create an alias like codex-yolo for it

- Claude Code actually had more shots (error feedback/turns) than Codex to get things done

- Claude Code still has more useful features, like subagents and hooks. Notifications from Codex are still in a bit of beta

- GPT-5 in Codex stops less often to ask questions than in other AI tools, probably because of the official GPT-5 Prompting Guide released by OpenAI

What is your experience with both tools?
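The "codex-yolo" alias mentioned in the takeaways above can be sketched roughly as follows. The flag names below exist in recent Codex CLI releases but are an assumption for your installed version; verify with `codex --help` before relying on them.

```shell
# Hypothetical "YOLO" alias for Codex CLI: never ask for approvals and
# loosen the sandbox in one shot. Flag names may differ across Codex CLI
# versions -- check `codex --help` first.
alias codex-yolo='codex --ask-for-approval never --sandbox danger-full-access'
```

With the alias defined, `codex-yolo "fix the failing tests"` runs without the usual approval prompts, so use it only in repositories you can afford to let an agent modify freely.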

Top answer
1 of 5
30
Former Claude Code user for a few months on Max 20x, and a fairly heavy user too. Loved it at the time, but it feels like at least during part of last month the quality of the model responses degraded. I found myself having to regularly steer Claude away from making changes I didn't actually agree with (yes, I use plan mode; it's highly valuable). Claude also often told me that code was production ready when it wasn't: it either failed to compile or had some kind of flaw that needed addressing.

Found out about a $1 Teams plan offer for ChatGPT, so I figured it would be a great opportunity to check out Codex CLI and GPT-5. Suffice it to say, it impressed me. I tell it what I want, and it just does that. Most tasks I've thrown at it are completed successfully in one or two shots. If I'm possibly wrong, or there's a reason to debate something first, it usually does so, while Claude would often say "you're absolutely right, ..." and blindly agree with me regardless. GPT-5 also makes far fewer assumptions than Claude, regularly replying with open questions if it has any. After it completes a task, GPT-5 will usually follow up with an idea or suggestion related to what we'd done, which I also found useful.

The biggest challenge I've given it so far was to refactor a long-overdue, messy .cs file that contained about 3k LOC. I've tried this with various other LLMs, including Claude Code (which couldn't read the entire file as it was over 25k tokens), but they ultimately just introduce bugs and mess things up. I didn't think GPT-5 would be any different, but my god, it surprised me again. I planned with it, did it in small bits and pieces at a time, and a day or so later I'm down to around 1k LOC for that file. It seems to be working fine too. I've been using Claude primarily since Sonnet 3.5, and GPT models before that, but it looks like I'm back with OpenAI unless Anthropic can "wow" me back.
For Codex CLI, I would recommend checking out the "just-every/code" fork. Much nicer UI, /plan, /solve, /code commands, multiple themes, integrated browser capability, can resume previous conversations.
2 of 5
27
GPT-5 is definitely the smarter model. CC has better scaffolding. However, Codex is open source, so it will catch up fast.
🌐
OpenReplay
blog.openreplay.com › openai-codex-vs-claude-code-cli-ai-tool
OpenAI Codex vs. Claude Code: Which CLI AI tool is best for coding?
July 3, 2025 - Claude Code uses a client-server model functioning as both an MCP (Model Context Protocol) server and client, with a context window of up to 200,000 tokens. It connects directly to Anthropic’s API without intermediate servers. OpenAI Codex CLI implements a local-first architecture, originally built with Node...
🌐
Composio
composio.dev › content › claude-code-vs-openai-codex
Claude Code vs. OpenAI Codex | Composio
And at the same time, Claude Code has been evolving day by day, to a perfect AI Agent with a list of features like subagents, slash commands, MCP support, and so much more. While I still prefer Claude Code, I thought ...
🌐
Lowcode
lowcode.agency › blog › claude-code-vs-codex-cli
Claude Code vs OpenAI Codex CLI: Which Coding Agent Is Better?
2 days ago - The benchmark numbers are close, but they measure different things, and the gap widens in production. Codex CLI's o3 model scores 71.7% on SWE-bench Verified (OpenAI, April 2026) for isolated repository tasks.
🌐
Reddit
reddit.com › r/claudecode › codex vs claude code
r/ClaudeCode on Reddit: Codex Vs Claude code
August 30, 2025 -

For those who have already tested the Codex, what do you think?

Top answer
1 of 5
20
From the perspective of using the pure model, I think GPT-5 has fully reached the level of Sonnet 4, and in some cases even surpasses it. As for Codex, I’ve tried both Codex CLI and Codex in VS Code. They already have a certain degree of usability, but they do lack quite a few features, and the gap with Claude Code is still significant. Moreover, I don’t understand why Codex’s MCP doesn’t adopt the common approach.
2 of 5
15
Codex web or Codex CLI? Here's my basic comparison, based on my experience:

- Claude Code: sophisticated feature set, good UI, but Claude models appear to have some noticeable issues, such as the "you're absolutely right" habit of blindly agreeing with you without discussion or debate. Even with a plan, it can often do extra things you didn't actually want, and it's not too difficult to run out of context if you have to steer Claude. Unfortunately, it's effectively closed source.

- Codex CLI: basic feature set (but improving) and a basic UI currently; however, GPT-5 appears to adhere to my instructions much more strongly than Claude does, even without a plan. If it believes I'm wrong about something, or needs further clarification, it will discuss first rather than make bold assumptions. I don't have to regularly steer it like I do with Claude, and I don't have to worry about the context window running out at the most inconvenient moment. It simply gets tasks completed. It's also open source, which means anyone can contribute to the code or fork their own version of Codex CLI.

I've been on Claude Max 20x for a few months, loved it at the time, but I'm likely going to cancel very soon and switch to ChatGPT Pro instead.
🌐
DEV Community
dev.to › composiodev › claude-code-vs-open-ai-codex-which-one-is-best-for-pair-programming-2jhl
Claude Code Vs. Open AI Codex, which one is best for pair programming? 🎯 - DEV Community
May 30, 2025 - Ignoring the markdown in the output, I would like to go with OpenAI Codex as it provides more detailed explanations and describes the repository in a much better manner. However, if prompt rewriting is not an issue, I'd choose Claude Code due ...
🌐
Educative
educative.io › blog › claude-code-vs-openai-codex-cli
Claude Code vs. OpenAI Codex CLI: The right tool for developers
October 3, 2025 - By contrast, Codex command-line interface (CLI) is OpenAI’s lightweight terminal tool for AI-assisted coding. The philosophy behind Codex CLI is modular and flexible. Rather than trying to be an all-in-one coding environment, it focuses on being a highly capable code generator and problem solver that integrates into your existing workflow. Its open-source nature means developers can modify, extend, and customize it to fit their needs—something impossible with Claude Code’s more closed ecosystem.
🌐
Apidog
apidog.com › blog › claude-vs-codex-comparison-2026
Claude Code vs OpenAI Codex in 2026: Anthropic vs OpenAI for AI coding
2 days ago - Claude Code is better for production systems and complex codebases; Codex is better for rapid prototyping and parallel workflows. Both cost $20/month base. Claude Code (Anthropic) and OpenAI Codex represent the two dominant AI coding agent ...
🌐
Reddit
reddit.com › r/vibecoding › which cli ai coding tool to use right now? codex cli vs. claude code vs. sth else?
r/vibecoding on Reddit: Which CLI AI coding tool to use right now? Codex CLI vs. Claude Code vs. sth else?
September 19, 2025 -

I have mostly used Windsurf and Kilo Code to build around 8 projects; the most complicated one is a Flutter iOS & Android app with approx. 750 test users, using Firebase as the backend and Gemini Flash 2.5 for AI functionality.

Now I would like to start learning CLI AI coding tools. 2 months ago the choice would have been an obvious Claude Code (I have the pro subscription), but I've seen the hype around OpenAI's Codex CLI these days.

Would be great to hear from your experience:

  1. What is the difference between these 2 right now besides the LLM models?

  2. What are the usage limits for a mix of planning / coding / debugging usage? (for Claude Pro and OpenAI Plus sub)

  3. Any tips for switching from editor-based coding to terminal-based? I am slightly hesitant because I am a visual person and am afraid that I will lose the overview using the terminal. Or do you use the terminal and an editor at the same time?

  4. Are there any other options you recommend?

Top answer
1 of 7
9
Codex CLI all the way. It's not exactly clear whether GPT-5-Codex-medium and GPT-5-Codex-high actually perform better than Sonnet and Opus yet; the benchmarks aren't in. If history is any guide, Claude is probably marginally better (we're talking within a few percentage points on SWE Rebench – say Codex does 46%, Claude might do 48%), but on the flip side, Claude is many, many times more expensive than Codex CLI. That is to say, the subscriptions cost the same, but you will be rate limited far more often by Claude, effectively getting fewer prompts out of your subscription in a given month. Also, I believe GPT-5-Codex currently benchmarks far above the competition if we restrict the benchmarks to agentic coding only (i.e. vibecoding without a human also writing code) instead of a broader spectrum of pair programming, code completions, etc.

Some would argue Claude designs better frontends, which I suppose can in some ways be considered true, but the downside is that you're getting generic frontend #1928482. You should always consider design an involved process, even with AI, because AIs crucially have neither eyes nor human aesthetic sensibilities. An AI does not care if a design is not visually cohesive, as long as it looks correct in the CSS.

So to address your questions by number:

  1. They're almost exactly equal in capability. Claude is (probably – we don't know with the new Codex model yet) a tiny bit better, but the downside is you're getting much less bang for your buck for that marginal improvement – an improvement you're honestly unlikely to notice anyway.

  2. Neither provider currently lists numeric rate limits; they seem to be based a lot on traffic and demand, i.e. you'll get more usage during low-traffic hours than during a surge. What is absolutely undeniable, however, is that Codex CLI currently offers the higher quota of the two. By Anthropic's own math, you get about 45 requests per 5 hours on the Pro plan on the high end (short conversations, simple requests, low demand), whereas on the comparable Codex CLI Plus you get anywhere from 50 to 150 in that same timespan for actually demanding requests. I don't know what the limits are for the Max versions, but I assume there's some kind of logical scaling up, so presumably Codex would still be far cheaper measured by subscriptionCost / maxPossibleRequestsPerMonth. Though use case will ultimately determine whether that difference ends up mattering to you. I use GitHub Copilot for a lot of work stuff, but in my free time I use ChatGPT Plus (not even Pro) and I have never, not once, been rate limited in the Codex CLI despite throwing some very heavy shit at it.

  3. You could stay in an IDE if you wanted to; there's both a Claude and a Codex extension for VS Code. What I do is honestly just code in the terminal for the most part, while I run my server in a separate terminal tab, and then I just refresh (or hot-reload) the localhost server in my browser and watch the software progress as I go. This is, of course, not possible if you're doing split backend and frontend development (which can often be helpful), but then you could, for example, surface a very barebones skeleton UI just to test the backend functionality and replace it with a real frontend once you're sure the backend works. If you really want a completely visual editor (Lovable-style, code well hidden) – I would strongly suggest you don't – it is possible to do in a better way: as of yesterday, Convex made Chef (Lovable but better, made by a reputable company prioritizing security above all else) open source and self-hostable. So that's an option now. I strongly advise against this route because you will learn nothing at all, but if you must, go with Chef over the competition. Bringing your own API key is much cheaper anyway.

  4. No. Go with Codex CLI. If you want to cut costs, you could go with an open-source or free (with data sharing) model – open-source examples could be Kimi or GLM 4.5, while free proprietary models could be something like Sonoma Sky Alpha or DeepSeek 3.1. Keep in mind that unless you self-host these, you will 100% be data-sharing, because that's the only reason you're getting the free compute. You can access those through OpenRouter, but to avoid rate limiting you have to top up a minimum of $11 worth of credits in your OpenRouter wallet (it won't be spent; it's probably an anti-abuse guard).
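As a back-of-the-envelope check of the value claim in the answer above, here is the cost-per-request arithmetic using the quoted (unofficial, demand-dependent) figures: about 45 requests per 5-hour window on Claude Pro versus 50-150 on ChatGPT Plus with Codex CLI, both at $20/month.

```shell
# Rough cost per request from the quoted rate-limit figures.
# ~730 hours in an average month => 146 five-hour rate-limit windows.
cost_per_request() {  # $1 = monthly price in dollars, $2 = requests per 5h window
  awk -v price="$1" -v per_window="$2" \
    'BEGIN { printf "%.5f\n", price / ((730 / 5) * per_window) }'
}
cost_per_request 20 45    # Claude Pro, optimistic upper bound
cost_per_request 20 150   # Codex CLI on ChatGPT Plus, upper bound
```

Under these assumptions Claude Pro works out to roughly $0.003 per request versus under $0.001 for Codex CLI at the high end, which is the "much less bang for your buck" point in numbers; real limits vary with demand, so treat this as illustration only.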
2 of 7
3
Most people would say Claude Code. Except for the people who post on r/ClaudeAI and r/Anthropic , they seem to fucking hate it.
🌐
Reddit
reddit.com › r/claudeai › a few thoughts on codex cli vs. claude code
r/ClaudeAI on Reddit: A few thoughts on Codex CLI vs. Claude Code
August 18, 2025 -

Opus 4.1 is a beast of a coding model, but I'd suggest to any Claude Max user to at least try Codex CLI for a day. It can also use your ChatGPT subscription now and I've been getting a ton of usage out of my Plus tier. Even with Sonnet, Claude Pro would have limited me LONG ago.

A few thoughts:

  • While I still prefer CC + Opus 4.1 overall, I actually prefer the code that Codex CLI + GPT-5 writes. It's closer to the code I'd also write.

  • I've used CC over Bedrock and Vertex for work and the rate limits were getting really ridiculous. Not sure whether this also happens with the Anthropic API, but it's really refreshing how quickly and stably GPT-5 performs over Codex CLI.

  • As of today Claude Code is a much more feature rich and complete tool compared to Codex. I miss quite a few things coming from CC, but core functionality is there and works well.

  • GPT-5 seems to have a very clear edge on debugging.

  • GPT-5 finds errors/bugs while working on something else, which I haven't noticed as strongly with Claude.

  • Codex CLI now also supports MCP, although support for image inputs doesn't seem to work.

  • Codex doesn't ship with fetch or search, so be sure to add those via MCP. I'm using my own.

  • If your budget ends at $20 per month, I think ChatGPT might be the best value for your money

What's your experience?
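Adding fetch via MCP, as the bullet above suggests, can be done through Codex CLI's config file. This is a minimal sketch under stated assumptions: that Codex CLI honors the CODEX_HOME variable (default ~/.codex), reads a config.toml with an `mcp_servers` table, and that `uvx mcp-server-fetch` launches the reference fetch server; verify all three against your installed version's documentation.

```shell
# Register a fetch MCP server for Codex CLI by appending to its config.
# Assumptions (not verified against every version): $CODEX_HOME is honored,
# config.toml uses an [mcp_servers.*] table, and mcp-server-fetch is the
# reference fetch server runnable via uvx.
CODEX_HOME="${CODEX_HOME:-$HOME/.codex}"
mkdir -p "$CODEX_HOME"
cat >> "$CODEX_HOME/config.toml" <<'EOF'
[mcp_servers.fetch]
command = "uvx"
args = ["mcp-server-fetch"]
EOF
```

After restarting a Codex session, the agent should be able to call the fetch tool; a search server can be added the same way with a second `[mcp_servers.*]` entry.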

🌐
Morph
morphllm.com › comparisons › codex-vs-claude-code
Codex vs Claude Code (2026): Benchmarks, Agent Teams & Limits Compared
February 28, 2026 - Anthropic is valued at $380B with $14B ARR. OpenAI is pushing Codex onto non-Nvidia hardware. Both companies consider coding agents their primary growth vector. Claude uses 3-4x more tokens but produces more thorough output
🌐
Reddit
reddit.com › r/claudeai › codex vs claude: my initial impressions after 6 hours with codex and months with claude.
r/ClaudeAI on Reddit: Codex Vs Claude: My initial impressions after 6 hours with Codex and months with Claude.
September 2, 2025 -

I'm not ready to call Codex a "Claude killer" just yet, but I'm definitely impressed with what I've seen over the past six hours of use.

I'm currently on Anthropic's $200/month plan (Claude's highest tier) and ChatGPT's $20 Plus plan. Since this was my first time trying ChatGPT, I started with the Plus tier to get a feel for it. (There is also a $200 Pro tier available for ChatGPT.) This past week, Claude has been underperforming significantly, and I'm not alone in noticing this. After seeing many users discuss ChatGPT's coding capabilities, I decided to give Codex a shot, and I was impressed. I had two persistent coding issues that Claude couldn't resolve, and ChatGPT fixed both of them easily, in one prompt.

There are also a few other things I like about Codex so far. It has better listening skills: it pays closer attention to my specific requests, it admits mistakes, it collaborates better on troubleshooting by asking clarifying questions about my code, and its responses are noticeably quicker than Claude Opus.

However, ChatGPT isn't perfect either. I'm currently dealing with a state persistence issue that neither AI has been able to solve. Additionally, since I've only used ChatGPT for six hours, compared to months with Claude, I may have given it tasks it excels at.

Bottom line: I'm genuinely impressed with ChatGPT's performance, but I'm not abandoning Claude just yet. However, if you haven't tried ChatGPT for coding, I'd definitely recommend giving it a shot – it performed exceptionally well for my specific use cases. It may be that going forward I use both to finish my projects.

Edit: to install, make sure you have Node.js installed on your computer, then run

npm install -g @openai/codex

You can also install using Homebrew by running:

brew install codex

🌐
Reddit
reddit.com › r/claudecode › claude code vs codex vs opencode, which one is actually worth using?
r/ClaudeCode on Reddit: Claude Code vs Codex vs OpenCode, which one is actually worth using?
1 week ago -

From what I’ve seen so far, Claude Code seems to have the best overall reviews in terms of quality and performance. The main downside for me is that it’s locked behind a company and not open source (I know about the leak, but I’m more interested in something officially open and actively maintained).

Codex, on the other hand, looks really appealing because it’s open source and allows for forks, which gives it a lot more flexibility and long-term potential.

Then there’s OpenCode, probably the most interesting of the three. It has a huge community and a lot of momentum, but I’m not sure if it’s actually on par with the others in real-world use.

Curious to hear your thoughts, how do these compare in practice? Is OpenCode actually competitive, or is it more hype than substance?

Oh, and by Claude I'm referring to the open-sourced forks that are coming (which we don't know whether they will be kept updated), not the proprietary one, which I'd never use.

🌐
The New Stack
thenewstack.io › home › testing openai codex and comparing it to claude code
Testing OpenAI Codex and Comparing It to Claude Code - The New Stack
June 28, 2025 - To start an interactive session, just use the command codex: It actually has a better starting summary than its agentic competition, like Claude Code, because it immediately states that it makes suggestions and seeks approval before doing anything ...