I'm a die-hard fan of OpenCode because of the free models, how easy it is to use subagents, and just because it's nice. But I wonder: does anyone find GPT models work better in Codex? I can't imagine why they possibly would, but maybe the models are just trained that way, so they "know" the tools etc.? Has anyone noticed anything like that?
Hi everyone! I came across Opencode and decided to try it out—was curious. I chose Codex (I have a subscription). I was genuinely surprised by how easy it was to communicate in planning mode with gpt-5.2-low: discussing tasks, planning, and clarifying details felt much smoother. Before, using the extension or the CLI was pretty tough—the conversation felt “dry.” But now it feels like I’m chatting with Claude or Gemini. I entered the exact same command—the answers are essentially the same, but Opencode explains it much more clearly. Could someone tell me what the secret is?
edit#1:
Second day testing the opencode + gpt-5.2-medium setup, and the difference is huge. With Codex CLI and the extension, it was hard for me to properly discuss tasks and plan — the conversation felt dry. Here, I can spend the whole day calmly talking things through and breaking plans down step by step. It genuinely feels like working with Opus, and sometimes even better. I’m using it specifically for planning and discussion, not for writing code. I don’t fully understand how opencode achieves this effect — it doesn’t seem like something you can get just by tweaking rules. With Codex CLI, it felt like talking to a robot; now it feels like talking to a genuinely understanding person.
Was wondering about the opinions of people who have used opencode in the past and tried the Codex app. I basically use OpenAI models exclusively in my workflow through opencode and am wondering what makes it worth switching to the Codex app.
Recently opencode added ChatGPT support (not just API keys; subscription plans work too). Has anybody used that? How does the opencode CLI perform against the codex CLI?
Fellow Codex users, is anyone using Codex in OpenCode or https://github.com/code-yeongyu/oh-my-opencode? I want to know what the general consensus is on this: is it advised, or do you think just using the Codex CLI is possibly better? I'm seeing lots of hype around OpenCode, so I want to hear people's thoughts and whether they've tried it. (Also, if you use Codex with it, does it charge your API key, or can you use your weekly Codex limit from the ChatGPT plan?) Thanks.
Hi,
What are the advantages of using Codex GPT-5.3 (high) inside opencode over using Codex the traditional way in the terminal? It's for use inside VSC, mostly on projects that revolve around PHP, JS, and/or Laravel, to give you a bit more context.
Speaking of context: does running Codex inside opencode change anything about the context window?
I know the biggest advantage of opencode is that you can switch models, but apart from that I'm wondering what more opencode offers, in terms of advantages, over just using Codex in the terminal in VSC (instead of opencode inside the terminal in VSC).
Thank you all!
PS: I'm not a native English speaker and didn't use AI to rewrite my text so hopefully it was understandable :)
Like the title suggests: which one gives better performance, in y'all's experience?
Hey everyone,
I've been seeing a lot of people moving from Claude Code to Codex, so I'm thinking about giving it a try.
For those who’ve used it:
Do you prefer Codex CLI or GPT-5.2 Codex on OpenCode?
What’s the best way to use Codex day-to-day (workflow, setup, tips)?
Thanks!
Hey :),
Is Codex Plus or OpenCode Go a better deal for my money? I don't want to spend more than $20 on my CLI AI agent. What's the best deal for my money? I don't want to vibe code; I use AI only for questions, debugging, and simple tasks :)
Thanks and Best regards :)
switched to ChatGPT Pro not too long ago and i genuinely love codex - simple tool, does what it needs to do, no fluff
but opencode is on another level as a harness. subagents, grep tools, proper file navigation - it's a much more serious setup for real engineering work
and the fact that you're letting us use it freely with the same limits as codex is huge. props for not gatekeeping it unlike, well, you know who
appreciate it OpenAI, this is how you treat your users
I'm trying to figure out what the differences between opencode and CC are when it comes to actual output, not the features they have per se, and how we can make the most of those features depending on the use case.
I had a recent task to investigate an idea of mine and create an MVP for it. So, starting with a clean slate, I gave the same prompt in opencode using Claude Sonnet 4.7 and also GLM 4.7. In Claude Code it was Sonnet 4.5.
The output from Claude Code was far more general, and it came back with questions slightly relevant to, but not directly part of, the main prompt. Clarifying them gave the task a broader scope.
Opencode, on the other hand, directly provided implementation suggestions using existing libraries and tools. The output was the same or similar for both models.
I'm interested to know what workflows others have and how they choose the best tool for the job. And if you have any special prompts you use, I'd love to hear from you.
I'm really interested in the project since I love open source, but I'm not sure what the pros of using OpenCode are.
I love using Codex with the VSC extension, and I'm not sure whether I can get the same dev experience with OpenCode.
Hi all. Can anyone tell me the benefit of using codex via oauth in opencode CLI over just using codex CLI?
At the moment my workflow is to chat through my ideas with ChatGPT. Formulate a plan and then hand that off to Codex with guardrails. Codex makes the changes to my codebase, produces a diff and a summary which ChatGPT checks and if we’re happy, I commit and push. All in a Linux VM using codex in VScode IDE.
So, what would OC bring to the table!?
So far I've made an off-market property-sourcing app, using Python to make API calls that enrich a DuckDB database, surface it in Streamlit, and pump out communications and business-information material. It's all been mega new to me. I can't code, and before Sep '24 I hadn't even touched AI, never mind heard of Python, which is why I need to source lots and lots of advice from a chatbot before committing to a certain direction.
This is just the beginning for me and I read non-stop on the subject. It’s all incredibly exciting and I’m obsessed with the possibilities for this app and beyond.
Opus 4.6 + Claude Code is insane, it 1 shots complicated changes across the code bases I work on professionally.
Locally, I was using the codex CLI, but the results were always meh. I recently moved to using my ChatGPT Plus subscription with OpenCode to run 5.3-codex, and the harness is soooo much better than the codex CLI, Mac app, or VS Code extension.
The results are consistently higher quality; it feels like OpenCode is somehow able to provide much better context.
The one thing I haven't been able to figure out is how to set the reasoning level for 5.3-codex via OpenCode.
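Not an answer from the thread, but one place to look: OpenCode reads a JSON config that can pass per-model provider options. A sketch like the following is the usual pattern; the exact model ID and the `reasoningEffort` key are assumptions here, so check the OpenCode docs for your version:

```json
{
  "provider": {
    "openai": {
      "models": {
        "gpt-5.3-codex": {
          "options": {
            "reasoningEffort": "high"
          }
        }
      }
    }
  }
}
```

If the option is accepted, it should be forwarded to the model request the same way reasoning effort is set on OpenAI's API.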
A few weeks ago my "Private-Reddit-Alter-Ego" started and participated in some discussions about subagents, prompts, and harnesses. In particular, there was a discussion about the famous "oh-my-opencode" plugin and its value. I also discussed optimizing and shortening some system prompts with a few people, especially for the codex model.
Someone told me that if I wanted to complain about oh-my-opencode, I should go and write a better harness. I had actually started with an idea back in the summer but never finished the prototype. I got a bit of spare time, so now I have it running and am still testing it. BTW: my idea was to have controlled and steerable subagents instead of fire-and-forget, text-based subagents.
I am a big fan of benchmarking and quantitative analysis. To get clear results, I wrote a small project that uses the opencode API to benchmark different agents and prompts, plus a small testbed script that lets you run the same benchmark over and over to get comparable results. The test data is also included in the project: two projects of artificial code generated by Gemini and a set of tasks to solve. Pretty easy, but I wanted to measure efficiency, not an agent's ability to solve a task. Tests are included to allow self-verification as the definition of done.
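The repeat-and-compare idea behind such a testbed can be sketched in a few lines. This is a minimal illustration, not the project's actual script; the file layout and field names (`agent`, `total_tokens`, `peak_context`) are made up for the example:

```python
import json
import statistics
from pathlib import Path


def summarize_runs(results_dir: str) -> dict:
    """Aggregate per-run token stats so repeated benchmark runs are comparable.

    Assumes each run wrote a JSON file like:
      {"agent": "codex-prompt", "total_tokens": 812000, "peak_context": 98000}
    """
    by_agent: dict[str, list[dict]] = {}
    for path in Path(results_dir).glob("*.json"):
        run = json.loads(path.read_text())
        by_agent.setdefault(run["agent"], []).append(run)

    summary = {}
    for agent, runs in by_agent.items():
        summary[agent] = {
            "runs": len(runs),
            # Mean total tokens across runs smooths out per-run variance.
            "mean_total_tokens": statistics.mean(r["total_tokens"] for r in runs),
            # Peak context is the efficiency metric the post focuses on.
            "max_peak_context": max(r["peak_context"] for r in runs),
        }
    return summary
```

Running the same benchmark N times and comparing these aggregates per agent/prompt is what makes results like "100k ctx vs. 180k ctx" meaningful rather than one-off noise.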
Every model in the benchmark solved all tasks in the small "Chimera" benchmark (even Devstral 2 Small, not listed). But the number of tokens needed for these agentic tasks was a big surprise to me. The table shows the results for the bigger "Phoenix" benchmark. The top scorer used up 180k context and 4M tokens in total (incl. cache), while the best result was about 100k ctx and 800k total.
Some observations from my runs:
- oh-my-opencode: Doesn't spawn subagents, but seems generous (...) with tokens based on its prompt design. Context usage was the highest in the benchmark.
- DCP Plugin: Brings value to Opus and Gemini Flash: it lowers context and cache usage as expected. However, for Opus it increases computed tokens, which could drain your token budget or increase costs on the API.
- codex prompt: The new codex prompt is remarkably efficient. DCP reduces quality here – expected, since the Responses API already seems to optimize in the background.
- codex modded: The optimized codex prompt with subagent encouragement performed worse than the new original codex prompt.
- subagents in general: Using the task tool and subagents doesn't seem to make a big difference in context usage. Delegation seems a bit overhyped these days, tbh.
Even my own subagent plugin (to be published later) doesn't really make a very big difference in context usage. The numbers from my runs still show that the lead agent needs to do significant work to keep its subs controlled and coordinated. But, and this part is not really finished yet, it might become useful for integrating locally running models as intelligent worker nodes, or for increasing quality by working with explicit fine-grained plans. E.g. I made really good progress with Devstral 2 Small controlled by Gemini Flash or Opus.
That's it for now. Unfortunately I need to get back to business next week, and I wanted to publish a few projects so they don't pile up on my desk. In case anyone would like to do some benchmarking or efficiency analysis, here's the repository: https://github.com/DasDigitaleMomentum/opencode-agent-evaluator
Have Fun! Comments, PRs are welcome.
EDIT: Here you can find an OpenCode-only implementation of my subagent framework: https://www.reddit.com/r/opencodeCLI/comments/1reu076/controlled_subagents_for_implementation_using/
Hey devs! Claude Code has become a mess because of the usage limits, and Codex is slow and kind of dumb, in my opinion.
Have you been using any open-source alternative for your projects that comes close to these two agents?
Which is better for coding: Claude Code, Codex, OpenCode, or OpenClaw?
And which cloud-based open source Ollama model works best with the strongest of these coding tools?
Like the title says, I've been a Claude Code user and a fan of it, but after hearing the founder of OpenClaw say that he had used Codex and preferred it, I decided to try it myself and was pleasantly surprised by the experience. Just wondering if there are other reasons you all like Codex over other AI coding tools, since I'm still new to Codex. Any personal favorite features would be much appreciated <3