Is OpenAI’s New $200/Month “Pro” Subscription Really Worth It?
Just saw the announcement for OpenAI’s new “Pro” plan at a whopping $200 a month. Supposedly, this gives you unlimited access to their new o1 model, which is basically a fancy method of internally refining responses through multiple iterations until it produces four separate outputs it considers “correct.” But here’s the catch: this process can take up to ten minutes. That’s right - ten minutes for what boils down to something I could replicate with a bit of clever prompting myself.
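For what it’s worth, that “iterate until several outputs agree” behavior sounds a lot like the self-consistency trick from the research literature, and you can approximate it yourself with a few lines against the API. The sketch below is my own guess at the pattern - the model name, sample count, and majority vote are illustrative assumptions, not anything OpenAI has published about o1’s internals:

```python
import os
from collections import Counter
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in your environment.
# Nothing here reflects o1's actual pipeline - it's plain self-consistency sampling.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def self_consistent_answer(question: str, samples: int = 4) -> str:
    """Sample several answers at high temperature, then keep the most common one."""
    answers = []
    for _ in range(samples):
        response = client.chat.completions.create(
            model="gpt-4o",   # illustrative model choice
            temperature=1.0,  # diversity between samples is the point here
            messages=[{
                "role": "user",
                "content": question + "\nReply with the final answer only.",
            }],
        )
        answers.append(response.choices[0].message.content.strip())
    # Majority vote across the sampled outputs stands in for "considered correct".
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("What is 17 * 24?"))
```

A handful of samples from a cheap chat model comes back in seconds, not ten minutes.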
They’ve also removed the usual limitations, including the caps on that much-hyped Advanced Voice Mode. The regular Plus plan already includes all of these features, just with usage caps: daily message limits, a 45-minute cap on voice mode, and shorter “reasoning time” for o1. Now, for $200 a month, you can supposedly do it all limitlessly. But seriously, is that worth the price hike?
Honestly, you can pull off the same reflection-based improvements using the API for far less. The research behind this “reflection” technique has been public for ages. In fact, you can do something as simple as asking, “Why was the last response incorrect?” and get a refined answer without shelling out an extra $200. If you’re working in any specialized domain, you’re better off implementing your own reflection system - or even juggling two Plus accounts or a team plan for a fraction of the cost. The so-called unlimited Advanced Voice Mode doesn’t justify that price tag either. It’s glitchy, tends to interrupt you, and you often have to start over from scratch. It’s not even supported in GPTs, and there’s no web search yet. Come on, if I’m paying $200 a month, I’d expect a rock-solid experience.
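And the reflection half is even simpler. Here’s a minimal sketch of the “why was the last response incorrect?” loop I’m describing - again, the model, prompts, and number of rounds are my own illustrative choices, not a reproduction of whatever Pro is doing internally:

```python
import os
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in your environment; gpt-4o is an illustrative choice.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def ask(messages: list[dict]) -> str:
    """One chat completion call, returning the assistant's text."""
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

def reflect(question: str, rounds: int = 2) -> str:
    """Naive reflection loop: answer, self-critique, revise."""
    messages = [{"role": "user", "content": question}]
    answer = ask(messages)
    for _ in range(rounds):
        # Feed the model its own answer back and ask it to find the flaws.
        messages += [
            {"role": "assistant", "content": answer},
            {"role": "user", "content":
                "Why might the last response be incorrect or incomplete? "
                "List the flaws, then give a corrected final answer."},
        ]
        answer = ask(messages)
    return answer

print(reflect("How many prime numbers are there between 100 and 130?"))
```

A couple of critique rounds per question runs at ordinary pay-as-you-go API rates - nowhere near $200 a month unless you’re hammering it all day.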
The entire direction OpenAI is taking feels off. They keep stacking on new features - some half-baked, others outright buggy - just to look like they’re on the cutting edge, but it’s starting to feel like an overstuffed mess. Every new update chips away at reliability. They flaunt custom GPTs with instruction limits of up to 8,000 characters, but stability nosedives past about 4,000. By 8,000, the model is basically forgetting basic instructions you’ve hammered into it repeatedly. It’s like they’re trying to wring every last drop out of their existing architecture, and we’re the guinea pigs stuck with the fallout.
Instead of rushing out these undercooked features, OpenAI should focus on transparency and quality. Show us where GPT-5 is at. Offer real demos and progress updates. Fix your bugs. Strengthen your support systems. As someone who’s spent years professionally testing software, I know how to report bugs properly - yet reporting issues to OpenAI’s support is like shouting into the void. They don’t listen, and when they do, they can’t even distinguish between model and API issues. They’ve brushed me off, ignored legitimate bug reports, and even botched a bug bounty. It’s a joke.
Don’t get me wrong: I love ChatGPT. It’s an incredible product. But as long as OpenAI continues to milk it for every cent without ensuring quality, stability, and proper support, the entire experience will degrade. For $200 a month, I’d expect revolutionary improvements, not a messy bundle of half-working features that I can replicate myself more cheaply and reliably. OpenAI, if you’re reading this: slow down, clean up your act, and remember why people fell in love with ChatGPT in the first place.