🌐
Composio
composio.dev › blog › claude-4-5-opus-vs-gemini-3-pro-vs-gpt-5-codex-max-the-sota-coding-model
Claude 4.5 Opus vs. Gemini 3 Pro vs. GPT-5.2-codex-max: The SOTA coding model - Composio
Claude Opus 4.5: Safest overall pick from this run. It got closest in both tests and shipped working demos, even if there were rough edges (hardcoded values, weird similarity matching).
🌐
AceCloud
acecloud.ai › blog › claude-opus-4-5-vs-gemini-3-pro-vs-sonnet-4-5
Claude Opus 4.5 Vs Gemini 3 Pro Vs Sonnet 4.5 Comparison Guide
November 25, 2025 - Pick Gemini 3 Pro if you need very strong multimodal performance, a 1M-token context window by default, and tight integration with Google tools and Search. Pick Claude Opus 4.5 if you care most about frontier coding performance, deep reasoning ...
🌐
Reddit
reddit.com › r/singularity › gemini 3 pro vision benchmarks: finally compares against claude opus 4.5 and gpt-5.1
r/singularity on Reddit: Gemini 3 Pro Vision benchmarks: Finally compares against Claude Opus 4.5 and GPT-5.1
3 weeks ago -

Google has dropped the full multimodal/vision benchmarks for Gemini 3 Pro.

Key Takeaways (from the chart):

  • Visual Reasoning (MMMU Pro): Gemini 3 hits 81.0%, beating GPT-5.1 (76%) and Opus 4.5 (72%).

  • Video Understanding: It completely dominates in procedural video (YouCook2), scoring 222.7 vs GPT-5.1's 132.4.

  • Spatial Reasoning: In 3D spatial understanding (CV-Bench), it holds a massive lead (92.0%).

This Vision variant seems optimized specifically for complex spatial and video tasks, which explains the massive gap in those specific rows.

Official 🔗 : https://blog.google/technology/developers/gemini-3-pro-vision/

🌐
Reddit
reddit.com › r/geminiai › comparing claude opus 4.5 vs gpt-5.1 vs gemini 3 - coding task
r/GeminiAI on Reddit: Comparing Claude Opus 4.5 vs GPT-5.1 vs Gemini 3 - Coding Task
November 28, 2025 -

I ran all three models on a coding task just to see how they behave when things aren’t clean or nicely phrased.

The goal was just to see who performs like a real dev.

Here's my takeaway:

Opus 4.5 handled real repo issues the best. It fixed things without breaking unrelated parts and didn’t hallucinate new abstractions. Felt the most “engineering-minded.”

GPT-5.1 was close behind. It explained its reasoning step by step and sometimes added improvements I never asked for. Helpful when you want safety, annoying when you want precision.

Gemini solved most tasks but tended to optimize or simplify decisions I explicitly constrained. Good output, but sometimes too “creative.”

On refactoring and architecture-level tasks:
Opus delivered the most complete refactor with consistent naming, updated dependencies, and documentation.
GPT-5.1 took longer because it analyzed first, but the output was maintainable and defensive.
Gemini produced clean code but missed deeper security and design patterns.

Context windows (because it matters at repo scale):

  • Opus 4.5: ~200K tokens usable, handles large repos better without losing track

  • GPT-5.1: ~128K tokens but strong long-reasoning even near the limit

  • Gemini 3 Pro: ~1M tokens which is huge, but performance becomes inconsistent as input gets massive
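A rough way to act on those limits: estimate the prompt's token count and check it against each window before picking a model. Below is a minimal sketch, assuming the common ~4-characters-per-token heuristic; the model names and limits are just the figures quoted in the bullets above, not values from any official API.

```python
# Approximate context-window limits (tokens), as quoted in the post above.
CONTEXT_WINDOWS = {
    "claude-opus-4.5": 200_000,
    "gpt-5.1": 128_000,
    "gemini-3-pro": 1_000_000,
}

def estimate_tokens(text: str) -> int:
    """Rough heuristic: roughly 4 characters per token for English prose/code."""
    return max(1, len(text) // 4)

def models_that_fit(prompt: str, reserve_for_output: int = 8_000) -> list[str]:
    """Return models whose quoted window holds the prompt plus output headroom."""
    needed = estimate_tokens(prompt) + reserve_for_output
    return [name for name, limit in CONTEXT_WINDOWS.items() if limit >= needed]

if __name__ == "__main__":
    repo_dump = "x" * 1_200_000  # ~300K tokens: a large-repo-sized prompt
    print(models_that_fit(repo_dump))  # only the 1M-token window fits
```

This is only a routing heuristic; real token counts depend on each provider's tokenizer, and (as the thread notes) fitting in the window says nothing about whether the model stays coherent near its limit.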

What's your experience been with these three? I used these frontier models side by side in my multi-agent AI setup with Anannas LLM Provider, and the results were interesting.

Have you run your own comparisons, and if so, what setup are you using?

🌐
LLM Stats
llm-stats.com › models › compare › claude-opus-4-5-20251101-vs-gemini-3-pro-preview
Claude Opus 4.5 vs Gemini 3 Pro
In-depth Claude Opus 4.5 vs Gemini 3 Pro comparison: Latest benchmarks, pricing, context window, performance metrics, and technical specifications in 2025.
🌐
Substack
natesnewsletter.substack.com › p › claude-opus-45-loves-messy-real-world
I Tested Opus 4.5 Early—Here's Where It Can Save You HOURS on Complex Workflows + a Comparison vs. Gemini 3 and ChatGPT 5.1 + a Model-Picker Prompt + 15 Workflows to Get Started Now
November 25, 2025 - I tested Opus 4.5 vs. Gemini 3 vs. ChatGPT 5.1 on real-world business tasks: here's what I found, plus a complete breakdown of which model I'd use for complex workflows plus a custom model-picker!
🌐
Reddit
reddit.com › r/vibecoding › claude opus 4.5 or gemini 3? which one do you like more? and why?
r/vibecoding on Reddit: Claude Opus 4.5 or Gemini 3? Which one do you like more? And why?
3 weeks ago - I use it for planning, frontend and review. Implementation is so much better with opus. ... Opus 4.5 for everything and it's not even close. ... Opus all the way. Gemini thinks too much and I'm impatient ...
🌐
Vertu
vertu.com › best post › claude opus 4.5 vs gemini 3 pro: the week that changed ai forever
Claude Opus 4.5 vs Gemini 3 Pro: Pricing Revolution & AI Benchmark Battle | Google & Anthropic
November 25, 2025 - Gemini 3 Pro is described as the best model in the world for multimodal understanding, with significant improvements to reasoning across text, images, audio, and video. It achieved breakthrough scores including 37.5% on Humanity's Last Exam ...
🌐
Reddit
reddit.com › r/claudecode › claude code with opus 4.5 v/s gemni-cli with gemini-3-pro-preview
r/ClaudeCode on Reddit: Claude Code with Opus 4.5 v/s Gemni-cli with gemini-3-pro-preview
3 weeks ago -

I have 1 free month for Google AI Pro, so I am trying to use Gemini whenever I hit the limit of Claude. I was quite happy when I saw that gemini-cli now has gemini-3-pro-preview, which many people and benchmarks say is as good as Opus 4.5.

The usage limit is quite generous for Pro users; I am able to work for quite long sessions with it (until it hits the limit and proposes to go back to 2.5 Pro).

For simple tasks, it takes longer. But it can get the job done.

However, when my project becomes more complex, I start seeing the problem: it takes lots of time to do a simple thing, sometimes forgets things here and there, and struggles with a simple task.

After about ten minutes of back-and-forth in which it could not fix a bug, I switched to Claude Code (Opus 4.5) and, voila, it was fixed in 30 seconds.

The problem could be the context window size; bigger does not mean better. It seems that gemini-3-pro choked on its own context mess, so the "context flop" complaint appears to be true.

So, for reliable results, Claude models or Claude Code is still the best at the moment, in my opinion.

P.S. I have not tested intensively with Antigravity yet!

🌐
Geeky Gadgets
geeky-gadgets.com › home › ai › claude opus 4.5 vs gemini 3 pro: who wins the coding tests?
Claude Opus 4.5 vs Gemini 3 Pro: Who Wins the Coding Tests? - Geeky Gadgets
November 27, 2025 - Efficiency gains are a standout feature, with Claude Opus 4.5 achieving high performance while using significantly fewer tokens, making it cost-effective and scalable for large-scale projects.
🌐
Macaron
macaron.im › blog › claude-opus-4-5-vs-chatgpt-5-1-vs-gemini-3-pro
Full Technical Comparison: Claude Opus 4.5 vs. ChatGPT 5.1 vs. Google Gemini 3 Pro - Macaron
Overall, on standard benchmarks like MMLU and PiQA all three are tightly clustered at ~90% accuracy[5], but for “frontier” reasoning tests (complex math, logic puzzles), Gemini 3 Pro has an edge with its “PhD-level” performance[10]. Code ...
🌐
Data Studios
datastudios.org › post › google-gemini-3-vs-claude-opus-4-5-vs-chatgpt-5-1-full-report-and-comparison-of-models-features
Google Gemini 3 vs. Claude Opus 4.5 vs. ChatGPT 5.1: Full Report and Comparison of Models, Features, Performance, Pricing, and more
1 month ago - However, Gemini 3’s overall logical ... while maintaining coherence in its answers. Claude Opus 4.5 (Anthropic) is designed from the ground up for deep, stable reasoning....
🌐
Reddit
reddit.com › r/bard › it seems opus 4.5 is just too amazing even compared to gemini 3
r/Bard on Reddit: It seems opus 4.5 is just too amazing even compared to gemini 3
November 24, 2025 - They're both great SOTA models ... like Claude is better for agentic coding and Gemini is better for multimodality and it's cheaper. ... I was testing Gemini 3 Pro and Sonnet 4.5 side by side yesterday, and to my shock, Sonnet 4.5 is a lot better on instructions following, creativity, and doesn't hallucinate as much. If even Sonnet can go toe to toe with Gemini 3 Pro, Opus 4.5 is likely way ahead of Gemini 3 in terms of intelligence and capability outside of vision based tasks...
🌐
Bind AI IDE
blog.getbind.co › 2025 › 12 › 12 › gpt-5-2-vs-claude-opus-4-5-vs-gemini-3-0-pro-which-one-is-best-for-coding
GPT-5.2 Vs Claude Opus 4.5 Vs Gemini 3.0 Pro – Which One Is Best For Coding?
3 weeks ago - Claude Opus 4.5 excels in scenarios requiring deep code review and security analysis. Development teams report the model catches edge cases and potential vulnerabilities that other models miss, making it valuable for security-critical applications ...
🌐
Vertu
vertu.com › best post › gemini 3 pro vs claude opus 4.5: the ultimate 2025 ai model comparison
Gemini 3 Pro vs Claude Opus 4.5: Benchmarks, Coding, Multimodal, and Cost Comparison
3 weeks ago - Massive Context: 1 million token ... context. ... Coding Consistency: While strong, trails Claude Opus 4.5 on pure software engineering benchmarks like SWE-bench Verified (76.2% vs 80.9%)...
🌐
Reddit
reddit.com › r/claudeai › google’s new gemini 3 pro vision benchmarks officially recognize "claude opus 4.5" as the main competitor
r/ClaudeAI on Reddit: Google’s new Gemini 3 Pro Vision benchmarks officially recognize "Claude Opus 4.5" as the main competitor
3 weeks ago -

Google just released their full breakdown for the new Gemini 3 Pro Vision model. Interestingly, they have finally included Claude Opus 4.5 in the direct comparison, acknowledging it as the standard to beat.

The Data (from the chart):

  • Visual Reasoning: Opus 4.5 holds its own at 72.0% (MMMU Pro), sitting right between the GPT class and the new Gemini.

  • Video Understanding: While Gemini spikes in YouCook2 (222.7), Opus 4.5 (145.8) actually outperforms GPT-5.1 (132.4) in procedural video understanding.

  • The Takeaway: Google is clearly viewing Opus 4.5 as a key benchmark alongside the GPT-5 series.

Note: Posted per request to discuss how Claude's vision capabilities stack up against the new Google architecture.

Source: Google Keyword

🔗: https://blog.google/technology/developers/gemini-3-pro-vision/

🌐
Reddit
reddit.com › r/bard › is it just me or is claude 4.5 better than gemini pro 3 on antigravity
r/Bard on Reddit: Is it just me or is Claude 4.5 better than Gemini Pro 3 on Antigravity
1 month ago -

Gemini 3 Pro is quite slow and keeps making more errors compared to Claude Sonnet 4.5 on Antigravity. It was fine at the start, but the more I used it, the more it created malformed edits; at times it wasn't able to edit even a single file.

I don't know if this is a bug or whether it's just that bad. Is anyone else facing problems?

Edit: FYI, I'm experiencing this on both the Low and High versions on Fast. It is SO slow, taking up to a few minutes just to give me an initial response.

🌐
Vertu
vertu.com › best post › gpt-5.2 codex vs gemini 3 pro vs claude opus 4.5: coding comparison guide
GPT-5.2 Codex vs Gemini 3 Pro vs Claude Opus 4.5
5 days ago - Winner: GPT-5.2 Codex, though neither GPT nor Opus achieved full LeetCode acceptance. Key Insight: Despite being the cheapest, Gemini 3 Pro delivered the best results in 2 out of 3 tests. Claude Opus 4.5's premium pricing is not justified by these test results, especially for frontend/UI work.