Gemini 3 Pro is pretty much unusable in Antigravity, unfortunately. I think it works better in other tools, which is strange. Answer from danihend on reddit.com
🌐
Composio
composio.dev › blog › claude-4-5-opus-vs-gemini-3-pro-vs-gpt-5-codex-max-the-sota-coding-model
Claude 4.5 Opus vs. Gemini 3 Pro vs. GPT-5-codex-max: The SOTA coding model - Composio
WebDev Arena: Gemini 3 Pro reaches ... Opus 4.5(Claude): Outstanding at strategy and design, but its solutions tend to be elaborate, slower to integrate, and prone to practical hiccups once they hit the metal....
🌐
AceCloud
acecloud.ai › blog › claude-opus-4-5-vs-gemini-3-pro-vs-sonnet-4-5
Claude Opus 4.5 Vs Gemini 3 Pro Vs Sonnet 4.5 Comparison Guide
November 25, 2025 - Pick Gemini 3 Pro if you need very strong multimodal performance, a 1M-token context window by default, and tight integration with Google tools and Search. Pick Claude Opus 4.5 if you care most about frontier coding performance, deep reasoning ...
🌐
Reddit
reddit.com › r/singularity › gemini 3 pro vision benchmarks: finally compares against claude opus 4.5 and gpt-5.1
r/singularity on Reddit: Gemini 3 Pro Vision benchmarks: Finally compares against Claude Opus 4.5 and GPT-5.1
3 weeks ago -

Google has dropped the full multimodal/vision benchmarks for Gemini 3 Pro.

Key Takeaways (from the chart):

  • Visual Reasoning (MMMU Pro): Gemini 3 hits 81.0%, beating GPT-5.1 (76%) and Opus 4.5 (72%).

  • Video Understanding: It completely dominates in procedural video (YouCook2), scoring 222.7 vs GPT-5.1's 132.4.

  • Spatial Reasoning: In 3D spatial understanding (CV-Bench), it holds a massive lead (92.0%).

This Vision variant seems optimized specifically for complex spatial and video tasks, which explains the massive gap in those specific rows.

Official 🔗 : https://blog.google/technology/developers/gemini-3-pro-vision/

🌐
Reddit
reddit.com › r/geminiai › comparing claude opus 4.5 vs gpt-5.1 vs gemini 3 - coding task
r/GeminiAI on Reddit: Comparing Claude Opus 4.5 vs GPT-5.1 vs Gemini 3 - Coding Task
1 month ago -

I ran all three models on a coding task just to see how they behave when things aren't clean or nicely phrased.

The goal was just to see which one performs like a real dev.

Here's my takeaway:

Opus 4.5 handled real repo issues the best. It fixed things without breaking unrelated parts and didn't hallucinate new abstractions. Felt the most "engineering-minded."

GPT-5.1 was close behind. It explained its reasoning step-by-step and sometimes added improvements I never asked for. Helpful when you want safety, annoying when you want precision

Gemini solved most tasks but tended to optimize or simplify decisions I explicitly constrained. Good output, but sometimes too “creative.”

On refactoring and architecture-level tasks:
Opus delivered the most complete refactor with consistent naming, updated dependencies, and documentation.
GPT-5.1 took longer because it analyzed first, but the output was maintainable and defensive.
Gemini produced clean code but missed deeper security and design patterns.

Context windows (because it matters at repo scale):

  • Opus 4.5: ~200K tokens usable, handles large repos better without losing track

  • GPT-5.1: ~128K tokens but strong long-reasoning even near the limit

  • Gemini 3 Pro: ~1M tokens which is huge, but performance becomes inconsistent as input gets massive
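
At repo scale, those context limits become a routing decision. As a minimal sketch, here is how one might pick the smallest model whose window still fits a prompt, using the (approximate) limits quoted above. The `CONTEXT_LIMITS` values come from the post; the 4-characters-per-token estimate and the `pick_model` helper are illustrative assumptions, not a real tokenizer or any provider's API.

```python
# Sketch: route a prompt to the smallest-context model that still fits it.
# Limits are the approximate figures from the post above; the chars/4
# token estimate is a rough heuristic, not a real tokenizer.

CONTEXT_LIMITS = {
    "claude-opus-4.5": 200_000,   # ~200K tokens usable
    "gpt-5.1": 128_000,           # ~128K tokens
    "gemini-3-pro": 1_000_000,    # ~1M tokens
}

def estimate_tokens(text: str) -> int:
    """Very rough estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def pick_model(prompt: str, reserve_for_output: int = 8_000) -> str:
    """Return the smallest-context model that fits the prompt plus
    headroom for the reply. Preferring smaller windows follows the
    post's claim that huge inputs make Gemini's output inconsistent."""
    needed = estimate_tokens(prompt) + reserve_for_output
    for name, limit in sorted(CONTEXT_LIMITS.items(), key=lambda kv: kv[1]):
        if needed <= limit:
            return name
    raise ValueError("Prompt too large for any configured model")
```

A short prompt lands on the 128K model, while a repo-sized dump of several hundred thousand tokens falls through to the 1M-token window.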

What's your experience been with these three? I used these frontier models side by side in my multi-agent AI setup with the Anannas LLM provider, and the results were interesting.

Have you run your own comparisons, and if so, what setup are you using?

🌐
Substack
natesnewsletter.substack.com › p › claude-opus-45-loves-messy-real-world
I Tested Opus 4.5 Early—Here's Where It Can Save You HOURS on Complex Workflows + a Comparison vs. Gemini 3 and ChatGPT 5.1 + a Model-Picker Prompt + 15 Workflows to Get Started Now
November 25, 2025 - I tested Opus 4.5 vs. Gemini 3 vs. ChatGPT 5.1 on real-world business tasks: here's what I found, plus a complete breakdown of which model I'd use for complex workflows plus a custom model-picker!
🌐
Geeky Gadgets
geeky-gadgets.com › home › ai › claude opus 4.5 vs gemini 3 pro: who wins the coding tests?
Claude Opus 4.5 vs Gemini 3 Pro: Who Wins the Coding Tests? - Geeky Gadgets
November 27, 2025 - Efficiency gains are a standout feature, with Claude Opus 4.5 achieving high performance while using significantly fewer tokens, making it cost-effective and scalable for large-scale projects.
🌐
Macaron
macaron.im › blog › claude-opus-4-5-vs-chatgpt-5-1-vs-gemini-3-pro
Full Technical Comparison: Claude Opus 4.5 vs. ChatGPT 5.1 vs. Google Gemini 3 Pro - Macaron
Overall, on standard benchmarks like MMLU and PiQA all three are tightly clustered at ~90% accuracy[5], but for “frontier” reasoning tests (complex math, logic puzzles), Gemini 3 Pro has an edge with its “PhD-level” performance[10]. Code ...
🌐
Reddit
reddit.com › r/bard › is it just me or is claude 4.5 better than gemini pro 3 on antigravity
r/Bard on Reddit: Is it just me or is Claude 4.5 better than Gemini Pro 3 on Antigravity
1 month ago -

Gemini 3 Pro is quite slow and keeps making more errors than Claude Sonnet 4.5 on Antigravity. It was fine at the start, but the more I use it, the more malformed edits it produces; it isn't even able to edit a single file.

I don't know if this is a bug or whether it's just that bad. Is anyone else facing problems?

Edit: FYI, I'm experiencing this on both the Low and High versions on Fast. It is SO slow, taking up to a few minutes just to give an initial response.

🌐
Reddit
reddit.com › r/vibecoding › claude opus 4.5 or gemini 3? which one do you like more? and why?
r/vibecoding on Reddit: Claude Opus 4.5 or Gemini 3? Which one do you like more? And why?
3 weeks ago - I use it for planning, frontend and review. Implementation is so much better with opus. ... Opus 4.5 for everything and it's not even close. ... Opus all the way. Gemini thinks too much and I'm impatient ...
🌐
Vertu
vertu.com › best post › claude opus 4.5 vs gemini 3 pro: the week that changed ai forever
Claude Opus 4.5 vs Gemini 3 Pro: Pricing Revolution & AI Benchmark Battle | Google & Anthropic
November 25, 2025 - Gemini 3 Pro is described as the best model in the world for multimodal understanding, with significant improvements to reasoning across text, images, audio, and video. It achieved breakthrough scores including 37.5% on Humanity's Last Exam ...
🌐
Data Studios
datastudios.org › post › google-gemini-3-vs-claude-opus-4-5-vs-chatgpt-5-1-full-report-and-comparison-of-models-features
Google Gemini 3 vs. Claude Opus 4.5 vs. ChatGPT 5.1: Full Report and Comparison of Models, Features, Performance, Pricing, and more
1 month ago - However, Gemini 3’s overall logical ... while maintaining coherence in its answers. Claude Opus 4.5 (Anthropic) is designed from the ground up for deep, stable reasoning....
🌐
Bind AI IDE
blog.getbind.co › 2025 › 12 › 12 › gpt-5-2-vs-claude-opus-4-5-vs-gemini-3-0-pro-which-one-is-best-for-coding
GPT-5.2 Vs Claude Opus 4.5 Vs Gemini 3.0 Pro – Which One Is Best For Coding?
2 weeks ago - Claude Opus 4.5 excels in scenarios requiring deep code review and security analysis. Development teams report the model catches edge cases and potential vulnerabilities that other models miss, making it valuable for security-critical applications ...
🌐
LLM Stats
llm-stats.com › models › compare › claude-opus-4-5-20251101-vs-gemini-3-pro-preview
Claude Opus 4.5 vs Gemini 3 Pro
In-depth Claude Opus 4.5 vs Gemini 3 Pro comparison: Latest benchmarks, pricing, context window, performance metrics, and technical specifications in 2025.
🌐
Vertu
vertu.com › best post › gemini 3 pro vs claude opus 4.5: the ultimate 2025 ai model comparison
Gemini 3 Pro vs Claude Opus 4.5: Benchmarks, Coding, Multimodal, and Cost Comparison
3 weeks ago - Massive Context: 1 million token ... context. ... Coding Consistency: While strong, trails Claude Opus 4.5 on pure software engineering benchmarks like SWE-bench Verified (76.2% vs 80.9%)...
🌐
Reddit
reddit.com › r/claudeai › google’s new gemini 3 pro vision benchmarks officially recognize "claude opus 4.5" as the main competitor
r/ClaudeAI on Reddit: Google’s new Gemini 3 Pro Vision benchmarks officially recognize "Claude Opus 4.5" as the main competitor
3 weeks ago -

Google just released their full breakdown for the new Gemini 3 Pro Vision model. Interestingly, they have finally included Claude Opus 4.5 in the direct comparison, acknowledging it as the standard to beat.

The Data (from the chart):

  • Visual Reasoning: Opus 4.5 holds its own at 72.0% (MMMU Pro), sitting right between the GPT class and the new Gemini.

  • Video Understanding: While Gemini spikes in YouCook2 (222.7), Opus 4.5 (145.8) actually outperforms GPT-5.1 (132.4) in procedural video understanding.

  • The Takeaway: Google is clearly treating Opus 4.5 as a key benchmark alongside the GPT-5 series.

Note: Posted per request to discuss how Claude's vision capabilities stack up against the new Google architecture.

Source: Google Keyword

🔗: https://blog.google/technology/developers/gemini-3-pro-vision/

🌐
Glbgpt
glbgpt.com › hub › gemini-3-pro-vs-claude45
Gemini 3 Pro vs Claude 4.5: I Tested Both for Coding – Here’s the Surprising Winner
If you just want the short answer: for most real-world coding work today, Claude 4.5 is still the more reliable all‑around coding assistant, especially for complex reasoning, planning, and backend logic.
🌐
Vertu
vertu.com › ai tools › gemini 3 pro vision vs claude opus 4.5: complete benchmark comparison 2025
Gemini 3 Pro vs Claude Opus 4.5: Coding, Vision, Agentic Workflows & Benchmarks
3 weeks ago - Verdict: Claude Opus 4.5 is the undisputed coding champion across multiple benchmarks. Verdict: Gemini 3 Pro dominates vision-centric benchmarks where Claude wasn't designed to compete.
🌐
Glbgpt
glbgpt.com › hub › claude-opus-4-5-vs-gemini-3
Claude Opus 4.5 vs Gemini 3: Which AI Model Is Better in 2025? - Global GPT
Additionally, the model performs best when integrated within Google’s ecosystem, which may limit flexibility for some standalone environments. Claude Opus 4.5 pushes Anthropic’s reasoning capabilities forward with extended thinking, more ...