🌐
AIMLAPI
aimlapi.com › comparisons › gemini-1-5-vs-chatgpt-4o
Gemini 1.5 Pro VS ChatGPT-4o
These tests are the hardest we've ... always ready to solve the tasks you provide. We'd recommend ChatGPT-4o for language comprehension, and Gemini 1.5 for coding....
🌐
Beebom
beebom.com › chatgpt-4o-vs-gemini-1-5-pro
ChatGPT 4o vs Gemini 1.5 Pro: It's Not Even Close | Beebom
October 15, 2025 - We ran the classic reasoning test on ChatGPT 4o and Gemini 1.5 Pro to test their intelligence. OpenAI’s ChatGPT 4o aced it while the improved Gemini 1.5 Pro model struggled to understand the trick question.
🌐
AI Pro
ai-pro.org › learn-ai › articles › a-battle-of-cutting-edge-ai-technologies-gemini-1-5-pro-vs-chatgpt-4o
A Battle Between Gemini 1.5 Pro vs ChatGPT 4o
November 18, 2024 - Gemini 1.5 Pro showcases significant advancements in context management and multimodal functionality, positioning it as a compelling option in the AI landscape. On the other hand, ChatGPT 4o—where 'o' stands for 'omni'—has gained widespread ...
🌐
Reddit
reddit.com › r/chatgpt › gpt-4o vs gemini 1.5 pro: ultimate head to head comparison
r/ChatGPT on Reddit: GPT-4o vs Gemini 1.5 Pro: Ultimate Head to Head Comparison
January 25, 2024 - I heard good news about the latest May 14 Gemini 1.5 Pro update that just landed on the LMSYS arena. I went to test it hoping it would be decent and up there with GPT-4o or Claude Opus… it isn't, it sucks… once again. ... Google might be in real trouble here. Their search engine is so miserable to use, and every website is just straight hostile to the user at this point, that I find myself relying on ChatGPT for almost everything I used to Google.
🌐
Reddit
reddit.com › r/chatgpt › chatgpt 4o vs gemini 1.5 pro: it's not even close
r/ChatGPT on Reddit: ChatGPT 4o vs Gemini 1.5 Pro: It's Not Even Close
January 20, 2024 - However, it seems a lot of users report that 4o falls short in complex tasks compared to 4 ... I really think this is 5, but they didn't want to market it as 5 for, well, reasons. ... This test is essentially only evaluating logic and problem-solving ...
🌐
CNET
cnet.com › tech › services & software › gpt-4o and gemini 1.5 pro: how the new ai models compare
GPT-4o and Gemini 1.5 Pro: How the New AI Models Compare - CNET
May 25, 2024 - GPT-4o will be available in 50 languages. Gemini 1.5 Pro is available in 35. But given Google's 18-year history with Google Translate, it potentially has a lot more data to train its models in multilingual capabilities. One last similarity: Both models recently introduced functionality to become more conversational...
🌐
Reddit
reddit.com › r/artificial › chatgpt 4o vs gemini 1.5 pro: it's not even close
r/artificial on Reddit: ChatGPT 4o vs Gemini 1.5 Pro: It's Not Even Close
November 14, 2023 - Factual Language: It excels at providing summaries of factual topics and following your instructions carefully. Context Understanding: It can handle complex instructions and remember information over longer conversations. ... Agree. Having the large context window sets Gemini apart and makes it more useful than 4o. ... The logic and reasoning of Gemini is absolutely terrible. Its only good thing is its context length, which is great, but it's basically GPT-3.5 with infinite context.
🌐
Live Chat AI
livechatai.com › llm-comparison › gemini-1-5-pro-vs-gpt-4o
Gemini 1.5 Pro vs ChatGPT-4o - Top Differences & Comparison
After using both, here's the takeaway: Gemini 1.5 Pro excels with its one-million-token context window and Mixture-of-Experts (MoE) architecture, perfect for large-scale data tasks like video, audio, and extensive code.
🌐
PromptLayer
blog.promptlayer.com › gemini-1-5-pro-vs-chatgpt-4o-choosing-the-right-model
Gemini 1.5 Pro vs ChatGPT 4o: Which Model is Best?
November 8, 2024 - In this automatic speech recognition task, Gemini 1.5 Pro achieves a lower word error rate, indicating better performance than GPT-4o. ... Want to compare models yourself? PromptLayer lets you compare models side-by-side in an interactive view, ...
🌐
Medium
medium.com › @lars.chr.wiik › gpt-4o-vs-gpt-4-vs-gemini-1-5-performance-analysis-6bd207a2c580
GPT-4o vs. GPT-4 vs. Gemini 1.5 ⭐ — Performance Analysis | by Lars Wiik | Medium
June 7, 2024 - As we can derive from the graph, GPT-4o has the lowest error rate of all the models with only 2 mistakes. We can also see that Palm 2 Unicorn, GPT-4, and Gemini 1.5 were close to GPT-4o — showcasing their strong performance.
🌐
Tom's Guide
tomsguide.com › ai
I tested ChatGPT-4.5 vs. Gemini Pro 2.5 with 5 prompts — and the results surprised me | Tom's Guide
April 11, 2025
🌐
G2
learn.g2.com › gemini-vs-chatgpt
I Tested Gemini vs. ChatGPT and Found the Clear Winner
July 18, 2025 - ChatGPT is great for writing, brainstorming, and coding, while Gemini excels in real-time research, multimodal processing (text, images, video), and handling longer conversations.
🌐
AI Pro
ai-pro.org › learn-ai › articles › pitting-giants-chatgpt-4o-vs-gemini-advanced
Pitting ChatGPT 4o vs Gemini Advanced
November 20, 2024 - Subscribers also benefit from 2 TB of Google One storage and early access to new features within the Gemini ecosystem. With ChatGPT Plus, you'll experience enhanced interactions thanks to its exclusive features.
🌐
OpenAI Developer Community
community.openai.com › chatgpt
Comparison between GPT 4o and Gemini 1.5 pro - ChatGPT - OpenAI Developer Community
June 24, 2024 - Comparison between GPT 4o and Gemini 1.5 Pro: which is the best and most cost-efficient? Are there any models by Google to compare with OpenAI's?
🌐
Medium
medium.com › @neltac33 › gemini-1-5-pro-vs-gpt-4o-a-head-to-head-showdown-29c4cc837e7b
Gemini 1.5 Pro vs. GPT-4o: A Head-to-Head Showdown | by Ahmed Bahaa Eldin | Medium
May 17, 2024 - Gemini 1.5 Pro is known for its exceptional performance in text generation, summarization, and translation tasks. It also features advanced few-shot learning capabilities, allowing it to adapt quickly to new tasks with minimal training data.
🌐
TechRadar
techradar.com › gemini
Google Gemini 2.5 Flash promises to be your favorite AI chatbot, but how does it compare to ChatGPT 4o? | TechRadar
May 22, 2025 - The preferences are more about the peripherals and specific features. GPT-4o has by far the more powerful image generator, but it's also a lot slower. If speed matters more, though, go with Gemini.
🌐
Medium
medium.com › @stephane.giron › gemini-1-5-pro-and-flash-reasoning-capacities-versus-chatgpt-4o-and-openai-o1-1035a9ffa580
Gemini 1.5 Pro and Flash reasoning capacities versus ChatGPT 4o and OpenAI o1 | by Stéphane Giron | Medium
November 14, 2024 - In the video both OpenAI models give a wrong answer, while Gemini 1.5 Pro and Flash answer correctly. During my tests I asked the question several times and found that Gemini could also answer "c. Dolphin" on one attempt. This was a bit surprising; I ran other tests for OpenAI, and a direct ChatGPT 4o API call returned the correct answer with temperature 0.5.
🌐
Reddit
reddit.com › r/technology › chatgpt 4o vs gemini 1.5 pro: it's not even close
r/technology on Reddit: ChatGPT 4o vs Gemini 1.5 Pro: It's Not Even Close
October 12, 2022 - Earlier, when ChatGPT was first released, we all knew that Google's competitor to ChatGPT was coming. There was tons of discussion that sooner or later it would arrive and that Google's product would be far superior, yet here we are: Google is struggling to even match GPT, while GPT-4 Omni has yet again completely changed the AI game. Gemini is still far behind, and the gap isn't closing anytime soon.
🌐
Reddit
reddit.com › r/openai › i compared gemini 2.5 pro preview 03-25 with gpt 4o
r/OpenAI on Reddit: I compared Gemini 2.5 Pro Preview 03-25 with GPT 4o
December 17, 2024 -

I wanted to create some unique codes for work, but they had to be somewhat meaningful, like combinations of city names, etc. I figured 2,000 codes would require a good context window, so I gave the task to Gemini 2.5 Pro in Google AI Studio. I asked it to create 2,000 codes, but it only produced 1,410; after another prompt it said it was generating 590 more, but actually created around 700 more.

I gave the same prompts in the same sequence to GPT-4o on a Plus plan, and it gave me a CSV with 2,000 codes. With Gemini, by the way, I had to download the two text files.

The best part: the codes from GPT didn't have any duplicates, but the Gemini output had around 5-6 repeated ones.

If curious:
Gemini token count: 21,299 / 1,048,576

Have you guys also had a similar experience that showed GPT to be better?
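
For anyone wanting to reproduce the duplicate check, here is a minimal sketch; the filenames are hypothetical stand-ins for the exported files, and it assumes one code per line:

```python
# Minimal sketch: count the generated codes and flag duplicates.
# The filenames are hypothetical stand-ins for the exports described above,
# assuming one code per line.
from collections import Counter
from pathlib import Path

def check_codes(path: str) -> None:
    codes = [line.strip() for line in Path(path).read_text().splitlines() if line.strip()]
    counts = Counter(codes)
    duplicates = {code: n for code, n in counts.items() if n > 1}
    print(f"{path}: {len(codes)} codes, {len(counts)} unique, {len(duplicates)} duplicated")
    for code, n in duplicates.items():
        print(f"  repeated {n}x: {code}")

for filename in ("gpt4o_codes.csv", "gemini_codes.txt"):
    check_codes(filename)
```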

🌐
Reddit
reddit.com › r/geminiai › gemini 2.5 vs chatgpt o4
r/GeminiAI on Reddit: Gemini 2.5 vs Chatgpt o4
April 2, 2025 -

Gemini 2.5 vs ChatGPT 4o – Tested on a Real Renovation Project (with Results)

I recently compared Gemini 2.5 Pro and ChatGPT 4o on a real apartment renovation (~75 m²). I gave both models the same project scope (FFU, the tender documentation) for a full interior renovation: flooring, kitchen, bathroom, electrical, demolition, waste handling, and so on.

The renovation is already completed — so I had a final cost to compare against.

🟣 ChatGPT 4o:

Instantly read and interpreted the full FFU

Delivered a structured line-by-line estimate using construction pricing standards

Required no extra prompting to include things like demolition, site management, waste and post-cleanup

Estimated within ~3% of the final project cost

Felt like using a trained quantity surveyor

🔵 Gemini 2.5 Pro:

Initially responded with an estimate of 44,625 SEK for the entire renovation

After further clarification and explanations (things ChatGPT figured out without help), Gemini revised its estimate to a range of 400,000–1,000,000 SEK

The first estimate was off by over 90%

The revised range was more realistic but too wide to be useful for budgeting or offer planning

Struggled to identify FFU context or apply industry norms without significant guidance

🎯 Conclusion

Both models improved when fed more detail — but only one handled the real-life FFU right from the start. ChatGPT 4o delivered an actionable estimate nearly identical to what the renovation actually cost.

Gemini was responsive and polite, but just not built for actual estimating.

Curious if others working in construction, architecture or property dev have run similar tests? Would love to hear your results.
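
To put those percentages in context, here is a minimal sketch of how the error figures can be recomputed; the final cost used below is a hypothetical placeholder, since the post does not state the actual number:

```python
# Minimal sketch: relative error of a cost estimate against the known final cost.
# FINAL_COST_SEK is a hypothetical placeholder; the post does not give the real figure.
FINAL_COST_SEK = 500_000

def relative_error(estimate: float, actual: float) -> float:
    # Fraction by which the estimate misses the actual cost.
    return abs(estimate - actual) / actual

# Gemini's first pass of 44,625 SEK is off by more than 90% for any final cost above ~450,000 SEK.
print(f"Gemini first estimate: {relative_error(44_625, FINAL_COST_SEK):.0%} off")

# "Within ~3%" means the estimate falls in a narrow band around the final cost.
print(f"3% band: {FINAL_COST_SEK * 0.97:,.0f} to {FINAL_COST_SEK * 1.03:,.0f} SEK")
```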

EDIT:

Some have asked if this was just a lucky guess by ChatGPT – totally fair question.

But in this case, it's not just a language model making guesses from the internet. I provided both ChatGPT and Gemini with a PDF export of AMA Hus 24 / Wikells – a professional Swedish construction pricing system used by contractors. Think of it as a trade-specific estimation catalog (with labor, materials, overhead, etc.).

ChatGPT used that source directly to break down the scope and price it professionally. Gemini had access to the exact same file – but didn’t apply it in the same way.

A real test of reasoning with pro tools.