I know we all have ill feelings about Elon, but can we seriously not take a second to evaluate its performance objectively?
People say, "Well, it is still worse than o3." But we do not have access to o3 yet, it uses insane amounts of compute, and pre-training only finished a month ago; there is still a lot of potential to train the thinking models past o3. Then there is, "Well, it uses 10-15x more compute and is barely an improvement, so it is actually not impressive at all." This is untrue for three reasons.
Firstly, Grok 3 is definitely a big step up from Grok 2.
Secondly, scaling has always been very compute-intensive; there is a reason intelligence was not a winning evolutionary trait for a long time: it is expensive. If we could predictably get performance improvements like this for every 10-15x scaling of compute, we would have superintelligence in no time, especially considering that three scaling paradigms now stack on top of each other: pre-training, post-training/RL, and inference-time compute.
Thirdly, per the LLaMA paper, Meta saw 419 component failures in 54 days of training on ~16,000 H100s, and the small xAI team is training on 100-200 thousand H100s for much longer. Keeping such a run going is a real achievement.
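To get a feel for why this matters, here is a rough back-of-the-envelope extrapolation. The 419 failures, 54 days, and ~16,000 GPUs are the figures quoted above; the 150,000-GPU, 100-day xAI scenario is an illustrative assumption (their actual cluster size and run length are not public), and failures are assumed to scale linearly with GPU-days:

```python
# Extrapolate hardware failure counts from the LLaMA training run quoted above.
# Known figures (from the post): 419 failures over 54 days on ~16,000 H100s.
# The xAI scenario (150,000 GPUs, 100 days) is an illustrative assumption.
llama_failures = 419
llama_gpu_days = 54 * 16_000

failure_rate = llama_failures / llama_gpu_days   # failures per GPU-day

xai_gpu_days = 150_000 * 100
expected_failures = failure_rate * xai_gpu_days

print(f"Failure rate: {failure_rate:.2e} per GPU-day")
print(f"Expected failures at xAI scale: ~{expected_failures:.0f}")
```

Under those assumptions you would expect on the order of seven thousand component failures over the run, which is why fault-tolerant checkpointing and fast node replacement become the hard engineering problem at this scale.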
Then people are also like "Well, GPT-4.5 will easily destroy this any moment now". Maybe, but I would not be so sure. The base Grok 3 performance is honestly ludicrous and people are seriously downplaying it.
When Grok 3 is compared to other base models, it is way ahead of the pack. Remember that the difference between the old and new Claude 3.5 Sonnet was only 5 points on GPQA, and Grok 3 is 10 points ahead of Claude 3.5 Sonnet (New). Also consider that the practical ceiling of GPQA Diamond is arguably 80-85 percent, so a non-thinking model is getting close to saturation. Then there is Gemini 2.0 Pro: Google released it just recently and is seriously struggling to get any increase in frontier base-model performance, yet Grok 3 comes along and pushes the frontier ahead by many points.
I feel like part of why the insane performance of Grok 3 is not appreciated more is the existence of thinking models. Before thinking models, performance increases like this would have been absolutely astonishing; now everybody just shrugs. I also would not count out the Grok 3 thinking model getting ahead of o3, given its performance gains so far while still being early in development.
The Grok 3 mini base model is approximately on par with the other leading base models, and its reasoning version actually beats full Grok 3; more importantly, its performance is not far off o3. o3 is still a couple of months from release, and in the meantime we can definitely expect Grok 3 Reasoning to improve a fair bit, possibly even beating it.
Maybe I'm just overestimating its performance, but I remember when I tried the new Sonnet 3.5: even though a lot of its gains looked modest on paper, it really made a difference, and it was (and is) really good. Grok 3 is an even more substantial jump than that, and none of the other labs have created such a strong base model; Google especially is struggling with further base-model gains. I honestly think this is a pretty big achievement.
Elon is a piece of shit, but I thought this at least deserved some recognition; not all people on the xAI team are necessarily bad people, even if it would be better if they moved to other companies. Either way, this should push the other labs to release their frontier capabilities, so it's going to get really interesting!
I don't often use the A word, but this time it's very much deserved.
I'm not a shill... but I love the flowing conversation and research ability the model has. It's truly been worth the wait. I'm hooked and can't wait to see the future of the AI. Top that with a very decent number of free messages you can use to test out the viability of Premium
I previously left a critical review of Grok 2, it only felt fair to express my excitement for 3 after trying it.
No politics here, btw... just appreciation of a brilliant AI that also isn't censored up the wazoo
I tested both Grok 3 and Grok 3 THINK on coding, math, reasoning and common sense. Here are a few early observations:
- The non-reasoning model codes better than the thinking model
- The reasoning model is very fast, it looked slightly faster than Gemini 2.0 Flash Thinking, which in itself is quite fast
- Grok 3 THINK is very smart and approaches problems like DeepSeek R1 does, even uses "Wait, but..."
- G3-Think doesn't seem to budget its thinking: it sometimes thinks unnecessarily long on easy questions, like R1 does
- Grok 3 didn't seem significantly better than existing top models like Claude 3.5 Sonnet or o3-mini, though we'll finalize testing after API access
- G3-Think is not deterministic: it failed 2 out of 3 attempts at a hard coding problem (Exercism REST API challenge), with different results each time:
> Either it has a higher-than-normal temperature setting,
> the "daily improvements" Elon Musk mentioned introduce regressions,
> or it is load-balancing between different model versions
> Coding Challenge GitHub repo: https://github.com/exercism/python/blob/main/exercises/practice/rest-api
> Coding Challenge: https://exercism.org/tracks/python/exercises/rest-api
- For those who just want to see the entire test suite: https://youtu.be/hN9kkyOhRX0
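The temperature hypothesis in the list above is easy to illustrate. A minimal sketch of softmax sampling with temperature (the three logit values are made up for illustration; no real model is involved): raising the temperature flattens the next-token distribution, so repeated runs diverge more often.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; higher temperature flattens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits: one clearly-best token, two alternatives.
logits = [4.0, 2.0, 1.0]

low = softmax(logits, temperature=0.5)   # sharper: top token dominates
high = softmax(logits, temperature=1.5)  # flatter: alternatives get real mass

print(f"T=0.5 -> p(top) = {low[0]:.3f}")
print(f"T=1.5 -> p(top) = {high[0]:.3f}")
```

At T=0.5 the top token gets nearly all the probability mass, so runs are near-deterministic; at T=1.5 the alternatives become likely enough that a multi-step coding task will take a different path on each attempt.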
What are your initial impressions of Grok 3?
So, I know it's free now on X, but I haven't had time to try it yet, although I saw a script to connect Grok 3 to SillyTavern without X's prompt injection. Before trying, I wanted to see what the consensus is by now. Btw, my most-used model lately has been R1, so a comparison between the two would be appreciated.
When Grok 3 launched, Elon hyped it up, but didn't give us definitive proof it was better than the other models. Fast forward two months: xAI has opened up its API, so we can finally see how Grok truly performs.
Independent tests show Grok 3 is a strong competitor. It definitely belongs among the top models, but it's not the champion Musk suggested it would be. Plus, in these two months we've seen Gemini 2.5, Claude 3.7, and multiple new GPT models arrive.
But the real story behind Grok is how fast xAI executes:
In about six months, a company less than two years old built one of the world's most advanced data centers, equipped with 200,000 liquid-cooled Nvidia H100 GPUs.
Using this setup, they trained a model with roughly ten times more compute than any of their previous models.
So, while Grok 3 itself isn't groundbreaking in terms of performance, the speed at which xAI scaled up is astonishing. By combining engineering skill with a massive financial push, they've earned a spot alongside OpenAI, Google, and Anthropic.
See more details and thoughts in my full analysis here.
I'd really love your thoughts on this—I'm a new author, and your feedback would mean a lot!
So I've been using Claude 3.7 Sonnet on the paid Pro plan for a month. Decided to try Grok 3. For code generation it has been excellent; I'm almost ready to cancel Claude and go SuperGrok. It seems to make a lot fewer mistakes and go down fewer rabbit holes.
There's been a lot of debate about whether Grok 3 outperforms ChatGPT-4o. Some claim it has better contextual memory and real-time awareness, while others argue that ChatGPT-4o excels in reasoning, coding, and accuracy.
From my experience, ChatGPT-4o is reliable for structured tasks, while Grok 3 seems more creative but sometimes inconsistent. Have you tried Grok 3? How does it compare to ChatGPT-4o? Let's discuss!
- 100K Nvidia H100 GPUs, by far the most compute power behind any AI model. (A single H100 costs $30,000.)
- 200 million GPU-hours for training.
- Trained on the largest synthetic dataset.
- Uses test-time compute, like o1 and o3.
- Likely cost several billion dollars to train.
- It performed well on benchmarks, yet many users report that models over a year old still outperform it on various tasks.
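A quick sanity check on the numbers in that list. The GPU count, unit price, and GPU-hours are the post's own figures; the ~$2/hour H100 rental rate is an assumed ballpark, not a quoted fact. The arithmetic shows the "several billion dollars" is consistent with hardware capex rather than rental-equivalent training cost:

```python
# Rough sanity check on the figures quoted above (all from the post,
# except the ~$2/hr H100 rental rate, which is an assumption).
num_gpus = 100_000
gpu_unit_cost = 30_000       # USD per H100 (post's figure)
gpu_hours = 200_000_000      # training GPU-hours (post's figure)
rental_rate = 2.0            # USD per GPU-hour (assumed ballpark)

hardware_capex = num_gpus * gpu_unit_cost   # up-front hardware cost
training_opex = gpu_hours * rental_rate     # rental-equivalent training cost
hours_per_gpu = gpu_hours / num_gpus        # implied wall-clock per GPU

print(f"Hardware cost: ${hardware_capex / 1e9:.1f}B")
print(f"Rental-equivalent training cost: ${training_opex / 1e9:.1f}B")
print(f"Implied run length: {hours_per_gpu:.0f} h/GPU (~{hours_per_gpu / 24:.0f} days)")
```

So the hardware alone works out to $3B, while the training run itself would be closer to $0.4B at rental prices, and the quoted GPU-hours imply roughly an 83-day run across 100K GPUs.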
I was actually one of the few people optimistic about Grok 3 because the sheer amount of compute that went into it has implications for the future of LLMs as a whole.
DeepMind flopped with Gemini 2.0 Pro (they realized months ago that it couldn’t outperform Gemini 1.5, yet they released it anyway). Anthropic scrapped 3.5 Opus due to massive performance/cost issues in Fall 2024 and instead released a "new" 3.5 Sonnet, forcing them back to the drawing board. OpenAI kept delaying GPT-4.5/Orion.
Were the LLM critics right all along? Models like Gemini 2, Grok 3, and GPT-5 were supposed to generate tens of thousands of lines of clean, bug-free code and create highly creative, coherent 300+ page novels in one shot. Yet these SOTA models will still refuse to generate anything more than 5-10 pages in length, and when you try to force them, they lose coherency and begin to hallucinate.
No one is rushing to use these next-generation models. People forgot Gemini 2.0 even exists. It remains to be seen if GPT-5 can meet the hype.
But I am starting to suspect that GPT-5 might be yet another slight incremental upgrade over the likes of Gemini 2.0 Pro and Grok 3.
OpenAI has used such graphs before so it’s not the worst sin, but it does go to show the o3 family is still in a league of its own.
Elon is bragging about his AI. So is it any good at complex code?
They seem very bad at delivering. Very good at talking about delivering.
In fact, I don't see any posts or announcements about this demo from official xAI sources.
Edit: maybe I was not clear. I am not claiming it doesn't exist. I'm claiming the likelihood that it's very good is low, considering that no one from the company (including official sources) is hyping it up other than Elon.
https://x.com/lmarena_ai/status/1891706264800936307
Ranked #1 across all categories (even including coding and creative writing).
96% on AIME, 85% on GPQA.
Karpathy says it's equal to the $200/month o1 Pro:
I like that the model will attempt to solve the Riemann hypothesis when asked to, similar to DeepSeek-R1 but unlike many other models that give up instantly (o1-pro, Claude, Gemini 2.0 Flash Thinking) and simply say that it is a great unsolved problem. I had to stop it eventually because I felt a bit bad for it, but it showed courage and who knows, maybe one day... The impression overall I got here is that this is somewhere around o1-pro capability, and ahead of DeepSeek-R1
Summary. As far as a quick vibe check over ~2 hours this morning, Grok 3 + Thinking feels somewhere around the state of the art territory of OpenAI's strongest models (o1-pro, $200/month), and slightly better than DeepSeek-R1 and Gemini 2.0 Flash Thinking. Which is quite incredible considering that the team started from scratch ~1 year ago, this timescale to state of the art territory is unprecedented. Do also keep in mind the caveats - the models are stochastic and may give slightly different answers each time, and it is very early, so we'll have to wait for a lot more evaluations over a period of the next few days/weeks. The early LM arena results look quite encouraging indeed. For now, big congrats to the xAI team, they clearly have huge velocity and momentum and I am excited to add Grok 3 to my "LLM council" and hear what it thinks going forward.
https://x.com/karpathy/status/1891720635363254772
I wonder how Claude 4 compares.
Grok 3 just launched. Here are the benchmarks. Your thoughts?