Is Claude Code Sonnet 4.5 With 1M Context Actually Better Than 200k?
Context Length limits finally SOLVED!
The context window simply needs to be larger with Opus as the default
Has anyone noticed how Sonnet 4.5's context window is smaller in the desktop app?
I'm on the 20x Max plan. I get that Opus will burn through tokens faster, and Anthropic acknowledged this by raising the total usage allowance to something roughly equivalent to what you'd get with Haiku (whether that's actually true remains to be seen). However, they didn't raise the 200k context window limit (I don't have access to the 1M window).
I just used my first prompt (a pretty standard one for me) to help track down an issue that was throwing an error on my front-end, and after its response (which wasn't that helpful), I'm already down to 9% context remaining before auto-compacting.
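For a rough sense of scale: on a 200k-token window, 9% remaining means roughly 182k tokens are already occupied by the system prompt, tool definitions, file contents, and the conversation itself. The exact accounting is internal to Claude Code, so the numbers below are only illustrative:

```python
# Illustrative arithmetic only -- the real accounting is internal to Claude Code.
CONTEXT_WINDOW = 200_000   # advertised window (tokens)
REMAINING_PCT = 0.09       # "9% remaining before auto-compact"

tokens_remaining = int(CONTEXT_WINDOW * REMAINING_PCT)   # ~18,000 tokens left
tokens_used = CONTEXT_WINDOW - tokens_remaining          # ~182,000 tokens consumed

print(f"Used: {tokens_used:,} tokens, remaining: {tokens_remaining:,} tokens")
# On a 1M window the same usage would leave far more headroom.
print(f"Headroom on a 1M window: {1_000_000 - tokens_used:,} tokens")
```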
If Anthropic is going to acknowledge that token consumption will be higher with Opus and scale some of the limits up accordingly, they really should increase the context limit as well.
With the introduction of Opus 4.5, Anthropic just updated the Claude Apps (Web, Desktop, Mobile):
For Claude app users, long conversations no longer hit a wall—Claude automatically summarizes earlier context as needed, so you can keep the chat going.
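If you're curious what "automatically summarizes earlier context" might look like mechanically, here's a minimal sketch of a rolling-context strategy. Anthropic hasn't published their exact approach, so the summarize() helper and the 4-characters-per-token estimate below are assumptions purely for illustration:

```python
# Minimal sketch of a rolling context window with summarization.
# NOTE: summarize() and the 4-chars-per-token estimate are assumptions for
# illustration; Anthropic's actual mechanism is not public.

TOKEN_BUDGET = 180_000   # leave headroom below the 200k window
KEEP_RECENT = 10         # always keep the most recent turns verbatim


def estimate_tokens(text: str) -> int:
    """Very rough heuristic: ~4 characters per token."""
    return len(text) // 4


def summarize(messages: list[dict]) -> str:
    """Placeholder: in practice this would be another model call."""
    return "Summary of earlier conversation: " + " / ".join(
        m["content"][:80] for m in messages
    )


def compact(history: list[dict]) -> list[dict]:
    """Fold older turns into a single summary message once over budget."""
    total = sum(estimate_tokens(m["content"]) for m in history)
    if total <= TOKEN_BUDGET or len(history) <= KEEP_RECENT:
        return history

    older, recent = history[:-KEEP_RECENT], history[-KEEP_RECENT:]
    summary_msg = {"role": "user", "content": summarize(older)}
    return [summary_msg] + recent
```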
This is amazing. The lack of it was my only gripe with Claude (besides the usage limits), and it's the reason I kept using ChatGPT (for its rolling context window).
Anyone as happy as I am?