Hi everyone! I made a simple Claude Code status line Ruby script that shows the model you're using, git status, project folder, total cost (calculated and configurable), and input/output tokens. It's nothing fancy, just a fun exercise to practice some Ruby scripting, and I figured I'd share.
Gist is here: https://gist.github.com/justindell/bdb5c5ecf2549d116813f0817673b5fb
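For anyone who'd rather not open the gist first, here's a rough sketch of the idea (this is not the gist itself; the JSON field names and the per-token rates below are placeholders, so check the actual script for the real structure):

#!/usr/bin/env ruby
# Rough sketch of a Claude Code status line script.
# Claude Code pipes a JSON payload to the command on stdin; the field
# names used here are assumptions, not necessarily what the gist uses.
require "json"

payload = JSON.parse($stdin.read) rescue {}

model  = payload.dig("model", "display_name") || "unknown model"
dir    = File.basename(payload.dig("workspace", "current_dir") || Dir.pwd)
branch = `git branch --show-current 2>/dev/null`.strip
dirty  = `git status --porcelain 2>/dev/null`.strip.empty? ? "" : "*"

# Hypothetical, configurable per-million-token rates for the cost estimate.
in_tok  = payload.dig("usage", "input_tokens").to_i
out_tok = payload.dig("usage", "output_tokens").to_i
cost    = (in_tok * 3.0 + out_tok * 15.0) / 1_000_000

parts = [model, dir]
parts << "#{branch}#{dirty}" unless branch.empty?
parts << format("$%.2f", cost)
parts << "#{in_tok} in / #{out_tok} out"
puts parts.join(" | ")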
This is an automatic post triggered within 15 minutes of an official Claude system status update.
Incident: Elevated errors on claude.ai
Check progress and whether the incident has been resolved here: https://status.claude.com/incidents/s5f75jhwjs6g
This is an automatic post triggered within 15 minutes of an official Claude system status update.
Incident: Elevated error rates on Sonnet 4.5
Check progress and whether the incident has been resolved here: https://status.claude.com/incidents/8d87293jq0mk
In version 1.0.72, Claude Code added a /statusline command that you can use to customize the information shown in the status line.
If you want to display the current directory's git branch, enter this at the command line:
/statusline show the current git branch
If you want to display the current directory, git branch, and the model in use all at once, enter this at the command line:
/statusline Show the current directory, git branch, and model in use.
After you enter this, Claude analyzes the instruction and then modifies the ~/.claude/settings.json file to add the corresponding command.
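For reference, the entry it adds tends to look roughly like this (the script path here is made up, and Claude may generate an inline shell command instead, so check your own settings.json for what was actually written):

{
  "statusLine": {
    "type": "command",
    "command": "ruby ~/.claude/statusline.rb"
  }
}

The configured command receives session details as JSON on stdin, and whatever it prints to stdout becomes the status line.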
I have obtained the prompt for the statusline command; see the bottom of the page.
This is an automatic post triggered within 15 minutes of an official Claude system status update.
Incident: Elevated errors on claude.ai
Check progress and whether the incident has been resolved here: https://status.claude.com/incidents/qj71q3gqvvlk
I customized my Claude Code status bar to include:
Branch and model info
Token cost and duration
Lines added/removed
Random quotes ✨
Small touch, but it makes coding so much more enjoyable!
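If anyone wants to build something similar by hand, here's a rough Ruby sketch of just the lines-added/removed and random-quote segments (this is one way to get those numbers, via git on the working tree, not necessarily how my setup or the plugin mentioned in the update below does it):

# Rough sketch: two extra status line segments.
# Diff stats come from the working tree via `git diff --shortstat`;
# a session-level count from Claude Code's payload would also work.
QUOTES = [
  "Make it work, make it right, make it fast.",
  "Simplicity is prerequisite for reliability."
].freeze

shortstat = `git diff --shortstat 2>/dev/null`
added     = shortstat[/(\d+) insertion/, 1].to_i
removed   = shortstat[/(\d+) deletion/, 1].to_i

puts "+#{added}/-#{removed} | #{QUOTES.sample}"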
Update:
I built a Claude Code plugin that adds a fully customizable statusline! Please use this!
/plugin marketplace add setouchi-h/cc-marketplace
# Install statusline plugin
/plugin install statusline@cc-marketplace
# First-time installation
/statusline:install-statusline
# Force reinstall (overwrites existing script)
/statusline:install-statusline --force
https://github.com/setouchi-h/cc-marketplace
Yes, Opus is top-tier for coding, but what really makes it exceptional is the tooling, system prompts, context management, and agentic patterns used by Anthropic in Claude Code. And when I say Claude Code, I don’t mean the models themselves—I mean the agentic workflow. This tool was clearly designed by coding experts, for coding. Most of the time it knows exactly which tool to use, where to look in the codebase, which approach to pick, how to structure folders, and how to test properly.
That said, the best coding model right now is GPT-5.2-Codex. When you combine it with Claude Code via a proxy, the result is insanely good. I tested Codex on the CLI today and was genuinely blown away. I don’t know how long it’s been performing at this level, but I’m here to say it clearly: Claude Code is no longer the king.
This is an automatic post triggered within 15 minutes of an official Claude system status update.
Incident: Elevated errors on Sonnet 4.5
Check progress and whether the incident has been resolved here: https://status.claude.com/incidents/55p5hxp0wf3h
This is an automatic post triggered within 15 minutes of an official Claude system status update.
Incident: Elevated errors across many models
Check progress and whether the incident has been resolved here: https://status.claude.com/incidents/9g6qpr72ttbr
This is an automatic post triggered within 15 minutes of an official Claude system status update.
Incident: Elevated error rates to Sonnet 4.5 on Claude Code
Check progress and whether the incident has been resolved here: https://status.claude.com/incidents/4bvvh93qjl25
This is an automatic post triggered within 15 minutes of an official Claude system status update.
Incident: Elevated errors on Claude.ai
Check progress and whether the incident has been resolved here: https://status.claude.com/incidents/h46s65s799ch
I've been using Claude Code for two months so far and have never hit the limit. But yesterday it stopped working and gave me a cooldown of 4 days. If its limit resets every 5 hours, why a 4-day cooldown? I tried usage-based pricing, and it charged $10 in 10 minutes. Is there something wrong with the new update of Claude Code?
I've been using Claude Code for the past two weekends and I'm absolutely blown away by what it can do! Over those two weekends I've crushed through 230M tokens (about $140 worth of API credit) building some web applications. Personally, having tried Replit, Bolt, Lovable, Cursor, and Windsurf, I enjoy using Claude Code a whole lot more.
Wanted to see how others feel about it. What do you like or dislike?
This is an automatic post triggered within 15 minutes of an official Claude system status update.
Incident: Elevated errors for requests to Claude 4 Sonnet
Check progress and whether the incident has been resolved here: https://status.claude.com/incidents/zdxjv49ydg0f
I don't see many people talking about it.
I recently got the Max plan (just to test things out). Omfg, this thing feels like a true agent system, and it's totally changing the way I approach coding and really any digital work.
I gave it a gnarly project: a BI workflow/data analytics project that I had been working on. It read through my spec, understood the data schema, ran more things by itself to understand the data further, and produced Python code that satisfied my spec. What used to take me a long time (copy-pasting data into a web UI, asking the AI to understand the data and write the SQL I wanted), it now just does all by itself.
I hooked up the Notion MCP server and gave it a database of projects I want it to work on (I've written some high-level specs), and it automatically went through all of them, knocked them out, and updated the project statuses.
It's unreal. I feel like this is a true agentic program that can really run on its own and do things well.
How come no one is talking about this?!
I am a senior dev of 10 years and have been using Claude Code since its beta release (started in December, IIRC).
I have seen countless posts on here of people saying that the code they are getting is absolute garbage, having to rewrite everything, 20+ corrections, etc.
I have not had this happen once, and I am curious what the difference is between what I am doing and what they are doing. To give an example, I recently finished two massive projects with Claude Code in days that would previously have taken months:
A C# microservice API using minimal APIs to handle a core document system at my company: CRUD as well as many workflow-oriented endpoints with full security and ACL implications. Worked like a charm.
Refactoring an existing C# API (controller/MVC based) to remove the MediatR package and use direct dependency injection, while maintaining interfaces between everything for ease of testing. Again, flawless performance.
These are just two examples of the many other projects I'm working on at the moment where it is also performing exceptionally.
I genuinely wonder what others are doing that I am not seeing, because I want to be able to help, but I don't know what the problem is.
Thanks in advance for helping me understand!
Edit: Gonna summarize some of the things I'm reading here (on my own! Not with AI):
- Context is king!
- Garbage in, Garbage out
- If you don't know how to communicate, you aren't going to get good results.
- Statistical Bias, people who complain are louder than those who are having a good time.
- Fewer examples online == more often receiving bad code.
There has been a lot of buzz that Claude Code is now "much worse" than "a few days ago". I subscribed to the 20x plan last Friday and have been finding amazing success with it so far, with about $750 worth of API calls over 4 days.
The Opus 50% warning hits around $60 in token usage, but I have not been rate limited yet.
Opus output has been very good so far, and I'm very happy with it. All the talk about how "it used to be so much better" is, at least for me, hard to see.
Am I crazy?
Hey Folks,
Just wanted to quickly report that Claude Code is running perfectly again for me! Had some issues with it over the past few days/weeks, but after updating normally today, everything is working as it should.
Anthropic announced today that they've fixed various bugs, and I can confirm - it's definitely noticeable. The performance is back and commands are executing correctly.
Has anyone else had similar experiences?
Did you also have problems with Claude Code recently?
Is it working better for you after the update?
What specific improvements have you noticed?
Curious to hear about your experiences!