GitHub
gist.github.com › agokrani › 919b536246dd272a55157c21d46eda14
Claude Code System Prompt · GitHub
@pmarreck yes, there are lots of prompts that are generated dynamically. However, there is a part of the prompt that is static, and the system_reminder tags that get injected everywhere are hardcoded.
Reddit
reddit.com › r/claudeai › understanding claude code's 3 system prompt methods (output styles, --append-system-prompt, --system-prompt)
r/ClaudeAI on Reddit: Understanding Claude Code's 3 system prompt methods (Output Styles, --append-system-prompt, --system-prompt)
October 14, 2025

Uhh, hello there. Not sure I've made a new post that wasn't a comment on Reddit in over a decade, but I've been using Claude Code for a while now and have learned a lot of things, mostly through painful trial and error:

  • Days digging through docs

  • Deep research with and without AI assistance

  • Reading decompiled Claude Code source

  • Learning a LOT about how LLMs function, especially coding agents like CC, Codex, Gemini, Aider, Cursor, etc.

Anyway, I ramble; I'll try to keep on track.

What This Post Covers

A lot of people don't know what it really means to use --append-system-prompt or to use output styles. Here's what I'm going to break down:

  • Exactly what is in the Claude Code system prompt for v2.0.14

  • What output styles replace in the system prompt

  • Where the instructions from --append-system-prompt go in your system prompt

  • What the new --system-prompt flag does and how I discovered it

  • Some of the techniques I find success with

This post is written by me and lightly edited (heavily re-organized) by Claude, otherwise I will ramble forever from topic to topic and make forever run-on sentences with an unholy number of commas because I have ADHD and that's how my stream of consciousness works. I will append an LLM-generated TL;DR to the bottom or top or somewhere for those of you who are already fed up with me.

How I Got This Information

The following system prompts were acquired using my fork of the cchistory repository:

  • Original repo: https://github.com/badlogic/cchistory (broken since October 5th, stopped at v2.0.5)

  • Original diff site: https://cchistory.mariozechner.at/

  • My working fork: https://github.com/AnExiledDev/cchistory/commit/1466439fa420aed407255a54fef4038f8f80ec71

    • ⚠️ Grab from main at your own peril; I'm planning a rewrite so it isn't just a monolithic index.js, and then I'll write full unit tests

    • If you're using my fork, you need to set the output style in settings.json (in .claude) to test output styles, possibly using the custom binary flag as well (a sketch follows this list)
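
For what it's worth, here's a minimal sketch of that settings change. The "outputStyle" key follows the Claude Code settings format as I understand it, and "Test Output Style" is just the style name used later in this post, so swap in your own:

    # Point project-level settings at an output style (hypothetical example).
    # Note: this overwrites any existing .claude/settings.json.
    mkdir -p .claude
    cat > .claude/settings.json <<'EOF'
    {
      "outputStyle": "Test Output Style"
    }
    EOF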

The Claude Code System Prompt Breakdown

Let's start with the Claude Code System Prompt. I've used cchistory to generate the system prompt here: https://gist.github.com/AnExiledDev/cdef0dd5f216d5eb50fca12256a91b4d

There's a lot of BS in there, and most of it is untouchable unless you use the Claude Agent SDK, but that's a rant for another time.

Output Styles: What Changes

I generated three versions to show you exactly what's happening:

  1. With an output style: https://gist.github.com/AnExiledDev/b51fa3c215ee8867368fdae02eb89a04

  2. With --append-system-prompt: https://gist.github.com/AnExiledDev/86e6895336348bfdeebe4ba50bce6470

  3. Side-by-side diff: https://www.diffchecker.com/LJSYvHI2/

Key differences when you use an output style:

  • Line 18 changes to mention the output style below, specifically calling out to "help users according to your 'Output Style'" and "how you should respond to user queries."

  • The "## Tone and style" header is removed entirely. These instructions are pretty light. HOWEVER, there are some important things you will want to preserve if you continue to use Claude Code for development:

    • Sections relating to erroneous file creation

    • Emojis callout

    • Objectivity

  • The "## Doing tasks" header is removed as well. This section is largely useless and repetitive. Although do not forget to include similar details in your output style to keep it aligned to the task, however literally anything you write will be superior, if I'm being honest. Anthropic needs to do better here...

  • The "## Output Style: Test Output Style" header exists now! The "Test Output Style" is the name of my output style I used to generate this. What is below the header is exactly as I have in my test output style.

Important placement note: You might notice the output style sits directly above the tool definitions. Since the tool definitions are a disorganized, poorly written, bloated mess, this is actually closer to the start of the system prompt than the end.

Why this matters:

  • LLMs maintain context best at the start and end of a large prompt

  • Since these instructions are relatively close to the start, adherence is quite solid in my experience, even with contexts larger than 180k tokens

  • However, I've found instruction adherence begins to degrade beyond 120k tokens of context, sometimes as early as 80k tokens

--append-system-prompt: Where It Goes

Now, if you look at the --append-system-prompt example, we see that once again, this is appended DIRECTLY above the tool definitions.

If you use both:

  • Output style is placed above the appended system prompt

Pro tip: In my VS Code devcontainer, I have it configured to create a claude command alias that appends a specific file to the system prompt on launch. (Simplified the script so you can use it too: https://gist.github.com/AnExiledDev/ea1ac2b744737dcf008f581033935b23) A minimal sketch of the idea is below.
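
The gist has the full script, but the core of it is just a wrapper around the documented --append-system-prompt flag. This is a simplified, hypothetical sketch; the .claude/append-prompt.md path is made up, so point it at whatever file holds your instructions:

    # Wrap `claude` so a project-local instructions file is appended to the
    # system prompt on every launch (hypothetical path and filename).
    claude() {
      local extra="$PWD/.claude/append-prompt.md"
      if [ -f "$extra" ]; then
        command claude --append-system-prompt "$(cat "$extra")" "$@"
      else
        command claude "$@"
      fi
    }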

Discovering the --system-prompt Flag (v2.0.14)

Now, the main reason I've chosen today to finally share this information is that v2.0.14's changelog mentions they documented a new flag called "--system-prompt." Maybe they documented it internally, or I don't know the magic word, but as far as I can tell, no they fucking did not.

Where I looked and came up empty:

  • claude --help at the time of writing this

  • Their docs where other flags are documented

  • Their documentation AI said it doesn't exist

  • Couldn't find any info on it anywhere

So I forked cchistory again. My old fork had done something similar but in a really stupid way, so I just started over, fixed the critical issues, then set it up to use my existing Claude Code installation instead of downloading a fresh one, which satisfied my own feature request from a few months ago (made before I decided I'd just do it myself). This is how I was able to test and document the --system-prompt flag.

What --system-prompt actually does:

The --system-prompt flag finally added SOME of what I've been bitching about for a while. This flag replaces the entire system prompt except:

  • The bloated tool definitions (I get why, but I BEG you Anthropic, let me rewrite them myself, or let me disable the ones I can just code myself; give me 6 warning prompts, I don't care, your tool definitions suck and you should feel bad. :( )

  • A single line: "You are a Claude agent, built on Anthropic's Claude Agent SDK."

Example system prompt using "--system-prompt '[PINEAPPLE]'": https://gist.github.com/AnExiledDev/e85ff48952c1e0b4e2fe73fbd560029c
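
In other words, usage looks roughly like this; my-system-prompt.md is a hypothetical file containing whatever you want the system prompt to be, and everything in it replaces the stock prompt except the tool definitions and that one "Claude agent" line:

    # Replace the entire (non-tool) system prompt with your own file.
    claude --system-prompt "$(cat my-system-prompt.md)"

    # Or inline, like the [PINEAPPLE] example above:
    claude --system-prompt '[PINEAPPLE]'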

Key Takeaways

Claude Code's system prompt is finally mostly customizable (it would be fully customizable if it weren't for the bloated tool definitions, but I digress)!

The good news:

  • With Anthropic's exceptional instruction hierarchy training and adherence, anything added to the system prompt will actually MOSTLY be followed

  • You have way more control now

The catch:

  • The real secret to getting the most out of your LLM is walking that thin line of just enough context for the task—not too much, not too little

  • If you're throwing 10,000 tokens into the system prompt on top of these insane tool definitions (11,438 tokens for JUST tools!!! WTF Anthropic?!), you're going to exacerbate context rot issues

Bonus resource:

  • Anthropic token estimator (it actually uses Anthropic's token-counting API; see https://docs.claude.com/en/api/messages-count-tokens): https://claude-tokenizer.vercel.app/ (a quick curl sketch follows)
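
If you'd rather hit the API yourself than use the site, a rough sketch against the count-tokens endpoint linked above looks like this (the model name is a placeholder; check the docs for current models and the exact request shape):

    # Count tokens for a prompt without sending a real message.
    # Assumes ANTHROPIC_API_KEY is set in your environment.
    curl -s https://api.anthropic.com/v1/messages/count_tokens \
      -H "x-api-key: $ANTHROPIC_API_KEY" \
      -H "anthropic-version: 2023-06-01" \
      -H "content-type: application/json" \
      -d '{
        "model": "claude-sonnet-4-5",
        "system": "You are a terse senior engineer.",
        "messages": [{"role": "user", "content": "Hi"}]
      }'
    # The response is JSON along the lines of {"input_tokens": 42}.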

TL;DR (Generated by Claude Code, edited by me)

Claude Code v2.0.14 has three ways to customize system prompts, but they're poorly documented. I reverse-engineered them using a fork of cchistory:

  1. Output Styles: Replaces the "Tone and style" and "Doing tasks" sections. Gets placed near the start of the prompt, above tool definitions, for better adherence. Use this for changing how Claude operates and responds.

  2. --append-system-prompt: Adds your instructions right above the tool definitions. Stacks with output styles (output style goes first). Good for adding specific behaviors without replacing existing instructions.

  3. --system-prompt (NEW in v2.0.14): Replaces the ENTIRE system prompt except tool definitions and one line about being a Claude agent. This is the nuclear option - gives you almost full control but you're responsible for everything.

All three inject instructions above the tool definitions (11,438 tokens of bloat). Key insight: LLMs maintain context best at the start and end of prompts, and since tools are so bloated, your custom instructions end up closer to the start than you'd think, which actually helps adherence.

Be careful with token count though - context rot kicks in around 80-120k tokens (my note: technically as early as 8k, but it starts to become more of a noticeable issue at this point) even though the window is larger. Don't throw 10k tokens into your system prompt on top of the existing bloat or you'll make things worse.

I've documented all three approaches with examples and diffs in the post above. Check the gists for actual system prompt outputs so you can see exactly what changes.

[Title Disclaimer: Technically there are other methods, but they don't apply to Claude Code interactive mode.]

If you have any questions, feel free to comment; if you're shy, I'm more than happy to help in DMs, but my replies may be slow, apologies.

Discussions

Why are we paying for system prompt?
I think you should read more about how Claude works, and context for LLMs in general. The base premise of your complaint reads like you don’t understand how the subscription works, let alone context windows. There are tools out there to edit the system prompt, but I suspect that you’d just be mad about performance after using them.
r/ClaudeCode
October 28, 2025
Claude full system prompts with all tools is now ~25k tokens. In API costs it would literally cost $0.1 to say "Hi" to Claude.
I think the system prompt is essentially prompt-cached, which drastically reduces their costs.
r/ClaudeAI
February 15, 2025
The Claude Code System Prompt Leaked : r/ArtificialInteligence
Claude Code- Ultra Efficient Audit Prompt : r/ClaudeAI
Claude Docs
platform.claude.com › docs › en › release-notes › system-prompts
System Prompts - Claude Docs
November 25, 2025 - Claude's web interface (Claude.ai) and mobile apps use a system prompt to provide up-to-date information, such as the current date, to Claude at the start of every conversation. We also use the system prompt to encourage certain behaviors, such ...
Medium
medium.com › coding-nexus › claude-codes-entire-system-prompt-just-leaked-10d16bb30b87
Claude Code’s entire system prompt just leaked. | by Civil Learning | Coding Nexus | Dec, 2025 | Medium
1 week ago - This isn’t just curiosity about prompt engineering. It’s a blueprint for how modern AI coding agents are constructed. Let’s break it down. ... Claude Code does not work that way. Instead, it dynamically assembles dozens of system prompts based on:
Mikhail
mikhail.io › 2025 › 09 › sonnet-4-5-system-prompt-changes
Claude Code 2.0 System Prompt Changes | Mikhail Shilkov
October 1, 2025 - Taken together, the deltas point to less prescriptive prompt text and more reliance on model behavior. The system prompt moves from rigid rules (“do not add comments”) toward guidelines (“briefly confirm”).
Claude Docs
platform.claude.com › docs › en › agent-sdk › modifying-system-prompts
Modifying system prompts - Claude Docs
You can use the Claude Code preset with an append property to add your custom instructions while preserving all built-in functionality. You can provide a custom string as systemPrompt to replace the default entirely with your own instructions.
Arize
arize.com › arize ai › claude.md: best practices for optimizing with prompt learning
CLAUDE.md: Best Practices Learned from Optimizing Claude Code with Prompt Learning
November 20, 2025 - Why did Claude Code take this approach, instead of the right one? (if Claude Code was wrong) With our training data fully built, we can now feed this to our meta prompt, asking it to generate an optimized prompt. The term “rules” here refers to anything you supply through CLAUDE.md, or --append-system-prompt.
Anthropic
anthropic.com › engineering › claude-code-best-practices
Claude Code: Best practices for agentic coding
At Anthropic, we occasionally run CLAUDE.md files through the prompt improver and often tune instructions (e.g. adding emphasis with "IMPORTANT" or "YOU MUST") to improve adherence. By default, Claude Code requests permission for any action that might modify your system: file writes, many bash commands, MCP tools, etc.
Substack
prompthub.substack.com › p › dissecting-the-claude-4-system-prompt
Dissecting the Claude 4 System Prompt - by Dan Cleary
June 13, 2025 - Claude should let the person know ....com/en/docs/build-with-claude/prompt-engineering/overview**’. Obviously, we’re huge fans of prompt engineering. But it is interesting to see that they felt the need to hard-code this information into the system message....
GitHub
github.com › Piebald-AI › claude-code-system-prompts
GitHub - Piebald-AI/claude-code-system-prompts: All parts of Claude Code's system prompt, 20 builtin tool descriptions, sub agent prompts (Plan/Explore/Task), utility prompts (CLAUDE.md, compact, statusline, magic docs, WebFetch, Bash cmd, security review, agent creation). Updated for each Claude Code version.
2 weeks ago
Claude
docs.claude.com › en › docs › agent-sdk › modifying-system-prompts
Modifying system prompts - Claude Docs
To use Claude Code’s system prompt (tool instructions, code guidelines, etc.), specify systemPrompt: { preset: "claude_code" } in TypeScript or system_prompt="claude_code" in Python.
Claude Docs
platform.claude.com › docs › en › build-with-claude › prompt-engineering › system-prompts
Giving Claude a role with a system prompt - Claude Docs
The right role can turn Claude from a general assistant into your virtual domain expert! System prompt tips: Use the system parameter to set Claude's role.
Reddit
reddit.com › r/claudecode › why are we paying for system prompt?
Why are we paying for system prompt? : r/ClaudeCode
October 28, 2025 - I’ve used these models with opencode and crush too, and Claude code seems to provide a better package all around. My assumption has always been that it’s system prompt magic working for Claude code, but I recently started playing with replacing the system prompt and honestly didn’t find its behavior very different.
Kirshatrov
kirshatrov.com › posts › claude-code-internals
Reverse engineering Claude Code • Kir Shatrov
Any Bash tool use is preceded by this prompt: <policy_spec> # Claude Code Code Bash command prefix detection This document defines risk levels for actions that the Claude Code agent may take. This classification system is part of a broader safety framework and is used to determine when ...
Simon Willison
simonwillison.net › 2025 › May › 25 › claude-4-system-prompt
Highlights from the Claude 4 system prompt
May 25, 2025 - The “defined as a minor in their ... of the system prompt leaning on Claude’s enormous collection of “knowledge” about different countries and cultures. Claude does not provide information that could be used to make chemical or biological or nuclear weapons, and does not write malicious code, including ...
Medium
medium.com › data-science-in-your-pocket › claudes-system-prompt-explained-d9b7989c38a3
Claude’s System Prompt explained
May 10, 2025 - Example: ✅ “Write a Python function to reverse a string.” → “Would you like me to explain the code?” · If a user is unhappy, Claude politely directs them to feedback options (e.g., thumbs-down button). Example: ❌ “This answer is wrong!” → Claude: “I appreciate your feedback. You can click the thumbs-down button to report this to Anthropic.” · Claude’s not magic — it’s method. And the better you get at prompting, the more it feels like a wise collaborator instead of a souped-up autocomplete.