Not sure how you feel about it but Gemini CLI feels like garbage at the moment compared to Claude Code. It's slow, it doesn't listen to instructions or use tools as well as Claude.
But it has that huge context window we all love.
So I just added instructions to CLAUDE.md to have Claude use the Gemini CLI in non-interactive mode (passing the -p param with a prompt to just get a response back from the CLI) when it needs to gather information about a large part of the codebase.
That way you get the best of both worlds, Claude doesn't waste context and Gemini doesn't waste your time.
Add this (or a modified version) to your CLAUDE.md and tell Claude to use Gemini manually, or it will do it on its own as needed.
# Using Gemini CLI for Large Codebase Analysis

When analyzing large codebases or multiple files that might exceed context limits, use the Gemini CLI with its massive context window. Use `gemini -p` to leverage Google Gemini's large context capacity.

## File and Directory Inclusion Syntax

Use the `@` syntax to include files and directories in your Gemini prompts. The paths should be relative to WHERE you run the gemini command.

### Examples

**Single file analysis:**
```bash
gemini -p "@src/main.py Explain this file's purpose and structure"
```

**Multiple files:**
```bash
gemini -p "@package.json @src/index.js Analyze the dependencies used in the code"
```

**Entire directory:**
```bash
gemini -p "@src/ Summarize the architecture of this codebase"
```

**Multiple directories:**
```bash
gemini -p "@src/ @tests/ Analyze test coverage for the source code"
```

**Current directory and subdirectories:**
```bash
gemini -p "@./ Give me an overview of this entire project"
# Or use the --all_files flag:
gemini --all_files -p "Analyze the project structure and dependencies"
```

### Implementation Verification Examples

**Check if a feature is implemented:**
```bash
gemini -p "@src/ @lib/ Has dark mode been implemented in this codebase? Show me the relevant files and functions"
```

**Verify authentication implementation:**
```bash
gemini -p "@src/ @middleware/ Is JWT authentication implemented? List all auth-related endpoints and middleware"
```

**Check for specific patterns:**
```bash
gemini -p "@src/ Are there any React hooks that handle WebSocket connections? List them with file paths"
```

**Verify error handling:**
```bash
gemini -p "@src/ @api/ Is proper error handling implemented for all API endpoints? Show examples of try-catch blocks"
```

**Check for rate limiting:**
```bash
gemini -p "@backend/ @middleware/ Is rate limiting implemented for the API? Show the implementation details"
```

**Verify caching strategy:**
```bash
gemini -p "@src/ @lib/ @services/ Is Redis caching implemented? List all cache-related functions and their usage"
```

**Check for specific security measures:**
```bash
gemini -p "@src/ @api/ Are SQL injection protections implemented? Show how user inputs are sanitized"
```

**Verify test coverage for features:**
```bash
gemini -p "@src/payment/ @tests/ Is the payment processing module fully tested? List all test cases"
```

## When to Use Gemini CLI

Use `gemini -p` when:
- Analyzing entire codebases or large directories
- Comparing multiple large files
- You need to understand project-wide patterns or architecture
- The current context window is insufficient for the task
- Working with files totaling more than 100KB
- Verifying whether specific features, patterns, or security measures are implemented
- Checking for the presence of certain coding patterns across the entire codebase

## Important Notes

- Paths in `@` syntax are relative to your current working directory when invoking gemini
- The CLI will include file contents directly in the context
- No need for the --yolo flag for read-only analysis
- Gemini's context window can handle entire codebases that would overflow Claude's context
- When checking implementations, be specific about what you're looking for to get accurate results
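Once that's in CLAUDE.md, you can also nudge Claude explicitly during a session. The directory and prompt below are purely illustrative placeholders, not part of the snippet above:

```bash
# What Claude ends up running when you say something like
# "survey the codebase with Gemini before you start editing":
gemini -p "@src/ Summarize the authentication flow and list the files involved"
```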
Google’s Gemini CLI finally feels like an AI that belongs in the terminal, but the real twist? Devs testing it side-by-side with Claude Code are noticing Claude quietly outperforms it in reasoning-heavy tasks: cleaner refactors, sharper edge-case spotting, and better repo-level understanding. Gemini CLI is fast and environment-aware, but Claude Code is acting like the senior engineer who already read your whole codebase twice.
Still, if you want a quick look at how Gemini CLI is evolving in real workflows, this breakdown helps: Gemini CLI
I have been using Claude Code for a while, and needless to say, it is very, very expensive. And Google just launched the Gemini CLI with a very generous offering. So, I gave it a shot and compared both coding agents.
I assigned them both a single task (Prompt): building a Python-based CLI agent with tools and app integrations via Composio.
Here's how they both fared.
Code Quality:

- No points for guessing: Claude Code nailed it. It created the entire app in a single try. It searched the Composio docs, followed the exact prompt as stated, and built the app.
- Gemini, on the other hand, was very bad, and it couldn't build a functional app even after multiple iterations. It was stuck, and I had lost all hope in it.
- Then I came across a Reddit post that used Gemini CLI in non-interactive mode with Claude Code by adding instructions to CLAUDE.md. It worked like a charm: Gemini did the information gathering, and Claude Code built the app like a pro.
- This way, I could utilise Gemini's massive 1M context and Claude's exceptional coding and tool-execution abilities (a rough sketch of that kind of delegated call is below).
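A minimal sketch of the delegation step, assuming the CLAUDE.md instructions from earlier in this thread; the directory paths and prompt wording are placeholders, not taken from the original experiment:

```bash
# Claude Code offloads the broad survey to Gemini's large context window,
# then uses the returned summary to plan and write the code itself.
gemini -p "@docs/ @src/ Summarize how the Composio tool integrations are registered and which modules the CLI agent should touch"
```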
Speed:

- Claude, when working alone, took 1h 17m to finish the task, while the Claude+Gemini hybrid took 2h 2m.
Tokens and Cost:

- Claude Code took a total of 260.8K input tokens and returned 69K tokens, with a 7.6M read cache (CLAUDE.md) and auto-compaction. It cost $4.80.
- The Gemini CLI processed a total of 432K input tokens and returned 56.4K tokens, utilising an 8.5M read cache (GEMINI.md). It cost $7.02.
For complete analysis checkout the blog post: Gemini CLI vs. Claude Code
It was a bit crazy. Google has a lot of catching up to do here; Claude Code is in a different tier, with Cursor agents being the closest competitor.
What has been your experience with coding agents so far? Which one do you use the most? Would love to know some quirks or best practices in using them effectively, as I, like everyone else, don't want to spend fortunes.
I've been using Claude Code (Opus 4.5) a lot lately and noticed it sometimes goes off in weird directions on complex tasks. It's great at writing code (especially Opus 4.5), but architecture decisions can be hit or miss. Gemini 3 Pro is INCREDIBLE at this.
So I built a CLI wrapper around Gemini that integrates with Claude Code. The idea is Claude handles the implementation while Gemini provides strategic oversight.
Since Claude Code auto-compacts, it can run for a very long time. The /fullauto command takes full advantage of this.
You can send a prompt, go to sleep, and it will be either done or still working when you come back. So only Claude subscription / Gemini API key rate-limiting will stop it.
The Oracle maintains a 5-exchange conversation history per project directory by default, so Gemini has enough context to make useful suggestions without blowing up the context window. Claude can also edit this context window directly, or skip it entirely (`oracle quick`).
It auto-installs a `/fullauto` slash command. You give Claude a task and it autonomously consults Gemini at key decision points. Basically, it's pair programming where both programmers are AIs. Example:
/fullauto Complete the remaining steps in plan.md
For /fullauto mode, Claude writes to FULLAUTO_CONTEXT.md in your project root. This works as persistent memory that survives conversation compactions.
/fullauto also instructs Claude on how to auto-adjust if the Oracle's guidance is misaligned.
It can also use the new Gemini 3 image recognition and Nano Banana Pro for generating logos, diagrams, etc.
When Claude runs `oracle imagine`, it uses Nano Banana Pro image generation; if you're region-blocked, the CLI automatically spins up a cheap US server on Vast.ai, generates the image there, downloads it to your machine, and destroys the server (you need a Vast.ai API key for this).
Examples of what Claude Code can do with it:
```bash
# Ask for strategic advice
oracle ask "Should I use Redis or Memcached for session caching?"

# Get code reviewed
oracle ask --files src/auth.py "Any security issues here?"

# Review specific lines
oracle ask --files "src/db.py:50-120" "Is this query efficient?"

# Analyze a screenshot or diagram
oracle ask --image error.png "What's causing this?"

# Generate images (auto-provisions US server if you're geo-restricted)
oracle imagine "architecture diagram for microservices"

# Quick one-off questions
oracle quick "regex for email validation"

# Conversation history (5 exchanges per project)
oracle history
oracle history --clear
```
I used this tool to create the repo itself. `/fullauto` orchestrated the whole thing.
Repo: https://github.com/n1ira/claude-oracle
Claude Code is very good at reasoning, structuring and generating an action plan. But it quickly consumes a lot of requests for simple tasks: launching a command, manipulating files, listing a directory, etc.
I'm thinking about an approach where Claude Code generates the steps, and a local agent based on Gemini CLI executes them. As Gemini CLI is free for up to 1000 requests/day, we could offload Claude and optimize the overall flow.
➡️ Claude = brain (analysis, planning)
➡️ Gemini = executor (simple commands, local manipulation)
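A minimal sketch of what that decoupling could look like, assuming Claude Code writes one step per line to a plan.steps file; the file name and prompt wording are my own placeholders, not an established convention of either tool:

```bash
#!/usr/bin/env bash
# Hypothetical glue script: Claude Code (the "brain") emits plan.steps,
# and Gemini CLI (the "executor") runs each step non-interactively.
set -euo pipefail

while IFS= read -r step; do
  [ -z "$step" ] && continue   # skip blank lines
  echo ">>> step: $step"
  # --yolo auto-approves tool use so the step can actually touch files/shell
  gemini --yolo -p "Execute this single step and report what you did: $step"
done < plan.steps
```

Whether this is actually cheaper or more stable than letting Claude execute everything itself is exactly the open question here.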
Has anyone tested this type of architecture? Integrated via scripts, wrappers, MCP, etc.? Any feedback on the stability, the limits, or the real value of this decoupling?
I developed and open sourced Zen MCP a little while ago primarily to supercharge our collective workflows; it's now helped thousands of developers (and non-developers) over the past few months. Originally, the idea was to connect Claude Code with other AI models to boost productivity and bring in a broader range of ideas (via an API key for Gemini / OpenRouter / Grok etc). Claude Sonnet could generate the code, and Gemini 2.5 Pro could review it afterward. Zen offers multiple workflows and supports memory / conversation continuity between tools.
These workflows are still incredibly powerful, but with recent reductions to weekly quota limits within Claude Code, every token matters. I'm on the 20x Max Plan and saw a warning yesterday that I've consumed ~80% of my weekly quota by seemingly doing nothing. With Codex now becoming my primary driver, it's clearer than ever that there's tremendous value in bringing other CLIs into the workflow. Offloading certain tasks like code review, planning, or research to tools like Gemini lets me preserve my context (and weekly limits) while also taking advantage of the other CLIs' stronger capabilities.
Gemini CLI (although woefully bad on its own for agentic tasks; Gemini 2.5 Pro however is absolutely amazing in reasoning) offers up to 1000 free requests a day! Why not use the CLI directly for simpler things? Documentation? Code reviews? Bug hunting? Maybe even simple features / enhancements?
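For instance, with the @-inclusion syntax documented earlier, a standalone review pass is a one-liner; the directories below are placeholders for your own layout:

```bash
# Offload a read-only review to Gemini CLI's free tier (no --yolo needed for analysis)
gemini -p "@src/ @tests/ Review this code for bugs, missing error handling, and gaps in test coverage; list findings by file"
```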
Zen MCP just landed an incredible update today to allow just that - you can now use Gemini CLI directly from within Claude Code (or Codex, or any tool that supports MCP) and maintain a single shared context. You can also assign multiple custom roles to the CLI (via a configurable system prompt). Incredibly powerful stuff. Not only does this help you dramatically cut down on Claude Code token usage, it also lets you tap into free credits from Gemini!
I'll soon be adding support for Codex / Qwen etc. and even Claude Code. This means you'll be able to delegate tasks across CLIs (and give them unique roles!) in addition to incorporating any other AI model you want: e.g. use the planner tool with GPT-5 to plan something out, get Gemini 2.5 Pro to nitpick, and ask Sonnet 4.5 to implement. Then get Gemini CLI to code review and write unit tests - all while staying in the same shared context and saving tokens, getting the best of everything! Sky's the limit!
Update: Also added support for Codex CLI. You can now use an existing Codex subscription and invoke code reviews from within Claude Code:
clink with codex cli and perform a full code review using the codereview role
Second Update: New tool added: `apilookup` - it ensures you always get current, accurate API/SDK documentation by forcing the AI to search for the latest information systematically (simply saying "use latest APIs" doesn't work - it'll still use APIs it was aware of at its training cut-off date).
use apilookup how do I add glass look to a button in swift?
--
The video above was taken in a single take (trimmed frames to cut out wait times):
1. Cloned https://github.com/LeonMarqs/Flappy-bird-python.git (which does not contain the scoring feature)
2. Asked Claude Code to use the `consensus` Zen MCP tool to ask GPT-5 and Codex what they think would be nice to add quickly
3. Asked Claude Code to get Gemini CLI to perform the actual implementation (Gemini CLI received the full conversation + consensus + request + the prompt)
4. Tested if it works - and it does!