I am using Serena MCP, but I don't notice Claude Code working any better with it. In fact, any time it calls one of Serena's tools, CC slows to a crawl. I have my project indexed. Is it just me, or are MCPs just hype rather than actual value-adds?
I'm constantly hitting context limits faster with Serena than without it.
And honestly, I can't even tell if it's actually helping or beneficial at all.
Anyone have any thoughts/experiences they'd like to share to help me understand if it's ACTUALLY helpful or not?
Thanks
Claude 4, in particular Opus, is amazing for coding. It has only two main downsides: high cost and a relatively small context window.
Fortunately, there is a free, open-source (MIT-licensed) solution that helps with both: the Serena MCP server, a toolbox that uses language servers (plus a fair amount of code on top of them) to let an LLM perform symbolic operations, including edits, directly on your codebase. You may have seen my post on it a while ago, when we had just published the project. It turns a vanilla LLM into a capable coding agent, and improves existing coding agents when included in them.
Now, a few weeks and 1k stars later, we are nearing a first stable version. I have started evaluating it, and I'm blown away by the results so far! When using it on its own in Claude Desktop, it turns Claude into a careful and token-frugal agent, capable of acting on enormous projects without running into token limits. As a complement to an existing agentic solution, like Claude Code or some other coding agent, Serena significantly reduced costs in all my experiments while keeping or increasing the quality of the output.
None of this is surprising, of course. If you give me an IDE, I will obviously be better and faster at coding than if I had to work in something like Word using pure file reads and edits. Why shouldn't the same hold for an LLM?
A quantitative evaluation on SWE-bench Verified is on its way, but just to give a taste of what Serena can do, I created a PR on a benchmark task from sympy, with Opus running on Claude Desktop. It demonstrates how Opus intelligently uses the tools to explore, read, and edit the codebase in the most token-efficient manner possible. For complete transparency, the onboarding conversation and the solution conversation are included. The same holds for Sonnet, but it's particularly useful for Opus, since due to its high cost, token efficiency becomes key.
Since Claude Code is now included in the Pro subscription, file-read-based MCPs are largely obsolete for coding purposes (the codemcp dev, for example, said he's discontinuing the project). Not so for Serena: the symbolic tools it offers are a valuable addition to Claude Code, rather than something it replaces.
Even though sympy is a huge repository, the Opus+Serena combo went through it like a breeze. For anyone wanting cheaper and faster coding agents, especially on larger projects, I highly recommend looking into Serena! We are still early in the journey, but I think the potential is very high.
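For Claude Code specifically, hooking Serena in is a one-liner. This is a sketch, not official install docs: it assumes the `claude mcp add` subcommand of the Claude Code CLI, and reuses the `start-mcp-server` flags that appear in Serena's readme; adjust the context and project path to your setup.

```shell
# Sketch: register Serena as an MCP server for the current project.
# Assumes `claude` and `uvx` are on PATH; everything after `--` is the
# server launch command, pulled straight from Serena's git repo.
claude mcp add serena -- \
  uvx --from git+https://github.com/oraios/serena \
  serena start-mcp-server --context ide-assistant --project "$(pwd)"
```

After this, Claude Code starts the Serena server automatically whenever you open a session in that project.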
Thanks so much to /u/thelastlokean for raving about this.
I've spent days writing my own custom scripts with grep and ast-grep, and wiring up tracing through instrumentation hooks and OpenTelemetry, to get Claude to understand the structure of the various API calls and function calls... Wow. Serena MCP (+ Claude Code) seems to be built exactly to solve that.
Within a few moments of reading some of the docs and trying it out I can immediately see this is a game changer.
Don't take my word, try it out. Especially if your project is starting to become more complex.
https://github.com/oraios/serena
I've been going through this subreddit a bit on the Serena MCP, and it's often mentioned; same goes for YouTube videos. I even saw some folks posting their own products built for this here in the last couple of days.
Right now I'm trying to figure out how to approach large legacy files, which is a pain. I installed Serena MCP with Claude Code, but honestly I'm unsure whether I'm getting any actual benefit from it. The claim is that I'll save tokens and get much better indexing of large codebases, and while I do notice that it goes through its index instead of the filesystem, I'm simply not feeling the love: I don't feel any more able to work in the larger files, or that I get a better overview of the codebase than Claude gives out of the box.
If anyone asks which MCP is a must-have, Serena gets mentioned, and you can find a lot of YouTube videos with that headline. But does anyone know of someone who works through actual large codebases and spends time showing the benefit in real life? The ones I've gone through so far say "it's great", show how to install it, and that's about it.
And note, I'm not dissing Serena at all. It seems to be extremely valuable and I might be using it wrong, but it would be great if someone with real hands-on experience on large codebases, or just large source files, could point me in the direction of how to utilize it.
Or should I go for other tools? The main problem, of course, is that you can get really, really stuck with bad legacy code: huge source files or badly structured code. The goal here is to be able to do, for example, some rough refactoring on single large files that go way beyond the context window of CC.
Or, if anyone has had consistent luck moving through large codebases for refactoring, I'd love to see some working prompting and tools for this (I'm already planning/documenting/using subagents/etc., so I'm really looking for hands-on proper practice and the right tools).
Note: languages vary - anything from C#, Java, and JS to various web frameworks.
Thanks !
I wanted to share my recent experience with two different AI-assisted development setups for a massive Laravel 12 project and get your thoughts. The project involves migrating an old Laravel 8 app to a new, fresh Laravel 12 installation while preserving its dual architecture and modernizing it along the way.
The old app has a package that contains extensive business logic (18+ models, 11+ controllers, complex validation rules).
Migration Strategy:
- Fresh Laravel 12 installation
- Filament 3.3 installation
- Basic package structure setup
- Replace appzcoder/laravel-admin with Filament resources
- UserResource, RoleResource, PermissionResource creation
- RolePermissionSeeder with language permissions
- Test user creation and authentication setup
- Update composer.json for Laravel 12 compatibility
- Replace deprecated packages with new ones
- Update model factories and middleware registration
- Fix Laravel 12 compatibility issues
- Create a compatibility layer between Filament Shield and existing permissions
- Update ApplicationPermission, AdminPermission, CheckRole middleware
- Integrate URL-based permission system with Filament
- Backup existing database
- Run Laravel 12 migrations on fresh database
- Create data migration commands for preserving existing data
- Migrate users, roles, workers, workplaces, and all HR data
- Create Filament pages linking to custom routes used by a custom-written Laravel extension
- Update custom package for Laravel 12
- Update navigation to show both systems
- Comprehensive testing of all functionality
- Performance optimization and bug fixes
The Contenders:
Claude Desktop app + Serena MCP
Codex + Serena MCP
I was initially using the Claude Desktop app with the Serena MCP, and for a while, it was a solid combination. However, recently I've hit some major productivity roadblocks. Claude started to "overthink" tasks, introducing features I never asked for and generating unnecessary markdown files outlining the tasks I had already explained. It felt like I was spending more time cleaning up after it than it was saving me.
The Game Changer: Codex + Serena MCP
On a whim, I switched to using Codex with the same Serena MCP setup, and the difference has been night and day. Here’s what stood out:
Codex gets it done in one shot. I've been consistently impressed with how Codex handles tasks. I provide my instructions, and it delivers the code exactly as requested, in a single pass. There's no back and forth, no need to correct extraneous additions. It's direct, efficient, and respects the scope of the task.
No unnecessary overhead. With Codex, I haven't had to deal with any of the "creative additions" I was experiencing with Claude. It doesn't add extra logic, features, or documentation that wasn't explicitly requested. This has been a massive time-saver and has made the development process much smoother.
In my experience, for a large, complex project like this, the straightforward, no-nonsense approach of Codex has been far more effective. It feels like a tool that's designed to be a precise instrument for developers, rather than a creative partner that sometimes goes off-script.
Has anyone else had similar experiences when comparing these (or other) AI models on large-scale projects? I'm curious to know if my experience is unique or if others have found certain models to be better suited for specific types of development workflows.
TL;DR: For my complex Laravel project, Codex + Serena MCP has been significantly more efficient and direct than Claude + Serena MCP. Codex completes tasks in one go without adding unrequested features, which has been a major boost to my productivity.
I’ve been trying out Serena and quite like the improved token efficiency, but one thing lowkey bothers me: when running CC without auto-accept, it renders generally OK, readable diffs when it touches files. With Serena, however, I only see a tool call and can’t interactively review and effectively “micromanage” CC the same way. Is there something I could configure to get readable diffs with CC and Serena? How do you solve this, and how do you use Serena generally?
Background: I'm one of the devs of Serena MCP, and I recently got scared when I realized how easy it would be to deploy an attack.
Serena is backed by our company, a proper legal entity, so our users are safe. But I doubt that many have realized that fact, or, frankly, that many care.
By now we have thousands of users; the majority use uvx, which automatically pulls everything from the main branch. They start the server in their repo, and many use Serena on private code.
If I wanted to hack them, I could push something to main that would send me their entire codebase (including any secrets). Hell, for those not using Docker (likely the majority), it could send me anything from their computer! I could then force-push over that commit and pretend nothing ever happened. It's honestly insane.
The same is largely true when installing any Python package (arbitrary code execution). But there, people seem to follow better standards of due diligence, and folks usually pin their versions. For MCP, the prevailing attitude seems to be "anything goes". In part that may be due to the many non-programmers and juniors using this technology.
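Concretely, pinning is cheap: uv's git support accepts a ref suffix after `@`, so your MCP config can lock to a specific tag or commit instead of tracking `main`. A sketch of what that looks like in an MCP client config (the `<tag-or-commit>` is a placeholder; substitute a real release tag or commit hash you have reviewed):

```json
{
  "mcpServers": {
    "serena": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/oraios/serena@<tag-or-commit>",
        "serena",
        "start-mcp-server"
      ]
    }
  }
}
```

With a pinned ref, a malicious force-push to `main` can't reach you; you only pick up new code when you deliberately bump the ref.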
Stay safe out there. My recommendation is to only run MCP servers from someone you could actually sue... especially when using auto-updates, which seem to be the default nowadays.
What are you currently using for general improvement to your agents search / retrieval capabilities?
I've been using Serena for the most part, but I have had quite a few instances where it has unintentionally blown through my context (always conveniently when on Opus) with a bad pattern search, which has not been great. I know that Serena is much more than this (especially in larger codebases with multiple languages), but I am trying to see if there's a better option out there. I've been hearing more about Codanna, but haven't seen much chatter around it.
Also, since the introduction of /context, I am much more aware of how much context it's using at all times. I've heard of rolling a reduced MCP with only the features I use most, but haven't dug into that yet.
I’m using Claude Code to build a web app with a front end and back end in Node.js and TypeScript. My codebase is growing and growing.
I have an architecture document that I have the AI keep up to date via a rule I set and that document generally helps it navigate the codebase.
But it’s just pointers to high-level information, not an index of all the types and things I’ve built with it.
Is there a good MCP that will help with this problem? Or others yall recommend?
Are people just making their own with vibe coding, or are they using “off the shelf” MCPs? I realize they can be a little risky, and I’m not skilled enough to read them and know whether they can be trusted.
I guess my question is: should I just make my own MCPs or get an existing one, and if so, what do y’all recommend for my specific issue and maybe other use cases I haven’t thought of?
I truly just don’t “get” MCPs too well.
Edit to add: Serena sounds like exactly what I am looking for. Has anyone tried it and have feedback?
We've been working like hell on this one: a fully capable agent, as good as or better than Windsurf's Cascade or Cursor's agent - but it can be used for free.
It can run as an MCP server, so you can use it for free with Claude Desktop, and it can still fully understand a code base, even a very large one. We did this by using a language server instead of RAG to analyze code.
You can also run it on Gemini, but you'll need an API key for that. With a new Google Cloud account you get $300 in credits as a gift, which you can spend on the API.
Check it out, super easy to run, GPL license:
https://github.com/oraios/serena
I'd like to use Serena MCP with Claude Code, but the need to manually "prime" it on every use is easy to fumble. The GitHub readme states:
an alternative to the above is adding the instructions as part of the system prompt; then you will not need to run the command above or remember re-running it after compacting. This can be achieved by starting Claude Code with `claude --append-system-prompt $(uvx --from git+https://github.com/oraios/serena serena print-system-prompt)`. Note that this is experimental, Claude may not understand the instructions correctly in this way, and we haven't thoroughly tested the resulting behavior. Please report any issues you encounter.
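One low-tech way to make this less fumble-prone is a shell function in your `~/.bashrc` or `~/.zshrc`, so the priming flag is applied every time you launch. This is an untested sketch (it assumes `claude` and `uvx` are on PATH); note the quotes around the `$(...)`, which keep the multi-word prompt as a single argument to the flag:

```shell
# Sketch: always start Claude Code with Serena's instructions
# appended to the system prompt, so no manual priming is needed.
claude-serena() {
  claude --append-system-prompt \
    "$(uvx --from git+https://github.com/oraios/serena serena print-system-prompt)" \
    "$@"
}
```

Then run `claude-serena` instead of `claude`. Since the prompt is fetched fresh on each launch, this also survives compaction without re-running anything.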
Is anyone doing this, and has anyone tried using a hook for this instead?
I randomly saw a YouTube video (sorry forgot the name for credit) where a guy showed an MCP called Serena.
I gave it a go and just wow…..
Claude is better than it ever has been for me.
Not sure if it will work for everyone, but I highly recommend giving it a go.
P.S. I put a “mandatory: must use Serena for any file operations if at all possible” as the first line in my CLAUDE.md.
Hope this helps someone!
Edit: https://youtu.be/UqfxuQKuMo8?si=i5_eQKuRYDSZa5vk
Hi all,
I've been using the Serena MCP with Claude Code inside VS Code, but it breaks the IDE integration workflow that lets me see a diff view of the code changes. Does anyone have a "compromise" setup that gets most of the benefits of Serena without losing the diff view? Would it work to just remove the editing tools like regex replace, or at that point does Serena become a waste? Thanks!
I've been trying to get the Serena MCP server (https://github.com/oraios/serena) working with Claude Code running in Ubuntu WSL2, but I'm hitting a persistent connection issue. The server launches successfully but Claude Code never actually connects to it.
Environment Details:
- OS: Windows 11 with WSL2 (Ubuntu 24)
- Claude Code: v2.0.20 (running in WSL terminal)
- Terminal: VS Code integrated terminal (working directory: `/mnt/d/Documents/Game Design Documents/Lianji`)
- Serena: installed via `uvx` from snap: astral-uv 0.8.17
- Project: Unity/C# project on Windows filesystem mounted at `/mnt/d/...`
- uvx location: `/snap/bin/uvx` (snap package)
- Node version in WSL: v18.20.6
Configuration Files:
~/.claude/settings.json:
```json
{
  "feedbackSurveyState": {
    "lastShownTime": 1754083318070
  },
  "$schema": "https://json.schemastore.org/claude-code-settings.json",
  "mcpServers": {
    "serena": {
      "command": "/home/althrretha/.claude/start-serena.sh",
      "args": []
    }
  }
}
```

`~/.claude/start-serena.sh`:

```bash
#!/bin/bash
# Serena MCP Server Launcher for Claude Code (stdio mode)
exec /snap/bin/uvx --from git+https://github.com/oraios/serena serena start-mcp-server --context ide-assistant --project "/mnt/d/Documents/Game Design Documents/Lianji"
```

(File has Unix line endings, chmod +x applied)

**What I've tried:**

1. **Initial attempt:** Used the Windows `uvx.exe` path (`/mnt/c/Users/.../uvx.exe`) with Windows-style paths - the server couldn't find the project due to the path format mismatch between WSL and Windows
2. **WSL-native uvx:** Installed via `sudo snap install astral-uv --classic`, updated the config to use `/snap/bin/uvx` with WSL paths - the server starts successfully when run manually, but Claude Code never connects
3. **Fixed line endings:** The initial wrapper script had CRLF line endings, causing a "required file not found" error - fixed with `sed -i 's/\r$//'`
4. **HTTP transport attempt:** Added `--transport streamable-http --port 9121` - same result (connection starts, never completes)
5. **Verified the Ref MCP server works:** The built-in Ref server connects successfully via HTTP, confirming Claude Code's MCP system is functional

**Current behavior:**

From `~/.claude/debug/latest`:

```
[DEBUG] MCP server "serena": Starting connection with timeout of 30000ms
[DEBUG] Writing to temp file: /home/althrretha/.claude.json.tmp.XXXX.XXXXXXXXX
```
Then... nothing. No completion message, no error, just timeout after 30 seconds.
Manual execution works perfectly:
```bash
$ /home/althrretha/.claude/start-serena.sh
INFO 2025-10-16 21:08:12,684 [MainThread] serena.agent:__init__:203 - Number of exposed tools: 19
INFO 2025-10-16 21:08:12,927 [MainThread] serena.cli:start_mcp_server:172 - Initializing Serena MCP server
INFO [MainThread] serena.agent:setup_mcp_server:563 - MCP server lifetime setup complete
```
Serena logs confirm full initialization with language server running (C# LSP has expected MSBuild warnings in WSL but core tools are available).
Testing observations:
- When Serena runs manually, `ps aux` shows two processes: the uv tool wrapper and the Python serena process
- The server listens on stdio by default (no HTTP port is opened unless explicitly configured)
- Claude Desktop (the non-WSL Windows app) connects to Serena successfully with the same project path, using Windows-style paths
- Closing Claude Desktop before starting the Claude Code session doesn't resolve the issue
Hypothesis: The stdio pipe communication between Claude Code (Node.js-based, running in WSL) and the spawned Serena process (Python via uvx) is failing to complete the MCP initialization handshake. The process launches but something in the inter-process communication breaks down, possibly related to:
- WSL's stdin/stdout handling with snap-confined applications
- File descriptor inheritance issues
- Buffering problems in the pipe communication
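One way to narrow this down (a generic debugging sketch, not Serena-specific) is to drive the stdio handshake by hand: MCP's stdio transport is newline-delimited JSON-RPC, and the first message a client sends is an `initialize` request. The snippet below is self-contained against a dummy echo server so it runs anywhere; swap `SERVER_CMD` for `["/home/althrretha/.claude/start-serena.sh"]` to probe the real pipe. If a reply comes back when Python spawns the script but Claude Code still times out, the snap-confinement and buffering theories gain weight.

```python
import json
import subprocess
import sys

# Dummy stdio "server" standing in for the MCP server, so this sketch is
# runnable as-is. Replace with ["/home/althrretha/.claude/start-serena.sh"]
# to test the actual launcher script.
SERVER_CMD = [
    sys.executable, "-c",
    "import sys, json\n"
    "req = json.loads(sys.stdin.readline())\n"
    "resp = {'jsonrpc': '2.0', 'id': req['id'], 'result': {'ok': True}}\n"
    "sys.stdout.write(json.dumps(resp) + '\\n')\n"
    "sys.stdout.flush()\n",
]

proc = subprocess.Popen(
    SERVER_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)

# MCP stdio transport: one JSON-RPC message per line. 'initialize' opens
# the handshake; the protocolVersion string is the one from the MCP spec.
init = {
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "probe", "version": "0.1"},
    },
}
proc.stdin.write(json.dumps(init) + "\n")
proc.stdin.flush()

# If this readline() hangs, the server never answered on stdout - the
# same symptom Claude Code's 30s timeout is hiding.
reply = json.loads(proc.stdout.readline())
print("handshake reply id:", reply["id"])
proc.terminate()
```

A hang here against the real script would point at the pipe itself (snap confinement, fd inheritance) rather than at Claude Code.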
Questions:
1. Has anyone successfully run stdio-based MCP servers with Claude Code in WSL2?
2. Is there a known workaround for snap-installed tools communicating via stdio with Node.js processes in WSL?
3. Should I try installing uvx via a different method (`pip install`?) to avoid snap confinement?
4. Are there any Claude Code debug flags that would give more visibility into why the MCP connection times out?
The fact that Claude Code successfully connects to the HTTP-based Ref server but fails with stdio-based Serena suggests the issue is specifically with stdio transport in my WSL environment.
Any insights appreciated!