Thanks so much to /u/thelastlokean for raving about this.
I've been spending days writing my own custom scripts with grep and ast-grep, and adding tracing through instrumentation hooks and OpenTelemetry, to get Claude to understand the structure of the various API calls and function calls... Wow. Then Serena MCP (+ Claude Code) seems to be built exactly to solve that.
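To give a flavor of what I mean, the tracing half was basically hand-rolled decorators like this - a minimal sketch with hypothetical function names (my real hooks fed spans to OpenTelemetry instead of printing):

```python
import functools

def trace_calls(fn):
    """Log each call and return value so the call structure can be fed back to Claude."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"CALL {fn.__qualname__} args={args!r} kwargs={kwargs!r}")
        result = fn(*args, **kwargs)
        print(f"RET  {fn.__qualname__} -> {result!r}")
        return result
    return wrapper

@trace_calls
def fetch_user(user_id: int) -> dict:
    # hypothetical stand-in for a real API call
    return {"id": user_id, "name": "example"}

fetch_user(42)
```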
Within a few moments of reading some of the docs and trying it out I can immediately see this is a game changer.
Don't take my word, try it out. Especially if your project is starting to become more complex.
https://github.com/oraios/serena
I'm hitting context limits noticeably faster with Serena than without it.
And honestly, I can't even tell if it's actually helping or beneficial at all.
Anyone have any thoughts/experiences they'd like to share to help me understand if it's ACTUALLY helpful or not?
Thanks
I’ve been using a few MCPs in my setup lately, mainly Context 7, Supabase, and Playwright.
I'm just curious to know what others here are finding useful. Which MCPs have actually become part of your daily workflow with Cursor? I don't want to miss out on any good ones others are using.
Also, is there anything you feel is still missing - an MCP you wish existed for a repetitive or annoying task?
I've been going through this subreddit a bit on Serena MCP, and it's often mentioned; same goes for YouTube videos. I even saw some people posting their own products built just for this here today/yesterday.
Right now I'm trying to work out how to approach large legacy files, and it's a pain. I installed Serena MCP with Claude Code, but honestly I'm unsure whether I'm getting any actual benefit from it. The claim is that I'll save tokens and get much better indexing of large codebases, and while I do notice that it accesses the code through its index instead of the filesystem, I'm simply not feeling the love when it comes to working in the larger files, or getting a better overview of the codebase than Claude gives out of the box.
If anyone asks which MCP is a must-have, Serena will be mentioned, and you can find a lot of YouTube videos with that headline. But does anyone know of someone who works through this with actual large codebases and spends time showing the benefit in real life? The ones I've gone through so far say 'it's great', show how to install it, and that's about it.
And note I'm not dissing Serena at all. It seems to be extremely valuable and I might be using it wrong, but it would be great if anyone had real hands-on experience with genuinely large codebases, or just large source files, so I could be pointed toward how to utilize it.
Or should I go for other tools? The main problem, of course, is that you can get really, really stuck with bad legacy code: huge source files or badly structured code. The goal here is to be able to do, for example, some rough refactoring on single large files that go way beyond the context window of CC.
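For what it's worth, the one workaround that has helped me so far is extracting just the symbol I need instead of pasting the whole file. A rough, Python-only sketch (file and function names are made up; conceptually this is what Serena's symbolic tools seem to do across languages):

```python
import ast
from pathlib import Path

def extract_function_source(path: str, name: str) -> str | None:
    """Return the source of one function from a huge legacy file."""
    source = Path(path).read_text(encoding="utf-8")
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) and node.name == name:
            return ast.get_source_segment(source, node)
    return None

# Hand the model only this snippet instead of the 10k-line file.
print(extract_function_source("legacy_module.py", "calculate_payroll"))
```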
Or, if anyone has had consistent luck moving through large codebases for refactoring and can show some working prompts and tools for this (I'm already planning/documenting/subagenting/etc., so I'm really looking for hands-on, proper practice and the right tools).
Note: languages vary - anything from C#, Java, and JS to different web frameworks.
Thanks!
I've been using Claude/Cursor and these MCP things for a while now. These are the ones you must have
Context 7 is like having a really smart friend who always knows the latest way to use any coding library. No more outdated examples that don't work.
Docker MCP is genius because it keeps things clean. Instead of having hundreds of tools cluttering everything up, it only loads what you need right now.
Shadcn Registry MCP makes building pretty websites super easy. You just ask for a component and it knows exactly how to add it without breaking stuff.
Google's new MCPs are pretty cool if you use Google services. They just announced ones for Maps, BigQuery, and cloud stuff. There are also free ones for Firebase and other Google tools.
Notion MCP has been a lifesaver for me. I can tell Claude to update my to-do lists, track projects, and organize ideas without ever opening Notion.
Supabase MCP handles all the database work. No more writing confusing database commands myself - Claude just does it.
Anyone else using MCPs? Which ones do you like most?
One of the biggest gaps in most AI coding setups today is persistent memory. By default, session history gets reset, which kills continuity and prevents Cursor from adapting to your project or codebase over time. That means you end up re-explaining the same context and instructions, which hurts productivity.
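Even the crudest fix makes the gap concrete. Option 1 below can be as little as this - a minimal sketch where the file name and helpers are my own invention:

```python
from datetime import date
from pathlib import Path

MEMORY_FILE = Path("claude.md")  # any project-level notes file works

def remember(note: str) -> None:
    """Append a dated bullet so the next session can pick up where this one stopped."""
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- [{date.today()}] {note}\n")

def recall() -> str:
    """Everything remembered so far, ready to paste into the next session's context."""
    return MEMORY_FILE.read_text(encoding="utf-8") if MEMORY_FILE.exists() else ""

remember("Auth refactor: tokens are now issued in services/token.py")
print(recall())
```

Transparent and greppable, but every session pays for the whole file in context, which is exactly why it doesn't scale.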
I’ve been experimenting with different MCP-compatible memory layers to extend Cursor agents. Here are some standouts and their best-fit use cases:
1. File-based memory (claude.md, Cursor configs)
- Best for personalization and lightweight assistants. Simple, transparent, but doesn’t scale.
- MCP compatibility: Not built-in. Needs custom connectors to be useful in agent systems.
2. Vector DBs (Pinecone, Weaviate, Chroma, FAISS, pgvector, Milvus)
- Best for large-scale semantic search across docs, logs, or knowledge bases.
- MCP compatibility: No native MCP; requires wrappers (see the sketch after this list).
3. Byterover
- Best for team collaboration, with a Git-like system for AI memories. Supports episodic and semantic memory, plus agent tools and workflows that help agents build and use context effectively in tasks like debugging, planning, and code generation.
- MCP compatibility: Natively designed for MCP servers and works smoothly with Cursor across IDEs and CLIs.
4. Zep
- Best for production-grade assistants on large, evolving codebases. Hybrid search and summarization keep memory consistent.
- MCP compatibility: Partial. Some connectors exist, but setup is not always straightforward.
5. Letta
- Best for structured, policy-driven long-term memory. Useful in projects that evolve frequently and need strict update rules.
- MCP compatibility: Limited. Requires integration work for MCP.
6. Mem0
- Best for experimentation and custom pipelines. Backend-agnostic, good for testing retrieval and storage strategies.
- MCP compatibility: Not native, but some community connectors exist.
7. Serena
- Best for personal or small projects where polished UX and easy setup matter more than depth.
- MCP compatibility: No out-of-the-box MCP support.
8. LangChain Memories
- Best for quick prototyping of conversational memory. Easy to use but limited for long-term use.
- MCP compatibility: Some LangChain components can be wrapped, but not MCP-native.
9. LlamaIndex Memory Modules
- Best for pluggable and flexible memory experiments on top of retrieval engines.
- MCP compatibility: Similar to LangChain, integration requires wrappers.
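As promised above, here's roughly what the "wrapper" work for option 2 amounts to in practice - a minimal sketch that exposes a Chroma collection as MCP tools via the official Python SDK's FastMCP helper (the tool and collection names are my own choices, not a standard):

```python
import chromadb
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("memory")  # server name shown to the client
collection = chromadb.Client().get_or_create_collection("memories")

@mcp.tool()
def store_memory(text: str, memory_id: str) -> str:
    """Store a note in the vector DB so later sessions can find it semantically."""
    collection.add(documents=[text], ids=[memory_id])
    return f"stored {memory_id}"

@mcp.tool()
def search_memory(query: str, n_results: int = 3) -> list[str]:
    """Semantic search over stored notes."""
    results = collection.query(query_texts=[query], n_results=n_results)
    return results["documents"][0]

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

Registered in Cursor or Claude as a stdio server, the agent can then call store_memory and search_memory like any other tool.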
Curious what everyone else is using. Are there any memory frameworks you’ve had good luck with, especially for MCP setups? Any hidden gems I should try? (with specific use cases)
After extensive testing, I’ve found that Claude Code (CC) significantly outperforms other AI coding tools, including Windsurf, Cursor, Replit and Serena, despite some claims that Serena is on par with CC.
I recently tested Serena—an MCP platform marketed as being on par with Claude Code while costing 10x less—but the results were disappointing. With each prompt, Serena introduced numerous errors, requiring 1–2 hours of manual debugging just to get an 80% complete result. In contrast, Claude Code delivered 100% accurate output across three significant UI components in just 6 minutes, with only 60 seconds of prompting and no further intervention.
Yes, CC is more expensive in terms of API usage—one task alone cost me $3.92—but the results were flawless. Not a single syntax, logic, or design issue. The time saved and the hands-off experience more than justified the cost in my case.
Some users have argued that Claude Code doesn’t do anything particularly special. I disagree. After testing various tools like Serena and Windsurf, it’s clear that CC consistently delivers superior quality and reliability.
Given that Serena runs through Claude Desktop (avoiding per-token API costs), my goal with this post is to work together as a community to methodically analyze what makes Claude Code so remarkably effective, and to replicate its performance within a Serena-style (MCP) setup at a fraction of the cost.
Analyzing Anon Kode, an open-source replica of Claude Code, might be a good place to start.
I wanted to share my recent experience with two different AI-assisted development setups for a massive Laravel 12 project and get your thoughts. The project involves migrating an old Laravel 8 app to a new, fresh Laravel 12 installation, preserving its dual architecture while modernizing it.
The old app has a package that contains extensive business logic (18+ models, 11+ controllers, complex validation rules).
Migration Strategy:
- Fresh Laravel 12 installation
- Filament 3.3 installation
- Basic package structure setup
- Replace appzcoder/laravel-admin with Filament resources
- UserResource, RoleResource, PermissionResource creation
- RolePermissionSeeder with language permissions
- Test user creation and authentication setup
- Update composer.json for Laravel 12 compatibility
- Replace deprecated packages with new ones
- Update model factories and middleware registration
- Fix Laravel 12 compatibility issues
- Create a compatibility layer between Filament Shield and existing permissions
- Update ApplicationPermission, AdminPermission, CheckRole middleware
- Integrate URL-based permission system with Filament
- Backup existing database
- Run Laravel 12 migrations on fresh database
- Create data migration commands for preserving existing data
- Migrate users, roles, workers, workplaces, and all HR data
- Create Filament pages linking to custom routes used by a custom-written Laravel extension
- Update custom package for Laravel 12
- Update navigation to show both systems
- Comprehensive testing of all functionality
- Performance optimization and bug fixes
The Contenders:
Claude Desktop app + Serena MCP
Codex + Serena MCP
I was initially using the Claude Desktop app with the Serena MCP, and for a while, it was a solid combination. However, recently I've hit some major productivity roadblocks. Claude started to "overthink" tasks, introducing features I never asked for and generating unnecessary markdown files outlining the tasks I had already explained. It felt like I was spending more time cleaning up after it than it was saving me.
The Game Changer: Codex + Serena MCP
On a whim, I switched to using Codex with the same Serena MCP setup, and the difference has been night and day. Here’s what stood out:
Codex gets it done in one shot. I've been consistently impressed with how Codex handles tasks. I provide my instructions, and it delivers the code exactly as requested, in a single pass. There's no back and forth, no need to correct extraneous additions. It's direct, efficient, and respects the scope of the task.
No unnecessary overhead. With Codex, I haven't had to deal with any of the "creative additions" I was experiencing with Claude. It doesn't add extra logic, features, or documentation that wasn't explicitly requested. This has been a massive time-saver and has made the development process much smoother.
In my experience, for a large, complex project like this, the straightforward, no-nonsense approach of Codex has been far more effective. It feels like a tool that's designed to be a precise instrument for developers, rather than a creative partner that sometimes goes off-script.
Has anyone else had similar experiences when comparing these (or other) AI models on large-scale projects? I'm curious to know if my experience is unique or if others have found certain models to be better suited for specific types of development workflows.
TL;DR: For my complex Laravel project, Codex + Serena MCP has been significantly more efficient and direct than Claude + Serena MCP. Codex completes tasks in one go without adding unrequested features, which has been a major boost to my productivity.
Hi all,
I've been using the serena MCP with claude code inside VSCode, but it breaks the IDE integration workflow that allows me to see a diff view of the code changes. Does anyone have a "compromise" setup that gets most of the benefits of Serena without losing the diff view? Would it work to just remove the editing tools like regex replace, or at that point does Serena become a waste? Thanks!
Hey all - I’ve read a lot about MCPs here. I still don’t quite get them.
The gist I understand is that they are basically little servers you can run to augment what the AI can do?
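Here's the extent of my mental model so far, in code form. As far as I can tell, the official Python SDK lets you stand one up in a dozen lines (a sketch, not something I've battle-tested):

```python
# pip install "mcp[cli]"  (the official MCP Python SDK)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("hello")  # the server name the client displays

@mcp.tool()
def shout(text: str) -> str:
    """A toy tool the model can decide to call once the server is registered."""
    return text.upper()

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio; Cursor/Claude launches it as a subprocess
```

Is that the right picture, or is there more to it?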
I heard Serena is a good one for understanding your whole codebase better, but I thought Cursor already did that? Can someone sort of explain the benefits of Serena for me?
And maybe recommend a few others to try?
Thanks in advance !
I am using Serena MCP, but I don't notice that Claude Code works better with it. In fact, anytime it calls one of Serena's tools, CC grinds to a halt. I have my project indexed. Is it just me, or are MCPs just hype rather than a value add?