Thanks so much to /u/thelastlokean for raving about this.
I'd been spending days writing my own custom scripts with grep and ast-grep, and adding tracing via instrumentation hooks and OpenTelemetry, just to get Claude to understand the structure of the various API calls and function calls. Wow. Then it turns out Serena MCP (+ Claude Code) is built exactly to solve that.
Within a few moments of reading some of the docs and trying it out I can immediately see this is a game changer.
Don't take my word, try it out. Especially if your project is starting to become more complex.
https://github.com/oraios/serena
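If you want to try it with Claude Code, registration is a single command. This is how I wired it up; the exact entry point and flags come from Serena's README at the time of writing and may have changed, so double-check the repo:

```shell
# Register Serena as an MCP server for the current project.
# Assumes uv/uvx is installed; run from your project root.
claude mcp add serena -- uvx --from git+https://github.com/oraios/serena serena start-mcp-server --context ide-assistant --project "$(pwd)"
```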
I'm constantly hitting context limits faster with Serena than without it.
And honestly, I can't even tell if it's actually helping or beneficial at all.
Anyone have any thoughts/experiences they'd like to share to help me understand whether it's ACTUALLY helpful or not?
Thanks!
I've been going through this subreddit a bit on Serena MCP and it's often mentioned; same goes for YouTube videos. I even saw some people posting their own products built just for this here today/yesterday.
Right now I'm trying to work out how to approach large legacy files, and it's a pain. I installed Serena MCP with Claude Code, but honestly I'm unsure whether I'm getting any actual benefit from it. It's claimed that I'll save tokens and get much better indexing of large codebases, and while I do notice that it accesses code through its index instead of the filesystem, I'm simply not feeling the love: I don't feel any more able to work in the larger files, or that I get a better overview of the codebase than Claude gives out of the box.
If anyone asks which MCP is a must-have, Serena will be mentioned, and you can find a lot of YouTube videos with that headline. But does anyone know of someone who goes through this with actual large codebases and spends time showing the benefit in real life? The ones I've watched so far say "it's great", show how to install it, and that's about it.
And note I'm not dissing Serena at all. It seems extremely valuable and I might be using it wrong, but it would be great if anyone had real hands-on experience with genuinely large codebases, or just large source files, so I could be pointed toward how to utilize it.
Or should I go for other tools? The main problem, of course, is that you can get really, really stuck with bad legacy code: huge source files or badly structured code. The goal here is to be able to, for example, do some rough refactoring on single large files that go way beyond the context window of CC, etc.
Or, if anyone has had consistent luck moving through large codebases for refactoring, could you share some working prompts and tools for this? (I'm already planning/documenting/subagenting/etc., so I'm really looking for hands-on proper practice and the right tools.)
Note: languages vary, anything from C#, Java, and JS to different web frameworks.
Thanks!
I’ve been using a few MCPs in my setup lately, mainly Context 7, Supabase, and Playwright.
I'm just curious what others here are finding useful. Which MCPs have actually become part of your daily workflow with Cursor? I don't want to miss out on any good ones others are using.
Also, is there anything that you feel is still missing as in an MCP you wish existed for a repetitive or annoying task?
One of the biggest gaps in most AI coding setups today is persistent memory. By default, session history gets reset, which kills continuity and prevents Cursor from adapting to your project or codebase over time. That means you end up re-explaining the same context and instructions, which hurts productivity.
I’ve been experimenting with different MCP-compatible memory layers to extend Cursor agents. Here are some standouts and their best-fit use cases:
1. File-based memory (claude.md, Cursor configs)
- Best for personalization and lightweight assistants. Simple, transparent, but doesn’t scale.
- MCP compatibility: Not built-in. Needs custom connectors to be useful in agent systems.
2. Vector DBs (Pinecone, Weaviate, Chroma, FAISS, pgvector, Milvus)
- Best for large-scale semantic search across docs, logs, or knowledge bases.
- MCP compatibility: No native MCP, requires wrappers.
3. Byterover
- Best for team collaboration, with a Git-like system for AI memories. Supports episodic and semantic memory, plus agent tools and workflows that help agents build and use context effectively in tasks like debugging, planning, and code generation.
- MCP compatibility: Natively designed for MCP servers and works smoothly with Cursor across IDEs and CLIs.
4. Zep
- Best for production-grade assistants on large, evolving codebases. Hybrid search and summarization keep memory consistent.
- MCP compatibility: Partial. Some connectors exist, but setup is not always straightforward.
5. Letta
- Best for structured, policy-driven long-term memory. Useful in projects that evolve frequently and need strict update rules.
- MCP compatibility: Limited. Requires integration work for MCP.
6. Mem0
- Best for experimentation and custom pipelines. Backend-agnostic, good for testing retrieval and storage strategies.
- MCP compatibility: Not native, but some community connectors exist.
7. Serena
- Best for personal or small projects where polished UX and easy setup matter more than depth.
- MCP compatibility: No out-of-the-box MCP support.
8. LangChain Memories
- Best for quick prototyping of conversational memory. Easy to use but limited for long-term use.
- MCP compatibility: Some LangChain components can be wrapped, but not MCP-native.
9. LlamaIndex Memory Modules
- Best for pluggable and flexible memory experiments on top of retrieval engines.
- MCP compatibility: Similar to LangChain, integration requires wrappers.
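For context, option 1 above is trivial to roll yourself, which is also why it doesn't scale. Here's a minimal, hypothetical sketch (the file name and bullet format are made up) of file-based memory that appends notes and reloads them as a prompt preamble:

```python
from pathlib import Path

# Hypothetical memory file; any markdown file the agent reads on startup works.
MEMORY_FILE = Path("claude.md")

def remember(note: str) -> None:
    """Append a bullet to the memory file."""
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")

def recall() -> str:
    """Load the whole memory file, ready to prepend to a prompt."""
    return MEMORY_FILE.read_text(encoding="utf-8") if MEMORY_FILE.exists() else ""

remember("API routes live in src/routes; use the service layer for DB access")
print(recall())
```

The scaling problem is visible right in `recall()`: the entire file is re-sent with every session, so token cost grows linearly with everything you've ever remembered, which is exactly the gap the vector-DB and dedicated-memory options try to close.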
Curious what everyone else is using. Are there any memory frameworks you’ve had good luck with, especially for MCP setups? Any hidden gems I should try? (with specific use cases)
After extensive testing, I’ve found that Claude Code (CC) significantly outperforms other AI coding tools, including Windsurf, Cursor, Replit and Serena, despite some claims that Serena is on par with CC.
I recently tested Serena—an MCP platform marketed as being on par with Claude Code while costing 10x less—but the results were disappointing. With each prompt, Serena introduced numerous errors, requiring 1–2 hours of manual debugging just to get an 80% complete result. In contrast, Claude Code delivered 100% accurate output across three significant UI components in just 6 minutes, with only 60 seconds of prompting and no further intervention.
Yes, CC is more expensive in terms of API usage—one task alone cost me $3.92—but the results were flawless. Not a single syntax, logic, or design issue. The time saved and the hands-off experience more than justified the cost in my case.
Some users have argued that Claude Code doesn’t do anything particularly special. I disagree. After testing various tools like Serena and Windsurf, it’s clear that CC consistently delivers superior quality and reliability.
Given Serena's use of Claude Desktop (avoiding per-token API costs), my aim is to explore how we might replicate Claude Code’s capabilities within a Serena-style (MCP) model. As a community, can we analyze what makes Claude Code so effective and find a way to build something comparable—without the API expense?
My goal with this post is to work together as a community to methodically uncover what makes Claude Code so remarkably effective—so we can replicate its performance within Claude Desktop at a fraction of the cost.
Analyzing Anon Kode, an open-source replica of Claude Code, might be a good place to start.
Hey all - I’ve read a lot about MCPs here. I still don’t quite get them.
The gist I understand is that they are basically little servers you can run to augment what the AI can do?
I heard Serena is a good one for understanding your whole codebase better but I thought cursor already did that? Can someone sort of explain the benefits of Serena for me?
And maybe recommend a few others to try?
Thanks in advance!
Hi all,
I've been using the serena MCP with claude code inside VSCode, but it breaks the IDE integration workflow that allows me to see a diff view of the code changes. Does anyone have a "compromise" setup that gets most of the benefits of Serena without losing the diff view? Would it work to just remove the editing tools like regex replace, or at that point does Serena become a waste? Thanks!
MCP is great for integrating with Claude/Cursor, but building production agents with it does not make sense to me: you don't have access to the server's prompts, you lack observability, and you can't debug.
Most of the work in building a reliable agent is (1) determining which tools you provide to the LLM and (2) deciding how to describe those tools and their interfaces. MCP gives you pre-built tools where you can change neither the interface nor the descriptions/prompts.
There is value in quick integration (or at least the promise of), but I don't see why it would be used when building an agent.
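To make that concrete, here's what you control when you define a tool yourself rather than taking a server's pre-built one. This is a hypothetical tool spec in the common JSON-schema function-calling style; the name, description, and parameters are all illustrative:

```python
# When you own the tool definition, the name, description, and parameter
# schema are all yours to iterate on; with a third-party MCP server they
# are fixed by the server author.
search_tool = {
    "name": "search_codebase",
    "description": (
        "Search the repository for a symbol. Prefer this over reading whole "
        "files. Returns at most 10 matches with file paths and line numbers."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Symbol or regex to find"},
            "max_results": {"type": "integer", "default": 10},
        },
        "required": ["query"],
    },
}

def describe(tool: dict) -> str:
    """Render the spec into the system prompt; this wording is what you tune."""
    return f"{tool['name']}: {tool['description']}"

print(describe(search_tool))
```

Tightening that description string ("prefer this over reading whole files", result limits, output shape) is exactly the iteration loop you give up when the tool definitions live behind someone else's server.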
Would love to hear the opposite opinion.
I am using Serena MCP, but I don't notice Claude Code working any better with it. In fact, any time it calls one of Serena's tools, CC slows to a crawl. I have my project indexed. Is it just me, or are MCPs just hype and not value adds?
Sequential Thinking MCP – Breaks down complex problems into manageable steps, enabling structured problem-solving. Ideal for system design planning, architectural decisions, and refactoring strategies.
Puppeteer MCP – Navigate websites, take screenshots, and interact with web pages. Makes a big difference in UI testing and automation.
Memory Bank MCP – A must-have for complex projects. Organizes project knowledge hierarchically, helping AI better understand your project’s structure and goals. This MCP automates the creation of a memory bank for your project.
Playwright MCP – Critical for cross-browser testing and advanced web automation. A modern, feature-rich alternative to Puppeteer.
GitHub MCP – Saves time by eliminating context switching between your environment and GitHub. Allows you to manage repositories, modify content, work with issues and pull requests, and more—all within your workflow.
Knowledge Graph Memory MCP – Crucial for maintaining project context across sessions. Prevents repetition and ensures the AI retains key project details.
DuckDuckGo MCP – Lightweight web search tool for accessing current documentation, error solutions, and up-to-date information without leaving your environment. Doesn’t require an API key—unlike many alternatives.
MCP Compass – Your guide through the growing MCP ecosystem. Helps you discover the right tools for specific tasks using simple natural language queries.
Check out detailed setup instructions, practical examples, and use cases for all these MCPs: https://enlightby.ai/projects/36
The tutorial also lets you configure MCPs natively in Cursor IDE by interacting directly with Cursor's environment.
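For reference, most MCP-aware clients register servers through a small JSON config. A minimal sketch for Cursor might look like the following; the file location (typically `.cursor/mcp.json` in your project) and the package names are from my setup and each server's README, so verify them before copying:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your token>" }
    },
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp@latest"]
    }
  }
}
```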
What are your must-have MCP servers?