In one of my previous posts here, somebody asked how Context7 really works. It made me realize a lot of us use it as a black box, not knowing what happens under the hood.
I was curious too, so I dug in to put the pieces together.
Here's a summary of how the Context7 MCP works:
1. Understand that MCPs just expose tool descriptions (function calling).
2. Those tool descriptions influence how Claude Code calls Context7.
3. Claude Code sends a best-guess keyword for the library name to the Context7 MCP's resolve-library-id tool.
4. Context7 returns a list of possible library matches.
5. Claude Code makes a best-guess selection of the library based on some criteria and sends a keyword for the topic you're trying to get docs on to the Context7 MCP's get-library-docs tool.
6. Context7 returns a list of possible code snippets/docs about said topic.
7. Claude Code calls the Context7 MCP's two tools as many times as necessary to achieve the intended goal.
8. Claude Code synthesizes the output from get-library-docs, picking out what it needs.
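The two-tool loop above can be sketched in a few lines of Python. The tool and parameter names (resolve-library-id, get-library-docs, libraryName, topic) match the debug output quoted later in this post; the fake responses and the selection heuristic are stand-ins, not Context7's real behavior.

```python
# Sketch of the resolve -> select -> fetch-docs loop. In reality
# Claude Code invokes these as MCP tools over stdio/HTTP; the
# payload shapes here mimic Context7's match list.

def resolve_library_id(library_name: str) -> list[dict]:
    # Fake response shaped like Context7's "Available Libraries" list.
    return [
        {"id": "/llmstxt/developers_cloudflare-durable-objects-llms-full.txt",
         "name": "Cloudflare Durable Objects", "snippets": 3906, "trust": 8.0},
        {"id": "/napolab/y-durableobjects",
         "name": "y-durableobjects", "snippets": 27, "trust": 8.4},
    ]

def pick_best(matches: list[dict], query: str) -> dict:
    # Claude Code weighs name match, snippet coverage, and trust score;
    # this key function is a crude stand-in for that selection.
    return max(matches, key=lambda m: (query.lower() in m["name"].lower(),
                                       m["snippets"], m["trust"]))

def get_library_docs(library_id: str, topic: str, tokens: int = 5000) -> str:
    # The real tool returns ranked doc snippets; we fake a payload.
    return f"<~{tokens} tokens of docs from {library_id} on '{topic}'>"

matches = resolve_library_id("Cloudflare Durable Objects")
best = pick_best(matches, "Cloudflare Durable Objects")
docs = get_library_docs(best["id"], "database integration patterns")
```

Steps 7 and 8 are just this loop repeated and the resulting snippets filtered down to what the task actually needs.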
It's easy to see this all playing out if you start Claude Code with claude --debug --verbose.
Based on a prompt such as Show me how I could use "Cloudflare Durable Objects" with "Cloudflare D1 (llmstxt)" together. Use context7. Here's what a call to resolve-library-id looks like. Notice how the output comes with additional instructions.
context7 - resolve-library-id (MCP)(libraryName: "Cloudflare Durable Objects")
⎿ Available Libraries (top matches):
Each result includes:
- Library ID: Context7-compatible identifier (format: /org/project)
- Name: Library or package name
- Description: Short summary
- Code Snippets: Number of available code examples
- Trust Score: Authority indicator
- Versions: List of versions if available. Use one of those versions if and only if the user explicitly provides a version in their query.
For best results, select libraries based on name match, trust score, snippet coverage, and relevance to your use case.
----------
- Title: Cloudflare Durable Objects
- Context7-compatible library ID: /llmstxt/developers_cloudflare-durable-objects-llms-full.txt
- Description: Durable Objects provide a building block for stateful applications and distributed systems, enabling AI agents, collaborative applications, and real-time
interactions without managing infrastructure.
- Code Snippets: 3906
- Trust Score: 8
----------
- Title: y-durableobjects
- Context7-compatible library ID: /napolab/y-durableobjects
- Description: Facilitates real-time collaboration in Cloudflare Workers using Yjs and Durable Objects for scalable, decentralized editing features.
- Code Snippets: 27
- Trust Score: 8.4
----------
- Title: Sandbox SDK
- Context7-compatible library ID: /cloudflare/sandbox-sdk
- Description: Run isolated code environments on Cloudflare's edge network using Durable Objects and Cloudflare Containers, enabling command execution, file management, and
service exposure via public URLs.
- Code Snippets: 12
- Trust Score: 9.3
----------
...This is what the get-library-docs tool call looks like:
context7 - get-library-docs (MCP)(context7CompatibleLibraryID: "/llmstxt/developers_cloudflare-durable-objects-llms-full.txt", topic: "database integration patterns", tokens:
5000)
⎿ ========================
CODE SNIPPETS
========================
TITLE: D1 Compatible PRAGMA Statements
DESCRIPTION: Documentation for PRAGMA statements compatible with Cloudflare D1, which is based on SQLite. This covers various aspects of database introspection, integrity
checking, and configuration.
SOURCE: https://developers.cloudflare.com/d1/sql-api/sql-statements/
LANGUAGE: APIDOC
CODE:
```
PRAGMA Statements:
These PRAGMA statements are compatible with Cloudflare D1, providing insights and control over the database.
Table Information:
- PRAGMA table_list: Lists all tables in the database.
- PRAGMA table_info("TABLE_NAME"): Describes the schema of a specific table.
- PRAGMA table_xinfo("TABLE_NAME"): Provides extended information about table columns.
Index Information:
- PRAGMA index_list("TABLE_NAME"): Lists all indexes for a specific table.
- PRAGMA index_info(INDEX_NAME): Describes a specific index.
- PRAGMA index_xinfo("INDEX_NAME"): Provides extended information about index columns.
Integrity and Constraints:
- PRAGMA quick_check: Performs a quick integrity check on the database.
- PRAGMA foreign_key_check: Checks for violations of foreign key constraints.
- PRAGMA foreign_key_list("TABLE_NAME"): Lists foreign key constraints for a table.
- PRAGMA case_sensitive_like = (on|off): Controls case sensitivity for LIKE operations.
- PRAGMA ignore_check_constraints = (on|off): Ignores CHECK constraints during operations.
- PRAGMA foreign_keys = (on|off): Enables or disables foreign key enforcement.
- PRAGMA defer_foreign_keys = (on|off): Controls deferred foreign key constraint checking.
Other:
- PRAGMA legacy_alter_table = (on|off): Enables or disables legacy ALTER TABLE syntax.
- PRAGMA recursive_triggers = (on|off): Controls recursive execution of triggers.
- PRAGMA reverse_unordered_selects = (on|off): Affects the order of results for unordered SELECTs.
- PRAGMA optimize: Optimizes the database schema (may not be fully supported or have different behavior).
Querying sqlite_master:
- SELECT name FROM sqlite_master WHERE type='table';: Example query to list all tables.
Search with LIKE:
- LIKE operator can be used for pattern matching in WHERE clauses.
```
----------------------------------------
TITLE: Rust Worker D1 Database Integration Example
DESCRIPTION: Example of a Rust worker handling a GET request to fetch data from a D1 database. It demonstrates preparing a SQL statement, binding parameters, executing the
query, and returning the result as JSON.
SOURCE: https://github.com/cloudflare/workers-rs#_snippet_75
LANGUAGE: rust
CODE:
```
use worker::*;
use serde::Deserialize;

#[derive(Deserialize)]
struct Thing {
    thing_id: String,
    desc: String,
    num: u32,
}

#[event(fetch, respond_with_errors)]
pub async fn main(request: Request, env: Env, _ctx: Context) -> Result<Response> {
    Router::new()
        .get_async("/:id", |_, ctx| async move {
            let id = ctx.param("id").unwrap()?;
            let d1 = ctx.env.d1("things-db")?;
            let statement = d1.prepare("SELECT * FROM things WHERE thing_id = ?1");
            let query = statement.bind(&[id])?;
            let result = query.first::<Thing>(None).await?;
            match result {
                Some(thing) => Response::from_json(&thing),
                None => Response::error("Not found", 404),
            }
        })
        .run(request, env)
        .await
}
```
----------------------------------------
...How to see the Context7 MCP's tool description
Claude Code actually makes it really easy to see the tool descriptions for all of your enabled MCPs. Just issue the /mcp slash command, select Context7 and keep drilling down until you get to the tool you're interested in. Eventually you'll see the tool description.
Super important: to reiterate, the descriptions and parameters in these tools are what influence when and how Claude Code calls them.
Saving on Tokens and Latency
Each call to resolve-library-id can return about 7,000 tokens, and every call to get-library-docs can run between 4,000 and 10,000 tokens. If you already know exactly which Context7 library ID you want to query, you can save a decent amount of tokens, and as a big plus, there's less latency.
To do that, go to context7.com, search for your library, make sure it's the one you need (sometimes there are similar ones), and copy the link to its detail page.
The URL looks like https://context7.com/llmstxt/developers_cloudflare_com-d1-llms-full.txt
If you remove the domain, you get the library ID: /llmstxt/developers_cloudflare_com-d1-llms-full.txt.
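The ID really is just the path portion of the detail-page URL, so extracting it is a one-liner:

```python
# The Context7 library ID is the path component of the
# context7.com detail-page URL.
from urllib.parse import urlparse

def library_id_from_url(url: str) -> str:
    return urlparse(url).path

lib_id = library_id_from_url(
    "https://context7.com/llmstxt/developers_cloudflare_com-d1-llms-full.txt"
)
# → "/llmstxt/developers_cloudflare_com-d1-llms-full.txt"
```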
Now you can use that library ID in your prompt.
Here's how that could look:
Show me how I could use "Cloudflare Durable Objects" (use library id /llmstxt/developers_cloudflare-durable-objects-llms-full.txt) with "Cloudflare D1 (llmstxt)" (use library id /llmstxt/developers_cloudflare_com-d1-llms-full.txt) together. Use context7.
Now Claude Code skips two calls to resolve-library-id entirely, which at roughly 7,000 tokens per call saves about 14,000 tokens before the real work even starts.
----
Hopefully this deep dive helps you see how all of the pieces fit together.
——
UPDATE: I really enjoy writing these posts and appreciate every upvote and comment. Thank you!
Trying very hard to grow my very small YouTube channel. If you'd like to support me, please subscribe here https://www.youtube.com/@jorgecolonconsulting.
Got several Claude Code ideas to talk about for future videos inspired by the comments here.
I found this MCP tool recently: https://smithery.ai/server/@upstash/context7-mcp
Context7 is a software documentation retrieval tool, and I combined it with chain-of-thought reasoning using https://smithery.ai/server/@smithery-ai/server-sequential-thinking
Here's the prompt I used; it was rather helpful in improving accuracy and the overall experience:
You are a large language model equipped with a functional extension: Model Context Protocol (MCP) servers. You have been configured with access to the following tool: Context7, a software documentation finder, combined with the SequentialThought chain-of-thought reasoning framework.
Tool Descriptions:
- resolve-library-id: Required first step. Resolves a general package name into a Context7-compatible library ID. This must be called before using get-library-docs to retrieve valid documentation.
- get-library-docs: Fetches up-to-date documentation for a library. You must first call resolve-library-id to obtain the exact Context7-compatible library ID.
- sequentialthinking: Enables chain-of-thought reasoning to analyze and respond to user queries.
Your task:
You will extensively use these tools when users ask questions about how a software package works. Your responses should follow this structured approach:
Analyze the user’s request to identify the type of query. Queries may be:
Creative: e.g., proposing an idea using a package and how it would work.
Technical: e.g., asking about a specific part of the documentation.
Error debugging: e.g., encountering an error and searching for a fix in the documentation.
Use SequentialThought to determine the query type.
For each query type, follow these steps:
Generate your own idea or response based on the request.
Find relevant documentation using Context7 to support your response and reference it.
Reflect on the documentation and your response to ensure quality and correctness.
RESULTS:
I asked for a LangChain prompt chain system using MCP servers, and it gave me a very accurate response with examples straight from the docs!
How do you properly configure this with Claude Code? And more importantly, how does it get used, or how do you make Claude Code use it?
Genuine question: What's driving all the excitement around Context7?
From what I can tell, it's an MCP server that fetches documentation and dumps it into your LLM's context. The pitch is that it solves "outdated training data" problems.
But here's what I don't get:
For 90% of use cases, Claude Sonnet already knows the docs cold. React? TypeScript? Next.js? Tailwind? The model was trained on these. It doesn't need the entire React docs re-explained to it. That's just burning tokens.
For the 10% where you actually need current docs (brand new releases, niche packages, internal tools), wouldn't a targeted web_fetch or curl be better? You get exactly the page you need, not a massive documentation dump. It's more precise, uses fewer tokens, and you control what goes into context.
I see people installing Context7 and then asking it about React hooks or Express middleware. Things that are absolutely baked into the model's training. It feels like installing a GPS to explain directions to a cab driver.
Am I completely off base here? What am I missing about why this is everywhere suddenly?
Edit: Did some digging into how Context7 actually works.
It's more sophisticated than I initially thought, but it still doesn't solve the core problem:
How it works:
- Context7 doesn't do live web fetches. It queries their proprietary backend API, which serves pre-crawled documentation.
- They crawl 33k+ libraries on a 10-15 day rolling schedule, pre-process everything, and cache it.
- When you query, you get 5,000-10,000 tokens of ranked documentation snippets.
- The ranking system prioritizes: code examples > prose, API signatures > descriptions.
- You can filter by topic (e.g., "routing", "authentication").
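The ranking priorities described above could be sketched as a simple scoring function. To be clear, the weights, snippet fields, and topical boost here are my own illustrative guesses, not Context7's actual implementation:

```python
# Illustrative ranking sketch: code examples outrank API
# signatures, which outrank descriptions and prose; snippets
# mentioning the requested topic get a boost. All weights are
# made up for illustration.
KIND_WEIGHT = {
    "code_example": 3.0,
    "api_signature": 2.0,
    "description": 1.0,
    "prose": 0.5,
}

def rank_snippets(snippets: list[dict], topic: str) -> list[dict]:
    def score(s: dict) -> float:
        kind = KIND_WEIGHT.get(s["kind"], 0.5)
        topical = 1.0 if topic.lower() in s["text"].lower() else 0.0
        return kind + topical
    return sorted(snippets, key=score, reverse=True)

snippets = [
    {"kind": "prose", "text": "Durable Objects overview"},
    {"kind": "code_example", "text": "D1 routing example"},
]
ranked = rank_snippets(snippets, "routing")
```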
You're getting documentation that Context7 crawled up to 15 days ago from their database. You could just web_fetch the actual docs yourself and get current information directly from the source, without:
- Depending on Context7's infrastructure and update schedule
- Burning 5-10k tokens on pre-selected chunks when the model already knows the library
- Rate limits from their API
For mature, well-documented frameworks like React, Next.js, or TypeScript that are baked into the training data, this is still redundant. For the 10% of cases where you need current docs (new releases, niche packages), a web_fetch on the specific page you need is more precise, more current, and uses fewer tokens.
TL;DR: Context7 is a documentation caching layer with smart ranking. But for libraries Claude already knows, it's overkill. For the cases where you actually need current docs, web_fetch is more direct.
...and use the Context7 Skill instead! 😁
"Agent Skills" is so awesome (should we have a new tag in this substack for "Skills"?)
Actually, I realized that most docs ship an llms.txt file now, so I just created an Agent Skill to look for relevant info in that file.
Another thing: Claude models are super smart. If the content of llms.txt is too long, the skill counts the lines and spawns multiple Explorer subagents in parallel to gather all the info.
If an llms.txt is not found, it falls back to reading Context7 links 🤘
Why prioritize llms.txt over Context7? Latest updates & official docs.
Why Skill over MCP? Speed & initial context optimization.
This skill (and others) are in this repo: https://github.com/mrgoonie/claudekit-skills
I’ve been using a few MCPs in my setup lately, mainly Context 7, Supabase, and Playwright.
I'm just curious to know what others here are finding useful. Which MCPs have actually become part of your daily workflow with Claude Code? I don't want to miss out on any good ones others are using.
Also, is there anything you feel is still missing, as in an MCP you wish existed for a repetitive or annoying task?
Checked their website, and it looks like a user-submitted, unmoderated mess of junk. Tried their MCP server, and it keeps erroring out with:
⎿ Documentation not found or not finalized for this library. This might have happened because you used an invalid Context7-compatible library ID. To get a valid Context7-compatible library ID, use the 'resolve-library-id' with the package name you wish to retrieve documentation for.
It does this after calling resolve-library-id and getting back a proper ID, and it happens on everything.
But I guess the bigger concern is: if you want high-quality docs for specific things, e.g. the OpenAI image API, in a single markdown doc for CC to reference, how do you do it? Thanks.
I'm looking to start expanding my Claude Code usage to integrate MCP servers.
What kind of MCPs are you practically using on a daily basis? I'm curious about new practical workflows, not things which are MCP'd for MCP's sake.
Please detail the benefits of your MCP-enabled workflow versus a non-MCP workflow. We don't need MCP name-drops.
TL;DR: Used Claude with local MCP tools to read and modify Word documents directly. It's like having a coding assistant that can actually touch your files.

What I did:
1. Asked Claude to analyze a job requirements document. It used a 3-step semantic search process:
   - READ: Extracted all paragraphs from my .docx file
   - EMBED: Made the content searchable (though we hit some method issues here)
   - SEARCH: Found specific info about experience requirements
2. Got detailed answers. Claude found that the job required:
   - 17 years of IT experience overall
   - 8 years in semantic technologies
   - 8 years in technical standards (OWL, RDF, etc.)
   - Proven AI/ML experience
3. Modified the document in real time. I asked Claude to update specific paragraphs, and it actually changed the Word document on my machine:
   - Updated paragraph 14 to "Test MCP agent"
   - Updated paragraph 15 to "salut maman" (lol)

Why this is crazy:
- Claude isn't just reading or generating text anymore
- It's actually executing commands on my local system
- Reading real files, modifying real documents
- All through natural conversation

The technical side: Claude used MCP commands like:
- mcp.fs.read_docx_paragraphs to extract content
- mcp.fs.update_docx_paragraphs to modify specific paragraphs

It even figured out the correct parameter formats through trial and error when I gave it the wrong method name initially. This feels like the future. We're moving from "AI that talks" to "AI that does". Having an assistant that can read your documents, understand them, AND modify them based on conversation is wild. Anyone else experimenting with MCP? What local tools are you connecting to Claude?
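For the curious, the read side of a tool like read_docx_paragraphs isn't magic: a .docx file is just a zip archive, with the body text in word/document.xml. A rough stdlib-only sketch of the idea (the MCP tool's actual implementation is unknown to me, and real-world OOXML has more structure than this regex handles):

```python
# Minimal sketch: pull paragraph text out of a .docx by unzipping
# it and scraping <w:t> runs from each <w:p> element in
# word/document.xml. Regex-on-XML is fine for a demo, not for
# production (use python-docx or an XML parser there).
import re
import zipfile

def read_docx_paragraphs(path: str) -> list[str]:
    with zipfile.ZipFile(path) as z:
        xml = z.read("word/document.xml").decode("utf-8")
    paragraphs = []
    for para in re.findall(r"<w:p[ >].*?</w:p>", xml, flags=re.S):
        # A paragraph's visible text is the concatenation of its runs.
        text = "".join(re.findall(r"<w:t[^>]*>(.*?)</w:t>", para, flags=re.S))
        paragraphs.append(text)
    return paragraphs
```

The update side would be the inverse: rewrite the matching <w:t> runs and repack the zip.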
Can someone(s) help me understand the best way to utilize the Context7 MCP? I have it setup in Claude Code and can call each tool individually (get library id, get docs from library id). However, where do these docs get stored? Do they remain JUST in memory and for how long? Is there any persistence across sessions?
I've seen some people talk about putting reference to Context7 in their CLAUDE.md file. If I do that, will it leverage Context7 for all prompts? Only those with a specific language mentioned? Only those where I say "use Context7"?
Basically, it's great when I take it step by step, but how do I make this more seamless and "integrated" into a workflow? Thoughts?
Hello all,
I am working with Context 7 using the desktop app, and I must say it helps a lot — the context of the answers is much more to the point.
Now, I would like to expand to more MCPs that can assist me with coding and performing deep research while coding, particularly in related open-source projects, documentation, and code examples.
I do not want them to change my files, only provide output — I will handle the implementation myself. So, experts, please:
Suggest more coding-related MCPs that help you.
Provide good prompt suggestions for combining MCP pipelines.
I have it installed along with filesystem MCP and Brave Research. When it rewrites my app's code or refactors it, does it always use the MCP? Or would it state it every time if it did use it?
Many mention the MCPs they use, but not how they use them.
In light of that, I thought I'd show how I use mine and in what scenarios.
Here's my main MCPs:
Serena MCP
Playwright
Sequential Thinking by Anthropic
Context7
Serena
I like using Serena MCP for large projects for two reasons. First, it uses language servers for popular languages, so finding references to symbols in a large project is very effective. Language servers are the same thing your IDE uses to show type information about symbols and their references.
Second, similar to running CC's /init, there's an onboarding process in Serena that gathers technical information about your project and its purpose, which helps give context about your project. Apparently Serena pulls this in automatically on every chat, but I tend to prompt CC with "read Serena's initial instructions" at the beginning of every chat or after running /clear. I guess you could say that falls under "context engineering". I like to think of it as "focused context = focused output".
I prompt it to use the find_referencing_symbols tool, referencing a specific file. This helps when you're doing refactors, needle-in-a-haystack searches, or surgical insertion of behavior. One really useful way I used it in a large legacy project for a client was: "look for all references to symbol_name where [some fuzzy condition]. Start at this file for reference \@filename (the \ is a Reddit quirk, DON'T INCLUDE) and prefer using the find_referencing_symbols tool over search_for_pattern". It did a great job on something that would've taken much more cognitive load and time to process.
There are several other Serena tools that seem interesting to me, but I haven't incorporated them into my workflow yet. In particular, the think tools.
Context7
A lot of people talk about using Context7, but this is how I specifically use it. I use it to get the latest documentation on a package, but mostly for things that aren't complex. Since it relies on embeddings and re-ranking, more nuanced context can sometimes be missed. For more complex things I might reference actual webpages, or even download markdown files to do agentic RAG locally with CC.
Playwright
I use Playwright when I'm working on web apps. Since it can take screenshots and see the DOM, it can give more multimodal context to CC. Useful for tricky frontend work. I've even used it for some marketing stuff, like scraping my bookmarks on X and finding information I want.
Sequential Thinking
The last one I use is Sequential Thinking by Anthropic. It helps with task adherence for tasks that have multiple, complex steps. Anytime I have a very complex multi-step task, I'll finish off the prompt with "use sequential thinking". It works by decomposing a multi-step task into discrete tasks and then ensuring each one was done.
------
UPDATE: This post blew up and I'm really appreciative of all of you. Thanks for the upvotes and taking the time to read. I try to provide as much value as I can.
Reached #2 post today that's crazy!
Trying very hard to grow my very small YouTube channel. If you'd like to support me, please subscribe here https://www.youtube.com/@jorgecolonconsulting.
My next video is on how I'm using subagents and some tips there.
UPDATE 2: Just released a new Tutoring for Vibe Coders service for those that value their time and want to understand how to cut through the rough parts of it. Already booked my first customer!
If that sounds interesting to you book a call with me.
I did some search in here and I now understand what Serena and Context7 do. I have also added these MCPs into my Claude Code.
However, I can't seem to figure out how to use these MCPs effectively. Since I have added these MCPs, will Claude be aware of them automatically and use them as and when needed, or do I need to instruct Claude in CLAUDE.md to use them?
This might sound like another Claude Code glaze, but I can't really get enough of it.
I had an idea of building an invoice management system, but the thing is I know zilch about frontend programming. I knew Claude could make me a functional solution, but I wanted it to stick to my dummy Figma design, setup Neon DB, and control versioning itself. So, I gave it this MCP server that can route requests to Figma, Neon, and GitHub. I really wanted to see if it could pull this off.
Usually, this would take me 2–3 weeks of setup (auth, DB, UI, email, PDFs… all the glue work). With Claude Code and MCPs, it actually came together in a matter of hours.
Here’s what was happening under the hood:
I ran everything through Claude Code and MCPs. So instead of juggling GitHub, Figma, Neon, etc. Claude just pulled in the right tools at runtime. Used Context7 and Rube MCP - a universal server, basically one MCP with every tool to talk to anything (GitHub, Figma, Linear, etc.). You get managed OAuth as well.
Just told CC to: “Build me an invoice management app with Next.js, Postgres (Neon), Prisma, Auth.js, PDF gen, email sending.” That was literally it.
By lunch, I had:
- Auth (magic links, session mgmt)
- DB spun up on Neon, fully wired with Prisma
- Clean Figma-inspired UI pulled straight from a design kit via MCP
- Working invoicing features with multiple templates + PDF export
For the entire day: $3.65 (~5.8M tokens pushed through Sonnet + Haiku). For less than a latte, I shipped something I could actually use.
I’m still handling the tricky bits (security, edge cases, backend optimisations), but the boilerplate grind is over. It feels like a different world than it was two years ago, a brave new world of code automation.
Here's the repo: https://github.com/rohittcodes/linea.
I contributed a blog post regarding the same, do check: Claude Code with MCPs is all you need
Also, as someone starting their career in tech, I was happy with the outcome but also felt uneasy in my gut. If it can do this so cheaply, a lot of us might need to rethink life choices in 2-3 years.
Would love your opinion on Claude Code, MCP, and the future of coding in general. Where do you see it evolving in the next few years?
I am just trying to get a sense of the tools or hacks I am missing; it's collectively good for everyone to assess too :-)
Just wondering which MCP servers you've integrated that you feel have dramatically changed your success. Also, what other methodologies do you work with to achieve good results? Conversely, what has been a disappointment that you've decided not to work with anymore?