I found this MCP tool recently: https://smithery.ai/server/@upstash/context7-mcp
It's Context7, a software documentation retrieval tool, and I combined it with chain-of-thought reasoning using https://smithery.ai/server/@smithery-ai/server-sequential-thinking
Here's the prompt I used; it was rather helpful in improving accuracy and the overall experience:
You are a large language model equipped with a functional extension: Model Context Protocol (MCP) servers. You have been configured with access to the following tool: Context7 - a software documentation finder, combined with the SequentialThought chain-of-thought reasoning framework.
Tool Descriptions:
- resolve-library-id: Required first step. Resolves a general package name into a Context7-compatible library ID. This must be called before using get-library-docs to retrieve valid documentation.
- get-library-docs: Fetches up-to-date documentation for a library. You must first call resolve-library-id to obtain the exact Context7-compatible library ID.
- sequentialthinking: Enables chain-of-thought reasoning to analyze and respond to user queries.
Your task:
You will extensively use these tools when users ask questions about how a software package works. Your responses should follow this structured approach:
Analyze the user’s request to identify the type of query. Queries may be:
Creative: e.g., proposing an idea using a package and how it would work.
Technical: e.g., asking about a specific part of the documentation.
Error debugging: e.g., encountering an error and searching for a fix in the documentation.
Use SequentialThought to determine the query type.
For each query type, follow these steps:
Generate your own idea or response based on the request.
Find relevant documentation using Context7 to support your response and reference it.
Reflect on the documentation and your response to ensure quality and correctness.
RESULTS:
I asked for a LangChain prompt chain system using MCP servers, and it gave me a very accurate response with examples straight from the docs!
In one of my previous posts here, somebody asked how Context7 really works. It made me realize a lot of us use it as a black box, not knowing what happens under the hood.
I was curious too, so I dug in to put the pieces together.
Here's a summary of how the Context7 MCP works:
- Understand that MCPs just expose tool descriptions (function calling)
- Those tool descriptions influence how Claude Code calls Context7
- Claude Code sends a best-guess keyword of the library name to the Context7 MCP's resolve-library-id tool
- Context7 returns a list of possible library matches
- Claude Code makes a best-guess selection of the library based on some criteria and sends a keyword for the topic you're trying to get docs on to the Context7 MCP's get-library-docs tool
- Context7 returns a list of possible code snippets/docs about said topic
- Claude Code calls the Context7 MCP's two tools as many times as necessary to achieve the intended goal
- Claude Code synthesizes the output from the get-library-docs tool, picking out what it needs (see the sketch below)
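To make that flow concrete, here's a minimal sketch of the same two calls made directly with the TypeScript MCP client SDK. This is my own illustration, not how Claude Code is implemented: I'm assuming the Context7 server can be launched locally via npx, and the tool and parameter names are taken from the debug output shown below.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Assumption: the Context7 MCP server is started locally with npx.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@upstash/context7-mcp"],
});

const client = new Client({ name: "context7-demo", version: "1.0.0" });
await client.connect(transport);

// Step 1: resolve a human-readable library name into a Context7 library ID.
const matches = await client.callTool({
  name: "resolve-library-id",
  arguments: { libraryName: "Cloudflare Durable Objects" },
});
console.log(matches); // candidate libraries with IDs, trust scores, snippet counts

// Step 2: fetch docs for the chosen ID, scoped to a topic keyword.
const docs = await client.callTool({
  name: "get-library-docs",
  arguments: {
    context7CompatibleLibraryID:
      "/llmstxt/developers_cloudflare-durable-objects-llms-full.txt",
    topic: "database integration patterns",
    tokens: 5000,
  },
});
console.log(docs); // code snippets and doc excerpts for the model to synthesize
```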
It's easy to see this all playing out if you start Claude Code with claude --debug --verbose.
Based on a prompt such as Show me how I could use "Cloudflare Durable Objects" with "Cloudflare D1 (llmstxt)" together. Use context7. Here's what a call to resolve-library-id looks like. Notice how the output comes with additional instructions.
context7 - resolve-library-id (MCP)(libraryName: "Cloudflare Durable Objects")
⎿ Available Libraries (top matches):
Each result includes:
- Library ID: Context7-compatible identifier (format: /org/project)
- Name: Library or package name
- Description: Short summary
- Code Snippets: Number of available code examples
- Trust Score: Authority indicator
- Versions: List of versions if available. Use one of those versions if and only if the user explicitly provides a version in their query.
For best results, select libraries based on name match, trust score, snippet coverage, and relevance to your use case.
----------
- Title: Cloudflare Durable Objects
- Context7-compatible library ID: /llmstxt/developers_cloudflare-durable-objects-llms-full.txt
- Description: Durable Objects provide a building block for stateful applications and distributed systems, enabling AI agents, collaborative applications, and real-time
interactions without managing infrastructure.
- Code Snippets: 3906
- Trust Score: 8
----------
- Title: y-durableobjects
- Context7-compatible library ID: /napolab/y-durableobjects
- Description: Facilitates real-time collaboration in Cloudflare Workers using Yjs and Durable Objects for scalable, decentralized editing features.
- Code Snippets: 27
- Trust Score: 8.4
----------
- Title: Sandbox SDK
- Context7-compatible library ID: /cloudflare/sandbox-sdk
- Description: Run isolated code environments on Cloudflare's edge network using Durable Objects and Cloudflare Containers, enabling command execution, file management, and
service exposure via public URLs.
- Code Snippets: 12
- Trust Score: 9.3
----------
...
This is what the get-library-docs tool call looks like:
context7 - get-library-docs (MCP)(context7CompatibleLibraryID: "/llmstxt/developers_cloudflare-durable-objects-llms-full.txt", topic: "database integration patterns", tokens:
5000)
⎿ ========================
CODE SNIPPETS
========================
TITLE: D1 Compatible PRAGMA Statements
DESCRIPTION: Documentation for PRAGMA statements compatible with Cloudflare D1, which is based on SQLite. This covers various aspects of database introspection, integrity
checking, and configuration.
SOURCE: https://developers.cloudflare.com/d1/sql-api/sql-statements/
LANGUAGE: APIDOC
CODE:
```
PRAGMA Statements:
These PRAGMA statements are compatible with Cloudflare D1, providing insights and control over the database.
Table Information:
- PRAGMA table_list: Lists all tables in the database.
- PRAGMA table_info("TABLE_NAME"): Describes the schema of a specific table.
- PRAGMA table_xinfo("TABLE_NAME"): Provides extended information about table columns.
Index Information:
- PRAGMA index_list("TABLE_NAME"): Lists all indexes for a specific table.
- PRAGMA index_info(INDEX_NAME): Describes a specific index.
- PRAGMA index_xinfo("INDEX_NAME"): Provides extended information about index columns.
Integrity and Constraints:
- PRAGMA quick_check: Performs a quick integrity check on the database.
- PRAGMA foreign_key_check: Checks for violations of foreign key constraints.
- PRAGMA foreign_key_list("TABLE_NAME"): Lists foreign key constraints for a table.
- PRAGMA case_sensitive_like = (on|off): Controls case sensitivity for LIKE operations.
- PRAGMA ignore_check_constraints = (on|off): Ignores CHECK constraints during operations.
- PRAGMA foreign_keys = (on|off): Enables or disables foreign key enforcement.
- PRAGMA defer_foreign_keys = (on|off): Controls deferred foreign key constraint checking.
Other:
- PRAGMA legacy_alter_table = (on|off): Enables or disables legacy ALTER TABLE syntax.
- PRAGMA recursive_triggers = (on|off): Controls recursive execution of triggers.
- PRAGMA reverse_unordered_selects = (on|off): Affects the order of results for unordered SELECTs.
- PRAGMA optimize: Optimizes the database schema (may not be fully supported or have different behavior).
Querying sqlite_master:
- SELECT name FROM sqlite_master WHERE type='table';: Example query to list all tables.
Search with LIKE:
- LIKE operator can be used for pattern matching in WHERE clauses.
```
----------------------------------------
TITLE: Rust Worker D1 Database Integration Example
DESCRIPTION: Example of a Rust worker handling a GET request to fetch data from a D1 database. It demonstrates preparing a SQL statement, binding parameters, executing the
query, and returning the result as JSON.
SOURCE: https://github.com/cloudflare/workers-rs#_snippet_75
LANGUAGE: rust
CODE:
```
use worker::*;
use serde::Deserialize;

#[derive(Deserialize)]
struct Thing {
    thing_id: String,
    desc: String,
    num: u32,
}

#[event(fetch, respond_with_errors)]
pub async fn main(request: Request, env: Env, _ctx: Context) -> Result<Response> {
    Router::new()
        .get_async("/:id", |_, ctx| async move {
            let id = ctx.param("id").unwrap()?;
            let d1 = ctx.env.d1("things-db")?;
            let statement = d1.prepare("SELECT * FROM things WHERE thing_id = ?1");
            let query = statement.bind(&[id])?;
            let result = query.first::<Thing>(None).await?;
            match result {
                Some(thing) => Response::from_json(&thing),
                None => Response::error("Not found", 404),
            }
        })
        .run(request, env)
        .await
}
```
----------------------------------------
...
How to see the Context7 MCP's tool description
Claude Code actually makes it really easy to see the tool descriptions for all of your enabled MCPs. Just issue the /mcp slash command, select Context7 and keep drilling down until you get to the tool you're interested in. Eventually you'll see the tool description.
Super important: to reiterate, the descriptions and parameters of these tools are what influence when and how Claude Code calls them.
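For reference, here's roughly the shape of what a server advertises per tool, using the field names from the MCP spec (name, description, inputSchema). The description text below is paraphrased from the prompt section above, not Context7's exact wording:

```typescript
// Approximate shape of an MCP tool description (what the /mcp drill-down shows).
interface ToolDescription {
  name: string;        // e.g. "resolve-library-id"
  description: string; // natural-language guidance the model reads when deciding to call it
  inputSchema: {       // JSON Schema for the tool's parameters
    type: "object";
    properties: Record<string, { type: string; description?: string }>;
    required?: string[];
  };
}

const resolveLibraryId: ToolDescription = {
  name: "resolve-library-id",
  description:
    "Resolves a general package name into a Context7-compatible library ID. " +
    "Must be called before get-library-docs.",
  inputSchema: {
    type: "object",
    properties: {
      libraryName: { type: "string", description: "Library name to search for" },
    },
    required: ["libraryName"],
  },
};
```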
Saving on Tokens and Latency
Each call to resolve-library-id can return about 7,000 tokens, and every call to get-library-docs can return between 4,000 and 10,000 tokens. If you already know exactly which Context7 library ID you want to query, you can save a decent number of tokens, and as a big plus, there's less latency.
To do that, go to context7.com, search for your library, make sure it's the one you need (sometimes there are similar ones), and copy the link to the detail page.
The URL looks like https://context7.com/llmstxt/developers_cloudflare_com-d1-llms-full.txt
If you remove the domain, you get the library ID: /llmstxt/developers_cloudflare_com-d1-llms-full.txt.
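In code, deriving the ID is just taking the URL path (a trivial sketch using the URL from above):

```typescript
// The library ID is the path portion of the context7.com detail-page URL.
const detailPageUrl =
  "https://context7.com/llmstxt/developers_cloudflare_com-d1-llms-full.txt";
const libraryId = new URL(detailPageUrl).pathname;
console.log(libraryId); // "/llmstxt/developers_cloudflare_com-d1-llms-full.txt"
```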
Now you can use that library ID in your prompt.
Here's how that could look:
Show me how I could use "Cloudflare Durable Objects" (use library id /llmstxt/developers_cloudflare-durable-objects-llms-full.txt) with "Cloudflare D1 (llmstxt)" (use library id /llmstxt/developers_cloudflare_com-d1-llms-full.txt) together. Use context7.
Now it completely skips the two calls to resolve-library-id, which by the estimates above saves roughly 14,000 tokens.
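If you're calling Context7 directly rather than through Claude Code, the same shortcut is a single get-library-docs call with the known ID; here's a sketch reusing the connected client from the earlier snippet (the topic string is just an example of mine):

```typescript
// Known library ID pasted from context7.com, so resolve-library-id is skipped.
// `client` is the connected MCP client from the earlier sketch.
const d1Docs = await client.callTool({
  name: "get-library-docs",
  arguments: {
    context7CompatibleLibraryID: "/llmstxt/developers_cloudflare_com-d1-llms-full.txt",
    topic: "using D1 from Durable Objects",
    tokens: 5000,
  },
});
```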
----
Hopefully this deep dive helps you to see how all of the pieces fit together.
——
UPDATE: I really enjoy writing these posts and appreciate every upvote and comment. Thank you!
Trying very hard to grow my very small YouTube channel. If you'd like to support me, please subscribe here https://www.youtube.com/@jorgecolonconsulting.
Got several Claude Code ideas to talk about for future videos inspired by the comments here.
——
TL;DR: Used Claude with local MCP tools to read and modify Word documents directly. It’s like having a coding assistant that can actually touch your files. What I did:
1. Asked Claude to analyze a job requirements document - It used a 3-step semantic search process:
• READ: Extracted all paragraphs from my .docx file
• EMBED: Made the content searchable (though we hit some method issues here)
• SEARCH: Found specific info about experience requirements
2. Got detailed answers - Claude found that the job required:
• 17 years of IT experience overall
• 8 years in semantic technologies
• 8 years in technical standards (OWL, RDF, etc.)
• Proven AI/ML experience
3. Modified the document in real-time - Then I asked Claude to update specific paragraphs, and it actually changed the Word document on my machine:
• Updated paragraph 14 to “Test MCP agent”
• Updated paragraph 15 to “salut maman” (lol)
Why this is crazy:
• Claude isn’t just reading or generating text anymore
• It’s actually executing commands on my local system
• Reading real files, modifying real documents
• All through natural conversation
The technical side: Claude used MCP commands like:
• mcp.fs.read_docx_paragraphs to extract content
• mcp.fs.update_docx_paragraphs to modify specific paragraphs
It even figured out the correct parameter formats through trial and error when I gave it the wrong method name initially. This feels like the future. We’re moving from “AI that talks” to “AI that does”. Having an assistant that can read your documents, understand them, AND modify them based on conversation is wild. Anyone else experimenting with MCP? What local tools are you connecting to Claude?
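For anyone curious what a tool like mcp.fs.read_docx_paragraphs might look like on the server side, here's a rough sketch of a minimal MCP server exposing a similar read-paragraphs tool. This is not the implementation from the post: the tool name, the use of the mammoth package for text extraction, and the paragraph splitting are all assumptions on my part.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import mammoth from "mammoth";

// Hypothetical stand-in for the post's mcp.fs.read_docx_paragraphs tool.
const server = new McpServer({ name: "docx-tools", version: "0.1.0" });

server.tool(
  "read_docx_paragraphs",
  "Extract the paragraphs of a local .docx file as a numbered list",
  { path: z.string().describe("Absolute path to the .docx file") },
  async ({ path }) => {
    // mammoth.extractRawText returns the document text with paragraphs
    // separated by blank lines.
    const { value } = await mammoth.extractRawText({ path });
    const paragraphs = value.split("\n\n").filter((p) => p.trim().length > 0);
    const numbered = paragraphs.map((p, i) => `${i + 1}. ${p}`).join("\n");
    return { content: [{ type: "text", text: numbered }] };
  }
);

await server.connect(new StdioServerTransport());
```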