I want to keep my mcp config.json in version control - so I don't want to keep API keys in there.
Is there a way that I can use a .env file or similar to keep the secrets out of the config?
Currently I'm using MCP SuperAssistant, and want to move to VSCode/Copilot, but I hope this issue is maybe more generic than the choice of tool.
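One common approach, assuming the move to VS Code happens: VS Code reads MCP servers from `.vscode/mcp.json`, which supports an `inputs` array for prompted secrets referenced via `${input:...}`, so the committed file never contains the key itself. A sketch (the server name, package, and `CONTEXT7_API_KEY` variable here are illustrative, not prescribed):

```json
{
  "inputs": [
    {
      "type": "promptString",
      "id": "context7-api-key",
      "description": "Context7 API key",
      "password": true
    }
  ],
  "servers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"],
      "env": {
        "CONTEXT7_API_KEY": "${input:context7-api-key}"
      }
    }
  }
}
```

VS Code prompts for the value on first use and stores it outside the workspace, so the JSON above is safe to commit. Other clients vary: some support an `env` map per server, in which case you can point values at variables you export from a local (gitignored) `.env` file before launching the client.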
From their GitHub repo:
❌ Without Context7
LLMs rely on outdated or generic information about the libraries you use. You get:
❌ Code examples are outdated and based on year-old training data
❌ Hallucinated APIs that don't even exist
❌ Generic answers for old package versions
✅ With Context7
Context7 MCP pulls up-to-date, version-specific documentation and code examples straight from the source — and places them directly into your prompt.
Context7 fetches up-to-date code examples and documentation right into your LLM's context.
1️⃣ Write your prompt naturally
2️⃣ Tell the LLM to use context7
3️⃣ Get working code answers
No tab-switching, no hallucinated APIs that don't exist, no outdated code generations.
I have tried it with VS Code + Cline as well as Windsurf, using GPT-4.1-mini as a base model and it works like a charm.
Context7 website
Github Repo
YT Tutorials on how to use with Cline or Windsurf:
Context7: The New MCP Server That Will CHANGE AI Coding (FREE)
This is Hands Down the BEST MCP Server for AI Coding Assistants
In one of my previous posts here, somebody asked how Context7 really works. It made me realize a lot of us use it as a black box, not knowing what happens under the hood.
I was curious too, so I dug in to put the pieces together.
Here's a summary of how the Context7 MCP works:
1. Understand that MCPs just expose tool descriptions (function calling).
2. Those tool descriptions influence how Claude Code calls Context7.
3. Claude Code sends a best-guess keyword of the library name to the Context7 MCP's resolve-library-id tool.
4. Context7 returns a list of possible library matches.
5. Claude Code makes a best-guess selection of a library based on some criteria and sends a keyword for the topic you want docs on to the Context7 MCP's get-library-docs tool.
6. Context7 returns a list of code snippets/docs about that topic.
7. Claude Code calls the two Context7 tools as many times as necessary to achieve the intended goal.
8. Claude Code synthesizes the output from get-library-docs, picking out what it needs.
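The flow above can be sketched in Python with stub functions standing in for the two MCP tools. The function names match the real tools, but the bodies return made-up placeholder data; the point is the shape of the resolve-then-fetch loop, not the real API:

```python
# Hypothetical sketch of the Context7 tool-call flow. The stubs below
# stand in for the real MCP tools; their return values are made up.

def resolve_library_id(library_name: str) -> list[dict]:
    """Stand-in for Context7's resolve-library-id tool: name -> candidate libraries."""
    return [
        {"id": "/llmstxt/developers_cloudflare-durable-objects-llms-full.txt",
         "title": "Cloudflare Durable Objects", "trust": 8, "snippets": 3906},
        {"id": "/napolab/y-durableobjects",
         "title": "y-durableobjects", "trust": 8.4, "snippets": 27},
    ]

def get_library_docs(library_id: str, topic: str, tokens: int = 5000) -> str:
    """Stand-in for Context7's get-library-docs tool: ID + topic -> doc snippets."""
    return f"<doc snippets for {library_id} on '{topic}'>"

# Step 1: the model sends a best-guess library name.
matches = resolve_library_id("Cloudflare Durable Objects")

# Step 2: it picks one match (here: the candidate with the most snippets).
best = max(matches, key=lambda m: m["snippets"])

# Step 3: it fetches docs on the topic it cares about, repeating as needed.
docs = get_library_docs(best["id"], "database integration patterns")
print(docs)
```

In the real system the "selection" in step 2 is done by the model itself, guided by the instructions Context7 embeds in the tool output (name match, trust score, snippet coverage).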
It's easy to see this all playing out if you start Claude Code with claude --debug --verbose.
Based on a prompt such as Show me how I could use "Cloudflare Durable Objects" with "Cloudflare D1 (llmstxt)" together. Use context7. Here's what a call to resolve-library-id looks like. Notice how the output comes with additional instructions.
context7 - resolve-library-id (MCP)(libraryName: "Cloudflare Durable Objects")
⎿ Available Libraries (top matches):
Each result includes:
- Library ID: Context7-compatible identifier (format: /org/project)
- Name: Library or package name
- Description: Short summary
- Code Snippets: Number of available code examples
- Trust Score: Authority indicator
- Versions: List of versions if available. Use one of those versions if and only if the user explicitly provides a version in their query.
For best results, select libraries based on name match, trust score, snippet coverage, and relevance to your use case.
----------
- Title: Cloudflare Durable Objects
- Context7-compatible library ID: /llmstxt/developers_cloudflare-durable-objects-llms-full.txt
- Description: Durable Objects provide a building block for stateful applications and distributed systems, enabling AI agents, collaborative applications, and real-time
interactions without managing infrastructure.
- Code Snippets: 3906
- Trust Score: 8
----------
- Title: y-durableobjects
- Context7-compatible library ID: /napolab/y-durableobjects
- Description: Facilitates real-time collaboration in Cloudflare Workers using Yjs and Durable Objects for scalable, decentralized editing features.
- Code Snippets: 27
- Trust Score: 8.4
----------
- Title: Sandbox SDK
- Context7-compatible library ID: /cloudflare/sandbox-sdk
- Description: Run isolated code environments on Cloudflare's edge network using Durable Objects and Cloudflare Containers, enabling command execution, file management, and
service exposure via public URLs.
- Code Snippets: 12
- Trust Score: 9.3
----------
...This is what the get-library-docs tool call looks like:
context7 - get-library-docs (MCP)(context7CompatibleLibraryID: "/llmstxt/developers_cloudflare-durable-objects-llms-full.txt", topic: "database integration patterns", tokens:
5000)
⎿ ========================
CODE SNIPPETS
========================
TITLE: D1 Compatible PRAGMA Statements
DESCRIPTION: Documentation for PRAGMA statements compatible with Cloudflare D1, which is based on SQLite. This covers various aspects of database introspection, integrity
checking, and configuration.
SOURCE: https://developers.cloudflare.com/d1/sql-api/sql-statements/
LANGUAGE: APIDOC
CODE:
```
PRAGMA Statements:
These PRAGMA statements are compatible with Cloudflare D1, providing insights and control over the database.
Table Information:
- PRAGMA table_list: Lists all tables in the database.
- PRAGMA table_info("TABLE_NAME"): Describes the schema of a specific table.
- PRAGMA table_xinfo("TABLE_NAME"): Provides extended information about table columns.
Index Information:
- PRAGMA index_list("TABLE_NAME"): Lists all indexes for a specific table.
- PRAGMA index_info(INDEX_NAME): Describes a specific index.
- PRAGMA index_xinfo("INDEX_NAME"): Provides extended information about index columns.
Integrity and Constraints:
- PRAGMA quick_check: Performs a quick integrity check on the database.
- PRAGMA foreign_key_check: Checks for violations of foreign key constraints.
- PRAGMA foreign_key_list("TABLE_NAME"): Lists foreign key constraints for a table.
- PRAGMA case_sensitive_like = (on|off): Controls case sensitivity for LIKE operations.
- PRAGMA ignore_check_constraints = (on|off): Ignores CHECK constraints during operations.
- PRAGMA foreign_keys = (on|off): Enables or disables foreign key enforcement.
- PRAGMA defer_foreign_keys = (on|off): Controls deferred foreign key constraint checking.
Other:
- PRAGMA legacy_alter_table = (on|off): Enables or disables legacy ALTER TABLE syntax.
- PRAGMA recursive_triggers = (on|off): Controls recursive execution of triggers.
- PRAGMA reverse_unordered_selects = (on|off): Affects the order of results for unordered SELECTs.
- PRAGMA optimize: Optimizes the database schema (may not be fully supported or have different behavior).
Querying sqlite_master:
- SELECT name FROM sqlite_master WHERE type='table';: Example query to list all tables.
Search with LIKE:
- LIKE operator can be used for pattern matching in WHERE clauses.
```
----------------------------------------
TITLE: Rust Worker D1 Database Integration Example
DESCRIPTION: Example of a Rust worker handling a GET request to fetch data from a D1 database. It demonstrates preparing a SQL statement, binding parameters, executing the
query, and returning the result as JSON.
SOURCE: https://github.com/cloudflare/workers-rs#_snippet_75
LANGUAGE: rust
CODE:
```
use worker::*;
use serde::Deserialize;

#[derive(Deserialize)]
struct Thing {
    thing_id: String,
    desc: String,
    num: u32,
}

#[event(fetch, respond_with_errors)]
pub async fn main(request: Request, env: Env, _ctx: Context) -> Result<Response> {
    Router::new()
        .get_async("/:id", |_, ctx| async move {
            let id = ctx.param("id").unwrap()?;
            let d1 = ctx.env.d1("things-db")?;
            let statement = d1.prepare("SELECT * FROM things WHERE thing_id = ?1");
            let query = statement.bind(&[id])?;
            let result = query.first::<Thing>(None).await?;
            match result {
                Some(thing) => Response::from_json(&thing),
                None => Response::error("Not found", 404),
            }
        })
        .run(request, env)
        .await
}
```
----------------------------------------
How to See the Context7 MCP's Tool Descriptions
Claude Code actually makes it really easy to see the tool descriptions for all of your enabled MCPs. Just issue the /mcp slash command, select Context7 and keep drilling down until you get to the tool you're interested in. Eventually you'll see the tool description.
Super important: to reiterate, the descriptions and parameters of these tools are what influence when and how Claude Code calls them.
Saving on Tokens and Latency
Each call to resolve-library-id can return about 7,000 tokens, and every call to get-library-docs can return between 4,000 and 10,000 tokens. If you already know exactly which Context7 library ID you want to query, you can save a decent amount of tokens, and as a big plus, there's less latency.
To do that, go to context7.com, search for your library, make sure it's the one you need (sometimes there are similar ones), and copy the link to its detail page.
The URL looks like https://context7.com/llmstxt/developers_cloudflare_com-d1-llms-full.txt
If you remove the domain, you get the library ID: /llmstxt/developers_cloudflare_com-d1-llms-full.txt.
Now you can use that library ID in your prompt.
Here's how that could look:
Show me how I could use "Cloudflare Durable Objects" (use library id /llmstxt/developers_cloudflare-durable-objects-llms-full.txt) with "Cloudflare D1 (llmstxt)" (use library id /llmstxt/developers_cloudflare_com-d1-llms-full.txt) together. Use context7.
Now it completely skips 2 calls to resolve-library-id.
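The URL-to-ID trick is just "keep the URL path". A minimal Python illustration, using the D1 URL from above:

```python
from urllib.parse import urlparse

# The Context7 library ID is the path portion of the detail-page URL:
# strip the scheme and domain, keep everything from the first "/" on.
url = "https://context7.com/llmstxt/developers_cloudflare_com-d1-llms-full.txt"
library_id = urlparse(url).path
print(library_id)  # /llmstxt/developers_cloudflare_com-d1-llms-full.txt
```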
----
Hopefully this deep dive helps you to see how all of the pieces fit together.
——
UPDATE: I really enjoy writing these posts and appreciate every upvote and comment. Thank you!
Trying very hard to grow my very small YouTube channel. If you'd like to support me, please subscribe here https://www.youtube.com/@jorgecolonconsulting.
Got several Claude Code ideas to talk about for future videos inspired by the comments here.
I understand that Context7 is an MCP that pulls in the latest documentation for any library. I've added it in the MCP settings and I've generated some code with it by prompting "use context7 for the latest documentation on X".
Questions:
- Do I constantly need to explicitly ask it to use context7 in every prompt, whether I'm adding a new library or continuing from a previous prompt?
- If yes to the above, can I just add an instruction to the system prompt to always use the context7 MCP for every prompt? Will that become more expensive?
Genuine question: What's driving all the excitement around Context7?
From what I can tell, it's an MCP server that fetches documentation and dumps it into your LLM's context. The pitch is that it solves "outdated training data" problems.
But here's what I don't get:
For 90% of use cases, Claude Sonnet already knows the docs cold. React? TypeScript? Next.js? Tailwind? The model was trained on these. It doesn't need the entire React docs re-explained to it. That's just burning tokens.
For the 10% where you actually need current docs (brand new releases, niche packages, internal tools), wouldn't a targeted web_fetch or curl be better? You get exactly the page you need, not a massive documentation dump. It's more precise, uses fewer tokens, and you control what goes into context.
I see people installing Context7 and then asking it about React hooks or Express middleware. Things that are absolutely baked into the model's training. It feels like installing a GPS to explain directions to a cab driver.
Am I completely off base here? What am I missing about why this is everywhere suddenly?
Edit: Did some digging into how Context7 actually works.
It's more sophisticated than I initially thought, but it still doesn't solve the core problem:
How it works:
- Context7 doesn't do live web fetches. It queries a proprietary backend API that serves pre-crawled documentation.
- They crawl 33k+ libraries on a 10-15 day rolling schedule, pre-process everything, and cache it.
- When you query, you get 5,000-10,000 tokens of ranked documentation snippets.
- The ranking system prioritizes code examples over prose, and API signatures over descriptions.
- You can filter by topic (e.g., "routing", "authentication").
You're getting documentation that Context7 crawled up to 15 days ago from their database. You could just web_fetch the actual docs yourself and get current information directly from the source, without:
- Depending on Context7's infrastructure and update schedule
- Burning 5-10k tokens on pre-selected chunks when the model already knows the library
- Hitting rate limits on their API
For mature, well-documented frameworks like React, Next.js, or TypeScript that are baked into the training data, this is still redundant. For the 10% of cases where you need current docs (new releases, niche packages), a web_fetch of the specific page you need is more precise, more current, and uses fewer tokens.
TL;DR: Context7 is a documentation caching layer with smart ranking. But for libraries Claude already knows, it's overkill. For the cases where you actually need current docs, web_fetch is more direct.
Hi all! I am a solo, small developer and made an MCP, new to reddit
ContextS ("S" for smart) is an AI-powered documentation tool I made. It retrieves documentation from Context7 and passes it to an AI of your choice (it currently supports some Gemini, OpenAI, and Anthropic models), along with a "context" describing what documentation the client needs on a library (here, "client" means the AI you're primarily using, not ContextS). It can be set up for free with a Gemini API key.
It provides targeted guidance and code examples that give the client a better understanding while often using fewer tokens overall. Check it out! All feedback is welcome.
https://github.com/ProCreations-Official/contextS
I found this MCP tool recently: https://smithery.ai/server/@upstash/context7-mcp
Context7 is a software documentation retrieval tool, and I combined it with chain-of-thought reasoning using https://smithery.ai/server/@smithery-ai/server-sequential-thinking
Here's the prompt I used, it was rather helpful in improving accuracy and the overall experience:
You are a large language model equipped with a functional extension: Model Context Protocol (MCP) servers. You have been configured with access to the following tools: Context7, a software documentation finder, combined with the SequentialThought chain-of-thought reasoning framework.
Tool Descriptions:
- resolve-library-id: Required first step. Resolves a general package name into a Context7-compatible library ID. This must be called before using get-library-docs to retrieve valid documentation.
- get-library-docs: Fetches up-to-date documentation for a library. You must first call resolve-library-id to obtain the exact Context7-compatible library ID.
- sequentialthinking: Enables chain-of-thought reasoning to analyze and respond to user queries.
Your task:
You will extensively use these tools when users ask questions about how a software package works. Your responses should follow this structured approach:
Analyze the user’s request to identify the type of query. Queries may be:
Creative: e.g., proposing an idea using a package and how it would work.
Technical: e.g., asking about a specific part of the documentation.
Error debugging: e.g., encountering an error and searching for a fix in the documentation.
Use SequentialThought to determine the query type.
For each query type, follow these steps:
Generate your own idea or response based on the request.
Find relevant documentation using Context7 to support your response and reference it.
Reflect on the documentation and your response to ensure quality and correctness.
RESULTS:
I asked for a LangChain prompt chain system using MCP servers, and it gave me a very accurate response with examples straight from the docs!