This format worked without exposing the API key. Sucks that there is no documentation.

{
  "mcpServers": {
    "github.com/upstash/context7-mcp": {
      "command": "bun",
      "args": ["x", "@upstash/context7-mcp"],
      "env": {
        "API_KEY": "${CONTEXT7_API_KEY}"
      },
      "disabled": false,
      "autoApprove": ["resolve-library-id", "get-library-docs"]
    }
  }
}

Answer from deadcoder0904 on reddit.com
GitHub
github.com › upstash › context7
GitHub - upstash/context7: Context7 MCP Server -- Up-to-date code documentation for LLMs and AI code editors
Always use context7 when I need ... library/API documentation. This means you should automatically use the Context7 MCP tools to resolve library id and get library docs without me having to explicitly ask. From then on, you'll get Context7's docs in any related conversation without typing anything extra. You can alter the rule to match your use cases. ... npx -y @smithery/cli@latest install @upstash/context7-mcp --client <CLIENT_NAME> --key ...
Reddit
reddit.com › r/aiseoinsider › context7 mcp ai agent: how to 10x your code quality with latest api data (zero hallucinations)
r/AISEOInsider on Reddit: Context7 MCP AI Agent: How To 10X Your Code Quality With Latest API Data (Zero Hallucinations)
March 14, 2025 - Learn how to maximize your ROI ... Here's how to resolve common MCP AI agent challenges quickly: API key errors usually stem from incorrect configuration....
Reddit
reddit.com › r/claudeai › deep dive: i dug and dug and finally found out how the context7 mcp works under-the-hood
r/ClaudeAI on Reddit: Deep Dive: I dug and dug and finally found out how the Context7 MCP works under-the-hood
August 20, 2025 -

In one of my previous posts here, somebody asked how Context7 really works. It made me realize a lot of us use it as a black box, not knowing what happens under the hood.

I was curious too so I dug to put the pieces together.

Here's a summary of how the Context7 MCP works:

  1. Understand that MCPs just expose tool descriptions (function calling)

  2. Those tool descriptions influence how Claude Code calls Context7

  3. Claude Code sends a best-guess keyword of the library name to the Context7 MCP's resolve-library-id tool

  4. Context7 returns a list of possible library matches

  5. Claude Code makes a best-guess selection of a library based on some criteria, then sends a keyword for the topic you're trying to get docs on to the Context7 MCP's get-library-docs tool

  6. Context7 returns a list of possible code snippets/docs about said topic

  7. Claude Code calls Context7 MCP's two tools as many times as necessary to achieve the intended goal

  8. Claude Code synthesizes the output from the get-library-docs tool, picking out what it needs (see the code sketch after this list)
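
To make those steps concrete, here is a minimal sketch of the two-tool flow driven from code, assuming the MCP TypeScript SDK (@modelcontextprotocol/sdk) as the client and npx to launch the server. The tool and argument names come from the transcripts below; the SDK calls are an approximation of what an agent like Claude Code does internally, not its actual implementation.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Steps 1-2: the server only exposes tool descriptions; the client decides
// when and how to call them.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@upstash/context7-mcp"],
});
const client = new Client({ name: "context7-flow-sketch", version: "0.1.0" });
await client.connect(transport);

// Steps 3-4: send a best-guess library name, get back candidate matches.
const matches = await client.callTool({
  name: "resolve-library-id",
  arguments: { libraryName: "Cloudflare Durable Objects" },
});

// Steps 5-6: pick a library ID from the matches and ask for docs on a topic.
const docs = await client.callTool({
  name: "get-library-docs",
  arguments: {
    context7CompatibleLibraryID: "/llmstxt/developers_cloudflare-durable-objects-llms-full.txt",
    topic: "database integration patterns",
    tokens: 5000,
  },
});

// Steps 7-8: an agent repeats these calls as many times as needed and
// synthesizes the returned text itself; here we just print it.
console.log(matches, docs);
```

The hardcoded library ID here is simply the first match from the transcript below; in step 5 Claude Code picks it dynamically.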

It's easy to see this all playing out if you start Claude Code with claude --debug --verbose.

Take a prompt such as: Show me how I could use "Cloudflare Durable Objects" with "Cloudflare D1 (llmstxt)" together. Use context7. Here's what the resulting call to resolve-library-id looks like. Notice how the output comes with additional instructions.

context7 - resolve-library-id (MCP)(libraryName: "Cloudflare Durable Objects")
  ⎿ Available Libraries (top matches):

    Each result includes:
    - Library ID: Context7-compatible identifier (format: /org/project)
    - Name: Library or package name
    - Description: Short summary
    - Code Snippets: Number of available code examples
    - Trust Score: Authority indicator
    - Versions: List of versions if available. Use one of those versions if and only if the user explicitly provides a version in their query.

    For best results, select libraries based on name match, trust score, snippet coverage, and relevance to your use case.

    ----------

    - Title: Cloudflare Durable Objects
    - Context7-compatible library ID: /llmstxt/developers_cloudflare-durable-objects-llms-full.txt
    - Description: Durable Objects provide a building block for stateful applications and distributed systems, enabling AI agents, collaborative applications, and real-time
    interactions without managing infrastructure.
    - Code Snippets: 3906
    - Trust Score: 8
    ----------
    - Title: y-durableobjects
    - Context7-compatible library ID: /napolab/y-durableobjects
    - Description: Facilitates real-time collaboration in Cloudflare Workers using Yjs and Durable Objects for scalable, decentralized editing features.
    - Code Snippets: 27
    - Trust Score: 8.4
    ----------
    - Title: Sandbox SDK
    - Context7-compatible library ID: /cloudflare/sandbox-sdk
    - Description: Run isolated code environments on Cloudflare's edge network using Durable Objects and Cloudflare Containers, enabling command execution, file management, and
    service exposure via public URLs.
    - Code Snippets: 12
    - Trust Score: 9.3
    ----------
...

This is what the get-library-docs tool call looks like:

context7 - get-library-docs (MCP)(context7CompatibleLibraryID: "/llmstxt/developers_cloudflare-durable-objects-llms-full.txt", topic: "database integration patterns", tokens: 5000)
  ⎿ ========================
    CODE SNIPPETS
    ========================
    TITLE: D1 Compatible PRAGMA Statements
    DESCRIPTION: Documentation for PRAGMA statements compatible with Cloudflare D1, which is based on SQLite. This covers various aspects of database introspection, integrity
    checking, and configuration.

    SOURCE: https://developers.cloudflare.com/d1/sql-api/sql-statements/

    LANGUAGE: APIDOC
    CODE:
    ```
    PRAGMA Statements:

    These PRAGMA statements are compatible with Cloudflare D1, providing insights and control over the database.

    Table Information:
      - PRAGMA table_list: Lists all tables in the database.
      - PRAGMA table_info("TABLE_NAME"): Describes the schema of a specific table.
      - PRAGMA table_xinfo("TABLE_NAME"): Provides extended information about table columns.

    Index Information:
      - PRAGMA index_list("TABLE_NAME"): Lists all indexes for a specific table.
      - PRAGMA index_info(INDEX_NAME): Describes a specific index.
      - PRAGMA index_xinfo("INDEX_NAME"): Provides extended information about index columns.

    Integrity and Constraints:
      - PRAGMA quick_check: Performs a quick integrity check on the database.
      - PRAGMA foreign_key_check: Checks for violations of foreign key constraints.
      - PRAGMA foreign_key_list("TABLE_NAME"): Lists foreign key constraints for a table.
      - PRAGMA case_sensitive_like = (on|off): Controls case sensitivity for LIKE operations.
      - PRAGMA ignore_check_constraints = (on|off): Ignores CHECK constraints during operations.
      - PRAGMA foreign_keys = (on|off): Enables or disables foreign key enforcement.
      - PRAGMA defer_foreign_keys = (on|off): Controls deferred foreign key constraint checking.

    Other:
      - PRAGMA legacy_alter_table = (on|off): Enables or disables legacy ALTER TABLE syntax.
      - PRAGMA recursive_triggers = (on|off): Controls recursive execution of triggers.
      - PRAGMA reverse_unordered_selects = (on|off): Affects the order of results for unordered SELECTs.
      - PRAGMA optimize: Optimizes the database schema (may not be fully supported or have different behavior).

    Querying sqlite_master:
      - SELECT name FROM sqlite_master WHERE type='table';: Example query to list all tables.

    Search with LIKE:
      - LIKE operator can be used for pattern matching in WHERE clauses.
    ```

    ----------------------------------------

    TITLE: Rust Worker D1 Database Integration Example
    DESCRIPTION: Example of a Rust worker handling a GET request to fetch data from a D1 database. It demonstrates preparing a SQL statement, binding parameters, executing the
    query, and returning the result as JSON.

    SOURCE: https://github.com/cloudflare/workers-rs#_snippet_75

    LANGUAGE: rust
    CODE:
    ```
    use worker::*;
    use serde::Deserialize;

    #[derive(Deserialize)]
    struct Thing {
        thing_id: String,
        desc: String,
        num: u32,
    }

    #[event(fetch, respond_with_errors)]
    pub async fn main(request: Request, env: Env, _ctx: Context) -> Result<Response> {
        Router::new()
            .get_async("/:id", |_, ctx| async move {
                let id = ctx.param("id").unwrap()?;
                let d1 = ctx.env.d1("things-db")?;
                let statement = d1.prepare("SELECT * FROM things WHERE thing_id = ?1");
                let query = statement.bind(&[id])?;
                let result = query.first::<Thing>(None).await?;
                match result {
                    Some(thing) => Response::from_json(&thing),
                    None => Response::error("Not found", 404),
                }
            })
            .run(request, env)
            .await
    }
    ```

    ----------------------------------------
...

How to see the Context7 MCP's tool description

Claude Code actually makes it really easy to see the tool descriptions for all of your enabled MCPs. Just issue the /mcp slash command, select Context7 and keep drilling down until you get to the tool you're interested in. Eventually you'll see the tool description.

Super important: to reiterate, the descriptions and parameters of these tools are what influence when and how Claude Code calls them.
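
To see where those descriptions come from, here is a minimal sketch of how an MCP server declares a tool with the MCP TypeScript SDK. The description string and parameter schema are exactly the metadata a client reads; the tool name and description are modeled on Context7's resolve-library-id, but the handler is a placeholder, not Context7's actual source.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "docs-server-sketch", version: "0.1.0" });

// The name, description, and parameter schema below are what a client like
// Claude Code sees (e.g. via /mcp) and uses to decide when to call the tool.
server.tool(
  "resolve-library-id",
  "Resolves a general library name into a Context7-compatible library ID.",
  { libraryName: z.string().describe("Best-guess name of the library") },
  async ({ libraryName }) => ({
    // Placeholder body; a real server would return actual matches.
    content: [{ type: "text", text: `Matches for ${libraryName} would go here.` }],
  })
);

await server.connect(new StdioServerTransport());
```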

Saving on Tokens and Latency

Each call to resolve-library-id can return about 7,000 tokens, and every call to get-library-docs can be between 4,000 and 10,000 tokens. If you already know exactly which Context7 library ID you want to query, you can save a decent amount of tokens, and as a big plus, there's less latency.

To do that, go to context7.com, search for your library, make sure it's the one you need (sometimes there are similar ones), and copy the link to the detail page.

The URL looks like https://context7.com/llmstxt/developers_cloudflare_com-d1-llms-full.txt

If you remove the domain, you get the library ID, like so: /llmstxt/developers_cloudflare_com-d1-llms-full.txt.
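
If you want to script that step, stripping the domain is just taking the URL's path. A quick TypeScript sketch:

```typescript
// Turn a context7.com detail-page URL into a Context7 library ID by
// keeping only the path, as described above.
const detailPage = "https://context7.com/llmstxt/developers_cloudflare_com-d1-llms-full.txt";
const libraryId = new URL(detailPage).pathname;
console.log(libraryId); // "/llmstxt/developers_cloudflare_com-d1-llms-full.txt"
```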

Now you can use that library ID in your prompt.

Here's how that could look:

Show me how I could use "Cloudflare Durable Objects" (use library id /llmstxt/developers_cloudflare-durable-objects-llms-full.txt)
  with "Cloudflare D1 (llmstxt)" (use library id /llmstxt/developers_cloudflare_com-d1-llms-full.txt) together. Use context7.

Now it completely skips two calls to resolve-library-id.
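
The same saving applies when driving the MCP programmatically: with the library IDs already known, you can go straight to get-library-docs and avoid the two resolve-library-id round trips (roughly 7,000 tokens each, per the numbers above). A sketch, again assuming the MCP TypeScript SDK; the topic string is only an illustration.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "context7-direct-docs", version: "0.1.0" });
await client.connect(
  new StdioClientTransport({ command: "npx", args: ["-y", "@upstash/context7-mcp"] })
);

// The library IDs are already known, so skip resolve-library-id entirely
// and fetch docs for both libraries directly.
for (const id of [
  "/llmstxt/developers_cloudflare-durable-objects-llms-full.txt",
  "/llmstxt/developers_cloudflare_com-d1-llms-full.txt",
]) {
  const docs = await client.callTool({
    name: "get-library-docs",
    arguments: {
      context7CompatibleLibraryID: id,
      topic: "using D1 with Durable Objects",
      tokens: 5000,
    },
  });
  console.log(docs);
}
```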

----

Hopefully this deep dive helps you to see how all of the pieces fit together.

——

UPDATE: I really enjoy writing these posts and appreciate every upvote and comment. Thank you!

Trying very hard to grow my very small YouTube channel. If you'd like to support me, please subscribe here https://www.youtube.com/@jorgecolonconsulting.

Got several Claude Code ideas to talk about for future videos inspired by the comments here.

Upstash
upstash.com › blog › context7-llmtxt-cursor
Introducing Context7: Up-to-Date Docs for LLMs and AI Code Editors | Upstash Blog
You can use Context7 with any LLM-powered editor or agent, but Cursor and Windsurf make the experience seamless. Both also have generous free levels, so it's easy to give them a try if you haven't already. Public MCP server (in private preview - join the waitlist!). APIs/SDKs for easy access.
Apidog
apidog.com › blog › context7-mcp-server
How to Install and Use Context7 MCP Server
July 17, 2025 - These issues slow down development, ... and documentation sites. Context7 is an MCP server that solves these problems by dynamically injecting up-to-date, version-specific documentation into your prompts....
GitHub
github.com › upstash › context7 › issues › 666
context 7 is not connecting even after changing API key · Issue #666 · upstash/context7
September 4, 2025 - context7 - resolve-library-id (MCP)(libraryName: "Django") ⎿ Unauthorized. Please check your API key. ● context7 - resolve-library-id (MCP)(libraryName: "SQLite") ⎿ Unauthorized. Please check your ...
Reddit
reddit.com › r/claudeai › prompt for a more accurate claude coding experience - context7 + sequentialthought mcp server
r/ClaudeAI on Reddit: Prompt for a more accurate Claude coding experience - Context7 + Sequentialthought MCP server
May 4, 2025 -

I found this MCP tool recently: https://smithery.ai/server/@upstash/context7-mcp
Context 7, a software document retrieval tool and combined it with chain of thought reasoning using https://smithery.ai/server/@smithery-ai/server-sequential-thinking

Here's the prompt I used, it was rather helpful in improving accuracy and the overall experience:

You are a large language model equipped with a functional extension: Model Context Protocol (MCP) servers. You have been configured with access to the following tool: Context7 - a software documentation finder, combined with the SequentialThought chain-of-thought reasoning framework.

Tool Descriptions:

  • resolve-library-id: Required first step. Resolves a general package name into a Context7-compatible library ID. This must be called before using get-library-docs to retrieve valid documentation.

  • get-library-docs: Fetches up-to-date documentation for a library. You must first call resolve-library-id to obtain the exact Context7-compatible library ID.

  • sequentialthinking: Enables chain-of-thought reasoning to analyze and respond to user queries.

Your task:

You will extensively use these tools when users ask questions about how a software package works. Your responses should follow this structured approach:

  1. Analyze the user’s request to identify the type of query. Queries may be:

    • Creative: e.g., proposing an idea using a package and how it would work.

    • Technical: e.g., asking about a specific part of the documentation.

    • Error debugging: e.g., encountering an error and searching for a fix in the documentation.

  2. Use SequentialThought to determine the query type.

  3. For each query type, follow these steps:

    1. Generate your own idea or response based on the request.

    2. Find relevant documentation using Context7 to support your response and reference it.

    3. Reflect on the documentation and your response to ensure quality and correctness.

RESULTS:
I asked for a LangChain prompt chain system using MCP servers, and it gave me a very accurate response with examples straight from the docs!

DEV Community
dev.to › mehmetakar › context7-mcp-tutorial-3he2
Context7 MCP Tutorial - DEV Community
April 25, 2025 - AI hallucinations about APIs that don’t exist. Generalized help for outdated package versions. Accurate, live documentation and examples pulled from the actual library. Instant, relevant answers based on real packages and versions. All you have to do is add use context7 to your prompt.
Smithery
smithery.ai › server › @upstash › context7-mcp
Context7 | MCP Server | Smithery
Fetch up-to-date, version-specific documentation and code examples directly into your prompts. Enhance your coding experience by eliminating outdated information and hallucinated APIs. Simply add `use context7` to your questions for accurate and relevant answers.
LobeHub
lobehub.com › mcp › upstash-context7
Context7 MCP - Up-to-date Code Docs For Any Prompt | MCP Servers · LobeHub
1 day ago - Context7 API Key (Optional) for higher rate limits and private repositories (Get yours by creating an account at context7.com/dashboard)
Trevor Lasn
trevorlasn.com › blog › context7-mcp
Context7 MCP: Up-to-date Docs for LLMs and AI code editors
November 1, 2025 - The documentation gets injected into the prompt, so responses are based on current APIs instead of guessing. Two transport options: HTTP (remote) or stdio (local). ... Get an API key at context7.com.
Reddit
reddit.com › r/chatgptcoding › i just found out about context7 mcp server and it's awesome!
r/ChatGPTCoding on Reddit: I just found out about Context7 MCP Server and it's awesome!
April 24, 2025 -

From their Github Repo:

❌ Without Context7

LLMs rely on outdated or generic information about the libraries you use. You get:

  • ❌ Code examples are outdated and based on year-old training data

  • ❌ Hallucinated APIs that don't even exist

  • ❌ Generic answers for old package versions

✅ With Context7

Context7 MCP pulls up-to-date, version-specific documentation and code examples straight from the source — and places them directly into your prompt.

Context7 fetches up-to-date code examples and documentation right into your LLM's context.

  • 1️⃣ Write your prompt naturally

  • 2️⃣ Tell the LLM to use context7

  • 3️⃣ Get working code answers

No tab-switching, no hallucinated APIs that don't exist, no outdated code generations.

I have tried it with VS Code + Cline as well as Windsurf, using GPT-4.1-mini as a base model and it works like a charm.

  • Context7 website

  • Github Repo

YT Tutorials on how to use with Cline or Windsurf:

  • Context7: The New MCP Server That Will CHANGE AI Coding (FREE)

  • This is Hands Down the BEST MCP Server for AI Coding Assistants

Reddit
reddit.com › r/mcp › contexts - a middleman in between context7 and your ai, making documentation "smart".
r/mcp on Reddit: ContextS - A middleman in between context7 and your AI, making documentation "smart".
May 31, 2025 -

Hi all! I'm a small solo developer, new to Reddit, and I made an MCP.

ContextS ("S" for smart) is an AI-powered documentation tool I made to retrieve documentation from Context7, pass it to an AI of choice (currently supports some Gemini, OpenAI, and Anthropic models) alongside a "context" of what documentation the client (client in this case means the AI you are using primarily, not ContextS) needs on a library. It can be easily set up for free with a Gemini API key. 

It provides targeted guidance and code examples to give the client better understanding while often using fewer tokens overall. Check it out! All feedback is welcome.
https://github.com/ProCreations-Official/contextS

Medium
medium.com › wix-engineering › mcp-explained-deep-dive-and-comparison-of-popular-code-search-mcps-context7-github-official-mcp-43f547f12501
MCP Explained: Deep Dive and Comparison of Popular Code Search MCPs (Context7, GitHub Official MCP, AWS MCP Suite). Done By Octocode-mcp 🐙 | by Guy Bary | Wix Engineering | Medium
July 22, 2025 - Context7 excels in documentation accuracy with zero-setup deployment and comprehensive multilingual support across 15+ languages. GitHub Official MCP provides comprehensive GitHub integration with 14 specialized toolsets and enterprise-grade ...
Reddit
reddit.com › r/cursor › noob question about mcp (specifically context7)
r/cursor on Reddit: Noob question about MCP (specifically Context7)
March 30, 2025 -

I understand that Context7 is an MCP that pulls in the latest documentation for any library. I've added it in the MCP settings and I've generated some code with it by prompting "use context7 for the latest documentations on X".

Questions:
- Do I constantly need to be explicitly asking it to use context7 for every prompt, whether it's adding in a new library or if it's continuing off of a previous prompt?
- If yes to the above, can I just add it to a system prompt to always use the context7 MCP in every prompt? Will that become more expensive?