🌐
GitHub
docs.github.com › en › copilot › reference › ai-models › supported-models
Supported AI models in GitHub Copilot - GitHub Docs
5 days ago - To learn how Copilot Chat serves different AI models, see Hosting of models for GitHub Copilot. GPT-5.4 nano is currently only available in the Codex Visual Studio Code extension (Copilot Pro+ only) and is not available in Copilot Chat.
Home
Get started, troubleshoot, and make the most of GitHub. Documentation for new users, developers, administrators, and all of GitHub's products.
GitHub Copilot
Learn how billing works for Copilot Pro and Copilot Pro+. ... Automatically select models for Copilot Chat, Copilot cloud agent, and third-party agents.
Plans
GitHub Copilot Student is available to verified students. The plan includes unlimited completions, access to premium models in Copilot Chat, access to Copilot cloud agent, and a monthly allowance of premium requests. GitHub Copilot Pro is designed for individuals who want more flexibility.
Features
A prompt file is a Markdown file, stored in your workspace, that mimics the existing format of writing prompts. See About customizing GitHub Copilot responses. You can configure Model Context Protocol (MCP) servers for many Copilot features, giving Copilot access to external tools or data sources.
🌐
GitHub
github.com › features › copilot › plans
GitHub Copilot · Plans & pricing
3 days ago - Agent mode, code review, coding agent, Copilot CLI, and Copilot Chat use premium requests, with usage varying by model. Model options may vary by feature. Learn more about premium requests ... Need human help? Let’s define how to propel your team into a new era.
Discussions

UPDATED: Supported AI models in GitHub Copilot - GitHub Docs
So they added Haiku 4.5 to the free tier and didn't remove anything? So the models being gone in VS Code is an error on their side. More on reddit.com
🌐 r/GithubCopilot
14
39
January 7, 2026
Which model to use with GitHub Copilot (and when)
Gemini 2.5 Pro for logic and stuff. 3.7 Sonnet for any designs. Other models for general low-logic code and if rate limited. I see 2.5 Pro as the best for logic and way better code quality compared to others. 3.7 is hands down better at designs. More on reddit.com
🌐 r/GithubCopilot
23
50
April 24, 2025
Announcement & FAQ: Changes to GitHub Copilot Individual Plans
We've recently announced that we’re making the following changes to GitHub Copilot’s Individual plans to protect the experience for existing customers: pausing new sign-ups, tightening usage limits, and adjusting model availability. More on github.com
🌐 github.com
542
49
2 weeks ago
Do you actually feel a difference between GitHub Copilot models vs using LLMs directly (Claude, Gemini, GPT, etc.)?
Yes, there’s a clear difference, and the sweet spot for me is Copilot inline + direct chat for big-picture work. More on reddit.com
🌐 r/GithubCopilot
21
12
December 9, 2025
People also ask

What is GitHub Copilot?

GitHub Copilot transforms the developer experience. Backed by the leaders in AI, GitHub Copilot provides contextualized assistance throughout the software development lifecycle, from inline suggestions and chat assistance in the IDE to code explanations and documentation answers in GitHub, and more. With GitHub Copilot elevating their workflow, developers can focus on value, innovation, and happiness.

GitHub Copilot enables developers to focus more energy on problem solving and collaboration, and spend less effort on the mundane and boilerplate. That’s why developers who use GitHub Copilot report up to 75% higher satisfaction with their jobs than those who don’t, and are up to 55% more productive at writing code without sacrificing quality, which all adds up to engaged developers shipping great software faster.

GitHub Copilot integrates with leading editors, including Visual Studio Code, Visual Studio, JetBrains IDEs, and Neovim, and, unlike other AI coding assistants, is natively built into GitHub. Growing to millions of individual users and tens of thousands of business customers, GitHub Copilot is the world’s most widely adopted AI developer tool and the competitive advantage developers ask for by name.

🌐
github.com
github.com › features › copilot › plans
GitHub Copilot · Plans & pricing
What are the differences between the GitHub Copilot Business, GitHub Copilot Enterprise, and GitHub Copilot Individual plans?

GitHub Copilot has multiple offerings for organizations and an offering for individual developers. All the offerings include both inline suggestions and chat assistance. The primary differences between the organization offerings and the individual offering are license management, policy management, and IP indemnity.

Organizations can choose between GitHub Copilot Business and GitHub Copilot Enterprise. GitHub Copilot Business primarily features GitHub Copilot in the coding environment, that is, the IDE, CLI, and GitHub Mobile. GitHub Copilot Enterprise includes everything in GitHub Copilot Business. It adds a layer of customization for organizations and integrates into GitHub.com as a chat interface, allowing developers to converse with GitHub Copilot throughout the platform. GitHub Copilot Enterprise can index an organization’s codebase for a deeper understanding of the customer’s code, enabling more tailored suggestions, and will offer customers access to fine-tuned custom, private models for inline suggestions.

GitHub Copilot Individual is designed for individual developers, freelancers, students, educators, and open source maintainers. The plan includes all the features of GitHub Copilot Business except organizational license management, policy management, and IP indemnity.

🌐
github.com
github.com › features › copilot › plans
GitHub Copilot · Plans & pricing
How can I upgrade my GitHub Copilot Free license to Copilot Pro?

If you're on the Free plan, you can upgrade to Pro through your Copilot settings page or directly on the Copilot marketing page.

🌐
github.com
github.com › features › copilot › plans
GitHub Copilot · Plans & pricing
🌐
GitHub
docs.github.com › en › copilot › reference › ai-models › model-comparison
AI model comparison - GitHub Docs
GitHub Copilot supports multiple AI models with different capabilities. The model you choose affects the quality and relevance of responses by Copilot Chat and Copilot inline suggestions.
🌐
GitHub
docs.github.com › en › copilot › get-started › plans
Plans for GitHub Copilot - GitHub Docs
4 days ago - GitHub Copilot Student is available to verified students. The plan includes unlimited completions, access to premium models in Copilot Chat, access to Copilot cloud agent, and a monthly allowance of premium requests. GitHub Copilot Pro is designed for individuals who want more flexibility.
🌐
GitHub
docs.github.com › en › copilot › concepts › billing › individual-plans
About individual GitHub Copilot plans and benefits - GitHub Docs
20 hours ago - ... Starting June 1, 2026, GitHub is moving Copilot from request-based billing to usage-based billing. For more information, see Usage-based billing for individuals. ... Starting April 20, 2026, new sign-ups for Copilot Pro, Copilot Pro+, ...
🌐
Microsoft Community Hub
techcommunity.microsoft.com › microsoft community hub › communities › products › azure › microsoft developer community blog
Choosing the Right Model in GitHub Copilot: A Practical Guide for Developers | Microsoft Community Hub
February 9, 2026 - GitHub Copilot’s Auto model selection automatically chooses the best available model for your prompts, reducing the mental load of picking a model and helping you avoid rate‑limiting.
Find elsewhere
🌐
GitHub
docs.github.com › en › copilot › reference › copilot-billing › models-and-pricing
Models and pricing for GitHub Copilot - GitHub Docs
1 week ago - Starting June 1, 2026, Copilot Pro and Copilot Pro+ subscribers who choose to remain on existing annual billing plans and stay on the request-based billing model will experience changes to model multipliers.
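The "model multiplier" arithmetic behind premium requests can be sketched as follows. This is a minimal illustration of the billing concept only; the model names and multiplier values below are made-up placeholders, not GitHub's actual rates:

```python
# Sketch of premium-request billing: each chat/agent interaction consumes
# (count x the selected model's multiplier) from a monthly allowance.
# Model names and multipliers here are illustrative assumptions, not official.
MODEL_MULTIPLIERS = {
    "base-model": 0.0,      # included models consume no premium requests
    "standard-model": 1.0,
    "frontier-model": 10.0,
}

def premium_requests_used(interactions):
    """Total premium requests consumed by a list of (model, count) pairs."""
    return sum(MODEL_MULTIPLIERS[model] * count for model, count in interactions)

def remaining_allowance(allowance, interactions):
    """Premium requests left this month (negative would mean overage)."""
    return allowance - premium_requests_used(interactions)
```

For example, 30 requests on a 1x model plus 5 requests on a 10x model would consume 80 premium requests, leaving 220 of a 300-request allowance.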
🌐
GitHub
docs.github.com › en › copilot › how-tos › copilot-on-github › set-up-copilot › configure-access-to-ai-models
Configuring access to AI models in GitHub Copilot - GitHub Docs
For information on how Copilot Chat serves different AI models, see Hosting of models for GitHub Copilot. If you have a Copilot Free, Copilot Pro, or Copilot Pro+ plan, you can use AI models directly within Copilot without configuring access or managing policies.
🌐
GitHub
github.com › orgs › community › discussions › 189268
Important Updates to GitHub Copilot for Students 🎒 · community · Discussion #189268
7 hours ago - Update March 13: We've now added the option to upgrade from your GitHub Copilot Student plan to a paid GitHub Copilot Pro or GitHub Copilot Pro+ plan if you want to, while retaining the rest of your GitHub Student Pack benefits. Update April 27: As part of these reliability updates, GPT-5.3-Codex has been removed from the model picker for the Copilot Student plan.
🌐
GitHub
github.blog › home › ai & ml › github copilot › which ai model should i use with github copilot?
Which AI model should I use with GitHub Copilot? - The GitHub Blog
April 18, 2025 - Fast, efficient, and cost-effective, o4-mini and o3-mini are ideal for simple coding questions and quick iterations. If you’re looking for a no-frills model, use these. ... Quick prototyping.
🌐
GitHub
docs.github.com › en › copilot › reference › ai-models › model-hosting
Hosting of models for GitHub Copilot - GitHub Docs
1 week ago - GitHub Copilot uses Gemini 3.1 Pro, Gemini 3 Flash, and Gemini 2.5 Pro hosted on Google Cloud Platform (GCP). When using Gemini models, prompts and metadata are sent to GCP, which makes the following data commitment: Gemini doesn't use your ...
🌐
Reddit
reddit.com › r/githubcopilot › updated: supported ai models in github copilot - github docs
r/GithubCopilot on Reddit: UPDATED: Supported AI models in GitHub Copilot - GitHub Docs
January 7, 2026 - It appears on the free section on my model selection. Wdym for week? ... everyone including me freaked out for nothing... copilot has downgraded status currently: https://www.reddit.com/r/GithubCopilot/s/IPTMSgJJwL ... So Opus stays in Pro plan.
🌐
GitHub
docs.github.com › en › copilot › concepts › auto-model-selection
About Copilot auto model selection - GitHub Docs
You can change the model Copilot uses to generate responses to chat prompts. You may find that different models perform better, or provide more useful responses, depending on the type of questions you ask. Options include premium models with advanced capabilities.
🌐
Reddit
reddit.com › r/githubcopilot › do you actually feel a difference between github copilot models vs using llms directly (claude, gemini, gpt, etc.)?
r/GithubCopilot on Reddit: Do you actually feel a difference between GitHub Copilot models vs using LLMs directly (Claude, Gemini, GPT, etc.)?
December 9, 2025 -

I’m experimenting with the different AI models available in GitHub Copilot (GPT, Claude, Gemini, etc.), and I’d like to hear from people who actively switch between them.

  1. When you change the Copilot model (for example GPT‑4.1 ↔ Claude 4.5/Opus 4.5 ↔ Gemini 3.0), do you clearly notice differences in:

    • code quality and correctness

    • reasoning about the whole project or repo

    • speed / latency

    • how well it handles large codebases or multi-file edits?

  2. For those who also use these models directly (ChatGPT, Claude.ai, Gemini, etc.):

    • How different do they feel compared to using the same model through Copilot inside the IDE?

    • Do you feel any “downgrade” in Copilot (shorter answers, weaker reasoning, less context, worse refactors), or is it basically the same for your workflow?

  3. What’s your ideal setup today? For example:

    • “Copilot (Claude) for inline coding + ChatGPT for long explanations and architecture”

    • “Copilot (GPT) for small fixes + Claude/Gemini in browser for big refactors and debugging sessions”

    • or any other combo that works well for you.

Please include: language(s) you code in, IDE/editor, and main model you prefer and why. That kind of detail makes the answers much more useful than just “X feels better than Y”.

Top answer
1 of 15
3
Yes, there’s a clear difference, and the sweet spot for me is Copilot inline + direct chat for big-picture work.

My setup: TS/React, Python (FastAPI), and some Go; VS Code and JetBrains. In Copilot, GPT-4.1 is fastest and tidy for TypeScript/unit tests, Claude 4.5 feels best at reading the repo and making safe multi-step edits, and Gemini 3.0 handles longer files but occasionally invents import paths. Direct (ChatGPT/Claude.ai) gives me deeper context and longer plans; Copilot trims answers and sometimes misses cross-file implications on big refactors.

What I do: Copilot (Claude) for inline fixes, tests, and small refactors. For migrations or multi-file changes, I jump to Claude.ai, paste a compact module map + failing tests, ask for a step-by-step plan and a unified diff, then apply it locally and iterate. Keep Copilot context restricted to the current workspace and ask for diffs, not prose. With Kong Gateway and Supabase, I sometimes use DreamFactory to spin up a quick read-only REST API over Postgres so the model can pull real data during refactors.

Short version: Copilot for speed in-editor, direct chats for heavy reasoning and large edits.
🌐
GitHub
docs.github.com › en › copilot › how-tos › use-ai-models › change-the-completion-model
Changing the AI model for GitHub Copilot inline suggestions - GitHub Docs
In the dropdown menu, select the model you want to use. Open the Settings editor by pressing Ctrl+, (Linux/Windows) / Command+, (Mac). Type copilot completion and look for the "GitHub > Copilot: Selected Completion Model" section.
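The UI steps above also have a settings.json equivalent in VS Code. The key below is an assumption inferred from the "GitHub > Copilot: Selected Completion Model" label (VS Code derives JSON keys from these labels), and the model value is a placeholder; verify both in your own Settings editor before relying on them:

```jsonc
{
  // Hypothetical key inferred from the "GitHub > Copilot: Selected
  // Completion Model" setting label; confirm the exact name by opening
  // the Settings editor (Ctrl+, / Command+,) and searching "copilot completion".
  "github.copilot.selectedCompletionModel": "gpt-4.1"
}
```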
🌐
GitHub
github.com › orgs › community › discussions › 192948
GitHub Copilot is moving to usage-based billing · community · Discussion #192948
2 days ago - If you have Changelog emails turned on, or follow their Blog, you may have noticed a changelog entry titled "Claude 4.7 is Generally Available" and, like me, assumed "okay, they released 4.7 for Copilot, I'll check the multipliers later." Wait, but they said they informed us that Pro users would lose access? I don't see anything like that after skimming through the changelog. Well, if you actually read the entire link at a snail's pace, you can sort of cobble together what Microsoft PR tried to do 11 days ago. It seems like they are not going to be providing any Opus models (the biggest visible announcement in that changelog was that Opus 4.7 would be provided at a promotional multiplier of 7.5x for a few weeks, making several users assume it was a typo -- surely it's not correct that the regular multiplier will be 22.5x?)