AI language models in VS Code

Copilot in Visual Studio Code offers several built-in language models, each optimized for different tasks. You can also bring your own language model API key to use models from other providers. This article describes how to change the language model for chat or code completions, and how to use your own API key.

Choose the right model for your task

Some models are optimized for fast coding tasks, while others are better suited for slower planning and reasoning tasks.

Fast coding models:
  • GPT-4o
  • Claude Sonnet 3.5
  • Claude Sonnet 3.7
  • Gemini 2.0 Flash

Reasoning/planning models:
  • Claude Sonnet 3.7 Thinking
  • o1
  • o3-mini

Depending on which chat mode you are using, the list of available models might be different. In agent mode, the list of models is limited to those that have good support for tool calling.

The list of models available in Copilot can change over time.

If you are a Copilot Business or Enterprise user, your administrator needs to enable certain models for your organization by opting in to Editor Preview Features in the Copilot policy settings on GitHub.com.
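
The same Copilot models are also surfaced to extensions through VS Code's Language Model API, so the currently available set can be enumerated programmatically. The following is a minimal extension-side sketch, assuming a recent VS Code release that includes the vscode.lm API and an active Copilot subscription; it is an illustration, not part of the steps below.

```typescript
import * as vscode from 'vscode';

// Minimal sketch: list the Copilot chat models currently available to
// extensions. The result depends on your Copilot plan, organization
// policies, and VS Code version, and can change over time.
export async function listCopilotChatModels(): Promise<void> {
  // Pass a narrower selector such as { vendor: 'copilot', family: 'gpt-4o' }
  // to pick a specific model family instead of listing everything.
  const models = await vscode.lm.selectChatModels({ vendor: 'copilot' });

  for (const model of models) {
    console.log(
      `${model.name} (family: ${model.family}, max input tokens: ${model.maxInputTokens})`
    );
  }
}
```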

Change the model for chat conversations

Use the language model picker in the chat input field to change the model that is used for chat conversations and code editing.

Screenshot that shows the model picker in the Chat view.

You can further extend the list of available models by using your own language model API key.

Change the model for code completions

To change the language model that is used for generating code completions in the editor:

  1. Select Configure Code Completions... from the Copilot menu in the VS Code title bar.

  2. Select Change Completions Model..., and then select one of the models from the list.

Bring your own language model key

If you already have an API key for a language model provider, you can use those models for chat in VS Code, in addition to the built-in models that Copilot provides. You can use models from the following providers: Anthropic, Azure, Google Gemini, Ollama, OpenAI, and OpenRouter.

Important

This feature is currently in preview and is only available for GitHub Copilot Free and GitHub Copilot Pro users.
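
Before you add a provider, you can optionally confirm that your API key is valid outside of VS Code. The following is a minimal sketch, assuming Node.js 18 or later (for the built-in fetch), an OpenAI key stored in the OPENAI_API_KEY environment variable, and OpenAI's model-listing endpoint; other providers offer similar endpoints.

```typescript
// Minimal sketch: verify an OpenAI API key by listing the models it can
// access. Assumes Node.js 18+ and the key in the OPENAI_API_KEY
// environment variable.
async function checkOpenAIKey(): Promise<void> {
  const response = await fetch('https://api.openai.com/v1/models', {
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
  });

  if (!response.ok) {
    throw new Error(`Key check failed: ${response.status} ${response.statusText}`);
  }

  const body = (await response.json()) as { data: { id: string }[] };
  console.log(body.data.map((model) => model.id).join('\n'));
}

checkOpenAIKey().catch((error) => console.error(error));
```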

To manage the available models for chat:

  1. Select Manage Models from the language model picker in the Chat view.

    Alternatively, run the GitHub Copilot: Manage Models command from the Command Palette.

    Screenshot that shows the model picker in the Chat view, which has an item for managing the list of models.

  2. Select a model provider from the list.

    Screenshot that shows the model provider Quick Pick.

  3. Enter the provider-specific details, such as the API key or endpoint URL.

  4. Enter the model details or select a model from the list, if available for the provider.

    The following screenshot shows the model picker for Ollama running locally, with the Phi-4 model deployed. A quick way to list the models that your local Ollama instance exposes is sketched after these steps.

    Screenshot that shows the model picker of Ollama running locally, allowing you to select a model from the list of available models.

  5. You can now select the model from the model picker in the Chat view and use it for chat conversations.
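
If you use a local provider such as Ollama (step 4 above), you can check which models your local instance exposes before adding it in VS Code. The following is a minimal sketch, assuming Ollama is running on its default port 11434 and Node.js 18 or later for the built-in fetch.

```typescript
// Minimal sketch: list the models a local Ollama instance exposes via its
// /api/tags endpoint. Assumes Ollama's default address, http://localhost:11434.
async function listOllamaModels(): Promise<void> {
  const response = await fetch('http://localhost:11434/api/tags');

  if (!response.ok) {
    throw new Error(`Ollama is not reachable: ${response.status}`);
  }

  const body = (await response.json()) as { models: { name: string }[] };
  // Prints entries such as "phi4:latest" when the Phi-4 model is deployed.
  console.log(body.models.map((model) => model.name).join('\n'));
}

listOllamaModels().catch((error) => console.error(error));
```

A model that appears in this list should also appear in the Ollama model picker shown above.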

Update the provider details

To update the provider details, such as the API key or endpoint URL:

  1. Select Manage Models from the language model picker in the Chat view.

    Alternatively, run the GitHub Copilot: Manage Models command from the Command Palette.

  2. Hover over a model provider in the list, and select the gear icon to edit the provider details.

    Screenshot that shows the model provider Quick Pick, with a gear icon next to the provider name.

  3. Update the provider details, such as the API key or endpoint URL.