Explore Agentic Development

Visual Studio Code 1.118


Release date: April 29, 2026

Downloads: Windows: x64 | Arm64 | Mac: Universal | Intel | Apple silicon | Linux: deb | rpm | tarball | Arm | snap


Welcome to the 1.118 release of Visual Studio Code. This release expands where you can work with Copilot agents and makes them more efficient. Here are the highlights for this release:

  • Visual Studio Code Agents: discover the Agents app directly from the VS Code Insiders title bar.

  • Remote control for Copilot CLI: monitor and steer ongoing Copilot CLI sessions from another device.

  • Semantic indexing for all workspaces: semantic codebase search is no longer limited to GitHub and ADO repositories.

  • Token efficiency: prompt caching improvements, the tool search tool, and new agentic tools reduce token usage.

  • Chronicle: query your local chat history for standup reports, usage tips, and insights.

Happy Coding!


Agent experience

Visual Studio Code Agents (Insiders)

Note: The Visual Studio Code Agents app is currently in preview and only available when installing VS Code Insiders.

The Visual Studio Code Agents app is a companion app that ships alongside VS Code Insiders. It provides a focused, agent-native environment where you can run parallel sessions across repos and iterate on multi-step coding tasks. We first introduced the Agents app in 1.115 and continue to refine it based on user feedback.

In this release, you can discover the Agents app directly from the VS Code Insiders title bar, making it easy to jump into an agent-powered workflow. We also published dedicated Agents documentation to help you get started.

Recent feature highlights include:

  • Shared state across VS Code and Agents: The Agents app shares more state with VS Code Insiders for a smoother transition between the two. This includes authentication (on Windows), AI customizations, workspace trust, recent folders, and keyboard shortcuts.

  • Claude agent: The Claude Agent is available in the Agents app, so that you can use it alongside other agents like Copilot CLI or Copilot Cloud for your coding tasks.

  • Web client: Access the Agents experience from the browser at insiders.vscode.dev/agents, bringing the agent-native workflow to any machine. To get started, download VS Code Insiders and run code-insiders tunnel to set up a Dev Tunnel; you can then connect to it from the web.

  • Background browsers: The integrated browser persists across sessions, so it no longer refreshes when you return to a session. This makes context switching smoother when using the integrated browser to preview changes while the agent works.

  • Layout controls for changes: When the agent makes changes, you can open the diff view side-by-side with the Chat view or open it in a modal window to focus on the changes. Use the layout controls in the diff view toolbar to toggle between different display modes.

  • Dynamic title bar entry points: Switch between VS Code Insiders and the Agents app with a single click. Select Open in Agents from the VS Code Insiders title bar to jump into the Agents app, or select Open in VS Code from the Agents title bar to return to the Insiders editor.

Your feedback helps us shape the Agents experience - please continue sharing it with us by filing issues on GitHub. You can also explore existing issues to see what others have reported and provide your feedback on specific topics.

Remote control for Copilot CLI sessions (Experimental)

Setting: github.copilot.chat.cli.remote.enabled

Previously, to engage with a Copilot CLI session, you had to be at the machine where you started it. If the agent paused for an approval or hit a question while you were away from your desk, the work stalled until you returned. With remote control, you can monitor and steer your ongoing Copilot CLI sessions from anywhere, giving you more flexibility to keep work moving without being tied to your machine.

Remote control for Copilot CLI lets you check progress, respond to approvals, and steer work from another device (using GitHub.com or the GitHub mobile app), while your Copilot CLI session keeps running in the background.

To try remote control:

  1. Enable the github.copilot.chat.cli.remote.enabled setting.
  2. Enter /remote on in a Copilot CLI chat to get started.

You can always run /remote to view remote control status, or /remote off to disable remote control.

Synced session titles for Copilot CLI

The chat session title is used across different chat surfaces like the chat sessions list, the chat editor tab and header, and the Copilot CLI terminal interface to provide a consistent identifier for the session. Depending on where you renamed a Copilot CLI session, other chat surfaces could still show the old title.

VS Code adopts the Copilot SDK session title APIs as the source of truth and routes the sessions list and chat editor header through a single title resolver to keep the displayed title consistent across surfaces. The chat sessions list, the chat editor tab and header, and copilot --resume in the terminal now all stay in sync when you rename a session, regardless of where the rename originated.

Renames performed in the terminal from Copilot CLI are also picked up by VS Code the next time the session metadata is read.

Copilot added as a Git co-author by default

VS Code now enables Git AI co-authoring by default for chat and agent workflows. When Copilot makes changes to your files, Copilot is automatically added as a co-author on that commit.

You can change the default behavior with the git.addAICoAuthor setting.

Codebase search and context

Semantic indexing of non-GitHub repositories rolled out to all users

When you ask Copilot a question like "where do we handle user authentication?", the agent has to translate your fuzzy intent into the exact files and symbols that matter. Plain text search only matches the literal words you typed, so it often misses the relevant code when your codebase uses different terminology. Semantic indexing lets agents search by meaning, surfacing files that use related terms like login, signIn, verifyCredentials, or OAuth token exchange, even if the word "authentication" never appears in the code. This gives the agent better grounding for answers and edits.

Semantic indexing is now available in all workspaces. Previously, this capability was limited to workspaces that use GitHub or ADO repositories.

The semantic index is built and maintained automatically. Workspaces that use a GitHub or ADO repository can typically use semantic search instantly, while other workspaces might require a few minutes to build up the initial index. You can also use the Build Codebase semantic index command to explicitly build the index for the current workspace.

Semantic search is one of the many tools Copilot uses to understand your workspace when answering questions and generating edits. Copilot will pick the best tools for the job, so you generally do not have to micromanage how it searches. Check out the How Copilot understands your workspace docs for more details on semantic search and the other tools that Copilot uses.

GitHub text search across repos or orgs

When the agent needs to look up an exact string, API name, or error message in code outside your current workspace, semantic search isn't always the right fit. You want a precise match across a known repository or your entire organization, not a fuzzy one.

To support this, Copilot now includes a built-in githubTextSearch agent tool that does a grep-style search through the code of a GitHub repository or an entire GitHub organization. This complements the existing githubRepo tool, which does semantic searches within a GitHub repo. Together, these tools give the agent a richer way to learn from codebases outside the one you are currently working on.

For more advanced GitHub features such as searching and managing issues or pull requests, consider using the GitHub MCP server.

Dedicated context for skills (Experimental)

Setting: github.copilot.chat.skillTool.enabled

When you use a skill that performs multi-step tool calls or pulls in large reference material, that auxiliary content can crowd your main chat context and degrade the quality of follow-up responses.

You can now run a skill in a dedicated subagent context that isolates its execution from the main conversation, so your primary context stays focused and skill responses remain higher quality.

To run a skill in a dedicated subagent context, set the context attribute in the SKILL.md frontmatter:

---
name: my-skill
description: My skill description
context: fork
---

This feature is experimental and requires the github.copilot.chat.skillTool.enabled setting to be enabled.

Workspace .mcp.json files and server deduplication

We've added support for workspace-level .mcp.json files to declare MCP servers, aligning with other tools such as the Copilot CLI.

Additionally, we've introduced behavior to deduplicate MCP servers that share the same name. By default, only the most-specific MCP server will be enabled, and enabling a server will disable other servers by the same name. You can control which MCP servers are enabled by searching @mcp @installed in the extensions view, or via the Chat: Open Customizations window.
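
A workspace-level .mcp.json might look like the following. This is a sketch: the mcpServers shape follows the convention used by tools such as the Copilot CLI, and the server name and command here are illustrative, not part of this release's documentation.

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

Because deduplication prefers the most-specific definition, a server declared here with the same name as a user-level server would be the one that stays enabled.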

Improving token efficiency

On April 27, GitHub announced that Copilot is moving to usage-based billing on June 1, 2026. To help you get the most value out of your plan, we have been working on several initiatives to improve token efficiency without hindering the quality of the agent. Most of the improvements described below are already in place. Where an opt-in setting is available, it is noted in the relevant section.

Prompt caching efficiency

Over the past several iterations, we have improved cache reuse across system prompts, tools, conversation history, and summarization, without changing how the agent behaves. In practice, this means repeated context is billed at a much lower token rate (for example, about 10x lower for Anthropic models), which helps reduce cost in longer, multi-turn agent workflows.

Strategic cache breakpoint placement. We audited where cache breakpoints are set so they are used efficiently and placed at stable boundaries: end of system prompt, end of tools, end of the most recent tool turn, and conversation turn boundaries. As a result, once an agent session is underway, more than 93% of each request is reused from cache instead of being charged as new input.

A cache-stable system prompt and tools list. A cached prefix is only as good as the bytes that precede it. We reviewed the system prompt and tool registration paths to remove sources of byte drift across requests. For example, the new chat.experimental.symbolTools.cacheStable setting registers vscode_renameSymbol and vscode_listCodeUsages with a static description instead of one that changes based on which languages are loaded. That way, when a language extension activates partway through a session, it no longer changes the request and resets the cache. We also re-ordered the tools list so deferred and non-deferred tools are grouped predictably, keeping the tools-array bytes identical across turns.

Cache-friendly background compaction. As a session gets long, we summarize older turns in the background so the agent can keep working without running out of context. The model can still look up tool results and details from earlier turns when it needs them. These background summaries now reuse the same cached context as the main agent, making long, multi-turn sessions noticeably more efficient.

Last-two-messages breakpoint strategy. In long agent sessions, older turns eventually fall out of the cacheable window. We now anchor cache breakpoints on the system prompt, the tools list, and the two most recent messages. This is currently available behind the github.copilot.chat.anthropic.cacheBreakpoints.lastTwoMessages setting.
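
Conceptually, these breakpoints map to cache_control markers in an Anthropic-style request, where each marker caches everything up to and including that block. The sketch below is illustrative only, not the exact request VS Code sends:

```json
{
  "system": [
    { "type": "text", "text": "<system prompt>",
      "cache_control": { "type": "ephemeral" } }
  ],
  "tools": [
    { "name": "copilot_readFile", "description": "<static description>", "input_schema": {},
      "cache_control": { "type": "ephemeral" } }
  ],
  "messages": [
    { "role": "user", "content": "<older turns, covered by the prefix cache>" },
    { "role": "assistant",
      "content": [ { "type": "text", "text": "<second-most-recent message>",
                     "cache_control": { "type": "ephemeral" } } ] },
    { "role": "user",
      "content": [ { "type": "text", "text": "<most recent message>",
                     "cache_control": { "type": "ephemeral" } } ] }
  ]
}
```

Keeping the system prompt and tools bytes identical across turns is what makes the first two markers hit on every request.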

Tool search tool

The tool search tool keeps requests lean by splitting the agent's toolset into two groups. A compact core of ~30 tools, which covers ~88% of tool calls, is always included. The remaining tools are deferred: their schemas are not loaded into the model's context until the model explicitly requests them. When the agent needs a deferred capability, it calls tool_search, which runs a client-side, embedding-based semantic search and returns the most relevant matches on demand.

The result is a stable, cacheable prefix on every turn and a significantly smaller per-turn tool footprint, while the agent still has access to the full toolset.
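
The deferred-tool lookup can be pictured as a small embedding search. The sketch below is a simplified stand-in: a toy bag-of-words embedding and cosine similarity take the place of the real embedding model, and the tool names are invented for illustration.

```typescript
type DeferredTool = { name: string; description: string };

// Toy embedding: a term-frequency vector over the words in a string.
// The real implementation uses a proper embedding model; this only
// illustrates the ranking mechanics.
function embed(text: string): Map<string, number> {
  const vec = new Map<string, number>();
  for (const word of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    vec.set(word, (vec.get(word) ?? 0) + 1);
  }
  return vec;
}

function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0, na = 0, nb = 0;
  for (const [word, v] of a) { dot += v * (b.get(word) ?? 0); na += v * v; }
  for (const v of b.values()) { nb += v * v; }
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

// tool_search: rank the deferred tools against the model's query and
// return the top-k names, whose schemas are then loaded on demand.
function toolSearch(query: string, tools: DeferredTool[], k = 3): string[] {
  const q = embed(query);
  return tools
    .map(t => ({ name: t.name, score: cosine(q, embed(t.name + " " + t.description)) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map(t => t.name);
}
```

Because the search runs client-side, the model only ever pays the context cost of the schemas it actually asks for.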

The tool search tool is already enabled by default for Anthropic models (Claude Sonnet 4.5+ and Opus 4.5+), where we observed token savings of up to 20%. In this release, we are rolling it out to supported OpenAI models (GPT-5.4 and GPT-5.5) via the Responses API, with early Insiders results showing similar or better savings. To use the tool search tool with GPT models, enable the github.copilot.chat.responsesApi.toolSearchTool.enabled setting.

New tools for search and execution

This release brings two new specialized agentic tools: Search and Execution. Both are powered by small, purpose-built models that cost significantly less to run. After over a month of flighting, we are seeing promising results, with token savings of up to 20%.

Agentic search tool

The agentic search tool handles codebase exploration and context retrieval. When the main agent needs context, it describes what it's looking for in plain language, and the search tool takes over: it runs an independent process that searches your workspace using grep, file search, semantic search, and file reading, and returns the most relevant results.

Under the hood, this tool is powered by a fine-tuned small language model, trained to run many searches in parallel across a minimal number of turns. This tight scope keeps latency and cost low without sacrificing retrieval quality.

Rollout continues over the coming month, bringing these savings to all Copilot Chat users.

Agentic execution tool

The agentic execution tool handles anything related to running terminal commands. When the agent has to run tests or check a build, it hands the task off to the execution tool, which runs the commands and reports back. To keep the scope tight, the execution tool can only run terminal commands, and it is capped at 10 terminal calls per invocation so it cannot loop indefinitely.

Terminal output tends to be long and noisy. Once the task is done, the execution tool filters that output down to what a coding agent actually needs and passes only the relevant parts back. Offloading this work from the main model to a smaller, faster one keeps verbose output from eating into your token usage.
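
The cap-and-filter behavior can be sketched as follows. This is a simplified illustration, not the actual implementation: the 10-call cap comes from the release notes, but the filtering heuristic (keep error-like lines plus a short tail) is invented for the example.

```typescript
const MAX_TERMINAL_CALLS = 10; // per invocation, so the tool cannot loop indefinitely

type CommandRunner = (command: string) => string;

// Run a bounded sequence of terminal commands, condensing each command's
// noisy output down to what a coding agent actually needs.
function runExecutionTask(commands: string[], run: CommandRunner): string[] {
  if (commands.length > MAX_TERMINAL_CALLS) {
    throw new Error(`execution tool capped at ${MAX_TERMINAL_CALLS} terminal calls`);
  }
  return commands.map(cmd => filterOutput(run(cmd)));
}

// Keep lines that look like failures plus a short tail for context;
// drop the rest of the verbose output.
function filterOutput(output: string, tailLines = 5): string {
  const lines = output.split("\n");
  const interesting = lines.filter(l => /error|fail|warning|exception/i.test(l));
  const tail = lines.slice(-tailLines);
  return [...new Set([...interesting, ...tail])].join("\n");
}
```

In the real tool, a smaller model performs this filtering, so deciding what is relevant does not itself consume main-model tokens.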

Chat performance and history

WebSockets supported for OpenAI models

For OpenAI models that support it, chat requests now use WebSocket mode on the Responses API. Instead of opening a new HTTP request per turn, VS Code keeps a persistent WebSocket connection and sends only new input items along with the previous response ID. The server retains the conversation state, which reduces request size and latency on follow-up turns, particularly noticeable in agent workflows with many back-and-forth calls. Our measurements show that using WebSockets makes requests to OpenAI models about 12% faster.

WebSocket mode is used automatically when the selected model supports it, with no configuration required.
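
The per-turn savings come from sending deltas rather than full history. A minimal sketch of that idea, with an invented payload shape (the real Responses API wire format differs):

```typescript
type InputItem = { role: string; content: string };

type TurnPayload = {
  previous_response_id: string | null; // lets the server rehydrate prior state
  input: InputItem[];                  // only the items not yet sent
};

// Over plain HTTP, every turn resends the whole conversation. Over a
// persistent WebSocket, the server retains state, so each turn carries
// only the new input items plus the ID of the previous response.
function buildTurnPayload(
  history: InputItem[],
  sentCount: number,
  previousResponseId: string | null
): TurnPayload {
  return {
    previous_response_id: previousResponseId,
    input: history.slice(sentCount),
  };
}
```

As a session grows, the payload stays roughly constant in size instead of growing with the conversation.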

Chronicle (Experimental)

Setting: github.copilot.chat.localIndex.enabled

As you rely more on Copilot, your chat history becomes a valuable record of what you worked on, which files you touched, and which PRs and issues you referenced. But that history is hard to revisit: scrolling through past sessions to remember what you did yesterday or to prepare for a standup is slow, and there's no easy way to ask questions across sessions or learn from your own usage patterns.

Chronicle solves this by tracking your chat interactions in a local SQLite database. Every time you chat, it records session metadata (branch, repo, timestamps), conversation turns, files touched via tool calls, and external references (PRs, issues, commits), so you can search and summarize your coding activity on demand. Chronicle can also analyze your usage to give you personalized tips on how to improve your prompting and tool usage.

Chronicle exposes a few commands you can use in chat to query your session history and get insights about your coding activity:

  • /chronicle:standup: Generates a standup report from the last 24 hours of coding sessions, grouped by feature/branch, with summaries, file lists, and PR links.

  • /chronicle:tips: Analyzes 7 days of usage to give personalized tips on prompting, tool usage, and workflow.

  • /chronicle [query]: Free-form natural language queries against session history (for example, "what files did I edit yesterday?").

This feature is experimental and requires the github.copilot.chat.localIndex.enabled setting to be enabled.
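
The standup report boils down to grouping the last day's sessions by branch and merging their summaries and file lists. A sketch of that shape, with invented record fields (the actual SQLite schema is not documented here):

```typescript
type SessionRecord = {
  branch: string;
  repo: string;
  summary: string;
  filesTouched: string[];
  endedAt: number; // epoch milliseconds
};

// Group recent sessions by branch, the way /chronicle:standup does,
// merging file lists and collecting per-session summaries.
function standupReport(
  sessions: SessionRecord[],
  now: number,
  windowMs = 24 * 60 * 60 * 1000
): Map<string, { summaries: string[]; files: string[] }> {
  const report = new Map<string, { summaries: string[]; files: string[] }>();
  for (const s of sessions) {
    if (now - s.endedAt > windowMs) continue; // outside the 24-hour window
    const entry = report.get(s.branch) ?? { summaries: [], files: [] };
    entry.summaries.push(s.summary);
    entry.files = [...new Set([...entry.files, ...s.filesTouched])];
    report.set(s.branch, entry);
  }
  return report;
}
```

Because everything lives in a local database, queries like this never leave your machine.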

Trust and security

Approved account organizations policy

Enterprises can now gate chat and related AI feature activation on approved GitHub organization membership by using the ChatApprovedAccountOrganizations device policy.

This policy helps organizations apply GitHub account-based policy consistently across chat entry points. Chat features are not activated until (1) the user is signed into a GitHub account with membership in an approved organization and (2) the account-based policy has been resolved. This fail-closed behavior is useful for enterprises that configure account-based policies on GitHub.com and need eligibility enforced before chat is shown.

Learn more about enterprise policies.

Sandboxing default read permissions

Read access is no longer automatically enabled for all paths under the $HOME directory. This update strengthens sandbox isolation and ensures commands only access the files they explicitly need.

Before any command runs in the sandbox, read permissions are derived from the executing command, and all other paths under the $HOME directory are denied read access. Attempts to read other paths fail with a permission error.

By default, workspace folders and the sandbox temporary folder (which manages sandbox configuration at run time) are granted read access under $HOME.

Accessibility

Setting: accessibility.verbosity.chatQuestionCarousel

When Copilot asks a question via the question carousel triggered by a terminal interaction, you can now press ⌥T (Windows, Linux Alt+T) to quickly return focus to the terminal. Previously, the only way to navigate back was by selecting the Focus Terminal button.

The button's aria label now also includes the keybinding hint to make it more discoverable for screen reader users. You can control whether navigation hints appear in the carousel's aria label with the accessibility.verbosity.chatQuestionCarousel setting.

Editor Experience

Optimized loading of large local resources in webviews

We've optimized how webviews load local resources to improve speed and reduce memory usage. This change benefits any extension that uses webviews or custom editors, as well as built-in VS Code features such as notebook rendering.

Webviews in VS Code use a service worker to load resources from the workspace or host file system. The service worker intercepts the request for the local file and then proxies it through VS Code's file system calls. This enables us to load resources not only from the disk but also from virtual file systems contributed by extensions.

Previously, for file system requests, VS Code would read the entire file into a buffer and then send it to the webview's service worker. This works for a few small JavaScript and image files, but not when you're loading 20 video files that are tens to hundreds of MB each.

Now, we stream the file contents to the service worker in chunks. This approach improves responsiveness and also reduces the amount of data that VS Code has to accumulate before handing it off to the browser engine.

We further optimized the streaming by adopting transferable streams. A file stream is created in the main VS Code renderer process and consumed directly by new Response(...) inside the webview's service worker. This bypasses what were previously multiple layers of postMessage calls.
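
The chunked approach can be illustrated with standard web streams. The sketch below shows the shape of the idea (chunk a buffer into a ReadableStream and let a Response consume it); it is not VS Code's actual implementation, which additionally transfers the stream across process boundaries.

```typescript
// Wrap a large buffer in a ReadableStream that emits fixed-size chunks,
// so the consumer never has to hold the whole file in one allocation.
function streamInChunks(data: Uint8Array, chunkSize: number): ReadableStream<Uint8Array> {
  let offset = 0;
  return new ReadableStream<Uint8Array>({
    pull(controller) {
      if (offset >= data.length) {
        controller.close();
        return;
      }
      // subarray creates a view, not a copy, of the next chunk
      controller.enqueue(data.subarray(offset, offset + chunkSize));
      offset += chunkSize;
    },
  });
}

// In a service worker, such a stream can be handed straight to the fetch
// response, e.g. new Response(streamInChunks(fileBytes, 64 * 1024)).
```

Because ReadableStream is a transferable object, passing it via postMessage hands the underlying stream over without copying the data.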

Languages

TypeScript 7.0 Beta support

We've continued working with the TypeScript team to improve VS Code's support for TypeScript 7. TypeScript 7 is a complete rewrite in native code and offers dramatically better performance.

The TypeScript 7.0 beta continues to improve language features and also includes a number of editor quality-of-life improvements. We've also made it even easier to try out TS 7.0 and switch back and forth between it and the current stable TS 6.0 release.

To try TS 7.0 in VS Code, you only have to install the TypeScript Native Preview extension.

Contributions to extensions

Chat Customizations Evaluations extension

We added a new extension, Chat Customizations Evaluations (extension ID ms-vscode.vscode-chat-customizations-evaluations) to help analyze and improve your chat customizations, like prompt files, custom agents, instructions, and skills. After analyzing a customization file, the extension generates diagnostics for issues it detects in the file and gives recommendations to improve it.

Open a prompt, agent, instructions, or skill definition file and select Analyze to evaluate it. After the diagnostics appear, use the customization evaluations fix skill to apply the recommended changes.

Remote Development

The Remote Development extensions allow you to use a Dev Container, remote machine via SSH or Remote Tunnels, or the Windows Subsystem for Linux (WSL) as a full-featured development environment.

Dev Container lockfile for Features enabled by default

Setting: dev.containers.lockfile

The devcontainer-lock.json lockfile is now enabled by default. The lockfile records the Dev Container Feature version and checksum the first time a Feature is installed, and pins the Feature to that version and checksum to improve resilience against supply chain attacks.
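
A lockfile entry records each Feature's resolved version and checksum. The sketch below is illustrative (the Feature version and digest placeholders are invented); consult the Dev Container specification for the exact schema.

```json
{
  "features": {
    "ghcr.io/devcontainers/features/node:1": {
      "version": "1.6.2",
      "resolved": "ghcr.io/devcontainers/features/node@sha256:<digest>",
      "integrity": "sha256:<digest>"
    }
  }
}
```

On subsequent builds, a Feature whose checksum no longer matches the pinned integrity value is rejected rather than silently installed.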

The editor shows a Code Lens on Dev Container Features in the devcontainer.json file when newer versions are available.

Dependabot support is also available for automatically submitting PRs to update the lockfile.

For more information, see Dev Container Feature Lockfile in the Dev Container specification.

Engineering

Faster development builds with TypeScript 7

VS Code's development watch task now uses TypeScript 7 for type checking. This dramatically reduces the time to build and fully type check our codebase.

Previously, typechecking the roughly 6,000 files in the main VS Code project took around 60 seconds. TypeScript 7 cuts this to around 10 seconds for a fresh build. From starting the watch build task to having VS Code and all of the built-in extensions built and fully typechecked now takes around 30 seconds.

Deprecated features and settings

New deprecations in this release

None

Upcoming deprecations

  • Edit Mode is officially deprecated as of VS Code version 1.110. Users can temporarily re-enable Edit Mode via the chat.editMode.hidden setting, which will remain supported through version 1.125. Beginning with version 1.125, Edit Mode will be fully removed and can no longer be enabled via settings.

Thank you

Contributions to our issue tracking:

Contributions to vscode:

Contributions to vscode-pull-request-github:


We really appreciate people trying our new features as soon as they are ready, so check back here often and learn what's new.

If you'd like to read release notes for previous VS Code versions, go to Updates on code.visualstudio.com.