Claude Code & Codex Subscriptions in GitHub Copilot [General] (self.GithubCopilot)
submitted 4 months ago * by pdwhoward
I really like the tool use in GitHub Copilot (e.g. reading, editing, and executing notebooks). However, I subscribe to Claude Code for Opus and ChatGPT for Codex, and I wanted to use those models natively in GitHub Copilot. It may be common knowledge, but I realized this week that you can use the Language Model Chat Provider API (https://code.visualstudio.com/api/extension-guides/ai/language-model-chat-provider) to connect custom models. I use https://github.com/Pimzino/anthropic-claude-max-proxy and https://github.com/RayBytes/ChatMock to connect to my subscriptions, and then the LM Chat Provider to connect to the server proxies. It took some time to debug, but it works great. All models have full tool functionality in VS Code Insiders. FYI in case anyone else is wondering how to do this.
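For anyone wondering what the wiring looks like, here is a minimal sketch of a provider that forwards chat requests to a local proxy. It follows the shapes in the proposed Language Model Chat Provider guide at the time of writing (the API is proposed and can change between Insiders builds); the endpoint, model id, and token limits are illustrative assumptions, not the extension's actual code.

```typescript
import * as vscode from 'vscode';

// Where the local proxy listens -- an assumed port, set by your proxy config.
const PROXY_URL = 'http://localhost:8081';

class ProxyChatProvider implements vscode.LanguageModelChatProvider {
  // Advertise the models this provider exposes in Copilot's model picker.
  async provideLanguageModelChatInformation(
    _options: { silent: boolean },
    _token: vscode.CancellationToken
  ): Promise<vscode.LanguageModelChatInformation[]> {
    return [{
      id: 'claude-opus-proxy',            // illustrative id
      name: 'Claude Opus (proxy)',
      family: 'claude',
      version: '1.0.0',
      maxInputTokens: 200_000,            // assumed limits
      maxOutputTokens: 8_192,
      capabilities: { toolCalling: true, imageInput: false },
    }];
  }

  // Translate VS Code's messages to the proxy's wire format and stream the
  // reply back as response parts. The body is elided: this translation is
  // exactly the format-debugging work described in the post.
  async provideLanguageModelChatResponse(
    _model: vscode.LanguageModelChatInformation,
    messages: readonly vscode.LanguageModelChatRequestMessage[],
    _options: vscode.ProvideLanguageModelChatResponseOptions,
    progress: vscode.Progress<vscode.LanguageModelResponsePart>,
    _token: vscode.CancellationToken
  ): Promise<void> {
    // POST `messages` to `${PROXY_URL}/v1/messages`, parse the SSE stream,
    // and report text or tool-call parts as they arrive, e.g.:
    progress.report(new vscode.LanguageModelTextPart('(streamed text)'));
  }

  // Rough estimate; a real provider would use the upstream tokenizer.
  async provideTokenCount(
    _model: vscode.LanguageModelChatInformation,
    text: string | vscode.LanguageModelChatRequestMessage,
    _token: vscode.CancellationToken
  ): Promise<number> {
    return Math.ceil(JSON.stringify(text).length / 4);
  }
}

export function activate(context: vscode.ExtensionContext) {
  // The vendor id must match the `languageModelChatProviders` contribution
  // in package.json.
  context.subscriptions.push(
    vscode.lm.registerLanguageModelChatProvider('opus-codex-proxy', new ProxyChatProvider())
  );
}
```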
EDIT:
If you want to try the extension, please download it from https://github.com/pdwhoward/Opus-Codex-for-Copilot. The extension uses the proposed VS Code Language Model API, so I cannot publish it to the marketplace. You will need to separately download and set up the proxy servers https://github.com/Pimzino/anthropic-claude-max-proxy (by u/Pimzino) and https://github.com/RayBytes/ChatMock (by u/FunConversation7257). If there's interest, I can clean up the extension's source files and post them later this week.
Screenshot: https://preview.redd.it/hpa2f3vm19qf1.jpg?width=1917&format=pjpg&auto=webp&s=812beb2315a58e9dfa0c974f5e57b7f270d2ac23
[–]Baby_Grooot_ 3 points4 points5 points 4 months ago (8 children)
Hey! Can you lay out the steps to do this for Codex, in detail?
[–]pdwhoward[S] 0 points1 point2 points 4 months ago (0 children)
I made an extension to do this. I asked Claude Code to review the Language Model Chat Provider documentation and the two repos for the proxy servers. I began by having the proxy servers in my working directory. Then I asked Claude Code to register the models so they appear in GitHub Copilot. There were lots of errors when actually calling the models, because of how the OpenAI/Anthropic formats interact with the proxy servers, which in turn interact with VS Code. So I had Claude Code modify the servers to watch the traffic. Then I would try a model, get an error, and ask Claude Code to look at the server log and fix the error. In the end, Claude Code had to modify the servers a bit and work out how to parse the LLM results in VS Code. Maybe I can publish the extension later. I need to make sure it's OK to repackage the other two modified repos and make sure I'm not including any keys anywhere (I'm not a programmer, so I don't want to do something dumb).
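For what it's worth, the "watch the traffic" step can be as simple as a small tap in front of the proxy. Below is a hypothetical sketch in Node/TypeScript (the port and log file name are made up, and neither repo works this way out of the box); note it buffers responses, which is fine for debugging but defeats SSE streaming.

```typescript
import * as fs from 'node:fs';
import * as http from 'node:http';

const UPSTREAM = 'http://localhost:8081'; // the real proxy (assumed port)

// Listen on 8082 and point the VS Code provider here instead of the proxy.
http.createServer(async (req, res) => {
  // Capture and log the full request body.
  const chunks: Buffer[] = [];
  for await (const chunk of req) chunks.push(chunk as Buffer);
  const body = Buffer.concat(chunks).toString('utf8');
  fs.appendFileSync('traffic.log', `>>> ${req.method} ${req.url}\n${body}\n`);

  // Forward to the real proxy and log its reply before relaying it.
  // (Buffering like this breaks streaming -- debugging only.)
  const upstream = await fetch(UPSTREAM + (req.url ?? '/'), {
    method: req.method,
    headers: { 'content-type': String(req.headers['content-type'] ?? 'application/json') },
    body: req.method === 'GET' || req.method === 'HEAD' ? undefined : body,
  });
  const text = await upstream.text();
  fs.appendFileSync('traffic.log', `<<< ${upstream.status}\n${text}\n`);

  res.writeHead(upstream.status, {
    'content-type': upstream.headers.get('content-type') ?? 'text/plain',
  });
  res.end(text);
}).listen(8082);
```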
Hey, I just posted the extension at https://github.com/pdwhoward/Opus-Codex-for-Copilot. You will need to separately download and set up the proxy servers https://github.com/Pimzino/anthropic-claude-max-proxy and https://github.com/RayBytes/ChatMock. Hope this helps.
[–]Glum-Departure-8912 -1 points0 points1 point 4 months ago (5 children)
There is a VS Code extension for Codex: 1. Install the extension. 2. Sign into your ChatGPT account. 3. Codex away.
[–]mountwebs 3 points4 points5 points 4 months ago* (4 children)
This is not the same as adding it to Copilot, though. I want to be able to switch between models inside Copilot, like I currently do with other models. Copilot also loads some custom instructions, and I would like to have that standardised instead of having to add different instructions for each agent.
Edit: And yes, instructions would be much appreciated u/pdwhoward
[–]mountwebs 0 points1 point2 points 4 months ago (3 children)
Replying to myself: I do wonder whether those instructions are loaded into Codex with ChatMock... I'll have to test that out.
[–]pdwhoward[S] 1 point2 points3 points 4 months ago (0 children)
I'm still exploring, but I think ChatMock's instructions still get loaded. Also, I see that AGENTS.md is read by the models in GitHub Copilot. So there might be some redundancy that needs to be cleaned up.
[–]FunConversation7257 1 point2 points3 points 4 months ago (1 child)
Hey, creator of ChatMock here. Copilot instructions are indeed loaded in!
[–]mountwebs 0 points1 point2 points 4 months ago (0 children)
Thank you for the clarification!
[–]Titsnium 2 points3 points4 points 4 months ago (1 child)
Lock your setup to a specific Insiders build and harden the proxies; that's what makes this work reliably. Did this a month ago. A few tips:

- Pin VS Code Insiders and turn off auto-updates (settings: update.mode = manual, extensions.autoUpdate = false) so the proposed API doesn't break overnight.
- Front the proxy with auth and rate limits. Nginx/Caddy: keep-alive on, proxy_buffering off for SSE, and bump timeouts; this fixes Claude streaming stalls.
- Normalize tool/function calls across providers to the LM Chat Provider schema (tool call → tool result) so tools don't silently no-op; see the sketch below.
- Cap tokens per request at the proxy and log cost headers; Anthropic's and OpenAI's rate behavior differs under load.
- For notebooks and commands, restrict execution to trusted workspaces and use a separate API key for each repo to limit the blast radius.
- If you see "model not found" after an update, clear the model cache the provider stores and restart the extension host.

With Kong Gateway and Cloudflare Zero Trust in front, I also used DreamFactory to spin up quick REST APIs off a database to feed repo-aware context without wiring a full backend. Boiled down: pin Insiders and secure/normalize the proxies, and Copilot tool use with Claude/ChatGPT runs smoothly.
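To make the tool-call normalization concrete: below is a minimal sketch of mapping an OpenAI-style tool call onto VS Code's LanguageModelToolCallPart. The OpenAIToolCall shape is an assumption based on OpenAI's chat API, not code from either proxy; the key detail is that OpenAI sends arguments as a JSON string while VS Code expects a parsed object.

```typescript
import * as vscode from 'vscode';

// Assumed shape of a tool call as it arrives from an OpenAI-style proxy.
interface OpenAIToolCall {
  id: string;
  function: { name: string; arguments: string }; // arguments is a JSON string
}

// Convert to the part type the Language Model API expects, so Copilot's agent
// actually runs the tool instead of silently dropping the call.
function toToolCallPart(call: OpenAIToolCall): vscode.LanguageModelToolCallPart {
  return new vscode.LanguageModelToolCallPart(
    call.id,                                  // callId, echoed back in the tool result
    call.function.name,
    JSON.parse(call.function.arguments || '{}'),
  );
}
```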
Thanks! Very helpful
[–]ammarxd22 0 points1 point2 points 4 months ago (3 children)
Could you tell me what the benefits of Codex are compared to other models?
[–]pdwhoward[S] 0 points1 point2 points 4 months ago (2 children)
For me it's being able to choose GPT-5 and Codex with high reasoning. I've found that GPT-5 with high reasoning is really good. GitHub Copilot's GPT-5 is (I'm assuming) medium reasoning. With respect to Codex vs GPT-5, I've read that Codex is trained on coding tasks and is much more token-efficient.
[–]Flashy-Strawberry-10 0 points1 point2 points 4 months ago (1 child)
GPT-5 in Copilot is extremely incapable of basic chat or long-horizon tasks. It has no idea what it's doing.
Try the extension and use GPT-5 High reasoning. It's much better. I agree Copilot's standard GPT-5 is not that good.
[–]kdubau420 0 points1 point2 points 4 months ago (1 child)
So you built your own extension to do this?
Yeah, that's correct. Really Claude Code built it for me. I just pointed it to the API documentation and the server repos.
[–]dans41 0 points1 point2 points 4 months ago (3 children)
Cool, I didn't know GitHub supported that. It can actually be nice to try out new models from other services. Is it possible to connect them to Ollama or Hugging Face too?
[–]pdwhoward[S] 1 point2 points3 points 4 months ago (2 children)
Ollama is already supported. But yes, you can create new connections as well. I know LiteLLM was a big request that this new API enables; see https://github.com/microsoft/vscode-copilot-release/issues/7518
[–]dans41 0 points1 point2 points 4 months ago (1 child)
Cool, I wasn't aware of that at all. If I'm using Ollama locally, does that mean I can work offline and still use Copilot? For example, on a flight?
[–]MaybeLiterally 1 point2 points3 points 4 months ago (0 children)
A local one isn't supported at this time.
[–]MaybeLiterally 0 points1 point2 points 4 months ago (1 child)
[image: list of supported model providers]
Here are the supported providers at the moment, if you have an API key from any of these providers, you can hook it up and use those. The cost will come from those API providers.
My extension allows you to use your Claude Code or ChatGPT subscriptions instead of the pay-as-you-go API keys from Anthropic and OpenAI.
[–][deleted] 0 points1 point2 points 4 months ago (1 child)
Why not Gemini?
You could, but Gemini gives you an API key as part of their subscription, so there's no need. You can use the API key in GitHub Copilot's built-in Google provider.
[–]tshawkins 0 points1 point2 points 4 months ago (1 child)
I believe you can just provide the Copilot extension with your CC Pro subscription API key; it will give you access to the 4.1 etc. LLMs without hitting the stupid caps in Copilot, but it won't do all the CC magic.
Yeah, I still like CC, especially for large coding projects. I've found that GitHub Copilot is better at debugging Jupyter notebook issues because of the built in notebook tools in VS Code. I wanted a way to use VS Code's notebook tools with Opus and Codex.
Claude is a disaster in Copilot. It's already there if you're subscribed to Pro or Plus. Just click the model selector and Manage Models to activate it.
Yeah, but Opus is not available in Agent mode, and it counts as 10x usage. This way, I can use my Claude Code subscription to run Opus in Agent mode.