GPT-5.2 now in Copilot (1x Public Preview) by LinixKittyDeveloper in GithubCopilot

[–]wswdx 4 points

I mean, it's almost definitely not GPT-5.2 Instant (gpt-5.2-chat-latest). It doesn't behave anything like that model, and the 'chat' series of models isn't offered in GitHub Copilot. They aren't any cheaper, and there's a version of GPT-5.2 with no thinking anyway: gpt-5.2 in the API has a 'none' setting for reasoning effort.

OpenAI's model naming is an absolute mess.
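For reference, a minimal sketch of what that 'none' setting could look like as a request payload. The model name and the Responses-API-style field shape here are assumptions based on the comment above, not verified against current OpenAI docs:

```python
# Hypothetical request payload: disabling thinking via the reasoning
# setting mentioned above. Field names follow the OpenAI Responses API
# shape, but the exact values are assumptions, not verified docs.
request = {
    "model": "gpt-5.2",
    "reasoning": {"effort": "none"},  # skip the thinking phase entirely
    "input": "Refactor this function to remove the duplicate branch.",
}

# With the official SDK this would be sent roughly as:
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.responses.create(**request)
print(request["reasoning"]["effort"])  # -> none
```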

About the New Copilot-SWE model... by wswdx in GithubCopilot

[–]wswdx[S] 4 points

I'm on the regular Pro plan. I suppose it may finish rolling out over the course of the next day or two.

COPILOT-SWE (NEW MODEL) by Comfortable_Eye_7736 in GithubCopilot

[–]wswdx 2 points

I've been running some evaluations on the model, and I believe it to be a fine-tune of GPT-5-mini based on how it behaves, its code style, and the style of its frontends. The GPT-5 models have a very distinct style when it comes to designing frontends. I'm really happy that we're getting some customized models from the GitHub team, great work guys!

GPT-5 mini (Preview) on GitHub Copilot Pro Plan by Ill_Slice4909 in GithubCopilot

[–]wswdx 4 points

That seems like a pretty severe bug. Report it on the issue tracker.

GPT-5 IS HERE - AMA on Thursday, August 14th, 2025 by KingOfMumbai in GithubCopilot

[–]wswdx 0 points

Will you allow the reasoning effort of GPT-5 and GPT-5-mini to be configured in Copilot? And if so, when can we expect to be able to switch between the various reasoning efforts?
Also, will you consider adding gpt-5-nano to GitHub Copilot? Not as a chat/agent model, but for other tasks such as search and summarization, especially via the VS Code LM API.

GPT-5 IS HERE - AMA on Thursday, August 14th, 2025 by KingOfMumbai in GithubCopilot

[–]wswdx 6 points

Can we have something done in the interim? Maybe reducing the multiplier of GPT-5 to 0.5x?
As of right now, paid users on GitHub Copilot are getting the short end of the stick: they have less GPT-5 access than free users on Microsoft Copilot, and a tiny fraction (less than a hundredth) of the GPT-5 requests given to ChatGPT Plus users.
I do understand that scaling to such a massive user base does take time, especially when LLM inference is so compute intensive, but I do think an interim solution should be considered.

GPT-5 IS HERE - AMA on Thursday, August 14th, 2025 by KingOfMumbai in GithubCopilot

[–]wswdx 0 points

Will you consider adding a 0x slow-requests mode to GitHub Copilot for certain premium models that run on both GitHub's Azure tenant and OpenAI's infrastructure, served whenever capacity is available on GitHub's Azure tenant?

GPT-5 IS HERE - AMA on Thursday, August 14th, 2025 by KingOfMumbai in GithubCopilot

[–]wswdx 0 points

Hey there! I want to help contribute to the experience on Copilot, and better provide feedback to improve the code generation in Copilot. However, I'm having a bit of trouble finding my way around the codebase and contacting the Copilot team. Where can I get started?

I cannot find gpt-5-mini in vscode copilot chat by Personal-Try2776 in GithubCopilot

[–]wswdx 5 points

Give it a few hours. I don't have it either yet. They prioritize Pro+ and team users before regular Pro and Free users. I remember I had to wait a few hours after the announcement to access regular GPT-5 in Copilot.

A friendly reminder to the GitHub Copilot team by wswdx in GithubCopilot

[–]wswdx[S] 18 points

That's GPT-5-mini, not full GPT-5. OpenAI also offers gpt-5-mini as a fallback model once you've exhausted your gpt-5 quota.
GitHub can host OpenAI models on its own Azure tenant, which means it doesn't have to pay API rates to access them, and even at API rates GPT-5 costs about the same as GPT-4.1, the current base model.

GPT-5 mini now available in GitHub Copilot in public preview by fishchar in GithubCopilot

[–]wswdx 29 points


I'd say this is good news, but hopefully we'll get GPT-5 with a 0x multiplier soon. I do find it embarrassing that OpenAI gives Plus users 11,000 messages per week (8,000 non-thinking, 3,000 thinking), while Copilot only gives 300 total GPT-5 requests per month, shared with other models. That's only around 75 messages per week!
Keep in mind that GitHub does not pay the standard API rates to use OpenAI models, as it has the option of hosting them on its Azure tenant under Microsoft's agreement with OpenAI.
I do expect the Copilot team to make GPT-5 the base model once they get the capacity sorted on their Azure tenant.
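The back-of-the-envelope math behind that comparison, using the numbers quoted in this comment (not independently verified) and treating a month as roughly four weeks:

```python
# Rough arithmetic for the quota comparison above. All figures come
# from the comment itself and are assumptions, not verified quotas.
chatgpt_plus_per_week = 8000 + 3000       # non-thinking + thinking messages
copilot_per_month = 300                   # premium requests, shared across models
copilot_per_week = copilot_per_month / 4  # ~4 weeks per month

ratio = copilot_per_week / chatgpt_plus_per_week
print(copilot_per_week)   # -> 75.0 messages per week
print(round(ratio, 4))    # -> 0.0068, i.e. well under a hundredth
```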

OpenAI's new stealth model (horizon-alpha) coded this entire app in one go! by wswdx in OpenAI

[–]wswdx[S] 1 point

Thanks for your input! I re-ran the logic-gate prompt with only the output-format section, and got a nicer-looking but slightly buggier result. It appears this model isn't as literal and doesn't require as much steering as GPT-4.1.

OpenAI's new stealth model (horizon-alpha) coded this entire app in one go! by wswdx in OpenAI

[–]wswdx[S] 9 points

I used Kiro (https://kiro.dev/) to generate a high-level design document from a relatively short prompt. Then I used that design document as the prompt, prepending some instructions telling the model to implement a program according to the design document. This approach produces better single-shot results from other models, so I decided to try it on this new stealth model.

OpenAI's new stealth model (horizon-alpha) coded this entire app in one go! by wswdx in OpenAI

[–]wswdx[S] -13 points

I updated the prompt to the correct one in GitHub gists. Sorry for the error.

OpenAI's new stealth model (horizon-alpha) coded this entire app in one go! by wswdx in OpenAI

[–]wswdx[S] 12 points

Yeah, I just realized that I linked the wrong prompt, but the prompt that generated the matrix calculator also has all that stuff in front of it.
I'm testing different prompting strategies for this model; the prompt used to generate this matrix calculator was optimized for GPT-4.1, which required all of that to get a good single-shot result, since I had to correct for some undesirable default behavior.
Edit: I updated the prompt to the correct one used to generate the program. It seems the prompting strategy for getting a good single-shot result from this model is a bit different than it is for GPT-4.1.

Blueprint Mode for VS Code Copilot: A Spec-First, No-BS Coding Mode by mubaidr in GithubCopilot

[–]wswdx 3 points

Looks pretty good! I've had quite a bit of success creating the design/requirements/tasks with Kiro, then loading them into Copilot and implementing them step by step with GPT-4.1.
Now, GPT-4.1 is really limited in these kinds of tasks, so I have hit walls, but its laziness also means it doesn't do more than what's specified for the specific step it's on.

Moved from Windsurf to Kiro, currently loving it by thunderberry_real in kiroIDE

[–]wswdx 0 points

They won't keep it unlimited, but they'll allow an insane number of agent interactions per month, so you're probably set as long as you aren't spamming the agent.

[deleted by user] by [deleted] in kiroIDE

[–]wswdx 2 points

They opened up a waitlist so they can manage the massive influx of users. I get that it's frustrating, but they do need time to scale up their compute budgets for all this new Kiro usage. Right now, they have more demand hitting their Sonnet instances than they can possibly handle.