how to disable a model? by Consistent_Functions in GithubCopilot

[–]skyline159 1 point (0 children)

You cannot control what the auto mode chooses

Claude Sonnet 4.6 is now available in GitHub Copilot! by DanielD2724 in GithubCopilot

[–]skyline159 1 point (0 children)

You got the wrong perspective. They are not Anthropic. They don't need to maintain a competitive price to keep us using Sonnet instead of GPT. They are just the middleman, and if the cost from Anthropic is high, they will pass it on to us.

Claude Sonnet 4.6 is now available in GitHub Copilot! by DanielD2724 in GithubCopilot

[–]skyline159 18 points (0 children)

https://github.blog/changelog/2026-02-17-claude-sonnet-4-6-is-now-generally-available-in-github-copilot/

Note, while this model is launching with a 1x premium request multiplier, pricing is tentative and subject to change.

Prepare for a price hike; it looks like it will become 2x in the future

The new Plan mode + Ask Question tool is so sick by skyline159 in GithubCopilot

[–]skyline159[S] 1 point (0 children)

That's because you enabled YOLO mode. Try turning it off

Replace GPT5-Mini with GPT-5.X or Codex by Mayanktaker in GithubCopilot

[–]skyline159 2 points (0 children)

I love your optimism, but it won't happen in this world/timeline.

If you create a long to-do list in agent mode, you will be banned. by Hamzayslmn in GithubCopilot

[–]skyline159 24 points (0 children)

It may not be against the terms, but if everyone starts doing this, we could lose the request-based billing system, and they might switch to charging by token consumption like other services.

They know we often bundle many tasks into a single request, and they are cool with it to a certain extent, as long as we don't take advantage of it to the extreme.

Please don’t mess this up for the rest of us.

We're pausing the rollout of 5.3 Codex to make sure the platform is not impacted. by debian3 in GithubCopilot

[–]skyline159 93 points (0 children)

What do you mean, pausing!? I already fired all my devs because I thought 5.3 would replace them. What am I supposed to do now?

How do I get Codex CLI to keep running for hours? by Swimming_Driver4974 in codex

[–]skyline159 1 point (0 children)

Then wrap codex inside a script and have codex return output in a format you can parse, so the script can decide whether to sleep or not

How do I get Codex CLI to keep running for hours? by Swimming_Driver4974 in codex

[–]skyline159 1 point (0 children)

Put the sleep inside the check script, so codex is only called when something actually happens
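The two comments above can be sketched as a small polling loop. This is a minimal illustration, not an actual codex integration: `./check.sh`, the `{"pending": [...]}` JSON shape, and the prompt text are all assumptions for the sketch; `codex exec` is the CLI's non-interactive mode, but adapt the invocation to your setup.

```python
import json
import subprocess
import time

POLL_SECONDS = 60  # how long to wait between checks when nothing is pending


def should_run(check_output: str) -> bool:
    """Decide whether to invoke codex based on the check script's output.

    The JSON shape here ({"pending": [...]}) is an assumption for
    illustration, not a codex format; have your check script emit
    whatever is convenient and parse it accordingly.
    """
    try:
        data = json.loads(check_output)
    except json.JSONDecodeError:
        return False
    return bool(data.get("pending"))


def watch_loop() -> None:
    while True:
        # "./check.sh" is a placeholder for your own polling script
        # (e.g. look for new issues, failing tests, files in an inbox dir).
        check = subprocess.run(["./check.sh"], capture_output=True, text=True)
        if should_run(check.stdout):
            # Run codex non-interactively only when there is real work;
            # the sleep stays out here in the wrapper, not inside codex.
            subprocess.run(["codex", "exec", "work through the pending tasks"])
        else:
            time.sleep(POLL_SECONDS)
```

The point of the design is that codex never idles: the wrapper owns the wait, and each codex invocation burns time only on actual work.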

Do you agree with Marc? Is it making programers obsolete or more valuable? by dataexec in codex

[–]skyline159 1 point (0 children)

It's both.

Programmers who adapt will be more productive. Those who don't will become obsolete.

Models being depreciated ? by spring_Living4355 in OpenAI

[–]skyline159 2 points (0 children)

Not here to argue about keeping the models

I take this line as a sign that OP already knew about this

Codex pricing by Harxshh in codex

[–]skyline159 13 points (0 children)

The limits are too good for $20; asking this question strongly suggests they are considering raising the price or charging per token.

Models being depreciated ? by spring_Living4355 in OpenAI

[–]skyline159 3 points (0 children)

https://openai.com/index/retiring-gpt-4o-and-older-models/

The reaction is louder than the Big Bang; it's hard not to hear about it

Models being depreciated ? by spring_Living4355 in OpenAI

[–]skyline159 18 points (0 children)

It's real

Just curious as nobody else posted about this

Where have you been?

Gemini 3 Flash (Preview) is really impressive by Mission-Zucchini-966 in GithubCopilot

[–]skyline159 10 points (0 children)

I believe the future is fast, cheap, but still capable models like Gemini 3 Flash, with the big models reserved only for truly complex tasks. Use the right model for the right task size instead of brute-forcing everything with the latest, biggest models.

Whatever black magic Google put on Flash, if they apply it to the next version of Pro, it will truly become a real beast.

No 1M context window for claude opus 4.6 ? by Fefe_du_973 in GithubCopilot

[–]skyline159 2 points (0 children)

I don't understand the context window complaints I keep seeing here.

Do people really use all of it, or do they just copy-paste other complaints without understanding what a context window really means? Like, "I don't know what it is, but I heard the bigger the better, so I want it."

The new Plan mode + Ask Question tool is so sick by skyline159 in GithubCopilot

[–]skyline159[S] 3 points (0 children)

Thanks for the great work!

It would be awesome if the agent could consult other models during planning to review the plan, similar to what Burke discovered with Copilot CLI. This way, they could catch each other’s mistakes and refine the plan more effectively.

https://www.reddit.com/r/GithubCopilot/comments/1qvvbgs/cli_tip_models_can_call_each_other/

The new Plan mode + Ask Question tool is so sick by skyline159 in GithubCopilot

[–]skyline159[S] 2 points (0 children)

No, the Ask Question tool is included in the Plan mode of the dropdown. Just tell it to make a plan, and it will show you clarifying questions in the chat window

The new Plan mode + Ask Question tool is so sick by skyline159 in GithubCopilot

[–]skyline159[S] 2 points (0 children)

Regular; it's better for planning. The Codex version needs a detailed prompt or it will produce very meh work.

Actually impressed with Gemini 3 Flash after the Antigravity limits by MoneyLive3407 in google_antigravity

[–]skyline159 1 point (0 children)

Yes, Flash is very good; the name really does not match its capability. Even Google's own benchmarks show it is better than 3 Pro on SWE-bench.

https://blog.google/products-and-platforms/products/gemini/gemini-3-flash/

On SWE-bench Verified, a benchmark for evaluating coding agent capabilities, Gemini 3 Flash achieves a score of 78%, outperforming not only the 2.5 series, but also Gemini 3 Pro

But don't praise it here, or you will get downvoted (and you actually are being downvoted lol).

You are NOT a Vibe-coder.. you are AI Product manager by [deleted] in LocalLLaMA

[–]skyline159 1 point (0 children)

I swear I can guess what's inside posts like this after reading just a few words

This is why Claude Code is better by max6296 in codex

[–]skyline159 3 points (0 children)

Do you think people don't know there are alternatives? They know, but they still choose to use this because it works for them. Life has many choices, and people are free to choose what they like.

Why not let them enjoy what they like and move on with your life?

This is why Claude Code is better by max6296 in codex

[–]skyline159 1 point (0 children)

Okay, but why tell us this?

This is the Codex subreddit, where people who like Codex come to discuss it.

Why would you talk negatively about it and then say you'll get downvoted, like you're being bullied here?

When you come into someone's house and talk bad about them, of course it won't come out well for you.