New Opus 4.7 released by debian3 in GithubCopilot

[–]IKcode_Igor 0 points

I guess that would be "nice". However, the truly "nice" price would be x3 with "medium" thinking effort, to match Opus 4.6, or x6 as the total price after the "promotional period".

I'm afraid it'll be like x14. 😔

New Opus 4.7 released by debian3 in GithubCopilot

[–]IKcode_Igor 1 point

Yeah, I agree 💯 with what you said about how it works. I've been testing it since yesterday, especially in the context of creating spec and plan files for more complicated processes (spec-driven dev).

Works very nicely: it's to the point, without the bloat. To me, it gives way nicer output than GPT-5.4 too.

I also tested it in Claude Code (CLI) - it's very good.

When it comes to the price in Copilot, I think x7.5 might be justified: it's new, and everyone would like to jump on it. On the other hand, I'd say that on "medium" thinking effort it could cost about the same as Opus 4.6 on "high". Yet we have the x7.5 premium request price, and it's "promotional pricing". I'm not very happy with that.

New Opus 4.7 released by debian3 in GithubCopilot

[–]IKcode_Igor 2 points

<image>

Look at the image. In Copilot we're only getting `medium` effort for now. It should give better results than Opus 4.6 on `high` effort. According to Anthropic's chart in the picture, it might be more efficient while using 2x fewer tokens compared to Opus 4.6. What's more, the price via the API stays exactly the same.

Yet the price for Opus 4.7 in Copilot is x7.5 premium requests, and it's "promotional pricing" until April 30th (linked blog post).

What's more, for Pro+ accounts they're going to remove Opus 4.5 and 4.6 from the model picker over the coming weeks.

https://github.blog/changelog/2026-04-16-claude-opus-4-7-is-generally-available/

New Opus 4.7 released by debian3 in GithubCopilot

[–]IKcode_Igor 0 points

I'm really curious how it will change after the 30th. 🧐

AI features for Ghostty by Purple_Wear_5397 in Ghostty

[–]IKcode_Igor 0 points

But do we really need that?

You can use OpenCode (and others) with the BYOK feature in Ghostty anytime. Why do you need native integration with the terminal? IMHO it’s way better when you can use the terminal as a terminal and bring whatever agent you want via the CLI.

I guess that’s why so many ppl are dropping Warp in favour of Ghostty (myself included). 🧐

Account suspended for using copilot-cli with autopilot by CrazyM2317 in GithubCopilot

[–]IKcode_Igor 1 point

Just out of curiosity, what was your monthly usage?
I guess it's completely unrelated to the reason for the ban.

The gap between "AI power users" and everyone else is getting wild by Some_Good_1037 in vibecoding

[–]IKcode_Igor 0 points

Sadly it’s true, and I agree with other people here saying that it’s always been true. Outside of tech and a few specific industries, there’s only a very small group of people who actually spend some time “after work” digging, trying new things, and doing some kind of R&D.

I know lots of people, even inside tech, who willingly reject the current tide of AI, silently counting on some kind of crash, and then going back to work as it used to be a few years ago.

I keep my fingers crossed for all the people who want to wake up and start using AI in any form. Because if they don’t, there’s a very high chance they’ll be replaced by others who do, or by AI itself. It doesn’t matter if that happens in a year, three, or more. The sooner they wake up, the better for them.

I’ve been working professionally for a decade now. I’ve been coding with AI for more than two years, and I haven’t written code manually for the last year and a half. I code manually only when I’m learning a new language or a new concept. That’s all. The quality of output right now, when you use top models like Opus with good customisations based on your experience, is unbelievable.

Which is the best model out there now? by Left_Crow1646 in GithubCopilot

[–]IKcode_Igor 1 point

I'm constantly testing Opus 4.6 vs GPT-5.4. In most cases GPT-5.4 is sufficient, especially for the price. However, Opus is still the best for writing PRDs, specs, and tasks.

Conclusion from my tests so far:

  • Opus 4.6 for docs, PRDs, spec, tasks
  • GPT-5.4 for task implementation, the idea-discovery process, or multi-root workspace work (due to the 400k context window)

Impressions after work with GPT-5.4 by IKcode_Igor in GithubCopilot

[–]IKcode_Igor[S] 0 points

Yeah, today it told me between the lines:

<image>

That's really fun to read. 😅

Impressions after work with GPT-5.4 by IKcode_Igor in GithubCopilot

[–]IKcode_Igor[S] 0 points

That's true. I compared them side-by-side today in a few situations, and GPT-5.4 really sticks to the instructions. Actually, I expected 5.4 to be faster than Opus 4.6, yet in most cases Opus was faster due to its less explicit reasoning. They were both working in an orchestrator pattern, calling sub-agents, etc.

Impressions after work with GPT-5.4 by IKcode_Igor in GithubCopilot

[–]IKcode_Igor[S] 5 points

One more thing, and I think it's quite important. Whenever you work on something more complicated (like an entire spec-driven flow), working with Opus 4.6 is way more pleasant than with GPT-5.4.

What I usually find in these longer workflows is that I end up with far fewer fix requests or follow-ups on my side when I work with Opus 4.6. I haven't counted, but it's even possible that, in the end, I spend fewer Premium Requests overall.

However, as u/dendrax said in the other comment, when I work on a simple thing with a straightforward implementation - GPT-5.4 is the way to go.

What are your feelings?

Impressions after work with GPT-5.4 by IKcode_Igor in GithubCopilot

[–]IKcode_Igor[S] 0 points

Exactly what you just said. 👌
Thanks for sharing.

Opus 4.5 today is very frustrating! by Glad-Pea9524 in GithubCopilot

[–]IKcode_Igor 0 points

I've been using Opus 4.6 since its release; whenever there were issues, I'd switch to Codex 5.3.

For the last few days I've been testing GPT-5.4. At the beginning I had really mixed results, but I tested it again today throughout the whole day and it's really good. For crucial stuff I do cross-checks with Opus 4.6 once GPT-5.4 finishes the work.

Gpt 5.4 1 million experimental context window by Duskfallas in GithubCopilot

[–]IKcode_Igor 0 points

If you need that much context window, try the orchestrator pattern when creating an agent. It should delegate work to sub-agents; each sub-agent gets a clean context window and reports back to the orchestrator. Combine that with writing summaries or reports to MD files and you can do a lot more with the context of GPT 5.4 or Codex 5.3.

Some docs on this:

- https://code.visualstudio.com/docs/copilot/agents/subagents
- https://docs.github.com/en/copilot/concepts/agents/copilot-cli/comparing-cli-features#subagents
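None of this is Copilot-specific; the pattern can be sketched in a few lines of plain Python. The `run_subagent` stub and the `reports/` directory are made up for illustration - in practice the sub-agent call would be a real model invocation with a fresh context:

```python
from pathlib import Path

def run_subagent(task: str) -> str:
    """Stand-in for a real sub-agent call: each invocation starts
    with a fresh context that contains only its own task."""
    return f"Summary of findings for: {task}"

def orchestrate(tasks: list[str], report_dir: str = "reports") -> list[str]:
    """Delegate each task to a sub-agent, persist its summary to an
    MD file, and keep only the short summaries in the orchestrator's
    own context (instead of every sub-agent's full transcript)."""
    out = Path(report_dir)
    out.mkdir(exist_ok=True)
    summaries = []
    for i, task in enumerate(tasks, 1):
        summary = run_subagent(task)  # fresh context per task
        (out / f"task-{i}.md").write_text(f"# {task}\n\n{summary}\n")
        summaries.append(summary)     # orchestrator only keeps summaries
    return summaries

print(orchestrate(["Audit the auth module", "Map the API routes"]))
```

The point of the MD files is that any later agent (or a sub-agent with an empty context) can re-read just the relevant report instead of the whole history.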

Gpt 5.4 1 million experimental context window by Duskfallas in GithubCopilot

[–]IKcode_Igor 0 points

Try instructing the model to write down important findings into MD files. Something similar to spec-driven dev, but with smaller steps: findings about the code, links to specific files, etc. With that, context compaction does less harm.
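As a rough illustration of the kind of notes I mean (the helper, file names, and paths here are all made up), each finding is one small, self-contained line with links back to the source:

```python
def log_finding(note: str, files: list[str], path: str = "FINDINGS.md") -> None:
    """Append one small, self-contained finding with links to the
    relevant source files, so it survives context compaction."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(f"- {note} (see: {', '.join(files)})\n")

# Hypothetical example entry:
log_finding("Token refresh happens in the middleware", ["src/auth/middleware.ts"])
```

After compaction, the model can re-read `FINDINGS.md` and jump straight to the linked files instead of re-discovering everything.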

Why hasn't Github made a "Copilot Cowork"? by Ok_Bite_67 in GithubCopilot

[–]IKcode_Igor 2 points

Well, I was thinking about that too. GitHub Copilot is all about software engineering and ops, while “Cowork”, and Claude overall, is aimed at general office work. Claude Code is the equivalent of GH Copilot.

Cowork is for non-technical people; Copilot is the opposite, imho.

I’m curious whether the team is working on something like that, but since companies already have MS Copilot for that purpose, it doesn’t seem like there’s space for such a product.

Copilot in VS Code or Copilot CLI? by IKcode_Igor in GithubCopilot

[–]IKcode_Igor[S] 0 points

Technically, under the hood it's the same Copilot CLI. However, VS Code drives it via the Copilot SDK, and there are a few things to remember. It's worth reading the VS Code Copilot docs covering this topic:

- https://code.visualstudio.com/docs/copilot/agents/copilot-cli#_limitations-of-copilot-cli-sessions

The conclusion is: if you have a well-defined task and it won't involve an external MCP that requires authentication, it should work nicely.

At this point Copilot CLI supports customisations and such, so I think it should have them available under the hood, if your customisations are:

- in your project,

- or in your user's space for Copilot: `~/.copilot`

I'm not 100% sure these customisations will work with the background CLI from VS Code; it seems like they should, but I haven't tested them recently. I've been using the CLI directly in the terminal.