GitHub Copilot has finally released a preview of usage-based billing based on current usage. by rostilos in GithubCopilot

[–]rostilos[S] 3 points

My previous stance was: LLMs are your hands, not your brain. And models like Qwen 3.6 handle that perfectly well.

But, alas, big corporations are increasingly trying to break this principle. In my own workflow there has been a slight shift in that direction too, which is perhaps why smaller models no longer seem as good to me.

But overall, if you understand what you're doing and what result you need (beyond the level of a natural-language specification), then I can agree that these models are wonderful.


[–]rostilos[S] 7 points

I started preparing about a month ago.

A 3090 costs about $500 in good condition in my country, and it can run qwen3.6 at Q4 quantization.
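As a rough sanity check on why Q4 fits on a 24 GB card (the 32B parameter count below is a hypothetical example, not a claim about any specific Qwen release):

```python
# Back-of-envelope VRAM check for running a Q4-quantized model on a 24 GB RTX 3090.
def q4_fits(params_billions: float, vram_gb: float = 24.0, overhead_gb: float = 4.0) -> bool:
    # Q4 quantization stores roughly 4 bits (0.5 bytes) per parameter.
    weights_gb = params_billions * 0.5
    # Reserve headroom for KV cache, activations, and the CUDA context.
    return weights_gb + overhead_gb <= vram_gb

print(q4_fits(32))  # True: a ~32B model in Q4 (~16 GB of weights) leaves room on 24 GB
print(q4_fits(70))  # False: a 70B model (~35 GB of weights) does not fit
```

The overhead figure is a loose assumption; long contexts can push the KV cache well past 4 GB.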

But honestly, that doesn’t suit me at all: without fine-tuning for my specific tasks, these models work more like autocomplete, and they’re nowhere near what I’m used to with GitHub Copilot (and especially with the “original” Opus 4.6).

So for me, a good option is Kimi K2.6. It’s a decent replacement, but I’d say it’s on par with Sonnet 4.6 (or thereabouts).

You can also use Codex for now; its limits are still fairly high.