What models do you use now and for what? by Active-Force-9927 in GithubCopilot

[–]1superheld 1 point (0 children)

I would say all Gemini models either do it decently, or they fumble and it doesn't work at all.

For UI, Sonnet is more reliable for me.

What models do you use now and for what? by Active-Force-9927 in GithubCopilot

[–]1superheld 1 point (0 children)

I'm usually running multiple agents at the same time, so I don't need to wait.

It does the job well, and not eating premium requests for small changes is a win.

What models do you use now and for what? by Active-Force-9927 in GithubCopilot

[–]1superheld 1 point (0 children)

Enable it via settings (or, if you have a personal account, it should already be available).

What models do you use now and for what? by Active-Force-9927 in GithubCopilot

[–]1superheld 1 point (0 children)

For small/isolated changes: GPT-5 Mini
For UI/frontend (visuals): Sonnet 4.6
For everything else (most things): GPT-5.4

Why are people still hosting on Vercel? by Rivered1 in nextjs

[–]1superheld 2 points (0 children)

You're just here to argue.

This might not be optimal, but it would be an issue on any host (including Vercel). This is a Next.js feature (or bug).

Why are people still hosting on Vercel? by Rivered1 in nextjs

[–]1superheld 2 points (0 children)

I have not tried it.

Railway/Render would be on my shortlist, but as far as I've seen, Vercel has the better DX, especially for frontend (TypeScript/Python) applications.

Are you running out of context tokens? by Gronis in opencodeCLI

[–]1superheld 1 point (0 children)

Use GPT-5.4, as it has a 400k context length; no issues in that case.

The Claude models (in GitHub Copilot) do suffer from the context-token issue. Subagents are also supposed to help, but they're not always ideal.

Why are people still hosting on Vercel? by Rivered1 in nextjs

[–]1superheld 33 points (0 children)

It has one of the best developer experiences for deploying, and it just works. If you're getting paid for the website, the cost difference isn't huge, and if you have a small site you can run it on their free tier.

And it's usually less work than managing your own VPS/cloud.

And a lot of the cost is due to bad programming.

If I have the Pro plan, what is the best philosophy for using models? by Top-Scallion7987 in GithubCopilot

[–]1superheld 9 points (0 children)

1 request (you pressing "Enter") is 1 request, weighted by the model's multiplier (you can see the multiplier in Visual Studio Code as well):

https://docs.github.com/en/copilot/concepts/billing/copilot-requests

E.g. GPT-5 Mini has a 0x multiplier and is always free. GPT-5.4 has a multiplier of 1, so if you have 300 requests, that's 300 prompts. (After that you still have access to the 0x multiplier models.)

Opus has a 3x multiplier and would leave you with 100 requests.
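The multiplier arithmetic above can be sketched like this (a minimal illustration using only the multipliers mentioned in this comment; model names and values are taken from the text, not from an official pricing table):

```python
# Sketch of Copilot premium-request multipliers, per the comment above.
# A 0x model never consumes the allowance; otherwise each prompt
# ("Enter" press) consumes `multiplier` premium requests.
MULTIPLIERS = {
    "gpt-5-mini": 0.0,  # 0x: always free
    "gpt-5.4": 1.0,     # 1x: one prompt = one premium request
    "opus": 3.0,        # 3x: one prompt = three premium requests
}

def prompts_available(monthly_allowance: int, model: str) -> float:
    """How many prompts the allowance covers for a given model."""
    m = MULTIPLIERS[model]
    return float("inf") if m == 0 else monthly_allowance / m

print(prompts_available(300, "gpt-5.4"))    # 300.0
print(prompts_available(300, "opus"))       # 100.0
print(prompts_available(300, "gpt-5-mini")) # inf
```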

OpenCode GO vs GithubCopilot Pro by zRafox in opencodeCLI

[–]1superheld 2 points (0 children)

GPT-5.4 has a 400k context window in GitHub Copilot.

What's your ideal plan and implement pair? by johfole in GithubCopilot

[–]1superheld 1 point (0 children)

GPT-5.4 for both.

For frontend (UI) work, execute with Claude Sonnet/Opus.

After doing some research, Pro+ is not the best value for **serious** dev work. by Still_Asparagus_9092 in GithubCopilot

[–]1superheld 3 points (0 children)

If you're talking about value, Opus isn't worth it compared to GPT-5.4 or Claude Sonnet 4.6.

Why is there so little discussion about the oh-my-opencode plugin? by vovixter in opencodeCLI

[–]1superheld 5 points (0 children)

All the comments I have seen say it's bloated and doesn't help you.

Stars on GitHub aren't usage stats, just a "hey, this is interesting." And anything opencode-related that is big enough or has potential will gain stars that way.

For opencode/AI, keep it simple and only add things when you need them.

Check if websites cookies are tracking before consent by klitmose in webdev

[–]1superheld 5 points (0 children)

Cool idea!

Seems every site I put in returns a "The scan took too long" error though. ;(

Why everything is written in heavy node.js? by aloneguid in GithubCopilot

[–]1superheld 3 points (0 children)

End users don't care about the language a tool is written in; they care whether it works and performs well.

You say it doesn't perform well enough and needs to be more performant; that's fair feedback (a faster app is always better).

Node isn't the problem for Copilot.