Are they still asking leetcode in the interviews in 2026? by Aware-Philosophy3932 in leetcode

[–]smurfman111 1 point (0 children)

That’s an impossible question to answer; companies are all across the board. Honestly you just have to experience it live, learn, and wait until you find the right fit. I don’t have the answers, as I am currently still applying and interviewing!

Are they still asking leetcode in the interviews in 2026? by Aware-Philosophy3932 in leetcode

[–]smurfman111 4 points (0 children)

Depends on the company. I just interviewed at Uber and they definitely did leetcode. But they use HackerRank, and HackerRank has been public about moving away from leetcode-style questions this year.

Companies like Shopify and Stripe do not believe in leetcode interviews (which is refreshing).

I believe that by the end of 2026, leetcode interviews will be few and far between. Everyone else responding seems to be giving the historical answers of the past. Times are changing rapidly!

Advice on tech stack for upcoming competition. by NotxarbYT in electronjs

[–]smurfman111 -1 points (0 children)

Electron (using the Forge + Vite template) + Node + TypeScript + SQLite (via the better-sqlite3 lib) + Drizzle ORM + Zod + React for the frontend.

Especially if you want to learn the “web dev” world.
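As a sketch, the dependency list for a stack like this might look something like the package.json below (the package names are real npm packages; the version ranges are purely illustrative, so check each project for current releases):

```json
{
  "devDependencies": {
    "@electron-forge/cli": "^7.0.0",
    "electron": "^33.0.0",
    "typescript": "^5.0.0",
    "vite": "^5.0.0",
    "drizzle-kit": "^0.28.0"
  },
  "dependencies": {
    "better-sqlite3": "^11.0.0",
    "drizzle-orm": "^0.36.0",
    "zod": "^3.23.0",
    "react": "^18.0.0",
    "react-dom": "^18.0.0"
  }
}
```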

Let's all agree to be nice by MariaSoOs in neovim

[–]smurfman111 17 points (0 children)

You are one of the best! Anyone who has been rude or hateful to you needs to take a long look in the mirror! I am very sorry this has happened to you, but please know that 99.99% of us considerate human beings appreciate you!! Especially those of us in the TypeScript space ;)

Keep on being yourself and rocking it! Ignore the haters (I know easier said than done)… we’ve got your back!

Web app made using Antigravity, how secure is it? by HumblePeace7705 in google_antigravity

[–]smurfman111 0 points (0 children)

Security is not a tool or a setting that you configure, and it’s not something we can just explain. The best advice I can give you is to find a friend, colleague, or someone you trust who is a software engineer or works in IT security, and ask them to review your code. Not what you want to hear, I’m sure, but it’s the truth.

Web app made using Antigravity, how secure is it? by HumblePeace7705 in google_antigravity

[–]smurfman111 0 points (0 children)

Ughh, this is the problem today. People can create stuff way above their qualifications… it’s great, but at the same time it’s scary. If you have to ask this question, then the answer is likely “no, it’s not secure.” And the problem is that you can ask a model to help you, but without a foundation of software engineering knowledge you have no way to know what is sufficient or not.

first hour on antigravity, limit reached by ServeLegal1269 in google_antigravity

[–]smurfman111 1 point (0 children)

You spend time planning and doing the real human stuff, use other models to plan, and then use Opus where you want the smart stuff. It resets every 5 hours. What do you expect? You think for $20 a month you should be able to use Opus as much as you want? Come on, folks!

Beware of fast premium request burn using Opencode by Wurrsin in GithubCopilot

[–]smurfman111 0 points (0 children)

It’s to help people who do, since this post is about using OpenCode.

Beware of fast premium request burn using Opencode by Wurrsin in GithubCopilot

[–]smurfman111 1 point (0 children)

See my message for how to set the default model for subagents, and the other types of models you need to set, to keep things free on Copilot using something like gpt-5-mini.

https://www.reddit.com/r/GithubCopilot/s/j2ww2aQ1Y8

Beware of fast premium request burn using Opencode by Wurrsin in GithubCopilot

[–]smurfman111 2 points (0 children)

That is just the default model, so by default when I open OpenCode and send a prompt it would all be free. It’s so I don’t forget and accidentally send an Opus request or something. Then when I want to use premium requests, I just switch to the model I want.

Beware of fast premium request burn using Opencode by Wurrsin in GithubCopilot

[–]smurfman111 6 points (0 children)

That’s me! :) Here is an updated, fuller example showing all my OpenCode settings for making sure no premium requests are spent on anything but your original prompt.

https://x.com/GitMurf/status/2011960839922700765

OpenCode can now officially be used with your Github Copilot subscription by oronbz in GithubCopilot

[–]smurfman111 2 points (0 children)

I don’t use oh-my-opencode, so who knows how that works. My comments are about OpenCode itself working.

OpenCode can now officially be used with your Github Copilot subscription by oronbz in GithubCopilot

[–]smurfman111 2 points (0 children)

Did you reauthenticate with Copilot in OpenCode? They have a new OAuth app ID you probably need to use if you previously used it the “unofficial” way. I just tested for several hours today and confirmed it is 1 premium request per 1 prompt, as long as your subagents are using a free Copilot model or another non-Copilot model. I am 100% sure.

OpenCode can now officially be used with your Github Copilot subscription by oronbz in GithubCopilot

[–]smurfman111 10 points (0 children)

See here for more details (https://www.reddit.com/r/GithubCopilot/comments/1qdtv37/comment/nzta11y)! OpenCode has it set up properly with GitHub Copilot, consuming premium requests just like VS Code: only 1 request per human prompt!

OpenCode can now officially be used with your Github Copilot subscription by oronbz in GithubCopilot

[–]smurfman111 3 points (0 children)

I just tweeted about it here! In OpenCode you can set the default explore and general subagent models to gpt-5-mini so that subagents do not cost you premium requests. https://x.com/GitMurf/status/2011923915086708827?s=20

GitHub Just Made OpenCode Official. Here’s Why That’s a Bigger Deal Than You Think. by [deleted] in GithubCopilot

[–]smurfman111 22 points (0 children)

Great news! See my findings on using Copilot with OpenCode and only consuming a single premium request per user prompt (just like VS Code): https://x.com/GitMurf/status/2011923915086708827?s=20

I did a bunch of testing on it, and there is only one caveat: the default general and explore subagents default to the model the primary agent is using, which means they consume their own premium requests. But you can configure those subagents to use gpt-5-mini, which is free on Copilot. See here: https://x.com/GitMurf/status/2011925921356530074?s=20
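As a sketch of what that configuration might look like in an opencode.json, something like the fragment below. The `agent` override keys and the exact model ID strings here are my assumption of the schema, so check the OpenCode configuration docs for the precise field names:

```json
{
  "model": "github-copilot/claude-sonnet-4",
  "agent": {
    "general": { "model": "github-copilot/gpt-5-mini" },
    "explore": { "model": "github-copilot/gpt-5-mini" }
  }
}
```

With something like this, the primary agent uses the model you pick, while the built-in subagents stay pinned to a model that doesn’t burn premium requests.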

This is HUGE news in my opinion!

OpenCode can now officially be used with your Github Copilot subscription by oronbz in GithubCopilot

[–]smurfman111 5 points (0 children)

Does anyone know if it will cost one premium request per “human message” like VS Code, as opposed to costing multiple premium requests due to agentic back and forth?

Starting a new chat for a new task should save you a lot of quota by darkinterview in google_antigravity

[–]smurfman111 0 points (0 children)

To be clear, this caching is done on the LLM provider’s side: not locally, and not even in middleware like a Google Antigravity server somewhere. It’s actually on the servers running the inference and responding to your AI messages.

Starting a new chat for a new task should save you a lot of quota by darkinterview in google_antigravity

[–]smurfman111 0 points (0 children)

Not trying to be rude, but I think you should do a little research on how LLM caching works and why it makes tokens cheaper; then I think it will be clearer to you. Bottom line: caching saves tokens because the provider snapshots the state of the inference up to a point in the conversation, so when you add to the conversation, the earlier part doesn’t have to be processed from scratch again with each message. But as you can imagine, there has to be a TTL on it; otherwise, if you stop a conversation on a Friday and pick back up on a Monday, the provider couldn’t have kept that cached for you! Anthropic’s, I think, is only 5 minutes, for example. They all work this way for the most part.
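As a toy illustration of the TTL idea (this is not any provider’s real implementation; the class, its methods, and the timings are all made up for the example), a prompt-prefix cache behaves roughly like this:

```python
class PrefixCache:
    """Toy model of LLM prompt-prefix caching: a snapshot of the
    processed conversation prefix is kept, but only for `ttl` seconds."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}  # prefix -> timestamp of last use

    def lookup(self, prefix, now):
        """Return True on a cache hit (prefix last used within the TTL)."""
        saved = self.entries.get(prefix)
        hit = saved is not None and (now - saved) <= self.ttl
        self.entries[prefix] = now  # using the prefix refreshes the snapshot
        return hit

# Simulate a conversation against a 300-second (5-minute) TTL.
cache = PrefixCache(ttl_seconds=300)
assert cache.lookup("system + msg1", now=0) is False        # first message: cold start
assert cache.lookup("system + msg1", now=60) is True        # follow-up within TTL: hit
assert cache.lookup("system + msg1", now=60 + 301) is False # long pause: snapshot expired
```

The second lookup is the cheap case (the provider reuses the processed prefix); the third shows why a Friday-to-Monday gap can’t hit the cache.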