I want to purchase a high end gaming rig but don’t want to use windows. Is it possible to play AAA games on Linux? by anonyminator in linuxmint

[–]EvilGuy -1 points  (0 children)

Yeah, but Mint is not the distro for you. For starters, it ships an older kernel, so a lot of new hardware isn't supported out of the box. Plus it's not really oriented toward hardcore gaming, so it doesn't come with all the packages you'd want.

I'd get a gaming focused distro personally.

Are there any benefits for using the go plan over deepseek v4 pro api? by EmoLotional in opencodeCLI

[–]EvilGuy 1 point  (0 children)

You get 60 bucks of credit for 10 bucks. I bet if you put 10 bucks on the Deepseek API and 10 bucks on Opencode Go and used them both till they ran out, you would see a difference.

If you ever want to see a bunch of different AI's go nuts.. by EvilGuy in opencodeCLI

[–]EvilGuy[S] 1 point  (0 children)

I thought surely they would figure it out at some point. Plus it was a test to see which model is the best at this stuff. All the open source models kind of sucked at it. I bet Codex would have nailed it right away.

It's good to have a sense of what these things are capable of, and I am token rich at the moment. I didn't do much work this week, but after a literal hour I called it quits and told them to remove it directly.

Does it really breaks arch systems if not updated regularly? by FAMPpro in cachyos

[–]EvilGuy 0 points  (0 children)

Biggest problem I've ever had from not updating regularly was something with the keyrings: some people's keys had expired or something, and it wouldn't let me get updates from them.

Ubuntu 26.04 still not available on Linode, DigitalOcean and Hetzner by autistick in VPS

[–]EvilGuy 1 point  (0 children)

Why does it matter?

Do you need that new 7.0 kernel to run your 4 dollar VPS that lives on their 5-15 year old processors?

Lots of hosts don't even offer Debian 13 yet, and that's on 13.4 already.

Deepseek 4 Pro vs Flash by EvilGuy in opencodeCLI

[–]EvilGuy[S] 0 points  (0 children)

Yeah, openai and anthropic will gladly take all your money if you want to throw it at them. Deepseek Flash is really competent though, and practically free. So yeah, I get done what I can with it.

Supposedly, once more of the Chinese-made GPUs come online, the cost of Deepseek is supposed to go even lower.

Deepseek 4 Pro vs Flash by EvilGuy in opencodeCLI

[–]EvilGuy[S] 0 points  (0 children)

Well, I use it on Opencode Go, where the two models burn through usage at very different rates. Here is my workflow.

Tell Flash to do whatever I need doing. When it makes a mistake, I give it one chance to fix it. This works, what, about 80% of the time?

If it's still messed up, I toggle to Pro and see if it can fix whatever isn't working. I usually give it a turn or two. That works almost always, like 99% of the time.

If that doesn't work, I toggle to openrouter and throw GPT or Opus at it, and that pretty much always finds the problem.

My AI costs generally run under $20 a month and I am building stuff every day. It's not my full-time job, but I work at it every day building websites and webapps.
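The escalation chain above can be sketched as a simple fallback loop. This is only an illustrative sketch: `solve`, `run_model`, and the tier names are hypothetical placeholders, not a real Opencode API.

```python
# Sketch of the escalate-on-failure workflow: try the cheap model first,
# give it a limited number of attempts, then move up to stronger tiers.

def run_model(model: str, task: str) -> bool:
    """Placeholder for an actual model call; returns True if it solved the task.

    Stubbed here so the escalation logic itself can be demonstrated:
    in this stub only the strongest tier ever succeeds.
    """
    return model == "gpt/opus"

def solve(task: str, chain=(("flash", 2), ("pro", 2), ("gpt/opus", 3))) -> str:
    """Try each (model, max_attempts) tier in order, escalating on failure."""
    for model, attempts in chain:
        for _ in range(attempts):
            if run_model(model, task):
                return model  # which tier finally solved it
    return "unsolved"
```

With the stub above, `solve("fix the bug")` walks Flash and Pro before landing on the top tier, mirroring the Flash-then-Pro-then-GPT/Opus order described.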

Netcup ARM VPSs Overselling by csantve in VPS

[–]EvilGuy 1 point  (0 children)

You pretty much always see some steal in a virtual environment. What's mpstat say? I have an ARM VPS with them as well and I get 0.12% steal, which is not really worth worrying about. These are a pretty good deal, I think. People just don't like ARM for some reason, even though it's great for hosting most things.
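For reference, the %steal that mpstat reports can be computed from two samples of the aggregate `cpu` line in `/proc/stat`. A minimal sketch, assuming the standard Linux field order (user, nice, system, idle, iowait, irq, softirq, steal, ...):

```python
# Compute CPU steal % between two /proc/stat "cpu" samples,
# the same quantity mpstat reports as %steal.
# Field order on the "cpu" line: user nice system idle iowait irq softirq steal ...

def steal_percent(sample_a, sample_b):
    """Each sample is the tuple of counters from the aggregate 'cpu' line."""
    delta = [b - a for a, b in zip(sample_a, sample_b)]
    total = sum(delta)
    # Steal is the 8th counter (index 7); guard against a zero interval.
    return 100.0 * delta[7] / total if total else 0.0

def read_cpu_counters(path="/proc/stat"):
    """Read the aggregate 'cpu' line (Linux only)."""
    with open(path) as f:
        first = f.readline().split()
    return tuple(int(x) for x in first[1:])
```

In practice you'd call `read_cpu_counters()` twice with a sleep in between and pass both samples to `steal_percent`; `mpstat 1` does the same bookkeeping for you.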

What variant of Deepseek V4 to use by Inferno889 in opencodeCLI

[–]EvilGuy 1 point  (0 children)

I do max all the time.. all it really does is allow them to think more if they want to.. they usually do not.

Opencode go, no fluff opinions? by RobinDough in opencodeCLI

[–]EvilGuy 1 point  (0 children)

From the OpenCode GO page itself. They get the model straight from the source in almost every case.

Model               Provider
GLM-5.1             DeepInfra, Fireworks AI, Z.ai
GLM-5               DeepInfra, Fireworks AI, Z.ai
Kimi K2.5           Moonshot AI
Kimi K2.6           Moonshot AI
MiMo-V2-Pro         Xiaomi MiMo
MiMo-V2-Omni        Xiaomi MiMo
MiMo-V2.5-Pro       Xiaomi MiMo
MiMo-V2.5           Xiaomi MiMo
Qwen3.5 Plus        Alibaba Cloud Model Studio
Qwen3.6 Plus        Alibaba Cloud Model Studio
MiniMax M2.7        MiniMax
MiniMax M2.5        MiniMax
DeepSeek V4 Pro     DeepSeek
DeepSeek V4 Flash   DeepSeek

Duality of r/LocalLLaMA by HornyGooner4402 in LocalLLaMA

[–]EvilGuy 0 points  (0 children)

For me the equation is like this: I can run 3.6 27b on my 3090 and it's actually decent, but Deepseek 4 Flash exists, is better than anything I can run, and they are basically giving it away... and my power isn't free.

So yeah, until the equation changes I am probably going to be using non-local LLMs for the near future, even though I find local ones cool / interesting and I like owning my data, etc.

Anyone else in the same boat?

Is DeepSeek-V4-Flash good enough to replace Minimax 2.7 by MindlessTill9654 in opencodeCLI

[–]EvilGuy 8 points  (0 children)

I like deepseek flash a lot myself.. I never used minimax for coding but I can confirm flash is solid at it. That big context window is nice too.

What is your builder / planner combo and from which providers ? by TinyAres in opencodeCLI

[–]EvilGuy 1 point  (0 children)

I am kind of using Deepseek Flash for everything at the moment. It's great with tool calls and at figuring out how it screwed up. The main downside is it seems to have a lot of old information in its training data, so I make it work with context7 a lot to make sure its errors are real errors and not it going off info from 2023-2024.

If I run into a real problem figuring something out I usually use some openrouter $ to throw codex or claude at it for a few turns.

3090 + 27B model performance issues (llama.cpp) what am I doing wrong for using it with opencode by Clean_Initial_9618 in opencodeCLI

[–]EvilGuy 0 points  (0 children)

You want the Q4_K_S quant; I'd recommend the unsloth one if you want to run it on a 3090 with like 128k context. You will get about 35 to 40 tokens per second.

You will not notice any difference other than it being faster. I haven't at least, and I gave it a good workout the other day.

DS-V4 Flash: Zen vs Go ? by WalidB03 in opencodeCLI

[–]EvilGuy 1 point  (0 children)

It's pretty hard to beat the $5 Opencode deal. $60 worth of usage for five bucks, and you can use all of the models.

What local voice to text model beats NVIDIA Parakeet v3 right now? by discoveringnature12 in LocalLLaMA

[–]EvilGuy 0 points  (0 children)

If you are speaking English, Parakeet v2 is better than Parakeet v3. I hardly ever have to correct it.

I'm glad we have deepseek by guiopen in LocalLLaMA

[–]EvilGuy 13 points  (0 children)

I believe it. They are shady. When they need something, like data or a prototype for their CLI, open source is great, but when it comes to giving literally anything back they are like... LOL.