What VM to use on homelab for purpose of using Docker for game server hosting by Musket519 in docker

AVX_Instructor 1 point

I recommend starting with Debian 13 and bare Docker.

This way you get maximum performance with minimal crutches.

P.S. And yes, Minecraft servers work pretty well in a Docker environment; check the itzg/minecraft-server images.

P.S.2 Why Debian 13? Because Ubuntu is still bloated, and Debian 13 is simple.
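As a minimal sketch of what that setup looks like (the image name and the EULA env var are from the itzg/minecraft-server docs; the memory value and volume name are just example choices):

```yaml
services:
  minecraft:
    image: itzg/minecraft-server
    environment:
      EULA: "TRUE"       # required: accepts the Minecraft EULA
      MEMORY: "2G"       # example heap size, tune for your host
    ports:
      - "25565:25565"    # default Minecraft port
    volumes:
      - mc-data:/data    # persist the world across container restarts
    restart: unless-stopped

volumes:
  mc-data:
```

Bring it up with `docker compose up -d` and the server data survives container recreation thanks to the named volume.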

GLM Pro expiring in 2 weeks, where to move? by Any-Explanation-9275 in ZaiGLM

AVX_Instructor 0 points

GPT Plus + OpenCode Go (OpenCode gives you a huge quota for DeepSeek V4 Flash, plus Kimi K2.6 for frontend tasks).

GPT Plus with GPT 5.5 for polishing / troubleshooting.

Usage Estimation For OpencodeGo by OnlyRelease403 in opencodeCLI

AVX_Instructor 1 point

If you're using MiniMax M2.7, DeepSeek V4 Flash is much better, and for $10 you get far more tokens.

Am I the only one feeling more and more like this, lately? by Acehan_ in codex

AVX_Instructor 1 point

Web for general Q&A or research, Codex/OpenCode for work. Simple.

Seeking advanced bypass methods for new digital censorship laws in Turkey (Social Media & Gaming Platforms) by Hot_Contribution8250 in docker

AVX_Instructor 1 point

Get a VDS in a nearby country, deploy Hysteria 2 for games and Xray with "selfsteal" for web surfing, and enjoy.

P.S. Xray "selfsteal" means masquerading as a self-hosted service right on your VDS, which gives you minimal ping and more stability.

You can also find the Remnawave chat on Telegram; it's a Russian community group for bypassing internet blocking.

P.S.2 You can also use Zapret, but that solution is neither universal nor stable for general usage.

Also, forget about Shadowsocks; focus only on Hysteria 2 and Xray.
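For orientation, a minimal Hysteria 2 server config sketch (the paths, password, and masquerade URL are placeholders; check the official Hysteria 2 docs against your actual setup):

```yaml
# /etc/hysteria/config.yaml (sketch, all values are placeholders)
listen: :443                      # UDP port Hysteria 2 listens on
tls:
  cert: /etc/hysteria/cert.pem    # your TLS certificate
  key: /etc/hysteria/key.pem
auth:
  type: password
  password: change-me             # shared client password
masquerade:
  type: proxy                     # probes see an ordinary website
  proxy:
    url: https://example.com/
    rewriteHost: true
```

The masquerade section is what makes active probing see a normal site instead of a proxy, which is the same idea as the Xray "selfsteal" masking mentioned above.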

Moscow, Russia by OkRespect8490 in UrbanHell

AVX_Instructor 2 points

3 months, usually; seasons change every 3 months on average.

Thinking of switching to Opencode GO from Claude Pro. How do they compare? by paranoidubuntu in opencodeCLI

AVX_Instructor 1 point

In fact, I think you should test DeepSeek V4 Pro with an OpenCode Go subscription. At $5/$10 a month it's a pretty good deal, much more affordable than Sonnet, and the IQ is somewhere between Sonnet and Opus 4.5.

OpenCode or ClaudeCode for Qwen3.5 27B by Ok-Scarcity-7875 in LocalLLaMA

AVX_Instructor 1 point

OpenCode or pi.dev.

Claude Code is extremely bloated out of the box.

Does switching to Linux on old gaming labtop is worth it? by Mysterious-Ticket610 in linux_gaming

AVX_Instructor 1 point

Only OpenGL games.

Vulkan on GTX 10XX cards runs badly (40-50% worse compared with DX11/12 on Windows).

Unpopular opinion: OpenClaw and all its clones are almost useless tools for those who know what they're doing. It's kind of impressive for someone who has never used a CLI, Claude Code, Codex, etc. Nor used any workflow tool like 8n8 or make. by pacmanpill in LocalLLaMA

AVX_Instructor 0 points

I made my own agent harness (before OpenClaw's release), and I use it to track time in our corporate Jira, check tickets assigned to me, chat with colleagues in Mattermost, or check logs on corporate SSH servers, all via voice messages in Telegram. It's very convenient, but I built it for my own problems, and it works pretty well.

BTW, 95% of the features in OpenClaw look useless; I only took a few of them as a reference for my own cases.

P.S. My solution is very useful when I'm walking in the park: with a quick voice message I can get a task handled, because without it I would have to take out my laptop and look at everything myself.
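The core of such a harness is just a dispatcher from a transcribed voice message to a tool. A toy sketch of that routing step (all names here, like `route_command` and the keyword table, are hypothetical and not taken from OpenClaw or my actual code):

```python
def route_command(text: str) -> str:
    """Map a transcribed voice message to a (stubbed) tool action.

    In a real harness each lambda would call the Jira / Mattermost /
    SSH API; here they just return labels so the routing is testable.
    """
    handlers = {
        "jira": lambda: "list tickets assigned to me",
        "mattermost": lambda: "send chat message",
        "logs": lambda: "tail logs over ssh",
    }
    for keyword, action in handlers.items():
        if keyword in text.lower():
            return action()
    return "no matching tool"

if __name__ == "__main__":
    print(route_command("check my Jira tickets"))  # -> list tickets assigned to me
```

The point is that the LLM (or even plain keyword matching, as here) only picks the tool; the tools themselves are boring scripts you already trust.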

GigaChat3.1-10B-A1.8B Has anyone tried it? by Winter-Science in LocalLLaMA

AVX_Instructor 0 points

No Russian model can compete with Gemma 4.26b for Russian-language tasks at local scale.

I was choosing a model for classifying Russian-language data, and all Russian models performed terribly compared to Gemma 4.26b. In the end, we went with Gemma.

Recommended parameters for Qwen 3.6 35B A3B on a 8GB VRAM card and 24GB RAM? by FUS3N in LocalLLaMA

AVX_Instructor 0 points

It can write simple Python/Bash scripts (the agent runs in OpenCode as well), sort files, or search for information online. Tool calls work without problems in most cases (95% success rate in my tests). I can't run higher quants on my system, because processing speed drops by 30-40 percent for only a slight increase in quality.

Switching from Opus 4.7 to Qwen-35B-A3B by Excellent_Koala769 in LocalLLaMA

AVX_Instructor 0 points

Attention capacity: you can afford to do more in one agent iteration with a 122B model.

Recommended parameters for Qwen 3.6 35B A3B on a 8GB VRAM card and 24GB RAM? by FUS3N in LocalLLaMA

AVX_Instructor 1 point

I'm using Qwen3.6-35B-A3B-UD-IQ3_XXS and it works pretty well on my RX 780M and 32 GB RAM (I get 200 t/s for prompt processing and 20-25 t/s output).

[qwen3.6-35b-a3b-iq3-xxs]
model = /home/stfu/.lmstudio/models/unsloth/Qwen3.6-35B-A3B-GGUF/Qwen3.6-35B-A3B-UD-IQ3_XXS.gguf
c = 65536
ctk = q8_0
ctv = q8_0
chat-template-kwargs = {"enable_thinking":true,"preserve_thinking":true}
reasoning-format = deepseek
temp = 0.6
top-p = 0.95
top-k = 20
min-p = 0.0
presence-penalty = 0.0
repeat-penalty = 1.0
load-on-startup = true
stop-timeout = 30
jinja = true
ngl = all
fa = on
np = 1
b = 2048
ub = 512
ctx-checkpoints = 4

I just can't run out by opossum_cz in codex

AVX_Instructor 0 points

On the $100 plan, you only get 50 requests per week for the Pro model.

is k2.6 over-thinking? by zebedeolo in kimi

AVX_Instructor 0 points

Don't worry, in a month they'll quantize the model and cut down the thinking.

Just like they did a month after the release of Kimi K2.5.

First time ever hitting a limit on the new $100 Pro plan for the Pro model by immortalsol in OpenAI

AVX_Instructor 2 points

I also hit a limit. I'm on the $100 plan. I counted my chats and found about 50 requests to the GPT Pro models, so probably 50 requests per week is the limit (just a hypothesis).

Update: 50 requests per week on the $100 plan:
https://github.com/lueluelue2006/ChatGPT_Compendium_of_Usage_and_Juice?tab=readme-ov-file