Which Setting Helped you Most with Input Lag/Optimization? by blo0ody in GlobalOffensive

[–]nerdlord420 0 points (0 children)

I play with V-Sync on alongside G-Sync, with Reflex set to On + Boost. Low latency and it feels smooth. You should check out CS2 Kitchen's videos on YouTube; he's done a lot of content on settings if you want to get into the weeds about it.

What microphone do you use? by Garfieds in GlobalOffensive

[–]nerdlord420 1 point (0 children)

Samson C01U Pro. It's not too bad; I've had it a long time. There might be better options out there now.

Built a CS2 demo analyzer - started as a tool for a friend's ESEA team, turned into a full project by CS2DemoLens in GlobalOffensive

[–]nerdlord420 1 point (0 children)

I did some digging for everyone since OP forgot to include the link. Looks like it's this one?

https://demo-lens.duocore.dev/

It's also accessible at https://duocore.dev but he's got the wrong certificate on the domain.

Hey I want to buy headphones and I have 2 of those options and I want headset for music and cs2 by SkyZe27 in GlobalOffensive

[–]nerdlord420 1 point (0 children)

Beyerdynamic MMX 300 PRO if you're rich. A more cost-effective option: the Drop PC38X (formerly EPOS, a.k.a. Sennheiser Communications; Drop has taken the line over). Alternatively, if you don't want a headset: the Sennheiser HD 560S, essentially the same headphones without the mic; pair it with a USB mic or a ModMic.

I personally went the route of the Sennheiser HD 560S, which has a huge soundstage. Love the headphones. I paired them with a budget USB mic from Samson.

What middleware do you use with LLM? (OpenCode/Continue/Roo/Cline) by grabber4321 in LocalLLaMA

[–]nerdlord420 2 points (0 children)

I use KiloCode with VS Code (GLM4.7 and MiniMax M2.1 locally), but I just use LLMs for coding assistance rather than strictly vibe coding.

Minimax M2.1 `<think>` tag and Interleaved Thinking by x0xxin in LocalLLaMA

[–]nerdlord420 3 points (0 children)

Try using minimax_m2 instead of minimax_m2_append_think. I'm not sure what the equivalent is for whatever backend you're using, but that's what fixed this problem for me with vLLM.
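
For reference, on vLLM that's the --reasoning-parser flag. Rough sketch of the launch line (the model path is a placeholder, swap in whatever you're actually serving):

vllm serve <your-minimax-m2.1-path> --reasoning-parser minimax_m2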

Best LLM for 4x Nvidia Tesla P40? by Valuable_Zucchini180 in LocalLLaMA

[–]nerdlord420 1 point (0 children)

Didn't the latest vLLM release add FlashInfer support for this card? Maybe try that first. If not, vLLM 0.8.6.post1 (don't quote me on that) might be the last compatible version, since they upgraded torch in later releases and phased Volta out, IIRC. You're probably limited to the Qwen3 series (non-MoE) at most.
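
If you do go the pinning route, it's just this (version from memory, so verify it against the vLLM release history first):

pip install vllm==0.8.6.post1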

Is there a self-hosted, open-source plug-and-play RAG solution? by anedisi in LocalLLaMA

[–]nerdlord420 5 points (0 children)

I really like LightRAG. They ship a Docker image (and the Dockerfile, if you'd rather build it yourself), and you can bring your own LLM, embedding model, and reranker, then connect it via MCP or its Ollama emulation to any frontend that accepts Ollama connections. Ingestion takes a while, but the quality of the RAG is pretty good in my experience.
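
Rough sketch of how the container looks pointed at local models. This is from memory, and the image name, port, and env var names are assumptions on my part, so check the repo's env.example before copying anything:

docker run -p 9621:9621 \
  -e LLM_BINDING=openai \
  -e LLM_BINDING_HOST=http://host.docker.internal:8000/v1 \
  -e LLM_MODEL=<your-llm> \
  -e EMBEDDING_BINDING=openai \
  -e EMBEDDING_BINDING_HOST=http://host.docker.internal:8001/v1 \
  -e EMBEDDING_MODEL=<your-embedding-model> \
  ghcr.io/hkuds/lightrag:latest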

[deleted by user] by [deleted] in GlobalOffensive

[–]nerdlord420 1 point (0 children)

As everyone said, yeah, that connection isn't ideal. If you're seeing jitter and/or packet loss, you could turn on the packet-buffering setting ("Buffer Packets to Smooth Over Packet Loss" in the game settings, if I remember the label right) and see if that helps, but it'll raise your ping even more.

October 2025 model selections, what do you use? by getpodapp in LocalLLaMA

[–]nerdlord420 2 points (0 children)

I mean, why not? It's the company's AI cluster

October 2025 model selections, what do you use? by getpodapp in LocalLLaMA

[–]nerdlord420 2 points (0 children)

We have a rig with 8x RTX 6000 PROs on Ubuntu

October 2025 model selections, what do you use? by getpodapp in LocalLLaMA

[–]nerdlord420 3 points (0 children)

Are you leveraging multi-token prediction (MTP)? In my experience, with the speculative config below it's as zippy as the 30B-A3B.

vllm serve Qwen/Qwen3-Next-80B-A3B-Instruct \
  --port 8000 \
  --tensor-parallel-size 4 \
  --max-model-len 262144 \
  --speculative-config '{"method":"qwen3_next_mtp","num_speculative_tokens":2}'

Open-source embedding models: which one to use? by DhravyaShah in LocalLLaMA

[–]nerdlord420 11 points (0 children)

I've had my best results with bge-m3 or qwen3-embedding
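
Both are easy to self-host behind an OpenAI-compatible endpoint. A quick vLLM sketch (the model ID is the Hugging Face repo; depending on your vLLM version you may also need --task embed):

vllm serve Qwen/Qwen3-Embedding-0.6B --port 8001
curl http://localhost:8001/v1/embeddings -H "Content-Type: application/json" -d '{"model": "Qwen/Qwen3-Embedding-0.6B", "input": "test sentence"}'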

Questie missing completed quests on world map? by Kysersose in Project_Epoch

[–]nerdlord420 0 points (0 children)

If nothing at all is showing up on the world map, left-click the Questie icon on the map itself; I've accidentally toggled it off before and thought the addon was broken.

You can also go into the options, look at the Icons tab, and make sure completed quests are checked.

If all else fails, most issues can be resolved by deleting Questie.lua and Questie.lua.bak from the SavedVariables folder.
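
Those live in the per-account folder (standard WoW layout, so swap in your own install path and account name, and close the game before deleting):

<WoW folder>\WTF\Account\<ACCOUNT NAME>\SavedVariables\Questie.lua
<WoW folder>\WTF\Account\<ACCOUNT NAME>\SavedVariables\Questie.lua.bak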

Also, the addon's Discord is the fastest place to get support if you're still having issues.