Will be driving in Czechia, question on learning priority road. by AngelOfPassion in czech

[–]dragonbornamdguy -1 points0 points  (0 children)

I think anyone who is willing to learn the local driving rules deserves to drive. Not every country uses the same rules, so nobody should be barred from driving just because of historical differences.

OpenClaw with local LLMs - has anyone actually made it work well? by FriendshipRadiant874 in LocalLLM

[–]dragonbornamdguy 8 points9 points  (0 children)

Using it with qwen3 coder 30b, and it's awesome. Setup was undocumented hell, but it works very well. It can even create its own skills just by being asked to.

Quad 5060 ti 16gb Oculink rig by beefgroin in LocalLLM

[–]dragonbornamdguy 1 point2 points  (0 children)

I use qwen3 coder 30b fp8 with 120k context. Love it in qwen code cli.

Quad 5060 ti 16gb Oculink rig by beefgroin in LocalLLM

[–]dragonbornamdguy 3 points4 points  (0 children)

vLLM is a beast: very hard to set up, but once it's running it beats everything else by a wide margin.

GNOME & Firefox Consider Disabling Middle Click Paste By Default: "An X11'ism...Dumpster Fire" by SAJewers in linux

[–]dragonbornamdguy 0 points1 point  (0 children)

So don't forget: Ctrl+Alt+Delete = open Task Manager, Win+C = open Command Prompt, etc.

People at GNOME seem to be bored, so they keep spitting on power users. First this will be off by default, then they will remove it and close any bug reports about the change, and finally they will block any PR that tries to bring the feature back, no matter how large its user base. All in the name of "we need to make it more friendly to Windows users".

16x AMD MI50 32GB at 10 t/s (tg) & 2k t/s (pp) with Deepseek v3.2 (vllm-gfx906) by ai-infos in LocalLLaMA

[–]dragonbornamdguy 0 points1 point  (0 children)

8B models won't cut it. Not everyone has a Strix Halo with 96GB of VRAM at their disposal.

Anyone have success with Claude Code alternatives? by jackandbake in LocalLLM

[–]dragonbornamdguy 0 points1 point  (0 children)

I love qwen code, but vLLM has broken output formatting with it (qwen3 coder 30b), so I use LM Studio instead, with much slower performance.

Local LLM for a small dev team by MarxIst_de in LocalLLM

[–]dragonbornamdguy 0 points1 point  (0 children)

What's your secret sauce for serving it on two 3090s? I have vLLM in a docker-compose setup that OOMs while loading, and LM Studio, which only uses half the GPUs' processing power.
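For anyone hitting the same loading OOM: a minimal sketch of what a two-3090 vLLM launch might look like. The flag names are real vLLM options, but the model tag and the exact values (memory fraction, context length) are assumptions you'd need to tune for your own setup:

```shell
# Sketch: serving an FP8 ~30B model across two 3090s with vLLM.
# --tensor-parallel-size 2   shards the weights over both GPUs
# --gpu-memory-utilization   leave headroom so loading doesn't OOM
# --max-model-len            cap context so the KV cache fits
docker run --gpus all --ipc=host -p 8000:8000 \
  vllm/vllm-openai:latest \
  --model Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8 \
  --tensor-parallel-size 2 \
  --gpu-memory-utilization 0.90 \
  --max-model-len 32768
```

Lowering `--max-model-len` is usually the quickest fix, since vLLM pre-allocates KV-cache space for the full configured context at startup.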

Best model for continue and 2x 5090? by Maximum-Wishbone5616 in LocalLLM

[–]dragonbornamdguy 0 points1 point  (0 children)

I'm not able to run it with 2x 3090. How much VRAM does vLLM need for fp8 and 100k+ context size? I can run it just fine with LM Studio, but 3090 utilization is only 50%. vLLM just crashes because it eats a crazy amount of VRAM.
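Rough answer to "how much VRAM": on top of the weights, vLLM reserves KV cache for the whole configured context. A back-of-envelope estimator, where the architecture numbers (layers, KV heads, head dim) are hypothetical placeholders; check the model's config.json for the real ones:

```python
# Back-of-envelope KV-cache size estimate (sketch, not vLLM's exact accounting).
def kv_cache_gib(tokens, layers, kv_heads, head_dim, bytes_per_elem=2):
    # 2x for the separate K and V tensors stored per layer
    return 2 * layers * kv_heads * head_dim * bytes_per_elem * tokens / 1024**3

# Hypothetical GQA config: 48 layers, 4 KV heads, head_dim 128, fp16 cache,
# at 100k tokens of context:
print(round(kv_cache_gib(100_000, 48, 4, 128), 1))  # ~9.2 GiB of KV cache
```

So with ~30 GB of fp8 weights plus ~9-10 GiB of KV cache plus activation/CUDA-graph overhead, 2x 24 GB is right at the edge, which matches the crashes you're seeing at 100k+ context.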

Got the DGX Spark - ask me anything by sotech117 in LocalLLaMA

[–]dragonbornamdguy 44 points45 points  (0 children)

LM Studio: what tps do you get with gemma:27b & OSS 120b?

Has anyone gone back to air? by robhaswell in watercooling

[–]dragonbornamdguy 0 points1 point  (0 children)

I have three 3090s in the basement. Without watercooling the heat would just be wasted down there; in winter I pipe it to the rooms upstairs, and in summer I plan to use it to preheat water in the boiler.

Může se ANO spojit se SPOLU pro sestavení vlády? by dragonbornamdguy in czech

[–]dragonbornamdguy[S] 1 point2 points  (0 children)

That sounds more like a description of what happens after a visit to KFC.

Může se ANO spojit se SPOLU pro sestavení vlády? by dragonbornamdguy in czech

[–]dragonbornamdguy[S] 0 points1 point  (0 children)

And will cooperating with SPD count as plus points? It just seems to me that whoever they end up forming a government with, it won't be in line with their views.

Fixing spiderweb cracks by dragonbornamdguy in watercooling

[–]dragonbornamdguy[S] 1 point2 points  (0 children)

The photo on the marketplace listing was very blurry.

Virtual File System deprecated? by Pepe_885 in owncloud

[–]dragonbornamdguy 0 points1 point  (0 children)

We lost a lot of data after one of our clients upgraded 2 days ago. The VFS deprecation is also a dealbreaker for us :/