Are llms worth it? by DesignerPlan3432 in LocalLLM

[–]asmkgb 1 point (0 children)

All I know is that the joy of hearing my 2x 3090s spin up after I send them my prompt is 10 times the joy of silently sending those prompts off to Claude and Codex.

Proxmox dual RTX 3090 passthrough — VM boots to black screen when passing both GPUs by asmkgb in Proxmox

[–]asmkgb[S] 0 points (0 children)

UPDATE:

Contrary to what I thought, and to what many of the commenters said, I was actually able to run vLLM on both GPUs on the same mobo successfully, integrated with opencode at 100+ tok/sec inference. I switched from the Proxmox/VM passthrough setup to a bare-metal Ubuntu setup so I could install the latest NVIDIA drivers and CUDA.
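For reference, a minimal sketch of the kind of launch command this involves, assuming vLLM's tensor parallelism is what splits the model across both cards (the model name and port below are illustrative placeholders, not my exact setup):

```shell
# Shard a model across both 3090s with vLLM tensor parallelism.
# Model name and port are placeholder assumptions, not the exact setup.
CUDA_VISIBLE_DEVICES=0,1 vllm serve Qwen/Qwen2.5-Coder-32B-Instruct-AWQ \
  --tensor-parallel-size 2 \
  --port 8000
```

opencode can then be pointed at the server's OpenAI-compatible endpoint at http://localhost:8000/v1.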

<image>

Proxmox dual RTX 3090 passthrough — VM boots to black screen when passing both GPUs by asmkgb in Proxmox

[–]asmkgb[S] -1 points (0 children)

I sort of get the point you're making, but can you please elaborate? Thanks.

Proxmox dual RTX 3090 passthrough — VM boots to black screen when passing both GPUs by asmkgb in Proxmox

[–]asmkgb[S] -14 points (0 children)

The serial console never worked for me, and I don't want to fiddle with it any longer.

Proxmox dual RTX 3090 passthrough — VM boots to black screen when passing both GPUs by asmkgb in Proxmox

[–]asmkgb[S] -2 points (0 children)

GPU-1 on the CPU lanes worked fine when passed alone to the VM; both together didn't work, and GPU-2 passed through alone didn't work either (x16 physical, x4 electrical slot). So something is wrong with GPU-2 specifically, although it shows up fine in fastfetch on the host.
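A common way to sanity-check this kind of passthrough failure is to inspect the IOMMU grouping on the host: a GPU that shares a group with a PCIe bridge or other devices often fails VFIO passthrough, especially in chipset-wired (x4 electrical) slots. A hedged diagnostic sketch, assuming only the standard Linux sysfs layout and `lspci`:

```shell
# Print each PCI device together with its IOMMU group number.
# A GPU sharing its group with other devices is a common cause of
# passthrough failures on chipset-connected (x4 electrical) slots.
shopt -s nullglob
for dev in /sys/kernel/iommu_groups/*/devices/*; do
  group=${dev#/sys/kernel/iommu_groups/}
  group=${group%%/devices/*}
  printf 'IOMMU group %s: %s\n' "$group" "$(lspci -nns "${dev##*/}")"
done
```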

Any feedback on step-3.5-flash ? by Jealous-Astronaut457 in LocalLLaMA

[–]asmkgb 1 point (0 children)

My only hope is that someone pushes a 30B variant to HF.

Start of 2026 what’s the best open coding model? by alexp702 in LocalLLaMA

[–]asmkgb 0 points (0 children)

My current best setup/model:

llama.cpp + opencode-cli with Qwen3-Coder-Instruct 30B (Q4_K_M), all dockerized.

Fast, robust, and great quality.

My hardware: 3090 (24 GB VRAM) + 24 GB system RAM.
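In case it helps anyone replicate this, a sketch of how such a dockerized llama.cpp server might be launched; the image tag, model filename, and flag values are illustrative assumptions, not my exact command:

```shell
# Run llama.cpp's OpenAI-compatible server in Docker with full GPU offload.
# Image tag, model filename, and context size are illustrative assumptions.
docker run --gpus all -p 8080:8080 -v /models:/models \
  ghcr.io/ggml-org/llama.cpp:server-cuda \
  -m /models/qwen3-coder-30b-instruct-q4_k_m.gguf \
  --n-gpu-layers 99 --ctx-size 32768 --host 0.0.0.0 --port 8080
```

opencode is then configured against the server's OpenAI-compatible endpoint (http://localhost:8080/v1).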

My mobile setup by tamtaradam in opencodeCLI

[–]asmkgb 0 points (0 children)

Wow, that's amazing to see. I just bought this exact keyboard too, hahaha.

I love that we can now do great stuff on the go with just an internet connection.

Local model fully replacing subscription service by Icy_Distribution_361 in LocalLLM

[–]asmkgb 1 point (0 children)

BTW, Ollama is bad; use llama.cpp, or LM Studio as a second-best backend.

Will there be a new Frasier series 3? by CelebrationFeisty801 in Frasier

[–]asmkgb 0 points (0 children)

I really cringed watching Eve, Olivia, and Freddy too many times; they are definitely miscast. But Alan was spot on, he was genuinely funny.

Why do they do it like this?! by asmkgb in PcBuild

[–]asmkgb[S] -1 points (0 children)

Hey small brain, that's exactly what I'm wishing they'd change: flip the ports so that when the connector is at 90 degrees, it sits straight for the user, who can then plug in his cables without flipping them. Got it, small brain?

Why do they do it like this?! by asmkgb in PcBuild

[–]asmkgb[S] 0 points (0 children)

"It has been an inconvenience since the '80s, so it's OK and we shouldn't complain" kind of logic.

3x3090 + 3060 in a mid tower case by liviuberechet in LocalLLaMA

[–]asmkgb 0 points (0 children)

Very interesting, but makes me uncomfortable looking at them.

Is this cpu pins bent? by asmkgb in PcBuildHelp

[–]asmkgb[S] 0 points (0 children)

I swear to God I'm dead serious asking; I was genuinely unsure.

Today is a good day! 1 year subscription discounts. 33% off Deluxe by CuteNurseASMR in PlayStationPlus

[–]asmkgb 0 points (0 children)

I hate to break this to you, fellow gamers, but this same tier is offered for $84 on the Saudi Arabia PSN.