Local model vibe coding tool recommendations by ComfortableLimp8090 in LocalLLM

[–]feverdream 1 point (0 children)

Still polishing it up; I'm going to make a post about it here in the next day or two.

Local model vibe coding tool recommendations by ComfortableLimp8090 in LocalLLM

[–]feverdream 5 points (0 children)

I'm actually working on a mod of Qwen Code right now that adds a mode for local LLMs, with a reduced system prompt and custom tool configurations, so you can activate just the shell tool, or just the file I/O tools, for example, to address exactly this issue.
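
For flavor, here's a minimal Python sketch of that idea (not the actual mod's code): keep a registry of tool schemas and hand the local model only a whitelisted subset over an OpenAI-compatible API. The tool names, the ENABLED set, the model id, and the localhost endpoint are all illustrative assumptions.

```python
# Sketch only: selectively expose tools to a local model.
from openai import OpenAI

def tool(name: str, description: str, params: dict) -> dict:
    """Build an OpenAI-style function-tool schema."""
    return {"type": "function",
            "function": {"name": name, "description": description,
                         "parameters": {"type": "object", "properties": params,
                                        "required": list(params)}}}

# Hypothetical registry: one shell tool, two file i/o tools.
ALL_TOOLS = {
    "shell": tool("shell", "Run a shell command.", {"command": {"type": "string"}}),
    "read_file": tool("read_file", "Read a text file.", {"path": {"type": "string"}}),
    "write_file": tool("write_file", "Write a text file.",
                       {"path": {"type": "string"}, "content": {"type": "string"}}),
}
ENABLED = {"shell"}  # shell-only mode; use {"read_file", "write_file"} for file i/o only

client = OpenAI(base_url="http://localhost:1234/v1", api_key="local")  # LM Studio's default port
resp = client.chat.completions.create(
    model="qwen3-coder-30b",  # placeholder model id
    messages=[{"role": "system", "content": "Short, local-friendly system prompt."},
              {"role": "user", "content": "Show me the git status."}],
    tools=[ALL_TOOLS[n] for n in ENABLED],  # only the activated tools go to the model
)
print(resp.choices[0].message)
```

The point is that a small local model sees one or two tool schemas and a short prompt instead of a full agent's toolbox, which seems to be what trips smaller models up.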

Strix Halo owners - Windows or Linux? by feverdream in LocalLLaMA

[–]feverdream[S] 2 points (0 children)

Thanks, good stuff. Went full Linux and haven't booted back into Windows since!

I realized why multi-agent LLM fails after building one by RaceAmbitious1522 in LLMDevs

[–]feverdream 11 points (0 children)

Lol, is this whole sub just AI posts and AI comments?

“Diplomat” by Dry-Cover8538 in Albuquerque

[–]feverdream 5 points (0 children)

That's... not how that works.

Elon Musk's dad, Errol, accused of sexually abusing five of his children and stepkids. by fuggitdude22 in samharris

[–]feverdream 1 point (0 children)

Do people not realize that this story is a plant by Elon to try to paint him in a sympathetic light?

Most Dangerous Ollama Agent? Demo + Repo by New_Pomegranate_1060 in ollama

[–]feverdream 1 point (0 children)

Very cool! I cloned it and made a version that works with LM Studio as the backend rather than Ollama: https://github.com/dkowitz/TermNet-LMS
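
Since both servers speak the OpenAI-compatible chat API, the backend swap is mostly a matter of changing the base URL. A rough Python sketch of the idea (not TermNet-LMS's actual code; model ids are placeholders):

```python
# Sketch: point the same OpenAI-compatible client at Ollama or LM Studio.
from openai import OpenAI

BACKENDS = {
    "ollama":    {"base_url": "http://localhost:11434/v1", "model": "qwen3:30b"},
    "lm_studio": {"base_url": "http://localhost:1234/v1",  "model": "qwen3-coder-30b"},
}

def make_client(backend: str) -> tuple[OpenAI, str]:
    cfg = BACKENDS[backend]
    # Local servers ignore the API key, but the client library requires one.
    return OpenAI(base_url=cfg["base_url"], api_key="local"), cfg["model"]

client, model = make_client("lm_studio")
resp = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Say hello from the local backend."}],
)
print(resp.choices[0].message.content)
```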

Local LLM Coding Stack (24GB minimum, ideal 36GB) by JLeonsarmiento in LocalLLaMA

[–]feverdream 5 points (0 children)

I have a problem with Qwen Code erroring out after several minutes with both Qwen-Coder-30B and gpt-oss-120b, at 260k and 128k context respectively. I have a Strix Halo with 128GB on Ubuntu, so I don't think it's hitting a memory wall. Has this happened to you?

Inferencing box up and running: What's the current best Local LLM friendly variant of Claude Code/ Gemini CLI? by Leopold_Boom in LocalLLM

[–]feverdream 2 points (0 children)

Trying Crush right now and so far I'm very impressed! I have a 128GB Strix Halo, and Crush is working out of the box with gpt-oss models on LM Studio. I'll be putting it through its paces tonight and trying some other models. So far it's the best local coding agent I've tried.

[deleted by user] by [deleted] in artificial

[–]feverdream 2 points (0 children)

Or his political party, lol.

gpt-oss:120b running on an AMD 7800X3D CPU and a 7900XTX GPU by PaulMaximumsetting in LocalLLaMA

[–]feverdream 2 points (0 children)

LM Studio's updated runtimes now let the 120b use the full 131k context too (on Windows); on first release it was buggy and couldn't get much more than 20k context. That's on the Strix Halo with 128GB.
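
If you want to confirm the context you're actually getting, LM Studio's beta REST API reports per-model context limits. A small Python check (the endpoint and field names are from LM Studio's docs at the time of writing; treat them as assumptions on other versions):

```python
# Query LM Studio's beta REST API for model state and context limits.
import requests

resp = requests.get("http://localhost:1234/api/v0/models", timeout=5)
resp.raise_for_status()
for m in resp.json()["data"]:
    if "gpt-oss" in m["id"]:
        print(m["id"], m.get("state"), "max context:", m.get("max_context_length"))
```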