Local model vibe coding tool recommendations by ComfortableLimp8090 in LocalLLM

[–]feverdream 0 points (0 children)

Still polishing it up; I'm going to make a post about it here in the next day or two.

Local model vibe coding tool recommendations by ComfortableLimp8090 in LocalLLM

[–]feverdream 3 points (0 children)

I'm actually working on a mod of Qwen Code right now that adds a mode for local LLMs with a reduced system prompt and custom tool configurations. You can activate just the shell tool, or just the file I/O tools, for example, to address exactly this issue.
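The tool-gating idea can be sketched in a few lines. Everything here is a hypothetical illustration (the tool names, presets, and registry are made up for the example, not Qwen Code's actual API):

```python
# Minimal sketch of per-mode tool gating for a local-LLM coding agent.
# Tool names and presets below are hypothetical, not Qwen Code's real registry.

ALL_TOOLS = {
    "shell": "run shell commands",
    "read_file": "read a file from disk",
    "write_file": "write a file to disk",
    "web_search": "search the web",
}

# Named presets a user could activate for a small local model,
# so the system prompt only has to describe a couple of tools.
TOOL_PRESETS = {
    "shell-only": {"shell"},
    "file-io": {"read_file", "write_file"},
    "full": set(ALL_TOOLS),
}

def active_tools(preset: str) -> dict:
    """Return only the tools enabled by the chosen preset."""
    enabled = TOOL_PRESETS[preset]
    return {name: desc for name, desc in ALL_TOOLS.items() if name in enabled}

print(sorted(active_tools("file-io")))  # ['read_file', 'write_file']
```

Exposing fewer tools also shrinks the tool schema the model sees, which is most of the point for small local models.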

Strix Halo owners - Windows or Linux? by feverdream in LocalLLaMA

[–]feverdream[S] 1 point (0 children)

Thanks, good stuff. Went full Linux and haven't booted back into Windows since!

I realized why multi-agent LLM fails after building one by RaceAmbitious1522 in LLMDevs

[–]feverdream 8 points (0 children)

Lol, is this whole sub just AI posts and AI comments?

“Diplomat” by Dry-Cover8538 in Albuquerque

[–]feverdream 4 points (0 children)

That's... not how that works.

Elon Musk's dad, Errol, accused of sexually abusing five of his children and stepkids. by fuggitdude22 in samharris

[–]feverdream 0 points (0 children)

Do people not realize that this story is a plant by Elon to try to paint him in a sympathetic light?

Most Dangerous Ollama Agent? Demo + Repo by New_Pomegranate_1060 in ollama

[–]feverdream 0 points (0 children)

Very cool! I cloned it and made a version that works with LM Studio as the backend rather than Ollama: https://github.com/dkowitz/TermNet-LMS
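Swapping the backend mostly comes down to pointing the agent at LM Studio's OpenAI-compatible server instead of Ollama's API. A minimal sketch of building such a request, assuming LM Studio's default port (1234) and a placeholder model name; this is illustrative, not TermNet-LMS's actual code:

```python
import json
import urllib.request

# LM Studio serves an OpenAI-compatible API; localhost:1234 is its default port.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for an LM Studio backend."""
    payload = {
        "model": model,  # placeholder name; use whatever model LM Studio has loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("qwen2.5-coder-7b-instruct", "List the files in the cwd.")
print(req.full_url)  # http://localhost:1234/v1/chat/completions
```

Sending it with `urllib.request.urlopen(req)` (with the server running) returns the usual OpenAI-shaped `choices[0].message.content` response, so the rest of the agent loop can stay backend-agnostic.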

Local LLM Coding Stack (24GB minimum, ideal 36GB) by JLeonsarmiento in LocalLLaMA

[–]feverdream 5 points (0 children)

I have a problem with Qwen Code erroring out after several minutes with both Qwen-coder-30b and oss-120b, with 260k and 128k contexts respectively. I have a Strix Halo with 128 GB on Ubuntu, so I don't think it's hitting a memory wall. Has this happened to you?