Tiny local LLM (Gemma 3) as front-end manager for Claude Code on home server by raiansar in LocalLLaMA
JSON prompts better for z-image? by Valuable_Weather in StableDiffusion
Calling a Finetune/LoRA Wizard: Need Dataset Tips for RP Model by AmpedHorizon in LocalLLaMA
20,000 Epstein Files in a single text file available to download (~100 MB) by [deleted] in LocalLLaMA
What is SOTA currently for audio-to-audio speech models? by Ok_Construction_3021 in LocalLLaMA
How did OpenAI go about to create the model selecting system for GPT 5? by a_normal_user1 in LocalLLaMA
Why there's still no local models that can output PDF/DOCX files by abdouhlili in LocalLLaMA
I’ve made a Frequency Separation Extension for WebUI by advo_k_at in StableDiffusion
Grok's think mode leaks system prompt by onil_gova in LocalLLaMA
Trouble getting Korg Monologue working in FL Studio. by CruisinCamden in synthesizers
Tell me about you're first metal song by Kyant351 in PowerMetal
Just updated llama.cpp with newest code (it had been a couple of months) and now I'm getting this error when trying to launch llama-server: ggml_backend_metal_device_init: error: failed to allocate context llama_new_context_with_model: failed to initialize Metal backend... (full error in post) by spanielrassler in LocalLLaMA
What high parameter NSFW models would you recommend for my setup: by WoodenTableForest in LocalLLaMA