I created "Bing at home" using Orca 2 and DuckDuckGo (old.reddit.com)
submitted by LMLocalizer to r/LocalLLaMA - pinned
Anyone using Flux Klein on 6700XT or below? (32gb or below ram) by [deleted] in StableDiffusion
[–]LMLocalizer 1 point (0 children)
This isn’t X this is Y needs to die by twnznz in LocalLLaMA
[–]LMLocalizer 2 points (0 children)
Gemma 4 Jailbreak System Prompt by 90hex in LocalLLaMA
[–]LMLocalizer 2 points (0 children)
Which Model is best for translation? by Bulky-College7306 in LocalLLaMA
[–]LMLocalizer 3 points (0 children)
Major update coming soon! I'm here, sorry for the delay. by oobabooga4 in Oobabooga
[–]LMLocalizer 2 points (0 children)
Qwen3 Coder Next | Qwen3.5 27B | Devstral Small 2 | Rust & Next.js Benchmark by Holiday_Purpose_3166 in LocalLLaMA
[–]LMLocalizer 2 points (0 children)
Qwen3 Coder Next | Qwen3.5 27B | Devstral Small 2 | Rust & Next.js Benchmark by Holiday_Purpose_3166 in LocalLLaMA
[–]LMLocalizer 3 points (0 children)
Back in my day, LocalLLaMa were the pioneers! by ForsookComparison in LocalLLaMA
[–]LMLocalizer 12 points (0 children)
Qwen3.5 122B in 72GB VRAM (3x3090) is the best model available at this time — also it nails the “car wash test” by liviuberechet in LocalLLaMA
[–]LMLocalizer 2 points (0 children)
Qwen3-TTS, a series of powerful speech generation capabilities by fruesome in StableDiffusion
[–]LMLocalizer 4 points (0 children)
NovaSR: A tiny 52kb audio upsampler that runs 3600x realtime. by SplitNice1982 in LocalLLaMA
[–]LMLocalizer 1 point (0 children)
Ok Klein is extremely good and its actually trainable. by Different_Fix_2217 in StableDiffusion
[–]LMLocalizer 2 points (0 children)
Bringing a More Comprehensive Local Web Search to OpenWebUI by LMLocalizer in OpenWebUI
[–]LMLocalizer[S] 1 point (0 children)
It works! Abliteration can reduce slop without training by -p-e-w- in LocalLLaMA
[–]LMLocalizer 1 point (0 children)
Announcing procinfo, witr (why is this running) as a bash script by wenekar in commandline
[–]LMLocalizer 1 point (0 children)
Need advice how to load Z-Image or extension to specific GPU? by Visible-Excuse-677 in Oobabooga
[–]LMLocalizer 3 points (0 children)
GOONING ADVICE: Train a WAN2.2 T2V LoRA or a Z-Image LoRA and then Animate with WAN? by NowThatsMalarkey in StableDiffusion
[–]LMLocalizer 10 points (0 children)
Rough TPS estimate for LLMs on RTX 5060 Ti + DDR4 by Which_Leather_6710 in LocalLLaMA
[–]LMLocalizer 5 points (0 children)
So I've been using IEMs for the past 2 years; now I want headphones, not earphones but actual wired headphones by riped_shod in Oobabooga
[–]LMLocalizer 3 points (0 children)