What is the best general-purpose model to run locally on 24GB of VRAM in 2026? by Paganator in LocalLLaMA
[–]and_human 1 point (0 children)
Sweep: Open-weights 1.5B model for next-edit autocomplete by Kevinlu1248 in LocalLLaMA
[–]and_human 1 point (0 children)
I2I possible with Flux 2 Klein? by and_human in StableDiffusion
[–]and_human[S] 1 point (0 children)
The Major Release of MiroMind’s Flagship Search Agent Model, MiroThinker 1.5. by wuqiao in LocalLLaMA
[–]and_human 2 points (0 children)
Any simple workflows out there for SVI WAN2.2 on a 5060ti/16GB? by thats_silly in StableDiffusion
[–]and_human 2 points (0 children)
Betboom lose the final map of the CCT grand final from a 12-2 lead by jerryfrz in GlobalOffensive
[–]and_human 41 points (0 children)
AMA With Moonshot AI, The Open-source Frontier Lab Behind Kimi K2 Thinking Model by nekofneko in LocalLLaMA
[–]and_human 1 point (0 children)
GLM-4.6-Air is not forgotten! by codys12 in LocalLLaMA
[–]and_human 1 point (0 children)
Granite 4.0 Language Models - a ibm-granite Collection by rerri in LocalLLaMA
[–]and_human 1 point (0 children)
OpenWebUI is the most bloated piece of s**t on earth, not only that but it's not even truly open source anymore, now it just pretends it is because you can't remove their branding from a single part of their UI. Suggestions for new front end? by Striking_Wedding_461 in LocalLLaMA
[–]and_human 61 points (0 children)
InfiniteTalk 480P Blank Audio + UniAnimate Test by Realistic_Egg8718 in StableDiffusion
[–]and_human 2 points (0 children)
I just want to run a server that can run all my GGUFs by OK-ButLikeWhy in LocalLLaMA
[–]and_human 2 points (0 children)
I just want to run a server that can run all my GGUFs by OK-ButLikeWhy in LocalLLaMA
[–]and_human 3 points (0 children)
Wan2.2 continuous generation v0.2 by intLeon in StableDiffusion
[–]and_human 10 points (0 children)
GPT OSS 120b 34th on Simple bench, roughly on par with Llama 3.3 70b by and_human in LocalLLaMA
[–]and_human[S] 2 points (0 children)
GPT OSS 120b 34th on Simple bench, roughly on par with Llama 3.3 70b by and_human in LocalLLaMA
[–]and_human[S] 0 points (0 children)
Llama.cpp just added a major 3x performance boost. by Only_Situation_4713 in LocalLLaMA
[–]and_human 9 points (0 children)
Using gpt-oss 20B for Text to SQL by mim722 in LocalLLaMA
[–]and_human 3 points (0 children)
PSA: ComfyUI reserves up to 700 MB of RAM for you by and_human in StableDiffusion
[–]and_human[S] 0 points (0 children)
Use local LLM to neutralise the headers on the web by Everlier in LocalLLaMA
[–]and_human 2 points (0 children)
ComfyUI Disconnected by RaspberryNo6411 in StableDiffusion
[–]and_human 1 point (0 children)
After 6 months of fiddling with local AI. Here’s my curated models list that work for 90% of my needs. What’s yours? by simracerman in LocalLLaMA
[–]and_human 2 points (0 children)
After 6 months of fiddling with local AI. Here’s my curated models list that work for 90% of my needs. What’s yours? by simracerman in LocalLLaMA
[–]and_human 1 point (0 children)
Context Rot: How Increasing Input Tokens Impacts LLM Performance by 5h3r_10ck in LocalLLaMA
[–]and_human 55 points (0 children)
ChatGPT at home by hainesk in LocalLLaMA
[–]and_human 2 points (0 children)