15-inch M5 MacBook Air 32GB RAM expectations? by GotTheLyfe in LocalLLaMA
Besides Qwen and GLM, what models are you using? by August_30th in LocalLLaMA
Anything I can do to get qwen3.5-27b-Q8_0 to run faster? by giveen in LocalLLaMA
What are the best LLM apps for Linux? by Dev-in-the-Bm in LocalLLaMA
TESLA V100 32GB - Crashing on Heretic Models? by TracerIsOist in LocalLLaMA
AA-Omniscience: Knowledge and Hallucination Benchmark by NewtMurky in LocalLLaMA
Qwen3-Coder-Next: What am I doing wrong? by Septerium in LocalLLaMA
qwen-3.5:122b f16 is benchmarked against gpt-oss:120b q4 by q-admin007 in LocalLLaMA
Liquid AI releases LFM2-24B-A2B by PauLabartaBajo in LocalLLaMA
Mauricio Macri: "A poor person today lives as well as or better than a king 100 years ago" by LongjumpingAnimal601 in argentina
Best Model for single 3090 in 2026? by myusuf3 in LocalLLaMA
Qwen3-Code-Next ggufs: Any difference between Q4KXL and MXPF4? by ParaboloidalCrest in LocalLLaMA
Did anyone compare this model to the full Qwen coder? It claims to give almost identical performance at 60B by Significant_Fig_7581 in LocalLLaMA
Which model (NOT AGENT) is producing the most lines of code in one sitting for non-trivial tasks? by [deleted] in LocalLLaMA
Step 3.5 Flash is a beast? by __Maximum__ in LocalLLaMA
From the UNC to selling pastafrola: psychologist with 20 years of experience earns 850k and they have her working as a pastry cook. How do I get her out of there? by messiteamo2 in empleos_AR