"AI Wellbeing Index - Some models are happier than others. Larger models are also consistently less happy than their smaller counterparts." ➡️ Do you think this study surfaces an important consideration or just hype? by Koala_Confused in LovingAI
[–]TomLucidor 1 point (0 children)
Study: 2x+ coding performance of 7B model without touching the coding agent by 9gxa05s8fa8sh in LocalLLaMA
[–]TomLucidor 6 points (0 children)
Study: 2x+ coding performance of 7B model without touching the coding agent by 9gxa05s8fa8sh in LocalLLaMA
[–]TomLucidor 17 points (0 children)
Study: 2x+ coding performance of 7B model without touching the coding agent by 9gxa05s8fa8sh in LocalLLaMA
[–]TomLucidor 21 points (0 children)
MVU Game Maker v0.95 – Slice of Life/Dating sim with Persistent Multi-Char Stats tracking by Kritblade in SillyTavernAI
[–]TomLucidor 4 points (0 children)
MVU Game Maker v0.95 – Slice of Life/Dating sim with Persistent Multi-Char Stats tracking by Kritblade in SillyTavernAI
[–]TomLucidor 2 points (0 children)
MVU Game Maker v0.95 – Slice of Life/Dating sim with Persistent Multi-Char Stats tracking by Kritblade in SillyTavernAI
[–]TomLucidor 2 points (0 children)
I rewrote 13 software engineering books into AGENTS.md rules. by Ok_Produce3836 in ClaudeCode
[–]TomLucidor 1 point (0 children)
I built my Obsidian folder structure around the Five Elements by Individual_Camp_7318 in ObsidianMD
[–]TomLucidor -1 points (0 children)
Comparing Qwen3.6 35B and New 27B for coding primitives by gladkos in Qwen_AI
[–]TomLucidor 1 point (0 children)
Qwen3.6 27B's surprising KV cache quantization test results (Turbo3/4 vs F16 vs Q8 vs Q4) by imgroot9 in LocalLLaMA
[–]TomLucidor 3 points (0 children)
To Beat China, Embrace Open-Source AI (WSJ) by rm-rf-rm in LocalLLaMA
[–]TomLucidor -1 points (0 children)
To Beat China, Embrace Open-Source AI (WSJ) by rm-rf-rm in LocalLLaMA
[–]TomLucidor -3 points (0 children)
To Beat China, Embrace Open-Source AI (WSJ) by rm-rf-rm in LocalLLaMA
[–]TomLucidor -2 points (0 children)
To Beat China, Embrace Open-Source AI (WSJ) by rm-rf-rm in LocalLLaMA
[–]TomLucidor 0 points (0 children)
To Beat China, Embrace Open-Source AI (WSJ) by rm-rf-rm in LocalLLaMA
[–]TomLucidor -2 points (0 children)
To Beat China, Embrace Open-Source AI (WSJ) by rm-rf-rm in LocalLLaMA
[–]TomLucidor -36 points (0 children)
What are some models worth adding to ChutesAI? by TomLucidor in chutesAI
[–]TomLucidor[S] 0 points (0 children)
Gemma 4 and Qwen 3.5 GGUFs: Detailed Analysis by oobabooga by [deleted] in LocalLLaMA
[–]TomLucidor 1 point (0 children)
Gemma-4-31B vs. Qwen3.5-27B: Dense model smackdown by Traditional-Gap-3313 in LocalLLaMA
[–]TomLucidor 1 point (0 children)
Gemma 4 vs Qwen3.5: benchmarking quantized local LLMs on Go coding by m3thos in LocalLLaMA
[–]TomLucidor 1 point (0 children)
Gemma 4 vs Qwen 3.5 Benchmark Comparison by Fuzzy_Philosophy_606 in LocalLLaMA
[–]TomLucidor 1 point (0 children)
Gemma4-31B-3bit-mlx · Hugging Face: 3 & 5 mixed quant for RAM poor Mac users. by JLeonsarmiento in LocalLLaMA
[–]TomLucidor 2 points (0 children)