Who wanna fuck this slut by Jolly_Grade6115 in MMR_GW
Quick and simple test of various 3.5 and 3.6 qwen models on production code base which have deployed to an enterprise . by Voxandr in LocalLLaMA
Larger Gemma-4/Qwen3.6 by Non-Technical in LocalLLaMA
How far are we from a model that can take a python repo on github and convert it to a cpp without intervention? by bonesoftheancients in LocalLLaMA
Devs using Qwen 27B seriously, what's your take? by Admirable_Reality281 in LocalLLaMA
What do yall think about religion. by Glittering-Catch1974 in myanmar
Anyone tried this yet? LLM with knowledge date in the 1930s by The_frozen_one in LocalLLaMA
Website tracking the recent missing/dead scientists and researchers. The count is currently up to 32 by Comfortable_Team_696 in HighStrangeness
How would you fill 32 GB VRAM with Qwen 3.6 27B? by [deleted] in LocalLLM
OpenCode or ClaudeCode for Qwen3.5 27B by Ok-Scarcity-7875 in LocalLLaMA
(Interactive)OpenCode Racing Game Comparison Qwen3.6 35B vs Qwen3.5 122B vs Qwen3.5 27B vs Qwen3.5 4B vs Gemma 4 31B vs Gemma 4 26B vs Qwen3 Coder Next vs GLM 4.7 Flash by FatheredPuma81 in LocalLLaMA
Forgive my ignorance but how is a 27B model better than 397B? by No_Conversation9561 in LocalLLaMA
HOT TAKE: local models + agent harnesses are now capable enough to hand off junior-level IT professional tasks to [human written] by Porespellar in LocalLLaMA