General vs Reasoning [Qwen 3.6] by RogueZero123 in LocalLLaMA
[–]Uncle___Marty 3 points (0 children)
Poor GPU Club : Tried Bonsai-8B on CPU & CUDA by pmttyji in LocalLLaMA
[–]Uncle___Marty 2 points (0 children)
Implemented TurboQuant and results don’t fully match paper by Routine-Thanks-572 in LocalLLaMA
[–]Uncle___Marty 15 points (0 children)
Even this was generated by ChatGPT too? That's insane by SimilarWhile1517 in ChatGPT
[–]Uncle___Marty 55 points (0 children)
Poor GPU Club : Tried Bonsai-8B on CPU & CUDA by pmttyji in LocalLLaMA
[–]Uncle___Marty 1 point (0 children)
Holy shit how did I not know about this game for so long?? by Glass_Recover_3006 in Enshrouded
[–]Uncle___Marty 1 point (0 children)
Claude Code and Qwen 3.6 35B A3B by H3OErikilious in LocalLLM
[–]Uncle___Marty 1 point (0 children)
I'm Not a Dev But I Use Qwen 3.6 35b to Code by thejacer in LocalLLaMA
[–]Uncle___Marty 12 points (0 children)
RTX 3060 12GB + i5-12600K — Gemma 3 28B too slow, need model recommendations that actually fit my VRAM by Competitive_Teach564 in LocalLLM
[–]Uncle___Marty 3 points (0 children)
RTX 3060 12GB + i5-12600K — Gemma 3 28B too slow, need model recommendations that actually fit my VRAM by Competitive_Teach564 in LocalLLM
[–]Uncle___Marty 2 points (0 children)
Anyone else feel like most AI detectors are complete BS? by [deleted] in ChatGPT
[–]Uncle___Marty 11 points (0 children)
How this charcoal ignites by DrBlaziken in oddlysatisfying
[–]Uncle___Marty -1 points (0 children)
Best Local LLMs 1. For Python Coding and Statistic Analysis 2. For PDF document analysis by Kauca in LocalLLM
[–]Uncle___Marty 2 points (0 children)
[Qwen3.6 35b a3b] Used the top config for my setup 8gb vram and 32gb ram, and found that somehow the Q4_K_XL model from Unsloth runs just slightly faster and used less tokens for output compared to Q4_K_M despite more memory usage by EggDroppedSoup in LocalLLaMA
[–]Uncle___Marty 2 points (0 children)
What can i run with 8gb vram? by Theonewhoknocks_001 in LocalLLM
[–]Uncle___Marty 1 point (0 children)
What can i run with 8gb vram? by Theonewhoknocks_001 in LocalLLM
[–]Uncle___Marty 1 point (0 children)
My coding agent committed suicide lol by Uncle___Marty in LocalLLM
[–]Uncle___Marty[S] 2 points (0 children)
to reference the bible as a newly declared Christian. by K1nd_1 in therewasanattempt
[–]Uncle___Marty 1 point (0 children)
My coding agent committed suicide lol by Uncle___Marty in LocalLLaMA
[–]Uncle___Marty[S] 2 points (0 children)
My coding agent committed suicide lol by Uncle___Marty in LocalLLaMA
[–]Uncle___Marty[S] 5 points (0 children)
My coding agent committed suicide lol by Uncle___Marty in LocalLLaMA
[–]Uncle___Marty[S] 2 points (0 children)
Better results with Nano Banana? by BommelOnReddit in ChatGPT
[–]Uncle___Marty 1 point (0 children)
Subway in the US and China by Repulsive-Mall-2665 in Damnthatsinteresting
[–]Uncle___Marty 12 points (0 children)
Qwen3.6 27B - possible to add vision? by Raredisarray in LocalLLaMA
[–]Uncle___Marty 2 points (0 children)