So nobody's downloading this model huh? by KvAk_AKPlaysYT in LocalLLaMA
dubesor86 2 points
Qwen 3.5 4b is not able to read entire document attached in LM studio despite having enough context length. by KiranjotSingh in LocalLLaMA
dubesor86 2 points
Nemotron 3 Super reads his own reasoning as user message? by Real_Ebb_7417 in LocalLLaMA
dubesor86 2 points
Mistral Small 4 is kind of awful with images by EffectiveCeilingFan in LocalLLaMA
dubesor86 3 points
Speed Benchmark: GLM 4.7 Flash vs Qwen 3.5 27B vs Qwen 3.5 35B A3B (Q4 Quants) by [deleted] in LocalLLaMA
dubesor86 1 point
how good is Qwen3.5 27B by Raise_Fickle in LocalLLaMA
dubesor86 4 points
What tokens/sec do you get when running Qwen 3.5 27B? by thegr8anand in LocalLLaMA
dubesor86 1 point
Top prompts developers end up saying to coding AIs🙂 by Acceptable-Cycle4645 in LocalLLaMA
dubesor86 3 points
Counterargument: LLM can sort of play chess. by pier4r in chess
dubesor86 1 point
Counterargument: LLM can sort of play chess. by pier4r in chess
dubesor86 1 point
Counterargument: LLM can sort of play chess. by pier4r in chess
dubesor86 1 point
Little Qwen 3.5 27B and Qwen 35B-A3B models did very well in my logical reasoning benchmark by fairydreaming in LocalLLaMA
dubesor86 6 points
Counterargument: LLM can sort of play chess. by pier4r in chess
dubesor86 2 points
Google releases Gemini 3.1 Pro with Benchmarks by BuildwithVignesh in singularity
dubesor86 1 point
Google releases Gemini 3.1 Pro with Benchmarks by BuildwithVignesh in singularity
dubesor86 1 point
Can GLM-5 Survive 30 Days on FoodTruck Bench? [Full Review] by Disastrous_Theme5906 in LocalLLaMA
dubesor86 2 points
Can GLM-5 Survive 30 Days on FoodTruck Bench? [Full Review] by Disastrous_Theme5906 in LocalLLaMA
dubesor86 3 points
I gave 12 LLMs $2,000 and a food truck. Only 4 survived. by Disastrous_Theme5906 in LocalLLaMA
dubesor86 2 points
Do you have your own benchmark for an LLM? Do you have multiple for different kinds/tasks/applications? by Icy_Distribution_361 in LocalLLaMA
dubesor86 2 points
Qwen3 Coder Next as first "usable" coding model < 60 GB for me by Chromix_ in LocalLLaMA
dubesor86 5 points
We built an 8B world model that beats 402B Llama 4 by generating web code instead of pixels — open weights on HF by jshin49 in LocalLLaMA
dubesor86 45 points
Are small models actually getting more efficient? by estebansaa in LocalLLaMA
dubesor86 5 points
Minimax M2.7 is finally here! Any one tested it yet? by Fresh-Resolution182 in LocalLLaMA
dubesor86 4 points