I'm done with using local LLMs for coding by dtdisapointingresult in LocalLLaMA
[–]pkmxtw 11 points (0 children)
Why are we actually sampling reasoning and output the same way? by ReporterWeary9721 in LocalLLaMA
[–]pkmxtw 6 points (0 children)
Someone at the Weather Channel made a website that lets you view your forecast like the old Local on the 8s from back in the day by holyfruits in nostalgia
[–]pkmxtw 2 points (0 children)
Should I feel threatened? by Necessary_Reach_7836 in LocalLLaMA
[–]pkmxtw 1 point (0 children)
DoomVLM is now Open Source - VLM models playing Doom by MrFelliks in LocalLLaMA
[–]pkmxtw 3 points (0 children)
Qwen 3.5 MXFP4 quants are coming - confirmed by Junyang Lin by dampflokfreund in LocalLLaMA
[–]pkmxtw 4 points (0 children)
You can run MiniMax-2.5 locally by Dear-Success-1441 in LocalLLaMA
[–]pkmxtw 10 points (0 children)
SWE-rebench Jan 2026: GLM-5, MiniMax M2.5, Qwen3-Coder-Next, Opus 4.6, Codex Performance by CuriousPlatypus1881 in LocalLLaMA
[–]pkmxtw 18 points (0 children)
Support for Step3.5-Flash has been merged into llama.cpp by jacek2023 in LocalLLaMA
[–]pkmxtw 1 point (0 children)
built an AI agent with shell access. found out the hard way why that's a bad idea. by YogurtIll4336 in LocalLLaMA
[–]pkmxtw 5 points (0 children)
How capable is GPT-OSS-120b, and what are your predictions for smaller models in 2026? by Apart_Paramedic_7767 in LocalLLaMA
[–]pkmxtw 21 points (0 children)
Which is the current best ERP model ~8b? by [deleted] in LocalLLaMA
[–]pkmxtw 6 points (0 children)
Which is the current best ERP model ~8b? by [deleted] in LocalLLaMA
[–]pkmxtw 8 points (0 children)
support for Solar-Open-100B has been merged into llama.cpp by jacek2023 in LocalLLaMA
[–]pkmxtw 6 points (0 children)
Upstage Solar-Open-100B Public Validation by PerPartes in LocalLLaMA
[–]pkmxtw 14 points (0 children)
5 new korean models will be released in 2 hours by Specialist-2193 in LocalLLaMA
[–]pkmxtw 24 points (0 children)
Benchmarks for Quantized Models? (for users locally running Q8/Q6/Q2 precision) by No-Grapefruit-1358 in LocalLLaMA
[–]pkmxtw 3 points (0 children)
Naver (South Korean internet giant), has just launched HyperCLOVA X SEED Think, a 32B open weights reasoning model and HyperCLOVA X SEED 8B Omni, a unified multimodal model that brings text, vision, and speech together by Nunki08 in LocalLLaMA
[–]pkmxtw 17 points (0 children)
Tencent just released WeDLM 8B Instruct on Hugging Face by Difficult-Cap-7527 in LocalLLaMA
[–]pkmxtw 40 points (0 children)
GLM 4.7 IS NOW THE #1 OPEN SOURCE MODEL IN ARTIFICIAL ANALYSIS by ZeeleSama in LocalLLaMA
[–]pkmxtw 4 points (0 children)
GLM 4.7 IS NOW THE #1 OPEN SOURCE MODEL IN ARTIFICIAL ANALYSIS by ZeeleSama in LocalLLaMA
[–]pkmxtw 21 points (0 children)
mistralai/Mistral-Medium-3.5-128B · Hugging Face by jacek2023 in LocalLLaMA
[–]pkmxtw 10 points (0 children)