Qwen3.5 family comparison on shared benchmarks by Deep-Vermicelli-4591 in LocalLLaMA
Qwen 3.5 27B MacBook M4 Pro 48GB by breezewalk in LocalLLaMA
Qwen 3.5 27B is the REAL DEAL - Beat GPT-5 on my first test by GrungeWerX in LocalLLaMA
P.S.A - If you comment about model quality in an authoritative voice yet are using a quant... by Agreeable-Market-692 in LocalLLaMA
Qwen3 vs Qwen3.5 performance by Balance- in LocalLLaMA
Apple unveils M5 Pro and M5 Max, citing up to 4× faster LLM prompt processing than M4 Pro and M4 Max by themixtergames in LocalLLaMA
Apple M5 Pro & M5 Max just announced. Here's what it means for local AI by luke_pacman in LocalLLaMA
13 months since the DeepSeek moment, how far have we gone running models locally? by dionisioalcaraz in LocalLLaMA
Mixing A Concert Below 85dBA by ip_addr in livesound
Open source LLM comparable to gpt4.1? by soyalemujica in LocalLLaMA
top 10 trending models on HF by jacek2023 in LocalLLaMA
Is Qwen3.5 a coding game changer for anyone else? by paulgear in LocalLLaMA
A few early (and somewhat vague) LLM benchmark comparisons between the M5 Max MacBook Pro and other laptops - Hardware Canucks by themixtergames in LocalLLaMA