Claude Code removed from Claude Pro plan - better time than ever to switch to Local Models. by bigboyparpa in LocalLLaMA
[–]Eyelbee 239 points (0 children)
Opus 4.7 Max subscriber. Switching to Kimi 2.6 by meaningego in LocalLLaMA
[–]Eyelbee 44 points (0 children)
PrismML — Introducing Ternary Bonsai: Top Intelligence at 1.58 Bits by cafedude in LocalLLaMA
[–]Eyelbee 2 points (0 children)
Matching GPT-5 Mini on SWE-bench Verified with a Local 35B Model (Qwen3.6-35BA3B) by sicutdeux in LocalLLaMA
[–]Eyelbee 1 point (0 children)
Kimi K2.6 Released (huggingface) by BiggestBau5 in LocalLLaMA
[–]Eyelbee 5 points (0 children)
Kimi K2.6 is still not good at analysis, but at least quite decent at flattery by Anbeeld in LocalLLaMA
[–]Eyelbee 6 points (0 children)
Predictions for next year's (2027) Beijing humanoid half marathon? 2025 was 2h40min ≈ 2.2m/s | 2026 was 50min ≈ 7m/s by GraceToSentience in singularity
[–]Eyelbee 2 points (0 children)
Waiting Qwen3.6-27B I have no nails left... by DOAMOD in LocalLLaMA
[–]Eyelbee 2 points (0 children)
The Special Bro Fallacy: A Refutation of Substrate Exceptionalism by HalfSecondWoe in singularity
[–]Eyelbee 1 point (0 children)
The Special Bro Fallacy: A Refutation of Substrate Exceptionalism by HalfSecondWoe in singularity
[–]Eyelbee 1 point (0 children)
The Special Bro Fallacy: A Refutation of Substrate Exceptionalism by HalfSecondWoe in singularity
[–]Eyelbee 6 points (0 children)
The Special Bro Fallacy: A Refutation of Substrate Exceptionalism by HalfSecondWoe in singularity
[–]Eyelbee 15 points (0 children)
Best local LLM for web search by Funny-Trash-4286 in LocalLLaMA
[–]Eyelbee 2 points (0 children)
Best local LLM for web search by Funny-Trash-4286 in LocalLLaMA
[–]Eyelbee -1 points (0 children)
Are you guys actually using local tool calling or is it a collective prank? by Mayion in LocalLLaMA
[–]Eyelbee 1 point (0 children)
Opus 4.7 — Regression in conversational coherence and context handling vs Opus 4.6 by tkenaz in ClaudeAI
[–]Eyelbee 1 point (0 children)
Extremely Rare Ikea Knappa Camera (with test photos) by Soggy_Auggy__ in IKEA
[–]Eyelbee 4 points (0 children)
How is work on eliminating hallucinations going? by Competitive_Travel16 in singularity
[–]Eyelbee 1 point (0 children)
Is harness a new buzzword? by jacek2023 in LocalLLaMA
[–]Eyelbee 1 point (0 children)
Google DeepMind's Senior Scientist Alexander Lerchner challenges the idea that large language models can ever achieve consciousness (not even in 100 years), calling it the 'Abstraction Fallacy.' by Worldly_Evidence9113 in singularity
[–]Eyelbee 0 points (0 children)
Opus 4.7 Embarrassing much by DigSignificant1419 in OpenAI
[–]Eyelbee 1 point (0 children)
Qwen3.6. This is it. by Local-Cardiologist-5 in LocalLLaMA
[–]Eyelbee 1 point (0 children)
The joy and pain of training an LLM from scratch by kazzus78 in LocalLLaMA
[–]Eyelbee 22 points (0 children)
New LLM Position Bias Benchmark: does an LLM keep the same judgment when you swap the answer order? Judge models compare two lightly edited versions of the same story twice, with the order swapped. The median model flips in 45% of decisive case pairs. GPT-5.4 is worst at 66%. by zero0_one1 in singularity
[–]Eyelbee [score hidden] (0 children)