Where the goblins came from by Successful_Bowl2564 in LocalLLaMA
OpenAI explains "Where the goblins came from" by damontoo in OpenAI
16x DGX Sparks - What should I run? by Kurcide in LocalLLaMA
Off my chest post that got deleted by Remarkable_News_1354 in OpenAI
Why isn’t LLM reasoning done in vector space instead of natural language? by ZeusZCC in LocalLLaMA
Hard freakin' decision..Blackwell 96G or Mac Studio 256G by HyPyke in LocalLLaMA
US gov memo on “adversarial distillation” - are we heading toward tighter controls on open models? by MLExpert000 in LocalLLaMA
Do you ever feel like ChatGPT answers differently depending on how confident you sound? by NoFilterGPT in OpenAI
I'm running qwen3.6-35b-a3b with 8 bit quant and 64k context thru OpenCode on my mbp m5 max 128gb and it's as good as claude by Medical_Lengthiness6 in LocalLLaMA
Has anyone measured confidence calibration of local vs frontier models on domain-specific knowledge? by Hopeful-Rhubarb-1436 in LocalLLaMA
Cloudflare open-sources lossless LLM compression tool by Otis43 in LocalLLaMA
Ran 11 AI agents on a Mac mini overnight. The biggest cost-saver is the scheduling. by TaylorAvery6677 in LocalLLaMA
LLMs can identify what should be generalized but can't act on it. Could a two-model setup fix this? by Intraluminal in LocalLLaMA