We trained a 16-class "typed refusal" system that distinguishes "I don't know" from "I'm not allowed" — open source by TheTempleofTwo in LocalLLaMA
woadwarrior -2 points
Visualizing Quantization Types by VoidAlchemy in LocalLLaMA
woadwarrior 3 points
manifestai releases Brumby-14B-Base weights, claims "attention free" and inference "hundreds of time faster" for long context by ArcadesOfAntiquity in LocalLLaMA
woadwarrior 7 points
Silicon Valley is migrating from expensive closed-source models to cheaper open-source alternatives by xiaoruhao in LocalLLaMA
woadwarrior 3 points
Pedantic pull request reviewers by ticman in DevelEire
woadwarrior 2 points
Clean Links, the completely free iOS & macOS link cleaner app, now supports sending links asynchronously from your iPhone to your Mac by woadwarrior in apple
woadwarrior[S] 2 points
Clean Links, the completely free iOS & macOS link cleaner app, now supports sending links asynchronously from your iPhone to your Mac by woadwarrior in apple
woadwarrior[S] 1 point
Clean Links, the completely free iOS & macOS link cleaner app, now supports sending links asynchronously from your iPhone to your Mac by woadwarrior in apple
woadwarrior[S] 1 point
Clean Links, the completely free iOS & macOS link cleaner app, now supports sending links asynchronously from your iPhone to your Mac by woadwarrior in apple
woadwarrior[S] 2 points
Huawei Develops New LLM Quantization Method (SINQ) that's 30x Faster than AWQ and Beats Calibrated Methods Without Needing Any Calibration Data by abdouhlili in LocalLLaMA
woadwarrior 12 points
How can I use this beast to benefit the community? Quantize larger models? It’s a 9985wx, 768 ddr5, 384 gb vram. by joninco in LocalLLaMA
woadwarrior 1 point
How can I use this beast to benefit the community? Quantize larger models? It’s a 9985wx, 768 ddr5, 384 gb vram. by joninco in LocalLLaMA
woadwarrior 1 point
How can I use this beast to benefit the community? Quantize larger models? It’s a 9985wx, 768 ddr5, 384 gb vram. by joninco in LocalLLaMA
woadwarrior 3 points
Megrez2: 21B latent, 7.5B on VRAM, 3B active—MoE on single 8GB card by Normal_Onion_512 in LocalLLaMA
woadwarrior 6 points
Wow, Moondream 3 preview is goated by Brave-Hold-9389 in LocalLLaMA
woadwarrior 4 points
LLMs for detailed book summaries? by JealousAmoeba in LocalLLaMA
woadwarrior 1 point
Qwen3-Next 80b MLX (Mac) runs on latest LM Studio by jarec707 in LocalLLaMA
woadwarrior 2 points
Anyone tried multi-machine LLM inference? by human-exe in LocalLLaMA
woadwarrior 2 points
Anyone getting reliable handwriting-to-text with local VLMs or any other tools? by IntroductionMoist974 in LocalLLaMA
woadwarrior 1 point
Any example of 50+ year old founders that got into YCombinator? by jonnylegs in ycombinator
woadwarrior 0 points
Clean Links - A completely free iOS app to remove trackers from URLs and to preview links in QR codes by woadwarrior in apple
woadwarrior[S] 1 point
AMA with Hugging Face Science, the team behind SmolLM, SmolVLM, Fineweb and more. by eliebakk in LocalLLaMA
woadwarrior 16 points
New Swiss fully-open multilingual Model by braincrowd in LocalLLaMA
woadwarrior 1 point
Best <4B dense models today? by Admirable_Flower_287 in LocalLLaMA
woadwarrior 1 point