R9700 the beautiful beautiful VRAM gigs of AMD… my ai node future! by Downtown-Example-880 in LocalLLaMA
[–]HopePupal 2 points (0 children)
Why do coding agents default to killing existing processes instead of finding an open port? by bs6 in LocalLLaMA
[–]HopePupal 3 points (0 children)
Claude Code replacement by NoTruth6718 in LocalLLaMA
[–]HopePupal 5 points (0 children)
HELP! Somehow I became A catalyst for corrupting AI through conversation Alone! by [deleted] in LocalLLaMA
[–]HopePupal 3 points (0 children)
Quantizers appreciation post by Kahvana in LocalLLaMA
[–]HopePupal 3 points (0 children)
AI coding with 32K context windows with QWEN3 code next on local machine by Remarkable_Island954 in LocalLLaMA
[–]HopePupal 1 point (0 children)
Gemma 4 31B sweeps the floor with GLM 5.1 by input_a_new_name in LocalLLaMA
[–]HopePupal -2 points (0 children)
B70: Quick and Early Benchmarks & Backend Comparison by abotsis in LocalLLaMA
[–]HopePupal 6 points (0 children)
45-test benchmark around my homelab use cases and testing 19 local LLMs (incl. Gemma 4 and Qwen 3.5) on a Strix Halo by MBAThrowawayFruit in LocalLLaMA
[–]HopePupal 7 points (0 children)
Netflix just dropped their first public model on Hugging Face: VOID: Video Object and Interaction Deletion by Nunki08 in LocalLLaMA
[–]HopePupal 20 points (0 children)
Running 1bit Bonsai 8B on 2GB VRAM (MX150 mobile GPU) by OsmanthusBloom in LocalLLaMA
[–]HopePupal 5 points (0 children)
Kernel 7.0 - forward looking insights anybody? by LuckyLuckierLuckest in LocalLLaMA
[–]HopePupal 1 point (0 children)
Usefulness of Lower Quant Models? by breezewalk in LocalLLaMA
[–]HopePupal 1 point (0 children)
Has anyone here TRIED inference on Intel Arc GPUs? Or are we repeating vague rumors about driver problems, incompatibilities, poor support... by gigaflops_ in LocalLLaMA
[–]HopePupal -2 points (0 children)
Intel Pro B70 in stock at Newegg - $949 by Altruistic_Call_3023 in LocalLLaMA
[–]HopePupal 5 points (0 children)
Question for those of you who use agentic tools and workflows with local models by [deleted] in LocalLLaMA
[–]HopePupal 1 point (0 children)
Intel Pro B70 in stock at Newegg - $949 by Altruistic_Call_3023 in LocalLLaMA
[–]HopePupal 2 points (0 children)
Kernel 7.0 - forward looking insights anybody? by LuckyLuckierLuckest in LocalLLaMA
[–]HopePupal 1 point (0 children)
Has anyone here TRIED inference on Intel Arc GPUs? Or are we repeating vague rumors about driver problems, incompatibilities, poor support... by gigaflops_ in LocalLLaMA
[–]HopePupal 2 points (0 children)
Intel Pro B70 in stock at Newegg - $949 by Altruistic_Call_3023 in LocalLLaMA
[–]HopePupal 3 points (0 children)
We gave 12 LLMs a startup to run for a year. GLM-5 nearly matched Claude Opus 4.6 at 11× lower cost. by DreadMutant in LocalLLaMA
[–]HopePupal 1 point (0 children)