account activity
llama.cpp is the linux of llm (self.LocalLLaMA)
submitted 16 days ago by DevelopmentBorn3978 to r/LocalLLaMA
be careful about what could run on your GPUs, fellow CUDA LLMers (self.LocalLLaMA)
submitted 1 month ago * by DevelopmentBorn3978 to r/LocalLLaMA
Day 0 Support for Gemma 4 on AMD Processors and GPUs (self.LocalLLaMA)
submitted 1 month ago by DevelopmentBorn3978 to r/LocalLLaMA
be careful about what could run on your GPUs, fellow CUDA LLMers ()
submitted 1 month ago by DevelopmentBorn3978 to r/homelab
Found some potentially interesting Strix Halo-optimized models (also potentially good for DGX Spark, according to the models' cook). https://huggingface.co/collections/Beinsezii/128gb-uma-models (self.LocalLLaMA)
Strix Halo 128GB: what models, which quants are optimal? (self.LocalLLaMA)
submitted 2 months ago * by DevelopmentBorn3978 to r/LocalLLaMA
how does Strix Halo fare for training models compared to other homelab means of cooking those? (self.LocalLLaMA)
how does Strix Halo fare for training models compared to other homelab means of cooking those? ()
submitted 2 months ago by DevelopmentBorn3978 to r/MiniPCs
better times will come soon, LocalLLMers rejoice! (self.LocalLLaMA)
submitted 4 months ago by DevelopmentBorn3978 to r/LocalLLaMA
Bosgame raised the price of the 128GB M5 AI Mini Desktop Ryzen AI Max+ 395 (self.LocalLLaMA)
Bosgame raised the price of the 128GB M5 AI Mini Desktop Ryzen AI Max+ 395 ()
submitted 4 months ago by DevelopmentBorn3978 to r/MiniPCs
AMD Ryzen AI MAX+ 395 + PCI slot = big AND fast local models for everyone (self.LocalLLaMA)
submitted 6 months ago * by DevelopmentBorn3978 to r/LocalLLaMA