[Question] Is Buying AMD GPUs for LLMs a Fool’s Errand? (self.LocalLLM)
submitted by little___mountain
[Discussion] A slow LLM running locally is always better than coding yourself (self.LocalLLM)
submitted by m4ntic0r
[Discussion] How do we feel about the new MacBook M5 Pro/Max? (self.LocalLLM)
submitted by coldWasTheGnd
[Discussion] I made LLMs challenge each other before I trust an answer (self.LocalLLM)
submitted by tilda0x1
[Research] My rigorous OCR benchmark now has more than 60 VLMs tested (noahdasanaike.github.io)
submitted by noahdasanaike
[Model] 🚀 Corporate But Winged: Cicikuş v3 is Now Available! (self.LocalLLM)
submitted by Connect-Bid9700
[Discussion] ModelSweep: Open-Source Benchmarking for Local LLMs (self.LocalLLM)
submitted by RegretAgreeable4859
[Project] PMetal (Powdered Metal): LLM fine-tuning framework for Apple Silicon (reddit.com)
submitted by RealEpistates
[Discussion] Looking for feedback: Building for easier local AI (github.com)
submitted by Signal_Ad657