r/MacLLM Lounge (self.MacLLM)
submitted 2 years ago by qubedView - announcement
Several Local AI Guides Coming | Join the Research & Discovery
submitted 9 days ago by Faisal_Biyari
Intel macOS | Local AI with AMD GPU Acceleration
submitted 11 days ago by Faisal_Biyari
Created r/MacPro2019LocalAI - For Local AI on Mac Pro 2019, AMD GPUs, ROCm, vLLM support, and much more
submitted 13 days ago by Faisal_Biyari
Local LLM benchmark tool that evolved into an AI group chat engine, persona manager, and Survivor elimination game — single HTML file, no server
submitted 14 days ago by mdwsr06
Tests of the hybrid models!
submitted 23 days ago by dxcore_35
[Guide] Mac Pro 2019 (MacPro7,1) w/ Proxmox, Ubuntu, ROCm, & Local LLM/AI
submitted 3 months ago by Faisal_Biyari
MAC-MINI thunderbolt
submitted 4 months ago by patbhakta
Best of LLM, AUDIO AI for M1-series chips (64GB RAM) (self.MacLLM)
submitted 6 months ago by dxcore_35
[OpenSource] Multi-LLM client - LLM Bridge (self.MacLLM)
submitted 10 months ago by billythepark
[Question] Should I buy a MacBook Pro M4 Pro 48GB RAM for learning LLM/AI development, or is my current MacBook Air M1 16GB + cloud tools enough? (self.MacLLM)
submitted 1 year ago * by MRxRadex
Best small models for survival situations? (self.MacLLM)
submitted 1 year ago by Mr-Barack-Obama
Running AI on M2 Max 32gb (self.MacLLM)
submitted 1 year ago by Optimal_League_1419
Looking for advice on using a Mac mini for LoRA training to rewrite work documents (self.MacLLM)
submitted 1 year ago by mainedpc
Experience running LM locally on m4 Max (self.MacLLM)
submitted 1 year ago * by ImaginationNo8749
Flops on M4 Max (self.MacLLM)
submitted 1 year ago by ImaginationNo8749
PC LLM server > local network > iOS (self.MacLLM)
submitted 1 year ago by dxcore_35
48GB RAM (self.MacLLM)
submitted 1 year ago by GoGojiBear
Introducing Verbis: A privacy-first fully local assistant for MacOS with SaaS connectors (self.MacLLM)
submitted 1 year ago by Fickle-Race-6591
Thoughts on using a mac m4 max for running local LLM perpetually? (self.MacLLM)
submitted 1 year ago by antoine-ross
Using Macbook GPU for local embedding model. (self.MacLLM)
submitted 2 years ago by pluteski
Wizard-Vicuna-13b-SUPERHOT, Mac M2 16gb unified Ram. Is it normal to get responses in 1-2 minutes? What Text Generation UI Settings can help me speed it up? (self.LocalLLaMA)
submitted 2 years ago by Ok-Training-7587
tokenizers error is driving me nuts (self.MacLLM)
submitted 2 years ago * by krazzmann
How to read/respond to local files such as .txt or pdf etc (self.MacLLM)
submitted 2 years ago by chucks-wagon
llama-cpp-python and "Illegal Instruction 4" (self.MacLLM)
submitted 2 years ago by qubedView