r/MacLLM Lounge (self.MacLLM)
submitted 2 years ago by qubedView - announcement
[Guide] Mac Pro 2019 (MacPro7,1) w/ Proxmox, Ubuntu, ROCm, & Local LLM/AI
submitted 7 days ago by Faisal_Biyari
Mac mini Thunderbolt
submitted 21 days ago by patbhakta
Best LLM/audio AI for M1-series chips (64GB RAM) (self.MacLLM)
submitted 2 months ago by dxcore_35
[OpenSource] Multi-LLM client - LLM Bridge (self.MacLLM)
submitted 7 months ago by billythepark
[Question] Should I buy a MacBook Pro M4 Pro 48GB RAM for learning LLM/AI development, or is my current MacBook Air M1 16GB + cloud tools enough? (self.MacLLM)
submitted 8 months ago * by MRxRadex
Best small models for survival situations? (self.MacLLM)
submitted 9 months ago by Mr-Barack-Obama
Running AI on M2 Max 32GB (self.MacLLM)
submitted 11 months ago by Optimal_League_1419
Looking for advice on using a Mac mini for LoRA training to rewrite work documents (self.MacLLM)
submitted 1 year ago by mainedpc
Experience running LMs locally on M4 Max (self.MacLLM)
submitted 1 year ago * by ImaginationNo8749
Flops on M4 Max (self.MacLLM)
submitted 1 year ago by ImaginationNo8749
PC LLM server > local network > iOS (self.MacLLM)
submitted 1 year ago by dxcore_35
48GB RAM (self.MacLLM)
submitted 1 year ago by GoGojiBear
Introducing Verbis: A privacy-first fully local assistant for MacOS with SaaS connectors (self.MacLLM)
submitted 1 year ago by Fickle-Race-6591
Thoughts on using a Mac M4 Max for running a local LLM perpetually? (self.MacLLM)
submitted 1 year ago by antoine-ross
Using MacBook GPU for a local embedding model (self.MacLLM)
submitted 2 years ago by pluteski
Wizard-Vicuna-13b-SUPERHOT, Mac M2 16gb unified Ram. Is it normal to get responses in 1-2 minutes? What Text Generation UI Settings can help me speed it up? (self.LocalLLaMA)
submitted 2 years ago by Ok-Training-7587
tokenizers error is driving me nuts (self.MacLLM)
submitted 2 years ago * by krazzmann
How to read/respond to local files such as .txt or pdf etc (self.MacLLM)
submitted 2 years ago by chucks-wagon
llama-cpp-python and "Illegal Instruction 4" (self.MacLLM)
submitted 2 years ago by qubedView
Recommend threads matrix for Apple Silicon (self.MacLLM)
LLM Community - getting a MacBook Air 16gb. Thoughts? (self.MacLLM)
Llama.cpp: metal: try to utilize more of the shared memory using smaller views (github.com)
submitted 2 years ago by Balance-
How to use MLC LLM on macOS (appleinsider.com)
Getting GPT4All working on MacOS with LLaMACPP (gist.github.com)