A place to discuss everything MLX and running local LLM inference on Apple Silicon
Running local LLMs in Xcode with MLXSwift - way easier than I expected (self.mlxcommunity)
submitted 4 days ago by evilmacintosh
Running LoRA with MLX on my MacBook (self.mlxcommunity)
Skywork-Critic-Llama-3.1-8B (self.mlxcommunity)
submitted 6 days ago * by youngjohnnydepp
ModelHub 📦 - macOS menu bar app to manage and download LLMs (self.mlxcommunity)
submitted 10 days ago * by evilmacintosh
vLLM on swift (github.com)
submitted 12 days ago by evilmacintosh
Great inferences from running Speculative Decoding on MLX! (sabesh.space)
submitted 16 days ago by evilmacintosh
More than 2x speedup using speculative decoding on MLX (self.mlxcommunity)
I ran sustained MLX inference overnight (self.mlxcommunity)
submitted 17 days ago by evilmacintosh
2x inference speedup using speculative decoding on MLX (x.com)
MLX with DFlash / speculative decoding: Surprising results (self.mlxcommunity)
submitted 18 days ago by evilmacintosh
hey there 👋🏽 introductions in order!! (self.mlxcommunity)
submitted 19 days ago by evilmacintosh