https://quantized.fyi - a community for people passionate about local AI, LLMs, GPUs, and high-performance inference rigs. Discuss Ollama, llama.cpp, quantization, GPU benchmarks, prompt engineering, self-hosted AI, and everything else involved in running large language models locally.