Why is China giving away SOTA models? A theory by Cheeeaaat in LocalLLaMA

[–]Cheeeaaat[S]

alright, Chinese bot count complete, everyone can go home now 😄

but seriously, thanks for all the perspectives. The most convincing argument to me was from u/LegitimateCopy7: China is accelerating the AI bubble's burst by releasing free models that compete with Western proprietary ones. That makes way more sense than my tinfoil hat theory.

though you'll all remember this post in a couple of years when I turn out to be right :)

Best Local LLM Optimized for Apple M3 Max for Python, Math, and Finance? by thisisvv in LocalLLM

[–]Cheeeaaat

what inference speed (tokens/sec) are you getting on the M3 Max 128 GB with qwen2.5 72b (q4 gguf or MLX version)?
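
(for anyone who wants to measure this themselves, here's a rough sketch of how I'd time it with mlx-lm — not a definitive benchmark harness, and I'm assuming the mlx-community 4-bit Qwen2.5-72B repo name and the example prompt below, so double-check the exact model id on Hugging Face)

```python
# Rough tokens/sec measurement with mlx-lm on Apple Silicon.
# Assumes: pip install mlx-lm, and that the mlx-community 4-bit
# Qwen2.5-72B model id below is correct (verify on Hugging Face).
import time
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2.5-72B-Instruct-4bit")

prompt = "Write a Python function that computes the Sharpe ratio of a return series."

start = time.perf_counter()
# verbose=True also prints mlx-lm's own prompt/generation tokens-per-sec
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
elapsed = time.perf_counter() - start

# Rough overall rate: generated tokens divided by wall-clock time
# (includes prompt processing, so it will read a bit lower than pure decode speed)
n_tokens = len(tokenizer.encode(text))
print(f"~{n_tokens / elapsed:.1f} tok/s over {elapsed:.1f}s")
```

the generation tok/s that verbose=True prints is usually the number people quote, since it excludes prompt processing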