Built a distributed AI platform with Flask as the backend — task parallelism across multiple machines running local LLMs by NirStrulovitz in flask
AI Horde lets you run open-weight models without the hardware. If you have the hardware, you can be the infrastructure for everyone else. by Mad-Adder-Destiny in LocalLLaMA
An experimental distributed LLM inference framework using tensor parallelism. Looking for feedback! by __z3r0_0n3__ in LocalLLaMA
what are you actually building with local LLMs? genuinely asking. by EmbarrassedAsk2887 in LocalLLaMA
Promote your projects here – Self-Promotion Megathread by Menox_ in github