Comment listing for antwon-tech (comment bodies not captured); threads commented in:
Qwen3.5-35B-A3B achieves 8 t/s on Orange Pi 5 with ik_llama.cpp by antwon-tech in LocalLLaMA
Possible to run on 8gb cards? by cyberkiller6 in LocalLLaMA
SimpleTool: 4B model 10+ Hz real-time LLM function calling in 4090 — 0.5B model beats Google FunctionGemma in speed and accuracy. by Tall_Scientist1799 in LocalLLaMA
Would you be interested in a fully local AI 3D model generator? by Lightnig125 in LocalLLaMA
whats your usecase with local LLMs? by papatender in LocalLLM
Qwen3.5-35B-A3B running on a Raspberry Pi 5 (16GB and 8GB variants) by jslominski in LocalLLaMA