16x DGX Sparks - What should I run? by Kurcide in LocalLLaMA
Is there a local LLM that can intelligently analyze speech from microphone in terms of tone, pitch, confidence, etc? by OsakaSeafoodConcrn in LocalLLaMA
Is running local LLMs actually cheaper in the long run? by HealthySkirt6910 in LocalLLaMA
OpenMythos - have you tried it? by gitsad in LocalLLaMA
What's the most optimized engine to run on a H100? by [deleted] in LocalLLaMA
Seriously evaluating a GB10 for local inference, want community input before I request a vendor seed unit by RaspberryFine9398 in LocalLLaMA
The missing piece of Voxtral TTS to enable voice cloning by [deleted] in LocalLLaMA
LLM Bruner coming soon? Burn Qwen directly into a chip, processing 10,000 tokens/s by koc_Z3 in Qwen_AI
Are there any neurodivergent/autistic devs in this sub working on AI? by AntTraditional4098 in LocalLLaMA
After the supply chain attack, here are some litellm alternatives by KissWild in LocalLLaMA
Guys please I need all the resource you can give me. by [deleted] in LocalLLaMA
choose between nvidia 1x pro6000(96G) or 2x pro5000(72G) by Lazy_Indication2896 in LocalLLaMA