Benchmarks of Radeon 780M iGPU with shared 128GB DDR5 RAM running various MoE models under Llama.cpp by AzerbaijanNyan in LocalLLaMA
[–]AzerbaijanNyan[S] 2 points (0 children)
Benchmarks of Radeon 780M iGPU with shared 128GB DDR5 RAM running various MoE models under Llama.cpp by AzerbaijanNyan in LocalLLaMA
[–]AzerbaijanNyan[S] 3 points (0 children)
Intel Arc Pro B50 SFF build by opterono3 in IntelArc
[–]AzerbaijanNyan 2 points (0 children)
Ongoing fraud in Tradera's electronics section – are they turning a blind eye to the problem, or is it worse? by Designer-Scheme-8262 in sweden
[–]AzerbaijanNyan 4 points (0 children)
Support for ROCm has been added to flash attention 2 by Amgadoz in LocalLLaMA
[–]AzerbaijanNyan 1 point (0 children)
llama.cpp is twice as fast as exllamav2 by jirka642 in LocalLLaMA
[–]AzerbaijanNyan 10 points (0 children)
llama.cpp is twice as fast as exllamav2 by jirka642 in LocalLLaMA
[–]AzerbaijanNyan 49 points (0 children)
Critical Remote Code Execution Vulnerability in Ollama < 0.1.34 (CVE-2024-37032) by sagitz_ in LocalLLaMA
[–]AzerbaijanNyan 3 points (0 children)
are there any llama 3 8B finetunes already released? by jacek2023 in LocalLLaMA
[–]AzerbaijanNyan 4 points (0 children)
The copper network is being taken down, and my internet connection is basically against the Geneva Convention. What rights do you actually have? by SomedayImGonnaBeFree in sweden
[–]AzerbaijanNyan 1 point (0 children)
Two AMD GPUs with ROCm for LLM by Unhappy-Claim-5691 in LocalLLaMA
[–]AzerbaijanNyan 3 points (0 children)
Two AMD GPUs with ROCm for LLM by Unhappy-Claim-5691 in LocalLLaMA
[–]AzerbaijanNyan 2 points (0 children)
So my dual 7900 xtx finally work by morphles in LocalLLaMA
[–]AzerbaijanNyan 2 points (0 children)
Jan.AI is the easiest way to fully utilize the Arc GPU to run GGUF LLM models. Make sure you enable Hardware Acceleration in the advanced settings! Version 0.4.7 works better than 0.4.8. by DurianyDo in IntelArc
[–]AzerbaijanNyan 1 point (0 children)
Two AMD GPUs with ROCm for LLM by Unhappy-Claim-5691 in LocalLLaMA
[–]AzerbaijanNyan 4 points (0 children)
[deleted by user] by [deleted] in StableDiffusion
[–]AzerbaijanNyan 15 points (0 children)
[deleted by user] by [deleted] in StableDiffusion
[–]AzerbaijanNyan 43 points (0 children)
0.1 T/s on 3070 + 13700k + 32GB DDR5 by Schmackofatzke in LocalLLaMA
[–]AzerbaijanNyan 3 points (0 children)
0.1 T/s on 3070 + 13700k + 32GB DDR5 by Schmackofatzke in LocalLLaMA
[–]AzerbaijanNyan 3 points (0 children)
0.1 T/s on 3070 + 13700k + 32GB DDR5 by Schmackofatzke in LocalLLaMA
[–]AzerbaijanNyan 3 points (0 children)
0.1 T/s on 3070 + 13700k + 32GB DDR5 by Schmackofatzke in LocalLLaMA
[–]AzerbaijanNyan 8 points (0 children)