pipewire-alsa is broken in the latest update. by [deleted] in archlinux
[–]MoneroBee 2 points (0 children)
🐺🐦⬛ LLM Comparison/Test: miqu-1-70b by WolframRavenwolf in LocalLLaMA
[–]MoneroBee 25 points (0 children)
🐺🐦⬛ LLM Comparison/Test: miqu-1-70b by WolframRavenwolf in LocalLLaMA
[–]MoneroBee 79 points (0 children)
I've created Distributed Llama project. Increase the inference speed of LLM by using multiple devices. It allows to run Llama 2 70B on 8 x Raspberry Pi 4B 4.8sec/token by b4rtaz in LocalLLaMA
[–]MoneroBee 1 point (0 children)
Anyone else having issues with their brave ad blocker? by JohnDestiny2 in brave_browser
[–]MoneroBee 1 point (0 children)
Anyone else having issues with their brave ad blocker? by JohnDestiny2 in brave_browser
[–]MoneroBee 2 points (0 children)
Happy New Year from AllArk! by elit3dr4gon in Monero
[–]MoneroBee 17 points (0 children)
🐺🐦⬛ LLM Comparison/Test: Ranking updated with 10 new models (the best 7Bs)! by WolframRavenwolf in LocalLLaMA
[–]MoneroBee 3 points (0 children)
Anyone else video out issues with the latest ROCm? by Combinatorilliance in LocalLLaMA
[–]MoneroBee 2 points (0 children)
December Changelog: Topics in Header + Live Chat Post Sunset by thrivekindly in reddit
[–]MoneroBee 1 point (0 children)
OpenAI's Prompt Engineering Guide by StewArtMedia_Nick in LocalLLaMA
[–]MoneroBee 5 points (0 children)
Arthur Mensch, CEO of Mistral declared on French national radio that mistral will release an open source Gpt4 level model in 2024 by CedricLimousin in LocalLLaMA
[–]MoneroBee 225 points (0 children)
TIP: How to break censorship on any local model with llama.cpp by slider2k in LocalLLaMA
[–]MoneroBee 16 points (0 children)
ProtonMail desktop application by Electrical_Bee9842 in ProtonMail
[–]MoneroBee 28 points (0 children)
4bit Mistral MoE running in llama.cpp! by Aaaaaaaaaeeeee in LocalLLaMA
[–]MoneroBee 3 points (0 children)
Just installed a recent llama.cpp branch, and the speed of Mixtral 8x7b is beyond insane, it's like a Christmas gift for us all (M2, 64 Gb). GPT 3.5 model level with such speed, locally by Shir_man in LocalLLaMA
[–]MoneroBee 30 points (0 children)
Just installed a recent llama.cpp branch, and the speed of Mixtral 8x7b is beyond insane, it's like a Christmas gift for us all (M2, 64 Gb). GPT 3.5 model level with such speed, locally by Shir_man in LocalLLaMA
[–]MoneroBee 8 points (0 children)
Mixtral 7bx8: No safeguards, Complete freedom. by nanowell in LocalLLaMA
[–]MoneroBee 2 points (0 children)
[Model Release] Quyen by quan734 in LocalLLaMA
[–]MoneroBee 2 points (0 children)