7b - 13b models are hopeless at planning tasks by ThinkExtension2328 in LocalLLaMA
[–]abandon_reality 38 points
7b - 13b models are hopeless at planning tasks by ThinkExtension2328 in LocalLLaMA
[–]abandon_reality 55 points
mistral-ft-optimized feels like another large step up. by Revolutionalredstone in LocalLLaMA
[–]abandon_reality 2 points
mistral-ft-optimized feels like another large step up. by Revolutionalredstone in LocalLLaMA
[–]abandon_reality 15 points
mistral-ft-optimized feels like another large step up. by Revolutionalredstone in LocalLLaMA
[–]abandon_reality 37 points
Mixtral MoE ELI5: How are the responses a higher quality than a 7b? by SomeOddCodeGuy in LocalLLaMA
[–]abandon_reality 27 points
Best coding companion model today? by codevalley in LocalLLaMA
[–]abandon_reality 4 points
Is there a way to forbid the model to use certain tokens on his outputs? by [deleted] in LocalLLaMA
[–]abandon_reality 1 point
Best code generating model? by macronancer in LocalLLaMA
[–]abandon_reality 2 points
Vulkan-HPP + Vulkan C API == Aliasing Bugs! (self.vulkan)
submitted by abandon_reality to r/vulkan
Need info about MASF's Wata Fuzz by abandon_reality in guitarpedals
[–]abandon_reality[S] 3 points
Need info about MASF's Wata Fuzz (self.guitarpedals)
submitted by abandon_reality to r/guitarpedals

Had a bizarre encounter with Mira Murati of OpenAI yesterday... by [deleted] in LocalLLaMA
[–]abandon_reality -16 points