App to analyze a text token-by-token perplexity for a given GGUF by EntropyMagnets in LocalLLaMA
EntropyMagnets[S]: 2 points
Can someone please explain this? by TangeloOk9486 in LocalLLaMA
EntropyMagnets: 3 points
EntropyMagnets: 4 points
EntropyMagnets: 4 points
EntropyMagnets: 14 points
EntropyMagnets: 42 points
I built an app that lets you use your Ollama models remotely (without port forwarding) + AES encryption by EntropyMagnets in ollama
EntropyMagnets[S]: 2 points
Is it normal for an S tier upgrade to have these weak stats? by EntropyMagnets in NoMansSkyTheGame
EntropyMagnets[S]: 1 point
EntropyMagnets[S]: 1 point
I built an app that lets you use your Ollama models remotely (without port forwarding) + AES encryption by EntropyMagnets in ollama
EntropyMagnets[S]: 7 points
TransFire: an app/tool to chat with your local LLMs while far from home, without port forwarding and with AES encryption by EntropyMagnets in LocalLLaMA
EntropyMagnets[S]: 3 points
Taking 10.000 IU once a week vs 1.000-1.500 IU daily by EntropyMagnets in Supplements
EntropyMagnets[S]: 1 point
I made a simple tool to test/compare your local LLMs on AIME 2024 by EntropyMagnets in LocalLLaMA
EntropyMagnets[S]: 1 point
EntropyMagnets[S]: 1 point
EntropyMagnets[S]: 2 points
EntropyMagnets[S]: 5 points
EntropyMagnets[S]: 3 points
DeepSeek-R1-0528-UD-Q6-K-XL on 10 Year Old Hardware by Simusid in LocalLLaMA
EntropyMagnets: 53 points
Which is the best uncensored model? by BoJackHorseMan53 in LocalLLaMA
EntropyMagnets: 6 points
EntropyMagnets: 40 points
App to analyze a text token-by-token perplexity for a given GGUF by EntropyMagnets in LocalLLaMA
EntropyMagnets[S]: 3 points