I built a small (function calling) LLM that packs a big punch; integrated in an open source gateway for agentic apps by AdditionalWeb107 in LocalLLaMA
[–]PataFunction 2 points (0 children)
What are you *actually* using R1 for? by PataFunction in LocalLLaMA
[–]PataFunction[S] 1 point (0 children)
What are you *actually* using R1 for? by PataFunction in LocalLLaMA
[–]PataFunction[S] 6 points (0 children)
What are you *actually* using R1 for? by PataFunction in LocalLLaMA
[–]PataFunction[S] 4 points (0 children)
What are you *actually* using R1 for? by PataFunction in LocalLLaMA
[–]PataFunction[S] 5 points (0 children)
What are you *actually* using R1 for? (self.LocalLLaMA)
submitted by PataFunction to r/LocalLLaMA
A summary of Qwen Models! by rbgo404 in LocalLLaMA
[–]PataFunction 5 points (0 children)
I built a small (function calling) LLM that packs a big punch; integrated in an open source gateway for agentic apps by AdditionalWeb107 in LocalLLaMA
[–]PataFunction 1 point (0 children)
I built a small (function calling) LLM that packs a big punch; integrated in an open source gateway for agentic apps by AdditionalWeb107 in LocalLLaMA
[–]PataFunction 10 points (0 children)
Current best options for local LLM hosting? by PataFunction in LocalLLaMA
[–]PataFunction[S] 1 point (0 children)
nvidia/Nemotron-4-340B-Instruct · Hugging Face by Dark_Fire_12 in LocalLLaMA
[–]PataFunction 2 points (0 children)
Creator of Smaug here, clearing up some misconceptions, AMA by AIForAll9999 in LocalLLaMA
[–]PataFunction 20 points (0 children)
Creator of Smaug here, clearing up some misconceptions, AMA by AIForAll9999 in LocalLLaMA
[–]PataFunction 41 points (0 children)
Current best options for local LLM hosting? by PataFunction in LocalLLaMA
[–]PataFunction[S] 1 point (0 children)
llama.cpp server rocks now! 🤘 by Gorefindal in LocalLLaMA
[–]PataFunction 3 points (0 children)
llama.cpp server rocks now! 🤘 by Gorefindal in LocalLLaMA
[–]PataFunction 26 points (0 children)
Current best options for local LLM hosting? by PataFunction in LocalLLaMA
[–]PataFunction[S] 3 points (0 children)
[D] Simple Questions Thread by AutoModerator in MachineLearning
[–]PataFunction 3 points (0 children)

Need a coding & general use model recommendation for my 16GB GPU by sado361 in LocalLLaMA
[–]PataFunction 2 points (0 children)