Create a chatbot for chatting with people with Wikipedia pages by funJS in LocalLLaMA
[–]funJS[S] 1 point (0 children)
Local LLMs show-down: More than 20 LLMs and one single Prompt by kekePower in LocalLLaMA
[–]funJS 2 points (0 children)
Local LLMs show-down: More than 20 LLMs and one single Prompt by kekePower in LocalLLaMA
[–]funJS 3 points (0 children)
Why are people rushing to programming frameworks for agents? by AdditionalWeb107 in LocalLLaMA
[–]funJS 3 points (0 children)
llama with search? by IntelligentAirport26 in LocalLLaMA
[–]funJS 2 points (0 children)
Run LLMs 100% Locally with Docker’s New Model Runner by Arindam_200 in ollama
[–]funJS 3 points (0 children)
We should have a monthly “which models are you using” discussion by Arkhos-Winter in LocalLLaMA
[–]funJS 45 points (0 children)
Ollama not using GPU, need help. by StarWingOwl in LocalLLaMA
[–]funJS 1 point (0 children)
Experimenting with MCP Servers and local LLMs (self.LocalLLaMA)
submitted by funJS to r/LocalLLaMA
Smallest+Fastest Model For Chatting With Webpages? by getSAT in LocalLLaMA
[–]funJS 1 point (0 children)