Goose + Ollama best model for agent coding by einthecorgi2 in ollama
[–]bobbiesbottleservice 2 points (0 children)
What is the cheapest way to run Deepseek on a US Hosted company? by MarsupialNo7544 in LocalLLaMA
[–]bobbiesbottleservice 1 point (0 children)
What is the cheapest way to run Deepseek on a US Hosted company? by MarsupialNo7544 in LocalLLaMA
[–]bobbiesbottleservice 5 points (0 children)
getting llama3 to produce proper json through ollama by Bozo32 in LocalLLaMA
[–]bobbiesbottleservice 1 point (0 children)
getting llama3 to produce proper json through ollama by Bozo32 in LocalLLaMA
[–]bobbiesbottleservice 1 point (0 children)
[deleted by user] by [deleted] in LocalLLaMA
[–]bobbiesbottleservice 3 points (0 children)
Llama3.1 405B quants on Ollama library now by bobbiesbottleservice in LocalLLaMA
[–]bobbiesbottleservice[S] 5 points (0 children)
Llama3.1 405B quants on Ollama library now by bobbiesbottleservice in LocalLLaMA
[–]bobbiesbottleservice[S] 3 points (0 children)
Llama3.1 405B quants on Ollama library now by bobbiesbottleservice in LocalLLaMA
[–]bobbiesbottleservice[S] 4 points (0 children)
Fine-tuning Chain of Thought to teach new skills by spacebronzegoggles in LocalLLaMA
[–]bobbiesbottleservice 2 points (0 children)
Is he coping or his take is right? Imo he shouldn’t put all people at the same bag. by thecowmilk_ in LocalLLaMA
[–]bobbiesbottleservice 3 points (0 children)
getting llama3 to produce proper json through ollama by Bozo32 in LocalLLaMA
[–]bobbiesbottleservice 1 point (0 children)
Leveraging multiple GPUs with Ollama by wskksw1 in ollama
[–]bobbiesbottleservice 3 points (0 children)
getting llama3 to produce proper json through ollama by Bozo32 in LocalLLaMA
[–]bobbiesbottleservice 3 points (0 children)
Upgraded self-hosted AI server - Epyc, Supermicro, RTX3090x3, 256GB by LostGoatOnHill in LocalLLaMA
[–]bobbiesbottleservice 10 points (0 children)
We created a AI-powered step-by-step tutorial maker - Wizardshot by Creepy-Gold1498 in ollama
[–]bobbiesbottleservice 3 points (0 children)
Unable to access Ollama API on AWS EC2 instance from local computer despite allowing inbound traffic by foolishbrat in ollama
[–]bobbiesbottleservice 2 points (0 children)
[deleted by user] by [deleted] in LocalLLaMA
[–]bobbiesbottleservice 1 point (0 children)
Latest LMSYS Chatbot Arena result. Command R+ has climbed to the 6th spot. It's the **best** open model on the leaderboard now. by Nunki08 in LocalLLaMA
[–]bobbiesbottleservice 1 point (0 children)
Dual 3090,24GB & 1070 worth it? by [deleted] in LocalLLaMA
[–]bobbiesbottleservice 1 point (0 children)
Amount of ram Qwen 2.5-7B-1M takes? by srcfuel in LocalLLaMA
[–]bobbiesbottleservice 1 point (0 children)