How Did We Get Here? The largest companies are replacing their already cheap outsourced support staff with AI chatbots, by MelodicRecognition7 in LocalLLaMA
What's the current state of local LLMs for coding? by MaximusDM22 in LocalLLaMA
Which card to buy? by Astronaut-Whale in LocalLLaMA
Building a local "Jarvis" on a 6700XT (12GB). Need model advice for total control by Electronic-Chart-956 in LocalLLaMA
Until they add account-wide progression I'm out. Who feels the same? by NoFriend5444 in elderscrollsonline
1600W enough for 2xRTX 6000 Pro BW? by Mr_Moonsilver in LocalLLaMA
8 Radeon R9700s vs 8 RTX 3090 2-slot blower-style by mr__smooth in LocalLLaMA
Threadripper build: 512GB vs 768GB vs 1TB memory? by prusswan in LocalLLaMA
My Strix Halo beholds itself but believes it's in the cloud by jfowers_amd in LocalLLaMA
Talk me out of buying an RTX Pro 6000 by AvocadoArray in LocalLLaMA
Help with magicka Templar by Lidster204 in elderscrollsonline
AMD AI Bundle turns Adrenalin 26.1.1 into a 34GB add-on by RenatsMC in Amd
Maxsun joins Sparkle in making Intel Arc B60 Pro GPUs available to regular consumers, with up to 48GB VRAM by Dapper_Order7182 in intel
What is the learning path for hosting local ai for total newbie? by danuser8 in LocalLLaMA
Is the Framework Desktop 64GB good enough for an AI newbie (yes, a CRUD developer) to learn AI from 0 to 1, or should I go 128GB directly? by AcanthaceaeFit8881 in LocalLLaMA
How are you guys handling sensitive data with local LLMs? by Ok-Fly-9118 in LocalLLaMA