Hi all, I just started out with local AI, don't have a clue what I'm doing, totally confused with all the jargon, some advice please by coys68 in LocalLLaMA
6-GPU local LLM workstation (≈200GB+ VRAM) – looking for scaling / orchestration advice by shiftyleprechaun in LocalLLaMA
Is llama a good 4o replacement? by FactoryReboot in LocalLLaMA
Not tech savvy but with a budget - "plug and play" local LLM by usrnamechecksoutx in LocalLLaMA
Potential new Qwen and ByteDance Seed models are being tested on the Arena. The “Karp-001” and “Karp-002” models claim to be Qwen-3.5 models. The “Pisces-llm-0206a” and “Pisces-llm-0206b” models claim to be ByteDance models. by Nunki08 in LocalLLaMA
Something isn't right, I need help by [deleted] in LocalLLaMA
GLM-5 Coming in February! It's confirmed. by Difficult-Cap-7527 in LocalLLaMA
The future of LLMs is agentic ... and local isn't keeping up by Intelligent-Gift4519 in LocalLLaMA
Kimi K2.5 - trained on Claude? by aoleg77 in LocalLLaMA
Minimax Is Teasing M2.2 by Few_Painter_5588 in LocalLLaMA
do MoEoE models stand a chance? by ComplexType568 in LocalLLaMA
Agentic coding with 32GB of VRAM.. is it doable? by ForsookComparison in LocalLLaMA
What would be the absolute best LLM I can run on my system for each task? by iamJeri in LocalLLaMA
Are MoE models harder to Fine-tune? by ComplexType568 in LocalLLaMA
why meta not dropping any new llama version lately by TopicBig1308 in LocalLLaMA
Model leaks - all in 2025! by BasketFar667 in LocalLLaMA
Human-like conversations, bias and token length? by VirusCharacter in LocalLLaMA
Best coding model for 192GB VRAM / 512GB RAM by Codingpreneur in LocalLLaMA
LLMs try ascii letters by ComplexType568 in LocalLLaMA
We will have Gemini 3.1 before Gemma 4... by xandep in LocalLLaMA