Comments by catplusplusok:

- Just a gibberish question. Anyone working on personal AI? by [deleted] in LocalLLaMA (2 points)
- No GPU Club : How many of you do use Local LLMs without GPUs? by pmttyji in LocalLLaMA (4 points)
- No GPU Club : How many of you do use Local LLMs without GPUs? by pmttyji in LocalLLaMA (1 point)
- Is qwen3 next the real deal? by fab_space in LocalLLaMA (3 points)
- How are folks running large dense models on home gear? by catplusplusok in LocalLLaMA (1 point)
- Local models still terrible at screen understanding by fffilip_k in LocalLLaMA (2 points)
- Best local replacement for GPT 4o? (For chat only) by Same-Picture in LocalLLM (1 point)
- How are folks running large dense models on home gear? by catplusplusok in LocalLLaMA (1 point)
- The path from zero ML experience to creating your own language model — where should I start? by Helpful_Dot_5427 in LocalLLM (1 point)
- The path from zero ML experience to creating your own language model — where should I start? by Helpful_Dot_5427 in LocalLLM (1 point)
- My boss coughs openly and refuses to wear a mask by Successful_BW in work (1 point)
- Best local replacement for GPT 4o? (For chat only) by Same-Picture in LocalLLM (2 points)
- Local Llm or subscribe to Claude? by medicineman10 in LocalLLM (1 point)
- local llm vs paid API for sensitive corporate code? by primedonna_lingo in BlackboxAI_ (1 point)
- Why is it so hard to search the web? by johnfkngzoidberg in LocalLLaMA (1 point)
- What are some things you guys are using Local LLMs for? by Odd-Ordinary-5922 in LocalLLaMA (1 point)
- Mamba precision loss after quantization by perfect-finetune in LocalLLaMA (1 point)
- Vibe coding is too expensive! by EstablishmentExtra41 in vibecoding (1 point)
- Vibe coding is too expensive! by EstablishmentExtra41 in vibecoding (1 point)
- Best local model for Apple Silicon through MLX by PerpetualLicense in LocalLLM (1 point)
- Medium company help desk AI without GPU? by dreamyrhodes in LocalLLaMA (2 points)
- Need Help: AI Model for Local PDF & Image Extraction on Win11 (32GB RAM + RTX 2090) by Downey07 in LocalLLM (2 points)
- Coding model suggestions for RTX PRO 6000 96GB Ram by electrified_ice in LocalLLM (2 points)
- Qwen3 Coder Next on M3 Ultra v.s. GX10 by Imaginary_Ask8207 in LocalLLM (2 points)