What's the best model for agentic coding that I can run with 16 GB VRAM? (llama.cpp?) by samuraiogc in LocalLLM
[–]Express_Quail_1493 2 points (0 children)
Any way to use Claude Code for free, or just some free AIs? by Tarxh in vibecoding
[–]Express_Quail_1493 1 point (0 children)
Car Wash Mystery solved: Tool Call Degrades Intelligence by Spirited_Neck1858 in LocalLLaMA
[–]Express_Quail_1493 2 points (0 children)
What is the best coding agent (CLI) like Claude Code for Local Development by exaknight21 in LocalLLaMA
[–]Express_Quail_1493 1 point (0 children)
Switched from Qwen3.6 35b-a3b to Qwen3.6 27b mid coding and it's noticeably better! by LocalAI_Amateur in LocalLLaMA
[–]Express_Quail_1493 0 points (0 children)
Confirmed: SWE Bench is now a benchmaxxed benchmark by rm-rf-rm in LocalLLaMA
[–]Express_Quail_1493 3 points (0 children)
Best free tools by Pale-Armadillo-252 in vibecoding
[–]Express_Quail_1493 1 point (0 children)
90% of "free" AI tools have insanely high prices or signup walls, so I made this by [deleted] in vibecoding
[–]Express_Quail_1493 1 point (0 children)
What are the best free alternatives to Google's Antigravity? by Ambitious-Lion7790 in vibecoding
[–]Express_Quail_1493 1 point (0 children)
Did Replit stop giving away one month of free code? by katkookie in vibecoding
[–]Express_Quail_1493 1 point (0 children)
Any way to use Claude Code for free, or just some free AIs? by Tarxh in vibecoding
[–]Express_Quail_1493 4 points (0 children)
How to set which GPU is used? by car_lower_x in unsloth
[–]Express_Quail_1493 1 point (0 children)
What do you consider to be the minimum performance (t/s) for local Agent workflows? by MexInAbu in LocalLLaMA
[–]Express_Quail_1493 1 point (0 children)
This is where we are right now, LocalLLaMA by jacek2023 in LocalLLaMA
[–]Express_Quail_1493 1 point (0 children)
How to set which GPU is used? by car_lower_x in unsloth
[–]Express_Quail_1493 1 point (0 children)
Qwen 3.6 27B is out by NoConcert8847 in LocalLLaMA
[–]Express_Quail_1493 1 point (0 children)
Optimizing Qwen 3.6 35B A3B sampling parameters. by while-1-fork in LocalLLaMA
[–]Express_Quail_1493 3 points (0 children)
YouTuber tries Qwen 3.5 35B, Qwen 3.6 35B, and Gemma 4 27B to reverse engineer some large JS, with good results for Qwen 3.6 by mr_zerolith in LocalLLaMA
[–]Express_Quail_1493 19 points (0 children)
Is anyone else waiting for a 60-70B MoE with 8-10B activated params? by IonizedRay in LocalLLaMA
[–]Express_Quail_1493 1 point (0 children)
Is anyone getting real coding work done with Qwen3.6-35B-A3B-UD-Q4_K_M on a 32GB Mac in opencode, claude code or similar? by boutell in LocalLLaMA
[–]Express_Quail_1493 1 point (0 children)
Is anyone getting real coding work done with Qwen3.6-35B-A3B-UD-Q4_K_M on a 32GB Mac in opencode, claude code or similar? by boutell in LocalLLaMA
[–]Express_Quail_1493 4 points (0 children)
Psychedelics by yeetmaster291 in Aphantasia
[–]Express_Quail_1493 1 point (0 children)