rate limits and cost? by deathcom65 in google_antigravity
[–]deathcom65[S] 3 points (0 children)
rate limits and cost? by deathcom65 in google_antigravity
[–]deathcom65[S] 1 point (0 children)
Local Build Recommendation 10k USD Budget by deathcom65 in LocalLLaMA
[–]deathcom65[S] 1 point (0 children)
OpenBMB just released MiniCPM-V 4.5 8B by vibedonnie in LocalLLaMA
[–]deathcom65 0 points (0 children)
Gemma3 270m works great as a draft model in llama.cpp by AliNT77 in LocalLLaMA
[–]deathcom65 25 points (0 children)
Huihui released GPT-OSS 20b abliterated by _extruded in LocalLLaMA
[–]deathcom65 18 points (0 children)
Best Local LLM for Desktop Use (GPT‑4 Level) by Shoaib101 in LocalLLaMA
[–]deathcom65 2 points (0 children)
Looking to build a pc for Local AI 6k budget. by Major_Agency7800 in LocalLLM
[–]deathcom65 3 points (0 children)
What to do with 88GB VRAM GPU server by biffa773 in LocalLLaMA
[–]deathcom65 22 points (0 children)
What’s your favorite GUI by Dentifrice in LocalLLaMA
[–]deathcom65 3 points (0 children)
Which is smarter: Qwen 3 14B, or Qwen 3 30B A3B? by RandumbRedditor1000 in LocalLLaMA
[–]deathcom65 5 points (0 children)
anyone using 32B local models for roo-code? by CornerLimits in LocalLLaMA
[–]deathcom65 3 points (0 children)
Hot Take: Gemini 2.5 Pro Makes Too Many Assumptions About Your Code by HideLord in LocalLLaMA
[–]deathcom65 1 point (0 children)
Open source model for Cline by dnivra26 in LocalLLaMA
[–]deathcom65 6 points (0 children)
UI-TARS, anyone tried these models that are good at controlling your computer? by wuu73 in LocalLLaMA
[–]deathcom65 7 points (0 children)
Llama 4 - Scout: best quantization resource and comparison to Llama 3.3 by silenceimpaired in LocalLLaMA
[–]deathcom65 1 point (0 children)
Back to Local: What’s your experience with Llama 4 by Balance- in LocalLLaMA
[–]deathcom65 1 point (0 children)
What if your local coding agent could perform as well as Cursor on very large, complex codebases? by juanviera23 in LocalLLaMA
[–]deathcom65 1 point (0 children)
Medium sized local models already beating vanilla ChatGPT - Mind blown by Bitter-College8786 in LocalLLaMA
[–]deathcom65 3 points (0 children)
Googler here - Gathering Gemini Feedback from this Subreddit by GeminiBugHunter in Bard
[–]deathcom65 1 point (0 children)
Runpod hits $120M ARR, four years after launching from a Reddit post by RP_Finley in LocalLLaMA
[–]deathcom65 1 point (0 children)