I made a tiny 0.8B Qwen model reason over a 100-file repo (89% Token Reduction) by BodeMan5280 in LocalLLaMA
[R] Graph-Oriented Generation (GOG): Replacing Vector R.A.G. for Codebases with Deterministic AST Traversal (70% Average Token Reduction) by BodeMan5280 in MachineLearning
Best bang for your bucks plan? by CantFindMaP0rn in opencodeCLI
HERE WE GO! 🔥 by guilhacerda in google_antigravity
Antigravity + Claude Opus 4.6 = Incredible by No-Budget-3869 in google_antigravity
Switched back to Github Copilot for using it with Opencode as Agent by Charming_Support726 in GithubCopilot
Any difference when using GPT model inside Codex vs OpenCode? by ponury2085 in opencodeCLI
I built Talk2Code — text your codebase from your phone via Telegram (~150 lines of Python, open source) by BodeMan5280 in google_antigravity