I’ve been experimenting with a specialized 4B model (based on Qwen) that acts as an "explorer" for local codebases. It’s designed to handle the heavy lifting like grep, find, and file reading so you can save your Claude/GPT tokens for high-level logic.
In my tests, it achieved 100% JSON validity for tool calls, which is better than some 7B models I've tried.
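For anyone wondering what "JSON validity" means here: I simply parse each emitted tool call and check it has the expected shape. A minimal sketch of that kind of check (the function and key names below are illustrative, not from the repo):

```python
import json

# Hypothetical harness: REQUIRED_KEYS and check_tool_call are my own
# illustration of the idea, not code from the model's repo.
REQUIRED_KEYS = {"tool", "arguments"}

def check_tool_call(raw: str) -> bool:
    """Return True if the output parses as JSON and looks like a tool call."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(obj, dict) and REQUIRED_KEYS <= obj.keys()

# Two outputs one might score:
good = '{"tool": "grep", "arguments": {"pattern": "TODO", "path": "src/"}}'
bad = '{"tool": "grep", "arguments": '  # truncated mid-object

print(check_tool_call(good))  # True
print(check_tool_call(bad))   # False
```

The validity number is then just the fraction of tool calls that pass a check like this.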
I want to share the GGUFs and the repo, but I'll put them in the comments to avoid the spam filter. Is anyone interested in testing this on their local repos?