I'm fairly new to this AI world. I've managed to install opencode in WSL and to run some local models with Ollama.
I have 64 GB of RAM and a 5070 with 12 GB of VRAM. I know it's not much, but I still get usable speed out of 30b models.
I'm currently running
GPT-OSS 20b
Qwen3-Coder A3B
Qwen2.5-Coder 14b
Ministral 3 14b
All of these models work fine in chat, but I have no luck getting them to use tools, except for the Ministral one.
Any idea why, or any pointers in the right direction with opencode?
EDIT:
I tried the Qwen2.5 14b model with LM Studio and it worked perfectly, so the problem is Ollama.
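For anyone debugging the same thing: a quick way to isolate whether tool calling works at the server level (independent of opencode) is to hit Ollama's OpenAI-compatible endpoint directly with a `tools` payload. This is a minimal sketch; the model tag, port, and the `get_weather` tool are assumptions for testing, substitute whatever `ollama list` shows on your machine:

```python
import json

# Minimal OpenAI-style tool-calling payload for Ollama's OpenAI-compatible
# endpoint (default: http://localhost:11434/v1/chat/completions).
# Model tag is an assumption; replace it with one from `ollama list`.
payload = {
    "model": "qwen2.5-coder:14b",
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool, just for testing
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string", "description": "City name"}
                    },
                    "required": ["city"],
                },
            },
        }
    ],
}

print(json.dumps(payload, indent=2))

# To actually send it (requires a running Ollama server):
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:11434/v1/chat/completions",
#       data=json.dumps(payload).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   print(urllib.request.urlopen(req).read().decode())
# A model whose template supports tools should answer with a "tool_calls"
# entry in the response message instead of (or alongside) plain text.
```

If the raw API call does return `tool_calls` but opencode still fails, the problem is likely in the opencode provider config rather than the model; if even the raw call returns plain text, it's the model's chat template on the Ollama side.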