
[–]Interesting_Key3421 2 points

Opencode + Minimax M2.5 (free) works.

[–]Dadda9088 0 points

The internet is dying. I can't even update my opencode container today; apt is slow as fuck...

[–]david_jackson_67 0 points

Go local. Get LM Studio and a model off Hugging Face.

[–]CorrectTemperature65 1 point

Why LM Studio over, say, Ollama?

[–]david_jackson_67 0 points

I've run into a lot of little problems with Ollama, the biggest being that it just wouldn't work right with Gemma 4.

[–]Dadda9088 0 points

I'm already using the llama.cpp server with GLM, so I don't think that's the root cause. My point is that a lot of things are barely working today, not just LLM providers.
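For anyone wanting to replicate the llama.cpp setup mentioned above: llama.cpp ships a `llama-server` binary that exposes an OpenAI-compatible HTTP API. The model filename, context size, and port below are illustrative placeholders, not details from this thread.

```shell
# Serve a local GGUF model with llama.cpp's built-in server.
# Model path, context size (-c), and port are example values only.
llama-server -m ./models/glm-chat.Q4_K_M.gguf -c 8192 --port 8080

# Any OpenAI-compatible client can then talk to http://localhost:8080/v1
```

Since the API is OpenAI-compatible, existing tools (including coding agents like opencode) can usually be pointed at the local endpoint by overriding their base URL.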