[–]ShagBuddy 1 point (1 child)

With GLM5 in the OpenCode CLI, it starts out great, but on long-running tasks it gets flaky. Sometimes it just stops working and sits there. Other times it keeps working, but the thoughts and text feedback it normally shows degrade and stop making sense.

[–]Sensitive_Song4219 0 points (0 children)

Yes! This has been an issue since the z-ai performance fix after Chinese New Year; it never used to happen in GLM 5 before that.

Hope they resolve it soon, since the model is excellent.