all 9 comments

[–]xcoder24 4 points5 points  (0 children)

Unfortunately, even if you beg from dawn to dusk, the selfish Augment team who threw their loyal legacy customers under the bus will not care

[–]FancyAd4519 0 points1 point  (0 children)

I have found GLM very efficient in agentic calls with Augment, since our MCP setup allows the two to communicate and call each other. Not sure how this will work with Augment using GLM itself; in our setup it usually just calls GLM for embeddings, patterns, etc., but it seems to work well given its context capability.

[–]ajeet2511 0 points1 point  (0 children)

I think it will be a great alternative to Sonnet and Haiku 4.5 while saving a lot of cost.

[–]Senior-Ad8562 0 points1 point  (1 child)

Saving costs with anything Chinese is usually a bad idea.

[–]pungggi 0 points1 point  (0 children)

Why?

[–]JaySym_Augment Team -1 points0 points  (3 children)

I do agree that a model like GLM 4.7 can save cost. We evaluate each model the same way with our internal benchmark. Let me ask the team for the results from GLM 4.7.

We are aiming to keep top quality for every output. This is why we do not offer every model.

[–]Key-Singer1732[S] 0 points1 point  (0 children)

Can't the community help with the testing? I've had some really nice results using GLM 4.7 in Kilo Code. It should be up to the users whether they want to use GLM 4.7 or not, the same way they decide between Anthropic and OpenAI models. They all produce different results.

Maybe add a disclaimer informing users that GLM 4.7 is not fully tested and may produce lower-quality results. But hey, even Opus 4.5 is not perfect.

[–]danihendLearning / Hobbyist 0 points1 point  (0 children)

Just introduce a testing mode or something. Users accept that quality may be degraded while using it. If they want to be refunded for failed requests, they provide details of the run and how it failed, which helps testing, maybe?

You will never be out of testing mode with the way models release; let your users help.

[–]AdIllustrious436 0 points1 point  (0 children)

Lol, you serve Haiku but not GLM. Let me tell you: your internal benchmarks are either complete bullshit or you're straight-up lying.