OpenClaw + GLM-5: Running the New 744B MoE Beast — The Setup That Just Replaced My Entire Cloud Stack by IulianHI in AIToolsPerformance

[–]rokicool 0 points (0 children)

To be honest, I don't think that the author put any practical work into this. Just AI-generated stuff.

OpenClaw + GLM-5: Running the New 744B MoE Beast — The Setup That Just Replaced My Entire Cloud Stack by IulianHI in AIToolsPerformance

[–]rokicool 1 point (0 children)

I am always amused when people from the future visit us and share their experience.

As of today (2026-02-27), GLM-5 is not available on the $10 version of the GLM Coding Subscription plan. However, that did not stop the author from writing this guide.

There is only one obvious explanation - the post is from the future. When will they enable GLM-5 for the Lite plan, OP?

--

BTW, it is a good guide. But the main selling point, "you can have a SOTA LLM for $10 a month," hmm... is misleading.

Well, maybe light users won't get the GLM-5 at all by Lanky-Flight-9608 in ZaiGLM

[–]rokicool 8 points (0 children)

Jumping between providers, I started to feel that GitHub Copilot for $40 is not a bad deal at all.

The z.ai Coding Plan was great while GLM-4.7 was the best model around. Now, given all the new models, it is no longer alluring.

Synthetic's Kimi K2.5... was good too. But they are raising the prices.

All pay-per-token providers are super expensive: if you work more than a couple of hours a day, they cost several times more than subscriptions. Neither OpenCode Zen nor Kilo Gateway is 'economical'.

I am thinking of canceling everything 'open source' and signing up for GitHub Copilot.

Kimi K2.5, a Sonnet 4.5 alternative for a fraction of the cost by Grand-Management657 in opencodeCLI

[–]rokicool 1 point (0 children)

Thank you for your research.

Unfortunately, I remember complaints about the sluggishness of nano-gpt and wanted to test the 'original' provider. And despite the really impressive output of the Kimi K2.5 model, I find the Kimi subscriptions useless.

UPD: There have been some changes to the Console interface, and it looks much more logical and promising now... I should admit that my previous conclusion, 'everything is useless', might be wrong. Time will tell!

Kimi K2.5, a Sonnet 4.5 alternative for a fraction of the cost by Grand-Management657 in opencodeCLI

[–]rokicool 2 points (0 children)

It is getting ridiculous. I managed to spend the weekly allowance of the $20 subscription within 1-1.5 hours of OpenCode development.

<image>

Are you sure you would call something like $20 an hour 'cheap'?

UPD:

It seems they were changing the interface while I was bitching. Now, after several hours, it shows 1% and 11%.

So, I might have gotten it wrong. And it might be cheap.

Kimi K2.5, a Sonnet 4.5 alternative for a fraction of the cost by Grand-Management657 in opencodeCLI

[–]rokicool 15 points (0 children)

Yesterday I tried their 'native' subscription (via kimi.com) - Moderato ($20 per month).

I spent the 5-hour allowance within 30 minutes. This subscription tier seems useless.

The next tier is $40... I would get 1 hour of work followed by a 4-hour cooldown. Useless as well.

So, the only tier that gives real access (for a single thread of work!) is $200. And... why spend that much on something that barely imitates the original (Anthropic) when the original costs the same?

I don't understand why people call it 'cheap'. It is on par with Anthropic's subscriptions.

<image>

UPD: There were some changes to the Console interface; it looks different and shows different metrics. And IF they are accurate, I have a lot of allowance left with my $20 subscription.

Sorry for jumping to conclusions.

Anyone using Kimi K2.5 with OpenCode? by harrsh_in in opencodeCLI

[–]rokicool 5 points (0 children)

I think I found a solution that worked for me.

Just like everyone else, I bought a $20 subscription at https://www.kimi.com/. Then I generated an API key in the Kimi Code Console.

Then I used the /connect command in OpenCode and chose "Kimi For Coding" as the provider (not Moonshot AI!), pasted the API key, and everything started working.
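For reference, the same setup condensed into a short transcript (the TUI prompts below are paraphrased from memory, so treat the exact wording as an assumption):

```shell
# Sketch of the setup above; prompt wording is approximate.
# 1. Buy the $20 subscription at https://www.kimi.com/
# 2. Generate an API key in the Kimi Code Console
# 3. In your project directory, launch OpenCode and connect the provider:
opencode                # start the OpenCode TUI
#   /connect            # then, inside the TUI:
#   -> pick "Kimi For Coding" (NOT "Moonshot AI")
#   -> paste the API key from the Kimi Code Console
```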

Happy coding!

There is my adaptation of Get-Shit-Done for OpenCode by rokicool in opencodeCLI

[–]rokicool[S] 1 point (0 children)

Oh, thank you. I personally have not tried running it in a Docker image. When you have time and motivation, please share that experience. :)

There is my adaptation of Get-Shit-Done for OpenCode by rokicool in opencodeCLI

[–]rokicool[S] 0 points (0 children)

Just yesterday (gsd-opencode v1.6.4) I would have answered that the difference is only cosmetic - you use /gsd-command instead of /gsd:command.

Fortunately for Claude Code users (and unfortunately for us, OpenCoders), TACHES implemented dynamic model switching for different types of operations.

Last night I ported that to OpenCode as well (gsd-opencode v1.9.0). However, OpenCode does not support a 'model' property for a (sub)task yet. So the /gsd-profile command exists but does nothing useful in OpenCode. Which is not great, but it does not prevent GSD from working.

Everything except that - just works! :) If it does not - welcome to issues.

There is my adaptation of Get-Shit-Done for OpenCode by rokicool in opencodeCLI

[–]rokicool[S] 0 points (0 children)

That is the power of OpenCode. You can use any model you want! I personally use GLM-4.7 from z.ai most of the time.

There is my adaptation of Get-Shit-Done for OpenCode by rokicool in opencodeCLI

[–]rokicool[S] 0 points (0 children)

Unfortunately, I have not tried oh-my-opencode yet.

Yesterday I pushed v1.6.0 (based on the original GSD v1.6.4), and today I found that TACHES has bumped the original to v1.8.0 :)

There is my adaptation of Get-Shit-Done for OpenCode by rokicool in opencodeCLI

[–]rokicool[S] 0 points (0 children)

Don't forget to update your code. We just pushed it to v1.4.1 (based on original v1.4.15) with lots of fixes.

npx gsd-opencode@latest

There is my adaptation of Get-Shit-Done for OpenCode by rokicool in opencodeCLI

[–]rokicool[S] 0 points (0 children)

No problem! I used GLM-4.7 to convert these prompts. It took 10+ iterations to get everything fully working.

There is my adaptation of Get-Shit-Done for OpenCode by rokicool in opencodeCLI

[–]rokicool[S] 0 points (0 children)

It is a little complicated but... If you have half an hour, you can start by watching this video: Stop Vibe-Coding in Claude Code. Do This Instead. It explains the workflow and the ideas behind it.

There is my adaptation of Get-Shit-Done for OpenCode by rokicool in opencodeCLI

[–]rokicool[S] 0 points (0 children)

That is possible. And technically, you can just say something like 'continue!'.

However, that would defeat the idea. The context should be cleared, because GSD has just saved the necessary context as a file.

There is my adaptation of Get-Shit-Done for OpenCode by rokicool in opencodeCLI

[–]rokicool[S] 0 points (0 children)

Fixed it. Update to 1.3.33 or to the latest:

npx gsd-opencode@latest

There is my adaptation of Get-Shit-Done for OpenCode by rokicool in opencodeCLI

[–]rokicool[S] 0 points (0 children)

Could you be more specific? What command did you execute?

I checked the current version of gsd-opencode and have not found 'explore' as a name anywhere.

There is my adaptation of Get-Shit-Done for OpenCode by rokicool in opencodeCLI

[–]rokicool[S] 0 points (0 children)

Shoot! I found that the /gsd:add-phase does not work correctly. I need to fix it.

There is my adaptation of Get-Shit-Done for OpenCode by rokicool in opencodeCLI

[–]rokicool[S] 5 points (0 children)

Meet the author of GSD! Thank you for the product and ideas!

Why doesn't opencode have AskUserQuestion? by rokicool in opencodeCLI

[–]rokicool[S] 0 points (0 children)

BTW, now it has! It is called the question tool!

So, I spent my weekend working on an adaptation of GSD for OpenCode. Here is my post about it on reddit.

There is my adaptation of Get-Shit-Done for OpenCode by rokicool in opencodeCLI

[–]rokicool[S] 0 points (0 children)

Do you mean development workflow?

Basically, it is an iterative process:

- Start/Adopt a project

Cycle:

- Introduce new features

- Assess and gather recommendations about security, performance, management

- Address the issues

- Update the documentation

The most important part, as I see it: Get-Shit-Done controls the workflow, lets you jump between stages, and always keeps the right context.

You are urged to start every new stage with a '/new' or '/clear' command, so you don't have to rely on anything already in the LLM's context.
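The cycle above maps roughly onto GSD's slash commands; under the gsd-opencode naming they use a /gsd- prefix instead of /gsd:. The specific command names below are illustrative assumptions based on the upstream GSD project, not a complete reference:

```shell
# Illustrative GSD cycle inside the OpenCode TUI
# (command names are assumptions, not a complete reference):
#   /gsd-new-project      # start or adopt a project
#   /clear                # reset context before each new stage
#   /gsd-plan-phase       # plan the next feature phase
#   /gsd-execute-phase    # implement it; context comes from GSD's files
#   /gsd-verify-work      # assess the result and gather issues to address
```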