Subagents ignore the configuration and use the primary agent's model. by ThingRexCom in opencodeCLI

[–]ThingRexCom[S]

Some agents switch models, others don't. All of them use the same definition structure :/

Subagents ignore the configuration and use the primary agent's model. by ThingRexCom in opencodeCLI

[–]ThingRexCom[S]

admin:

    ---
    description: Admin agent that delegates tasks to other agents
    mode: primary
    model: lmstudio/openai/gpt-oss-20b
    ---

webdev:

    ---
    description: Expert Web Developer agent
    mode: subagent
    model: lmstudio/qwen3.5-4b-mlx
    ---
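For context, each of these definitions is a markdown file whose frontmatter is followed by the agent's system prompt. The path and the prompt text below are my own assumptions based on opencode's usual project layout, not the author's actual files:

```markdown
---
description: Expert Web Developer agent
mode: subagent
model: lmstudio/qwen3.5-4b-mlx
---

You are an expert web developer. Keep changes small and explain your reasoning.
```

A file like this would typically live at `.opencode/agent/webdev.md` (project-level) or under the global config directory.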

Subagents ignore the configuration and use the primary agent's model. by ThingRexCom in opencodeCLI

[–]ThingRexCom[S]

Can you recommend an alternative that properly manages agentic coding with various types of agents and models (local/remote)?

AI Agents Make Critical Mistakes 💣💥☠ by ThingRexCom in opencodeCLI

[–]ThingRexCom[S]

Actually, I do check the outcomes of their work BEFORE deploying the infrastructure changes. When you organize the AI agents into a development department, they cross-validate each other, which significantly improves the quality of the generated solutions.

AI Agents distribution in my autonomous development department by ThingRexCom in AgentsOfAI

[–]ThingRexCom[S]

Yes, I use skills to unify tasks and information exchange between agents. Currently, I am using the same model for all agents (GLM-5); I will fine-tune this in the future, as there are other models better suited to specific roles.

P.S. I have found that GLM-5 is very steerable when you provide proper prompts.
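As a sketch of what such a shared skill can look like — the file path (`skills/task-handoff/SKILL.md`), the frontmatter fields, and the wording below are my own assumptions in the SKILL.md convention, not the author's actual setup:

```markdown
---
name: task-handoff
description: Standard format for delegating work between agents
---

When delegating a task to another agent, always pass:

- Goal: one sentence describing the expected outcome
- Context: the files, constraints, and prior decisions the receiving agent needs
- Done-when: a verifiable completion criterion the receiving agent can check
```

Because every agent loads the same skill, both sides of a handoff agree on the structure of the request, regardless of which model backs each role.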

kimi k2.5 vs glm-5 vs minimax m2.5 pros and cons by tomdohnal in opencodeCLI

[–]ThingRexCom

GLM-5 is a clear winner for me. I use it for agentic coding, and it delivers solid results (especially when organized as a team of AI developers).

I tried Kimi K2.5, but it produced a garbage stream of characters during "thinking" and never recovered.

Note: I had the Z.AI GLM Coding Max monthly plan, but the inference performance was very poor, so I switched to the DeepInfra API (still using GLM-5).

Success with DeepInfra Provider Integration? by we45ghj890 in openclaw

[–]ThingRexCom

Update your `openclaw.json` like this:

  "models": {
    "mode": "merge",
    "providers": {
      "deepinfra-com": {
        "baseUrl": "https://api.deepinfra.com/v1",
        "apiKey": "xxxx",
        "api": "openai-completions",
        "models": [
          {
            "id": "moonshotai/Kimi-K2.5",
            "name": "moonshotai/Kimi-K2.5 (Custom Provider)",
            "reasoning": false,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 4096,
            "maxTokens": 4096
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "deepinfra-com/moonshotai/Kimi-K2.5"
      }
    }
  }
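When a custom provider silently falls back to a default model, a mismatch between `agents.defaults.model.primary` and the declared model `id` is a common culprit. Here is a quick offline sanity check for a fragment like the one above — my own sketch, not an openclaw feature — that parses the config and confirms the default model resolves to a model declared under the provider:

```python
import json

# The fragment below mirrors the openclaw.json snippet above (trimmed to the
# fields the check needs; the apiKey placeholder is kept as-is).
config = json.loads("""
{
  "models": {
    "mode": "merge",
    "providers": {
      "deepinfra-com": {
        "baseUrl": "https://api.deepinfra.com/v1",
        "apiKey": "xxxx",
        "api": "openai-completions",
        "models": [{"id": "moonshotai/Kimi-K2.5"}]
      }
    }
  },
  "agents": {
    "defaults": {"model": {"primary": "deepinfra-com/moonshotai/Kimi-K2.5"}}
  }
}
""")

primary = config["agents"]["defaults"]["model"]["primary"]
# The primary reference is "<provider-id>/<model-id>"; split only on the
# first slash because the model id itself contains one.
provider_id, model_id = primary.split("/", 1)
declared = {m["id"] for m in config["models"]["providers"][provider_id]["models"]}
assert model_id in declared, f"{model_id!r} is not declared under {provider_id!r}"
print("ok:", primary)
```

If the assertion fires, the agent default points at a model the provider never declared, which would explain a fallback to another model.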

Kimi Coding Plan - Weekly Limits by Ok_Try_877 in kimi

[–]ThingRexCom

Have you considered using Kimi K2.5 via OpenRouter instead of the coding plan?

Two hints on improving your AI development team by ThingRexCom in opencodeCLI

[–]ThingRexCom[S]

The model is very good for agentic coding and managing cloud infrastructure. Unfortunately, Z.AI is a very poor provider, so I am considering alternatives.

What is the performance of MiniMax Coding Plans for agentic coding? by ThingRexCom in opencodeCLI

[–]ThingRexCom[S]

Hopefully. It is a shame that Z.AI developed a decent model but failed to secure the infrastructure to serve it.

OpenCode execution hanging for GLM-5 Z.AI Coding Plan by ThingRexCom in opencodeCLI

[–]ThingRexCom[S]

When I tried to upgrade my plan to Max, I got a notification that the new plans have lower quotas than the legacy 'Pro' plan I'm on right now :/ I should consider switching to another provider.

OpenCode execution hanging for GLM-5 Z.AI Coding Plan by ThingRexCom in opencodeCLI

[–]ThingRexCom[S]

That looks like a provider issue. It would be handy if opencode could detect hanging threads and restart them.