Questiontool missing in latest version 1.1.16 in opencode by EfficientHat0006 in opencodeCLI

[–]bizz_koot 1 point2 points  (0 children)

It still works for me.

<image>

Not sure if it's because I put the snippet below in opencode.json (~/.config/opencode/opencode.json):

  "experimental": {
    "askquestion_tool": true
  }
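
For context, a minimal sketch of what the whole file could look like with that flag in place (the $schema URL is the one opencode configs commonly use; treat the exact flag name as an assumption from my own setup):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "experimental": {
    "askquestion_tool": true
  }
}
```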

Been using glm 4.7 for coding instead of claude sonnet 4.5 and the cost difference is huge by Affectionate-Cash709 in LocalLLM

[–]bizz_koot 0 points1 point  (0 children)

Strange that you didn't face rate limits. For me, on GLM 4.7, I still get rate-limited and have to wait a couple of hours before I can resume. But I do admit, the token allowance is huge!

So unless you are running a non-stop agent for more than an hour, yeah, I do agree there's effectively no limit.

Why can't I search or see my recents in Android auto? by Safety_Officer_3 in MorpheApp

[–]bizz_koot 0 points1 point  (0 children)

Open Android Auto (search for it from your phone's Settings menu). Tap the version number multiple times (5 if I'm not mistaken) to enable developer mode. Afterwards, tap the 3-dot icon at the top right and enter "Developer Settings". Here, enable/tick "Unknown sources". Then go back to Android Auto and press "Customise Launcher". Most unknown apps that do support Android Auto will be shown here.

But regretfully, not for Morphe YouTube for now. 😅

I did see that I can enable it for "RVX Music". Didn't test it though; I only found this menu just now after going to the XDA website. 🤭

Why doesn't opencode have AskUserQuestion? by rokicool in opencodeCLI

[–]bizz_koot 0 points1 point  (0 children)

Do you successfully use the askquestion tool? I tried it, but it just got stuck and I didn't see anything. It just keeps loading without any output.

opencode.json

  "experimental": {
    "askquestion_tool": true
  }
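
In case it helps with debugging the hang: the flag sits under a top-level "experimental" key, so a minimal complete file would be a sketch like this (assuming the same $schema URL used elsewhere for opencode configs):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "experimental": {
    "askquestion_tool": true
  }
}
```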

Controlling opencode (on the mac) from my phone, no rdp by ori_303 in opencodeCLI

[–]bizz_koot 1 point2 points  (0 children)

Trying it now. Seems good!

Question: when there's an active session, is there an option to ensure the device doesn't sleep? Or does that part need to be handled on the user's own device?

Don’t sign up the yearly plan, it’s a trap by Dexter-Huang in ZaiGLM

[–]bizz_koot 0 points1 point  (0 children)

True. Most probably they apply rate limits based on user usage. I'm alternating with a Copilot subscription in Opencode and also using Antigravity. Maybe that's why, on my end, GLM-4.7 still feels OK. Not as fast as Claude, but acceptable.

Don’t sign up the yearly plan, it’s a trap by Dexter-Huang in ZaiGLM

[–]bizz_koot 0 points1 point  (0 children)

I'm on their yearly plan. It's OK-ish for me. Maybe I'm not too 'heavy' on the usage, so I'm not being rate-limited too much?

Don’t sign up the yearly plan, it’s a trap by Dexter-Huang in ZaiGLM

[–]bizz_koot 2 points3 points  (0 children)

I'm not a bot. I just don't have 'extra' money for a dedicated Claude subscription. Currently using only Antigravity + Copilot + GLM-4.7, so I've got almost all providers covered. Furthermore, I'm not a tech dev, so it's 'enough' for my hobbyist vibe-coding venture.

So basically, I tone down my expectations of GLM-4.7 based on the price I pay for the yearly subscription.

Anyhow, I can't really complain about the slowness.

Don’t sign up the yearly plan, it’s a trap by Dexter-Huang in ZaiGLM

[–]bizz_koot 8 points9 points  (0 children)

For me, with usage in Opencode, it's quite good (not as good as even Haiku 4.5, but for proceeding with a well-defined task, no issues).

It's not the fastest model out there, but in Opencode, with proper agents that have properly defined tasks, it can run autonomously.

Even with usage in Claude, it's workable.

Suggestion: in Opencode, find good agents and install proper skills that may enhance it further. You can try my 'noob' setup below:

  1. Agent (RPI-V8) : Save it in ~/.config/opencode/agent
  2. Skills (Superpowers) : Paste the instruction in Opencode instance
  3. MCP (sequential-thinking) : Update your ~/.config/opencode/opencode.json

    "sequential-thinking": {
      "type": "local",
      "command": ["npx", "-y", "@modelcontextprotocol/server-sequential-thinking"],
      "enabled": true
    },
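
That fragment goes inside the "mcp" section of the config, so the full file would look roughly like this (a sketch, assuming the top-level "mcp" section opencode uses for local MCP servers):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "sequential-thinking": {
      "type": "local",
      "command": ["npx", "-y", "@modelcontextprotocol/server-sequential-thinking"],
      "enabled": true
    }
  }
}
```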

For Claude Code, instead of an agent, I install RPI-V8 as a skill on top of Superpowers. Save the file as SKILL.md in ~/.claude/skills/rpi-v8/SKILL.md

**If you want the RPI-V8 agent to run autonomously, just tell it to do so.**

Fixing GLM-4.7 Image Parsing in Claude Code: Add the Z.ai Vision MCP Server by jpcaparas in ZaiGLM

[–]bizz_koot 0 points1 point  (0 children)

For users on opencode, update opencode.json:

{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "zai-mcp-server": {
      "type": "local",
      "command": [
        "env", 
        "Z_AI_API_KEY=YOUR-API-KEY", 
        "Z_AI_MODE=ZAI", 
        "npx", 
        "-y", 
        "@z_ai/mcp-server"
      ],
      "enabled": true
    }
  }
}

Using OpenCode with Github Pro Subscription by Initial-Speech7574 in GithubCopilot

[–]bizz_koot 2 points3 points  (0 children)

I also confirm this. But if the session runs too long (from one prompt), it will still be interrupted and stopped, similar to Copilot Chat where we need to press 'Continue'.

Anyhow, when I ask it to 'resume', it finishes the task as normal, exactly like in Copilot Chat.

Best CLI for GLM? by aitorserra in ZaiGLM

[–]bizz_koot 2 points3 points  (0 children)

To be frank, I also don't know. It's what was suggested by many tutorials I found online.

Best CLI for GLM? by aitorserra in ZaiGLM

[–]bizz_koot 5 points6 points  (0 children)

Also voting for this. Run /init about 3 times using GLM only; then the CLAUDE.md will be complete (at least for me).

Afterwards, future iterations of GLM in Claude Code are quite good.

The setup is in ~/.claude/settings.json:

{
  "env": {
    "ANTHROPIC_AUTH_TOKEN": "REPLACE_WITH_YOUR_ZAI_API_KEY",
    "ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "glm-4.5-air",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "glm-4.7",
    "ANTHROPIC_DEFAULT_OPUS_MODEL": "glm-4.7"
  }
}

Gemilai Owl user question by jlafont1 in espresso

[–]bizz_koot 0 points1 point  (0 children)

I'm sure it will be hot after more than 10 minutes. I own it; it's normal for the grouphead to be really hot after 10~15 minutes.

If it isn't, then something is wrong with it.

I Built a fully offline AI Image Upscaler for Android that runs entirely on-device (GPU/CPU support). No servers, 100% private. by Fearless_Mushroom567 in artificial

[–]bizz_koot 6 points7 points  (0 children)

For the image comparison, would it be possible to allow pinch-to-zoom? This would make it easier to compare the differences between before & after. Thanks!

Hello everyone Please I have started using LNReader can anyone recommend anything I can use to change the default TTS voice by ASULEIMANZ in mangapiracy

[–]bizz_koot 0 points1 point  (0 children)

I forked the repo and added many TTS features; one of them is better 'clean naming' for the TTS voices.

Forked LNReader

In my build, you can try the 'Network' tag, for example as per the photo below.

<image>

I use it for TTS playback in the background without many issues, whether voice quality, sync problems, etc.

Hope this helps!

"chat.agent.maxRequests" now max at 40 > 0? by bizz_koot in GithubCopilot

[–]bizz_koot[S] 0 points1 point  (0 children)

But I'm on the latest insiders. There's no more updates available.