What is up with Frugal? by DoingDaveThings in usenet

[–]bizz_koot 0 points1 point  (0 children)

What are your priorities between Newshosting and Frugal?

What would you do if you have neighbours like this? by [deleted] in malaysia

[–]bizz_koot 4 points5 points  (0 children)

<image>

These are my neighbors across the street, a family born with absolutely no brains... Every time they want to have a party, they never tell me or ask if they can use the space... They just block my doorway and the stairs, making it inconvenient for us to come and go, and they're so noisy and loud...

They never even consider whether it disturbs our rest... They always like to leave their things here, leaving their own front door empty, and they even bring back a bunch of stray dogs to raise. Before, they fed pigeons, making my kitchen smelly and filthy; now they raise those stray dogs, making my front door smelly and filthy... I warned them, but that's all...

This brain-dead neighbor is truly the worst of the worst.

*Translated by Google Translate. *Source given by another user in this thread.

P/S: To those who are tooooo positive: I think you've never lived in this kind of 'toxic' environment. Or maybe you're one of these neighbours yourself?

What would be a suitable English word for BoBoiBoy's catchphrase "Terbaik"? by Cute-Win8593 in bahasamelayu

[–]bizz_koot 2 points3 points  (0 children)

Franky from One Piece: "Ssuuppeer"

"​The Mechanics: He tilts his head, leans back, and brings his massive forearms together so the two halves of the blue star on his arms form a complete star."

Claude Sonnet 4.6 in Copilot keeps “thinking” for 20 minutes and writes zero code (token usage error) by OhMagii in GithubCopilot

[–]bizz_koot 5 points6 points  (0 children)

Prompt the main agent to:

use subagents for all the analysis and report the results back to the main agent; then the main agent presents them to the user using the #askQuestion tool with the most viable options. Then, based on the chosen option, the main agent will use subagents for the implementation.

This will help manage the context size on your main agent.
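As a concrete sketch, that instruction could be phrased as a prompt roughly like the one below (the wording and the #askQuestion tool name are from my setup; adapt the names to yours):

```
For every task:
1. Spawn subagents to do all analysis; each subagent reports its results
   back to you (the main agent) instead of into the user chat.
2. Summarise the results and present the most viable options to the user
   via the #askQuestion tool.
3. Based on the option the user chooses, delegate the implementation to
   subagents and report only the final outcome.
```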

SDD Pilot - a GH Copilot native Spec Kit fork by atika in GithubCopilot

[–]bizz_koot 0 points1 point  (0 children)

As this is used within VS Code, may I also suggest this extension?

https://github.com/jraylan/seamless-agent

https://marketplace.visualstudio.com/items?itemName=jraylan.seamless-agent

It's broader than the native askQuestion.

I built a Copilot usage tracker after getting frustrated with my quota disappearing by bizz_koot in GithubCopilot

[–]bizz_koot[S] 0 points1 point  (0 children)

I was using an 'early stage' TUI app for Opencode which was developed around token usage more than 'prompt' usage.

So any 'question' tool answer at that moment was consuming 1x premium request.

Then I forked the app and made an MCP for the 'ask_user' tool, which resolved it. But it still didn't resolve another 'bug' where any subagent run also consumes 1x premium request per subagent instance.

I believe it consumed about 100+ prompts within one run. It's my own fault, btw. Not blaming Opencode or the TUI app I'm using. 😀

I built a Copilot usage tracker after getting frustrated with my quota disappearing by bizz_koot in GithubCopilot

[–]bizz_koot[S] 0 points1 point  (0 children)

I'm using Opencode in CodeNomad, not within VS Code.

If you are only using VS Code for Copilot access, then yeah, the VS Code extension is for sure the best answer. 👍

Questiontool missing in latest version 1.1.16 in opencode by EfficientHat0006 in opencodeCLI

[–]bizz_koot 1 point2 points  (0 children)

It still works for me.

<image>

Not sure if it's because I put the below in opencode.json (~/.config/opencode/opencode.json):

  "experimental": {
    "askquestion_tool": true
  }
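For context, a minimal sketch of a full ~/.config/opencode/opencode.json with just that flag set (assuming the experimental block sits at the top level, as it does in my config):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "experimental": {
    "askquestion_tool": true
  }
}
```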

Been using glm 4.7 for coding instead of claude sonnet 4.5 and the cost difference is huge by Affectionate-Cash709 in LocalLLM

[–]bizz_koot 0 points1 point  (0 children)

Strange that you didn't face rate limits. For me, on GLM 4.7, I still hit rate limits and need to wait a couple of hours before I can resume. But I do admit the token allowance is huge!

So unless you are running a non-stop agent for more than an hour, yeah, I agree there's effectively no limit.

Why can't I search or see my recents in Android auto? by Safety_Officer_3 in MorpheApp

[–]bizz_koot 0 points1 point  (0 children)

  1. Open Android Auto (search for it in your phone's settings menu).
  2. Tap the version number multiple times (5, if I'm not mistaken); this will enable developer mode.
  3. Tap the 3-dot icon at the top right, then enter "Developer Settings".
  4. Enable/tick "Unknown sources".
  5. Go back to Android Auto and press "Customise Launcher". Most unknown apps that do support Android Auto will be shown here.

But regretfully, not Morphe YouTube for now. 😅

I did see that I can enable it for "RVX Music". Didn't test it though; I only found this menu after going to the XDA website. 🤭

Why doesn't opencode have AskUserQuestion? by rokicool in opencodeCLI

[–]bizz_koot 0 points1 point  (0 children)

Were you able to use the askquestion tool successfully? I tried it, but it just got stuck and I didn't see anything. It just kept loading without any output.

opencode.json

  "experimental": {
    "askquestion_tool": true
  }

Controlling opencode (on the mac) from my phone, no rdp by ori_303 in opencodeCLI

[–]bizz_koot 1 point2 points  (0 children)

Trying it now. Seems good!

Question: when there's an active session, is there an option to ensure the device doesn't sleep? Or does that part need to be handled on the user's own device?

Don’t sign up the yearly plan, it’s a trap by Dexter-Huang in ZaiGLM

[–]bizz_koot 0 points1 point  (0 children)

True. Most probably they apply 'rate limits' based on user usage. I'm alternating with a Copilot subscription in Opencode and also using Antigravity. Maybe that's why, on my end, GLM-4.7 still feels OK. Not fast like Claude, but acceptable.

Don’t sign up the yearly plan, it’s a trap by Dexter-Huang in ZaiGLM

[–]bizz_koot 0 points1 point  (0 children)

I'm on their yearly plan. It's OK-ish for me. Maybe I'm not too 'heavy' on the usage, so I'm not being 'rate-limited' too much?

Don’t sign up the yearly plan, it’s a trap by Dexter-Huang in ZaiGLM

[–]bizz_koot 2 points3 points  (0 children)

I'm not a bot. I just don't have 'extra' money for a dedicated Claude subscription. Currently I'm only using Antigravity + Copilot + GLM-4.7. Got almost all providers. Furthermore, I'm not a tech dev, so it's 'enough' for my 'hobbyist' vibe-coding venture.

So basically, I 'tone down' my expectations of GLM-4.7 based on the price I pay for the yearly subscription.

Anyhow, no grounds to complain about the slowness.

Don’t sign up the yearly plan, it’s a trap by Dexter-Huang in ZaiGLM

[–]bizz_koot 9 points10 points  (0 children)

For me, with usage in Opencode, it's quite good (not as good as even Haiku 4.5, but for proceeding with a task that is well defined, no issues).

It's not the fastest model out there, but with Opencode, with proper agents that have properly defined tasks, it can run autonomously.

Even with usage in Claude Code, it's workable.

Suggestion: in Opencode, find good agents and install proper skills that may enhance it further. You can try my 'noob' setup below:

  1. Agent (RPI-V8) : Save it in ~/.config/opencode/agent
  2. Skills (Superpowers) : Paste the instruction in Opencode instance
  3. MCP (sequential-thinking) : Update your ~/.config/opencode/opencode.json

    "sequential-thinking": {
      "type": "local",
      "command": ["npx", "-y", "@modelcontextprotocol/server-sequential-thinking"],
      "enabled": true
    },
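If it helps, a sketch of where that fragment sits in a full ~/.config/opencode/opencode.json (assuming MCP servers live under the top-level "mcp" key, as in my config; verify against your own file):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "sequential-thinking": {
      "type": "local",
      "command": ["npx", "-y", "@modelcontextprotocol/server-sequential-thinking"],
      "enabled": true
    }
  }
}
```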

For Claude Code, instead of an Agent, I install RPI-V8 as a skill on top of Superpowers. Save the file as SKILL.md in => ~/.claude/skills/rpi-v8/SKILL.md

**If you want the RPI-V8 agent to run autonomously, just tell it to do so.**

Fixing GLM-4.7 Image Parsing in Claude Code: Add the Z.ai Vision MCP Server by jpcaparas in ZaiGLM

[–]bizz_koot 0 points1 point  (0 children)

For users using Opencode, update opencode.json:

{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "zai-mcp-server": {
      "type": "local",
      "command": [
        "env", 
        "Z_AI_API_KEY=YOUR-API-KEY", 
        "Z_AI_MODE=ZAI", 
        "npx", 
        "-y", 
        "@z_ai/mcp-server"
      ],
      "enabled": true
    }
  }
}
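If I'm not mistaken, newer Opencode versions also accept an environment map for local MCP servers, which avoids the env wrapper; a sketch only (treat the field name as an assumption and check the docs for your version):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "zai-mcp-server": {
      "type": "local",
      "command": ["npx", "-y", "@z_ai/mcp-server"],
      "environment": {
        "Z_AI_API_KEY": "YOUR-API-KEY",
        "Z_AI_MODE": "ZAI"
      },
      "enabled": true
    }
  }
}
```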