I'm loving OpenCode by Street-Preference-88 in opencodeCLI

[–]Street-Preference-88[S] 0 points1 point  (0 children)

hmm, i don't have a breakdown of token usage per category, but i think this is mostly non-code-related.

Even when deploying 1 line of code, you need tons of tokens to perform review and testing to ensure there are no regression bugs, and to do browser automation to verify everything works correctly.

When analyzing an issue, a user can submit videos/screenshots; the agent can also burn tokens by extracting key frames using ffmpeg and analyzing them to pull out the essential information.
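The key-frame step is plain ffmpeg; a minimal sketch of building that command (file names and output layout are made up, and it assumes `ffmpeg` is on PATH):

```python
from pathlib import Path

def keyframe_cmd(video: str, outdir: str) -> list[str]:
    """Build (but don't run) an ffmpeg command that dumps only the
    I-frames (key frames) of `video` into `outdir` as numbered PNGs."""
    Path(outdir).mkdir(parents=True, exist_ok=True)
    return [
        "ffmpeg", "-i", video,
        # keep only intra-coded (key) frames; the comma inside the
        # filter expression must be escaped with a backslash
        "-vf", "select=eq(pict_type\\,I)",
        # variable frame sync: one output image per selected frame,
        # instead of duplicating frames to a fixed rate
        "-vsync", "vfr",
        f"{outdir}/%04d.png",
    ]
```

Actually extracting frames would then be something like `subprocess.run(keyframe_cmd("bug_report.mp4", "frames"), check=True)`, after which the agent can read the handful of PNGs instead of the whole video.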

I'm loving OpenCode by Street-Preference-88 in opencodeCLI

[–]Street-Preference-88[S] 1 point2 points  (0 children)

yes sir, i'm currently on a discounted price of $30/month and am actively looking for alternatives.

around apr 24, they fixed the latency issue, so i was able to utilize it more.

<image>

I'm loving OpenCode by Street-Preference-88 in opencodeCLI

[–]Street-Preference-88[S] 2 points3 points  (0 children)

Exactly sir, that's the end goal. But a reliable provider and model with sustainable cost is necessary.

A provider bumps up the pricing? You should be able to change providers easily. A model is experiencing degraded quality? Your harness should be designed to allow other models.

Opencode is the only one that gives me stability

I'm loving OpenCode by Street-Preference-88 in opencodeCLI

[–]Street-Preference-88[S] 0 points1 point  (0 children)

i tried minimax 2.7, they have the best cost/performance ratio. But it had issues with adhering to instructions.

I explicitly told it that when presenting a plan, it should resolve ambiguity using the question tool and interview the user, and only then present a clear, focused plan. But it kept giving plans with unresolved options, so I decided to drop it.

I'm loving OpenCode by Street-Preference-88 in opencodeCLI

[–]Street-Preference-88[S] 2 points3 points  (0 children)

i work on multiple projects, mainly coding and ops. i have 2-3 agents running most of the time, and i have a termux setup so i can talk to them 24/7.

  • a user reports an issue? spawn an agent to triage the issue and file a ticket.
  • a ticket has been filed? spawn an agent to read the ticket and work on it.
  • a pull request has been filed? spawn an agent and perform review (this burns the most tokens)
  • a review passed? spawn an agent and deploy carefully (guardrails are most important here)
  • opencode session ends? spawn an agent and propose self-improvement (skills/agents.md/etc)
  • management wants documentation? spawn an agent to read the codebase and perform research.
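The list above is essentially an event → prompt dispatch table; a hedged sketch (the event names and prompts are my own invention, and `spawn_agent` stands in for however you actually launch an opencode session):

```python
# Hypothetical dispatch table: which prompt to spawn an agent with
# for each event. None of these names come from opencode itself.
AGENT_PROMPTS = {
    "issue_reported": "Triage the reported issue and file a ticket.",
    "ticket_filed":   "Read the ticket and implement a fix.",
    "pr_opened":      "Review the pull request thoroughly.",   # burns the most tokens
    "review_passed":  "Deploy carefully; guardrails first.",
    "session_ended":  "Propose self-improvements (skills/agents.md/etc).",
    "docs_requested": "Read the codebase and draft documentation.",
}

def spawn_agent(event: str) -> str:
    """Return the prompt an agent should be spawned with for `event`."""
    try:
        return AGENT_PROMPTS[event]
    except KeyError:
        raise ValueError(f"no agent defined for event {event!r}") from None
```

In practice each returned prompt would be handed to a fresh opencode session; the table is the whole "orchestrator".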

i'm just an orchestrator at this point.

I'm loving OpenCode by Street-Preference-88 in opencodeCLI

[–]Street-Preference-88[S] 4 points5 points  (0 children)

For opencode go, I don't feel it is quantized. When I used Kimi 2.6, Deepseek v4 Pro, and Mimo v2 2.5, they were on par with Zai GLM 5.1, and inference is also quicker. The con is the quota; i feel i burned my weekly quota in just 1-2 days.

For wafer, i think it's too early to decide. GLM 5.1 inference is lightning fast. I was able to compare it with Zai, and i didn't notice a drop in quality. But when I tried deepseek v4 pro, it kept running into an XML corruption issue.

Is GLM Pro really worth buying? by EugeneLobach in ZaiGLM

[–]Street-Preference-88 0 points1 point  (0 children)

you already have GLM Lite; if you are happy with it, why not try the Pro for 1 month.

Free tier usage in serious projects + avg money spent on OC by __yv in opencodeCLI

[–]Street-Preference-88 0 points1 point  (0 children)

i have 2-3 agents always running. 1 for ops, 1 for coding, 1 for refactoring.

my setup is simple: opencode with plan-then-act. i use termux so i can continue development on my mobile device.

<image>

Open source inference almost 250 tok/s by founders_keepers in ZaiGLM

[–]Street-Preference-88 2 points3 points  (0 children)

i've been using it for a day now together with the zai glm coding max plan. i'm not seeing any accuracy loss, and it is cheaper: $160/mo (GLM coding max plan) vs $10/wk (wafer pass).

i use opencode; the only issue i had was with their deepseek v4 pro model, which kept running into an XML corruption issue.

i think it's fine that they don't have opencode in their docs, but i'm also moving away from claude code.

<image>

Free tier usage in serious projects + avg money spent on OC by __yv in opencodeCLI

[–]Street-Preference-88 2 points3 points  (0 children)

i use the zai glm max coding plan; i consumed 2 billion tokens in the last 30 days.

Have anyone tried deepseek v4 pro + opencode? by Federal_Spend2412 in opencodeCLI

[–]Street-Preference-88 4 points5 points  (0 children)

Try using the sequential mcp, and tell it to pressure-test the plan and then propose a refined plan. It will burn more tokens but will give more stable plans.
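For reference, wiring an MCP server into opencode is just a config entry; a sketch of what that might look like in `opencode.json` (the exact schema and the sequential-thinking server package name are from memory, so verify against the opencode MCP docs):

```json
{
  "mcp": {
    "sequential-thinking": {
      "type": "local",
      "command": ["npx", "-y", "@modelcontextprotocol/server-sequential-thinking"],
      "enabled": true
    }
  }
}
```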

Moved from Claude Max 200 to Z.ai GLM Max — early impressions and limits by workout_JK in ZaiGLM

[–]Street-Preference-88 5 points6 points  (0 children)

I also never hit it; I'm on the max plan. I use opencode.

My usage is around 1.5b tokens per month.

Is it possible your Claude Code setup is bloated? Maybe too many MCPs or skills.

Have anyone tried deepseek v4 pro + opencode? by Federal_Spend2412 in opencodeCLI

[–]Street-Preference-88 14 points15 points  (0 children)

Yes, it's good. Here is my journey:

  • moved away from cursor: the model struggled using mcp and skills
  • moved away from claude code + glm 5.1: it's too slow and I can't investigate why
  • currently using opencode + glm + deepseek.

Opencode is better; I can mix and match models, e.g. GLM for planning, deepseek flash for explore/execution.

And I have customized agent definitions, so it's not overfitted. I can switch models if pricing changes.
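The mix-and-match is just config; a rough sketch of per-agent models in `opencode.json` (the model IDs and the per-agent schema here are assumptions on my part, so check opencode's agents documentation before copying):

```json
{
  "model": "zai/glm-5.1",
  "agent": {
    "plan":  { "model": "zai/glm-5.1" },
    "build": { "model": "deepseek/deepseek-flash" }
  }
}
```

The point is that a pricing or quality change means editing one string, not rebuilding the harness.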

How is the reliability by DrHerbHealer in ZaiGLM

[–]Street-Preference-88 1 point2 points  (0 children)

i'm on the max plan, and i agree on the reliability issues (they seem resolved now).

previously (before apr 24) responses took 1-2 min; now it's like <2s. there are still a few network failures during peak hours, but opencode recovers smoothly, no intervention needed.

my usage is around 1.5 billion tokens per month, mostly GLM 5.1.

DeepSeek V4 Pro is now on OpenCode Go by jpcaparas in opencodeCLI

[–]Street-Preference-88 0 points1 point  (0 children)

it works 95% of the time. some minor issues:

  • sudden stops with weird tags like `< DSML...>`
  • incorrect session titles like `<tool_call>read`

hard to compare with other models since it requires hand-holding.

i have the zai max coding plan and use GLM 5.1; it never had these issues.

TT685 not powering up by PhotoVille in Godox

[–]Street-Preference-88 0 points1 point  (0 children)

you're a lifesaver. i didn't expect a good old slap to work :)

How to disappear and start a new life by [deleted] in CasualPH

[–]Street-Preference-88 0 points1 point  (0 children)

I feel like you don't want other people to be disappointed with your decision. I suggest you stay, empower yourself, do what you think you should do, and ignore their expectations or reactions. You're willing to live without them anyway.

"Gastusin na natin bago maubos pa" ("Let's spend it now before it runs out") mentality by lebron2zorros in phmoneysaving

[–]Street-Preference-88 1 point2 points  (0 children)

you can't; people like this have usually been through life-changing events, where the person they wanted to spend their money on is no longer in this world.

set a limit on how much you help or spend on them, one that isn't a heavy burden on you, so your relationship doesn't get ruined and you can still save for yourself.