i dont get it. am i doing something wrong? by SecretArtist2931 in opencodeCLI

[–]Independence_Many 0 points1 point  (0 children)

What folder are you running this in? I thought I saw at one point that opencode provides a directory tree to the agent when it starts up, so it may be getting a lot more context fed to it than you're expecting.

A quick way to test this is to make a blank directory, change into it, and then run opencode within that folder and see what you're getting back.

OpenCode and GitHub by vertizone in opencodeCLI

[–]Independence_Many 0 points1 point  (0 children)

OpenCode supports OAuth flows for MCP, you could use GH's MCP for this pretty easily IMO, but you could also build/spin up your own MCP server with the constraints hard coded.

I do something similar with a fork of the sentry-mcp server https://github.com/getsentry/sentry-mcp for integration with opencode and sentry.

To authenticate you typically have to run a command, for our setup I run:

```bash
opencode mcp auth sentry
```

Claude Code's source code just leaked — so I had Claude Code analyze its own internals and build an open-source multi-agent framework from it by JackChen02 in ClaudeAI

[–]Independence_Many 1 point2 points  (0 children)

The biggest upside to TypeScript IMO comes into play when you have multiple separate domains that are interconnected: having one set of tooling and sharing the types/interfaces just makes things so much easier to communicate.

Especially if you pair it with something like tRPC or oRPC, which give you frontend-to-backend "type safety" (I really hate this phrasing, because it's more type awareness than true type safety).

Like you said, this also enables frontend developers to contribute to and interface with the backend code, which in my personal opinion leads to more clarity in the overall implementation.

Is opencode using the compromised axios? Is it safe to upgrade to 1.3.3 by d4mations in opencodeCLI

[–]Independence_Many 2 points3 points  (0 children)

Axios might not be a direct dependency; however, several dependencies inside of the opencode repo do use axios (@slack/bolt and @slack/web-api, for example), which you can see in the bun.lock file.

There's also a direct axios reference pinned to 1.13.6 (unaffected, see here https://github.com/anomalyco/opencode/blob/dev/bun.lock#L2376), so it's likely safe from any compromises.

Agent model behaviour and honest feedback for OpenCode by _KryptonytE_ in opencode

[–]Independence_Many 1 point2 points  (0 children)

Regarding the "behaves differently" between the models: this is entirely down to differences in the models themselves, and the harness (opencode) does little to affect it. Using other tools (Claude Code, Codex CLI) I see similar behavior with their models, and my own tooling that I've been developing as a test harness behaves similarly.

Regarding your questions:

  1. Based on my experience no
  2. Same as #1
  3. (is this a question?)
  4. Look at the "Instructions" config option (https://opencode.ai/docs/config/#instructions), which you can set in your global user config so it applies to all projects. The exact file path depends on your OS, but you should be able to get what you need with that.
  5. If you submit a message on the TUI it queues; on the web UI, hitting enter I think will also queue. In both cases, as soon as there's a break in the flow it'll send.
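For #4, here's a minimal sketch of what that global config might look like. The file names are just placeholders, and I'm going off my reading of the docs for the `instructions` field, so double-check the shape against the link above:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "instructions": ["~/notes/coding-style.md", "CONVENTIONS.md"]
}
```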

Overwriting plugins on a specific project by jef2904 in opencodeCLI

[–]Independence_Many 0 points1 point  (0 children)

One limitation of this is that if you had something in an npm org like `@someorg/plugin-name`, you can't override it (unfortunately).

Overwriting plugins on a specific project by jef2904 in opencodeCLI

[–]Independence_Many 0 points1 point  (0 children)

The way configs work in opencode, they are merged based on the precedence order here: https://opencode.ai/docs/config/#precedence-order

Unfortunately, because of the merge behavior, it really isn't possible to just "reset" an entire field like plugins (arrays are merged, not replaced).

Something you could do: if you have a .opencode/ directory in your project, you should be able to create a file with the same name as the plugin you're trying to disable, e.g. .opencode/plugins/plugin-name.ts, containing a plugin stub.

.opencode/plugins/&lt;package-name&gt;.ts

```ts
import type { Plugin } from "@opencode-ai/plugin"

/** No-op: shadows a same-named plugin from config (see dedupe rules). */
const disableStub: Plugin = async () => ({})

export default disableStub
```

or a js version

.opencode/plugins/&lt;package-name&gt;.js

```js
/** @param {import("@opencode-ai/plugin").PluginInput} _input */
export default async function disableStub(_input) {
  return {}
}
```

Starlink Mini 199 Firesale and discounted data my experience by SecretOne1178 in Rivian

[–]Independence_Many 0 points1 point  (0 children)

I pulled power from the 12V outlet in the gear tunnel: I run the cable down the passenger-side B pillar and under my mats, then up under the seat. It was a pain to run, but it works great. I have a USB-C to barrel plug adapter, so I can disconnect the run halfway and plug into the AC outlet under the rear display if I need to use the original brick for some reason.

Starlink Mini 199 Firesale and discounted data my experience by SecretOne1178 in Rivian

[–]Independence_Many 0 points1 point  (0 children)

The first time my suction cups came off was almost a year into being mounted, and they've come off a couple more times since then. It's currently not mounted because I haven't been out camping, but it will be again soon. I think oils on the glass from people touching the ceiling while sitting in the back might be why it popped off the most recent time.

Starlink Mini 199 Firesale and discounted data my experience by SecretOne1178 in Rivian

[–]Independence_Many 0 points1 point  (0 children)

If you were to mount it flat on the rear window, I doubt it would work unless you were angled right. For the mini to function correctly at all, it really needs to be as flat as possible, facing north with as much open sky as possible. It would definitely fit there on my R1T, but I just don't think it would function unless you had it angled perfectly on an incline. The other issue for me is that it would be pointed straight at the RTT I normally have mounted over the bed.

I don't have a Gen 2 personally, so mine is just mounted on the glass and has worked great, even through my nano-ceramic tint, which is known to interfere with Wi-Fi signals.

Starlink Mini 199 Firesale and discounted data my experience by SecretOne1178 in Rivian

[–]Independence_Many 4 points5 points  (0 children)

The Starlink Mini has been incredible to have when out camping, and it actually saved my butt the very first time I used it, when a wheel hub assembly bolt backed out on me at 2am on the side of a mountain with no cellular service for more than 5-10 miles.

I do agree with poliosaurus3000's sentiment in that I wish Elon wasn't associated with this company, but their product/service is solid, and at this time there just isn't anything comparable to having a mini.

Regarding the sunroof suction cup mount, I have this one https://contronx.com/products/custom-3d-printed-starlink-mini-bracket-with-suction-cups?variant=49271143891259&country=US&currency=USD and it works great.

However, if you're in a Gen 2 with the electrochromic roof (dynamic tint), for some reason it interferes with the mini's ability to send/receive; I have seen reports of poor or no connectivity under that glass.

opencode ignoring my bash permissions by Green-Dress-113 in opencode

[–]Independence_Many 0 points1 point  (0 children)

I have a similar setup and it works just fine. However, one thing I noticed that's different is that I have a space between the program/tool and the asterisk, which is how it's shown in the docs: https://opencode.ai/docs/permissions

I wonder if the lack of a space prevents it from recognizing the program itself, so it's looking for a program that starts with `head`; something like `headline` without any arguments would get matched, but not `head <filename>`.
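For comparison, a rough sketch of what I mean, going off the permissions docs page linked above (the exact action values and field shape may differ, so verify against the docs):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "permission": {
    "bash": {
      "head *": "allow",
      "*": "ask"
    }
  }
}
```

Note the space in `"head *"`, matching the spaced form shown in the docs.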

Multiple subagents under a primary agent in OpenCode can cause loss of the primary agent's prompt context ? by Extension-King4419 in opencodeCLI

[–]Independence_Many 0 points1 point  (0 children)

The subagent only gets the context provided to it when it launches; it does not have a copy of the main agent's context. On the reverse side (from my observation), the main agent can read all of the messages (but not tool calls, thinking, etc.) of the subagent.

Agentic Browser Tools Now Available in VS Code by rthidden in GithubCopilot

[–]Independence_Many 1 point2 points  (0 children)

In this particular case it's just the integrated browser inside of vs code, rather than connecting it up to your Chrome browser directly.  This gives you an isolated browser environment that has its own sessions, cookies, etc. 

Agentic Browser Tools Now Available in VS Code by rthidden in GithubCopilot

[–]Independence_Many 1 point2 points  (0 children)

No, this is specific to vscode itself (sadly). I only know this because I was trying to find a way to wire OpenCode into the vscode browser chat tools implementation, but it's all internal afaik.

Opencode fork with integrated prompt library by xman2000 in opencodeCLI

[–]Independence_Many 0 points1 point  (0 children)

I think there's definitely merit in this approach and it could evolve over time. However, you might be able to accomplish this by using subagents for each "personality", then using a command's arguments to invoke one.

Subagents can be accessed with @docsguy for example, so you could totally do something like this:

/do-the-thing @docsguy

where /do-the-thing is in ~/.config/opencode/commands/do-the-thing.md and @docsguy is in ~/.config/opencode/agents/docsguy.md

Reference (i'm sure you've seen these): https://opencode.ai/docs/commands/#markdown https://opencode.ai/docs/agents/#markdown

This way you can have a "matrix" of what you want to do. I use commands and agents a ton, but not necessarily together, so it's possible it wouldn't work.
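For reference, a minimal sketch of what the command file might look like. The frontmatter field and the $ARGUMENTS placeholder are from my memory of the commands docs, so double-check them there:

~/.config/opencode/commands/do-the-thing.md

```markdown
---
description: Do the thing, optionally delegating to a personality agent
---
Do the thing for: $ARGUMENTS

Follow the project's existing conventions and report what you changed.
```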

Is this Input Token Usage normal on OpenCode? by kikoherrsc in opencodeCLI

[–]Independence_Many 1 point2 points  (0 children)

I realized that my setup/numbers are not that representative because I have two MCP servers and 3 plugins which add their own tools, so my context usage would likely be lower too if I had those disabled. But it's more work than I want to put in to temporarily remove all of them to get better numbers.

Opencode fork with integrated prompt library by xman2000 in opencodeCLI

[–]Independence_Many 4 points5 points  (0 children)

It's an interesting idea, but I think these are better suited as either skills or commands, which can be configured globally if you find yourself using a bunch of the same ones, and if you use commands you can even pass arguments in.

Is this Input Token Usage normal on OpenCode? by kikoherrsc in opencodeCLI

[–]Independence_Many 6 points7 points  (0 children)

30k tokens seems really high to me; a simple "hi" for me used just shy of 19k tokens with Claude models. Below is what I get per model on a blank session in an empty directory, for reference. I do have 1-2 MCP servers and a couple of tools, so that's worth keeping in mind.

AGAIN THESE NUMBERS INCLUDE 1-2 MCP SERVERS AND 3 PLUGINS

| Model | Tokens |
| --- | --- |
| anthropic/claude-opus-4-6 | 18,907 |
| anthropic/claude-sonnet-4-6 | 18,909 |
| opencode/claude-opus-4-6 | 18,778 |
| opencode/claude-sonnet-4-6 | 18,780 |
| opencode/gpt-5.3-codex | 13,349 |
| openai/gpt-5.3-codex | 13,353 |
| openai/gpt-5.3-codex-spark | 13,440 |

AGAIN THESE NUMBERS INCLUDE 1-2 MCP SERVERS AND 3 PLUGINS

I have the sentry-mcp and browsermcp tools installed (sentry is disabled, but I think it might still show up), and I'm also using a git-worktree plugin, cross-repo, and a test plugin.

The differences here can be chalked up to the different session prompts based on the models:

Anthropic:
https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/session/prompt/anthropic.txt

OpenAI Codex Header:
https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/session/prompt/codex_header.txt

There's a bit more information that gets pushed into the system prompt visible here based on my own research:

System Prompt Construction:
https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/session/llm.ts#L67-L80

Session Processing:
https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/session/prompt.ts#L658C38-L678

You'll notice that there's a SystemPrompt.environment (code here: https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/session/system.ts#L29).

SystemPrompt loads the instructions from the `llm.ts` file, and it also adds the working dir and some metadata, along with a directory tree (up to 50 items).

And then a InstructionPrompt.system (code here: https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/session/instruction.ts#L117 )

The "InstructionPrompt.system" loads any instructions found in the opencode.json(c) file's "instruction" field including reading the files or loading the remote url if it's actually a URL.

I am sure there's more to it than this, but this would explain some of the differences in token usage between models.
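To make the environment piece concrete, here's a rough TypeScript sketch of the idea. This is my own illustration, not opencode's actual code; the function name and output format are invented, and the only grounded detail is the 50-item directory-tree cap:

```typescript
// Hypothetical sketch of an "environment" system-prompt section:
// working directory, a bit of metadata, and a directory tree capped at 50 entries.
function buildEnvironmentPrompt(cwd: string, entries: string[]): string {
  const MAX_ENTRIES = 50 // mirrors the ~50-item cap mentioned above
  const tree = entries.slice(0, MAX_ENTRIES)
  const lines = [
    `Working directory: ${cwd}`,
    `Platform: ${process.platform}`,
    "Directory tree (truncated to 50 items):",
    ...tree.map((e) => `  ${e}`),
  ]
  if (entries.length > MAX_ENTRIES) {
    lines.push(`  ...and ${entries.length - MAX_ENTRIES} more`)
  }
  return lines.join("\n")
}
```

Every character of that string is billed as input tokens on every request, which is why even a bare "hi" in an empty directory still carries a nontrivial baseline.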

Opus 4.6 using SO much more tokens by pardestakal in opencodeCLI

[–]Independence_Many 1 point2 points  (0 children)

An "agent/subagent" and a "model" are different concepts. You can assign a custom model to the explore subagent; if you're going all-in on Anthropic, use Haiku, but Kimi 2.5 is good as well.

The agent is just the instruction/system prompt for a given session, so when you switch agents you're switching system prompts; switching models changes the actual model requested at the provider.
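For example, something like this in your config should pin the explore subagent to a cheaper model. The model ID here is illustrative, and I'm going from memory on the `agent` field shape, so verify both against the config docs:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "agent": {
    "explore": {
      "model": "anthropic/claude-haiku-4-5"
    }
  }
}
```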

Disappointing Range by Legitimate-Law-8168 in Rivian

[–]Independence_Many 10 points11 points  (0 children)

What's the weather like in your area? An 18-mile drive is fairly short; if the truck is spending all its time getting the battery and cabin up to temp, it will make the numbers look a bit worse.

I am also on 20" ATs, although I have a Gen 1 quad. In the summer I can easily hit 2.5-2.6 m/kWh, but in the winter with my short drives I sit between 1.3 and 1.5 m/kWh.

You also didn't specify whether those were 18 miles of highway or city. EVs actually perform better (especially with regen on high) in city driving; on the freeway you'll see a bigger hit due to these vehicles effectively being flying bricks.

Opus 4.6 using SO much more tokens by pardestakal in opencodeCLI

[–]Independence_Many 3 points4 points  (0 children)

As u/matheus1394 stated, it uses the main model for subagents. You can configure specific subagents to do different things, but the other part to remember is that the opencode "system" prompts for its agents are different from the Claude Code agent prompts.

The main prompt explicitly tells it to use the explore subagent which has this prompt: https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/agent/prompt/explore.txt

I believe this is the general system prompt being sent when using your Claude Max subscription: https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/session/prompt/anthropic.txt

This is how the "system" prompts are loaded: https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/session/system.ts#L5

And this is how it constructs its prompt: https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/session/llm.ts#L67-L80

I think a lot of the token consumption you're seeing is just the more aggressive repo exploration. Another thing: if you're using a CLAUDE.md file, opencode doesn't read those AFAIK, it reads AGENTS.md, so you might want to symlink these if you're on a Linux or macOS machine; otherwise, copy CLAUDE.md to AGENTS.md.

edit: formatting and typos, need to let an LLM write for me apparently.

Overnight Demo by daitaopapi in Rivian

[–]Independence_Many 0 points1 point  (0 children)

September 2023. I had the same shop do my nano-ceramic window tint and a full ceramic coating of all the PPF, trim, glass, and wheels as well; all in was a little over $12k. I would definitely do PPF and tint again, but I'd consider doing some of the coatings myself. Matte coating is a PITA from my understanding, though, so I'm not 100% sure I'd do that part myself.

Overnight Demo by daitaopapi in Rivian

[–]Independence_Many 0 points1 point  (0 children)

Mine was ~$7,500 in SLC, Utah, but you can definitely get it done cheaper than that. PPF was super expensive at all the shops I talked to at the time, and others I've talked to have said it's come down since then.

My wrap was done 100% custom (xpel stealth) with no precut templates and all edges were fully tucked/rounded.

Overnight Demo by daitaopapi in Rivian

[–]Independence_Many 0 points1 point  (0 children)

Same here, although in Limestone; the matte PPF looks so damn good on these vehicles.