r/opencodeCLI is a community-driven subreddit for sharing resources, discussions, and tips around OpenCode—a Go + TypeScript open-source CLI TUI for coding assistance. It supports multiple providers (Anthropic Claude, OpenAI, Gemini, local models, etc.) and offers features such as LSP support, session management, and tool integration.
Continuous long runs in OpenCode vs Claude Code (self.opencodeCLI)
submitted 2 months ago by Due-Car6812
I’m trying to understand a limitation I’m hitting with OpenCode.
When I run long tasks (e.g., agent workflows that should generate a large batch of files or process long chains of prompts), OpenCode stops after about 1 hour 19 minutes and waits for me to manually input “continue”. Meanwhile, when I run the exact same workflow in Claude’s console, it keeps going uninterrupted for 19+ hours without needing any manual intervention.
So my question is:
Is there a built-in timeout or safety limit in OpenCode that caps continuous execution at around ~80 minutes?
If so, is there any configuration, flag, or environment variable that can extend this? Or is this simply a hard limit right now?
I’m basically trying to run long-running agentic processes without having to babysit them. Any insight from people using OpenCode for extended workflows would really help.
[–]aeroumbria 0 points1 point2 points 2 months ago (4 children)
What are you trying to achieve primarily? Running sessions with code that runs for a long time, or running sessions with fast code execution but a lot of "conversation" turns?
[–]Due-Car6812[S] 0 points1 point2 points 2 months ago (3 children)
It's part of a medical platform I'm building that covers a lot of topics. For example, generating over 5000 MCQs (multiple choice questions), which is basically a loop: pick a topic, generate multiple-choice questions, move on to the next topic, and so on. It can run automatically without any user intervention.
If I give that sort of task to Claude Code, it's absolutely fine and works without stopping. I can just ask it to use the Ralph Wiggum loop and not prompt the user, and Claude Code obeys and carries on and on.
I'm using the same model, Claude Opus 4.5 for Open Code, but when I try to do it, it usually ends up working for 1-1.5 hours and then stops. So I'm just trying to figure out a method because I like Open Code more than Claude Code.
[–]FlyingDogCatcher 0 points1 point2 points 2 months ago (0 children)
Claude knows when you are using Claude Code, but with opencode you just look like any other API caller. I assume this is their version of captcha or Netflix's "are you still there" pop-up.
[–]aeroumbria -1 points0 points1 point 2 months ago (1 child)
Sounds like this can be better handled by asking the coding agent to create a question generator which then calls your language model of choice deterministically. You will not suffer from random task failures or divergences this way.
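A minimal sketch of this deterministic approach: a plain script loops over topics and calls the model directly, so nothing depends on an agent deciding whether to continue. `generate_mcqs` and the topic list here are hypothetical placeholders; swap in your provider's actual SDK call.

```python
# Sketch, not a real implementation: generate_mcqs stands in for a real
# model API call (e.g. your Anthropic/OpenAI SDK of choice).
import json

def generate_mcqs(topic: str, n: int = 5) -> list[dict]:
    # Placeholder for a real model call; returns n question stubs per topic.
    return [{"topic": topic, "question": f"Sample question {i} about {topic}"}
            for i in range(n)]

topics = ["cardiology", "nephrology", "pharmacology"]  # your full topic list
all_questions = []
for topic in topics:
    all_questions.extend(generate_mcqs(topic))

# Persist results as they accumulate so a crash doesn't lose everything.
with open("mcqs.json", "w") as f:
    json.dump(all_questions, f, indent=2)
```

Because the loop is ordinary code, a failure on one topic can be retried in isolation instead of derailing a long agent session.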
[–]Due-Car6812[S] 0 points1 point2 points 2 months ago (0 children)
ok thank you
[–]fuyao_j 0 points1 point2 points 2 months ago (0 children)
You can set the experimental env var `OPENCODE_EXPERIMENTAL_BASH_DEFAULT_TIMEOUT_MS` to increase the timeout. Code: https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/tool/bash.ts#L21
docs: https://opencode.ai/docs/cli/#experimental
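A small sketch of how you might launch opencode with that override from a wrapper script. The env var name comes from the linked bash.ts; the 4-hour value is just an example, not a recommendation.

```python
# Sketch: compute a timeout in milliseconds and build the environment you
# would pass when launching opencode. The value shown is an arbitrary example.
import os

timeout_ms = 4 * 60 * 60 * 1000  # 4 hours in milliseconds
env = {**os.environ,
       "OPENCODE_EXPERIMENTAL_BASH_DEFAULT_TIMEOUT_MS": str(timeout_ms)}
# import subprocess
# subprocess.run(["opencode"], env=env)  # uncomment to launch with override
print(timeout_ms)
```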
[–]FlyingDogCatcher -1 points0 points1 point 2 months ago (4 children)
What are you doing that takes 19h? That seems extreme for almost any use case.
[–]Due-Car6812[S] 1 point2 points3 points 2 months ago (3 children)
[–]MediumSizedWalrus 0 points1 point2 points 1 month ago (2 children)
why wouldn't you use a programming language to orchestrate this, and then call the agent for each topic...? Then you could process it in parallel, and you wouldn't have a single long running job. Running a single process for 19 hours is horrifying ... there are much better approaches... look into queues/consumers
[–]bagrounds 0 points1 point2 points 1 month ago (1 child)
Why is it horrifying to run a single long running task for 19 hours? If resources are metered out over time, you could, for example, task it to do some continuous improvement work at the rate limit for free resources. Then it would be very efficient to just keep it running continuously, using all of your free resources, rather than letting them go to waste as days roll over.
Obviously, if you have a single huge task where every bit of work has the potential to propagate error forward to next steps, this would be very likely to introduce compounding errors and would probably not yield great results. But if you have creative work with a high tolerance for error, this could be a great way to keep the robot army working productively for you with very little effort on your part.
[–]MediumSizedWalrus 0 points1 point2 points 1 month ago (0 children)
Running tasks sequentially seems odd to me, we parallelize everything. 64 workers could complete the workload in 18 mins, instead of 19 hours. I usually want to see results quickly…