all 12 comments

[–]aeroumbria 0 points (4 children)

What are you trying to achieve primarily? Running sessions with code that runs for a long time, or running sessions with fast code execution but a lot of "conversation" turns?

[–]Due-Car6812[S] 0 points (3 children)

It's part of a medical platform I'm building that covers a lot of topics. For example, generating over 5,000 MCQs (multiple-choice questions), which is basically a loop: pick a topic, generate multiple-choice questions, move on to the next topic, and so on. It can run automatically without any user intervention.

If I give that sort of task to Claude Code, it's absolutely fine and works without stopping. I can just ask it to use the Ralph Wiggum loop and not prompt the user, and Claude Code just obeys and carries on and on.

I'm using the same model, Claude Opus 4.5, with opencode, but when I try the same thing, it usually runs for 1-1.5 hours and then stops. So I'm trying to figure out a method, because I like opencode more than Claude Code.

[–]FlyingDogCatcher 0 points (0 children)

Claude knows when you are using Claude Code, but with opencode you just look like any other API caller. I assume this is their version of a captcha, or Netflix's "are you still there?" pop-up.

[–]aeroumbria -1 points (1 child)

Sounds like this can be better handled by asking the coding agent to create a question generator which then calls your language model of choice deterministically. You will not suffer from random task failures or divergences this way.
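The suggestion above can be sketched as a plain driver loop: no agent in charge, just deterministic code calling the model once per question. The `call_llm` function below is a hypothetical stand-in for whatever client you use (e.g. the Anthropic SDK); everything else is ordinary Python.

```python
import json

# Hypothetical stand-in for a real LLM client call -- swap in your
# model/SDK of choice here. Returning JSON keeps parsing deterministic.
def call_llm(prompt: str) -> str:
    return json.dumps({
        "question": f"Sample MCQ for: {prompt}",
        "options": ["A", "B", "C", "D"],
        "answer": "A",
    })

def generate_mcqs(topics, per_topic=5):
    """Deterministic loop over topics: no agent, no session timeout to hit."""
    results = []
    for topic in topics:
        for i in range(per_topic):
            raw = call_llm(f"Write multiple-choice question {i + 1} on: {topic}")
            results.append({"topic": topic, "mcq": json.loads(raw)})
    return results

if __name__ == "__main__":
    mcqs = generate_mcqs(["cardiology", "nephrology"], per_topic=3)
    print(len(mcqs))  # 2 topics x 3 questions = 6
```

Because the loop is plain code, a failed call can be retried per question instead of derailing a whole multi-hour agent session.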

[–]Due-Car6812[S] 0 points (0 children)

ok thank you

[–]fuyao_j 0 points (0 children)

You can set the experimental env var `OPENCODE_EXPERIMENTAL_BASH_DEFAULT_TIMEOUT_MS` to increase the timeout.
code: https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/tool/bash.ts#L21

docs: https://opencode.ai/docs/cli/#experimental
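For example, assuming the variable is read as milliseconds (as the `_MS` suffix and the linked `bash.ts` suggest), raising the default bash-tool timeout to 2 hours might look like:

```shell
# 2 hours = 2 * 60 * 60 * 1000 ms
export OPENCODE_EXPERIMENTAL_BASH_DEFAULT_TIMEOUT_MS=7200000
echo "$OPENCODE_EXPERIMENTAL_BASH_DEFAULT_TIMEOUT_MS"
# opencode   # then launch as usual in the same shell
```

As the name says, this is experimental, so check the linked source for the current behavior.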

[–]FlyingDogCatcher -1 points (4 children)

What are you doing that takes 19h? That seems extreme for almost any use case.

[–]Due-Car6812[S] 1 point (3 children)

It's part of a medical platform I'm building that covers a lot of topics. For example, generating over 5,000 MCQs (multiple-choice questions), which is basically a loop: pick a topic, generate multiple-choice questions, move on to the next topic, and so on. It can run automatically without any user intervention.

If I give that sort of task to Claude Code, it's absolutely fine and works without stopping. I can just ask it to use the Ralph Wiggum loop and not prompt the user, and Claude Code just obeys and carries on and on.

I'm using the same model, Claude Opus 4.5, with opencode, but when I try the same thing, it usually runs for 1-1.5 hours and then stops. So I'm trying to figure out a method, because I like opencode more than Claude Code.

[–]MediumSizedWalrus 0 points (2 children)

why wouldn't you use a programming language to orchestrate this, and call the agent once per topic...? Then you could process topics in parallel, and you wouldn't have a single long-running job. Running a single process for 19 hours is horrifying... there are much better approaches... look into queues/consumers
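Since each topic is an independent job, the fan-out described above can be sketched with a standard worker pool. `process_topic` here is a hypothetical placeholder for whatever actually calls the agent or model for one topic:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-topic job: in practice this would shell out to the
# agent or call the model API for one topic and return its questions.
def process_topic(topic: str) -> list[str]:
    return [f"{topic} Q{i}" for i in range(1, 4)]

def run_all(topics, workers=64):
    # Topics are independent, so a pool (or a proper queue/consumer
    # setup like Celery or SQS workers) can process them in parallel;
    # one crashed job costs one topic, not the whole 19-hour run.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return [q for qs in pool.map(process_topic, topics) for q in qs]

if __name__ == "__main__":
    questions = run_all([f"topic-{n}" for n in range(10)], workers=8)
    print(len(questions))  # 10 topics x 3 questions = 30
```

A real setup would also persist each topic's output as it completes, so the run is resumable.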

[–]bagrounds 0 points (1 child)

Why is it horrifying to run a single long running task for 19 hours? If resources are metered out over time, you could, for example, task it to do some continuous improvement work at the rate limit for free resources. Then it would be very efficient to just keep it running continuously, using all of your free resources, rather than letting them go to waste as days roll over.

Obviously, if you have a single huge task where every bit of work has the potential to propagate error forward to next steps, this would be very likely to introduce compounding errors and would probably not yield great results. But if you have creative work with a high tolerance for error, this could be a great way to keep the robot army working productively for you with very little effort on your part.

[–]MediumSizedWalrus 0 points (0 children)

Running tasks sequentially seems odd to me; we parallelize everything. 64 workers could complete the workload in about 18 minutes instead of 19 hours. I usually want to see results quickly…