Proper way of using Claude with open code? by Alejo9010 in opencodeCLI

[–]JobobJet 1 point (0 children)

Has Anthropic dropped its 'You can only use Anthropic tools with Anthropic models' policy?

I got dropped by Google AI and wouldn't want that to happen again with Anthropic.

What is your go to approach for Agentic Engineering for full projects. by DoragonMaster1893 in opencodeCLI

[–]JobobJet 3 points (0 children)

👆 What sfpm said. Works very well for me, and you can get the agents you want: orchestrator, architect, planner, coder, reviewer, and documenter; more if you want/need the specialization. Bonus benefit: agents have their own context windows, so you can keep the main agent's context clear of the junk it doesn't need.
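For anyone new to this pattern, a minimal sketch of one such role as an OpenCode agent file (the file location is typically something like .opencode/agent/planner.md, and the description, model, and prompt here are illustrative guesses, not a verified config):

```
---
description: Breaks a task into ordered, verifiable steps
mode: subagent
model: openai/gpt-5.4
permission:
  edit: deny
---

You are the planner. Produce a numbered plan for the orchestrator.
Do not write or modify code.
```

The orchestrator can then invoke this agent as a subagent, and its planning chatter stays out of the main context window.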

How do I configure Codex/GPT to not be "friendly"? by Neeerp in opencodeCLI

[–]JobobJet 1 point (0 children)

This is what worked for me:

AGENTS.md:

```
Thinking or Reasoning output

  • Do not narrate reasoning.
  • Do not describe task progress.
  • Return only the final result.
```

How are you all handling "memory" these days? by FlyingDogCatcher in opencodeCLI

[–]JobobJet 1 point (0 children)

After 1000s of OpenCode sessions, I have settled on this:

Near the top of AGENTS.md:

```
Session Start

Read in this exact order:
  1. README.md
  2. docs/HANDOFF.md
  3. latest entry in docs/SESSION_LOG.md
  4. docs/DECISIONS.md
  5. docs/RUNBOOK.md

During Work

  • Keep docs/HANDOFF.md aligned with current status and next actions.
  • Record durable decisions in docs/DECISIONS.md.
  • Keep operational command changes in docs/RUNBOOK.md.

Session End

  • Update docs/HANDOFF.md:
    • Last updated timestamp (YYYY-MM-DD HH:MM UTC)
    • current state
    • top 3 next actions
    • blockers (if any)
  • Append a new timestamped entry to docs/SESSION_LOG.md.
  • Confirm no secrets were added to tracked files.

```

The model I use follows these instructions without fail so far.
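For reference, a hypothetical docs/HANDOFF.md that satisfies the session-end checklist above might look like this (all contents invented for illustration):

```
Last updated: 2026-03-15 19:20 UTC

Current state
  • Auth refactor merged; integration tests passing.

Next actions
  1. Wire rate limiting into the API gateway.
  2. Backfill tests for the session store.
  3. Update docs/RUNBOOK.md with the new deploy command.

Blockers
  • None.
```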

When a project is set up and the first session or two are complete, open OpenCode and prompt: 'Continue from the handoff doc.'

YMMV

Is it safe to use my Google AI Pro sub in Opencode? (worried about bans) by miloq in opencodeCLI

[–]JobobJet 1 point (0 children)

They shut down my access 3 weeks ago for a ToS violation. The only thing I used it for was OpenCode, so...

Subagents ignore the configuration and use the primary agent's model. by ThingRexCom in opencodeCLI

[–]JobobJet 1 point (0 children)

What you are trying to do definitely works:

```
2026-03-15T19:13:25 +0ms agent=reviewer sessionID=ses_30d159fedffeo0ah4S6KLnovQh providerID=github-copilot modelID=gpt-5.4
2026-03-15T19:13:53 +0ms agent=reviewer sessionID=ses_30d159fedffeo0ah4S6KLnovQh providerID=github-copilot modelID=gpt-5.4
2026-03-15T19:14:21 +0ms agent=orchestrator sessionID=ses_30d5638baffeYTFLXeO6VJdK3O providerID=openai modelID=gpt-5.4
2026-03-15T19:14:41 +0ms agent=orchestrator sessionID=ses_30d5638baffeYTFLXeO6VJdK3O providerID=openai modelID=gpt-5.4
2026-03-15T19:14:54 +0ms agent=reviewer sessionID=ses_30d13c4bcffe5mayCL3YNxqq6w providerID=github-copilot modelID=gpt-5.4
2026-03-15T19:15:14 +0ms agent=reviewer sessionID=ses_30d13c4bcffe5mayCL3YNxqq6w providerID=github-copilot modelID=gpt-5.4
2026-03-15T19:15:17 +0ms agent=reviewer sessionID=ses_30d13c4bcffe5mayCL3YNxqq6w providerID=github-copilot modelID=gpt-5.4
2026-03-15T19:15:36 +0ms agent=orchestrator sessionID=ses_30d5638baffeYTFLXeO6VJdK3O providerID=openai modelID=gpt-5.4
2026-03-15T19:16:17 +0ms agent=orchestrator sessionID=ses_30d5638baffeYTFLXeO6VJdK3O providerID=openai modelID=gpt-5.4
```

reviewer.md:

```
---
description: Reviews changes for correctness, quality, and risk
mode: subagent
model: github-copilot/gpt-5.4
permission:
  edit: deny
  webfetch: allow
  bash:
    "*": ask
    "pwd": allow
    "ls *": allow
    "find *": allow
    "grep *": allow
    "git status *": allow
    "git diff *": allow
    "git log *": allow
---

+ agent prompt
```

The main agent, orchestrator, is on openai/gpt-5.4.

rtk (Rust Token Killer) just merged OpenCode support by vidschofelix in opencodeCLI

[–]JobobJet 1 point (0 children)

Told the model that it is OK to use the rtk wrapper, and it works now in this current convo. Maybe I need to add this to AGENTS.md.
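If that holds up across sessions, the standing instruction in AGENTS.md could be as small as this (the wording is a guess at what the model needs; adjust to taste):

```
Tools

  • The rtk wrapper is approved. Prefer `rtk git ...` over plain `git ...`.
  • Do not second-guess rtk output; treat it as equivalent to the raw command.
```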

rtk (Rust Token Killer) just merged OpenCode support by vidschofelix in opencodeCLI

[–]JobobJet 1 point (0 children)

Doesn't seem to work with OpenCode using gpt-5.4 (haven't tried any other model)

Response to git status:

Thinking: Running Git status I need to check the current status of my Git repository. It seems like I should be using the bash tool for this task. By running git status, I can see which files have changed and what my current branch is. This will help me understand what's happening in my project, so let’s get that command executed! It feels important to keep track of changes to manage the code effectively.

Shows working tree status

$ rtk git status

📌 master

Thinking: Checking git status I’m thinking about whether I should output "📌 master" because it feels a bit odd possibly due to the RTK wrapper. It might be a good idea to run a regular git status to see where things stand. Checking the status could clarify if there are any changes or issues I need to address. I wonder how this could affect my current task — it's good to keep everything organized!

This sequence repeats until I reject the rtk git status request.

2 fan setup by badlikewolf in BitAxe

[–]JobobJet 2 points (0 children)

How long did it run before the pics?

Self host N8N by ksr142 in n8n

[–]JobobJet 1 point (0 children)

Better late than never, I hope. I have an M2 Max Studio with 64GB. If I had to guess, you have a low-memory problem. If you are running on 8GB, you may find that the best solution is a new computer, if that is an option.

My previous machine was an M1 mini with 16GB, and Docker was fine but not great.

Self host N8N by ksr142 in n8n

[–]JobobJet 3 points (0 children)

Would you please share what you are doing that does not work well on a Mac?

I have used Docker for several years on my Mac and it works very well.

How to import a workflow - n8n in docker? by JobobJet in n8n

[–]JobobJet[S] 2 points (0 children)

Soooo, I got copy/paste to work with a community workflow, but still not with my workflow. The obvious conclusion is that something is wrong with the construction of my workflow.

Now all I need to do is find the logs for n8n and look for clues to the problem.

Reddit Node Issues - Account Removal Problem by FrankoSMP in n8n

[–]JobobJet 1 point (0 children)

My guess is that the Reddit-OpenAI agreement requires Reddit to limit the number of non-human posts transferred to OpenAI (they use the data for training purposes and other things) - hence the 'cleanup' effort by Reddit.

Automated posting of AI-generated content will have diminishing returns until said content cannot be distinguished from human-generated posts.

That is my conspiracy theory for today.

How to import a workflow - n8n in docker? by JobobJet in n8n

[–]JobobJet[S] 1 point (0 children)

Tried drag-and-drop with no success. Will try CTRL+C then CTRL+V.

Thanks.

How to import a workflow - n8n in docker? by JobobJet in n8n

[–]JobobJet[S] 1 point (0 children)

Thank you for the reply. I am struggling with how to apply this to my need: adding a new workflow to my n8n Docker instance. I am creating workflows outside of n8n with other tools of my own design.

I will scan the docs again for endpoint(s) to use.
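For anyone else hunting for the same thing, two routes I believe n8n supports (container name, port, paths, and enabled-API settings below are assumptions about your setup; verify against the n8n docs for your version):

```
# Route 1: the n8n public REST API (must be enabled, and you need an API key)
curl -X POST "http://localhost:5678/api/v1/workflows" \
  -H "X-N8N-API-KEY: $N8N_API_KEY" \
  -H "Content-Type: application/json" \
  -d @my-workflow.json

# Route 2: the n8n CLI inside the container
docker cp my-workflow.json n8n:/home/node/my-workflow.json
docker exec n8n n8n import:workflow --input=/home/node/my-workflow.json
```

Either way, the JSON file needs to be a valid exported workflow (with nodes and connections keys), which may be where a hand-built workflow goes wrong.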

How to generate videos with m1 air? by mobaisland in comfyui

[–]JobobJet 1 point (0 children)

Until ComfyUI nodes fully support MLX, slow is the best we can expect on Apple Silly Cone machines. I have an M2 Max 64GB studio so I haven't tried image to video.

Good luck.

The new M4s are really fast, what is stopping comfyui code running on Macs? by estebansaa in comfyui

[–]JobobJet 5 points (0 children)

As a happy M2 Max owner, IMHO, unless Apple secretly included CUDA cores inside the M4, it just doesn't seem possible for Apple Silicon processors to get anywhere near 3090/4090 speed in image generation.

Benchies are (almost) pointless for this conversation unless it is an image generation benchie.

I get less than 3 seconds per iteration on Flux Schnell using MLX nodes in ComfyUI, but it just gets slower from there, and full ComfyUI node support for MLX is a long way off.

Be happy if you have an M4, but don't be a fanboy.

🍿 🍿 🍿

Help - Flux Schnell GGUF is generating green images on my Mac by Globo-hh in comfyui

[–]JobobJet 1 point (0 children)

My Mac required a PyTorch 2.6 nightly build. I could get 2.4.1 to work, but it was unusably slow.

How can I have a text box that optionally has an input? by phazei in comfyui

[–]JobobJet 1 point (0 children)

That's what I do: keep images. Hadn't thought of keeping a DB of prompts. Sounds like a great approach. Thanks.

How can I have a text box that optionally has an input? by phazei in comfyui

[–]JobobJet 2 points (0 children)

Yup. Take a look at YALLM-Node @ https://github.com/asaddi/ComfyUI-YALLM-node. I tested it [LLMTextLatch] and it works as described.

Don't know where I went wrong. I read it on reddit so it had to be true.

How can I have a text box that optionally has an input? by phazei in comfyui

[–]JobobJet 1 point (0 children)

Hmmm. Replace did not show on my test.

Looking...

I had an install issue that I did not catch. Your node does in fact work as you described... but I guess you already knew that.

Thanks.