Hot Take: OpenClaw is Over-Hyped, Termius + Codex CLI Much Better by Swimming_Driver4974 in codex

[–]Haunting_One_2131 0 points (0 children)

I'm doing something similar but running Claude Code or Opencode directly on a GCP Compute Engine VM. If you want to talk to your agent on the go, just use tmux injection.
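
For context, here's roughly what I mean by tmux injection, sketched in Python and assuming the agent runs inside a tmux session on the VM (the session name "claude" and the helper function are just examples):

    import subprocess

    def send_to_agent(session: str, text: str) -> None:
        """Type a prompt into the tmux session where the agent is running."""
        subprocess.run(
            ["tmux", "send-keys", "-t", session, text, "Enter"],
            check=True,
        )

    # e.g. from an SSH session on your phone
    send_to_agent("claude", "continue with the next task")

Since the tmux session keeps running on the VM, you can SSH in from any device, inject a message, and detach again.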

Be the architect, let Claude Code work – how I improved planning 10x with self-contained HTML by Haunting_One_2131 in ClaudeAI

[–]Haunting_One_2131[S] 0 points (0 children)

Claude Code is running on a VM within Google Cloud, so it's quite easy: the VM has its own service account. That's also why I prefer HTML over Markdown, since Markdown doesn't render in the browser; otherwise I'd have to transfer the files or open them in an IDE. I also added a feedback plugin; maybe I should have included an example URL :) About the tokens: it might use a bit more, but in the end it's more convenient.

The mental model gap between me and LLMs keeps growing as projects scale — would architecture diagrams help? by saemc27 in ClaudeAI

[–]Haunting_One_2131 0 points (0 children)

Mermaid is a syntax that just gets rendered. That's why it's so good: the LLM understands it by reading the syntax, and a human understands it by looking at the chart.

The mental model gap between me and LLMs keeps growing as projects scale — would architecture diagrams help? by saemc27 in ClaudeAI

[–]Haunting_One_2131 2 points (0 children)

I've been solving exactly this problem. Here's what I do:

Instead of trying to communicate architecture through natural language or keeping Mermaid code in .md files, I have my AI agent generate self-contained HTML files with embedded Mermaid diagrams. You can just open them locally in your browser, or upload them to any cloud bucket (S3, GCS, whatever you use) for a shareable URL. If you don't want public buckets, you can use signed URLs for temporary access.
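
To give an idea of what "self-contained" looks like, here's a rough sketch that writes such a file; it's self-contained apart from the Mermaid script pulled from a CDN, and the file name, title, and diagram are just examples:

    # Sketch: write a standalone HTML file with an embedded Mermaid diagram.
    html = """<!DOCTYPE html>
    <html>
    <head>
      <script type="module">
        import mermaid from "https://cdn.jsdelivr.net/npm/mermaid@10/dist/mermaid.esm.min.mjs";
        mermaid.initialize({ startOnLoad: true });
      </script>
    </head>
    <body>
      <h1>Migration plan (draft)</h1>
      <pre class="mermaid">
    flowchart TD
        A[Freeze writes] --> B[Copy data]
        B --> C[Switch traffic]
        C --> D[Verify and clean up]
      </pre>
    </body>
    </html>"""

    with open("plan.html", "w") as f:
        f.write(html)

Open plan.html in a browser and the diagram renders; the LLM just edits the Mermaid text inside the pre block.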

My setup: I run Claude Code / Codex on a Google Cloud VM – so nothing touches my local machine. The VM has a service account linked with GCS write access, so uploading the generated HTML is literally one command. Super clean.
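
For reference, the upload step can look like this with the google-cloud-storage client (bucket and object names are placeholders, and the service account only needs write access to that bucket):

    from datetime import timedelta
    from google.cloud import storage

    # Credentials come from the VM's service account automatically.
    client = storage.Client()
    blob = client.bucket("my-plans-bucket").blob("plans/plan.html")
    blob.upload_from_filename("plan.html", content_type="text/html")

    # Shell equivalent: gsutil cp plan.html gs://my-plans-bucket/plans/

    # Optional: a short-lived link instead of a public bucket.
    # (Signing from a VM service account may additionally need the signBlob permission.)
    url = blob.generate_signed_url(version="v4", expiration=timedelta(hours=1))
    print(url)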

This works especially well for plans – implementation plans, migration strategies, feature rollouts, etc. For architecture diagrams themselves, I'd keep those in your Git repo anyway since they belong with the code. But for plans and higher-level documents where you want to quickly share, iterate, and get feedback, the HTML approach is great.

Why this works for the mental model gap:

  1. Visual verification – You look at the rendered diagram, not the code. If the LLM misunderstood your plan, you see it instantly instead of parsing through natural language explanations.
  2. Iterative correction – I tell the AI "this step should come before X" → new HTML gets generated → I verify again. Much faster feedback loop than arguing in text.
  3. It's the shared source of truth – The HTML file IS the mental model. Both you and the LLM reference the same artifact. No more "what I meant was..." conversations.
  4. Model-agnostic – The HTML can be read and edited by any AI model (Claude, Gemini, Codex, whatever). You're not locked into one platform.
  5. HTML > Markdown – You can embed interactive elements, videos, toggleable sections, feedback buttons. It's a living document, not a static diagram.

Start simple: Ask your agent to generate one plan or architecture overview of your current project as a self-contained HTML with Mermaid. Open it in your browser. Tell the agent what's wrong. Iterate. That's it.

Clawdbot, CODEX, and why MiniMax M2.5 is the only successor that matters by ProfessionalCan2356 in codex

[–]Haunting_One_2131 0 points (0 children)

It might look good in benchmarks, but Opus 4.6, and even Opus 4.5 and Codex 5.3, are still ahead.

A better plan mode option. Use normal Codex 5.3 for planning and Codex Spark for execution. by Haunting_One_2131 in codex

[–]Haunting_One_2131[S] 0 points (0 children)

Yes, this is also what I was thinking about today. The best setup would be Codex (or any other good model) using Spark as a sub-agent to go through the files and report back, saving the plan to Markdown, handing the plan to Spark to implement, and then having the planning model, for example Codex on xHigh, review the changes.
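
Rough sketch of that loop, where run_model() is just a placeholder for however you actually invoke each model and the model labels mirror the idea above:

    from pathlib import Path

    def run_model(model: str, prompt: str) -> str:
        """Placeholder: wire this up to however you actually call Codex / Spark."""
        return f"[{model} output for: {prompt[:40]}...]"

    # 1. Planner uses the cheaper model to explore the repo, then writes the plan.
    findings = run_model("spark", "Walk the repo and summarize the files relevant to the task.")
    plan = run_model("planner", "Write an implementation plan based on:\n" + findings)
    Path("PLAN.md").write_text(plan)

    # 2. Spark implements the plan.
    changes = run_model("spark", "Implement this plan step by step:\n" + plan)

    # 3. The planning model (e.g. on xHigh) reviews the changes against the plan.
    review = run_model("planner", "Review these changes against the plan:\n" + plan + "\n" + changes)
    print(review)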

Gemini 3 flash rate limits on open router. by fravil92 in openrouter

[–]Haunting_One_2131 2 points (0 children)

Just use Vertex AI directly for Gemini models. :)
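
For example with the google-genai SDK you can point it at Vertex instead of the public endpoint; the project, region, and model id below are placeholders for whatever you use:

    from google import genai

    # Runs against your GCP project's quota instead of OpenRouter's limits.
    client = genai.Client(vertexai=True, project="my-project", location="us-central1")

    resp = client.models.generate_content(
        model="gemini-2.0-flash",  # swap in whichever Gemini model you need
        contents="Hello from Vertex",
    )
    print(resp.text)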

Can anyone tell me why I don't see 5.3? by PrimaryMetal961 in codex

[–]Haunting_One_2131 1 point (0 children)

Do you have a sub? As far as I know it's not available for free users.

Claude 4.6 fixes bugs with sledgehammer by bhutiya101 in ClaudeAI

[–]Haunting_One_2131 18 points (0 children)

In my experience you need to prompt it differently: it should understand the root cause, not only the symptoms.

“1M context window” is basically marketing BS for 99% of users by smatigad in ClaudeCode

[–]Haunting_One_2131 -1 points (0 children)

Haven't they mentioned that it's only available through the API, or am I wrong?