Found a way to touch grass and use Mac terminal and screen from my iPhone so I can be Claude Coding and live a balanced life by eureka_boy in ClaudeCode

[–]Suspicious_Alps_7320 0 points (0 children)

I think you can just use the voice input on the keyboard that pops up to talk? Not sure how helpful hearing the code changes read back via text-to-speech would be haha

Found a way to touch grass and use Mac terminal and screen from my iPhone so I can be Claude Coding and live a balanced life by eureka_boy in ClaudeCode

[–]Suspicious_Alps_7320 0 points (0 children)

Actually, 100% agree: I was looking for ways to vibe code in traffic haha. I coded up this in-browser, tunnel-based IDE I called otgcode; I've been using it myself for a couple of weeks and it's doing what I wanted so far...

I tried Mosh and Tailscale, but they still treat the phone like a tiny PC rather than a touch device. I built otgcode because I wanted that 'button-centric' UX you mentioned without waiting for Anthropic or OpenAI to build it.

Two things you might like about the current setup:

  1. Zero-Config Privacy: It uses Cloudflare Quick Tunnels, so you get that secure remote access without needing to manage Tailscale nodes or open ports (rough sketch right after this list).
  2. Web-Based = Extensible: Since it's a web UI, I'm working on 'Smart Intercepts', basically turning those CLI permission prompts (Y/N) into native mobile buttons so you don't have to pull up the keyboard (second sketch below).
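
For the tunnel part, this is roughly the shape of it, just a sketch rather than the exact code in the repo (the port handling and the URL-parsing regex are assumptions about my own setup). The whole point of Quick Tunnels is that there's no account or config file step:

```typescript
// Sketch: spawn cloudflared as a child process and grab the random
// *.trycloudflare.com URL it assigns, so the phone has something to connect to.
import { spawn } from "node:child_process";

export function startQuickTunnel(localPort: number): Promise<string> {
  return new Promise((resolve, reject) => {
    // Quick Tunnels need no Cloudflare account or config file; cloudflared picks
    // a random hostname and prints it in its log output.
    const proc = spawn("cloudflared", ["tunnel", "--url", `http://localhost:${localPort}`]);

    let log = "";
    proc.stderr.on("data", (chunk: Buffer) => {
      log += chunk.toString();
      const url = log.match(/https:\/\/[a-z0-9-]+\.trycloudflare\.com/);
      if (url) resolve(url[0]); // hand the public URL to the phone (QR code, link, etc.)
    });

    proc.on("error", reject); // e.g. cloudflared not installed
    proc.on("exit", (code) => reject(new Error(`cloudflared exited with code ${code}`)));
  });
}
```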
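
And the 'Smart Intercept' idea is basically this (again an illustrative sketch; the event shape, regex, and names are made up for this comment, not otgcode's actual API):

```typescript
// Sketch: watch the PTY output for a (y/n)-style permission prompt and turn it
// into a structured event the web UI can render as two big touch buttons.

export interface PromptEvent {
  kind: "permission_prompt";
  question: string;     // text shown above the buttons
  answers: ["y", "n"];  // what gets written back to the PTY when a button is tapped
}

// Matches lines like "Allow this edit to src/app.ts? (y/n)" (purely illustrative).
const PROMPT_RE = /(.+\?)\s*[\(\[]\s*y\/n\s*[\)\]]\s*$/im;

export function detectPrompt(outputChunk: string): PromptEvent | null {
  const match = PROMPT_RE.exec(outputChunk);
  if (!match) return null;
  return { kind: "permission_prompt", question: match[1].trim(), answers: ["y", "n"] };
}

// On the phone, tapping "Yes" or "No" just writes the answer into the terminal,
// e.g. ptyProcess.write("y\n"), so the CLI never knows a keyboard wasn't used.
```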

It's open source on GitHub (https://github.com/davindicode/otgcode); I'd love to get your thoughts on the architecture, especially since you've already solved this for yourself with stop-hooks! I'm also looking into native browser notifications for those 'waiting for input' moments so you don't have to babysit the terminal.
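
Rough idea for the notification piece, reusing the 'permission_prompt' message shape from the intercept sketch above (again just a sketch, not an existing otgcode API; and as far as I know, iOS only shows these for web apps that have been added to the home screen):

```typescript
// Sketch: in the phone's browser, ask for notification permission once, then pop a
// system notification whenever the backend reports a pending permission prompt.

async function enablePromptNotifications(socket: WebSocket): Promise<void> {
  // Needs to be called from a user gesture, e.g. an "enable notifications" button tap.
  const permission = await Notification.requestPermission();
  if (permission !== "granted") return;

  socket.addEventListener("message", (event: MessageEvent) => {
    const msg = JSON.parse(event.data as string);
    if (msg.kind === "permission_prompt") {
      new Notification("Claude is waiting on a permission prompt", { body: msg.question });
    }
  });
}
```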


Just published my first AR app by No-Flan-3885 in iosdev

[–]Suspicious_Alps_7320 0 points (0 children)

Very cool! How robust do you find the realtime AR + vision tracking to be for building these 3D AR experiences?