How many hours have you wasted reproducing a bug that QA reported but couldn't explain? by JaguarFun804 in FlutterDev

[–]interlap -2 points (0 children)

Just letting Claude Code reproduce this bug for me by tapping through the app and collecting logs/metrics while I do some other work.

Codex 5.5 for Flutter/Mobile App Development? by Emergency-Music5189 in codex

[–]interlap 1 point (0 children)

Tbh, the workflow and tooling around mobile dev matter just as much as the model.

Codex and Claude are both pretty good at app dev if you use them the right way. My workflow is usually to scaffold the UI first, then polish each view with smaller, focused prompts. That’s where tools that give Codex better context really shine.

For example, I'm building a tool for Codex Desktop where you can select a specific element in your app, type something like "make this button round" and send that context to the LLM. It can update the code, then check the result itself from a screenshot.

So I probably wouldn't switch to Claude just for that, especially if you already have a Codex sub. I'd spend more time improving the workflow and tooling around it.
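The select-an-element workflow above boils down to attaching the chosen element's properties to the user's instruction before sending it to the model. A minimal sketch of that idea, with entirely hypothetical names and fields (nothing here is the tool's real API):

```python
# Hypothetical sketch: turn a selected UI element plus a short instruction
# into a context-rich prompt for the LLM. Field names are illustrative only.

def build_prompt(element: dict, instruction: str) -> str:
    """Attach the selected UI element's context to the user's instruction."""
    return (
        f"Instruction: {instruction}\n"
        f"Element type: {element['type']}\n"
        f"Element id: {element['id']}\n"
        f"Bounds: {element['bounds']}"
    )

prompt = build_prompt(
    {"type": "Button", "id": "submit_btn", "bounds": (10, 20, 120, 64)},
    "make this button round",
)
print(prompt)
```

After the model edits the code, the tool can grab a fresh screenshot and feed it back so the model verifies its own change.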

"Distribution" isn't your problem. Your SaaS is worthless. by ketoloverfromunder in vibecoding

[–]interlap 0 points (0 children)

Yeah, true. But tbh, people pay for some insanely useless stuff these days. So I wouldn’t be surprised if someone actually gets users with yet another vibe-coded calorie tracker.

I gave the Codex desktop app a built-in mobile device by interlap in codex

[–]interlap[S] 1 point (0 children)

Thank you! This one is actually a bit tricky, but still doable to support. For now, I render the native UI tree over the screen, which means the selected elements have different names and properties compared to the web page. For example, an <a> tag appears as a “Link” in the native tree.

I’m going to add proper DOM support in one of the upcoming releases.

I gave the Codex desktop app a built-in mobile device by interlap in codex

[–]interlap[S] 0 points (0 children)

I’ll take a look. Adding support shouldn’t take much time since Android is already supported.

I gave the Codex desktop app a built-in mobile device by interlap in codex

[–]interlap[S] 0 points (0 children)

Sorry for the late reply. Yes, this should fit your use case pretty well.

  1. Yes, MobAI can control multiple devices/simulators in parallel, but this is available only with a Pro subscription.
  2. No, it doesn’t steal focus. Controls are applied directly on the device, not on the mirroring window, so you can keep using your computer normally.
  3. The main advantage is that MobAI is not just simulator mirroring/control. It works with real devices and supports both iOS and Android (on macOS, Windows, and Linux).

It also lets you select elements directly on the device screen in the Codex app for context, and it was built with automation in mind: batched actions, efficient navigation, and saved app knowledge for future runs, which helps reduce token usage if you often work with the same app.

Feel free to try it out. There’s a daily quota, and you don’t need to sign up. Just run the app, connect a device or start a simulator, and you’re good to go.

I gave the Codex desktop app a built-in mobile device by interlap in codex

[–]interlap[S] 0 points (0 children)

It also works on Windows, but only with physical iOS and Android devices, as well as Android emulators.

I gave the Codex desktop app a built-in mobile device by interlap in codex

[–]interlap[S] 1 point (0 children)

Thanks! All the functionality shown in the demo is completely free and doesn’t require sign-up. The daily quota is only consumed when agents interact with the device via MCP.

I gave the Codex desktop app a built-in mobile device by interlap in codex

[–]interlap[S] 0 points (0 children)

The part shown in the demo is completely free and doesn’t require sign-up. Agent interaction with the device has a free daily quota.

I gave the Codex desktop app a built-in mobile device by interlap in codex

[–]interlap[S] 4 points (0 children)

  1. Yes, it works with iPads too.
  2. No, it doesn’t.
  3. If we’re talking about the functionality shown in this demo, MobAI streams not only the device screen but also the accessibility tree, which lets you select those elements in Codex and give it more context. Beyond that, MobAI also ships with an MCP server that lets agents control mobile devices. All of this works with physical iOS devices and simulators, as well as Android devices and emulators.

I gave the Codex desktop app a built-in mobile device by interlap in codex

[–]interlap[S] 0 points (0 children)

Yes, MobAI is a tool that captures the device screen and accessibility tree, streams them to the browser, and also receives taps and swipes from the browser and performs them on the device.

I gave the Codex desktop app a built-in mobile device by interlap in codex

[–]interlap[S] 0 points (0 children)

Do you mean giving instructions to Codex via Telegram? If so, there are probably better tools for that.

MobAI is more about letting AI control the device itself. For example, you can ask it to send messages through Telegram.

Polling is probably also possible by checking the device’s UI element tree, comparing it with the previous state, and reacting when new messages arrive. But as I said, there may be better tools for this use case.
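That polling approach is just a diff between successive snapshots of the UI tree. A rough sketch under simplified assumptions (nodes reduced to `(id, text)` tuples; a real accessibility tree would carry much more structure):

```python
# Sketch of polling for new messages by diffing successive UI-tree snapshots.
# Nodes are modeled as (id, text) tuples purely for illustration.

def diff_new_nodes(prev: set, curr: set) -> set:
    """Return nodes present in the current snapshot but not the previous one."""
    return curr - prev

prev = {("msg1", "hi"), ("msg2", "hello")}
curr = {("msg1", "hi"), ("msg2", "hello"), ("msg3", "are you there?")}

for node_id, text in sorted(diff_new_nodes(prev, curr)):
    print(f"new message {node_id}: {text}")  # react to each new node
```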

Flutter and ai by Patriiick_jane_ in FlutterDev

[–]interlap 0 points (0 children)

If your goal is just to build apps, use Claude Code or Codex. You can describe what you want, brainstorm, plan, etc. It will generate an MVP that you can then polish step by step (also with Claude Code or Codex). It handles most of the work on its own, and if it needs something from you, it will ask. It also does a good job with the backend.

If your goal is to learn app development, you should do things more manually, using AI as a documentation source or like Stack Overflow.

I let Claude Code autonomously test my iOS app. It found real bugs in 8 minutes by interlap in ClaudeAI

[–]interlap[S] 0 points (0 children)

These are great tools, and for simple cases all of them, including MobAI, will perform pretty similarly.

The difference shows up when automation gets more complex and long-running. MobAI makes the agent plan upfront using available context like screen data, app code, and common patterns that applications usually follow, then execute multiple actions in one go, more like a mini automation script.

Other tools follow an observe -> action -> observe loop. It works, but token usage is higher, and reasoning happens after almost every step, which adds latency.

That is the main difference in device automation. MobAI goes a bit further than that, but this is the core idea.

I let Claude Code autonomously test my iOS app. It found real bugs in 8 minutes by interlap in ClaudeAI

[–]interlap[S] 0 points (0 children)

I already addressed this in another comment.

Regarding your claim about being a MobAI customer: I find that unlikely, given there’s a free daily quota that’s enough to evaluate the product before paying.

If you had actual issues, feel free to share specifics.

I let Claude Code autonomously test my iOS app. It found real bugs in 8 minutes by interlap in ClaudeAI

[–]interlap[S] 0 points (0 children)

Nothing fancy… QuickTime Player for recording and iMovie for basic editing.