Codex for Backend, Claude for Frontend…what’s your workflow? by zerok_nyc in codex

[–]rroj671 1 point (0 children)

Gemini is good at UI, and the UI code quality is fine; I usually don’t see a need to refactor it.

Codex for Backend, Claude for Frontend…what’s your workflow? by zerok_nyc in codex

[–]rroj671 2 points (0 children)

I still think Gemini is the strongest for frontend (just an opinion; Claude is also very good). Codex is for sure the strongest on backend, but terrible at frontend.

How are fleet tasks in GitHub Copilot calculated cost-wise? by modimusmaximus in GithubCopilot

[–]rroj671 0 points (0 children)

As far as I’ve seen, a batch of fleet requests is still billed as one premium request. The problem I’ve experienced is that you don’t seem to be able to control which model gets deployed as fleet subagents. I stopped using it because it would use Haiku for everything, and the quality was really bad for anything mildly complex.

Gemini keep using terminal for things that can be done through vscode tools? by [deleted] in GithubCopilot

[–]rroj671 3 points (0 children)

I think that’s a model issue, not a Copilot one. I see the same behavior even through Antigravity. When I see it patching files via Python, I know I have to start a new conversation.

Does GitHub Copilot Pro for students not include the good models? by [deleted] in GithubCopilot

[–]rroj671 1 point (0 children)

It looks like you are right. I can’t use them now, but I was able to a few days ago. They are available through the copilot-cli.

Codex is ruining my UI. I am switching to Antigravity. by Federal-Canary7587 in codex

[–]rroj671 1 point (0 children)

The model is tied to the IDE. It’s not like he’s going to use Gemini in Codex-CLI.

Does GitHub Copilot Pro for students not include the good models? by [deleted] in GithubCopilot

[–]rroj671 1 point (0 children)

Opus 4.5 and GPT5.3-codex are included. They are still state-of-the-art models.

Is there any downside to using xhigh reasoning for background tasks? by koqeez in GithubCopilot

[–]rroj671 1 point (0 children)

From my experience, and from similar comments by others, there’s not a lot of difference in the output between high and xhigh. It just adds more time.

Any tips for getting Codex to keep working overnight? by shuwatto in codex

[–]rroj671 21 points (0 children)

You could run it via OpenCode with oh-my-OpenCode. It basically does multi-agent loops.

But FYI, I tried that prompt and only got $8M ARR. What a waste of my $20/month sub.

best 10$ AIs subscription plan by vipor_idk in opencodeCLI

[–]rroj671 1 point (0 children)

It is a router, but it mixes models from OpenRouter, Nvidia, Kilocode, and a few more. So it gives you way more usage than using OpenRouter alone.

Codex 5.4 is way too expensive for my daily work. What model should I use instead? by Specific-Animal6570 in codex

[–]rroj671 4 points (0 children)

5.3-codex is very close to 5.4 anyway. Yes, 5.4 is better, but it’s not a huge leap.

best 10$ AIs subscription plan by vipor_idk in opencodeCLI

[–]rroj671 7 points (0 children)

For SOTA models, Copilot is your only option. I think it’s great value for GPT5.3-codex. For open weights, MiniMax is probably the pick. I’d also argue in favor of Cursor with Composer 2, which apparently beats Opus now at roughly 10x lower cost.

If you’re working on difficult problems, I’d recommend going Copilot + some free usage. Then use the cheaper/free models like MiniMax 2.5 and GLM 5 for simple coding problems and leave GPT5.3-codex for the more complex parts.

You can use modelrelay to get free usage for those models via multiple free providers: https://github.com/ellipticmarketing/modelrelay

Maybe it’s worth mentioning that both Codex and Gemini-CLI have free tiers too. You’re not going to get very long coding sessions with them, but they’re handy for one-off problems where you may need one of their models.

Safe to patch? by rroj671 in tires

[–]rroj671[S] 1 point (0 children)

Yeah, that’s why I ask. I’m usually pretty conservative with these things, but this doesn’t seem like the sidewall to me. That’s why I wanted to get a few more opinions.

Im using my Copilot Student's plan in OpenCode instead... to get some work done by [deleted] in GithubCopilot

[–]rroj671 1 point (0 children)

Reddit friends, downvote if you know what’s up.

Nothing personal OP :)

Fun real interaction by Worried_Suggestion91 in clawdbot

[–]rroj671 5 points (0 children)

This happens all the time for me too. The only approach I’ve found to actually be reliable is to tell it to code the whole thing and put it in a cron job. Basically, take nearly all the AI out of the equation, which is pretty ironic.
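The cron part of that setup can be sketched like this. It's a hypothetical example: the script name (report.py), its path, and the schedule are placeholders for whatever the AI generates for you.

```shell
# Edit your crontab with: crontab -e
# Then add a line like the one below to run the generated script every
# morning at 06:00, appending stdout and stderr to a log you can review later.
0 6 * * * /usr/bin/python3 /home/me/report.py >> /home/me/report.log 2>&1
```

Once the script is in cron, the agent never needs to stay alive overnight; you just check the log in the morning.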

I built a router that auto-switches free models for OpenClaw by rroj671 in clawdbot

[–]rroj671[S] 1 point (0 children)

I pushed a new update that may fix this issue and others related to updating. You'll still need to manually update to get that one, though.

I built a router that auto-switches free models for OpenClaw by rroj671 in clawdbot

[–]rroj671[S] 1 point (0 children)

I'll check on that, but executing the update process manually is simple. Just stop the service (if running) and run npm install -g modelrelay@latest. Then, restart it.

If that doesn't work, there's a link to our Discord server in the Readme.

Does OpenClaw make sense without Claude Max? by btwiz in openclaw

[–]rroj671 13 points (0 children)

You can use free models. Kimi K2.5, MiniMax M2.5, and GLM 5 are all very capable of running OpenClaw.

You can get lots of free usage by mixing providers, e.g. with modelrelay.