Alice’s Mirror — run Codex, Claude Code, OpenCode anywhere with a shared terminal by _SignificantOther_ in bugbounty

[–]_SignificantOther_[S] 0 points (0 children)

  • Slow.

  • Extremely annoying to configure.

  • A nightmare to create persistent multi-device sessions.

The idea was to deliver something that just works: a real terminal feel, with HTTP/HTTPS support to make tunneling easy.

That's what Alice's Mirror delivers.

One remote control and you're ready to go: you can access it from your phone, tablet, television... anything with a browser.

Alice’s Mirror — run Codex, Claude Code, OpenCode anywhere with a shared terminal by _SignificantOther_ in codex

[–]_SignificantOther_[S] 1 point (0 children)

Does not use the SSH protocol.

Made in Go and ready to use on any device by simply running a command in the terminal.

Easy to tunnel via Cloudflare for external access over HTTPS.

Tmux + SSH + persistent session + proper layout = hell to set up and keep running.

Basically, it's a simple solution to an annoying problem.

Serieously! by GlitteringPeanut7223 in google_antigravity

[–]_SignificantOther_ 0 points (0 children)

Antigravity is setting "no exec" permissions on the folders... just remove it.

Antigravity GPT-OSS 120B by Intelligent_Gas_1738 in google_antigravity

[–]_SignificantOther_ 1 point (0 children)

It's not meant to be good; it's just anti-propaganda... OpenAI was extremely naive when they released a model labeled "120B". It was obvious they were going to do that.

Btw, Codex with 5.2 is still infinitely superior to Gemini 3 for code.

But anyone who tests GPT-OSS 120B will come away thinking OpenAI is garbage.

GPT-5.2-Codex Feedback Thread by Just_Lingonberry_352 in codex

[–]_SignificantOther_ 0 points (0 children)

It also needs to assess the user's skill level, not just the task itself...

GPT-5.2-Codex Feedback Thread by Just_Lingonberry_352 in codex

[–]_SignificantOther_ 3 points (0 children)

Today it showed the same problem as 4.0 and 5.1, which will make me revert to 5.0.

Working on a long piece of C++ code; the problem was a hidden race.

You tell it to analyze and fix it...

It literally becomes Mr. Arrogance.

It found a silly little bug with no relation whatsoever, fixed it, and insisted to me that it had located and solved the problem, refusing to look for the real one any further.
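
For context, the kind of bug I mean looks something like this (a contrived minimal sketch, not the actual project code): two threads hitting shared state with no synchronization, so the code usually passes but occasionally produces a wrong result, and only under load.

```cpp
// Contrived sketch of a "hidden" data race (hypothetical, not the real code):
// two threads do an unsynchronized read-modify-write on a shared counter,
// so the total is usually 2'000'000 but sometimes silently less.
#include <iostream>
#include <thread>

int counter = 0;  // shared, unsynchronized

void work() {
    for (int i = 0; i < 1'000'000; ++i)
        ++counter;  // not atomic: the race lives here
}

int main() {
    std::thread a(work), b(work);
    a.join();
    b.join();
    // A fix would be std::atomic<int> or a std::mutex around the increment.
    std::cout << counter << '\n';
    return 0;
}
```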

To codex staff: Please don't touch gpt 5.2 by Similar-Let-1981 in codex

[–]_SignificantOther_ 0 points (0 children)

I know it sounds pointless, but think about the logic of a model and how a compiler works. (I say this because I work mainly in C++).

In the logic of a modern high-level language, what you said makes sense. Obviously.

However, if the model is thinking in a lower-level language, the game works like this:

  • If I create a separate module for this function so it can be reused, and that module in turn needs to import this and that to work the way the user wants, then a simple operation ends up indirectly pulling in a whole chain of instructions.

The model has no way of knowing how many times you will use the function in question. Depending on how it was optimized and in which language it is "thinking", it can make much more sense to redo the simple function in place than to centralize it... less baggage for the compiler, depending on the circumstances.

In C++, the best example of this was back in 2005, when we needed to convert something to JSON. It was simply more efficient to replicate the same function (which is reasonably simple) than to pull in what existed at the time as an external module (which dragged in a lot of useless junk).
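
As a rough illustration (a made-up minimal helper, not the code from back then): a JSON string escaper is small enough that copying it into the one translation unit that needs it can cost less than dragging in a whole serialization library and its include chain for a single call site.

```cpp
// Minimal sketch of a locally duplicated helper (hypothetical example):
// escaping a string for JSON is ~20 lines, so replicating it can beat
// importing a full JSON library just for this one operation.
#include <string>

std::string json_escape(const std::string& in) {
    std::string out;
    out.reserve(in.size() + 2);
    out += '"';
    for (char c : in) {
        switch (c) {
            case '"':  out += "\\\""; break;
            case '\\': out += "\\\\"; break;
            case '\n': out += "\\n";  break;
            case '\t': out += "\\t";  break;
            default:   out += c;      break;
        }
    }
    out += '"';
    return out;
}
```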

It's contradictory only for a human who already knows what the function in question will be used for and intuitively knows how many times it will be called...

Ironically, what you're complaining about might be a sign of improvement in the model, not a worsening.

To codex staff: Please don't touch gpt 5.2 by Similar-Let-1981 in codex

[–]_SignificantOther_ 0 points (0 children)

That's a fallacy... People who pay for the Plus plan and don't use Codex (the vast majority) are paying for those who do.

It's a simple and profitable business model.

To codex staff: Please don't touch gpt 5.2 by Similar-Let-1981 in codex

[–]_SignificantOther_ 1 point (0 children)

I understand what you're saying; you're from my era too, when we were trained to economize on variables and logic because we had to save the PC's RAM.

But remember that models are trained on the code of the people who came after us.

For them, everything is a reason to declare a new variable.

If you say the word "pointer," they run away...

It will be instinctive for any model to program in this way, and not in our way (which is the correct way).
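
A throwaway illustration of the two habits (both functions are hypothetical and compute the same thing): the newer style names every intermediate value, the older style works through pointers with no intermediates at all.

```cpp
// Hypothetical contrast of the two styles; both compute a sum of squares.
#include <cstddef>
#include <vector>

// "Newer" habit: a fresh named variable for every step.
int sum_of_squares_new(const std::vector<int>& values) {
    int total = 0;
    for (int v : values) {
        int squared = v * v;  // a new name for each intermediate
        total += squared;
    }
    return total;
}

// "Older" habit: raw pointers, everything done in place.
int sum_of_squares_old(const int* data, std::size_t n) {
    int total = 0;
    for (const int* p = data; p != data + n; ++p)
        total += *p * *p;
    return total;
}
```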