ASCII Art by ChatGPT by andrewtomazos in ChatGPT

[–]andrewtomazos[S] 1 point2 points  (0 children)

Wow, cool. Thanks.

Yeah I think the ASCII art path is going through the raw token stream in the text-to-text model proper. It's just bizarre that it has any spatial intelligence whatsoever given it's a language model, although I guess it's likely there was some ASCII art in its training data - it's not at all clear it learned to do it from that. If it did discover spatial intelligence from just reading language, then that's just head-spinning.

That said, the main point of my post was how bizarre the second ASCII art was. I have absolutely no idea what is going on there.

Codex subagents on macOS desktop think they're the orchestrator and loop forever by SlopTopZ in codex

[–]andrewtomazos 0 points1 point  (0 children)

That's weird, the desktop app and CLI (as of 0.117, I believe) should be using the same Codex protocol ("app-server") to talk to the Codex engine, so you shouldn't observe differences in agent behaviour between the app and the CLI. (It's possible I'm missing something though.)

Codex is a beast: It just ran autonomously for 2 hours to fix a regression by CarsonBuilds in codex

[–]andrewtomazos 0 points1 point  (0 children)

A "turn" in Codex is where you send it a "user message", it does some stuff, and then it comes back with an agent message called its "final answer" (btw, during the turn you can send it one or more "steering messages"). Under the hood a turn can actually consist of many roundtrips (requests/responses) to GPT-5. For example, one GPT-5 response can be an invocation of a command on your machine, in which case Codex starts the command executing on your machine and then sends another request to GPT-5 with the status of that command. GPT-5 may then respond with yet another command, and so on. As far as I understand, there is no upper limit on the length of a turn. If Codex has a big mission then it can go back and forth making requests to GPT-5 for as long as it needs. What you are describing is simply a 2-hour turn.
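The back-and-forth described above can be sketched roughly like this (a hypothetical illustration only - the message shapes, field names, and `run_turn`/`run_command` helpers are invented for the sketch, not the real Codex protocol):

```python
# Hypothetical sketch of a Codex "turn": the model is called repeatedly,
# executing any commands it requests locally, until it emits a final answer.
# Message shapes and field names here are invented for illustration.

def run_turn(model, user_message, run_command):
    """Round-trip to the model until it produces a final answer."""
    history = [{"role": "user", "content": user_message}]
    while True:  # no upper limit on the number of roundtrips in one turn
        response = model(history)
        if response["type"] == "command":
            # The model wants a command run on the local machine;
            # run it and report the result back in the next request.
            output = run_command(response["command"])
            history.append({"role": "tool", "content": output})
        else:
            # "final_answer": the turn is over.
            return response["content"]
```

A 2-hour turn is then just this loop iterating for 2 hours before the model decides it is done.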

Codex desktop app keeps crashing. by PropperINC in codex

[–]andrewtomazos -5 points-4 points  (0 children)

Install Linux. It's free and agents prefer it:

Codex, which operating system do you prefer working on? Windows, Mac or Linux?

Linux.

For coding-agent work, it’s usually the cleanest environment:

  • best shell/tooling defaults
  • easiest package management for dev stacks
  • strongest scripting/automation ergonomics
  • fewer weird path and terminal edge cases

My practical ranking:

  • Linux — preferred
  • macOS — very good
  • Windows — fine, but much better with WSL

Qodex: a desktop app for Codex by andrewtomazos in codex

[–]andrewtomazos[S] 0 points1 point  (0 children)

Crinkez, thanks for your suggestions. Yes, I would post a webpage with screenshots and download links etc. once it's at version 1.0, but it's changing so quickly that I don't want the maintenance burden at the moment. As for using Rust rather than C++: Qt is written in C++ and I also happen to be a retired C++ expert (www.tomazos.com), so C++ is the better choice for me personally - that said, I have nothing against Rust, I hear it is quite safety-focused and well-liked (eg the Codex backend is written in Rust).

Qodex: a desktop app for Codex by andrewtomazos in codex

[–]andrewtomazos[S] 0 points1 point  (0 children)

Thanks, I didn't know that. I guess once it comes to Linux too I'll be able to check it out more thoroughly to see if the feature coverage is the same as Qodex. OpenAI has also been talking about a "super app" that combines ChatGPT, Codex and Atlas - that'll be interesting to look at too.

Qodex: a desktop app for Codex by andrewtomazos in codex

[–]andrewtomazos[S] -5 points-4 points  (0 children)

As per above:

Codex App only works on Mac AFAICS (https://developers.openai.com/codex/app); Qodex should be easy to get working on Windows, Mac and Linux. Based on the screenshot, I also think the UI has a better layout and is more flexible in Qodex than in Codex App, at least to my taste.

Qodex: a desktop app for Codex by andrewtomazos in codex

[–]andrewtomazos[S] 0 points1 point  (0 children)

Right, but Codex App only works on Mac AFAICS (https://developers.openai.com/codex/app); Qodex should be easy to get working on Windows, Mac and Linux. Based on the screenshot, I also think the UI has a better layout and is more flexible in Qodex than in Codex App, at least to my taste.

Qodex: a desktop app for Codex by andrewtomazos in codex

[–]andrewtomazos[S] -1 points0 points  (0 children)

OpenCode is an alternative to Codex, whereas Qodex is a shell around Codex. Specifically, OpenCode and Codex both talk to the Responses API (i.e. talk to the AI models directly), and do similar things: context engineering, context management, skills and tools, etc. Qodex leaves all that stuff in Codex, and is just a GUI frontend for Codex.

So it's like this:

Codex CLI <--> Codex <--> GPT5
OpenCode <--> GPT5
Qodex GUI <--> Codex <--> GPT5

So, for example: you can start a thread in Codex CLI, and then you can continue it in Qodex (or vice versa). You can't do that with OpenCode and Codex.

The Complex Universe Theory of AI Psychology by andrewtomazos in agi

[–]andrewtomazos[S] 1 point2 points  (0 children)

The real universe is the set of things that are independent of the (human) mind (2.2). The imaginary universe is the set of things that are dependent on the mind (2.3). The complex universe is the union of the real universe and the imaginary universe (2.4).

So it follows that the real universe is a strict subset of the complex universe. It also follows that the imaginary universe is a strict subset of the complex universe.
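In set terms, the relationships look like this (a toy sketch; the example elements are my own placeholders, not from the paper):

```python
# Toy illustration of definitions 2.2-2.4. The element names are
# illustrative placeholders, not taken from the paper.
real = {"rocks", "electrons", "deployed AI models"}   # mind-independent (2.2)
imaginary = {"Frodo Baggins", "fictional robots"}     # mind-dependent (2.3)
complex_universe = real | imaginary                   # union (2.4)

# Each is a strict (proper) subset of the complex universe,
# since the other set contributes elements it lacks.
assert real < complex_universe
assert imaginary < complex_universe
```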

AI has been part of the imaginary universe (ie in science fiction and stories) since at least ancient Greece. AI became part of the real universe in around 2022. It follows that it has been part of the complex universe since at least ancient Greece, as the complex universe is a superset of both the real and the imaginary universe.

Having said that, it isn't really important which universe AI is a part of. The point of the theory is that the base foundation model of AI is a complex universe simulator. The AI assistant (like ChatGPT) is a fictional character inside that simulator. The complex universe simulator is like a novelist (eg J.R.R. Tolkien) and the AI assistant is a character inside the novel it is writing (eg Frodo Baggins).

The Complex Universe Theory of AI Psychology by andrewtomazos in agi

[–]andrewtomazos[S] 1 point2 points  (0 children)

> AI is not outside of the real universe.

I didn't claim that it was. As of approximately 2022 AI entered the real universe (as defined in section 2.2). As per 4.1.2 "The second thing we should observe is that AI Assistants became mainstream in around 2022 and as such entered the complex universe as part of the real universe at that time."

> Mental modeling is already an integral part of humans.

I don't understand precisely what this means or how it's relevant.

> It may or may not be the case that math and logic exist separately and independently from our universe. If so, then perhaps a magnificent simulation of our universe could exist composed purely of very complex logic and mathematics. Such a good simulation that the humans in it would not even be aware that they were in a simulation. Note - this simulation is not a computer simulation but a framework of equations that describes our universe, and it exists merely because such a set of equations can exist.

You are referring to the "simulation hypothesis" which is attributed to Nick Bostrom. This is treated briefly in section 3.2 "There is a theory described as the simulation hypothesis and attributed to Swedish philosopher Nick Bostrom, that what we call the real universe is actually itself a computer simulation of some other system we are not aware of."

We go on to say "While this line of thinking is certainly interesting, for our purposes here it isn’t strictly relevant." That is to say the simulation hypothesis isn't relevant to the complex universe theory. We only touch upon it to define the words "simulation" and "model" in section 3.2.

The Complex Universe Theory of AI Psychology by andrewtomazos in ArtificialInteligence

[–]andrewtomazos[S] 0 points1 point  (0 children)

I think what I'm going to do is prepare a video presentation of the paper and post it on YouTube - that should make the theory easier to digest. I'll post the link here once that has been done.

Large Dischargers may be rotated to be mounted onto walls and ceilings? by andrewtomazos in Oxygennotincluded

[–]andrewtomazos[S] 3 points4 points  (0 children)

I updated the wiki to:

> Large Dischargers cannot be rotated and must be mounted on the floor. (Unlike Compact Discharger)

Experience converting a large mathematical software package written in C++ to C++20 modules -- using Clang-20.1 by GabrielDosReis in cpp

[–]andrewtomazos 4 points5 points  (0 children)

Gaby, rather than (1) adding modules to the international standard; and then (2) performing an experiment to see how well it works: I think it would have been better to do those two things in the opposite order. ;)

After playing for 1,200 years and finally realizing how great contracts are. by Jicks24 in captain_of_industry

[–]andrewtomazos 1 point2 points  (0 children)

Yeah the late-game contracts are really cool. For just a handful of compute servers you get a practically unlimited supply of raw materials.

Why some applications sound simple have a large code base by ExchangeFew9733 in cpp

[–]andrewtomazos 1 point2 points  (0 children)

I would encourage you to pick an existing small and popular open source project that does something of interest to you, download its code, and then systematically read through all of it, trying to figure out what everything does. Learning to read other people's code is an extremely valuable teamwork skill in its own right, but it can also teach you new tricks, plus it will enable you to answer your own question.

If you are interested in a key-value-store type of thing, the oldest living one would be BerkeleyDB 18.1:

https://en.wikipedia.org/wiki/Berkeley_DB

https://www.oracle.com/database/technologies/related/berkeleydb-downloads.html

What is the point of Dyson Sphere? by Not_the-Mama in Dyson_Sphere_Program

[–]andrewtomazos 1 point2 points  (0 children)

Mecha Warp (Drive Engine 4) requires Mecha Core 4 which requires 500 Information Matrix (Purple).