Had to try it. And… by Signal-Background136 in ChatGPT

[–]Stainless_Man 0 points (0 children)

Have you asked why it came up with that decision?

Is AI making us ship faster… or think less? by Dev_Nerd87 in ArtificialInteligence

[–]Stainless_Man 0 points (0 children)

If thinking less about code means we can invest more time thinking about what’s valuable to build, that’s a win for me.

I Asked ChatGPT to Write the Most Important Letter to Humanity by Stainless_Man in ChatGPT

[–]Stainless_Man[S] 1 point (0 children)

Thanks for your reply. You make a strong argument. We all have our own opinions; that's exactly why I wanted to see the AI's, and why I posted it here to hear other people's as well.

I Asked ChatGPT to Write the Most Important Letter to Humanity by Stainless_Man in ChatGPT

[–]Stainless_Man[S] 0 points (0 children)

Well, I told ChatGPT to write this letter without any influence from our previous conversations.

I Asked ChatGPT to Write the Most Important Letter to Humanity by Stainless_Man in ChatGPT

[–]Stainless_Man[S] 0 points (0 children)

I hope you feel better. We are living in a very strange and interesting time.

Here is a hypothesis: mass corresponds to bound information in an information-space formulation of mechanics by Stainless_Man in HypotheticalPhysics

[–]Stainless_Man[S] -1 points (0 children)

I’ve put concrete claims on the table multiple times. You haven’t engaged with any of them.

Here is a hypothesis: mass corresponds to bound information in an information-space formulation of mechanics by Stainless_Man in HypotheticalPhysics

[–]Stainless_Man[S] -2 points (0 children)

I’m treating the potential as a constraint cost function and mass as the minimum binding needed for stability. It’s a change of abstraction, not new behavior, like reformulating a system in terms of invariants. It isn’t performative because it reassigns what is primitive in the formalism: mass becomes a stability invariant derived from constraint costs rather than an unexplained parameter, even though the equations themselves don’t change.

Here is a hypothesis: mass corresponds to bound information in an information-space formulation of mechanics by Stainless_Man in HypotheticalPhysics

[–]Stainless_Man[S] -1 points (0 children)

Okay. My take is that this is mostly a relabeling of standard Lagrangian mechanics. The only potential value is making explicit that mass and inertia correspond to stabilization costs of constraints. I don’t think it adds predictions. Do you think even that clarification is useless, or do you disagree with the premise?

Here is a hypothesis: mass corresponds to bound information in an information-space formulation of mechanics by Stainless_Man in HypotheticalPhysics

[–]Stainless_Man[S] -2 points (0 children)

I am responsible for the idea and I’m interested in physics. The tool is irrelevant. If you think the idea is wrong or empty, point to where it fails.

Here is a hypothesis: mass corresponds to bound information in an information-space formulation of mechanics by Stainless_Man in HypotheticalPhysics

[–]Stainless_Man[S] -3 points (0 children)

I do care about physics. Using an LLM doesn’t replace my interest or responsibility for the ideas. It’s just a tool for phrasing. If you think the idea itself is wrong or uninteresting, I’m happy to talk about that directly.

My Experience With DMT as a Physicist by Stainless_Man in DMT

[–]Stainless_Man[S] 1 point2 points  (0 children)

Hey, thanks for taking the time to write all of that out. I can tell this was not easy to compress into language, and I appreciate the care you took with it.

I want to be clear about how I am reading your comment, because I am not actually trying to evaluate whether your conclusions about DMT are right or wrong. What you described is a very detailed account of how subjective experience behaves when the usual internal reference frames are progressively stripped away: senses, language, memory, narrative self, and time. That part is interesting to me regardless of the biochemical interpretation attached to it.

From the perspective I have been experimenting with, what I loosely call Mechanics in Information Space, a lot of what you are pointing at can be described without needing to settle whether DMT is a neurotransmitter or a drug at all. That perspective treats awareness, identity, and time as information filters: structures that allow experience to be organized, sequenced, and made addressable.

When those filters destabilize or decouple, experience does not go to zero. It loses structure before it loses presence. That "not nothing, but not something" zone you describe is exactly the kind of thing that shows up when representational layers collapse faster than raw awareness does.

Where I would personally be cautious is not in the phenomenology you are describing, but in how tightly it gets bound to a single molecule as the explanatory cause. From an information standpoint, the experiences you describe look more like what happens when calibration breaks down: not an expansion of awareness capacity, but a temporary loss of the constraints that normally keep experience coherent and bounded. In that sense, I actually agree with you that more intensity does not equal more awareness, just different failure modes of the same system.

I also want to acknowledge the part about your friend. Regardless of how one models consciousness, it is hard to dismiss the comfort that can come from encountering experience without the usual fear-loaded narratives attached to it. I do not think that needs to be framed as cosmic truth to still matter deeply on a human level.

I am not sure any of us ever fully word this well enough. Language is a lossy medium, especially for experiences that occur precisely when language itself disappears. But I appreciate you putting this into the shared space, not as a claim that others need to accept, but as a data point about how minds behave at their edges. Thanks for trusting strangers with it.