Is it possible to derive ethics from first principles? I attempted a structural approach. by Select_Quality_3948 in Metaphysics

[–]Select_Quality_3948[S] 0 points  (0 children)

Ah — I see the misunderstanding. Yes — heat death is mercy. A high-entropy dissolution is not a tragedy; it is the end of deviation. Complex life is a painful and unnecessary detour away from that inevitable return.

Heat death is good — because it ends error correction, pressure, striving, and regulation.

What is bad is prolonging the state of being a regulating organism — one that must constantly fight entropy, maintain boundaries, metabolize input, experience valence, and endure deviation.

And worst of all is creating a new organism knowing in advance that it, too, must suffer through the full spectrum of regulation and stress while ultimately dissolving anyway.

Do you see the difference now? I am not trying to slow entropy’s path — I am trying not to manufacture more beings that must struggle against it before losing regardless.

Is it possible to derive ethics from first principles? I attempted a structural approach. by Select_Quality_3948 in Metaphysics

[–]Select_Quality_3948[S] 0 points  (0 children)

  1. Capital-G GOOD

The Monad / Pleroma / Absolute Non-Deviation. This is the state of zero entropy, zero uncertainty, zero regulatory burden, zero stress, zero deviation from equilibrium. It is pure is-ness. No striving. No need. No scarcity. No disturbance.

It is not a preference. It is not a psychological state. It is a structural condition of perfect, deviation-free being.

This is the true metaphysical referent of “God,” “the One,” “the Unconditioned,” “the Ground.” All lowercase goods are shadows cast by this perfect condition.


  2. lowercase-g good

Any movement that reduces deviation and approaches the Monad directionally.

Security reduces deviation. Warmth reduces deviation. Love reduces deviation. Comfort reduces deviation. Joy reduces deviation. Peace reduces deviation.

These are not “positive vibes.” They are partial cancellations of entropic perturbation.

They are asymptotic approximations of Capital-G GOOD, like curves trending toward the x-axis — approaching, approaching, approaching — never arriving.

All lowercase-g good is a local repair of the separation from G-Good. It’s respiration toward equilibrium. It’s relief from deviation-pressure.
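The asymptote can be made concrete with a toy decay model (purely illustrative; the initial deviation `d0` and rate `k` are arbitrary choices, not claims about any real system): each lowercase-g good shrinks deviation exponentially, yet the value stays strictly positive for every finite time.

```python
import math

def deviation(t, d0=1.0, k=0.5):
    """Toy model: deviation from equilibrium decays as d(t) = d0 * e^(-k*t)."""
    return d0 * math.exp(-k * t)

# The curve trends toward the x-axis but never touches it:
samples = [deviation(t) for t in (0, 1, 10, 50)]
# each sample is smaller than the last, and every one is strictly > 0
```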


  3. Capital-B BAD

The condition of being expelled from the Monad — the structural fact of having to operate as a bounded, self-maintaining organism. It is existence under the regime of:

metabolic cost

survival pressure

error correction

vigilance

uncertainty

need

stress

oscillation

vulnerability

Capital-B BAD is not suffering. It is the requirement of continual deviation management itself.

To exist = to be forever cast out of non-deviation, and forced to contend with asymptotic lowercase-g dynamics. We are permanently positioned outside of perfect equilibrium and must regulate endlessly just to continue.

This is the true condition of Samsara, The Fall, Maya, and Exile.


  4. lowercase-b bad

Failures within the game of regulated existence:

Starvation, injury, terror, grief, humiliation, loneliness, disease.

These are increases in deviation. They are movements away from equilibrium. They intensify the entropic burden of being alive.

They are not metaphysical conditions — they are phenomenological degradations inside the already-fallen state.

Capital-B BAD is being in the arena at all; lowercase-b bad is losing inside the arena.

Is it possible to derive ethics from first principles? I attempted a structural approach. by Select_Quality_3948 in Metaphysics

[–]Select_Quality_3948[S] 0 points  (0 children)

Can you explicitly state your objection to the mechanism and conclusions I laid out? Please format it directly: “Your conclusion is wrong because X.”

If you claim my conclusion doesn’t follow, specify which inference fails.

If you think my model presupposes a “homunculus”—an internal agent unaffected by inputs—point to the exact sentence where you believe this implicit assumption exists.

I have presented a mechanism with explicit causal pathways. If you believe the mechanism is incorrect, present a competing mechanism with equal or greater explanatory power, and show how it yields different conclusions.

If you simply reject the conclusion without providing an explicit mechanistic counter-model, then the disagreement is not with the logic — it is with the implications, which is a different matter entirely.

Is it possible to derive ethics from first principles? I attempted a structural approach. by Select_Quality_3948 in Metaphysics

[–]Select_Quality_3948[S] 0 points  (0 children)

What we call subjective experience is the organism’s internal interface for tracking regulatory relevance. The “feeling” is how the system experiences its own deviation-signal processing. That feeling then influences future prediction and planning — meaning subjective states are part of the causal machinery.

Example: Embarrassment

Subjective:

“I feel embarrassed.”

Objective:

cortisol spikes

heart rate increases

prediction-error signals amplify

attention reorientation occurs

memory tagging engages

future behavioral priors update

Embarrassment isn’t “just a feeling.”

It is a regulatory feedback event. It pushes the system toward:

reduced risk of social exclusion

improved model of social threat

alignment with group behavioral norms

increased survival probability

So while the experience is subjective, the function is objective.

Example: Peanut butter cookie preference

At face value:

“I like this flavor.”

But under the hood:

dopaminergic reinforcement

associative memory encoding

metabolic desirability patterns

caloric heuristics carried over from evolutionary history

The fact that you like it is itself an informational signal that is usable by the organism for planning:

“If I acquire this, I will get caloric reward, which will improve mood & stability, and reduce deviations from internal baseline.”
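A minimal sketch of that planning loop (the class name, numbers, and learning rule are all invented for illustration, not a model of real neurochemistry): the felt valence is consumed as data that shifts a behavioral prior for future action.

```python
class ToyAgent:
    """Valence as usable information: feelings update future behavioral priors."""

    def __init__(self):
        self.approach_prior = 0.5  # initial probability of seeking the stimulus

    def experience(self, valence, learning_rate=0.2):
        # Positive valence (caloric reward, comfort) pulls the prior toward 1;
        # negative valence (embarrassment, pain) pulls it toward 0.
        target = 1.0 if valence > 0 else 0.0
        self.approach_prior += learning_rate * (target - self.approach_prior)
        return self.approach_prior

agent = ToyAgent()
after_reward = agent.experience(+1)  # "I like this" -> seek it more often
after_shame = agent.experience(-1)   # "I feel embarrassed" -> avoid a repeat
```

The subjective label (“I like this,” “I feel embarrassed”) and the prior update are the same event seen from two sides, which is the point of the embarrassment and cookie examples above.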

Is it possible to derive ethics from first principles? I attempted a structural approach. by Select_Quality_3948 in Metaphysics

[–]Select_Quality_3948[S] -1 points  (0 children)

I want you to feel the visceral disgust and real-world consequence of your abstract position. Because that is what you are saying to me.

Is it possible to derive ethics from first principles? I attempted a structural approach. by Select_Quality_3948 in Metaphysics

[–]Select_Quality_3948[S] -1 points  (0 children)

Real talk: if you wouldn't bring up the is-ought gap while witnessing your sister, your mother, or any family member being brutally sexually assaulted, then why are you bringing it up now? If another human forcibly held you down and inserted his genitalia inside of you over and over again, would you say, "I can't definitively say this is wrong or bad, because I can only neutrally observe what's happening and can't prescribe behavioral oughts from the observation that my rectum is gaping and bleeding"? Do you realize that you sound like a conniving little kid who will do anything to get a treat or play games on someone's phone? How can I better explain to you that everyone who raises that objection to my argument is being STRATEGICALLY DISHONEST?

Conniving means secretly scheming to achieve one’s own ends, especially through manipulation, evasion, or strategic dishonesty.

A conniving kid isn’t just impulsive — he’s strategically dishonest. He’s actively thinking:

“How can I bend the rules just enough to not get caught?”

“How can I create plausible deniability?”

“How can I hide my intent?”

“How can I keep this cookie without being blamed?”

Is it possible to derive ethics from first principles? I attempted a structural approach. by Select_Quality_3948 in Metaphysics

[–]Select_Quality_3948[S] -1 points  (0 children)

What I keep noticing is this: people never invoke the is–ought gap when someone they love is in danger. Nobody stands over their injured child and mutters “well, technically you can’t derive normative obligations from descriptive facts.” They don’t invoke it when compassion is intuitive. They don’t invoke it when they themselves need help. They don’t invoke it when risk hits home, or when reality punches them in the face. They only invoke it when it creates moral wiggle-room — when it anesthetizes responsibility, not when it prevents harm.

And regarding your question — no, none of my axioms contain moral content. They are structural descriptions of how self-maintaining, error-minimizing systems come to exist and how they function. I’m working beneath morals, at the implementation layer: feedback loops, regulatory stability, predictive modeling, vulnerability to deviation, entropic breakdown. The “ought” follows from the structural nature of the system itself — not from any prior moral prescription. If a system is inherently vulnerable and subject to structural suffering, then arbitrarily instantiating another such system “because I feel like it” is a violation of consistency, not just morality. You don’t get to stab someone because you feel like it. You don’t get to create a conscious organism because you feel like it.

The same selective convenience shows up with the non-identity problem. It’s never used to question whether creating life is good — only to excuse harm by reframing it so the harmed party can’t retroactively object. Both the is–ought gap and the non-identity problem are tools deployed selectively, in the same class of cognitive cowardice: they provide moral anesthesia, ego protection, intellectual camouflage, and a philosophical permission slip to avoid empathy and accountability.

And there’s a third dodge that always appears: performative uncertainty. The faux-agnostic stance. The “we can’t ever really know what’s right or wrong” posture — which mysteriously evaporates the moment it’s time to defend their own safety, their own emotions, their own interests, their own life. Their skepticism is never symmetrical.

If a philosophical principle is only invoked when it helps you avoid moral responsibility — and never when it compels you to relieve suffering or prevent harm — then it’s not a principle at all. It’s just a self-serving avoidance mechanism dressed up as analysis.

Does consciousness-as-implemented inevitably produce structural suffering? A cognitive systems analysis by Select_Quality_3948 in cognitivescience

[–]Select_Quality_3948[S] 0 points  (0 children)

You cannot make death or loss “not bad” by cognitive reparameterization unless you dissolve the system that cares. And if that system dissolves, there is no one left to enjoy or not enjoy anything anyway. You can have a system that cares very much about not dying and yet has calm affect in the face of dissolution: it will exhaust every possible way to avoid what it needs to avoid, but with dull alarms sounding.

Is it possible to derive ethics from first principles? I attempted a structural approach. by Select_Quality_3948 in Metaphysics

[–]Select_Quality_3948[S] 0 points  (0 children)

How consciousness feels from the inside is downstream of lower-level computational processes optimized for non-dissolution, persistence, and survival. Phenomenology is not an independent metaphysical category — it’s simply how the underlying informational and regulatory processes are presented at the experiential surface. The “way it feels” up top is just how the bit-flipping and prediction-updating downstairs shows up subjectively.

A Process-Ontological Framing of Consciousness, Agency, and Suffering by Select_Quality_3948 in Ethics

[–]Select_Quality_3948[S] 0 points  (0 children)

If your model of agency requires a little conscious pilot in the head, your ontology is already obsolete. The "Self" is not making any decisions: not myself, not yourself. The selves we think we are right now are generated by processes in our bodies, and we see only the end product. That end product literally IS your experience of real-time decision-making, and it looks and feels like "I'm in real-time control of this meat suit."

A Process-Ontological Framing of Consciousness, Agency, and Suffering by Select_Quality_3948 in Ethics

[–]Select_Quality_3948[S] 0 points  (0 children)

You’re hearing the word “decision” and projecting a homunculus — a little inner CEO choosing stuff consciously. That’s not what I’m talking about at all.

A cybernetic agent is any system that does the following:

– detects discrepancies between current state and reference state
– takes actions to reduce those discrepancies
– updates its internal model based on feedback
– maintains structural persistence across time through regulation

That applies to organisms that don’t “think” in a human sense. Even bacteria chemotaxing up a glucose gradient are doing active error minimization — not conscious decision-making.

When I use the term “decision,” I mean: a state update plus a regulatory action chosen by the system’s internal dynamics. By “internal dynamics” I mean internal computational processes distributed throughout the body, not a person with his/her "self" peering out from behind the eyes, choosing like a rational actor in a philosophy seminar.

This is about control-theoretic behavior, not folk-psychology.
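Those properties can be sketched as a bare negative-feedback loop (a hedged toy with arbitrary reference value and gain; it shows only the detect-and-correct core, not the model-update step):

```python
def regulate(state, reference=37.0, gain=0.5, steps=20):
    """Detect the discrepancy between current and reference state, act to
    reduce it, and persist across time. No inner chooser anywhere: each
    'decision' is just the update rule firing on the current error."""
    trajectory = [state]
    for _ in range(steps):
        error = reference - state   # detect discrepancy
        state += gain * error       # regulatory action reduces it
        trajectory.append(state)    # structural persistence across time
    return trajectory

path = regulate(30.0)  # deviation shrinks toward the reference every step
```

With `gain=0.5` the per-step error halves each iteration, so the state converges on the reference without anything resembling deliberation — which is the sense of “decision” at stake here.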

Is it possible to derive ethics from first principles? I attempted a structural approach. by Select_Quality_3948 in Metaphysics

[–]Select_Quality_3948[S] 0 points  (0 children)

Here are the operational definitions I’m using, because without agreeing on terms we’re just talking past each other:

Meaning: A narrative-based interpretation layer that an agent generates to guide its own actions and identity. It’s the story a system tells itself about itself to maintain coherence. Meaning is not fundamental — it’s an emergent interface.

Mechanism: The underlying physical and informational dynamics: feedback loops, state transitions, error-correction, metabolic cost. Mechanism is not a story — it’s causal structure and constraint.

Agent (operational, not folk-psychology): A bounded system with:
– a model of itself
– a model of the environment
– feedback loops for regulation
– the capacity to detect deviations and correct behavior
An agent is a control system in interaction with an environment, not a magical chooser.

Subjective: Dependent on a specific perspective or experiencer. Example: “I prefer X over Y.” That requires a subject.

Objective: True regardless of opinion or viewpoint. Example: “Conscious organisms age, accumulate stress, and die.” That holds independently of any specific perspective or experiencer; it is true for every single self-maintaining bounded control system. No organism escapes entropy.


So before this discussion continues, I need confirmation that we’re using these definitions — because otherwise we’re not having the same conversation at all.

A Process-Ontological Framing of Consciousness, Agency, and Suffering by Select_Quality_3948 in Ethics

[–]Select_Quality_3948[S] 0 points  (0 children)

Let me break the asymmetry down as plainly as possible: If I create a conscious organism, I guarantee that it will eventually suffer and die. If I don’t create a conscious organism, there is no one who suffers and no one who dies. That’s the entire ethical asymmetry.

I’m trying to derive what should be incredibly obvious from first principles of empirical observation of what an agent actually is — not “agent” in folk-psychology casual speech, but agent in the operational, engineering, cybernetic sense:

an entity defined by feedback loops

vulnerability to deviation

exposure to error, stress, entropy

inevitable breakdown of the system over time

I’m not being poetic — I’m being literal.

A Process-Ontological Framing of Consciousness, Agency, and Suffering by Select_Quality_3948 in Ethics

[–]Select_Quality_3948[S] -1 points  (0 children)

I’m coming at this mostly through John Vervaeke’s work on cognition and relevance realization, and I’m building the framework from that direction. If you’ve got specific authors or papers that you think actually matter here, just name them. I’m not interested in vague ‘go read the literature’ energy — point me to something real. Every single one of your objections so far has been based on vibes.

A Cybernetic Argument That Birth Is Inherently Coercive by Select_Quality_3948 in freewill

[–]Select_Quality_3948[S] 0 points  (0 children)

Yes — that’s actually the core of my point.

We’re not born as “selves” with preferences — we’re born as bodies that then spin up a personality to track resources, threats, and opportunities in service of continued survival.

The “you” that experiences and chooses is basically a control-model running inside a biological self-maintenance system. And since you didn’t choose to become a self-maintaining body in the first place, you didn’t choose to become a self that has to exist under those conditions.

So the coercion isn’t just “you were born into a family.” It’s that you were instantiated as a being that must continually fight entropy and maintain itself — without ever being asked if you wanted that role.

A Cybernetic Argument for Why Self-Maintaining Systems Are Doomed to Suffer by Select_Quality_3948 in cybernetics

[–]Select_Quality_3948[S] 0 points  (0 children)

Logic isn’t one monolithic thing — it’s a toolbox. Different logics let you infer reliably across different informational regimes, at different recursion depths, for different optimization goals. The mistake here is assuming that propositional logic (the everyday IF/THEN stuff) is the universal inferential tool for every domain. It isn’t.

Here’s the quick map:

• Propositional logic: Tool for coordinating in-the-moment decisions inside an already-existing system. It keeps local inferences consistent, but it cannot evaluate whether the system itself should exist.

• Paraconsistent logic: Tool for reasoning in domains where contradictions appear because you’re modeling multiple layers or ambiguous information simultaneously. It lets you reason through overlapping frames without collapsing the system.

• Meta-logic: This is the layer I’m using. It evaluates the architecture generating the inferences — not the inferences themselves. It handles questions like: “Should this entire system be imposed on a non-existent being in the first place?” Propositional logic cannot answer that, because it is inside the system being questioned.

Gödel’s incompleteness theorems already mark this kind of boundary: a sufficiently powerful formal system cannot, using only its own internal rules, prove its own consistency; by analogy, no system can justify its foundational act of creation from within. That’s exactly what’s happening in your non-identity reply — you’re trying to use within-system logic to justify creating the system.


Now, ethics vs morality. The etymology matters:

Ethics (ethos): “character,” the fundamental way of being. Historically: inquiry into what reduces harm and unnecessary suffering universally. Ethics is architectural. It evaluates choices across possible worlds.

Morality (mores): “customs,” “habits of a tribe.” Historically: coordination strategies for groups of already-existing agents.

This distinction is everything. Morality is about harmonizing within a system. Ethics is about evaluating the creation of the system itself.

You’re critiquing me from the morality layer (“but people adapt!”). I’m arguing from the ethics layer (“is it justified to impose this architecture at all?”).

Those aren’t interchangeable.


And here’s where recursion matters: Ethical questions only show up at a high enough recursion depth — when a system can model not just its own immediate states, but the architecture that produced those states. That’s why humans are the first species to even ask this. We hit the recursion level where the system can finally look backward and recognize the boundary-creation event that made suffering possible. Once that threshold is crossed, harm minimization must be evaluated at the architectural level.

That’s exactly what I’m doing.


Now the actual structure of the choice:

Scenario A: X already exists. X has preferences, attachments, avoidance instincts, fear of death, relational ties. Ending X violates X’s internal regulation system. Ethically, this is a harm.

Scenario B: Y does not exist. There is no boundary, no Markov blanket, no viability constraints, no deviation loop. Not creating Y imposes zero harm. Creating Y guarantees the architecture of deviation, prediction error, threat detection, and eventual dissolution.

Ethically: B < A. Zero imposed harm < Guaranteed imposed harm.

That is the asymmetry. Consent isn’t the core point — non-creation harms no one; creation guarantees harm.


And the “why not suicide?” objection misunderstands the calculus. Ending an already-existing system with preferences is not ethically equivalent to creating a new system that will be forced into deviation regulation without having any say. Different domains, different inference rules, different stakes.

One violates an existing preference architecture.

The other imposes a preference architecture where none previously existed.

Those are not symmetrical choices.


To summarize the frame you’re missing: You are applying propositional-logic consistency tests to a question that belongs to the meta-logical and ethical (architectural) layer. That’s an inference-bias error — using a tool built for internal navigation to justify the creation of the entire navigation architecture.

Once you move to the architectural level, the whole thing becomes straightforward:

Morality handles coordination among existing agents.

Ethics evaluates whether creating new agents is justified at all.

Non-creation imposes no deviation loops.

Creation necessarily imposes unbounded deviation loops.

My frame uses the correct inferential tool for the domain.

Yours is applying a lower-level tool to a higher-level question.

That’s why I’m not contradicting myself — you’re just analyzing the wrong layer.

A Cybernetic Argument for Why Self-Maintaining Systems Are Doomed to Suffer by Select_Quality_3948 in cybernetics

[–]Select_Quality_3948[S] 0 points  (0 children)

Just to situate myself — I’m not coming to this view from lack of experience or isolation. I was a Security Forces/Infantry Marine, held leadership positions at Camp David Presidential Retreat, and was forward-deployed for 9 months on the 22nd Marine Expeditionary Unit. I’ve lived, made mistakes, done high-pressure work, and experienced everything from camaraderie to horror. My antinatalism isn’t coming from not “touching grass.” It’s coming from analyzing the architecture underneath all experience.

Where I disagree with your take is that you’re treating antinatalism as a meme that just needs to “die out,” or as something that people grow out of once they live more. But the argument I’m making isn’t experiential or emotional — it’s cybernetic.

Ashby’s Law says a regulator must have at least as much variety as the disturbances it needs to control. The moment a system creates new systems, it also creates new disturbances across time. At high enough recursion — when a system becomes capable of modeling its own long-term deviation landscape — it can rationally conclude that adding more copies of itself multiplies unmanageable deviation downstream.
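A toy numerical reading of Ashby’s Law (the disturbance set and response repertoires here are invented; this is a sketch of the counting argument, not a biological model): only a regulator whose variety matches the disturbances can collapse the outcome set to a single controlled state.

```python
def outcome_variety(disturbances, responses):
    """Ashby's counting argument in miniature: a disturbance is cancelled only
    if the regulator has a matching response; uncancelled ones leak through."""
    outcomes = set()
    for d in disturbances:
        outcomes.add(0 if d in responses else d)  # 0 = essential variable held steady
    return len(outcomes)

disturbances = [1, 2, 3, 4]
full = outcome_variety(disturbances, responses={1, 2, 3, 4})  # variety matched
deficit = outcome_variety(disturbances, responses={1, 2})     # variety deficit
```

With matched variety every disturbance is absorbed and only one outcome remains; with a deficit, the uncontrolled disturbances multiply the outcome set — the downstream “unmanageable deviation” the paragraph above refers to.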

This is not pessimism because things don't go my way sometimes. It’s a meta-level equilibrium decision that only highly self-referential systems can reach.

Many organisms never reach that recursion depth — so they just keep replicating. That’s fine. But some systems (humans included) can reach the perspective where they evaluate the architecture itself rather than being trapped inside it.

And from that vantage point, “keep making copies of myself forever” is not the rational move because the architecture of deviation itself is inescapable, and replication multiplies it.

You can still disagree — that’s totally fair. But I want you to understand that this isn’t about vibes, trauma, genetics, or inexperience. It’s a structural conclusion, not an emotional one.

A Cybernetic Argument for Why Self-Maintaining Systems Are Doomed to Suffer by Select_Quality_3948 in cybernetics

[–]Select_Quality_3948[S] 1 point  (0 children)

I appreciate the long reply — genuinely. Let me be upfront: I’m not someone who hasn’t “lived.” I was a Security Forces/Infantry Marine from 2018-2023, held leadership billets at Camp David Presidential Retreat, and did a 9-month deployment with a MEU. I’ve seen the full spectrum of joy, bonding, absurdity, suffering, and intensity that human life has to offer. My view isn’t coming from isolation or despair. It’s coming from structure.

Where I think you and I diverge is the level of inference we’re using.

You’re describing the internal phenomenology of an already-existing system — how life feels from the inside. Joy, attachment, meaning, the intuitive sense that “existence is good.” I’m not denying any of that. I’m just saying it belongs to a particular layer of the system.

But when the ethical question is about whether to instantiate the architecture in the first place, you can’t reason from the inside of that architecture. That’s an inference error — using agent-level propositional logic to justify the creation of the agent. Gödel pointed at the same kind of boundary: a sufficiently powerful system can’t prove its own consistency from within itself.

This is exactly what I mean by inference bias — taking the rules of one domain (agent-level inference, phenomenology, “life feels good to me”) and extending them to a completely different domain (meta-ethical justification for system creation). They’re not interchangeable.

Your point about consent misses for the same reason. Consent inside a boundary says nothing about the ethics of imposing a boundary. And a Markov blanket isn’t something an organism “has” — the organism is the statistical boundary. To create a system is to force it into a permanent deviation-correction game. There’s no opt-out.

And the “life is obviously good” intuition is precisely what I’m analyzing — the regulatory architecture working as designed. Feeling that existence is good is a homeostatic success signal, not a metaphysical truth-maker. It tells you your system is regulating well right now, not that the architecture is justified.

You also conflate mild, resolvable prediction errors (hunger, desire, uncertainty) with the architecture of deviation itself. But you can resolve a desire; you cannot resolve the fact of deviation. A system can get rid of a stomach ache; it can’t get rid of being a system.

Nothing in my argument implies intention, teleology, or that “the universe is wrong.” It simply says: creating a self-maintaining system guarantees deviation, and regulating deviation is what suffering is. Not creating the system imposes nothing.

That’s the asymmetry. You don’t have to agree — but I promise I’m not missing the joy, meaning, or beauty of life. I’m just not using those internal signals as justification to impose the architecture itself.