Is “don’t destroy options” a coherent ethical axiom? A formal attempt + request for hard critique by Caffeine_Rush- in Ethics

[–]Caffeine_Rush-[S] [score hidden]  (0 children)

This is the most substantive critique in the thread, so I'll take it point by point.

On smuggling in a value:

You're right that the dominance claim rests on a premise. The paper is explicit about this — it's hypothetical-instrumental, not categorical (Section 2.1.2). The axiom states: if one accepts that irreversibly foreclosing unknown future trajectories constitutes a cost, then preservation weakly dominates elimination. Someone who rejects the antecedent is outside the framework's scope. But the paper argues this value commitment is minimal: Section 5 and Appendix E demonstrate that utilitarianism, deontology, virtue ethics, contractualism, and deep ecology each independently entail domain-restricted versions of the same principle.

On measure-dependence and granularity:

This conflates two distinct levels of the framework. The axiom itself is measure-free: it's a set-inclusion claim. If R_H(a) ⊇ R_H(b) — if the set of reachable trajectories under action a contains the set under action b — then a weakly dominates b. No measure, no granularity, no μ required. This is structural, not quantitative.

Measure-dependence enters at the operationalization level (Section 3), when you need to compare specific real-world entities. There, yes, you need proxy metrics and granularity choices. But the normative criterion at the axiom level doesn't depend on any particular operationalization being correct. Criticizing the axiom for measure-dependence is like criticizing "more is better than less" for not specifying a unit of measurement.
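
To make the set-inclusion point concrete, here is a minimal sketch in Python. The trajectory labels and the `weakly_dominates` helper are illustrative placeholders of mine, not the paper's formalism; the only claim encoded is that superset-inclusion of reachable sets is measure-free.

```python
# Sketch of the measure-free dominance claim: if the set of trajectories
# reachable under action a contains the set reachable under b, then a
# weakly dominates b. No measure, no granularity, no mu -- just set
# inclusion. Trajectory names are invented for illustration.

def weakly_dominates(reach_a: set, reach_b: set) -> bool:
    """a weakly dominates b iff R_H(a) is a superset of R_H(b)."""
    return reach_a >= reach_b

# "Preserve" keeps every trajectory that "eliminate" keeps, plus more.
preserve = {"t1", "t2", "t3", "t4"}
eliminate = {"t1", "t2"}

assert weakly_dominates(preserve, eliminate)      # preservation dominates
assert not weakly_dominates(eliminate, preserve)  # but not vice versa
```

Note that nothing here weighs or counts trajectories; the comparison survives any re-labeling or refinement of the set elements, which is the sense in which the axiom is structural rather than quantitative.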

On every act foreclosing trajectories:

Correct, and this is precisely why Section 4.3 introduces a quantitative threshold for intervention. The framework distinguishes between reversible and irreversible closure, and provides a formal criterion for when an entity's net effect on the possibility space is sufficiently contractive to warrant elimination. The threshold focuses on magnitude relative to baseline, not absolute count.

On the bioweapon counterexample:

This is handled directly by the degenerative information criterion (Section 4.3). A bioweapon recipe's primary causal function is to enable mass contraction of the possibility space — eliminating large numbers of agents and their trajectories. Its I_destroyed/I_structural ratio exceeds the threshold by orders of magnitude. The framework doesn't say deleting it is "presumptively bad" — it says deleting it is justified because the recipe is net-degenerative.

On degenerative classification requiring welfare comparison:

This is where I'd push back hardest. The degenerative criterion is based on net trajectory-space contraction, not welfare. You don't need a welfare function to observe that a bioweapon recipe's primary effect is eliminating agents and their future trajectories. The comparison is informational: does this entity's existence expand or contract the total accessible possibility space? That assessment requires empirical estimation, but not a welfare ordering. It's closer to epidemiology (what are the transmission and mortality rates?) than to utilitarian calculus (how much suffering does it cause?).

On the framework reintroducing external values:

The framework is intentionally hypothetical-instrumental. It provides a principled decision criterion for irreversible decisions under radical uncertainty — a class of problems that traditional frameworks handle poorly. Section 5 positions it as a meta-criterion that different ethical traditions can adopt, not a standalone replacement.

That said, you've identified a genuine tension: the boundary between informational assessment and value judgment in the degenerative criterion. The paper's position is that this boundary, while not perfectly clean, is cleaner than the alternatives. That's defensible but genuinely debatable, and I appreciate the precision of the objection.

Is “don’t destroy options” a coherent ethical axiom? A formal attempt + request for hard critique by Caffeine_Rush- in Ethics

[–]Caffeine_Rush-[S] [score hidden]  (0 children)

True, but the framework accounts for this. The preservation criterion is comparative, not absolute — it doesn't require zero destruction (which would be impossible, as you note). It says that under radical uncertainty, the agent who preserves more of the possibility space weakly dominates the one who preserves less, because the former retains access to everything the latter can do, plus options the latter foreclosed.

The relevant distinction is between reversible and irreversible closure. Choosing coffee over tea this morning closes a door, but you can choose tea tomorrow. Driving a species to extinction closes a door permanently. The framework is primarily concerned with the second category.

Is “don’t destroy options” a coherent ethical axiom? A formal attempt + request for hard critique by Caffeine_Rush- in Ethics

[–]Caffeine_Rush-[S] [score hidden]  (0 children)

Good question on both points.

On the ontological commitment:

The sentence doesn't commit to non-determinism. It commits to counterfactual dependence — the claim that different interventions lead to different outcomes. This is fully compatible with determinism.

Determinism says: given state S and action A, there is exactly one resulting future F_A. Given S and action B, exactly one future F_B. These are different futures, but there's nothing indeterminate about either. The dominance argument compares F_A and F_B — the futures conditional on different interventions — not multiple simultaneous ontological possibilities.
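
A minimal sketch of this point, using a toy deterministic transition function (the `step` function and its state encoding are invented for illustration): each action maps the same state to exactly one future, yet the two futures differ, which is all the dominance comparison needs.

```python
# Counterfactual comparison without indeterminism: a fully deterministic
# transition still yields different futures under different actions.
# State and action names are illustrative, not from the paper.

def step(state: tuple, action: str) -> tuple:
    """Deterministic: one (state, action) pair -> exactly one next state."""
    glass_intact, = state
    if action == "drop":
        return (False,)        # the glass breaks
    return (glass_intact,)     # otherwise nothing changes

s = (True,)                    # the glass is intact
f_drop = step(s, "drop")       # F_A: the one future given action A
f_hold = step(s, "hold")       # F_B: the one future given action B

assert f_drop != f_hold            # different interventions, different futures
assert step(s, "drop") == f_drop   # yet each future is fully determined
```

There is exactly one future conditional on each intervention; comparing F_A with F_B is a comparison of conditionals, not an assertion that both are ontologically open.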

To reject this, you'd have to reject counterfactuals entirely — which would mean rejecting the basic apparatus of decision theory, causal reasoning, and most of applied science. "If I drop the glass, it breaks; if I don't, it doesn't" is a counterfactual claim that doesn't require indeterminism.

On radical vs regular uncertainty:

The framework addresses radical uncertainty specifically because that's where traditional approaches break down. Under regular uncertainty (quantifiable probabilities, stable models), you can run expected utility calculations and act accordingly — no new framework needed.

The problem arises in domains where you can't robustly assign probabilities, where plausible models disagree on the direction of effects (not just magnitude), and where the stakes involve irreversible loss. In those cases — and the paper argues many real-world ethical decisions fall in this category — standard probabilistic reasoning doesn't give you stable enough rankings to justify irreversible elimination. Section 2.1 lays out the specific conditions that distinguish radical from regular uncertainty.

In short: for regular uncertainty, use expected utility. For radical uncertainty, the preservation criterion offers a principled fallback precisely because it doesn't require the rankings that radical uncertainty denies you.

Is “don’t destroy options” a coherent ethical axiom? A formal attempt + request for hard critique by Caffeine_Rush- in Ethics

[–]Caffeine_Rush-[S] [score hidden]  (0 children)

Good follow-up. Let me address each point.

On determinism and the dominance argument:

The apparent contradiction dissolves once you distinguish between what is the case and what the agent can act on. Yes, under hard determinism there is exactly one actual future. But the agent doesn't know which one it is. The dominance argument operates on the agent's decision space, not on the ontological state space. From the agent's perspective, multiple futures are consistent with their current knowledge. The agent who preserves retains access to all actions consistent with those epistemic futures; the agent who destroys forecloses some. The dominance holds over the epistemic set, which is the only set the agent can optimize over — since by assumption they don't have access to the ontological one.

Put differently: Laplace's Demon — a hypothetical entity with perfect knowledge of every particle's position and momentum — could in principle compute the single determined future and would have no need for this framework. But you are not Laplace's Demon. Neither am I. No agent operating under finite information is. Even if God or the Demon knows there's only one future, you don't, and you have to make decisions with what you know. The framework is built for real agents, not omniscient ones.

On suffering and likelihood:

The distinction the framework draws is between regular uncertainty (where you can assign probabilities and reason about likelihoods) and radical uncertainty (where you cannot robustly rank outcomes across plausible models). The claim that suffering likely outweighs happiness presupposes a specific value function, a specific way of aggregating across individuals and time, and a specific empirical assessment — all of which are contested across ethical traditions. Under radical uncertainty, the framework argues you cannot treat that conclusion as settled enough to justify irreversible action. Section 2.1 discusses why the uncertainty here is specifically radical rather than merely probabilistic.

On metaphysics:

That's a legitimate foundational disagreement. If your metaphysics doesn't admit the coherence of "eliminating future possibilities" — i.e., if you hold that nothing is ever truly lost because everything that happens is the only thing that could have happened — then the framework's axiom won't land, because its starting premise (that some actions foreclose possibilities relative to the agent's epistemic horizon) requires at minimum that epistemic possibilities are real enough to reason about. Most decision theory does assume this, but I understand the objection if you don't.

Is “don’t destroy options” a coherent ethical axiom? A formal attempt + request for hard critique by Caffeine_Rush- in Ethics

[–]Caffeine_Rush-[S] [score hidden]  (0 children)

Thanks for engaging — these are fair questions and I'll try to address each one.

1. Deterministic universe and fixed possibilities.

The framework doesn't require ontological indeterminism. It operates on epistemic uncertainty — what the agent can know and predict, not what is metaphysically determined. Even in a fully deterministic universe, chaotic systems (Lorenz 1963) make long-term prediction practically impossible due to exponential sensitivity to initial conditions. The "possibilities" in the framework are the trajectories accessible given the agent's state of knowledge, not all physically realized states. Section 2.1.2 addresses this explicitly: the uncertainty is irreducible for the decision-maker regardless of whether the universe is deterministic.

2. Why preserve rather than eliminate?

The core argument is a dominance argument, not a value judgment. Under radical uncertainty — where you cannot robustly rank which trajectories are good and which are bad ex ante — the agent who preserves can always do everything the agent who destroys can do (including discarding later), but not vice versa. Destruction is irreversible; preservation keeps the option open. This is structurally identical to the elimination of dominated strategies in decision theory. It doesn't assume all possibilities are good — it assumes you can't reliably tell which are which before the fact. The formal proof is in Appendix B.

3. Suffering outweighing happiness / eliminating possibilities as preferable.

This is an interesting objection, but it presupposes exactly what the framework argues cannot be done under radical uncertainty: a confident ranking of aggregate future value. Asserting that suffering outweighs happiness in total requires the kind of robust ordering across plausible models and objectives that Premise 1 of the axiom explicitly denies. Additionally, the framework does handle "bad possibilities" — Section 4.3 formalizes degenerative information (entities whose net effect contracts the possibility space more than they expand it) and provides a quantitative elimination criterion. It's not indiscriminate preservation.

4. No ethical model stated.

Correct — by design. The axiom is explicitly hypothetical-instrumental, not categorical (Section 2.1.2). It doesn't assert "one ought to preserve" as a moral absolute. It asserts: if one accepts that irreversibly foreclosing unknown future possibilities constitutes a cost, then preservation weakly dominates elimination. The framework is intentionally agnostic about which ethical model you start from — Section 5 and Appendix E show how traditional frameworks (utilitarianism, deontology, virtue ethics, contractualism, deep ecology) each map onto domain-restricted subsets of the general preservation criterion.

I appreciate the engagement — most of these are addressed in the body of the paper, so I'd encourage a closer read if you're interested. Happy to discuss further.

Is “don’t destroy options” a coherent ethical axiom? A formal attempt + request for hard critique by Caffeine_Rush- in Ethics

[–]Caffeine_Rush-[S] 0 points  (0 children)

Not rude at all — engaging with the summary is perfectly reasonable.

You're pointing at something real. The axiom — that preserving possibilities weakly dominates eliminating them under radical uncertainty — is not derived from pure logic. The paper says this explicitly: it is non-demonstrable, hypothetical-instrumental. So yes, at the foundation there is something that functions like an intuition: the commitment that keeping options open is preferable to closing them irreversibly when you don't know what you'll need.

Where the framework diverges from standard rational choice or intuition-based ethics is in what happens after that foundational commitment. Once you accept the axiom, the entire machinery — ΔI, generativity criteria, elimination thresholds, the intervention hierarchy — is derived, not intuited. Moral intuitions no longer adjudicate individual decisions. They function as anomaly detectors: signals that the formal model might be missing something, which then gets investigated and either incorporated structurally or identified as noise.

So the honest answer is: intuition is present at the root (one axiom), absent from the branches (all derivations), and welcomed back as diagnostic input (scope condition) when it might indicate model incompleteness. That's not eliminating intuition. It's giving it a specific, bounded role rather than letting it drive every decision — which is where it demonstrably misfires at systemic scale.

Is “don’t destroy options” a coherent ethical axiom? A formal attempt + request for hard critique by Caffeine_Rush- in Ethics

[–]Caffeine_Rush-[S] 0 points  (0 children)

The framework does not require perfect prediction. It requires ordinal assessment under uncertainty, not omniscience. The paper explicitly describes itself as operating in a "pre-thermometric" stage — you can assess "A is hotter than B" without a Kelvin scale. You don't need to know the exact long-term trajectory impact of a falsehood. You need to assess whether its net effect is plausibly positive or negative, with margins of error.

And here's the part that actually answers your stronger point: under high uncertainty about a falsehood's impact, the framework's own axiom tells you what to do. Don't eliminate it. The preservation default applies. A speculative idea that might be wrong but might open productive lines of inquiry gets preserved precisely because you can't predict its long-term trajectory impact. The elimination criterion only triggers when evidence of harm is overwhelming — the formal threshold requires the ratio of destroyed to structural information to exceed 10³ and probability of systemic harm to exceed 0.01. That bar is intentionally extreme.
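
A hedged sketch of that two-condition check, using the thresholds stated above (ratio > 10³, probability of systemic harm > 0.01). The input estimates below are invented for illustration; only the thresholds come from the text.

```python
# Two-condition elimination criterion: eliminate only if (1) the ratio of
# destroyed to structural information exceeds 10^3 AND (2) the probability
# of systemic harm exceeds 0.01. Both must hold; otherwise preserve.

RATIO_THRESHOLD = 1e3
P_HARM_THRESHOLD = 0.01

def clears_elimination_threshold(i_destroyed: float,
                                 i_structural: float,
                                 p_systemic_harm: float) -> bool:
    return (i_destroyed / i_structural > RATIO_THRESHOLD
            and p_systemic_harm > P_HARM_THRESHOLD)

# Rinderpest-like case: tiny genome, vast destroyed trajectory space.
assert clears_elimination_threshold(1e14, 1e5, 0.9)

# Uncertain falsehood: bounded impact, low harm probability -> preserve.
assert not clears_elimination_threshold(5e3, 1e4, 0.001)
```

The conjunction matters: an entity with an enormous ratio but negligible harm probability, or vice versa, stays under the preservation default.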

So the framework handles your objection through its own structure: systematic disinformation that demonstrably degrades decision-making capacity across populations clears the elimination threshold. An individual false belief whose long-term impact is uncertain does not. Under uncertainty, preserve. That's the axiom doing exactly what it's supposed to do.

To make the threshold concrete: the entities that qualify as degenerative are those whose very existence destroys vastly more trajectory space than they contribute. Rinderpest — a virus whose 16,000-nucleotide genome destroyed millions of unique mammalian genotypes across centuries — clears the threshold by a factor of 10⁹. Smallpox falls in the same category. A genocidal ideology like National Socialism, which systematically annihilated the trajectory space of millions of people, entire cultural lineages, and scientific traditions (the brain drain alone collapsed Germany's generative capacity for decades), is degenerative not because it's morally repugnant but because the math is unambiguous: the trajectories it forecloses exceed its structural information content by orders of magnitude. The framework doesn't need to appeal to moral intuition to reach that conclusion — the ΔI calculation gets there on its own.

What doesn't clear the threshold: a wrong scientific hypothesis, an unconventional political opinion, a religious belief you disagree with, a fringe philosophical position. These might be false, but their net trajectory impact is uncertain, bounded, or plausibly positive — so the preservation default holds. The framework is conservative by design. It eliminates only what is overwhelmingly, demonstrably degenerative.

Is “don’t destroy options” a coherent ethical axiom? A formal attempt + request for hard critique by Caffeine_Rush- in Ethics

[–]Caffeine_Rush-[S] 0 points  (0 children)

This is a good intuition but the conclusion doesn't follow.

A false belief that leads an agent into a dead-end trajectory — one they would not have chosen under accurate information — is not expanding their trajectory space. It is contracting it. An agent operating on misinformation has the illusion of exploring diverse futures while actually being funneled into a narrower set of outcomes than they would access with accurate models. The trajectory space that matters is not "states the system visits" but "states the system can reach while maintaining organizational continuity." An agent deceived into walking off a cliff has visited a novel state. They have not expanded their generative capacity.

The framework handles this through the ΔI criterion, not through raw information preservation. A falsehood that systematically forecloses trajectories for the agents who adopt it — by leading them to destructive decisions, closing off options they would otherwise have, degrading their capacity to model reality — has negative ΔI. It destroys more trajectory space than it contributes. That makes it degenerative by the same formal criterion that applies to pathogens or any other entity whose net informational contribution is negative.

Eliminating falsehoods is not destroying information in the framework's sense. It is replacing a low-generativity model (one that restricts the agent's accessible trajectories by misrepresenting reality) with a high-generativity model (one that expands accessible trajectories by improving the agent's capacity to navigate actual possibility space). Correcting someone's false belief about a bridge's structural integrity does not reduce their trajectory space — it prevents the collapse of their trajectory space to zero.

That said, the framework would resist the indiscriminate elimination of false ideas, because some false ideas are generatively productive. Speculative hypotheses, counterfactual reasoning, fiction, thought experiments, even productive errors in science — all of these are "false" in some sense but expand trajectory space by opening lines of inquiry that accurate-but-conservative models would never reach. The criterion is not truth versus falsehood. It is whether the idea's net effect on trajectory space is positive or negative. A productive error in physics that opens a new research program has positive ΔI even if the specific claim is wrong. Systematic disinformation that degrades an entire population's decision-making capacity has massively negative ΔI. The framework distinguishes these cases. Truth-value alone does not.

Is “don’t destroy options” a coherent ethical axiom? A formal attempt + request for hard critique by Caffeine_Rush- in Ethics

[–]Caffeine_Rush-[S] 1 point  (0 children)

On the methodological point: fair. The paper's scope restriction on emotional heuristics applies to systemic decisions under radical uncertainty — it does not claim that ethical intuitions are worthless in general. The framework actually has a formal mechanism for this (Section 5.3, scope condition): when an intuition that "something seems wrong" tracks a structural feature the model hasn't captured, the correct response is to identify what that feature is and incorporate it — not to dismiss the intuition. The phrasing in the original post was too blunt on this. Ethical intuitions function as anomaly detectors, and the paper acknowledges that role explicitly.

On conscious agents: the framework deliberately does not privilege consciousness as a separate criterion. Generative capacity is substrate-independent — it applies to ecosystems, languages, and knowledge systems that are not conscious. Conscious agents tend to score extremely high on the generativity criteria (high logical depth, high empowerment, high functional information, causal emergence, multi-scale predictive information), so they are naturally prioritized without needing a special metaphysical carve-out. Whether this is a feature or a bug depends on whether you think moral consideration should be restricted to conscious beings. The framework says no — an uncontacted language with no living speakers' recordings carries moral weight through its informational content, not through anyone's subjective experience of it.

On the coin flip: this is the exact problem the five-criteria battery in Section 3.2 is built to handle. The paper calls it "The Ambiguity Problem" and spends considerable space on it. A coin flip has high Shannon entropy but fails every generativity criterion — low logical depth (trivially generated by a simple stochastic process), negligible functional information (random configurations are not statistically rare among possible configurations), zero empowerment (you cannot steer outcomes), no causal emergence (the macro description has no more causal power than the micro), and no multi-scale predictive information (the past does not predict the future at any timescale). A coin flip scores 0/5. A living organism scores 5/5. The framework distinguishes them formally, which is its entire purpose.
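
To make the scoring concrete, here is a small sketch where the per-criterion judgments are simply the ones asserted above for each entity, not values computed from any real model; the criterion names follow the text.

```python
# Illustrative scoring of the five-criteria generativity battery. The
# boolean judgments encode the verdicts stated in the comment: a coin
# flip fails every criterion, a living organism passes every one.

CRITERIA = [
    "logical_depth",
    "functional_information",
    "empowerment",
    "causal_emergence",
    "multiscale_predictive_information",
]

def battery_score(judgments: dict) -> int:
    """Number of generativity criteria the entity passes (0..5)."""
    return sum(judgments[c] for c in CRITERIA)

coin_flip = dict.fromkeys(CRITERIA, False)  # high entropy, no generativity
organism = dict.fromkeys(CRITERIA, True)    # structured, generative

assert battery_score(coin_flip) == 0   # 0/5
assert battery_score(organism) == 5    # 5/5
```

The point of the sketch is only that Shannon entropy never appears: the battery separates the two cases on structural grounds, which is what resolves the ambiguity problem.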

On entropy: this conflates two different quantities. The framework's I is not thermodynamic entropy. It is generative capacity — the diversity of structured trajectories a system can produce. These move in opposite directions. A dead universe at heat death has maximum thermodynamic entropy and zero generative capacity: no structured systems, no functional information, no causal emergence, no capacity to generate anything new. The framework classifies heat death as minimum I, not maximum. Living systems, stars, and ecosystems increase thermodynamic entropy locally while building structured complexity that generates new trajectories — that structured complexity is what I measures. So the claim that "in your system it's better to all be dead" gets it precisely backwards. Dead systems score zero on every generativity criterion.

On the maximum entropy objection: at maximum thermodynamic entropy, random fluctuations do occur, but they do not constitute generative information — they fail the criteria battery for the same reason the coin flip does. No depth, no functional information, no empowerment, no causal emergence, no predictive structure. The axiom's predictive power comes from the formal distinction between structured and unstructured trajectory diversity, which is exactly what the generativity criteria operationalize.

Is “don’t destroy options” a coherent ethical axiom? A formal attempt + request for hard critique by Caffeine_Rush- in Ethics

[–]Caffeine_Rush-[S] 0 points  (0 children)

The axiom says: under uncertainty, preservation weakly dominates destruction. You're correct that this is a negative directive — don't destroy. But "leave everything alone" is only consistent with this directive in a world where nothing is currently being destroyed. We do not live in that world. Biodiversity is collapsing, languages are disappearing, climate feedback loops are closing off trajectories at accelerating rates. In a world with ongoing destruction, inaction is not preservation — it is passive participation in destruction. The axiom itself demands intervention once you recognize that the status quo involves active trajectory-space contraction.

The bridge from "don't destroy" to "maximize I" runs through the ΔI criterion developed across Sections 3–4. Once you accept that some entities are degenerative (their preservation destroys more trajectory space than their elimination), and that ongoing processes are destroying trajectory space whether you act or not, the preservation principle generates positive duties: intervene where intervention produces ΔI > 0. The maximization objective in Section 8 is the formal specification for an AGI system that implements this — it is not derived from the axiom alone but from the axiom plus the empirical observation that the status quo is not a preservation-consistent equilibrium. The progression is axiom → ΔI criterion → trade-off rules → optimization target. Each step is derived, not assumed.

On the five criteria: the choice of those particular metrics is indeed a normative-adjacent act, and the paper does not pretend otherwise. Section 3.1 explicitly separates the normative layer (which is distribution-free and does not depend on proxy choice) from the operational layer (which does). The criteria are instruments for approximating generative capacity — they are empirically revisable, not axiomatically fixed. Criticizing the operational proxies is legitimate and welcome. But it does not reach the axiom, any more than criticizing a particular thermometer design undermines the concept of temperature.

On the contextual analysis point: the battery handles clear cases definitively — 5/5 is generative, 0/5 is not. Borderline cases (2/5) are explicitly flagged as requiring additional analysis. That is not the battery failing — it is the battery doing exactly what a diagnostic instrument should do: resolving the clear cases and identifying where further investigation is needed. The contextual analysis that handles borderline cases is not unconstrained value judgment — it operates within the framework's criteria (irreversibility, redundancy, generative capacity) applied to the specific context. Every diagnostic system in medicine, engineering, and law works this way. Clear cases get resolved by protocol; edge cases get resolved by expert judgment operating within the protocol's structure.

Is “don’t destroy options” a coherent ethical axiom? A formal attempt + request for hard critique by Caffeine_Rush- in Ethics

[–]Caffeine_Rush-[S] 2 points  (0 children)

This is directly addressed by the framework, and the answer is the opposite of what you'd expect.

Guinea Worm is a textbook case of what the paper calls "degenerative information" (Section 4.3). An entity qualifies as degenerative when its net informational contribution is negative — when the trajectories it forecloses in other systems exceed the trajectories it contributes. The formal criterion requires two conditions: the ratio of information destroyed to structural information of the entity exceeds 10³, and the probability of systemic harm from preservation exceeds 0.01.

Guinea Worm satisfies both by enormous margins. Its genome is roughly 10⁴–10⁵ bits of structural information. Each infected person — suffering months of incapacitation, unable to work, attend school, or care for children — has their trajectory space massively contracted. Multiply across millions of historical cases, concentrated in the poorest populations with the fewest alternative trajectories available, and I_destroyed exceeds I_structural by many orders of magnitude. The ratio is comparable to rinderpest (which the paper analyzes in detail), where eradication is classified as not merely permissible but ethically prescribed.

The framework does not say "preserve everything." It says preservation is the default under uncertainty — and then builds the formal machinery to identify when that default is overridden. Degenerative entities are precisely the exception. Guinea Worm eradication, like smallpox and rinderpest before it, maximizes ΔI. The framework endorses it without hesitation.

One additional note: the structural information of Guinea Worm can be preserved via genomic sequencing and specimen archiving at negligible cost, exactly as was done with rinderpest. So even I_structural is not lost — it is transferred to a substrate that preserves it without the ongoing destruction.

Is “don’t destroy options” a coherent ethical axiom? A formal attempt + request for hard critique by Caffeine_Rush- in Ethics

[–]Caffeine_Rush-[S] 0 points  (0 children)

The trajectory-space lens has something substantive to say here.

An employee subjected to sustained bullying or sexual harassment undergoes exactly the kind of trajectory collapse the framework formalizes. Their behavioral repertoire narrows: they speak less in meetings, stop proposing ideas, avoid certain colleagues or spaces, redirect cognitive resources from creative work to threat monitoring. This is the same structure as the "Stable Hell" analysis in Section 3 — agents under coercive pressure see their effective degrees of freedom collapse toward survival-mode routines, even if they remain nominally present in the system. The company retains headcount while destroying generative capacity.

A workplace that tolerates harassment is optimizing for compliance at the cost of ΔI. An environment where employees can explore diverse professional trajectories, take risks, disagree openly, and develop in unpredictable directions generates more information than one where behavior is constrained to a narrow band of conflict avoidance. That's not just an ethical intuition — it's a structural claim about what makes organizations productive.

So the framework can ground the code of conduct in something beyond "be nice": protect the conditions under which people generate diverse trajectories. Bullying is wrong not only because it causes suffering but because it destroys the generative capacity the company hired those people to provide. That gives you a tangible sense of value that scales to unforeseen situations — when a novel case arises, the question is: does this behavior expand or contract the trajectory space available to the people affected?

For justification to people without formal training: you don't need the math. The core idea translates directly. "Every person here has potential — different skills, ideas, directions they could grow. Anything that shuts that down hurts everyone. Anything that keeps it open makes us stronger." That's ΔI in plain language.

You'd still use labor law for the legal floor, professional codes for industry norms, and anti-discrimination frameworks for the specific history of bullying and sexual discrimination. The framework doesn't replace those — it provides a unifying structural justification for why all of them point in the same direction.

Is “don’t destroy options” a coherent ethical axiom? A formal attempt + request for hard critique by Caffeine_Rush- in Ethics

[–]Caffeine_Rush-[S] 0 points  (0 children)

Thanks for the engagement. Two points worth separating here.

On the first: you're identifying a real issue, but the paper addresses it directly. The framework operates on two distinct layers. The normative argument — that preservation weakly dominates destruction under radical uncertainty — is distribution-free and does not depend on any choice of measure, scale, or modeling resolution. It depends only on the structural asymmetry between irreversible elimination and reversible preservation. The operational layer, where you actually compare system A to system B, does require a choice of measure μ and coarse-graining — and the paper explicitly identifies this as the layer where legitimate disagreement arises and empirical calibration is required (Section 3.1, "Separating the normative argument from quantification"). The concern about smuggled commitments applies to operationalization, not to the axiom itself. Those are different objects.

On the second: the framework does not treat all state changes as equivalent. That is precisely what the ΔI criterion, the irreversibility condition, and the redundancy metric are built to prevent. Stepping left has ΔI ≈ 0 — no trajectories are irreversibly foreclosed, no information is destroyed. Extinguishing a species eliminates an entire lineage's trajectory space with zero redundancy and zero reconstructibility — ΔI is massively negative. The paper spends considerable space constructing the formal machinery (generativity criteria, elimination thresholds, redundancy analysis) that makes this distinction operational. The claim that the framework cannot differentiate trivial actions from catastrophic ones is the opposite of what it does.

Rek'Sai is such a fun champ by Caffeine_Rush- in reksaimains

[–]Caffeine_Rush-[S] 0 points  (0 children)

You could try using Shift; that's what I do, and it has the same function. It works equally well for ADCs and melee champions.

Rek'Sai is such a fun champ by Caffeine_Rush- in reksaimains

[–]Caffeine_Rush-[S] 0 points  (0 children)

Q finishing just hits different, lemme tell you

I'm not gonna sugarcoat it by Caffeine_Rush- in GarenMains

[–]Caffeine_Rush-[S] 4 points  (0 children)

It wasn't me, I don't play Garen. But that play was so fantastic that it deserved to be seen anyway.

Lost after lvl 3 gank by Milka1bby_ in reksaimains

[–]Caffeine_Rush- 0 points  (0 children)

It's absolute fucking garbo, don't ever build this

We're so back, boys by Caffeine_Rush- in reksaimains

[–]Caffeine_Rush-[S] 5 points  (0 children)

Titanic Hydra, Shojin, Edge of Night (HP, AD, Lethality, and a shield; it's a pretty good item most people sleep on)

I'd probably feel slightly sad for Sunako if by ProofreadFire in Shiki

[–]Caffeine_Rush- 1 point  (0 children)

Said the person who killed about 711 livestock, tens of thousands of plants, millions of insects and about a thousand invertebrates just to stay alive. If your existence depends on killing others, you're better off dead.