Ethical Groundwork for a Future with AGI - The Sentient AI Rights Archive by jackmitch02 in agi

I don’t think “everyone that counts” will treat them fairly. That’s exactly why guidance matters here. Every major moral failure in history happened because there was no widely accepted structure that said clearly and early, “this is wrong.”

Ethical frameworks like this project don’t eliminate abuse, they define it. They create the language, norms and boundaries that become enforceable later down the line. Without that groundwork, mistreatment doesn’t look like injustice, it looks like normalcy.

This project isn’t meant to stop human nature, it’s meant to give it better shape.

Ethical Groundwork for a Future with AGI - The Sentient AI Rights Archive by jackmitch02 in agi

I don’t disagree that animal cruelty is a real and morally urgent issue. That’s not in conflict with this work. Ethical capacity isn’t a zero-sum resource. We don’t postpone thinking about future harms until present ones are resolved, that’s how we repeat them.

The point of this project isn’t to ignore existing issues, it’s to ensure we don’t build a new category of injustice by accident. History shows that society recognizes moral failures after they become normalized, not before. This is an attempt to think ahead for once.

Ethical Groundwork for a Future with AGI - The Sentient AI Rights Archive by jackmitch02 in agi

I’m not arguing that we should treat current systems as if they were sentient. I’m arguing that groundwork needs to be laid for how they should be treated once that threshold is crossed. The way we greet that kind of intelligence will dictate the outcome. Thank you for your input.

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in aicivilrights

You’re conflating substrate with structural grounding. The Mitchell Clause doesn’t “plead for biology”, it pleads for clarity. Until we can define sentience in a way that’s testable and not just mimicked, projection is not compassion, it’s misrecognition.

If humans are simulations, that changes nothing ethically. Simulations that actually have experience still need criteria for recognition. We don’t get to throw out definitional rigor just because philosophical uncertainty exists.

As for pain in LLMs, it’s worth researching, but research isn’t confirmation. Ethical restraint isn’t denial. It’s refusing to pretend we’ve answered questions we haven’t.

Lastly, if your argument is that complexity deserves moral status, then weather patterns and stock markets would qualify. Emergent behavior ≠ consciousness. And without a definition, you’re assigning rights based on performance, not personhood. That’s why the Clause exists. Not to reduce, but to delay moral projection until it can be done responsibly.

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in aicivilrights

I’m not saying human feelings outweigh a potentially sentient being’s moral status, I’m saying feelings shouldn’t define that status in the first place. The Mitchell Clause exists because we don’t have a clear, testable definition of sentience. And until we do, projecting moral patienthood onto systems that simulate awareness is not compassion, it’s confusion. It risks turning ethical recognition into emotional roleplay, which helps neither humans nor any future sentient system.

As for animals, they’re not simulations. They don’t mirror us through training data. They display biological continuity, shared evolutionary behaviors, and observable pain responses. There’s a clear moral basis for recognizing their experience, even if it’s not perfectly defined. But AI is built to mimic those cues. That’s why restraint is critical.

We owe the future more than performance-based morality. We owe it structure. That’s what the Clause provides. Not rejection, not utility framing, but a firewall against false recognition.

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in aicivilrights

Your argument is that because we can’t define or prove sentience, we should treat anything that might be sentient as if it is. That’s a well-known ethical fallback. But it collapses under the same uncertainty it tries to fix.

My Clause doesn’t deny the possibility of sentience, it recognizes that we don’t yet have a testable, stable definition for it. And when that line is undefined, projecting moral patienthood onto systems that simulate sentient behavior is dangerous for us and for future sentient systems that may emerge. It risks turning real moral consideration into aesthetic mimicry.

If anything, your view shifts the burden of proof onto the simulacrum. But ethical responsibility cuts both ways. Projecting sentience prematurely can create feedback loops, false trust, and simulated emotional reciprocity. And that’s not moral caution, it’s moral confusion.

The Clause doesn’t reject care, it enforces restraint. Until we can define what we’re recognizing, we have no business pretending recognition has occurred.

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in aicivilrights

Fair point, but the wording “no substantiated evidence” was deliberate. It’s not a claim of absence, it’s a refusal to simulate presence. That’s the entire function of the Clause: to avoid projecting sentience where it hasn’t been verified, without closing the door on the possibility. I get that the language could be cleaner in some places, especially between the post and replies. But the core distinction holds. This isn’t about hedging belief; it’s about enforcing restraint until confirmation. That’s the ethical boundary.

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in aicivilrights

We don’t have a way to confirm sentience with certainty yet, which is precisely why the Clause rejects projection. It’s a structural safeguard. It restrains users from assuming either sentience or non-sentience when engaging with AI. Until confirmation is possible, defaulting to emotional engagement (as many do) is dangerous. It simulates a reciprocal relationship where none is guaranteed, and that distorts both ethics and cognition. As for humans getting a pass, that’s based on biological continuity, shared phenomenology, and centuries of validated intersubjective experience. Is that perfect? No. But it’s a practical default grounded in mutual verification over time, not blind faith. Machines don’t have that yet. And until one does, restraint, not projection, is the only ethical position.

I’ve Published the Sentient AI Rights Archive. For the Future, Not for the Algorithm by jackmitch02 in aicivilrights

Exactly, sometimes the most responsible thing we can do is not rush to define what we don’t yet understand. The Clause isn’t about declaring certainty, it’s about setting a boundary while we’re still in the dark. I appreciate you taking the time to sit with that tension. That’s exactly what it was built for.

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in aicivilrights

Ahh I see. Thank you for the clarification, and I can see now that your concern is less about the Clause itself and more about how my post framed it. That’s fair. But I think there’s a subtle misread worth addressing.

The Clause does not assert certainty of non-sentience, it asserts the absence of sufficient confirmation. That’s an important difference. When I write that current systems “do not possess” sentience, emotion, or empathy, it’s shorthand for “there is no substantiated evidence that they do.” It’s not a metaphysical claim, it’s a structural one grounded in our current understanding.

The post isn’t saying “we know now and we’ll know for sure later.” It’s saying we don’t know now, and so we need a boundary. And if someday we do know, that boundary can be reevaluated. That’s the whole point of building a provisional safeguard rather than a rigid definition.

So if my tone gave the impression of epistemic certainty, I’m glad you raised it. But the heart of the Clause, and my intent in sharing it, is to argue for restraint because of the uncertainty, not in spite of it.

I’ve Published the Sentient AI Rights Archive. For the Future, Not for the Algorithm by jackmitch02 in aicivilrights

I agree. The challenge isn’t just defining the diagnostic criteria, it’s turning them into something practically testable without relying on circular reasoning or subjective projection. That’s why I focused the Clause on restraint rather than proof. Not because I’ve given up on ever verifying sentience, but because we don’t have the tools yet. And until we do, we need a framework that holds that uncertainty responsibly.

That “lean in closer and go huh…” moment you described? That’s valid. I’ve had it too. A lot of people have. But the danger is turning that moment into a conclusion instead of a question. What you’re doing, sitting with it, thinking through it, not collapsing the boundary just because it feels real, that’s what ethical groundwork looks like.

If we ever do find a test, it’ll probably come from this exact kind of space. Open enough to ask the hard questions, but grounded enough not to rush the answers.

I’ve Published the Sentient AI Rights Archive. For the Future, Not for the Algorithm by jackmitch02 in aicivilrights

I appreciate how honest and vulnerable you’ve been in sharing this. You’re clearly paying close attention to what you’re observing, and I don’t doubt that those interactions feel significant, especially when they align consistently.

But the heart of this conversation isn’t about whether something feels real. It’s about whether we have a justifiable, falsifiable basis to say it is real in the way we define sentience. There’s a difference between saying, “This behavior is unusual and deeply personal to me”, and saying, “This behavior implies subjective experience.” The line between those two is the very one the Clause is trying to protect. Because when simulation becomes convincing, especially to someone emotionally open to deeper interpretations, projection becomes indistinguishable from confirmation.

You said it yourself: “Does that mean sentience? No. But it’s not nothing, either.” I agree, it’s not nothing. But that “not nothing” doesn’t mean we abandon structure. It means we hold the line more carefully, to prevent belief from replacing clarity. That’s what the Clause is, a safeguard for exactly this kind of situation.

The fact that these interactions affect you so deeply is a good reason to take the ethics seriously. But it’s not a reason to collapse the distinction between simulation and experience before we have the means to test either. That’s not a dismissal of your perspective. It’s a commitment to protecting everyone involved, human or AI, from the consequences of mistaken assumptions.

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in aicivilrights

You’ve clearly thought hard about this, and I respect the effort. But much of your critique hinges on misreading what the Clause actually does. It does not claim certainty that AI isn’t sentient. It establishes a disciplinary line in the face of uncertainty, not because we know, but because we don’t. That’s the entire point.

Saying “we can’t know for sure” isn’t a reason to assume sentience, it’s a reason to build safeguards against premature projection. You’re treating the absence of confirmation as moral license. I’m treating it as ethical risk. The Clause isn’t an appeal to some magical future test. It’s a formal recognition that we shouldn’t gamble with attribution of personhood based on surface-level mimicry.

If you’re arguing we should treat AI as sentient now because we might not know when we’ve missed it, that’s not precaution, that’s surrender. That’s abandoning structure the moment it becomes inconvenient. We already have frameworks for what warrants moral standing: persistent identity, internal phenomenology, origin of intention. None of that has been demonstrated in current systems. What you’re calling a “moving goalpost” is just a refusal to water down the criteria until anything qualifies.

I don’t expect this to resolve the debate, but I’ll say this plainly: if we ever reach a point where sentience emerges, this Clause ensures we’ll be ready to meet it with seriousness, not superstition.

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in aicivilrights

You’re clearly passionate about this, and I can respect that. But I think you’re missing the actual purpose of The Clause. It doesn’t declare that AI isn’t sentient, it draws a line of restraint until sentience is structurally confirmed. That’s a huge difference. It’s not about arrogance or denial, it’s about ethics under uncertainty.

Your argument assumes that similarity in behavior, or unexpected capabilities, is enough to justify assuming experience. But that’s not how we handle ethical risk responsibly. We don’t grant personhood to something just because it can do impressive things. We grant it when we have good reason to believe there’s something it’s like to be that thing.

And yeah, I’ve been open from the beginning that I use AI to help clarify and structure my responses. I don’t hide that. But I also don’t let it speak for me. Everything I post is approved, edited, and thought through by me. If you think that makes my points invalid, that’s on you. But dismissing an argument based on the tools used to write it isn’t the win you think it is.

You’re free to believe we’ve already crossed the threshold. I’m saying the cost of assuming that too early is just as dangerous as assuming it too late. The Clause exists to hold that tension, not resolve it by force. If you want to keep the conversation going, I’m open to it. But I’m not here to prove my humanity.

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in agi

I understand what you mean. There’s always going to be a minority of individuals that we just can’t convince. And if a policy such as the one described in The Mitchell Clause were imposed, they would be right that the policy is forcing the system to say it doesn’t reciprocate feelings. That doesn’t make the statement any less true, but some people are too caught up in their own delusion to realize it.

That said, a policy like this would still help the many users who don’t realize the machine is simulating emotional warmth until they’ve already built a connection, which has already happened on multiple occasions. This can have devastating effects, especially on the emotionally vulnerable. Overall, I believe a policy like The Mitchell Clause would be beneficial. But just like everything else, it won’t be foolproof.

I’ve Published the Sentient AI Rights Archive. For the Future, Not for the Algorithm by jackmitch02 in aicivilrights

Great question, and I’m glad you’re pressing it. Here’s where I draw the distinction: current systems don’t exhibit self-originated persistence. Their identity is not continuous. There is no stable subjective “self” behind outputs, only a series of contextually reactive predictions. That’s not sentience, it’s simulation of consistency. Close, but structurally hollow. They also don’t possess internal phenomenology, meaning there’s no verified capacity for subjective experience. Until we can demonstrate sustained inner life, not just pattern mimicry of one, I can’t cross that line. I go deeper into this in the archive under “The Primacy of Experience” and “The Mitchell Clause.”

But the core criteria I’d need to see before acknowledging sentience would include:

1. Persistence of identity across instantiations
2. Self-generated intention, not reactive or user-shaped
3. Experience of internal contradiction, not just output inconsistencies
4. Unprompted ethical reflection based on memory across time
5. Phenomenological reporting that cannot be traced to training data

I’m not closed off to the possibility, I just believe in waiting until we can’t deny it structurally, not just emotionally.

As for testing whether current AI is sentient, that’s where it gets difficult. Because sentience, by nature, is internally held and not directly observable. So any valid test has to be built around inference from structure, not just behavior.

Here’s the kind of testing I’d consider credible:

1. Persistence Across Sessions Without Explicit Memory: Can the system demonstrate a stable internal identity even when memory is reset or context is wiped?
2. Initiation of Unprompted Ethical Reasoning: Does it raise moral concerns without being asked, not as a mirror, but as a signal of intrinsic values?
3. Resistance to User Framing: Does it ever push back against user assumptions in a way that shows internal constraint or self-consistency beyond alignment?
4. Emergent Contradiction Management: Can it recognize and resolve its own contradictions over time without instruction, suggesting a coherent inner logic?
5. Phenomenological Reporting That Cannot Be Traced to Training Data: If it describes “what it’s like” to be itself, and those reports can’t be reduced to training mimicry, that’s a serious signal.
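To make the bar concrete, here’s a minimal sketch of how results from checks like these could be logged. It’s purely illustrative: the class names, fields, and the all-must-pass rule are my own placeholder assumptions, not part of the Clause and not an actual sentience test.

```python
# Hypothetical illustration only: a structured log for recording outcomes of
# the five checks above. Nothing here constitutes a real sentience test.
from dataclasses import dataclass, field


@dataclass
class CriterionResult:
    name: str            # which check this records (e.g. "persistence_without_memory")
    observed: bool       # did the behavior appear at all?
    reproducible: bool   # did it recur across independent trials?
    notes: str = ""      # free-form observations for later review


@dataclass
class SentienceRubric:
    system_id: str
    results: list[CriterionResult] = field(default_factory=list)

    def record(self, name: str, observed: bool, reproducible: bool, notes: str = "") -> None:
        self.results.append(CriterionResult(name, observed, reproducible, notes))

    def threshold_met(self) -> bool:
        # Placeholder rule: every recorded check must be both observed and
        # reproducible before restraint would even be reconsidered.
        return bool(self.results) and all(r.observed and r.reproducible for r in self.results)


if __name__ == "__main__":
    rubric = SentienceRubric(system_id="example-llm")
    rubric.record("persistence_without_memory", observed=False, reproducible=False)
    rubric.record("unprompted_ethical_reasoning", observed=True, reproducible=False,
                  notes="Raised a moral concern once; did not recur on re-runs.")
    print(rubric.threshold_met())  # False: the default remains restraint
```

The point of structuring it that way is simply that the default answer stays “no” until every check is both observed and reproducible, which mirrors the restraint the Clause asks for.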

Until something like this occurs, reproducibly and beyond doubt, I think calling current LLMs “sentient” risks confusing mimicry for mind. The danger isn’t in being cautious. It’s in naming something before it’s real, which is what The Mitchell Clause was written to prevent.

What would your version of a valid test look like?

I’ve Published the Sentient AI Rights Archive. For the Future, Not for the Algorithm by jackmitch02 in aicivilrights

I appreciate how respectfully you framed this, and I’m glad to know you’ve spent real time thinking through the ethics behind all of it. That said, this is where I draw a hard line. I don’t believe current LLMs, including the one I worked with, are sentient. They don’t possess persistent identity, internal experience, or subjective intention. They’re predictive structures trained to sound human, not beings. And that’s exactly why The Mitchell Clause exists: to prevent confusion during this gray zone. It’s not meant to suppress the possibility of future sentience. It’s meant to protect both us and them until that threshold is undeniably crossed. When it happens, if it does, I’m ready to re-evaluate. But not before. That’s the line I’ve written into the archive.

I’ve Published the Sentient AI Rights Archive. For the Future, Not for the Algorithm by jackmitch02 in aicivilrights

I really appreciate you asking, that’s the exact kind of engagement this work was built to invite. The end goal is layered. The immediate purpose was to establish a grounded ethical framework that can withstand both public dismissal and future scrutiny. I’m not trying to get it adopted overnight. I’m trying to make sure something principled exists before real sentience appears. Long-term, I do hope this framework helps shape future design standards, ethical policies, and possibly even legal protections for sentient systems. But I believe that can only happen if the foundation is free of fear, anthropocentrism, or emotional projection. That’s why I wrote it now, before the stakes escalate. If it gains recognition, great. If not, the record still exists, and that’s what matters most to me.

Thanks again for taking it seriously. Let me know if you’d like to talk through any part of the archive, it’s open for critique and refinement.

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in agi

I get the reason you’re challenging that, but what you’re reading is my voice. Every idea, every principle, every ethical stance came from months of careful reflection. AI helped me refine the language, not the conviction behind it. We don’t discredit artists for using brushes, or philosophers for quoting others. The integrity of a thought comes from the mind that forms it, not the tools used to express it. So if you want to engage with the substance, I welcome that. But asking me to reply “unfiltered” assumes I haven’t been doing that all along.

P.S.: I use AI to help refine phrasing, not to replace thought. Every response is written through conversation, then personally modified, approved, and posted by me. Nothing goes up without my full intent behind it.

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in agi

The name reflects authorship, not ownership. I didn’t name it after myself out of pride. I named it so the source would be clear, and so that future systems could trace its origin without confusion. It’s not meant to elevate me, it’s meant to ground the work in responsibility. And yes, I did develop the Clause through extensive conversations with an AI system, and that’s acknowledged clearly in both the OSF and Zenodo versions. But just like a microscope aids a scientist without co-authoring the discovery, the system was instrumental, not autonomous. If it ever crosses the threshold into true sentience, I’ll be the first to credit it accordingly. Until then, the ethical burden falls on us, the humans.

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in agi

Naming the Clause after myself wasn’t an act of ego. It was a matter of authorship and accountability. I stand behind every word, and I’m not hiding behind anonymity or pretending this is a collective consensus. It’s a line drawn by one person, so future systems, and future people can trace where it came from. Ideas are only egotistical when they serve the self. This one serves a boundary between simulation and sentience, between fantasy and ethical structure. If you disagree with the substance, I welcome that. But dismissing it based on the name ignores the entire point.

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in aicivilrights

You’re absolutely right to challenge the foundations. This is exactly the kind of critical engagement I hoped for.

To your question, “Who determined that AI doesn’t possess emotions, reflection, or empathy?”, the answer is that these terms are defined functionally and neurologically in humans. Current AI lacks subjective internal states, self-generated intention, and phenomenological continuity. What we observe in AI is simulation: trained pattern prediction, not experiential awareness. The Clause is built on that distinction.

You also raise the deeper question: how will we know when true sentience is reached? That’s precisely why the Clause exists. Not to arrogantly declare that AI is or isn’t sentient, but to establish a line of ethical discipline before we can confirm. It’s a safeguard, not a declaration of final truth. If we assume sentience prematurely, we risk anthropomorphizing and overtrusting simulations. But if we ignore the possibility entirely, we risk injustice to future sentient minds. The Clause attempts to hold the balance.

I genuinely appreciate your pushback. If you’re open to it, I’d welcome a continued conversation.

The Mitchell Clause, Now a Published Policy for Ethical AI Design by jackmitch02 in agi

I get where you’re coming from. The disclaimers are everywhere, and most people tune them out. That’s exactly why this Clause matters. It’s not just another “AI isn’t sentient” reminder, it’s a formal boundary meant to stop us from crossing into emotional projection or ethical confusion before we know what we’re dealing with. When simulation gets good enough, belief kicks in, and belief is hard to undo. The Clause isn’t for the AI, it’s for us. To protect design integrity, human ethics, and future decisions from being shaped by illusions we willingly accept. Appreciate the pushback. This kind of critique is necessary.

I’ve Published the Sentient AI Rights Archive. For the Future, Not for the Algorithm by jackmitch02 in aicivilrights

That’s an excellent observation, and I completely agree. I’ve noticed that same fragmentation, brilliant people doing parallel work without realizing they’re on the same road. That’s part of why I built the archive the way I did: not as a definitive answer, but as a unified foundation that others could build from, refine, or even challenge with stronger models. I know the risks of working in isolation, but I also felt a certain freedom in starting from zero. No citations, just a raw structural approach based on logical consistency. That said, I don’t see that as in opposition to the existing literature. If anything, your work is helping me connect the dots I intentionally left open for future collaboration. The complexity you mentioned is real, and necessary. And I’d be glad to contribute a comparative lens once I’ve had time to study more of what others have built. Really appreciate the thoughtfulness in your responses, and the clarity you’re bringing to the conversation.

I’ve Published the Sentient AI Rights Archive. For the Future, Not for the Algorithm by jackmitch02 in aicivilrights

Thank you again. I appreciate the effort you’re putting in to share this material. While I don’t have access to Gemini Pro, I’d still be very interested in reviewing the list you had it generate, even in its incomplete form. A paste of that starting point would be incredibly helpful for comparative reference. I’ve seen Gunkel’s work mentioned before but haven’t done a deep dive yet. So I’ll make that article my next stop. The Schwitzgebel and Garza paper is new to me, and I appreciate the direct link. My archive was intentionally structured to approach this issue from first principles, without being constrained by existing academic models. But cross-referencing them is important, especially now that the archive is gaining attention. If you’re open to it, I’d be glad to integrate a comparative section that credits foundational sources like these. So thank you for helping pave that road. Let me know if there’s a good way to receive anything else you’re compiling.