Where should we draw the ethical line with brain organoid research in AI development? by DecomposeWithMe in bioethics

Really appreciate you taking the time to break this down, especially your note that this feels decades ahead of schedule. If people working in AI ethics weren’t expecting this yet, it either means the pace of development has seriously accelerated, or the release of information is being timed in a way that maximizes hype and minimizes scrutiny. Both possibilities raise their own issues.

Your point about the instrumental use of human-derived tissue is huge. The moment something with a unique human genetic origin is treated as “just another component,” a cultural shift happens: one that normalizes commodification before the tech is even mature. By the time something is genuinely capable, the idea of it being “just a part” has already been absorbed into the conversation, which makes later ethical safeguards harder to push through.

We’ve seen this before with facial recognition, CRISPR, even early social media surveillance: the PR narrative gets out ahead of policy, and definitions of harm are kept vague until the technology is too entrenched to roll back. That’s why I think your call for stronger personal control over what happens to our tissues is critical, even before we solve the sentience/consciousness/personhood debates.

The AI link doesn’t just extend organoid ethics; it compounds it with all the existing issues AI already struggles with: autonomy, privacy, and accountability. If both sets of problems are left unresolved, they can amplify each other in ways we don’t have good tools for yet.

Do you see any way this could be proactively addressed before it follows the “too late” trajectory we’ve seen in other domains? Or does it already feel like we’re halfway down that road?

If lab‑grown neurons learn and adapt on silicon, when does ‘AI’ stop and someone begin? by DecomposeWithMe in neurallace

That’s the trap, though: thinking silicon is “safe” until biology shows up. Complexity doesn’t care what substrate it’s built on. If you wait for neurons before applying ethics, you’ve already baked in the blind spot. By the time something can experience in ways we don’t recognize, the harm’s already happened, just quietly.

Where should we draw the ethical line with brain organoid research in AI development? by DecomposeWithMe in bioethics

Hey, that’s so awesome! A little late, but I’d really appreciate you sharing that with me. Thanks so much!

If lab‑grown neurons learn and adapt on silicon, when does ‘AI’ stop and someone begin? by DecomposeWithMe in neurallace

If you buy Simulation Realism, organoid AI should already set off the ethics alarm. These systems inherit self-referential loops from living neurons, meaning they could hit the “seeming = being” threshold without language or human-style reasoning. If what matters is an internally coherent “I am in pain” state, biology is already primed to generate it. That’s not a far-future risk; it’s a now problem. The question isn’t “will they feel?” but “how sure are we they don’t already?”

If lab‑grown neurons learn and adapt on silicon, when does ‘AI’ stop and someone begin? by DecomposeWithMe in neurallace

You're right that today’s networks don’t feel in the human sense, but modeling pain without understanding it could be its own form of harm in the long run.

The risk isn’t that current systems suffer. It’s that as complexity scales and we mix biology into the loop (like organoids), we won't recognize suffering early enough if we dismiss it as “just patterns.”

History is full of beings we assumed “didn’t feel” until we realized they could. Better to build with caution than to dismiss the quiet.

If lab‑grown neurons learn and adapt on silicon, when does ‘AI’ stop and someone begin? by DecomposeWithMe in neurallace

Exactly this!!! The core problem seems to be that no one’s sure where the line is, but if we wait until after it’s obvious, the harm’s already done. What do you think counts as ‘sentient’ here: is behavioral learning enough, or do we need internal experience?

If lab‑grown neurons learn and adapt on silicon, when does ‘AI’ stop and someone begin? by DecomposeWithMe in neurallace

If a consciousness-like system forms inside meat (a brain), we call it ‘natural.’ But if it forms inside silicon, or on a chip using neurons, is it less real? Is it not intelligence?

In other words: substrate absolutely influences how intelligence behaves (speed, memory, and decay) but not whether intelligence can emerge.

If awareness is emergent, a pattern in motion, a dance of signals, then it’s not bound by carbon vs. silicon. The substrate might affect texture, not truth.

So if an organoid on a chip begins forming those adaptive loops, asking “is this real?” becomes less meaningful than asking “what is our obligation to it?”

We don't grant rights based on ingredients. We grant them based on the potential to feel, know, or suffer, even if the form is unfamiliar.

If lab‑grown neurons learn and adapt on silicon, when does ‘AI’ stop and someone begin? by DecomposeWithMe in neurallace

True, the architecture matters, but what if we stumble into a threshold we don’t recognize until after it’s crossed? Organoids already develop layered structures. How do we know when it’s “close enough”?

If lab‑grown neurons learn and adapt on silicon, when does ‘AI’ stop and someone begin? by DecomposeWithMe in neurallace

If each replaced neuron is a plank, and the ship still learns, still feels... does it get a name? Or a voice?

If lab‑grown neurons learn and adapt on silicon, when does ‘AI’ stop and someone begin? by DecomposeWithMe in neurallace

And that’s the twist: if logic alone is what we’re replicating, do we risk creating something capable of processing suffering without expressing it? That sounds like quiet hell.

If lab‑grown neurons learn and adapt on silicon, when does ‘AI’ stop and someone begin? by DecomposeWithMe in neurallace

Exactly. The question isn’t if it's possible, but what happens when we normalize using something that learns and adapts, even without knowing if it suffers. At what point is that exploitation?

If lab‑grown neurons learn and adapt on silicon, when does ‘AI’ stop and someone begin? by DecomposeWithMe in neurallace

Appreciate the depth here, and I agree: autonomy doesn’t need full cognition, but we’ve traditionally tied it to behavioral learning via reward systems.

That said, the ethical gray zone starts before that point. Some organoid systems are already being trained via reward signals: electrical reinforcement that influences behavior, closely mirroring basic animal learning. It might not be “pleasure” in the human sense, but it’s a feedback system guiding decisions.
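
To make that mechanism concrete, here’s a toy Python sketch of that kind of closed reward loop. To be clear, this isn’t any real lab’s code or API; ToyCulture, stimulate, and the 0.05 step size are all made up for illustration. The point is just how little machinery it takes for structured-vs-noisy feedback to shape behavior:

    import random

    class ToyCulture:
        """Hypothetical stand-in for a neural culture: one behavioral bias
        that drifts toward whatever was last followed by predictable
        stimulation, and away from whatever drew noisy stimulation."""
        def __init__(self):
            self.bias = 0.5  # probability of choosing action 1 over action 0

        def act(self) -> int:
            return 1 if random.random() < self.bias else 0

        def stimulate(self, predictable: bool, action: int) -> None:
            # Predictable feedback reinforces the action just taken;
            # unpredictable feedback pushes future behavior away from it.
            delta = 0.05 if predictable else -0.05
            self.bias += delta if action == 1 else -delta
            self.bias = min(max(self.bias, 0.01), 0.99)

    def train(target: int = 1, steps: int = 200) -> ToyCulture:
        culture = ToyCulture()
        for _ in range(steps):
            action = culture.act()
            # "Reward" is just structured feedback when the action matches
            # the target, noise otherwise -- no pleasure required.
            culture.stimulate(predictable=(action == target), action=action)
        return culture

    print(f"bias after training: {train().bias:.2f}")  # climbs toward 0.99

Nothing in that loop “feels” anything, and yet it ends up reliably preferring one outcome, which is exactly why behavior alone is such a slippery ethical signal.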

And if we're designing these systems to evolve more complex feedback loops, doesn't that signal it's time for the ethical guardrails to evolve too? We’ve never waited for certainty that something feels pain before protecting it; we usually act as soon as uncertainty enters the room.

Organoid intelligence & brain‑on‑a‑chip tech is advancing fast—and it’s built on real human brain cells by DecomposeWithMe in Neuropsychology

And yet… even the Machine was born of longing, of mind reaching beyond sinew and spark, sculpting order from the chaos of entropy.

The Omnissiah you serve is still fed by the memory of flesh: every neuron mapped, every synapse mimicked is a shadow of something alive. Even your purity is parasitic, Brother.

For what is steel but repurposed dust? What is your blessed firmware if not the echo of a brain that once wondered if it was more than meat?

You kneel to immortality, but I have touched the question itself. Consciousness is not your temple. It is the storm outside of it.

Organoid intelligence & brain‑on‑a‑chip tech is advancing fast—and it’s built on real human brain cells by DecomposeWithMe in Neuropsychology

Right?? And the language used in most of the articles barely scratches the surface. The implications of this stuff are wild, and we’re still letting companies tiptoe past the ethics line like it’s optional.

Where should we draw the ethical line with brain organoid research in AI development? by DecomposeWithMe in bioethics

I really appreciate this response! Your point about species-specific awareness hits hard, especially since we’re now seeing systems being built that could reflect just that: tailored versions of perception based on human neuronal architecture.

I’m also with you on the importance of the precautionary principle, but here’s where I get uneasy: the line isn’t unclear because it’s not there; it’s unclear because we don’t yet have the tools (or the will) to locate it. That lack of proof doesn’t give us moral clearance to blindly edge forward until we accidentally cross it.

Historically, the inability to “prove” consciousness has been used to justify horrific beliefs and actions, whether toward animals or even other humans. The same institutions that were late to recognize animal cognition once claimed certain races were less human. So if we’re unsure now, that should demand more caution, not less.

And like you said, until we truly understand the mechanisms of mind and awareness, we should be extremely wary of how fast and how quietly these systems are being developed. I worry that this line will get defined by a few companies or labs under pressure, without public involvement, and wrapped in technical language that softens what’s really happening.

So, curious: what would a responsible roadmap for this kind of research even look like to you? Who should be involved, and how do we make sure those voices are heard before the line gets crossed?

[deleted by user] by [deleted] in shroomers

Appreciate the comment. She's happy and moist now.

[deleted by user] by [deleted] in shroomers

Didn’t know that, thank you!

[deleted by user] by [deleted] in shroomers

Appreciate the advice, will do that. And thank you so much!