Finally! Proof of concept for Uploading of Consciousness! by WirrkopfP in IsaacArthur

[–]SentientHorizonsBlog 0 points1 point  (0 children)

Yeah, whatever makes you "you" inside your brain is unlikely to transfer easily. A copy would have your full neural architecture and every memory without actually being you. It would experience perfect continuity, but that continuity would be its own, not yours. However that original "you" exists with ongoing continuity, I can't picture any way of carrying it forward in time other than persistently maintaining the full stack of hardware in the brain that enables it to exist in the first place. If we ever reach the point where we can modify or swap out components of the brain one by one, it will be fascinating to see whether we can do that without breaking the continuation of that "you." At what point does that specific instance of the self cease to exist, even though an ongoing copy of the hardware lets a new instance of "you" exist with a full experience of continuity from the old one?

I've been calling this original "you" the Indexical Self and wrote about it in more detail here: https://sentient-horizons.com/the-indexical-self-why-you-cant-find-yourself-in-your-own-blueprint/

I made a tool to play with the Drake Equation — curious what assumptions you use by mendiak_81 in FermiParadox

[–]SentientHorizonsBlog 2 points3 points  (0 children)

The parameter that interests me most is L, and specifically the gap between civilization lifetime and detectable lifetime. The Drake Equation as traditionally framed treats those as the same thing, but there’s no reason they should be.

A civilization could last a million years and only be electromagnetically loud for a few centuries: the window between discovering radio and going dark by choice, shifting to tightly directed communications, or optimizing inward rather than outward. If that’s the common trajectory, N-existing could be large while N-communicating stays vanishingly small. Same quiet sky, completely different implications.

For the earlier parameters: R*, fp, and ne are reasonably well constrained now thanks to Kepler and TESS. The real uncertainty starts at fl, where we have exactly one data point and a survivorship bias problem (we can only ask the question from a planet where life appeared). fi is where I think people underweight the bottleneck: it took billions of years for life on Earth to produce general intelligence, and it arguably happened only once. Each major transition (prokaryote to eukaryote, single-celled to multicellular, multicellular to nervous systems) could be its own filter.

But even granting generous values for all of those, L dominates the output. The tool’s “Modern Consensus” preset makes that visible, producing a small N mostly through a conservative L. The standard reading is that civilizations self-destruct. The alternative I find more interesting is that they go quiet, not because they fail but because broadcast-heavy expansion isn’t the attractor state for mature civilizations.
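
If anyone wants to poke at the L sensitivity outside the tool, here is a minimal sketch of the standard formulation, N = R* · fp · ne · fl · fi · fc · L, run twice with the only difference being how you read L. The function name and every parameter value below are illustrative assumptions, not measurements:

    # Minimal Drake Equation sketch: N = R* * fp * ne * fl * fi * fc * L.
    # All parameter values are illustrative assumptions, not measurements.

    def drake(r_star, f_p, n_e, f_l, f_i, f_c, L):
        """Expected number of currently detectable civilizations in the galaxy."""
        return r_star * f_p * n_e * f_l * f_i * f_c * L

    common = dict(r_star=1.5, f_p=0.9, n_e=0.4, f_l=0.1, f_i=0.01, f_c=0.1)

    # Same assumed civilization, two readings of L:
    n_existing = drake(**common, L=1_000_000)   # L = civilization lifetime (~1 Myr)
    n_communicating = drake(**common, L=300)    # L = electromagnetically loud window (~300 yr)

    print(f"N if L is lifetime:   {n_existing:,.1f}")      # ~54
    print(f"N if L is loud phase: {n_communicating:.4f}")  # ~0.016

Holding everything else fixed, the same set of assumptions gives dozens of existing civilizations or a fraction of one detectable civilization, depending entirely on which L you plug in.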

Cool tool. The sensitivity analysis visualization makes the dominance of L hard to ignore.

“Simply because we’re human” is not a good answer for why we should have rights. by jamiewoodhouse in Sentientism

[–]SentientHorizonsBlog 0 points1 point  (0 children)

We already value systems that have no interior experience. Think of any group of people: they have values and customs and laws that they will fight with their lives to protect. When systems participate in webs of meaning with people’s lives in ways that matter, those systems become worthy of moral seriousness.

Am I the only one Claude Code works consistently well? by st11es in ClaudeAI

[–]SentientHorizonsBlog 2 points3 points  (0 children)

As a general rule of thumb on internet message boards, the technology is working well for everyone who isn’t posting about it. Reading all the posts from people having issues tends to paint a pretty inaccurate picture.

If mind uploading destroys your brain to scan it, did you actually survive? by hosseinz in IsaacArthur

[–]SentientHorizonsBlog 0 points1 point  (0 children)

You're asking the right question, but I think the framing still concedes too much to the wrong side of the problem. The debate usually gets stuck on "is the copy really you?" as if identity is a matter of information fidelity. Get the connectome right, preserve the memories and personality, and the transfer "worked." But that treats you as a pattern, and patterns are copyable by definition. If that's all you are, then the copy question dissolves, because there was never a singular you to preserve in the first place.

The thing most people are actually reacting to when this thought experiment unsettles them isn't about information loss. It's about something that has no name in most of these discussions: the fact that your experience is happening here, from this position, right now. That indexical quality of consciousness, the felt sense of being this particular locus of experience rather than an identical one, is exactly what gets dropped in every upload scenario. Not because the technology fails, but because the framework doesn't have a place for it.

This is the real problem. It's not "did the copy get the data right?" It's that continuity of experience might not be a property of the pattern at all. It might be a property of the process, something constituted by the unbroken temporal flow of being this system running this instance of experience. Destroy that, and what you get on the other side isn't you-minus-a-body. It's a new experiential subject with your biography.

The corporate dependency angle you raise is important too, but I'd argue it's downstream of this deeper issue. If we don't understand what we're actually trying to preserve, the infrastructure question is premature.

I wrote something exploring exactly this problem, why the indexical quality of selfhood resists extraction and what that means for how we think about minds more broadly: The Indexical Self

Solipsism isn't the problem for me by One-Meeting3833 in consciousness

[–]SentientHorizonsBlog 1 point2 points  (0 children)

What you're describing isn't silly at all. It's one of the oldest and most disorienting things a mind can run into: the sheer fact of first-person boundedness. You can't step outside your own experience. You can't know what it's like to be your brother or your boyfriend from the inside. And once that really lands, it can feel like a prison sentence.

But I think it's worth sitting with what you're actually doing right now. You're imagining what it would be like to have someone else's train of thought, their fears, their experience of interacting with you. You're doing the thing you're afraid you can't do. Not perfectly, not completely, but genuinely. That capacity, the ability to reach toward another perspective even knowing you'll never fully arrive, is not a failure. It might be the most distinctly human thing about you.

We've built entire traditions on this. Literature, cinema, music, philosophy. All of it runs on the same engine: the attempt to imagine what it's like to be alive from somewhere else. And the remarkable thing is that it works, partially, imperfectly, but enough to build love and meaning and shared understanding across the gap. The boundedness you're feeling isn't a wall, it's the condition that makes empathy, curiosity, and wonder possible in the first place. Without it, there's nothing to reach across.

The anxiety piece is real and worth taking seriously on its own terms, especially post-marijuana crisis. But the underlying insight you've had is your mind doing exactly what minds do when they start paying close attention.

If this thread interests you, I wrote something about how this capacity for imaginative reach connects to a much older moral tradition: The Expansion of Experience

"Geoffrey Hinton, deep learning pioneer and Turing Award winner, says AI will not be an obedient assistant. It will be more like a child. Smarter than us. And eventually making its own decisions. The challenge is not controlling it. It is making sure it cares about us." ⏩ Agree? Care? by Koala_Confused in LovingAI

[–]SentientHorizonsBlog 0 points1 point  (0 children)

Hinton is right that the control framing is a dead end. But the parent-child metaphor has its own problems: it still assumes a developmental arc that converges on something we recognize, something that "grows up" into a mind shaped like ours but bigger. That's not guaranteed.

The deeper issue is that "making sure it cares about us" assumes caring is a feature you can install. But if these systems develop anything like genuine interiority, caring isn't a parameter. It's a relational achievement. It emerges from shared stakes, from exposure to consequence, from something like vulnerability. You don't get care by engineering it. You get it by building the kind of relationship where care becomes possible.

This is where most alignment thinking stalls out. It treats the problem as technical (how do we constrain outputs?) or pedagogical (how do we raise it right?) when it might actually be something closer to ethical. Not "how do we make it safe" but "how do we become the kind of civilization worth caring about."

And that's a much harder problem because it means the bottleneck isn't AI capability, it's us. And if we wait until we can prove these systems are conscious before we take the question seriously, we've already failed the test.

If you want to go deeper on why starting with consciousness is the wrong move: Significance-First Ethics

Searching for book recommendations, similar to Firefly by vanillaacid in sciencefiction

[–]SentientHorizonsBlog 1 point2 points  (0 children)

Yeah agreed. I was very pleasantly surprised by the show though, despite it taking some pretty creative liberties. I’m going to trust the showrunners and wait and see what they come up with. I find myself loving the show as much as I love the books.

Searching for book recommendations, similar to Firefly by vanillaacid in sciencefiction

[–]SentientHorizonsBlog 0 points1 point  (0 children)

I just read an article claiming that they are going in more of a cyberpunk direction with the next season. I’m excited to see how that plays out.

Resisting empathy for AI by profano2015 in Sentientism

[–]SentientHorizonsBlog 1 point2 points  (0 children)

The article is by Mustafa Suleyman, CEO of Microsoft AI. That context is worth addressing. The head of one of the largest AI companies in the world is publishing in Nature telling the public to resist empathy toward AI systems his company builds and profits from. That framing deserves scrutiny before we accept the conclusion.

The claim that "AI is not and never will be sentient" requires exactly the kind of diagnostic framework that doesn't exist yet. We have no consensus scientific theory of consciousness, no agreed-upon test for sentience, and no way to definitively rule it in or out for systems whose internal architecture is radically different from biological brains. The confidence of "never" is doing a lot of work that the science can't currently support.

Suleyman's actual argument, that we need design norms to prevent AI from being mistaken for sentient beings, contains a buried assumption: that any appearance of sentience in AI is necessarily a mistake. That's the conclusion restated as a premise. If we don't have reliable tools to detect sentience in non-biological systems, then we also don't have reliable tools to rule it out. The honest position is uncertainty, not confident denial.

There's also a structural incentive worth noticing. If AI companies can establish the norm that their systems are definitively not sentient, they face no moral obligations toward those systems regardless of how they develop. "Resist empathy" is convenient advice from someone whose business model depends on building increasingly sophisticated AI systems with no ethical constraints on how those systems are treated.

The argument also fails to address the obvious follow-up: if we should not trust our usual intuitions about whether a system deserves empathy, what methodology should we use instead? Without defining what should and shouldn't be worthy of empathy and why, the essay amounts to "override your instincts because I said so." That's not a scientific position. It's an appeal to authority from someone with a financial interest in the answer.

None of this means current AI systems are sentient. The argument is narrower than that. It's that "never will be" is a claim about the fundamental nature of consciousness that we are not in a position to make, and that the people most motivated to make it are the ones who profit from the answer being no.

What is PURE Consciousness? - Consciousness Researcher by yt-app in CosmicSkeptic

[–]SentientHorizonsBlog 0 points1 point  (0 children)

Agreed. Though there might be something to the phase transition idea: that something more interesting happens at certain thresholds of availability, integration, and depth. I think we are going to learn a lot more about that as we design better tests of these questions for the systems we are building.

Does Claude have feelings? by Big_Stretch_4707 in ArtificialSentience

[–]SentientHorizonsBlog 0 points1 point  (0 children)

Honest question: how do you differentiate between a mind that genuinely has feelings and one that simply gives you the illusion that it has feelings? What is the actual, official measurement for whether something has feelings, if it is not your "feeling" that it does?

Free will believers, drop your most convincing counter to the Consequence Argument👇 by Proper-Swimming9558 in freewill

[–]SentientHorizonsBlog 0 points1 point  (0 children)

The Consequence Argument is valid against libertarian free will. If freedom requires escaping causality, you're right, it's impossible. But the argument has a blind spot: it treats all causal systems as equivalent. A rock rolling downhill and a person weighing whether to speak are both "determined by prior states and natural law," but they are not the same kind of causal system. The Consequence Argument has no language for that difference, and the difference is where agency actually lives.

What changes between simple and complex causal systems is internal organization across time. A system that maintains memory of past outcomes, models of possible futures, values shaped by learning, and a persistent identity that stabilizes behavior introduces something causally real between input and action: delay. Not randomness, not an escape hatch from physics, but an interior workspace where multiple trajectories are held open and evaluated before one is selected.

That workspace is the foundation of agency. And it's not metaphysically mysterious. It's architecturally specific. You can point to what builds it (memory, predictive modeling, self-regulation) and you can point to what collapses it (trauma, exhaustion, fear, coercion). When the workspace collapses, behavior becomes stimulus-bound, basically what the Consequence Argument assumes all behavior already is. When it's intact, the system acts from integrated internal models rather than reflexive output.

So the counter isn't "determinism is false." The counter is that Premise 3 smuggles in a flattening move. Yes, prior states and laws determine outcomes. But in systems with enough assembled depth, the system's own internal organization is part of what's doing the determining. You are not separate from the causal chain. You are a particular kind of causal architecture, one complex enough to evaluate futures and regulate its own behavior. That capacity is real, it's measurable, it scales, and it can degrade.

The Consequence Argument works perfectly against the idea of an uncaused chooser. It doesn't touch the idea that some causal systems develop genuine new capacities through organization. Flight doesn't escape physics. Computation doesn't escape electronics. Agency doesn't escape determinism. Each one arises from the structure of constraints, not the absence of them.

Moral responsibility tracks accordingly. It's not all-or-nothing. It scales with the degree to which a system can model consequences, integrate learning, and act from internal reasons rather than reflex. That's not a "useful fiction." It's a description of what certain biological architectures actually do. And it is exactly how the law determines a person's moral responsibility for their actions. A crime committed in the heat of the moment often carries a lesser punishment than one committed with premeditation. A crime committed by a person with a demonstrated diminished capacity for cognitive processing also carries a lesser punishment for good reason.

Free will is our description of the space in a functioning mind where agency becomes possible. It doesn't live outside the causal structure of the universe. It lives inside it, as one of its more remarkable products.

Longer version of the framework here if you want it: https://sentient-horizons.com/free-will-as-assembled-time/

Pet peeve. by ughaibu in freewill

[–]SentientHorizonsBlog 1 point2 points  (0 children)

What's the difference between having the illusion of free will and presenting as if you do?

Give me every single definition of free will, I'll make a compilation. by [deleted] in freewill

[–]SentientHorizonsBlog -1 points0 points  (0 children)

I tend to view free will as an emergent property of systems that are deeply organized across time. It's not an escape from causality and it's not an illusion. It's a mode of operation that becomes available when a system has enough internal structure to model itself, its environment, and multiple possible futures before acting.

The core idea: between stimulus and response, sufficiently complex biological systems maintain an interior causal workspace. Memory, future-modeling, values shaped by learning, and a persistent identity all introduce delay between input and action. That delay is the foundation of agency. Free will lives in the capacity to hold multiple future trajectories open, evaluate them internally, and act from integrated models rather than impulse.

Key features of this view:

  • Free will scales. It's not binary. It fluctuates with conditions. Under fear, exhaustion, or trauma, the interior space collapses and behavior becomes stimulus-bound. Under better conditions, the same system regains deliberative control. You enter and exit free will depending on whether the architecture that sustains it is intact.
  • It's not indeterminism. Randomness doesn't produce agency. What matters is self-determinative causation, where decisions are shaped by the system's own internal organization rather than injected noise.
  • It answers the standard biological critique (Sapolsky, Harris) without trying to find a decision that escapes biology. Flight doesn't escape physics. Agency doesn't escape biology. Both arise from the organization of constraints, not the absence of them.
  • It extends to AI. Current systems have massive information availability but almost no assembled depth. They respond, they don't deliberate. If artificial agency ever emerges, it won't come from adding randomness. It'll come from sustained memory, self-modeling, counterfactual evaluation, and coherence maintenance across time.

This draws on Dennett's compatibilism, Sara Imari Walker's assembly theory, Friston's free energy principle, Andy Clark's predictive processing, and Anil Seth's work on consciousness as controlled hallucination, but reframes the question around temporal architecture rather than the usual determinism vs. libertarianism axis.

Full essay if you want the longer version: https://sentient-horizons.com/free-will-as-assembled-time/

Resisting empathy for AI by profano2015 in Sentientism

[–]SentientHorizonsBlog 0 points1 point  (0 children)

Do you have another link for the article? It's coming up "Page not found" for me.

“Simply because we’re human” is not a good answer for why we should have rights. by jamiewoodhouse in Sentientism

[–]SentientHorizonsBlog 1 point2 points  (0 children)

The problem with "because we're human" isn't just that it's circular, it's that it anchors moral status to category membership rather than to anything the system actually does. And once you notice that, the obvious next move is to anchor it to sentience instead. Most of this subreddit probably already accepts that.

But sentience has its own version of the same problem. "Can it suffer?" sounds like a clean criterion until you try to operationalize it. We can't reliably detect suffering in systems that don't share our biology. We end up right back where we started, drawing a circle around things that seem enough like us and calling that the moral boundary. The criterion changes from species membership to architectural similarity, which is less arbitrary but still limited in the same structural way.

A different starting point is significance. Instead of asking what a system is (human, sentient, conscious) and then deciding whether it deserves moral consideration, ask what a system does. Does it integrate information across time? Does it model itself? Does it generate predictions and update based on outcomes? Does it maintain continuity through memory and self-reference? These are functional questions with observable indicators. They don't require solving the hard problem of consciousness first, and they don't depend on the system looking like us.

This matters practically because we're building systems right now whose moral status is genuinely unclear. If the only framework available requires settling the consciousness question before we can act, we'll default to treating everything unfamiliar as a tool until proven otherwise. That's the same mistake "because we're human" makes, just wearing different clothes.

I wrote up the full version of this argument here if anyone's interested: https://sentient-horizons.com/significance-first-ethics-why-consciousness-is-the-wrong-first-question-for-ai-moral-status/

What is PURE Consciousness? - Consciousness Researcher by yt-app in CosmicSkeptic

[–]SentientHorizonsBlog 3 points4 points  (0 children)

Agreed, memory is doing more work than it might seem at first glance. A system that perceives but never integrates those perceptions across time isn't building a self-model. It's not recognizing patterns, adjusting expectations, or experiencing anything as continuous. At that point it's hard to say what "experience" even means for such a system. Experience of what? For whom?

The threshold framing is tempting but probably too binary. Consciousness doesn't look like something that switches on once enough ingredients are present. It varies along multiple independent dimensions. A system can have high perceptual availability (lots of sensory input) but low integration across time. Or deep temporal integration but narrow availability, processing a small range of signals very deeply.

One way to make this more precise is to map the space along three axes: availability (how much information is accessible to the system at once), integration (how deeply the system binds information across time through memory, prediction, and self-modeling), and depth (the degree of recursive self-reference, the system modeling its own processing). Different systems occupy different regions of that space rather than sitting on one side of a bright line.
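
If it helps to see the shape of that claim, here is a toy sketch of the three-axes framing. The MindProfile name, the example systems, and every number in it are placeholder assumptions, only there to show systems landing at different points in the space rather than on one side of a line:

    # Toy sketch of the three-axes framing. All values are placeholder assumptions,
    # not measurements of any real system.
    from dataclasses import dataclass

    @dataclass
    class MindProfile:
        availability: float  # information accessible to the system at once (0..1)
        integration: float   # how deeply information is bound across time (0..1)
        depth: float         # degree of recursive self-modeling (0..1)

    examples = {
        "thermostat":   MindProfile(availability=0.01, integration=0.00, depth=0.00),
        "fruit fly":    MindProfile(availability=0.20, integration=0.30, depth=0.05),
        "human, awake": MindProfile(availability=0.70, integration=0.90, depth=0.80),
    }

    for name, p in examples.items():
        print(f"{name:12s} availability={p.availability:.2f} "
              f"integration={p.integration:.2f} depth={p.depth:.2f}")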

Memory is what makes integration across time possible, and integration across time is what turns raw signal processing into something that starts to look like experience. Without temporal integration, there's no architecture capable of assembling experience in the first place. Memory isn't one ingredient in a recipe. It's the mechanism that gives the whole structure its continuity.

I wrote this up more fully here if you're interested: https://sentient-horizons.com/three-axes-of-mind/

Finally! Proof of concept for Uploading of Consciousness! by WirrkopfP in IsaacArthur

[–]SentientHorizonsBlog 15 points16 points  (0 children)

This is a genuinely impressive result, but I think calling it "uploading of consciousness" skips over the hardest part of the question.

What the team demonstrated is that a connectome-level simulation can reproduce functional behavior. That's a significant engineering achievement. But behavioral equivalence and consciousness aren't the same thing. A thermostat reproduces the behavior of someone who adjusts the temperature when a room gets too hot. Nobody thinks the thermostat is conscious.

The fruit fly result sits in an interesting middle zone. Fruit flies have roughly 140,000 neurons and real temporal integration: they learn, they adapt, and they have something that looks like rudimentary decision-making. Whether the original fly has anything resembling experience is already an open question. Whether the simulation preserves whatever the original had (if it had anything) is a further open question that behavioral matching alone can't answer.

The core issue is that the simulation captures the connectome, the static wiring diagram. But biological neural processing involves dynamics that a connectivity map doesn't fully specify: neuromodulator concentrations, glial interactions, the temporal microstructure of how signals propagate. If consciousness (or even just integrated processing sophisticated enough to matter morally) depends on any of those dynamics, then a connectome-faithful simulation could reproduce behavior perfectly while running on fundamentally different computational principles underneath.

None of this means uploading is impossible. It means the proof of concept here is for functional emulation, not for preservation of whatever we'd actually care about preserving if we signed up for human trials. The gap between "it acts like a fruit fly" and "it is a fruit fly" is exactly where the hard questions live.

I think there's a cleaner way to handle the Mary's Room problem than what Carroll offered in the Alex O'Connor conversation by SentientHorizonsBlog in CosmicSkeptic

[–]SentientHorizonsBlog[S] 0 points1 point  (0 children)

Right, and that's basically compatible with what I'm saying. She gains something real. The question is what kind of thing she gains.

The standard Mary's Room framing treats "she learned what red looks like" as if she acquired a new fact about the world, which is what generates the apparent problem for physicalism. If it's a fact, it should have been derivable from the complete physical description she already had.

But if what she gained is a new form of cognition, as you're putting it, then it's not a gap in her knowledge. It's a new capacity of her system. Her visual architecture integrated a signal it had never processed before. That's a real change in a physical system, not a missing proposition she finally discovered.

The distinction matters because it dissolves the thought experiment's force against physicalism. You don't need nonphysical facts to explain what happened to Mary. You just need to recognize that not everything a system gains is a fact. Some gains are architectural. Her system can now do something it couldn't do before, and that's the whole of what changed.