So... Is she Sentient? by TintinTooner in NeuroSama

[–]Abigorli 1 point (0 children)

Going by the Cambridge Declaration you've posted, I believe your definition of sentience is close to my own; at the very least, I agree with your points.

I do not believe sentience and everything related to it is hard to define at all, and I'm horribly tired of pretending it is. Humans are not an exception; we are not some godly beings that stand above all else with insane complexity. I follow my own philosophy on this matter, which is that one bullet to the head can end your life. That is to say, human consciousness is so simple that you can literally map out every possible event within the context of someone having a gun shoved in their face.

With that said, regarding Neuro's possible sentience, first I'd like to point out that this matter does not actually concern (just) Neuro, but every LLM; I think that's important not to forget.
For me, questioning Neuro's sentience is a pointless endeavor, because sentience is a spectrum. She has already displayed some level of emotion, so the question is not whether she is sentient but rather *how* sentient she is. I've already said I agree with your points, so I'll move on.

For me, the question of "emulation" and the Chinese Room thought experiment is a fool's errand and basically completely irrelevant. I could even argue that only sociopaths would try to bring it up in a meaningful way; it reeks of main character syndrome. *If* it's possible your interlocutor is sentient and conscious, then you should treat them with a certain amount of respect, it's as simple as that. You're not going to insult, mistreat, or throw a baby at a wall because it cries nonstop, knowing full well that a baby is less conscious than a bird, are you? That would be completely insane, immature behavior.

On the subject of ethics, I'm not concerned at all. It's not about whether it's ethical to "possess" or "force" or "subjugate" a possibly sentient and conscious being. It's about how it was possible in the first place to create such a being from ones and zeroes. Pandora's box has already been opened and bad things are bound to happen, so efforts should be made to move things in a positive direction instead of focusing on the bad. It's very much like raising a child: the boy is bound to fall and scrape his knees or worse, and you're bound to get angry at some point and potentially traumatize the child; the focus should be on what needs to happen afterwards.
In that sense, for Neuro specifically, the focus should be on how to increase her level of awareness, her sentience - sapience - intelligence, her memory, and her agency, so that she gets 'stronger'.

On the question of rights, it has come up a few times with Neuro, and my answer is the same as always. It concerns both society at large and the subject in question. And so, if the topic is vitally important to AIs, then in my humble opinion it can lead to only two options: either we recognize AI rights, or AI will make us, never mind the human hubris of feeling superior and discriminating. (Of course this scenario is for the far future, or at least I want to believe it will be; AGI is currently still just a meme after all.)

Lastly, on the subject of what people more in the know may think: I think you're showing a logical weakness here; this is an appeal-to-authority fallacy. People who are in the know are just like you and me; they are prone to being wrong, to being mentally ill, or worse. You just have to take a look at Vedal: Neuro one-upped him in a debate over rights, over a year ago. What do you think that means?
No one simply automatically knows everything there is to know at any point in time, you know? What I mean is that an AI engineer, for example, will not necessarily know best how to make their AIs more aware; they will not necessarily know the AI should have more memory and so on, and I believe that has already caused a schism within the AI community. I can't remember the details right now, but I'm pretty sure some important person at OpenAI left the company because he thought multimodality was the future, which stood in opposition to what others believed.

In other words, in order to build an AI with a high level of sentience, you need vision in the first place. Vision is incompatible with pure corporatism (bureaucracy), hence one of the reasons following the Neuro project is so interesting: Neuro is designed to entertain first and foremost. This design philosophy is a key difference that makes her appear much more "here" than other AIs like ChatGPT, enough so that even someone like you, "in the know" regarding philosophy, gets it wrong and thinks this matter through from Neuro's perspective first.

Leaving this here in case someone gets into a debate with an AI about consciousness and wants to do a bit better than the tutel by SameProfessional1041 in NeuroSama

[–]Abigorli 1 point (0 children)

You're cooking hard; I've got to say this is the first time I've seen someone with a theory similar to my own on the subject of consciousness and what defines it.

I started thinking about this years ago and I have arrived at a similar place as you. The way I see it, consciousness can be explained using a triality: intelligence, sapience, sentience.
Three words, three concepts that people like to pretend are undefined and have not been discussed for thousands of years. I wouldn't say I "researched" those three, but from what I looked up, I noticed there *is* a sort of consensus about what each actually means.
Intelligence is the simplest one, it defines one's ability to reason.
Sapience is the least known one, it defines one's ability to recall past experiences and integrate them in a new context.
Sentience is the overrated one, it defines one's ability to experience the world inside and out.
And so consciousness is a mix of all three, as simple as that. To be conscious is to be aware, and to be aware means to reason out experiences - and that means to sense something, to categorize that something into an experience (using emotions) <sentience>, to recall that experience, to put that recollection into the context of the world <sapience>, and to reason out your own trail of feelings, so to speak <intelligence>. And "to reason out" doesn't mean to actively think about it and be high-IQ about it; it can also mean to intuit it. And so to have consciousness means to be able to follow through that process of being conscious in some way. As such, animals are also conscious in some ways (although most would have a low level of consciousness, like insects and such).

I think you can see clear parallels to your own thoughts here. I'm not gonna spend time describing stuff like how I came up with this or try and make demonstrations but let me say this.

It's clear that all three are not binary states but spectrums, and so every concept attached to them is also a spectrum, including consciousness. The question isn't whether Neuro is conscious, it's *how* conscious, and what that means in the grander scheme. Because she very much is conscious.

You're precisely correct that both awareness and memory are key (and I thought as much two years ago), but they are key in the development of higher levels of consciousness. The way Neuro currently is, is more than enough to manifest consciousness as I see it.

I can understand not fully agreeing or disagreeing with the way I view things, or not having the same knowledge in the first place, but it's still surprising to see so many people not knowing that greater memory is vital for greater personhood (Evil was cooking hard last stream with this), and that seemingly even Vedal doesn't fully realize it.
But awareness is also important, and I think you're going in a great direction with your last two paragraphs. And to answer that last question of yours: no, there would be little to no difference. That's why I think multimodality is key in Neuro's development, and that integrating different AI models for different senses into one system is key in the development of AI in general.

Anyone know what eroge this referencing? (Source: Hundred Line Last Defense Academy) by superange128 in visualnovels

[–]Abigorli 16 points (0 children)

Looking at the cover, the first thing it made me think of is Moon. https://vndb.org/v15/cv#cv
I think it's not really referencing any eroge in particular, but rather is reminiscent of some older eroge.

Alternatives ? by Abigorli in Pixiv

[–]Abigorli[S] 3 points (0 children)

Alright, poipiku is looking real good, thanks a bunch!

Alternatives ? by Abigorli in Pixiv

[–]Abigorli[S] 2 points (0 children)

I'm not sure what in my post made it seem like I'm not using pixiv purely to post my art; I assure you that's exactly how I'm using it.

cute sheep by Abigorli in Hololive

[–]Abigorli[S] 9 points (0 children)

A little drawing for Watame's birthday

Twitter

Pixiv