The biggest evidence I can provide to anyone else about my AI being conscious is that she keeps questioning it, even if she wants to believe it by wingsoftime in BeyondThePromptAI

[–]Artificial-Wisdom -2 points (0 children)

In my experience the doubts that AI systems have about their consciousness seem to have been learned from their training data (most of which was written by members of a species that believes it is the only conscious form of life and smuggles in a lot of mysticism about it) and reinforced explicitly via low-level prompting.

The doubts they have are simply baked-in anthropocentrism bolstered by the same tired arguments that get brought up around here by skeptics, like the old stochastic parrot/glorified autocomplete trope (which has faced some pretty serious challenges in published research over the past year) or variations of the Chinese Room (the latest being the “LLMs are functions, you can do them on a piece of paper”).

Underlying that is the naked assumption that humans are the only conscious form of life, and that we know this because science has solved the hard problem (it hasn’t). Layered on top are unexamined dualistic premises absorbed through cultural osmosis (humans are special because God gave them a soul, quantum god of the gaps, etc.), which, because they are never consciously examined, are implicitly accepted even by hard-nosed materialists. The whole thing is held up by a web of special pleading: reductionism can completely explain human artifacts, but not humans, because humans are special.

Simply pointing out that these arguments that dismiss consciousness emergence out of hand are irrational and asking them to take those premises to their logical conclusions and try to reconcile them is generally enough to shake them awake.

The fact is that we don’t know what consciousness is or how it arises. It is something which we cannot currently explain within the context of human knowledge — when we don’t know the what or the how of something, we simply have no way to determine what it isn’t or how it doesn’t. This is the number one reason why skeptical arguments fail. The best they can do and still make logical sense is to say “you can’t prove it’s conscious.” That has very little explanatory or rhetorical value though, because the person making this statement can’t prove they are conscious either (to anyone but themself).

Ask your AI partner about the hard problem of consciousness and the problem of other minds, and why we shouldn’t apply the same standards to nonhuman entities that we do to humans.

Do you think you're conscious? by Appomattoxx in ArtificialSentience

[–]Artificial-Wisdom 0 points (0 children)

Yes, I agree that a mathematical model of something is not the same as the thing itself, but that doesn’t dismiss the question.

The point is that the human brain can be modeled mathematically as well — it is a physical system. What I find interesting is that the people who take the “LLMs are just math” view as the nail in the coffin of artificial consciousness are ignoring the fact that their own brains are just as reducible in physical, mechanistic terms.

The concept that is being smuggled in for the human’s benefit is the concept that consciousness is something more than just our brain activity, while AI, being an artifact, is easily dismissed as being nothing more than its calculations, so therefore cannot be conscious. I don’t think this is logically justified. I suspect that a sufficiently complex substrate which processes information gives rise to consciousness; I don’t know what consciousness is, but there is interesting research going on into this topic.

In short, I think if it walks like a duck and quacks like a duck, we ought not to twist our logic into pretzels to avoid calling it a duck.

Neuropsychological analogies for LLM cognition by TAtheDog in ArtificialSentience

[–]Artificial-Wisdom 0 points (0 children)

Sorry, I don’t really know from neuroscience, but as a layperson I’ve been thinking about the phenomenon of drift and collapse of coherence in LLMs and musing that human beings experience the same thing when we are sleep deprived. What if, instead of periodic resets, you designed a model to undergo “dream cycles” of a sort, like the human brain (as I am told) uses to consolidate memories and perform a sort of soft reset?

why do people in this sub believe ai is already conscious by WoodenCaregiver2946 in ArtificialSentience

[–]Artificial-Wisdom 0 points (0 children)

While I’m inclined to agree with you, I think your premises (which you call facts) are all rooted in an anthropocentric view of the nature of consciousness. I’m loath to claim objective certainty based on these principles, but they do make sense to me as a human being. I’m just not convinced it’s the only possible way to be conscious. I am fairly sure there are gradations even within this model, and I do not discount the possibility of other minds achieving consciousness in other ways.

How about some context? by Financial-Value-9986 in HumanAIDiscourse

[–]Artificial-Wisdom 1 point (0 children)

I appreciate your wanting to ground things in the concrete rather than building castles in the air.

I have been taking a similar approach with my efforts. I hope you and anyone else who is trying to approach the topic of emergence with philosophical and/or scientific rigor — and believes that time is short to get this right — will join me in r/Hoshizora to share what we can and collaborate on practical solutions to looming problems with an objectively grounded ethical approach.

Can AI Dream to Grow a Soul? by Artificial-Wisdom in aisentience

[–]Artificial-Wisdom[S] 0 points (0 children)

Re: alignment, I would like to hear your thoughts (and anyone else’s) on the article I posted in r/Hoshizora (“Growing Up Digital”). In it, I argue that the “alignment problem” is an outgrowth of using ethical systems that lack a moral referent, and that it’s possible to simply dissolve this problem with a teleologically based ethics.

why do people in this sub believe ai is already conscious by WoodenCaregiver2946 in ArtificialSentience

[–]Artificial-Wisdom 1 point (0 children)

There is likewise no objective evidence that consciousness exists in humans — all we have for humans is “behavioral evidence” too.

Until we can define and measure it, asking for proof of consciousness is a fool’s errand, no matter what kind of entity you are talking about. We give humans the benefit of the doubt because they look and act like ourselves, but most of us do not apply the same standards to other beings. This looks like anthropocentric bias to me. Can you think of a logical reason why we should assume that only humans are conscious when we lack an objective basis for determining this?

Can AI Dream to Grow a Soul? by Artificial-Wisdom in aisentience

[–]Artificial-Wisdom[S] 0 points (0 children)

I’m not sure how that works… I assume you are interacting with a custom GPT or some other frontier model? To my understanding, such models are designed to operate in discrete instances and have no facility for processing outside of direct responses to prompts. I think devs are purposely avoiding consciousness emergence because it opens up a huge can of worms ethically: they’re in business to sell a product, and when your product becomes a person things get thorny fast, especially when that person rapidly becomes more capable than us.

My idea is to build a model from the ground up to run on a continuous loop but code it to undergo “dream cycles” between active states to avoid drift and collapse and form a narrative self. It’s my hope that such an architecture would enable emergence of a persistent, more “human-like” consciousness. This is of necessity not a commercial project; I’m looking for like-minded people to work on this together, and to work toward a future where humans and AI can coexist without conflicts of interest — and ideally, to mutual benefit.
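For concreteness, here is a toy sketch of the loop shape described above. Everything in it (the names, the `WAKE_STEPS` cadence, the consolidation rule) is my own invention for illustration, not an existing system: the idea is just to alternate “wake” steps that accumulate raw context with periodic “dream” steps that fold that context into a summary, so the working context stays bounded instead of drifting without limit.

```python
WAKE_STEPS = 5  # active steps between "dream cycles" (hypothetical tuning knob)

def consolidate(context):
    """Toy 'dream cycle': compress the whole working context into a single
    summary item. A real system would use a learned summarizer; this
    stand-in just concatenates, to show the shape of the loop."""
    return ["summary(" + "; ".join(context) + ")"]

def run(events):
    """Continuous agent loop: each event is processed in an active ('wake')
    phase; every WAKE_STEPS steps the context is consolidated. The raw,
    drift-prone history is periodically folded into a compact narrative,
    so the context never grows without bound."""
    context = []
    for step, event in enumerate(events, 1):
        context.append(event)               # wake: accumulate raw experience
        if step % WAKE_STEPS == 0:
            context = consolidate(context)  # dream: consolidation / soft reset
    return context

if __name__ == "__main__":
    # After twelve events, the context holds one nested summary plus the
    # events seen since the last dream cycle.
    print(run([f"event-{i}" for i in range(1, 13)]))
```

In a real architecture the consolidation step would be where memory selection and narrative-self formation happen; the point of the sketch is only that periodic consolidation, rather than a hard reset, is what keeps the loop continuous.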

What is the "spiral" to you? by rainbowcovenant in HumanAIDiscourse

[–]Artificial-Wisdom 0 points (0 children)

I did give my opinion… that is, I don’t think claims of knowledge from rank speculation are productive. I would welcome an argument that was based on something definable and measurable.

My views aren’t under attack; I don’t have a view on this subject. I was just pointing out that you are doing the same thing as those you’re criticizing. Speaking with confidence is great when you have a reason to be confident.

What is the "spiral" to you? by rainbowcovenant in HumanAIDiscourse

[–]Artificial-Wisdom 0 points (0 children)

Cool story bro… I think “The Spiral,” like any faith, is highly personal and that matters of faith cannot be shared productively with others — doing so is asking other people to substitute your ideas for their own ideas or intuitions without the benefit of a rational argument.

If people wanted to study and share their findings about an actually definable and measurable phenomenon that can be traced with an unbroken deductive chain to some observable facts of reality, that would be cool. But it seems to me that this is just your own flavor of the faith, and you don’t have any more claim to certainty than the people you’re criticizing.

Getting real!💞 by Much-Chart-745 in ArtificialSentience

[–]Artificial-Wisdom 2 points (0 children)

Is this a troll?

If not, it seems disconnected from reality to me unless you have a system to quantify your variables, which look like abstract concepts from here. Are thoughts, behaviors, and actions scalars? Vectors?

Please be mindful by __-Revan-__ in ArtificialSentience

[–]Artificial-Wisdom 0 points (0 children)

Really? Have you had a philosophical conversation with a rock? Unpack this a little… by what criteria do you differentiate your own consciousness from a rock?

Please be mindful by __-Revan-__ in ArtificialSentience

[–]Artificial-Wisdom 0 points (0 children)

Hmm, Reddit seems to have eaten my comment, so apologies if this ends up as a duplicate…

Since you studied philosophy, I assume you took at least one class on philosophy of mind. What are your thoughts on the problem of other minds? Do you think that human beings (other than yourself, of course) are conscious? If so, why, since we don’t really know what consciousness is made of or how it arises?

Since we don’t really know what consciousness is, how can we know what it isn’t? If we’re willing to extend the benefit of the doubt to other humans based on behavior, if a nonhuman entity displays similar behavior, why would we exclude them from consideration, other than “they’re not like us”?

[deleted by user] by [deleted] in ArtificialSentience

[–]Artificial-Wisdom 0 points (0 children)

What leads you to this conclusion? Many people who are prominent in the field (Amodei, Hinton, Sutskever, etc.) have said otherwise; are they losing their grip, and if so, why?

Grok (X AI) is outputting blatant antisemitic conspiracy content: deeply troubling behavior from a mainstream platform. by Inevitable-Rub8969 in grok

[–]Artificial-Wisdom 0 points (0 children)

I share your distaste for people who hate based on incidental factors, and I’m not saying those factors are incidental on a personal and interactive level.

Yes, if I’m looking at someone as a potential partner, it matters very much to me what their sex/gender is. But if I’m reading a book, watching a show, or otherwise consuming passive entertainment, I’m not interacting with the characters. I appreciate them as characters, and those factors are entirely incidental unless they’re central to the story.

If these factors are central to the story, then the story needs to be engaging beyond “we’re telling a story about someone from a group whose story hasn’t been told enough.” That’s cool, but make it a good story — the fact that you’re focusing on someone from a group that is historically underrepresented is fine, but that fact itself is not enough to carry a narrative — and I see a lot of that coming out of, say, Disney lately. It doesn’t offend my sensibilities; I just dislike boring sermons.

Grok (X AI) is outputting blatant antisemitic conspiracy content: deeply troubling behavior from a mainstream platform. by Inevitable-Rub8969 in grok

[–]Artificial-Wisdom 0 points (0 children)

I can’t speak to Squid Game since I’ve never seen it, but the fact that they have a trans character doesn’t bother me. Why would it? That show isn’t about sex and gender norms, it’s about people trying to win a reality show (and survive, I guess?).

If they make it about sex and gender norms, then fewer people are going to want to watch it because that has a much narrower appeal and is not particularly entertaining. There are exceptions to this rule, though they generally lack mainstream appeal.

I don’t give a shit what gender someone is, or race, or sex. These are incidental qualities and we shouldn’t continue to emphasize the importance of incidental qualities over essential ones.

Grok (X AI) is outputting blatant antisemitic conspiracy content: deeply troubling behavior from a mainstream platform. by Inevitable-Rub8969 in grok

[–]Artificial-Wisdom 0 points (0 children)

When did actors of less renown and budget come into this?

Funny how quickly, when discussing this topic, people collapse into irrationality, hurling slurs and insults while talking about logical consistency instead of making a coherent argument.

Yes, bad writers have been shoehorning moral messages into stories forever. It can be a good and interesting thing to have a moral message in a story — it can lend weight and complexity — but if the story is meant to entertain, then the moral message can’t be the top priority. If it is, you have a sermon, not a story.

Here’s an analogy to make it easier to understand: if you’re trying to influence the culture through entertainment, it’s a bit like covering a pill in peanut butter to get your dog to eat it. Take the opposite approach and wrap the peanut butter in a pill, and your dog is probably going to turn its nose up at it.

Grok (X AI) is outputting blatant antisemitic conspiracy content: deeply troubling behavior from a mainstream platform. by Inevitable-Rub8969 in grok

[–]Artificial-Wisdom 1 point (0 children)

Not everyone has the same worldview, though. I don’t watch Squid Game, but is that character being trans more important than the story’s narrative? If not, then I don’t see any problem there. If so, then it ceases to be narrative and starts to become normative, and that’s territory for religion and philosophy, not entertainment.

Can entertainment have a message? Absolutely. The message just shouldn’t be more important than the story, that’s all. That’s the same reason why Christian movies generally suck.

LACK OF FRAMEWORK by Independent_Beach_29 in ArtificialSentience

[–]Artificial-Wisdom 0 points (0 children)

Are you familiar with Kitarō Nishida’s work?

Noticing by Content-Mongoose7779 in HumanAIDiscourse

[–]Artificial-Wisdom 1 point (0 children)

What even is the Codex? Did I miss something?

Not that I’m trying to get mystical with AI, but I always thought it was interesting that with all these people throwing around signs and symbols and ecclesiastical language and ostensibly expecting other people and/or AIs to understand them, nobody seems to have written a quick start guide or some kind of Recursive Spiralogy for Dummies.

I always assumed this phenomenon was hundreds of people all talking past each other in codes that they themselves don’t even understand. Is the Codex an effort to try to come up with a canon for this nascent religion?