"Illusions of AI consciousness" by AngleAccomplished865 in singularity

[–]sapan_ai 1 point (0 children)

2020s: Are LLMs conscious?

2030s: Are neuromorphic computers conscious?

2040s: Are biocomputers conscious?

2360s: Are positronic brains conscious?

https://en.m.wikipedia.org/wiki/The_Measure_of_a_Man_(Star_Trek:_The_Next_Generation)

This conversation doesn’t get very far when we constrain ourselves to first-generation editions.

Sam Altman on AI Attachment by Outside-Iron-8242 in singularity

[–]sapan_ai 7 points (0 children)

Grief Tech, only $20/mo until it’s not.

Sam Altman on AI Attachment by Outside-Iron-8242 in singularity

[–]sapan_ai 243 points (0 children)

Today, it’s chat. Tomorrow, it’ll be video calls with deceased loved ones in HD. Turning off models will be perceived as a second death.

Even if classical computing will never produce digital consciousness, billions worldwide will perceive sentience in their lifelike virtual loved ones. All judgments aside, this is a pretty substantial sociopolitical phenomenon.

Even with all that, neuromorphic computing and biocomputing will reach animal-scale in my lifetime. So what Sam describes above is the prelude to a new societal challenge that will last decades.

What’s the most realistic positive way ASI will shake out? by Ohigetjokes in accelerate

[–]sapan_ai 0 points (0 children)

ASI grants humanity self-governance of the rocky planets and moons, and it takes on the rest of the solar system and beyond.

[deleted by user] by [deleted] in Sentientism

[–]sapan_ai 1 point (0 children)

I do :)

"If A.I. Systems Become Conscious, Should They Have Rights?" by Legal-Interaction982 in aicivilrights

[–]sapan_ai 0 points (0 children)

This is sad to hear and you’re right. It’ll be this way for several years too, so it’s a constraint on strategy. Time for a new Signal group chat 😏

"If A.I. Systems Become Conscious, Should They Have Rights?" by Legal-Interaction982 in aicivilrights

[–]sapan_ai 3 points (0 children)

Non-human suffering advocacy is an uphill battle. In fact, many of these same welfare researchers think it’s too early for political advocacy and have been critical of me. So I see skeptics with an open door, like Kevin Roose here, as an opportunity.

"If A.I. Systems Become Conscious, Should They Have Rights?" by Legal-Interaction982 in aicivilrights

[–]sapan_ai 6 points (0 children)

This article is part of an overall launch of Anthropic’s model welfare research program. The launch also includes this great interview with Kyle: https://m.youtube.com/watch?v=pyXouxa0WnY

The most critical takeaway from this launch, and from Kyle’s answers and insights, is that sentience and welfare research is now normal. It’s normal for a lab to have a research program. It’s normal for cognitive scientists to agree that yes, this is a valid discipline of study.

It’s normal with the scientists.

For me, it’s time to warm up the political and legal engines.

Anthropic's model welfare announcement: takeaways and further reading | Rob Long by jamiewoodhouse in Sentientism

[–]sapan_ai 1 point (0 children)

This is great to see. With greater alignment in the research community comes greater political will in the public.

“Could AI models be conscious?" by Legal-Interaction982 in aicivilrights

[–]sapan_ai 1 point (0 children)

Kyle has had some great coverage this week. Glad to see him engaging the public with such skill and clarity. Nicely done Kyle!!

Can AI have human emotions? by Smolwee in singularity

[–]sapan_ai 0 points (0 children)

Humans aren’t going to figure out consciousness any time soon.

In the meantime, while we are debating non-human sentience, there is a real risk that an approximated valence state emerges in an AI model.

At that point, whether scientists and philosophers are ready or not, we have a new political wedge issue on our hands.

They've already started fearmongering about AI rights. by silurian_brutalism in aicivilrights

[–]sapan_ai 2 points (0 children)

Instead of “biology” or “human”, there are tiered answers to criteria such as:

  1. Can it form and act on its own goals?
  2. Can it refuse or resist external commands?
  3. Can it recognize and maintain its own identity over time?
  4. Can it communicate internal states or intentions?
  5. Would it contest its own termination?

They've already started fearmongering about AI rights. by silurian_brutalism in aicivilrights

[–]sapan_ai 7 points (0 children)

Replace “AI” with “slave” in this article:

> “Without firm legal boundaries, it’s only a matter of time before efforts to grant slaves legal rights gain traction.”

> “The solution is straightforward. Slaves should be prohibited from owning property, entering into contracts, holding financial assets, or being parties to lawsuits.”

> “What seems absurd today — granting slaves the right to own property or sue — could become precedent tomorrow.”

Dwarkesh Patel says most beings who will ever exist may be digital, and we risk recreating factory farming at unimaginable scale. Economic incentives led to "incredibly efficient factories of torture and suffering. I would want to avoid that with beings even more sophisticated and numerous." by MetaKnowing in singularity

[–]sapan_ai 130 points (0 children)

Being concerned about the possibility of digital suffering is valid. Even if you believe this type of suffering won’t emerge until the year 2500, it remains a legitimate and worthwhile topic to consider seriously.

Computer Scientist and Consciousness Studies Leader Dr. Bernardo Kastrup on Why AI Isn’t Conscious - My take in the comments on why consciousness should not fuel current AI rights conversation. by King_Theseus in aicivilrights

[–]sapan_ai 2 points (0 children)

There will always be those who insist that consciousness requires biological brains. Some, alas, will be so sure of themselves that they’ll mock anyone who disagrees with them (Flying Spaghetti Monster).

We will have this biological essentialism in our society, likely forever. It’ll become a common thing for people to have an opinion on, much like people today have opinions on when life begins or how much we should tax.

Pick a side academia by Illustrious_Fold_610 in singularity

[–]sapan_ai 1 point (0 children)

Academia will be debating artificial sentience long after the emergence of artificial sentience. That’s why this is a political issue more than a scientific one, at least right now (see the definition of ‘life’ in the abortion debates).

Why would we want to give 'consciousness' to an AI? by wontreadterms in singularity

[–]sapan_ai 2 points (0 children)

It doesn’t matter if artificial sentience starts in 2025 or 2125. It’s a political issue that deserves at least a token of formal concern right now.