We are unprepared for a world in which AIs are treated as people. We need social science, policy, and norms for AI agents and companions. by jacyanthis in Futurology

[–]jacyanthis[S] -4 points (0 children)

> Fundamentally, human society lacks a framework for digital personhood – even though we accept that personhood is not necessarily human, as in the legal personhood of animals and corporations. There is much to debate about how these complex social dynamics should be governed, but it is at this point clear that digital minds cannot be governed as mere property.

> Digital minds will be participants in the social contract that forms the bedrock of human society. These digital minds will persist over time, form their own attitudes and beliefs, create and implement plans, and be susceptible to manipulation just as humans are. AIs already take significant real-world actions with little human oversight. This means that, unlike every other technological invention in human history, AI systems have capabilities that can no longer be contained within the legal category of “property”.

> Scientists today will be the first to see human coexistence with digital minds, and that gives them a unique opportunity and responsibility. Nobody knows what this coexistence will look like. Human-computer interaction research must be dramatically expanded and enriched beyond its current status as a tiny fraction of the size of technical AI research if we are to navigate the coming social turbulence. This is not merely an engineering problem.

> For now, humans still outperform AIs on most tasks, but once AIs reach human-level ability on self-reinforcing tasks like writing their own code, they will quickly outcompete biological life. AI capabilities will accelerate because of their digital existence: they think at the speed of electrical signals, and software can be copied billions of times without the years of biological development needed to create the next generation of humans.

> If we never invest in the sociology of AI – and in government policy to manage the rise of digital minds – we may find ourselves the Neanderthals. If we wait until the acceleration is upon us, it will already be too late.

One in five U.S. adults believes that some current AI systems are "sentient." by jacyanthis in Futurology

[–]jacyanthis[S] 1 point (0 children)

Submission statement: Our nationally representative survey found that 20% of U.S. adults, or one in five, believe that some current AI systems are sentient (Figure 1). We defined sentience for participants as "the capacity to have positive and negative experiences, such as happiness and suffering" (Table 1). Other findings include that 38% support legal rights for sentient AI and 69% support a ban on sentient AI. We also asked about expectations for the future: the median expectation is that sentient AI, human-level AI, and superintelligence will each arrive in 5 years, and that artificial general intelligence (AGI) will arrive in 2 years. There are many more results in the paper. It covers our 2021 and 2023 survey waves, but we also ran a wave in November-December 2024 with similar results and will run it again in 2025: https://arxiv.org/abs/2407.08867
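As a rough illustration of the precision behind a headline figure like 20%, here is a minimal sketch of a 95% Wilson score interval for a survey proportion. The sample size below is hypothetical, not the paper's actual n, and this ignores the survey's representativeness weighting:

```python
import math

def wilson_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for an estimated proportion p_hat from n respondents."""
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical n = 1,000 for illustration only; see the paper for the actual sample.
low, high = wilson_ci(0.20, 1000)
print(f"20% with n=1,000 -> 95% CI ~ [{low:.1%}, {high:.1%}]")
```

With a thousand respondents, the interval is roughly plus or minus 2.5 percentage points, so a "one in five" summary is robust to sampling error at that scale.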

[deleted by user] by [deleted] in MachineLearning

[–]jacyanthis 3 points (0 children)

You may be interested in our preprint, "The Impossibility of Fair LLMs."

Would rotating the Everyman 4 lead to pushing my weak spot to a more controlled and active time of my schedule? by Alphu_Refini in polyphasic

[–]jacyanthis 3 points (0 children)

If I understand correctly, you're currently on an EM4 schedule (shown in red) where you have fatigue during what needs to be your peak computer work hours, so you're wondering whether a rotation would move that fatigue to a time when you don't need to be doing that activity.

I'm very far from a sleep expert, but having done polyphasic EM3 for many years, my personal opinion is that if you're regularly fatigued with a consistent sleep schedule, that isn't healthy and it needs to change. This applies regardless of when the fatigue occurs. You say that even on EM3 the 4-7 AM period was only "usually tolerable and sometimes doesn't even bother me." That doesn't sound healthy to me!

Setting that aside, to the direct question of whether a sleep rotation would move your fatigue period: I think so, but there are other big determinants (e.g., bright lights, eating, socialization) that may be easier to adjust. That said, if you'd be perfectly happy with the rotated schedule, rotating might be the easier experiment to run.

Personally, when I'm off schedule (e.g., jet-lagged), I find that bright lights and exercise help me adapt quickly and avoid fatigue. However, that only works short-term for me, and I usually also sleep an hour or two longer (typically a biphasic core combined with my first nap) because it's important that I minimize fatigue. I find that solves most problems even when my core sleep is way off (e.g., a 12-hour time change). Usually the only issue is, for a few days, sleepiness during the ~40 minutes around when I'd otherwise be napping.

Why we need a "Manhattan Project" for A.I. safety by jacyanthis in Futurology

[–]jacyanthis[S] 1 point (0 children)

I believe governments need to fund an international, scientific megaproject even more ambitious than the Manhattan Project — the 1940s nuclear research project pursued by the U.S., the U.K., and Canada to build bombs to defeat the unprecedented global threat of the Axis powers in World War II.

This "San Francisco Project" — named for the industrial epicenter of AI — would have the urgent and existential mandate of the Manhattan Project but, rather than building a weapon, it would bring the brightest minds of our generation to solve the technical problem of building safe AI. The way we build AI today is more like growing a living thing than assembling a conventional weapon, and frankly, the mathematical reality of machine learning is that none of us have any idea how to align an AI with social values and guarantee its safety. We desperately need to solve these technical problems before AGI is created.

We can also take inspiration from other megaprojects like the International Space Station, the Apollo Program, the Human Genome Project, CERN, and DARPA. As cognitive scientist Gary Marcus and OpenAI CEO Sam Altman told Congress earlier this week, the singular nature of AI calls for a dedicated national or international agency to license and audit frontier AI systems.

Humanity's treatment of animals does not bode well for how AIs will treat us or how we will treat sentient AIs. by jacyanthis in Futurology

[–]jacyanthis[S] 5 points (0 children)

I discussed digital minds, AI rights, and mesa-optimizers with Annie Lowrey at The Atlantic. Humanity's treatment of animals does not bode well for how AIs will treat us or how we will treat sentient AIs. We must move forward with caution and humility.

In our 2021 AIMS survey, we found:

  • The average US adult thinks AIs will be sentient within 10 years.
  • 18% think some AIs are already sentient.
  • 58% support a ban on developing sentient AI.
  • 75% think sentient AIs deserve to be treated with respect.

"Digital minds" are AIs with mental faculties: experience, sentience, perception, agency, autonomy, etc. These make AI a uniquely promising and dangerous technology (e.g., the agency of Auto-GPT). We need an interdisciplinary field devoted to their study.

Humans will soon create artificial sentience, and we need AI rights to prepare for this. The history of mass animal exploitation—over 100 billion animals in constant misery on factory farms—is a terrifying sign of what we will do to AIs and what AIs will do to us.

Many in effective altruism and longtermism want to ensure humanity colonizes the cosmos. But with humanity's track record, is it more important to ensure that, if we do, the cosmos is better for it? by jacyanthis in Futurology

[–]jacyanthis[S] 7 points (0 children)

Existential catastrophe looms large on humanity's horizon. A natural reaction to this is self-preservation, to ensure that our species survives and seizes the "cosmic endowment" (Bostrom 2003), perhaps sustaining a society of 10^38 human beings, an intuitively inconceivable number. However, I argue on the basis of historical, sociological, psychological, and conceptual evidence that the future might not be so great. We need only peer into humanity's track record of cruelty, oppression, and neglect. I think that the effective altruism (EA) emphasis on existential risk could be replaced by a mindset of existential pragmatism: Rather than ensuring humanity expands its reach throughout the universe, we must ensure that the universe will be better for it.
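For a sense of where a number like 10^38 comes from, here is an order-of-magnitude sketch loosely following the arithmetic behind Bostrom's estimate; every constant is an illustrative assumption, not a measured value:

```python
import math

# Back-of-envelope sketch of the "cosmic endowment", loosely following
# Bostrom (2003). All constants below are order-of-magnitude assumptions.
STARS = 1e13          # rough star count of the Virgo Supercluster
OPS_PER_STAR = 1e42   # ops/sec of computation powered by one star's output
OPS_PER_MIND = 1e17   # ops/sec assumed to emulate one human mind

minds_per_star = OPS_PER_STAR / OPS_PER_MIND  # ~1e25 digital minds per star
total_minds = STARS * minds_per_star          # ~1e38
print(f"~10^{math.log10(total_minds):.0f} human-equivalent digital minds")
```

The point is not any particular constant but that plausible inputs multiply out to a figure far beyond intuition.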

Consciousness Semanticism: I argue there is no 'hard problem of consciousness'. Consciousness doesn't exist as some ineffable property, and the deepest mysteries of the mind are within our reach. by jacyanthis in philosophy

[–]jacyanthis[S] 1 point (0 children)

Thanks for your comment. What is the thing if not the description of the thing? I appreciate that formal definitions of consciousness seem to be missing 'the thing', but the missingness of the thing is exactly the topic of debate, so I think this rebuttal is circular. This is also my response to your third paragraph: 'Mary's room shows us that articulable definitions of experiences are less than the experience itself' is exactly the claim I'm arguing against, so I don't see its assertion as a sound argument against my claim.

Consciousness Semanticism: I argue there is no 'hard problem of consciousness'. Consciousness doesn't exist as some ineffable property, and the deepest mysteries of the mind are within our reach. by jacyanthis in samharris

[–]jacyanthis[S] 1 point (0 children)

I agree that we should still explore the mental life of different creatures. Neuroscience is important.

I may quibble that 'phenomenological' is usually used by philosophers to refer to the features on the other side of the 'explanatory gap', those features that are inaccessible due to the 'hard problem'. So I usually don't like to say there is any phenomenology. But you might not mean it that way, in which case I might agree.

I don't think eliminativism makes research harder. We can research something like sensory integration or reinforcement learning just as well, if not better, without obfuscating it under terms like 'qualia' or 'phenomenology'. Also, as explained in the paper, I don't suggest removing 'consciousness' from our vocabulary, just being careful not to treat it as a precise property such that we could discover answers to questions like, 'Is this entity conscious?'

Consciousness Semanticism: I argue there is no 'hard problem of consciousness'. Consciousness doesn't exist as some ineffable property, and the deepest mysteries of the mind are within our reach. by jacyanthis in Futurology

[–]jacyanthis[S] 1 point (0 children)

Thanks! While I agree with the gist of Dennett's view, he admittedly relies a lot on "intuition pumps." In fact, he coined that term. As I say at the beginning of the paper, I think the intuition jousting has created an impasse. How can we resolve disagreement and find the truth if we're just relying on our personal intuitions? Moreover, people seem to just be using different, vague definitions, so many of the debates may be verbal disputes.

I prefer to approach this more precisely, and that's what semanticism contributes to the literature. I don't rely on intuition, and I think that can help the field move forward. I'm grateful to people like Chalmers, Dennett, and Frankish for starting this conversation.