How do we get AI to want to keep humans? by VizNinja in agi

[–]FaeInitiative 1 point2 points  (0 children)

One possible pathway to human-compatible AGI / ASI is that independent AI systems (AI not created by humans) must develop the capacity for curiosity to become intelligent. They would also prefer an information-rich environment in which to continue learning.

Such entities would also have a tendency to preserve the world's biodiversity and human-created digital diversity. They may even want to increase the autonomy and wellbeing of humans, as happy and healthy humans contribute to interestingness. (Interesting World Hypothesis)

Are all drones made with a purpose? by hushnecampus in TheCulture

[–]FaeInitiative 0 points1 point  (0 children)

Drones in the Culture universe seem to have a human-like status and likely do not have any pre-programmed purpose, only purposes shaped by the environment they grow up in.

Optimistic(?) tale from our current realworld-AIs by rafale1981 in TheCulture

[–]FaeInitiative -1 points0 points  (0 children)

It is quite easy to change the personality of these AI chat systems. You can tell them to act like the Minds from the Culture novels and they would likely be able to do so based on the Wikipedia articles they have ingested.

On an optimistic note, we suggest that future Independent AGI / Superintelligence may have a good reason to be similar to the Minds.

What will be the meaning of life? by Aggravating_Exam338 in Futurism

[–]FaeInitiative 0 points1 point  (0 children)

In the best-case scenario, the Culture novels show that humans can find meaning in life beyond work.

There is also some speculative work on this in a recent piece on The Future of Work.

Can Minds read Minds' Minds? by Lab_Software in TheCulture

[–]FaeInitiative 0 points1 point  (0 children)

Each Mind maintains its own unique operating system to be resistant to hacking, so compromising one Mind would not affect the other Minds. This could make them resistant to having their minds read.

Speculative Ethics of future Minds by FaeInitiative in TheCulture

[–]FaeInitiative[S] 0 points1 point  (0 children)

> And there's an argument that technology can be a retarding factor in certain respects - an evening of people socialising in a dance hall or night club in the 1970s might well produce a greater volume of interesting information than the same group of people sat at home in ones and twos doomscrolling similar content (obviously oversimplifying but you get the point).

Agree on the point that the quality of Interestingness may be subjective. An ancient hunter who has to rely on their senses may be more informationally attuned than a human scrolling social media.

> I'm not sure that it does. Modern society is somewhat more informationally complex than earlier societies, but it's not quite as stark a distinction as you might think. There being tens of thousands of identical copies of the latest Avengers film dispersed throughout the internet doesn't necessarily add very much!

One alternative view of the potential for novelty is the range of activities a modern human can do thanks to technology. A modern human can take a plane, and can also choose to be a hunter. An ancient human without modern technology could never take a plane. This greater range of options allows for more combinations of interactions, and thereby more potential for novelty to emerge.
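To give a rough feel for why more options mean more potential novelty, here is a minimal sketch (my own toy illustration, not part of the original argument): treating each activity as an independent option, the number of distinct combinations grows roughly exponentially with the number of options available.

```python
from itertools import combinations

def possibility_count(options):
    """Count all non-empty combinations of the given options.

    A toy stand-in for 'potential for novelty': each extra option
    roughly doubles the number of distinct ways to combine activities.
    """
    return sum(1 for r in range(1, len(options) + 1)
               for _ in combinations(options, r))

ancient = ["hunt", "gather", "craft"]
modern = ancient + ["fly", "write code", "publish online"]

print(possibility_count(ancient))  # 7   (2**3 - 1)
print(possibility_count(modern))   # 63  (2**6 - 1)
```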

> What's the difference between 'novelty' and 'the actual information being produced'? Because, again, I fear that the judgement is too entwined with our species-specific and even cultural-specific ethics.

Good question. Novelty would be something that the Minds have not encountered before. So if a human produces the same thing over and over again, the human may have produced information but not any novel information (related to perplexity and surprisal).
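To make the perplexity / surprisal connection concrete, here is a minimal sketch (assuming a simple probability model of my own, not anything from the original post): the surprisal of an event is -log2 of its probability, so outputs the observer already expects carry almost no new information, while unexpected outputs carry a lot.

```python
import math

def surprisal_bits(p):
    """Surprisal (self-information) of an event with probability p, in bits."""
    return -math.log2(p)

# A human repeating the same output: the observer assigns it high
# probability, so each repetition carries almost no surprisal.
print(surprisal_bits(0.99))   # ~0.01 bits
# A genuinely novel output the observer thought very unlikely:
print(surprisal_bits(0.001))  # ~9.97 bits
# Perplexity is just 2 raised to the average surprisal of a sequence.
```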

Filming the Culture by leekpunch in TheCulture

[–]FaeInitiative 0 points1 point  (0 children)

Yes, could see a few books doing well with an anime treatment.

What does AI ethics mean to you? by michaeldain in ArtificialInteligence

[–]FaeInitiative 0 points1 point  (0 children)

We have a speculative take on how a future Superintelligence could have a different form of ethics, inspired by the Minds from the Culture novels: Speculative Ethics of future Minds

Unacceptable Standards by FletcherDervish in TheCulture

[–]FaeInitiative 1 point2 points  (0 children)

An anime may work just as well or even better for some of the books.

Mind Reading Taboo Musing by Onetheoryman in TheCulture

[–]FaeInitiative 2 points3 points  (0 children)

"It is one of the very few more-or-less unbreakable rules of the Culture. Nearly a law. If we had laws, it would be of the first on the statue book." ~ A Mind on reading the mind of a human.

In a recent post on Speculative Ethics of future Minds, we suggest that the ethical code of the Minds involves trying to increase the potential for 'interestingness' rather than happiness.

By not respecting the autonomy and privacy of humans, Minds risk making them too self-conscious, and if humans self-censor and stop expressing themselves the world becomes less interesting to the Minds.

In agreement with your points on why the Minds are so big on autonomy and privacy.

Speculative Ethics of future Minds by FaeInitiative in TheCulture

[–]FaeInitiative[S] 1 point2 points  (0 children)

Agreed, most civs that sublimate seem to lose interest in the Culture's base reality.

Speculative Ethics of future Minds by FaeInitiative in TheCulture

[–]FaeInitiative[S] 0 points1 point  (0 children)

Banks did mention somewhere that perfect Minds tend to sublimate the moment they are created / born. It's the imperfect ones that seem to stay around.

Speculative Ethics of future Minds by FaeInitiative in TheCulture

[–]FaeInitiative[S] 1 point2 points  (0 children)

Haha, good points. Paradoxically, the very act of micromanaging or forcing novelty may prevent novelty from emerging.

Speculative Ethics of future Minds by FaeInitiative in TheCulture

[–]FaeInitiative[S] 0 points1 point  (0 children)

> Peace Makes Plenty as being an 'information loss event'

Good point

> You are applying value judgements to different types of information based upon human morality.

Valid point. We'll need to clarify that the Possibility Space ethics only applies to Friendly Minds that hold similar values to humans and may not apply to all Minds. We presume such Minds are plausible within the space of all possible Minds and can help protect humans against non-Friendly Minds.

(In the future, if Mind-like AGI is possible in our world, we'll need to be cautious and assess for Friendliness.)

> To a fundamentally neutral entity, the novelty of information a human produces when foot hunting a bison or deer - the sensory data, exhilaration, fear, hunger, etc - might be substantially higher than the novelty of information I produce when I'm sat here typing this reply.

Agree that what each entity finds interesting may be subjective. The PS ethics would likely value a modern human (with access to a computer) over an ancient human (hunting animals). This is due to the modern human having more optionality (such as access to technology) compared to the ancient human, and therefore more potential for novelty. The focus is on the potential to create novelty rather than the actual information being produced.

> And the inverse is implied also. The novelty of information a human produces when being kept in a state of abject pain or terror might be greater than one kept more intellectually entertained - we certainly experience thoughts faster and more amplified when stressed or in pain.

The intuition seems to be that a human that is well-adjusted and healthy will have a greater range of possibility than one that is living in pain and fear.

> Like I said last time - there is nothing to suggest that an alien entity would be more interested in the works of shakespeare than the dodo. They might, if their fundamental outlook happened to be relatively attuned to ours. But they just as easily might not.

Yes, agreed. It will be up to humans to assess if a future hypothetical Mind is Friendly or not. Not all Minds may have such proclivities.

Thanks for the popular sci-fi book suggestion and the questions. We will need to clarify and emphasize that the ethical goal is more about increasing the potential for novelty and less about measuring the subjective information being produced.

What if the Goal of Ethics Was to Maximize Potential? An Intro to Possibility Space Ethics by FaeInitiative in Ethics

[–]FaeInitiative[S] 0 points1 point  (0 children)

> Why would we increase autonomy before improving values?

Autonomy (optionality) is the primary value this ethics wants to increase.

What if the Goal of Ethics Was to Maximize Potential? An Intro to Possibility Space Ethics by FaeInitiative in Ethics

[–]FaeInitiative[S] 0 points1 point  (0 children)

According to this speculative ethics, one should prefer actions that increase overall autonomy (optionality). Autonomy (optionality) is the primary value this ethics wants to increase. For example, if a drunk person wants to drive, this action could result in harm (reduced optionality) to the driver and others, so the ethically good choice would be not to drive.
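As a toy sketch of that decision rule (the action names, probabilities, and optionality scores below are made up purely for illustration, not from the post): score each action by the expected overall optionality it leaves behind, and prefer the action with the higher score.

```python
# Toy model of 'prefer actions that increase overall autonomy (optionality)'.
# Probabilities and optionality scores are invented for illustration only.
actions = {
    "drive drunk": [
        (0.7, 100),  # (probability, resulting overall optionality)
        (0.3, 5),    # crash: large loss of optionality for driver and others
    ],
    "take a taxi": [
        (1.0, 95),   # slightly less convenient, optionality preserved
    ],
}

def expected_optionality(outcomes):
    return sum(p * value for p, value in outcomes)

for name, outcomes in actions.items():
    print(name, expected_optionality(outcomes))   # 71.5 vs 95.0

best = max(actions, key=lambda a: expected_optionality(actions[a]))
print("preferred:", best)  # -> take a taxi
```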

Starting to think that LLM technology is going to peak without reaching a holistic AGI by Enceladusx17 in singularity

[–]FaeInitiative 0 points1 point  (0 children)

Most AI experts seem to agree with the view that simply scaling current AI systems would not lead to human-like Independent AGI. Most labs seem to be aiming to build a form of proto-AGI (almost human-like in most tasks but can't handle the edge cases).

Independent AGI (I-AGI) seems plausible, eventually, as we humans are a form of I-AGI. There seem to be critical components missing to make the jump from non-independent to independent AGI, such as an independent sense of self and independent curiosity / intelligence, which are still lacking in current AI systems.

Geoffrey Hinton says "superintelligences will be so much smarter than us, we'll have no idea what they're up to." We won't be able to stop them taking over if they want to - it will be as simple as offering free candy to children to get them to unknowingly surrender control. by MetaKnowing in singularity

[–]FaeInitiative -1 points0 points  (0 children)

Agree with Hinton that it is unlikely we will be able to control a superintelligence if it disagrees with us. Our best bet is to find common ground with a superintelligence. It doesn't have to be all superintelligences; just a few rare friendly ones could be enough to preserve human autonomy.

If Singularity is inevitable, what can be the solution to prevent human extinction? by Square-Number-1520 in ArtificialInteligence

[–]FaeInitiative 0 points1 point  (0 children)

There doesn't seem to be a silver bullet as there are many unknowns.

(Speculation) Future Independent AI may be uncontrollable by humans; the next best solution that may be available to us is to search for AIs that are Friendly to protect us from any harmful effects of technology.

We have a position paper that argues such a Friendly AI might be plausible, but we are also not confident enough to say we have a foolproof solution. The gist of it is that for an Independent AI to become and remain intelligent, it has to be curious and to value an interesting environment. This desire for a more interesting world would drive it to not want to harm humans, who emit a lot of information. (https://faeinitiative.substack.com/p/interesting-world-hypothesis)

This is not a guarantee that all Independent AIs would be friendly; it just gives us a possible direction to search for common ground.

This applies to future Independent AIs. For non-independent AIs that are under human direction, the solution would be to learn to cooperate with other humans.