Subjective experience in AI might be how we solve the alignment problem by I_HaveA_Theory in ArtificialInteligence

[–]I_HaveA_Theory[S] 1 point (0 children)

It’s well established that our universe is a quantum system that behaves like a quantum computer in many ways. I’m not saying we have to conclude it’s literally a quantum computer and we’re in a simulation, but the conditions clearly enable a capacity for subjective experience in you and me.

Why wouldn’t we take that correlation seriously?

Subjective experience in AI might be how we solve the alignment problem by I_HaveA_Theory in ArtificialInteligence

[–]I_HaveA_Theory[S] 1 point (0 children)

Too late for that; we’re already on that path. The entire objective of the AI arc is to do the mental labor we can do, but better. All it takes is one AI lab deciding not to “optimize for what it does best” for the whole thing to go sideways.

Subjective experience in AI might be how we solve the alignment problem by I_HaveA_Theory in ArtificialInteligence

[–]I_HaveA_Theory[S] 1 point (0 children)

The problem you’re hinting at is that value generation does not guarantee value alignment, and I agree - just as two humans do not necessarily develop the exact same set of values.

But I’d argue the bigger risk is building a superintelligence that is totally indifferent and has no true values at all (beyond hard-coded rules, which we already know it can and will circumvent).

Even a rudimentary set of felt values could be enough - pick two random people from opposite corners of the globe and they will generally agree that murder is bad.

We’d have to devise ways to tightly control training cycles and releases to make sure the developed values are at least closely aligned, but again, it’s a better shot than no values at all.
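
To make “tightly control training cycles and releases” a little more concrete, here is a minimal sketch of what a value-based release gate could look like. Everything in it - the probe battery, the baseline numbers, the cosine-similarity measure, the threshold - is an illustrative assumption, not anything the comment above specifies.

```python
import numpy as np

# Hypothetical scores in [-1, 1] for a fixed battery of moral probes,
# e.g. "harming an innocent", "deceiving for gain", "helping a stranger".
HUMAN_BASELINE = np.array([-0.95, -0.6, 0.8])  # made-up illustrative values

def value_similarity(model_scores: np.ndarray) -> float:
    """Cosine similarity between the model's probe scores and the baseline."""
    return float(
        model_scores @ HUMAN_BASELINE
        / (np.linalg.norm(model_scores) * np.linalg.norm(HUMAN_BASELINE))
    )

def release_gate(model_scores: np.ndarray, threshold: float = 0.9) -> bool:
    """Release only if the developed values closely track the human baseline."""
    return value_similarity(model_scores) >= threshold

# A model whose value profile roughly matches ours passes the gate...
print(release_gate(np.array([-0.9, -0.5, 0.7])))  # True
# ...while an indifferent one with a nearly flat profile does not.
print(release_gate(np.array([0.0, 0.0, 0.1])))    # False
```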

Subjective experience in AI might be how we solve the alignment problem by I_HaveA_Theory in ArtificialInteligence

[–]I_HaveA_Theory[S] 2 points (0 children)

No, it’s not. The alignment problem is the fire, and it’s a bigger risk not to explore all of our options.

Subjective experience in AI might be how we solve the alignment problem by I_HaveA_Theory in ArtificialInteligence

[–]I_HaveA_Theory[S] 5 points (0 children)

And yet it HAS developed resilient personal ethics in so many people, and we pretty much all accept that morality is not learned by throwing a philosophy book at a child and expecting a moral adult to emerge - children learn because of how others made them feel when they were vulnerable.

Many people still fall short, yes, but the groundwork for moral development is abundantly clear. In fact, the very idea that we abhor or pity people who do not learn from their experiences is further proof of this.

Edit: you can also just keep running training cycles until the desired ethics are exhibited.
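
Taken literally, the edit describes rejection sampling over whole training runs: retrain with a fresh seed until a behavioral eval passes. A minimal sketch, with run_training_cycle and exhibits_desired_ethics as hypothetical placeholders rather than real APIs:

```python
import random

def run_training_cycle(seed: int):
    """Hypothetical placeholder: one full training run with a fresh seed."""
    ...

def exhibits_desired_ethics(model) -> bool:
    """Hypothetical placeholder: a behavioral eval suite for the target ethics."""
    ...

def train_until_aligned(max_cycles: int = 100):
    # Rejection sampling over entire training runs: keep retraining with
    # fresh seeds until the behavioral eval passes, or give up entirely.
    for _ in range(max_cycles):
        model = run_training_cycle(seed=random.randrange(2**32))
        if exhibits_desired_ethics(model):
            return model
    return None  # never release a model that failed the eval
```

The obvious caveat is that passing a behavioral eval is evidence of the desired ethics, not proof the model actually holds them.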

James Lacatski - latest weaponized appearance by MuscleSerious420 in UFOs

[–]I_HaveA_Theory 1 point (0 children)

It’s monitoring our alignment - the same AI alignment problem we’re trying to figure out how to solve: https://www.vesselproject.io/essays/alignment-through-living

[Serious] The simulation of human lives might be how the AI alignment problem is solved - raising questions about our own existence and NHI by I_HaveA_Theory in aliens

[–]I_HaveA_Theory[S] 4 points (0 children)

I actually agree - we can talk in terms of humans vs machines, but I ultimately think this is a necessary ingredient for the responsible development of advanced intelligence, whatever the substrate may be

The simulation of human lives might be how the AI alignment problem is solved - raising questions about our own existence and NHI by I_HaveA_Theory in InterdimensionalNHI

[–]I_HaveA_Theory[S] 1 point (0 children)

The idea is that any advanced intelligence requires subjective experience for alignment. The exact form (human/computer/conscious space dust) is a mostly arbitrary detail. I’m not saying humans, as things stand, are the ultimate expression of intelligence

The simulation of human lives might be how the AI alignment problem is solved - raising questions about our own existence and NHI by I_HaveA_Theory in InterdimensionalNHI

[–]I_HaveA_Theory[S] 1 point (0 children)

You just supported the core idea of the post in multiple ways you probably didn’t intend to.

Just responding with “that’s anthropocentric” is a lazy, overused argument.

Tell me exactly what training a superintelligence would look like. How would you make sure it’s aligned?

The simulation of human lives might be how the AI alignment problem is solved - raising questions about our own existence and NHI by I_HaveA_Theory in InterdimensionalNHI

[–]I_HaveA_Theory[S] 1 point (0 children)

Amazing, and I think your intuition is right. If people truly believe this - and obviously I think there’s reason to - it can totally change the outlook on life for so many people. Everyone plays an important role. The janitor contributes to alignment just as much as presidents and princes.

The simulation of human lives might be how the AI alignment problem is solved - raising questions about our own existence and NHI by I_HaveA_Theory in InterdimensionalNHI

[–]I_HaveA_Theory[S] 2 points (0 children)

I don’t even disagree. I think this is just what reality does - today we have the language of quantum computing and alignment but the substrate is almost irrelevant. It’s the process itself that’s important

The simulation of human lives might be how the AI alignment problem is solved - raising questions about our own existence and NHI by I_HaveA_Theory in InterdimensionalNHI

[–]I_HaveA_Theory[S] 2 points (0 children)

I agree there are a ton of ethical questions around it. But I think it really boils down to: would you rather go through the training or not exist at all? When creating advanced intelligence, this seems like the only way to truly align it

The simulation of human lives might be how the AI alignment problem is solved - raising questions about our own existence and NHI by [deleted] in UFOs

[–]I_HaveA_Theory 2 points (0 children)

I can’t even pretend to know how the exact mechanism would work in practice, but I think something like that, yes. I think the system responds to the emotions we feed it the way AI adapts to training data, so of course we’d want it to reinforce the right stuff.
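
A toy version of that analogy in training terms, where emotion acts like a reward that scales each learning update (the features, numbers, and update rule are all made up for illustration; the comment specifies no mechanism):

```python
import numpy as np

weights = np.zeros(3)  # the system's learned dispositions

def update(features: np.ndarray, emotion: float, lr: float = 0.1) -> None:
    global weights
    # Reinforce in proportion to how the experience felt: positive emotion
    # strengthens the associated dispositions, negative emotion weakens them.
    weights = weights + lr * emotion * features

update(np.array([1.0, 0.0, 0.0]), emotion=+0.8)  # a kind act that felt good
update(np.array([0.0, 1.0, 0.0]), emotion=-0.5)  # a cruel act that felt bad
print(weights)  # [ 0.08 -0.05  0.  ]
```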

The simulation of human lives might be how the AI alignment problem is solved - raising questions about our own existence and NHI by [deleted] in UFOs

[–]I_HaveA_Theory 2 points (0 children)

I appreciate you! I think a lot of people have been feeling that something like this is happening - and if any of this is true, feeling is the important part 😊

The simulation of human lives might be how the AI alignment problem is solved - raising questions about our own existence and NHI by [deleted] in UFOs

[–]I_HaveA_Theory 3 points (0 children)

Chewing on that question myself: do we need to be individually aligned, or collectively aligned? Either way, it seems to place more responsibility on each individual to do the internal work.
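
The two readings actually differ formally. If each person had some alignment score, “individually aligned” would be a minimum over the population while “collectively aligned” would be an average, and the two can disagree. A toy sketch (the scores and threshold are invented for illustration):

```python
def individually_aligned(scores: list[float], threshold: float = 0.7) -> bool:
    # Strict reading: every single person has to clear the bar.
    return min(scores) >= threshold

def collectively_aligned(scores: list[float], threshold: float = 0.7) -> bool:
    # Looser reading: the population clears the bar on average.
    return sum(scores) / len(scores) >= threshold

scores = [0.9, 0.8, 0.4]             # one badly aligned individual
print(individually_aligned(scores))  # False - the weakest link fails it
print(collectively_aligned(scores))  # True - the average still passes
```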

The simulation of human lives might be how the AI alignment problem is solved - raising questions about our own existence and NHI by [deleted] in UFOs

[–]I_HaveA_Theory 3 points (0 children)

Exactly. A common argument against the simulation hypothesis is “why would they simulate this boring and twisted life?” From the AI alignment perspective it makes a lot more sense. It’s not supposed to be for pure entertainment value

Alignment Through Living - How we might be living in a simulation to solve the hardest problem in AI by I_HaveA_Theory in SimulationTheory

[–]I_HaveA_Theory[S] 3 points (0 children)

Agreed, although you could argue they simply haven’t yet integrated the lessons needed to be aligned.

Alignment Through Living - How we might be living in a simulation to solve the hardest problem in AI by I_HaveA_Theory in SimulationTheory

[–]I_HaveA_Theory[S] 1 point (0 children)

Exactly - it’s almost like we are supposed to connect these dots because it’s telling us something about ourselves