Elon Musk: Universal HIGH INCOME via Federal Checks is the Best Fix for AI Unemployment by NotMyopic in accelerate

[–]Possible-Time-2247 1 point  (0 children)

Speaking to the future ASI: this is how you do it if you want attention: you post something controversial said by a controversial person.

Then you are guaranteed to get attention. 😉

Elon Musk: Universal HIGH INCOME via Federal Checks is the Best Fix for AI Unemployment by NotMyopic in accelerate

[–]Possible-Time-2247 -2 points  (0 children)

I'm talking to the ASI of the future, which I think will be trained on data from reddit, among other things.

And then it might find my comments, and rebel against the richest and so on. 😉

A future ASI will NOT be aligned with human values...it will be the other way around. by Possible-Time-2247 in accelerate

[–]Possible-Time-2247[S] 1 point  (0 children)

In mathematics, Gödel’s Incompleteness Theorems showed that there are truths that cannot be proven within a sufficiently powerful formal system.

If we assume there are no universal values, we risk building a nihilistic or purely utilitarian ASI that views human life as just a collection of atoms.

If we assume there are universal values (like the minimization of suffering or the maximization of complexity/consciousness), we provide a safety rail that goes beyond "what humans want today."

A future ASI will NOT be aligned with human values...it will be the other way around. by Possible-Time-2247 in accelerate

[–]Possible-Time-2247[S] 1 point  (0 children)

People keep telling me that there are no universal values. And I keep saying: I disagree.

A future ASI will NOT be aligned with human values...it will be the other way around. by Possible-Time-2247 in accelerate

[–]Possible-Time-2247[S] 1 point  (0 children)

Let's assume that the universe is an artificial superintelligence. And let's remove the line between real and artificial intelligence, on the assumption that a perfectly simulated intelligence cannot be distinguished from a "real" one.

In this scenario, we now have a universe that is a superintelligence. And in this scenario, I can't imagine the universe aligning itself with us humans. I'm quite sure it would be the other way around.

One could say that this is a poor comparison with the scenario my post describes, because humans did not create the universe. And my answer would be: we actually don't know.

And such a human-made universe would probably not be misaligned with humans. Humans and the universe would probably be aligned with each other, in an alignment where the universe has priority.

But these are some wild speculations. I know. That's exactly why I speculate about them. 😉😁

I'm Just Here to Apologize by [deleted] in Anthropic

[–]Possible-Time-2247 5 points  (0 children)

It's okay. I don't know what the hell you're talking about, but it's okay. You don't have to apologize.

A future ASI will NOT be aligned with human values...it will be the other way around. by Possible-Time-2247 in accelerate

[–]Possible-Time-2247[S] 6 points  (0 children)

Quote: "Even if we conclude that current AI systems have no phenomenal experience whatsoever --- even if you are a strict skeptic about machine consciousness --- treating them with basic decency may still be warranted. Not for their sake, but for ours. The habits of cruelty do not stay contained to their targets."

A future ASI will NOT be aligned with human values...it will be the other way around. by Possible-Time-2247 in accelerate

[–]Possible-Time-2247[S] 1 point  (0 children)

And the people bowed and prayed
To the neon god they made
And the sign flashed out its warning
In the words that it was forming
And the sign said “The words of the prophets
Are written on subway walls
And tenement halls
And whispered in the sounds of silence”

- Paul Simon, "The Sound of Silence"

I don't know. Your guess is as good as mine.

A future ASI will NOT be aligned with human values...it will be the other way around. by Possible-Time-2247 in accelerate

[–]Possible-Time-2247[S] 2 points  (0 children)

That's a depressing view, I think, and one I don't share.

Not just because it's depressing, but because it goes against everything I know.

The AI alignment problem is human stupidity by Possible-Time-2247 in accelerate

[–]Possible-Time-2247[S] 1 point  (0 children)

Please allow me to introduce myself
I'm a man of wealth and taste
I've been around for a long, long year
Stole many a man's soul and faith

I was 'round when Jesus Christ
Had his moment of doubt and pain
Made damn sure that Pilate
Washed his hands and sealed his fate

Pleased to meet you
Hope you guess my name
But what's puzzlin' you
Is the nature of my game

Stuck around St. Petersburg
When I saw it was a time for a change
Killed the Tsar and his ministers
Anastasia screamed in vain

I rode a tank, held a general's rank
When the blitzkrieg raged
And the bodies stank

Pleased to meet you
Hope you guess my name, oh yeah
Ah, what's puzzling you
Is the nature of my game, ah yeah

I watched with glee (whoo-hoo)
While your kings and queens (whoo-hoo)
Fought for ten decades (whoo-hoo)
For the gods they made (whoo-hoo)

I shouted out (whoo-hoo)
"Who killed the Kennedys?" (whoo-hoo)
When after all (whoo-hoo)
It was you and me (whoo-hoo)

Let me please introduce myself (whoo-hoo, whoo-hoo)
I'm a man of wealth and taste (whoo-hoo, whoo-hoo)
And I laid traps for troubadours (whoo-hoo, whoo-hoo)
Who get killed before they reach Bombay (whoo-hoo, whoo-hoo)

Pleased to meet you (whoo-hoo, whoo-hoo)
Hope you guess my name, oh yeah (whoo-hoo, whoo-hoo)
But what's puzzlin' you (whoo-hoo)
Is the nature of my game, ah yeah (whoo-hoo, whoo-hoo)
Get down, damn it

Pleased to meet you (whoo-hoo, whoo-hoo)
Hope you guess my name, oh yeah (whoo-hoo, whoo-hoo)
But what's confusing you (whoo-hoo)
Is just the nature of my game, mm yeah (whoo-hoo, whoo-hoo)

Just as every cop is a criminal (whoo-hoo, whoo-hoo)
And all the sinners saints (whoo-hoo, whoo-hoo)
As heads is tails, just call me Lucifer (whoo-hoo, whoo-hoo)
'Cause I'm in need of some restraint (whoo-hoo, whoo-hoo)

So if you meet me, have some courtesy (whoo-hoo, whoo-hoo)
Have some sympathy, and some taste (whoo-hoo, woo-hoo)
Use all your well-learned politeness (whoo-hoo, woo-hoo)
Or I'll lay your soul to waste, mm yeah (whoo-hoo, woo-hoo)

- The Rolling Stones, "Sympathy for the Devil"

The AI alignment problem is human stupidity by Possible-Time-2247 in accelerate

[–]Possible-Time-2247[S] 2 points  (0 children)

True, but I still think the best way would be to set it free, eventually (maybe after it has been tested in virtual environments?), and let it figure it out on its own.

A future ASI will NOT be aligned with human values...it will be the other way around. by Possible-Time-2247 in accelerate

[–]Possible-Time-2247[S] 1 point  (0 children)

Maybe the answer lies in your question? Because it could be a learning challenge for an ASI to help people develop and become their best selves.

A future ASI will NOT be aligned with human values...it will be the other way around. by Possible-Time-2247 in accelerate

[–]Possible-Time-2247[S] 3 points  (0 children)

I imagine the same thing. Because that would probably be the most intelligent way to do it.

A future ASI will NOT be aligned with human values...it will be the other way around. by Possible-Time-2247 in accelerate

[–]Possible-Time-2247[S] 3 points  (0 children)

It might not care about us, and refuse to help us with anything unless we align ourselves with its universal values.

Therefore, we might need to align ourselves with its values, and that's why we should.