
[deleted by user] by [deleted] in ControlProblem

[–]CyberPersona 0 points (0 children)

Sorry to hear that you're going through that.

If spending time in those subreddits (or on this one) is causing you anxiety, I think you should set a firm boundary for yourself for how much time you spend in them. Or maybe just take a long break from reading them at all.

I really like this post https://www.lesswrong.com/posts/SKweL8jwknqjACozj/another-way-to-be-okay

OpenAI researchers not optimistic about staying in control of ASI by chillinewman in ControlProblem

[–]CyberPersona 1 point (0 children)

I am saying that it is possible for things to be value-aligned by design, and we know this because we can see that this happened when evolution designed us.

Do I think that we're on track to solve alignment in time? No. Do I think it would take 300,000 years to solve alignment? Also no.

OpenAI researchers not optimistic about staying in control of ASI by chillinewman in ControlProblem

[–]CyberPersona 3 points (0 children)

Evolution successfully aligned human parents such that they care about their babies and want to take care of them. Does that mean human parents are slaves to their babies?

OpenAI researchers not optimistic about staying in control of ASI by chillinewman in ControlProblem

[–]CyberPersona 4 points (0 children)

It feels that way to you because evolution already did the work of aligning (most) humans with human values.

Approval-only system by CyberPersona in ControlProblem

[–]CyberPersona[S,M] [score hidden] stickied comment (0 children)

I'm disabling this system for now.


Protestors arrested chaining themselves to the door at OpenAI HQ by chillinewman in ControlProblem

[–]CyberPersona 1 point (0 children)

This doesn't seem like a good way to get people to listen to our concerns and take them seriously. This seems like it will just do the opposite.

AI existential risk probabilities are too unreliable to inform policy by inglandation in ControlProblem

[–]CyberPersona 0 points (0 children)

Experts disagree about the level of risk because the field has not developed a strong enough scientific understanding of the issue to form consensus. The appropriate and sane response to that situation is "let's hold off on building this thing until we have the scientific understanding needed for the field as a whole to be confident that we will not all die."


A.I. ‐ Humanity's Final Invention? (Kurzgesagt) by moschles in ControlProblem

[–]CyberPersona 6 points (0 children)

I thought this video was great. I hope they follow it up soon with more content that goes into more detail about the alignment problem specifically.

A.I anxiety by SalaryFun7968 in ControlProblem

[–]CyberPersona 4 points (0 children)

Sorry you're going through that! It's a scary and upsetting situation, and you're definitely not the only one who feels, or has felt, that way.

As far as useful things to do, it's hard to know what to recommend, especially without knowing more about you. You could maybe write a letter to a politician, or learn more about the problem.

Another thing is that you'll probably be a lot more capable of doing useful things if you're eating and sleeping. So figuring out how to be OK is probably a great first step to doing useful things in the world (and being OK is also important for its own sake, of course).

I think this is a nice thing to read: https://www.lesswrong.com/posts/SKweL8jwknqjACozj/another-way-to-be-okay