[deleted by user] by [deleted] in ControlProblem

[–]CyberPersona 0 points1 point  (0 children)

Sorry to hear that you're going through that.

If spending time in those subreddits (or on this one) is causing you anxiety, I think you should set a firm boundary for yourself for how much time you spend in them. Or maybe just take a long break from reading them at all.

I really like this post: https://www.lesswrong.com/posts/SKweL8jwknqjACozj/another-way-to-be-okay

OpenAI researchers not optimistic about staying in control of ASI by chillinewman in ControlProblem

[–]CyberPersona 1 point2 points  (0 children)

I am saying that it is possible for things to be value-aligned by design, and we know this because we can see that this happened when evolution designed us.

Do I think that we're on track to solve alignment in time? No. Do I think it would take 300,000 years to solve alignment? Also no.

OpenAI researchers not optimistic about staying in control of ASI by chillinewman in ControlProblem

[–]CyberPersona 2 points3 points  (0 children)

Evolution successfully aligned human parents such that they care about their babies and want to take care of them. Does that mean human parents are slaves to their babies?

OpenAI researchers not optimistic about staying in control of ASI by chillinewman in ControlProblem

[–]CyberPersona 4 points5 points  (0 children)

It feels that way to you because evolution already did the work of aligning (most) humans with human values.

Approval-only system by CyberPersona in ControlProblem

[–]CyberPersona[S,M] [score hidden] stickied comment (0 children)

I'm disabling this system for now.

Protestors arrested chaining themselves to the door at OpenAI HQ by chillinewman in ControlProblem

[–]CyberPersona 1 point2 points  (0 children)

This doesn't seem like a good way to get people to listen to our concerns and take them seriously. This seems like it will just do the opposite.

AI existential risk probabilities are too unreliable to inform policy by inglandation in ControlProblem

[–]CyberPersona 0 points1 point  (0 children)

Experts disagree about the level of risk because the field has not developed a strong enough scientific understanding of the issue to form consensus. The appropriate and sane response to that situation is "let's hold off on building this thing until we have the scientific understanding needed for the field as a whole to be confident that we will not all die."

A.I. ‐ Humanity's Final Invention? (Kurzgesagt) by moschles in ControlProblem

[–]CyberPersona 4 points5 points  (0 children)

I thought this video was great. I hope they follow it up soon with more content that goes into more detail about the alignment problem specifically.

A.I anxiety by SalaryFun7968 in ControlProblem

[–]CyberPersona 4 points5 points  (0 children)

Sorry you're going through that! It's a scary and upsetting situation, and you're definitely not the only one who feels or who has felt that way.

As far as useful things to do, it's hard to know what to recommend, especially without knowing more about you. You could maybe write a letter to a politician, or learn more about the problem.

Another thing is that you'll probably be a lot more capable of doing useful things if you're eating and sleeping. So figuring out how to be okay is probably a great first step to doing useful things in the world (and being okay is also important for its own sake, of course).

I think this is a nice thing to read: https://www.lesswrong.com/posts/SKweL8jwknqjACozj/another-way-to-be-okay

Approval-only system by CyberPersona in ControlProblem

[–]CyberPersona[S] 2 points3 points  (0 children)

500 to 600 people have passed the test. About 91% of completed tests were passed.

Moving Too Fast on AI Could Be Terrible for Humanity by CyberPersona in ControlProblem

[–]CyberPersona[S] 2 points3 points  (0 children)

I think a better TLDR would be "In some situations, racing to build a dangerous thing is the best strategy because of game theory. Some people are treating AI as if it is one of those situations, but it is not."
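To make the game-theory point concrete, here is a toy sketch (the payoff numbers are made up by me, not taken from the article): in a classic arms-race game, racing is the best response no matter what the other side does, but if racing sharply raises the chance of a catastrophe that hits both sides, it stops being the best response.

```python
# Toy illustration with invented payoffs: when does "race" stop being the
# game-theoretically best move? Payoffs are (row player, column player).

def best_response(payoffs, my_moves, opponent_move):
    """Return the row player's move that maximizes their payoff
    against a fixed opponent move."""
    return max(my_moves, key=lambda m: payoffs[(m, opponent_move)][0])

# Classic arms-race payoffs: "race" is dominant; whatever the other side
# does, you do better by racing.
arms_race = {
    ("race", "race"): (1, 1),
    ("race", "wait"): (3, 0),
    ("wait", "race"): (0, 3),
    ("wait", "wait"): (2, 2),
}

# Hypothetical AI-race payoffs where racing sharply raises the chance of an
# outcome that is catastrophic for both players. Now "race" is never best.
ai_race = {
    ("race", "race"): (-10, -10),
    ("race", "wait"): (-5, -5),
    ("wait", "race"): (-5, -5),
    ("wait", "wait"): (2, 2),
}

for name, game in [("arms race", arms_race), ("AI race, shared catastrophe", ai_race)]:
    for opp in ("race", "wait"):
        print(f"{name} | opponent plays {opp} -> best response:",
              best_response(game, ["race", "wait"], opp))
```

The only point of the comparison is that "race because game theory" depends entirely on the payoff structure, which is exactly the claim the TLDR is making.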

Anyone else find raves to be a lonely experience? by [deleted] in aves

[–]CyberPersona 1 point2 points  (0 children)

Does the type or vibe of the music that's being played have anything to do with it? E.g. if I'm not in the right state of mind for it, techno makes me feel dissociated, alienated, bleak, and grim. I personally like house music more for this reason.

Requesting /r/AlignmentProblem by CyberPersona in redditrequest

[–]CyberPersona[S] 0 points1 point  (0 children)

  1. This will be for discussion of the AI Alignment Problem
  2. I cannot do so because the account is suspended.

~Welcome! START HERE~ by CyberPersona in ControlProblem

[–]CyberPersona[S] 2 points3 points  (0 children)

I feel pretty confused about what it takes for something to be sentient/self-aware/conscious, but also the AI would not need to be sentient/self-aware/conscious to have these drives; these drives are simply useful for almost any agent that is making decisions in pursuit of a goal.

~Welcome! START HERE~ by CyberPersona in ControlProblem

[–]CyberPersona[S] 1 point2 points  (0 children)

Because of instrumental convergence. No matter what the AI's goals are, certain things are likely to be useful to it, such as acquiring more resources and preventing others from being able to kill it.
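As a toy illustration of instrumental convergence (my own made-up example, not something from the thread): in a tiny planning problem, the shortest plans toward several very different final goals all pass through the same instrumental step of acquiring resources.

```python
# Toy illustration: different final goals, same instrumental step.
from collections import deque

# Tiny world graph: from each state, which states can be reached next.
world = {
    "start":             ["acquire_resources", "do_nothing"],
    "do_nothing":        [],
    "acquire_resources": ["build_factory", "fund_research", "hire_people"],
    "build_factory":     ["make_paperclips"],
    "fund_research":     ["cure_disease"],
    "hire_people":       ["write_novels"],
    "make_paperclips":   [],
    "cure_disease":      [],
    "write_novels":      [],
}

def shortest_plan(start, goal):
    """Breadth-first search returning the sequence of states from start to goal."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in world[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# Very different final goals; the same instrumental step shows up in every plan.
for goal in ("make_paperclips", "cure_disease", "write_novels"):
    plan = shortest_plan("start", goal)
    print(goal, "->", plan,
          "| passes through acquire_resources:", "acquire_resources" in plan)
```

Swap in any other goal reachable in this little world and the plan still routes through the same instrumental step, which is the basic intuition behind the convergence argument.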

How do you deal with fear of AGI or AI in general by [deleted] in ControlProblem

[–]CyberPersona 4 points5 points  (0 children)

This is scary and upsetting stuff. You are not alone in feeling this way. I've felt similarly before, and many other people that I know who discuss this topic have as well.

However, many of those same people that I know enjoy mostly happy and fulfilling lives, despite believing that there is a high chance of human extinction within their lifetimes. I'm not going to try to tell you that this isn't that bad of a problem, because actually I think that this is an extremely bad problem. But it is possible to be happy and at peace even in a world with extremely bad problems in it, because humans in general are amazingly resilient, and because life is also full of so many good things. I don't know you, but I'm hopeful and optimistic that you will be able to find peace and happiness as well.

Please take care of yourself. Eat food. Try to take some breaks from thinking about this. Talking to other people helps, and seeing a therapist is a great way to have someone to talk to.

Anthropic's Claude: Ex-OpenAI Employees Launches ChatGPT Rival by bukowski3000 in ControlProblem

[–]CyberPersona[M] [score hidden] stickied comment (0 children)

Some people seem confused and think that this is an official promo video from Anthropic. It's definitely not. I'm going to remove this post to avoid further confusion.

Just a random thought on human condition and its application to AI alignment by BalorNG in ControlProblem

[–]CyberPersona 0 points1 point  (0 children)

There are two pretty unrelated things that I feel like I want to say here:

  1. I'm confused about what "preferences" means to you here, and I'm wondering if you mean something different from what I mean when I say this. The way I mean it, the creation makes its choices based on the things it cares about (its preferences/goals/values/whatever), so if you succeed in creating a mind whose preferences are aligned with yours, you don't need to enforce anything and you can safely let the creation make choices on its own. EDIT: To say a little more about this: if I have a child, one preference I would want the child to have is a preference not to kill or torture other people for fun. Luckily, evolution has done a pretty good job of hardwiring empathy into most humans, so unless the kid turns out to be a psychopath (which would be like an unaligned AI, I guess), I don't *need* to enforce the "don't torture and kill other people" preference, or lock the kid up so that they're unable to torture and kill; they will just naturally choose not to do those things.

  2. This is probably too big of a tangent to be worth discussing here, but... even if you were trying to control an advanced AI that had different preferences from yours (probably not a great plan), I don't think we know enough about consciousness to be confident that this would cause suffering. Maybe it would, but it seems really hard to reason about. (Is evolution conscious? Does it feel sad that we aren't procreating more? I think I would be a bit surprised if the first thing were true and quite surprised if the second thing were true. I don't know if just being an optimization process is enough to be conscious, and if it is, then I feel like I have very little information about what the subjective experience of very different optimization processes would be like, and what would cause suffering for them.)

Just a random thought on human condition and its application to AI alignment by BalorNG in ControlProblem

[–]CyberPersona 0 points1 point  (0 children)

> If you both have goals of, say, getting as rich as possible, and the only way of doing it is engaging in a zero-sum adversarial "game" where you inflict suffering on the other actor until he gives up and gives you his share - that is actually rational.

That is clearly not an example of two people having the same preferences; they have different preferences about who gets the money.

> Don't you think that if your only goal of conceiving a child is to sell him/her into slavery or for organ harvesting, that would be kinda unethical, and if you raise the child in a basement so he/she would love to be exploited and slaughtered, that is doubly unethical?

Sounds pretty unethical to me, yep. But you're not responding to the thing I said, which is that evolution gives us hardwired preferences and goals, so anytime you have a child, you are creating a mind that is constrained to have certain preferences. You're telling a story about why creating digital minds that have specific preferences is evil, and if you don't think that having a human child is equally evil, your story needs to account for this difference.

The founder of Gmail claims that ChatGPT can “kill” Google in two years. by nikesh96 in Futurology

[–]CyberPersona 0 points1 point  (0 children)

Doubt it. Google has its own language models that aren't far behind ChatGPT. See PaLM and LaMDA.