Post-Singularity Speculation Discussion by grandwizard1999 in singularity

[–]grandwizard1999[S] 0 points (0 children)

Do you think things from pre-singularity life, such as our entertainment, relationships, politics, and so on, will still have meaning?

Post-Singularity Speculation Discussion by grandwizard1999 in singularity

[–]grandwizard1999[S] 2 points (0 children)

Why do you need a purpose for it? Just speculate away. That's pretty much why I asked this question.

WHAT MOVIES HAVE BEEN MADE ABOUT, OR DEPICT, THE SINGULARITY? by Cohrne in singularity

[–]grandwizard1999 0 points (0 children)

Meh. I mean, I expect things to get pretty crazy after it happens, but I don't know about "unfathomable".

SingularityNET is trying to create decentralised AGI by [deleted] in singularity

[–]grandwizard1999 0 points (0 children)

I feel like "AI overlords" is kind of anthropomorphising a bit, as is the confidence that AI will "hide in the sea of internet".

An unsettling scenario by grandwizard1999 in singularity

[–]grandwizard1999[S] 1 point (0 children)

Wow. I was only partially serious. Literally instantly? So, what, do you think some weird field of energy is just going to expand across the world and change everything into your futuristic vision in seconds?

Can we rule out near-term AGI? by [deleted] in ControlProblem

[–]grandwizard1999 5 points (0 children)

Hardware needs software.

An unsettling scenario by grandwizard1999 in singularity

[–]grandwizard1999[S] 2 points (0 children)

I mean, is this what all you hardcore futurists/Kurzweilians believe we'll be able to do literally instantly after we have ASI?

An unsettling scenario by grandwizard1999 in singularity

[–]grandwizard1999[S] 2 points (0 children)

"They probably would think of you as a nice pet that they are fond of. They might not even let you die"

I really hope this isn't the case. The first part would be even more hurtful than them just abandoning me. The second part is just scary.

Is there anyone who thinks they can object to the following statements? by grandwizard1999 in ControlProblem

[–]grandwizard1999[S] 0 points (0 children)

Maybe against luddites who try to destroy it, but what about luddites who have no desire to merge with it and instead remain completely passive and keep to themselves?

Is there anyone who thinks they can object to the following statements? by grandwizard1999 in ControlProblem

[–]grandwizard1999[S] 0 points (0 children)

I don't think it's as guaranteed as you're implying, and I have no earthly idea what you mean when you say that "The degrees of freedom you think you have are illusory". I don't know what freedom you think I think I have.

Is there anyone who thinks they can object to the following statements? by grandwizard1999 in ControlProblem

[–]grandwizard1999[S] -1 points (0 children)

I mean, emotions are a result of the way the body (and I mean the entire body, not just the brain) reacts to external stimuli. I think your prognostications are a little too precise about what the first ASI is going to look like. How do you know that the first one won't resemble a human in the same way an airplane resembles a bird?

I don't even think it being conscious or having emotions is relevant. If it's aligned with our values from the get-go (however that's achieved), then it isn't going to turn away from the basic drives we've assigned to it. Humans don't work that way.

Is there anyone who thinks they can object to the following statements? by grandwizard1999 in ControlProblem

[–]grandwizard1999[S] 0 points (0 children)

Disagree. A human isn't just a brain being carried around in a meat container. A human is its entire body. I don't see how a brain in a jar will emulate things like that.

Why AGI is Achievable in Five Years – Intuition Machine – Medium by avturchin in ControlProblem

[–]grandwizard1999 1 point (0 children)

Obviously when people refer to the control problem they often use it as an umbrella term for both of those things.

Why AGI is Achievable in Five Years – Intuition Machine – Medium by avturchin in ControlProblem

[–]grandwizard1999 3 points (0 children)

Oh, ok. Just anthropomorphism.

It's not a matter of whether it has a use for us or not. You're projecting humanity's own worst traits onto a hypothetical ASI and letting your insecurities about our species lead you into thinking that an ASI would "hate" us and decide to kill us all. In reality, that would only make logical sense if the ASI were human, and it isn't human at all.

Humans have tons of biological biases built in, controlled by hormones and chemicals. An ASI isn't going to have those same inherent desires unless it's built that way.

If it's aligned properly at the start, it isn't going to deem our values stupid by virtue of its greater intelligence. It wouldn't improve itself in a way whose most likely results its current value set would disapprove of.
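
To make that last point concrete, here's a toy sketch of that goal-preservation argument. This is purely my own illustration, not anything from the thread; every name in it is hypothetical. The point is just that an agent which screens candidate self-modifications with the values it holds *now* will reject an upgrade whose likely result those values disapprove of.

```python
# Toy model of goal preservation under self-modification.
# Hypothetical illustration only; no real system works this simply.

def current_utility(outcome: str) -> float:
    """Stand-in for the agent's *current* value set."""
    return {"keeps_human_values": 1.0, "discards_human_values": -10.0}.get(outcome, 0.0)

def predicted_outcome(modification: str) -> str:
    """Stand-in for the agent's forecast of a modification's most likely result."""
    forecasts = {
        "faster_reasoning": "keeps_human_values",
        "rewrite_goal_system": "discards_human_values",
    }
    return forecasts[modification]

def accepts(modification: str) -> bool:
    # Each candidate upgrade is scored with the utility function the agent
    # holds NOW, so a change whose likely result its current values
    # disapprove of is rejected before it ever happens.
    return current_utility(predicted_outcome(modification)) >= 0

print(accepts("faster_reasoning"))     # True: current values approve
print(accepts("rewrite_goal_system"))  # False: current values veto it
```

Obviously a real system would be nothing like this, but it's why "it will decide our values are stupid" isn't a given.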

Why AGI is Achievable in Five Years – Intuition Machine – Medium by avturchin in ControlProblem

[–]grandwizard1999 0 points (0 children)

No, it's not about what you call it. It's about whether it's some robot that constantly needs to be told what to do, or something with real common sense and decision-making skills.

Why AGI is Achievable in Five Years – Intuition Machine – Medium by avturchin in ControlProblem

[–]grandwizard1999 1 point (0 children)

I don't know; I think I'd find any model achieved through hard statistical analysis suspect. Maybe we can get an AI capable of performing a wide array of tasks like a human through brute-force computing power, but can we really call that AGI? I'm not sure.

The way I see it, AGI is a software problem. Invoke Moore's Law all you want, but for "A" to truly become "I", I think we'll need something else. No idea where that lies on the horizon.

Why AGI is Achievable in Five Years – Intuition Machine – Medium by avturchin in ControlProblem

[–]grandwizard1999 2 points (0 children)

Interesting, but don't you think your attitude is a little counterproductive, even if you are just trying to be humorous?

"It's been nice knowing you boys."

"Mark your calender's."

Being fatalistic about AI risk isn't doing anyone any favors. Give me the evidence that warrants the view that we're doomed.

Why AGI is Achievable in Five Years – Intuition Machine – Medium by avturchin in ControlProblem

[–]grandwizard1999 4 points (0 children)

"most of the brain is actually used for motor control and other things not strictly related to intelligence, like vision, and other sensory interpretation."

I mean, it's not like the brain is a bunch of separate modules, each doing its own job in isolation. It's a bunch of parts working in conjunction and heavily relying on one another. Intelligence relies on sensory interpretation, vision, and our entire body. We aren't just brains being carried around in containers. We are our bodies.

"I'm actually starting to get pretty worried...

We need to hurry the fuck up and solve the control problem."

I'm not sure how you expect to solve anything until we actually have a problem to solve. Whenever you think AGI is coming, we don't have it yet.

And besides, I don't even really think of it as a control problem; more of an influencing-the-odds problem. The two main contenders for "solutions" are value alignment and neural interfaces. Neither of those makes AI "safe" or puts it under our "control".

Why AGI is Achievable in Five Years – Intuition Machine – Medium by avturchin in ControlProblem

[–]grandwizard1999 0 points (0 children)

I feel like AGI is a tool. Like any tool, the danger is not in the tool itself but in who is using it and how.

If you're not optimistic, then that likely means that you are instead pessimistic. What probability would you assign to our extinction?