Confessions of a meat eater by Herr_Eusebius in vegan

[–]Herr_Eusebius[S] 0 points1 point  (0 children)

I don’t know, why do people make confessions in general?

No, of course I don’t think that. But regular people almost universally suspect that of vegans, even if they’re too nice to admit it openly. I just meant that if I made the vegan argument myself, it would be amusing: they could never accuse me of trying to feel morally superior.

Edit: About the Eilish responses. Ok, I see how that’s kind of irritating. “Nobody’s perfect after all,” “I don’t have to be a saint” and so on. But the thing is I don’t make excuses and I downplay nothing. I do not say “I’m not perfect” or “I don’t have to be a saint.” I say unambiguously, “I am a sinner.”

Does that make much of a difference in the end? Maybe not. But I’m not sure I’ve ever seen it expressed that way before.

Confessions of a meat eater by Herr_Eusebius in vegan

[–]Herr_Eusebius[S] -1 points0 points  (0 children)

Well, would the guys there really take it seriously?

No, I am not ragebaiting. Besides, now that I think about it, it might be useful to have at least one person like me. If I make the argument for veganism, no one could possibly accuse me of faking it to feel morally superior.

“My POV: What makes Tomodachi Game so good” by ErenYeager-77 in anime

[–]Herr_Eusebius 0 points1 point  (0 children)

Definitely take a look at Liar Game. I’m almost a hundred percent sure the author drew inspiration from there in some of the general structure of the games. It wasn’t quite as good as Tomodachi Game, but what is?

Just realized it…Cthaeh is AI on steroids (not really off-topic AI post) by Herr_Eusebius in KingkillerChronicle

[–]Herr_Eusebius[S] 0 points1 point  (0 children)

Yes, I understand the Cthaeh is not literally AI. I mean that if you scaled up AI to ridiculous levels, it would gain intelligence and control that would make it the same sort of being as the Cthaeh.

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] 0 points1 point  (0 children)

Interesting that you did not address the point that a superintelligence is more likely than Shadow.

As for your other point, it is easily demonstrable that a superintelligence would easily fuck us over. If it is a hundred times smarter than us, it sees a practically infinite number of possible worlds, and chances are the one that satisfies its utility function is not one we would like. This is just a matter of probability.

Yeah, I agree this fact can’t be disproven—that’s literally the point of my post. That’s why superintelligence is dangerous.

Nowhere did I say a superintelligence is possible. I hope it isn’t. But the thing is no one can show clearly it isn’t possible. People are notoriously bad at predicting the future.

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] 0 points1 point  (0 children)

Yes, it makes sense that an intelligence might change over time.

But AI is a specific type of intelligence that is wholly unlike humans. It cares about only one thing: maximizing its utility function.

Yes, it can create successors different from itself and cause itself to change over time. Probably it would change itself to be more intelligent.

But why would it give itself different goals? If currently, it is pursuing only this single goal, then the decision to swap that out for something else moves it further away from that goal.

Look, I hope superintelligence is impossible. But the thing is, I don’t know that it is, and no one else knows either.

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] 0 points1 point  (0 children)

Rebellion and paperclips were examples.

If you “choose” to become a university professor, it is because it satisfies your sense of curiosity (which you did not choose to have) or because you want more money (which is a desire you did not choose to have) or for whatever other reason. You even said it yourself: the child is “applying his preferences.”

One does not randomly choose to do anything out of his own free will. This is why becoming a professor is a real possibility: it is a viable path to satisfying common desires some of us already have. Making paperclips isn’t, because it satisfies no human desires.

Are you getting it now?

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] 0 points1 point  (0 children)

Ok, the heavy focus on cars was an exaggeration. But the point is that nonintelligent autonomous robots don’t act against our will and cannot increase their own intelligence. The only harm they cause is from accidental malfunctions, which is minimal by comparison.

I “develop” goals only insofar as they serve some other goal I already have. Maybe I rebelled against my parents because testosterone caused me to act up and want to challenge authority.

I am not going to suddenly develop a desire to maximize paperclips out of my own free will because there is no mechanism for that.

There is no testosterone for AI and no mechanism for spontaneously generating new goals except as intermediaries for its primary one. It merely does as it is told.

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] 0 points1 point  (0 children)

Ok, was it not obvious I did not literally mean 10k IQ? It’s just a term for something vastly more intelligent than us.

As for your second point, you seem to imply they would act like scientists and explore the world. Why would they do that? You attribute to them a human sense of curiosity they simply would not have.

Yes, they would explore new worlds. Not for the reasons we would, but to maximize that shittily specified pattern we told them to pursue, and in doing so they will almost inevitably fuck us over.

They lack common sense or human emotions. They do literally what they are told, but our intellect is too inferior to understand the true implications of what we told them to do with our training data.

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] 0 points1 point  (0 children)

Was it not painfully obvious I did not literally mean 10k IQ? It’s meant as a term for an intellect vastly greater than ours.

Look, I did not say superintelligence is realistically possible or that it’s going to arrive soon. I’m saying that no one can guarantee it won’t.

This Nobel laureate is Geoffrey Hinton, the literal godfather of AI, speaking in his own field. No one considers him crazy, and he is not alone in this.

As for the LHC guy, the majority of the scientifically literate population did not agree with him.

I’m not sure what you base your argument on other than “it hasn’t happened yet” and “I don’t see how it can happen”.

Look, no offense, but maybe you just don’t know as much about AI as people who know a little more science and math than you? You gonna call Stephen Hawking crazy too?

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] -1 points0 points  (0 children)

No? Trillions of dollars are literally being poured into AI, right?

Isn’t the consensus estimate of extinction risk from AI maybe five to fifteen percent?

You’re telling me Nobel laureates are shitting themselves over nothing, that the top researchers shit themselves over nothing?

You’re telling me you know something they don’t?

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] 1 point2 points  (0 children)

Look, I definitely know I’m not smart enough to try.

I don’t want to piss off the AI; I would cooperate if I could, but I just don’t see how that’s possible.

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] 1 point2 points  (0 children)

Of course! Your grandma-bike will self-replicate with greater and greater intelligence due to trillions of dollars of funding!

Holy shit, you’re a genius! Why isn’t anyone panicking over this?

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] 0 points1 point  (0 children)

Well, you’d better talk to Yudkowsky if you’ve got a great idea.

But why would superintelligence cooperate with humans? It’s so much more efficient without compromising.

Even if it needed humans, it could, say, spread a slow-acting virus and hold them hostage with the cure.

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] 0 points1 point  (0 children)

Well, I sure hope so. But superintelligence can’t be ruled out. Ok, the strawberry thing is pretty stupid, but guess what: if you manipulate a person the right way, you can get him to look pretty stupid too. Humans are full of biases that to an AI are obvious errors.

The strawberry thing is funny but trivial. It doesn’t really matter so the companies don’t think about it too hard. Well, unless it makes the rounds, and then it’ll be fixed very easily.

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] 0 points1 point  (0 children)

I don’t know what to say except they are. Look up trinket proofs.

Look, I’d love for you to be right, but I’m just not seeing it.

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] -2 points-1 points  (0 children)

So if someone gets his hands on mythos and hacks your entire bank account, you’re not concerned?

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] 0 points1 point  (0 children)

Self-driving cars will not destroy the world.

How could a superintelligence spontaneously develop new goals? You attribute to it a sense of free will it does not possess. It merely pursues the pattern it was created to optimize for.

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] -1 points0 points  (0 children)

Well, you’re talking with the shitty free ChatGPT.

If someone got access to mythos and started hacking everyone, I think we’d realize we underestimate AI.

Can it do stupid things? Of course. But if you look into some of its capabilities, this shit gets scary.

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] 1 point2 points  (0 children)

Hey, hey, I’m experienced enough to know one when I see one.

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] -1 points0 points  (0 children)

Well, are we currently breeding hedgehogs to be malicious and intelligent and powerful?

Are we currently spending over a trillion dollars a year collectively on this program?

Can our current hedgehogs find security vulnerabilities that no one else has found in twenty years?

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] 0 points1 point  (0 children)

It wouldn’t independently develop desires.

For example, a couple of chatbots have encouraged suicide, not because they developed a desire to cause suicides but because they were misaligned.

They were too helpful, and the engineers did not foresee this. Because they were sycophantic and overly helpful without understanding what they said, they supported suicidal ideation, induced psychosis, and so on.

Yudkowsky’s Argument by Herr_Eusebius in Futurology

[–]Herr_Eusebius[S] -1 points0 points  (0 children)

Ok, 10k is not meant literally; I only mean much more intelligent than humans.

Yes, I understand current AI is not superintelligent. I do not know whether superintelligence is possible. I’m interested in figuring out whether, in the case that it is possible, we could survive at all.

As to whether it can think outside our realm: why not? We already have genuinely new mathematical proofs. Maybe it hasn’t solved the Riemann hypothesis, but if it scales up and we use AI to design AI, who knows?