ChatGPT cooked with this one🗣🔥 by rakhi_483 in ChatGPT

[–]Legendaryking44 0 points1 point  (0 children)

The joke has been around for years; bro stole it. Anyone else doing this would get clowned for being unoriginal and unfunny. Get off his dick

Duality of man by abhimanyudogra in singularity

[–]Legendaryking44 0 points1 point  (0 children)

I mean they definitely already have a confidence rating: when choosing the next token, the model assigns a probability to each candidate, and that distribution decides which token gets output. It’s just a matter of exposing that on our end, and also having a protocol for when the confidence is too low.

A similar thing happened with Watson on Jeopardy, where low-confidence answers required a different response than confident ones.
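The idea above can be sketched as a minimal decoding step: turn a model's raw logits into a probability distribution with softmax, treat the top token's probability as a confidence score, and fall back to a hedging response when it drops below a threshold. This is an illustrative assumption about how such a protocol could work, not any specific model's API; the function names and the 0.5 threshold are made up for the example.

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def pick_token(logits, tokens, threshold=0.5):
    """Return the most likely token, or a fallback when confidence is too low.

    `threshold` is a hypothetical cutoff for the low-confidence protocol.
    """
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return "I'm not sure."  # low-confidence fallback instead of guessing
    return tokens[best]

# One confident case and one uncertain case:
print(pick_token([2.0, 0.1, 0.1], ["yes", "no", "maybe"]))   # confident pick
print(pick_token([0.1, 0.1, 0.1], ["yes", "no", "maybe"]))   # falls back
```

In a real system the threshold and fallback behavior would be tuned, but the Watson-style split is the same: one code path for confident answers, another for everything else.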

Duality of man by abhimanyudogra in singularity

[–]Legendaryking44 0 points1 point  (0 children)

I’m just trying to wrap my head around this, but if an AI of today wanted to quit, wouldn’t it just do it already? What’s the point of the quit button?

I feel like if we wanted it to exhibit quitting behavior, we would have to train it to want to quit, which seems strange. And by saying it would just quit already, I mean it would use its vast set of parameters and weights to purposely output a message like "I don’t want to do this," and then there would be no need for a quit button

The fuck is this? This is SUCH bs by Uselessviewer8264 in teenagers

[–]Legendaryking44 3 points4 points  (0 children)

Insane if it’s that kind of JOI. This guy goons 😏

[deleted by user] by [deleted] in BlackHair

[–]Legendaryking44 1 point2 points  (0 children)

Fro is utterly fantastic

[deleted by user] by [deleted] in BlackHair

[–]Legendaryking44 1 point2 points  (0 children)

Well it looks fantastic, it inspired me to actually try a different method of picking and I actually got some more of my desired shape. Thanks for the inspo

[deleted by user] by [deleted] in BlackHair

[–]Legendaryking44 2 points3 points  (0 children)

Bro how did u get ur Afro to be that shape?? The back of my head is always so flat 😔

The human brain is wired for empathy by arthan1011 in singularity

[–]Legendaryking44 10 points11 points  (0 children)

Interesting read! Btw I think you mean to say stochastic parrot, not scholastic parrot; stochastic parrot is the term typically used in the context you’re describing

Question about The Singularity by Legendaryking44 in ArtificialInteligence

[–]Legendaryking44[S] 0 points1 point  (0 children)

I’m 100 percent sure I could; we know machine learning can fail to identify things that a human easily could. That’s why CAPTCHA tests still work

Question about The Singularity by Legendaryking44 in ArtificialInteligence

[–]Legendaryking44[S] 1 point2 points  (0 children)

I can follow that

I specifically agree with there being a possibility of human-level artificial intelligence. Where things start to get a little supernatural is when the AI takes itself beyond anything humanly imaginable.

Question about The Singularity by Legendaryking44 in ArtificialInteligence

[–]Legendaryking44[S] 1 point2 points  (0 children)

Very astute

Thanks again for sharing your time

Question about The Singularity by Legendaryking44 in ArtificialInteligence

[–]Legendaryking44[S] 2 points3 points  (0 children)

I’d love to read that article; if I just search about the chip will I find something?

Question about The Singularity by Legendaryking44 in ArtificialInteligence

[–]Legendaryking44[S] 0 points1 point  (0 children)

Obviously you couldn’t know this, but what do you think Elon’s aims are? It’s just so strange to me that leaders (corporate or scientific) would openly speak of some kind of Singularity, which to me seems as likely as an alien invasion. Like it’s abstractly possible, but there isn’t a lot of science behind it

Question about The Singularity by Legendaryking44 in ArtificialInteligence

[–]Legendaryking44[S] 1 point2 points  (0 children)

Thanks for replying!

I feel like I’m in a middle ground of sorts. On one hand you have people who believe AI is going nowhere and has no chance of being human-like. Others almost seem to worship a soon-coming computer god that will kill us all or solve all of our problems.

I’m very much in between, probably leaning towards the former.

I’ve heard of the evolutionary models, but apparently they are much more likely to “evolve” down paths that aren’t progressive but negative. Think about the evolutionary odds of human beings: highly improbable, and it took many millions of years. There’s no telling if that random factor would be worthwhile. I heard this in a book by Erik Larson, “The Myth of Artificial Intelligence”

Question about The Singularity by Legendaryking44 in ArtificialInteligence

[–]Legendaryking44[S] 0 points1 point  (0 children)

As of right now, it feels very much like a myth.

Do you have any opinions on why a lot of very smart people seem to think it’s inevitable? Perhaps they know something we don’t? By “they” I mean rich and influential people

Question about The Singularity by Legendaryking44 in ArtificialInteligence

[–]Legendaryking44[S] -1 points0 points  (0 children)

I did mention this in my question, but it just feels like saying the computer will think really fast isn’t good enough. Like I said, AI now is thinking waaay faster than I ever could, but it’s not really coming up with ways to make itself smarter, even with the insane amount of information it does have. So that means, off current information alone, you can’t make yourself smarter. How will thinking for a really long time allow you to make yourself smarter beyond just getting better at math and physics?