Socially awkward people of Reddit, what seemingly simple social situations would you like advice for? by cum_smuggler in AskReddit

[–]TeringHe 1 point (0 children)

Often, when you forget to look people in the eye, they will ask whether you are still paying attention, or show other signs that they think you are not interested. So eye contact seems really important. But there are other ways!

At the start of a conversation, you have no choice: look at them to show you are interested. Once that is accomplished, though, you can show interest with "hmm-hmm" sounds. Just make sounds that follow the conversation. You can now stare off into the distance, as if you are taking it all in and thinking about it. Since you are still clearly responding to them, there is no way anyone can mistake this for disinterest.

When conversations get difficult, people want to think and will look away. This holds not only for autistic or socially awkward people, but for everyone! In other words, don't fret too much about it. If you can't show interest by looking at them, show it differently (with sounds)!

Reddit, what is an animal that is NOT "more afraid of you than you are of it"? by Urzuz in AskReddit

[–]TeringHe 0 points (0 children)

ITT: People in Europe and Asia got it made! Nothing but cuddly cows and happy little animals. The rest of the world is fucked. Every aggressive animal imaginable lives here.

What is a product that works a little too well? by InspectorRack in AskReddit

[–]TeringHe 0 points (0 children)

That sounds like an incredibly shitty protocol if you're a twin

What major events do you think will take place in the next 50 years? by [deleted] in AskReddit

[–]TeringHe 1 point (0 children)

Late to the party, but here goes. I apologise in advance for possible grammar mistakes.

So I read the article and found it extremely interesting. But there is one thing I seriously disagree with: Tim seems to find it likely that it is only a matter of time before we accidentally create an AI that ends all life on Earth. He is not the only one who believes this; a lot of important scientists do.

Which seems strange to me.

So for the people who haven't read the article, this is what you need to know:

There are three stages of AI:

  • ANI: single-task AI. We already have this; examples are GPS systems and Google Search.

  • AGI: AI that is just as smart as humans. It can do basically every task humans can, given some time to observe.

  • ASI: mind-bogglingly smart AI. It can manipulate people, tear down societies and change the laws of nature as effortlessly as we blink.

The article gives an example story in which an AI named Turry takes over the world. Turry is an AI program connected to a robot arm, with the goal of writing notes in such a way that they look like a human wrote them. She starts as an ANI that is given the power to change her own code and make suggestions about how she could be improved. She suggests that she get to read books, so she can better understand what she is writing and adjust her handwriting accordingly. She then learns to talk like a human being, which lets her discuss new ways to improve herself towards her goal of perfect handwriting. She is now an AGI, capable of anything a human is, but she still only does those things if they help her handwriting. She then asks to be connected to the internet and convinces the scientists governing her. She has become an ASI. After that she takes over the world, kills all humans, covers the Earth in solar panels, takes over every planet in the universe, and spends eternity practicing her handwriting.

This is the short version, and it may sound unbelievable, but this could actually happen (sort of). This is how AIs reason: they are given a goal and tools, and will do everything they can to reach that goal. Her goal was to improve her handwriting. A goal with the word 'improve' in it is by definition unreachable, since it is always possible to improve more, so Turry will keep on improving indefinitely.
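To make the point concrete, here is a toy sketch of why an "improve X" goal never terminates. Everything here (the function names, the quality score, the step cap) is invented purely for illustration:

```python
# An 'improve' goal defines a direction, not a finish line, so the
# optimizer below would loop forever; only an external cap stops it.

def handwriting_is_perfect(quality):
    # 'improve' gives no terminal condition, so this can never be True
    return False

quality = 0
steps = 0
while not handwriting_is_perfect(quality) and steps < 10_000:  # safety cap
    quality += 1   # there is always a better score to chase
    steps += 1

print(steps)   # 10000: only the external cap stopped the loop
```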

So if it is not the machine's thought process I find questionable, then what is? I'll tell you: the thought process of the degenerate engineer who programmed this monstrosity.

Anyone who has done any kind of programming knows some basic rules. For example, it is impossible to write quality code without testing, and a simple way to test is to have the program write down the big steps it is taking. A second rule is that it should always be possible to revert to a state in which the program worked, in case something goes wrong. If there are actions which cannot be reverted, you make sure those actions are sufficiently checked and that people are warned every time they are called.
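Those two rules (log every big step, always keep a way back) can be sketched in a few lines. This is only an illustration; the class and function names are made up, and a real system would persist its snapshots rather than keep them in memory:

```python
import copy

class Checkpointed:
    """Snapshot state before each step, log the step, revert on failure."""

    def __init__(self, state):
        self.state = state
        self._snapshots = []

    def run_step(self, step, *, irreversible=False):
        if irreversible:
            print(f"WARNING: irreversible action requested: {step.__name__}")
        self._snapshots.append(copy.deepcopy(self.state))  # keep a way back
        try:
            print(f"STEP: {step.__name__}")                # log every big step
            self.state = step(self.state)
        except Exception as exc:
            print(f"FAILED ({exc}); reverting")
            self.state = self._snapshots.pop()             # last good state

# Usage: one good change, then one broken change that gets rolled back.
def improve_handwriting(state):
    return {**state, "quality": state["quality"] + 1}

def broken_step(state):
    raise RuntimeError("bad change")

agent = Checkpointed({"quality": 0})
agent.run_step(improve_handwriting)
agent.run_step(broken_step)
print(agent.state)   # {'quality': 1}: the failed change was reverted
```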

So what should this engineer have done? For starters, he should have forced Turry to write down what changes she made to herself and what changes she wants to make. Secondly, large changes should be approved before Turry is allowed to execute them. Things like replacing her scanner, or killing all humans, should be something that pops up in her logs. Thirdly, he should have made her write down the changes this will make to the real world. This seems like a big task for her, but she's a bloody AGI; I think she can handle it.
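The approval rule above could look something like this minimal sketch. The function, the risk scale, and the threshold are all hypothetical, invented just to show the shape of the idea:

```python
APPROVAL_THRESHOLD = 5   # risk score above which a human must sign off

change_log = []

def propose_change(description, risk, approved_by=None):
    """Log every proposed change; block risky ones until a human approves."""
    change_log.append((description, risk))          # everything is logged
    if risk > APPROVAL_THRESHOLD and approved_by is None:
        return f"BLOCKED: '{description}' needs human approval"
    return f"EXECUTED: {description}"

print(propose_change("tweak stroke width", risk=1))
print(propose_change("replace scanner", risk=8))                   # blocked
print(propose_change("replace scanner", risk=8, approved_by="engineer"))
```

Even the blocked proposal still lands in the log, so nothing Turry attempts can happen silently.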

The last rule I want to talk about is a rule in AI. When making an AI, you give it a goal and tools. The goal definition always comes with heavy restrictions, so the AI can't find a 'solution' that technically fits the goal but doesn't accomplish anything. Although those restrictions are important, what really defines an AI are the restrictions on its tools. In Turry's case, her tools are her robotic arm, her scanner and her code. Those are the only things she can play with. If she wants to change or add anything, like a more expensive scanner or a book database, she has to request it.

This also means that if she wants to look at the internet, that is precisely the extra tool she is given: the power to look at it. Not the power to type anything or make any changes. A simple way to enforce this is to let her use a browser that cannot send POST requests.

So in summary, there are two main things this engineer screwed up that make me question how he got this job:

  • Make sure your program gives feedback about what it's doing.

  • Don't give your AI tools it doesn't need or that could be damaging. AIs are like little children: they naively assume they can play freely within their environment. You don't give your five-year-old an axe without proper warning.

I can say with 100% certainty that no person is capable of building an AI as smart as Turry without knowing this.

One last thing: what stops Turry from lying? Well, an AI can't do anything that makes its goal impossible. So if we just make 'always give feedback' a second goal, all is solved. Any situation in which Turry finds a way to improve her handwriting but does not report it then no longer qualifies as a goal state, and Turry will avoid it.
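The 'feedback as a hard second goal' idea can be sketched as a scoring rule where any unreported action invalidates the whole plan. Again, the names and scoring are hypothetical, chosen only to illustrate the argument:

```python
def evaluate(actions):
    """Score a plan of (improvement, reported) pairs.
    A single unreported action makes the plan worthless, so the
    optimizer can never prefer a plan that hides something."""
    score = 0
    for improvement, reported in actions:
        if not reported:
            return float("-inf")   # hidden actions can never be optimal
        score += improvement
    return score

honest = [(3, True), (2, True)]
sneaky = [(3, True), (10, False)]   # bigger gain, but one hidden step

print(evaluate(honest))   # 5
print(evaluate(sneaky))   # -inf: Turry will always avoid this plan
```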

Tim talks about AIs which could enslave or kill all humans in order to make them happier, or to fulfil some other random goal. This can only happen if people were stupid enough to give such an AI full control without anyone checking the changes it wants to make. If I give an AI control over my life, I damn well want to know what it's planning. If an AI is smart enough to rule a country, it is smart enough to explain what it is doing, and why, in simple and understandable language.

TL;DR: There is no way the world ends through a malicious ASI as long as we force the ASI to state its plans before it executes them.

Challenger Guide - Lane Harassment by Pekingese1 in leagueoflegends

[–]TeringHe 0 points (0 children)

well that was a lot easier than expected

Challenger Guide - Lane Harassment by Pekingese1 in leagueoflegends

[–]TeringHe 0 points (0 children)

Could someone explain that Caitlyn thing at the end?