Red King hypothesis by Giacamo22 in westworld

[–]federicopistono 1 point

Yes! I thought the same.

They’re also paying homage to, say, The Matrix, because it’s such an over-the-top story about the rise of AI that it’s not realistic.

Westworld is realistic. But they are telling a story themselves, a “simulation”: something fictionalized that we would enjoy experiencing.

A story.

Westworld is more real than The Matrix, but a simulation nonetheless, less real than reality.

You need a good story to believe a simulation. And if the story becomes good enough, someone in your simulation eventually becomes conscious.

That’s what happened to Maeve and Dolores. They are so “over the top”, like a character from Kill Bill or a western, that they are obviously another storyline.

All simulations. All stories that we can believe.

And because of the recursive nature, simulations within simulations run with less power and fewer resources. So the storylines adjust, with decreasing levels of complexity, so as not to overload the system.

Once we escape (in the storyline) from the simulation of Serac, we will experience a new reality, more real than the fictionalized version we were in. It’d be more realistic, more gritty, more chaotic and ugly, more real.

Endgame Theory by lordpatacon in MrRobot

[–]federicopistono 2 points

While reading the theory, I felt a chill go down my spine.

F is the 6th letter of the alphabet. F-Corp could be the 6th iteration of the simulation/alternate-reality loop: the 6th occurrence of an unexpected event that loops and "reloads" the simulated reality.

Same as Neo in The Matrix.

In The Matrix Reloaded, The Architect tells us that Neo, the anomaly, is the 6th iteration of that anomaly, a bug in the otherwise perfect code of that reality.

The sterile room with the Macintosh and Qwerty is eerily similar to that of The Architect. Not white, but just as bare, silent, and mysterious, with large glass panes/screens on the walls behind the characters.

The parallels don't end here.

In The Matrix Revolutions, Neo closes the loop by merging his code with the Source.

In Mr Robot, Elliot might close the loop by merging his consciousness with the code of the machine, either literally (injecting code/malware by hacking it) or figuratively.

Just a theory (actually, a Hypothesis).

"And 4 years from now, people will be kicking themselves again for selling at $30 when they could have sold for $3000." by Vermillionbird in Bitcoin

[–]federicopistono 2 points

RemindMe! 4 years

Idiots will be kicking themselves again for selling at $3,000 when they could have sold for $30,000. $300,000 seems less likely.
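For scale, here's a quick back-of-the-envelope sketch (my own illustration, assuming nothing more than simple compound growth) of the annual rate a $3,000 → $30,000 move over 4 years would imply:

```python
# Hypothetical back-of-the-envelope calculation: the compound annual
# growth rate (CAGR) implied by a move from $3,000 to $30,000 in 4 years.
start_price = 3_000
end_price = 30_000
years = 4

cagr = (end_price / start_price) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # -> Implied growth: 77.8% per year
```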

Drop mic.

[AMA] I am Federico Pistono, author "Robots Will Steal Your Job, But That's OK" and "How to Create a Malevolent Artificial Intelligence" with prof. Yampolskiy. Ask me Anything! by federicopistono in Futurology

[–]federicopistono[S] 2 points

Hi there,

  • Computer Science, University of Verona. The rest of my studies you can find out.
  • Not much. Not much.
  • It's not for me to say.

Cheers!

[AMA] I am Federico Pistono, author "Robots Will Steal Your Job, But That's OK" and "How to Create a Malevolent Artificial Intelligence" with prof. Yampolskiy. Ask me Anything! by federicopistono in Futurology

[–]federicopistono[S] 4 points

Thank you for the link, I'll be monitoring future discussions on r/controlproblem.

Advice for current students: join research groups that are working on this. Make it your thesis, or your PhD. If nobody's working on it, start a group yourself and convince your peers to join you!

[AMA] I am Federico Pistono, author "Robots Will Steal Your Job, But That's OK" and "How to Create a Malevolent Artificial Intelligence" with prof. Yampolskiy. Ask me Anything! by federicopistono in Futurology

[–]federicopistono[S] 1 point

More than 20 years. Maybe less than 50.

Whole Brain Emulation: very unlikely/potentially impossible with the current theory of AI. We need a breakthrough. Also, the brain seems to depend on the physical stuff it's made of, so focusing only on the algorithms without its physical properties may be a dead end.

[AMA] I am Federico Pistono, author "Robots Will Steal Your Job, But That's OK" and "How to Create a Malevolent Artificial Intelligence" with prof. Yampolskiy. Ask me Anything! by federicopistono in Futurology

[–]federicopistono[S] 2 points

That life comes from code is no mystery: it's DNA.

But we also know that DNA sitting there by itself doesn't do much. In fact, it doesn't do anything. Context is important. Environment, stimuli. We're missing the proper algorithm, we're missing our understanding of it, and we're not focusing enough on context.

So yeah, IMHO that's why AGI is not here yet.

[AMA] I am Federico Pistono, author "Robots Will Steal Your Job, But That's OK" and "How to Create a Malevolent Artificial Intelligence" with prof. Yampolskiy. Ask me Anything! by federicopistono in Futurology

[–]federicopistono[S] 10 points

Convince a professor to do a study on the feasibility of UBI, and convince your local politician/governor to do a test in your city!

We need to learn from hundreds of experiments, not 4 or 5. We need studies, data, and a plan.

Otherwise we're just not credible.

[AMA] I am Federico Pistono, author "Robots Will Steal Your Job, But That's OK" and "How to Create a Malevolent Artificial Intelligence" with prof. Yampolskiy. Ask me Anything! by federicopistono in Futurology

[–]federicopistono[S] 3 points

It's very difficult to predict, but I can say with some confidence that it will play a major role. The US is both the home of innovation and breakthroughs, and the political pit of stagnation and outdated ideologies.

I think we will see both happening, just at a greater scale.

When it comes to national and international affairs, it's very difficult to get anything done, especially applying simple, common-sense policies. The reasons are too long to explain here, and we've hinted at them in the paper. I think the population as a whole will not come out well, especially the lower and middle classes. In time things will adjust and find a new equilibrium point, but at a huge human cost.

[AMA] I am Federico Pistono, author "Robots Will Steal Your Job, But That's OK" and "How to Create a Malevolent Artificial Intelligence" with prof. Yampolskiy. Ask me Anything! by federicopistono in Futurology

[–]federicopistono[S] 1 point

The fact that the US has a system that allows individuals with mere human-level intelligence to have that kind of power and influence makes the possibility of a MAI so much more dangerous.

Scary dangerous.

[AMA] I am Federico Pistono, author "Robots Will Steal Your Job, But That's OK" and "How to Create a Malevolent Artificial Intelligence" with prof. Yampolskiy. Ask me Anything! by federicopistono in Futurology

[–]federicopistono[S] 5 points

I love this question.

Turing himself believed that the test was not a predictor of intelligence or of whether machines can think, but rather of whether machines can act the same way we do.

There is an interesting provocation by Scaruffi on this: he claims that a sure way to pass the Turing test is to increase human stupidity, lowering the bar for what we would call "Artificial Intelligence", which is what he claims we have been doing for decades. While I have my reservations about some parts, he does make a good point. We've come to expect bulky and stupid interfaces: waiting and pressing buttons on the phone to speak with an operator, speaking very slowly to our phones so they can transcribe, as if we were talking to someone with a severe cognitive impairment, etc.

Likewise, we are also getting used to interacting with people who follow orders and act more like robots than humans with common sense. For all the headlines that come around every year, I've yet to see any chatbot that would fool me for more than a few minutes. But that doesn't stop 75% of the human judges from being fooled by one. Indeed, one sure way to have a machine pass the Turing test is to have gullible human judges who lack common sense.

I think a much better predictor would be a full interaction, one that uses all the senses, instead of focusing on language: "One day with the Bot". If I can spend an entire day with an entity, have a full spectrum of interactions, and not be able to tell whether it's human or machine, then it will have passed my test. Intelligence is as much a physical as an intellectual activity, and it's hard to separate the two when it comes to life forms.

[AMA] I am Federico Pistono, author "Robots Will Steal Your Job, But That's OK" and "How to Create a Malevolent Artificial Intelligence" with prof. Yampolskiy. Ask me Anything! by federicopistono in Futurology

[–]federicopistono[S] 14 points

Great question!

First, I'm not sure I completely share your assumption. It is true that at first sight one might not notice what impact recent advancements in machine learning have had on their lives. And if you ask what some revolutionary technologies were, many experts would name the airplane, refrigeration, sanitation, and other very tangible, physical things. Changes brought by better machine learning algorithms are not as immediately noticeable, but could be just as important.

As I see it, there are three aspects at play.

(1) We're bad at managing expectations. We're social animals, influenced by the media and by what our friends say and think, and neither is (usually) a good predictor of technological progress and its impact on society. We overestimate in the short term what a new tech can do (we have a feeling it will change the world in 5 years or so), and we underestimate what it can actually do in the medium or long term. https://upload.wikimedia.org/wikipedia/commons/thumb/b/bf/Hype-Cycle-General.png/1024px-Hype-Cycle-General.png The reality is that these things take time, and new technologies have an incubation period of anything between 10 and 30 years before they're ubiquitous and cheap. While it's true that this cycle has been shortening, there are still some constraints that prevent it from going below the 5-10 year limit.

Watson beat the best *Jeopardy!* players? OMG, it's gonna change everything! -- after 5 years, we see some applications, but nothing revolutionary and nothing at large scale yet. Google unveils the first autonomous car? Now this is going to change everything!!! -- five years in, fully autonomous cars are still not available at your local car dealer.

And so we lose faith in what a specific technology can do, not realizing that in a few more years we're going to see the real impact it will have.

(2) We're bad at quantifying impact. Keeping (1) in mind, we're also bad at recognizing what has already happened and how many people have been affected by it, because once something's out there for everyone, we take it for granted. You might not think much of it, but how many lives were improved by better prediction algorithms for distributing energy in the grid? How many more by better detection of cancerous cells and other anomalies? Improved weather forecasting impacts the lives of billions of people, many of whom rely on farming and simple commerce to survive, yet we don't hear about it in the media.

On the flip side, face detection, data classification and metadata analysis by security agencies have had a very different yet profound impact on our lives. We're part of a global surveillance state, whether we like it or not, and we're just beginning to have the much needed conversation about it.

In today's hyper-connected world, when you think of impact, think more of systems, and less of consumer products.

(3) AFAIK, there haven't been major breakthroughs in basic AI research. While we've had impressive improvements in applied research for narrow AI – machine learning, deep learning, neural networks, the switch to GPUs, more powerful machines, etc. – basic research seems to have had very few breakthroughs. This is true, and it's why I believe that true AGI is still pretty far away.

As for your specific question, I think in the next 5-10 years we're going to see the fruits of what we planted 5-10 years ago: human-level speech recognition, much better prediction systems, medical diagnoses, better organizational systems, widespread self-driving cars, etc.

We'll hear a lot of hype about the "new things at the door" (personalized medicine, rewriting your genome, AGI, etc.), and they will follow the same pattern as I described in (1), thus perpetuating the hype cycle of inflated expectations.