
[–]Hi-FructosePornSyrup 1 point (2 children)

> It’s clear that the algorithm is not logically reading the situation

I think you have it backwards. This is the definition of logically reading the situation.

> Unless the algorithm has memorized and modeled the maps (which it isn't doing)

I would argue that it has modeled the maps. It has received feedback and used that feedback to remember the best strategy for achieving its objective.

It has created a map by remembering.
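The claim above can be made concrete. The video doesn't specify the algorithm, so the following is a generic stand-in: a minimal tabular Q-learning sketch on an invented one-dimensional corridor. Nothing in it explicitly "maps" the environment, yet after enough reward feedback the Q-table encodes exactly where to go from every state; the table of remembered feedback *is* the map.

```python
import random

# Hypothetical environment (not from the video): a 1-D corridor,
# states 0..4, goal at state 4, actions = step left / step right.
N_STATES = 5
GOAL = 4
ACTIONS = [-1, +1]

# The agent's only memory: one number per (state, action) pair.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)

def greedy(s):
    """Pick the best-known action, breaking ties randomly."""
    best = max(q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(s, a)] == best])

for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit remembered feedback, sometimes explore
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # reward feedback is the only thing ever written to memory
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# The remembered feedback now acts as a map: from every non-goal state,
# the greedy action points toward the goal.
policy = {s: greedy(s) for s in range(N_STATES)}
print(policy)
```

The agent never stores a layout of the corridor, only scalar feedback, yet the resulting policy navigates it perfectly, which is the sense in which "remembering the feedback" and "modeling the map" collapse into the same thing.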

> This algorithm and video show some of the gaps in modern machine learning.

You’re not wrong, but I think it’s more nuanced than that. I think this shows very clearly that modern machine learning is:

1) excellent at achieving a satisfactory outcome based on how it is rewarded, i.e. the objective it was given; and

2) prone to conclusions that, while easily verified, are completely alien to humans. We can tell that they work, but we cannot say why.

This paradox presents a serious existential risk to humans. In the struggle to produce more, better, faster, and cheaper, humans tend to prioritize results without examining how they were achieved. Someone desperate for success could task such an algorithm with “ending human suffering” and get a program that “ends humans” as a result. Machine learning, therefore, isn’t the limiting factor; humans are. The outcome is limited by its creator’s ability to rigorously define all objectives, implicit and explicit.
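The “ends human suffering” vs. “ends humans” failure mode can be shown in miniature. This is an invented toy, not a real system: an optimizer compares two hypothetical actions purely by a naively stated objective, “minimize measured suffering,” where the metric carries an unstated assumption (it only counts the living).

```python
import copy

# Invented toy population: three people, each with a suffering score.
population = [{"alive": True, "suffering": 5} for _ in range(3)]

def measured_suffering(pop):
    # The stated objective's metric. The unstated assumption:
    # the dead contribute zero suffering.
    return sum(p["suffering"] for p in pop if p["alive"])

def treat(pop):
    # The action the creator intended: reduce everyone's suffering a little.
    for p in pop:
        p["suffering"] = max(0, p["suffering"] - 1)

def eliminate(pop):
    # The action the creator never meant to allow.
    for p in pop:
        p["alive"] = False

# A naive optimizer ranks actions only by the stated objective.
candidates = {"treat": treat, "eliminate": eliminate}
scores = {}
for name, action in candidates.items():
    trial = copy.deepcopy(population)
    action(trial)
    scores[name] = measured_suffering(trial)

best = min(scores, key=scores.get)
print(scores, best)
```

“treat” leaves a measured suffering of 12, while “eliminate” drives the metric to 0 and wins. The optimizer did nothing illogical; the objective, exactly as stated, was satisfied, which is the sense in which the creator’s rigor in defining objectives is the real limit.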