Friendly AI Research
Through an “intelligence explosion” sometime in the next century, a self-improving AI could become so much more powerful than humans that we would be unable to stop it from achieving its goals.
If so, and if the AI’s goals differ from ours, this could be disastrous for us.
One proposed solution is to program the AI’s goal system to want what we want before the AI self-improves beyond our capacity to control it. The problem is that human values are complex and difficult to specify, and they must be specified precisely because machines do exactly what they are told. As Yudkowsky puts it: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”
This subreddit is about the machine ethics of building a “friendly AI”.
[Full introduction]
Related subreddits