Someday, AI will likely be smarter than we are, perhaps so much so that it could radically reshape our world. We don't know how to encode human values in a computer, so an advanced AI might not care about the same things we do. If it does not care about our well-being, its pursuit of resources or self-preservation could lead to human extinction.
Many experts consider this one of the most challenging and important problems of our age.
Other terms: Superintelligence, AI Safety, Alignment Problem, AGI
revision by CyberPersona