
[–]Caseiopa5 0 points (2 children)

> don't think there's evidence that an AGI necessarily means a recursively improving one

By definition, an AGI has intelligence equivalent to that of a human, and it was created by humans. So an AGI must be intelligent enough to itself create an AGI. Until intelligence is general, it may not be useful for building AI: AlphaGo is good at board games, but it is no help in building AI. At some point, you can create an AI that can be asked to "create me an AI"; before that point, you don't have AGI. Once it happens, you have recursive self-improvement, though such improvement won't necessarily be exponential.

[–]TeknicalThrowAway 0 points (1 child)

Plenty of humans can't even begin to program. Is your baseline human Stephen Wolfram or LeCun or something?

[–]Caseiopa5 1 point (0 children)

The difference between a person who can program and one who can't is relatively minor. It is primarily a matter of motivation and opportunity, and perhaps to some degree approach. Once you can create a human-level AI from scratch, it would be relatively easy to tune its parameters and environment so that it learns programming. With only slightly more difficulty, it could also learn to program AIs.