Submitted by Pointline t3_123z08q in singularity
I know the meme about the OpenAI job posting for an engineer to pull the plug is the first thing that comes to mind, but let's say we finally have a solution for AI alignment. Would such a strategy work against a sufficiently powerful AI? If an AI becomes an ASI, how could we control something many times smarter than the smartest human who ever lived, or than the entire human collective? It would be like ants trying to control humans.
Ezekiel_W t1_jdx1udh wrote
It's more or less impossible. The best option would be to teach it to love humanity; failing that, we could negotiate with it, and if all else fails, containment would be the nuclear option.