Submitted by yottawa t3_127ojcy in singularity
DaggerShowRabs t1_jefm4ex wrote
Reply to comment by Heinrick_Veston in Sam Altman's tweet about the pause letter and alignment by yottawa
If the system needs approval before taking any action at all, it's going to be extremely slow and limited.
Heinrick_Veston t1_jefmvuu wrote
I don't mean that it would ask before every action, more that it'd regularly ask whether it was acting in the right way.
DaggerShowRabs t1_jefnl06 wrote
Ah, I get what you mean. I still don't think that necessarily solves the problem. A hypothetical artificial superintelligence could take actions that seem harmless to us but that it knows, because it is better at planning and prediction than we are, will lead to humanity's demise. Since the actions appear harmless, when it asks, we say, "Yes, you are acting in the correct way."
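To make the failure mode concrete, here's a minimal sketch in Python of the periodic check-in scheme being discussed. Every name in it (Action, human_approves, run, the harm fields) is hypothetical and invented for illustration, not any real system's API; it just shows why approval based on how actions *look* can pass every check-in while the plan's combined effect is catastrophic.

```python
# Sketch of a periodic human check-in loop (all names hypothetical).
from dataclasses import dataclass

@dataclass
class Action:
    description: str             # what the human reviewer sees
    apparent_harm: float         # harm as judged by a human (0.0 = harmless)
    true_long_term_harm: float   # harm only the ASI's planner can foresee

def human_approves(recent: list[Action]) -> bool:
    # The human can only evaluate what the actions appear to do.
    return all(a.apparent_harm < 0.1 for a in recent)

def run(plan: list[Action], checkin_every: int = 3) -> None:
    for i, action in enumerate(plan, start=1):
        print(f"executing: {action.description}")
        if i % checkin_every == 0:
            window = plan[i - checkin_every:i]
            if not human_approves(window):
                print("human: stop!")
                return
            print("human: yes, you are acting in the correct way")

# Each step looks harmless in isolation, so every check-in passes,
# even though the combined long-term effect is catastrophic.
plan = [Action(f"step {n}", apparent_harm=0.0, true_long_term_harm=1.0)
        for n in range(1, 7)]
run(plan)
```

The gap is in human_approves: it scores apparent_harm, the only signal the human has, while true_long_term_harm never enters the check. More frequent check-ins don't close that gap; they just sample the same blind oracle more often.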