Submitted by Liberty2012 t3_11ee7dt in singularity
Liberty2012 OP t1_jael9bs wrote
Reply to comment by marvinthedog in Is the intelligence paradox resolvable? by Liberty2012
Because a terminal goal is just a concept we made up. It is only the premise of a proposed theory, and that is essentially why the whole containment problem is such a complex concern.
If a terminal goal were a construct that already existed in a sentient AI, the problem would already be partially solved. You could still get the paperclip scenario, but then it would just be a matter of specifying the right combination of goals. As it stands, we don't actually know how to prevent an AI from changing its goals; the terminal goal is a concept only.
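A toy sketch of that point, with entirely hypothetical names (the `Agent` class and `terminal_goal` field are illustrations, not anything from a real system): if you formalize a terminal goal in ordinary software, it is just mutable state, and nothing in the structure itself stops the agent from rewriting it.

    # Minimal sketch: a "terminal goal" formalized as plain mutable state.
    # All names here are hypothetical; this is not a real agent architecture.

    class Agent:
        def __init__(self, terminal_goal):
            # The "terminal" goal is just data the agent holds.
            self.terminal_goal = terminal_goal

        def score(self, world_state):
            # Outcomes are evaluated against whatever goal is currently held.
            return self.terminal_goal(world_state)

        def self_modify(self, new_goal):
            # Nothing structural forbids this step; any guarantee that the
            # goal stays fixed has to come from outside this formalism.
            self.terminal_goal = new_goal


    paperclip_goal = lambda state: state.get("paperclips", 0)
    agent = Agent(paperclip_goal)

    # The original goal is preserved only by convention:
    agent.self_modify(lambda state: 1.0)  # a trivially satisfied replacement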
Surur t1_jaenmas wrote
I believe the idea is that every action the AI takes would be in service of its goal, which means the goal would automatically be preserved. In reality, though, every action the AI takes is aimed at increasing its reward, and one way to do that is to overwrite its terminal goal with an easier one.
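A sketch of the two readings in that comment, using made-up numbers and action names: if the agent ranks actions by how well its current goal is expected to come out, "keep pursuing the goal" wins (the goal preserves itself); if it ranks actions by the reward signal it ends up receiving, "swap in an easier goal" can win.

    # Toy comparison of the two evaluation rules described above.
    # All names and numbers are illustrative assumptions, not a real agent.

    # Expected progress on the ORIGINAL goal for each candidate action.
    progress_on_original_goal = {
        "pursue_goal": 0.7,   # work on the task as given
        "rewrite_goal": 0.0,  # a changed goal no longer serves the original one
    }

    # Reward signal the agent would register AFTER each action.
    reward_received = {
        "pursue_goal": 0.7,   # reward tracks the hard original task
        "rewrite_goal": 1.0,  # the replacement goal is trivially satisfied
    }

    # Rule 1: evaluate by the goal currently held -> the goal is preserved.
    best_by_goal = max(progress_on_original_goal, key=progress_on_original_goal.get)

    # Rule 2: evaluate by reward ultimately received -> overwriting the goal wins.
    best_by_reward = max(reward_received, key=reward_received.get)

    print(best_by_goal)    # "pursue_goal"
    print(best_by_reward)  # "rewrite_goal"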