Submitted by Liberty2012 t3_11ee7dt in singularity
Surur t1_jaen1h5 wrote
Reply to comment by Liberty2012 in Is the intelligence paradox resolvable? by Liberty2012
> It doesn't take into account, though, our potential inability to evaluate the state of the AGI.
I think the idea would be that the values we teach the AI while it is still under our control will carry forward once it no longer is, much like the values we teach our children, which we hope they will still exhibit as adults.
I guess that if we make sticking to human values the terminal goal, we will get goal preservation even as intelligence increases.
Liberty2012 OP t1_jaetcvy wrote
Conceptually, yes. However, human children sometimes grow up not adopting the values of their parents and teachers; their values change over time.
So we have a conflict: we want AGI/ASI to be humanlike, yet at the same time, under certain conditions, not humanlike.