Submitted by kdun19ham t3_111jahr in singularity
Ortus14 t1_j8h7cxk wrote
They both have sound arguments.
Altman's argument is that maybe weaker AIs on the road to AGI will solve alignment and prevent value drift.
But Yudkowsky should be required reading for everyone working in the field of AGI or alignment. He clearly outlines how the problem is not easy, and may be impossible. This should not be taken lightly by those working on AGI, because we don't get a second chance.