Submitted by kdun19ham t3_111jahr in singularity
Sam Altman recently credited Eliezer Yudkowsky for his contributions to the AI community, yet Yudkowsky regularly argues that alignment has failed and that humans will be dead within 10 years.
Altman paints a much rosier picture of AI creating massive wealth and a utopia-like world for future generations.
Do they both have sound arguments? Has Altman ever commented on Yudkowsky’s pessimism? Is one viewed as more credible in the AI community?
Asking as a member of the general public who, terrifyingly, happened upon Yudkowsky's doom articles/posts.
FusionRocketsPlease t1_j8ezpqa wrote
Why does everyone assume that AGI will be an agent and not just passive software like any other?