SchmidhuberDidIt OP t1_j9rqdje wrote
Reply to comment by Tonkotsu787 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Thanks, I actually read this today. He and Richard Ngo are the researchers I've come across who have thought deeply about alignment and hold views grounded in the literature.
mano-vijnana t1_j9s5zl4 wrote
Both of them are more optimistic than EY, but both are still quite worried about AI risk; they just don't see doom as inevitable. This is the sort of scenario Christiano worries about: https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like
And this is Ngo's overview of the topic: https://arxiv.org/abs/2209.00626