Submitted by TheLastSamurai t3_y9lj2u in Futurology
4 results for 80000hours.org:
Tonkotsu787 t1_j9rolgt wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
optimistic view. Eliezer actually mentions him in the Bankless podcast you are referring to. [This interview](https://80000hours.org/podcast/episodes/paul-christiano-ai-alignment-solutions/) of him is one of the most interesting talks about AI I’ve ever listened to
NecessaryMajestic647 t1_jedcp6b wrote
research, but they haven't been translated to English. Here's some links: [https://chinai.substack.com/p/chinai-156-ai-risk-research-in-china](https://chinai.substack.com/p/chinai-156-ai-risk-research-in-china) [https://80000hours.org/career-reviews/china-related-ai-safety-and-governance-paths/](https://80000hours.org/career-reviews/china-related-ai-safety-and-governance-paths/) [https://carnegieendowment.org/2022/01/04/china-s-new-ai-governance-initiatives-shouldn-t-be-ignored-pub-86127](https://carnegieendowment.org/2022/01/04/china-s-new-ai-governance-initiatives-shouldn-t-be-ignored-pub-86127)
GambitGamer t1_j14kezg wrote
Reply to comment by NetQuarterLatte in State Orders NYC To Drop Foie Gras Sales Ban, Says Ban Violates NY Agricultural Law by Gato1980
Netherlands. It could have resulted in a very different kind of moral landscape (from https://80000hours.org/podcast/episodes/will-macAskill-what-we-owe-the-future/). That being said, I agree that if we make better moral choices easier for people, they’ll make