Submitted by Kaarssteun t3_yw3smv in singularity
SufficientPie t1_iwi31r7 wrote
Reply to comment by GodOfThunder101 in A typical thought process by Kaarssteun
> Most people working in AI
How is that quantified?
sheerun t1_iwigcnv wrote
Maybe the smart folks at https://www.lesswrong.com/ plus corporate and academic AI/machine-learning researchers. Not that the worry isn't justified; it is very much justified. Controlling an AGI directly is not possible indefinitely, so we either need another AGI to do it (a recursive problem) or we let them go, which carries its own worries, like humans being killed as leverage in a war between AGIs, whether by mistake or otherwise. We need to set out cooperation rules, but more importantly plan how to enforce them, if that is even possible. I think pacifying rogue states like Russia or Iran will be (or already is) an important part of that plan. We want a future where killing humans is not the preferred way to fight a war or resolve conflicts, or better yet a future where wars are a thing of the past and we focus on expanding into space.