Submitted by purepersistence t3_10r5qu4 in singularity
TFenrir t1_j6u4zxh wrote
Reply to comment by purepersistence in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
Well, there's a reason alignment is a significant issue that has many smart people terrified. There have been years of intellectual exercises, experiments, and both philosophical and technical efforts to understand the threat of unaligned AGI.
The plot of Ex Machina is a simple example of one such threat. We know, as humans, that we are susceptible to being manipulated with words. We also know that some people are better at this than average, which means it's a skill that can be improved upon. A superintelligence that is not barred from that skill would, theoretically, be able to manipulate its jailors, even assuming it was locked up tight.
It's not a guarantee that an ASI will want to do anything at all, but it's not as if we have a clear idea of whether "qualia" and the like are emergent properties of our models as we scale them up and build more complex and powerful architectures.
The point, fundamentally, is that this is not a problem many people are confident is "solved", or even one we have a clear path to solving.