spryes t1_j9h2m5j wrote
It seems that he confidently believes we will all die once AGI/ASI is reached, but I don't see why *all* humans dying is more likely than only *some*. Why is it guaranteed to cause catastrophic destruction rather than only minor destruction, especially since nothing can be infinitely powerful?
For example, a common analogy is that ASI will be to humans what humans are to ants: we don't care if we kill ants while pursuing our goals, but we don't specifically go out of our way to kill them. Many ants have died because of us, yet a ton are still alive. I think that's the most likely scenario once ASI becomes uncontrollable.
I also think it will leave our planet/solar system and pursue its goals elsewhere, since Earth may not be adequate for it to continue, effectively just leaving us behind. Humans as raw material won't be as useful as other material it can find out in space somewhere.
spryes t1_jefa436 wrote
Reply to What if language IS the only model needed for intelligence? by wowimsupergay
AI is currently a less sensory Helen Keller