IcebergSlimFast

IcebergSlimFast t1_itan89u wrote

One counter to your counter is that, once in power, a dictator planning on a nearly endless personal time horizon (while also armed with incredibly powerful surveillance and psychological-influence tools) might be better at avoiding the kinds of rash decisions that have led so many dictators to premature deaths.

Another counter is the Kim family, who’ve managed to keep an iron grip on North Korea for nearly 75 years and counting, even without the advantages of personal immortality.

Edit: All that said, I’m not 100% convinced that dangers like the immortal dictator are sufficient to make immortality a net-negative for humanity. But I definitely believe there are enough potentially serious safety issues to raise real concern.

However, I also believe that, like AGI/ASI, major life-extension technologies will inevitably be developed. So basically, we may eventually need to fund some degree of 'immortality safety' research for the same reasons we need AI safety research.


IcebergSlimFast t1_it12iml wrote

Re-reading the post you originally responded to, I apparently missed or skimmed over “replace ALL work” when I first read it. I agree that it’s not at all unreasonable to doubt 100% automation in the near future.

What I think is certain (or very nearly so) is that starting in the fairly near future, likely within a 10-year time-frame, AI-enabled automation will cause substantial disruption to global labor markets and workers. I think it's also reasonable to predict that nearly all jobs will be capable of being automated within a similar time-frame. However, I agree that full automation will take longer.
