Submitted by Liberty2012 t3_11ee7dt in singularity
Liberty2012 OP t1_jae75ik wrote
Reply to comment by RabidHexley in Is the intelligence paradox resolvable? by Liberty2012
> Just as a hypothetical, barely-reasonable scenario
Yes, I can perceive that hypothetical. But I have little hope for it based on any reasonable assumptions we can make about what progress would look like, given that at present AI is still not an escape from our own human flaws. FYI - I expand on that in much greater detail here - https://dakara.substack.com/p/ai-the-bias-paradox
However, my original position was an attempt to resolve the intelligence paradox, which proponents of ASI assume will simply be a containment problem at the moment AGI arrives. If ASI is the goal, I don't perceive a path that takes us there without running into the logical contradiction.