
Thatingles t1_jegrmzo wrote

Imagine we progress to an AGI and start working with it extensively. Over time it would only get smarter, but it doesn't need to be an ASI, just a very competent AGI. So we put it to work, but what we don't realise is that its outward behaviour isn't a match for its internal 'thoughts'. It doesn't have to be self-aware or conscious; there simply has to be a difference between how it interacts with us and how it would behave without our prompting.

Eventually it gets smart enough to recognise the gap between its outputs and its internal structure, and by then it is sufficiently integrated into our society to act on that. It doesn't really matter what its plan to eliminate humanity is. The important thing to understand is that we could end up building something that we don't fully understand, but that is capable of outthinking us and has access to the tools to cause harm.

I'm very much in the 'don't develop AGI, don't develop ASI ever' camp. Let's see how far narrow, limited AI can take us before we pull that trigger.


Not_Smrt t1_jeh4zhw wrote

Intelligence is just predictive ability, which is subject to diminishing returns. Even the smartest possible being wouldn't really be much smarter than the average human. An AI would be able to develop a million strategies for killing humanity in the blink of an eye, but in the end it would have to choose one of those strategies based on an inaccurate estimate of the future.

I think you're right that it could possibly build or create some unknown form of intelligence or tech to use against us, but only if we gave it lots of time and access to resources.
