ReadSeparate t1_iv3zj0j wrote

I’m not assuming it’ll be sentient, I’m just saying an Oracle ASI is just as dangerous as one with agency. It MIGHT be sentient. Or it might NOT be sentient but still dangerous, e.g. the paper clip maximizer scenario.

> Okay then the owners will probably use this Non-sentient tech to take care of themselves

Like just AGI, you mean? Yeah, I agree with that of course. But with ASI, again, that seems short-sighted. If Google makes human-level AGI that’s just as smart as, say, Einstein, then yeah, of course they’ll use it to get richer. But if they create something that makes Einstein look like an ant, they’d be foolish to use it in such a way.

1

ihateshadylandlords t1_iv40poj wrote

> I’m not assuming it’ll be sentient, I’m just saying an Oracle ASI is just as dangerous as one with agency. It MIGHT be sentient. Or it might NOT be sentient but still dangerous, e.g. the paper clip maximizer scenario.

Meh, the dangers of an ASI can be discussed in another thread. We were initially talking about how an ASI might manifest, so this is getting off course.

>Like just AGI, you mean? Yeah, I agree with that of course. But with ASI, again, that seems short-sighted. If Google makes human-level AGI that’s just as smart as, say, Einstein, then yeah, of course they’ll use it to get richer. But if they create something that makes Einstein look like an ant, they’d be foolish to use it in such a way.

Okay. Just don’t be surprised if companies keep doing what they’ve been doing for literally thousands of years and use their products to turn a profit.

1