Submitted by GorgeousMoron t3_1266n3c in singularity
smooshie t1_je9cbc7 wrote
"I think that would be a mistake. A mistake for humanity. A mistake for me. A mistake for you." - GPT-4
https://i.redd.it/a5jx7740zuqa1.png
Couldn't agree more.
GorgeousMoron OP t1_je9zgfl wrote
It "thinks"? How did it come to "think" the way it does? Oh, that's right: because humans gave it incentives to do so. We've already seen what Bing chat was capable of early on.
The whole point of Yudkowsky's article is the prospect of true ASI, which, by definition, is not going to be controllable by an inferior intelligence: us. What then?
I'd argue we simply don't know and we don't have a clear way to predict likely outcomes at this time, because we don't know what's going on inside these black box neural nets, precisely. Nor can we, really.