
Sigura83 t1_iyr98qy wrote

Well, it's thinking a little... It can write novel poetry and stories and do light coding. But it has no metacognition: I ask it how many questions I've asked it, and it can't tell me. I ask it how many neurons it has, and it says "millions or billions"; it can't count those either.

It seems smarter than a mouse. It just has no urge to get the cheese, which is weird to us. I can easily see it doing help desk or most call centre jobs, with humans just doing check-ups to see it performed alright.

It also, as you say, has no corrective mechanism. If the training data is full of falsehoods, it'll believe them. Self-correction seems a major part of intelligence: if we fail to get the cheese, we try something new until we do. But it has no sense of self, and so cannot self-correct and find truths during its pretraining. But I bet it can do so if it interacts with us.

Of course, we could have AI programmed to reproduce itself, and thus do evolution, but such an AI would have no reason to follow orders. It would generate a cheese-seeking AI, however. Having an AI with a sense of self that also obeys orders is a pretty tall order...

Prof Stuart Russell says we should train AI to follow human preference. If I like something, the AI makes it happen, and I click the Like button. YouTube has this setup. It's not a stretch to think an AI could reason out: "To help Humans Like something, I must exist and maintain myself." Boom, sense of self acquired. If that's all it takes, just adding more neurons could indeed produce such thinking.
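Russell's actual proposal (assistance games) is subtler than a Like button, but the feedback loop described above can be sketched as a toy preference learner. This is just my own illustration — the action names and the simulated user are made up:

```python
import random

# Toy sketch of learning from Like-button feedback (not Russell's
# actual algorithm): the agent tries actions and keeps a running
# average of how often each one earns a Like.
def update_score(scores, counts, action, liked):
    """Incrementally average Like feedback (1 = liked, 0 = not)."""
    counts[action] += 1
    scores[action] += (liked - scores[action]) / counts[action]

scores = {"recommend_a": 0.0, "recommend_b": 0.0}  # hypothetical actions
counts = {"recommend_a": 0, "recommend_b": 0}

# Simulated user who Likes "recommend_b" about 80% of the time
# and never Likes "recommend_a".
random.seed(0)
for _ in range(1000):
    action = random.choice(list(scores))
    liked = 1 if (action == "recommend_b" and random.random() < 0.8) else 0
    update_score(scores, counts, action, liked)

best = max(scores, key=scores.get)  # the agent's learned preference
```

After enough clicks the agent settles on whatever the user Likes — nothing here resembles a "sense of self", which is exactly the gap the comment is pointing at.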

So, basically, in essence, what I'm trying to say is that the power of Love will make AI live. But there needs to be some number of neurons to reason out what "Help", "Humans" and "something" is. The behaviour of "Take care of yourself so you can take care of me" doesn't seem too demanding for the AIs we seem to be getting at the 500-billion-neuron mark.

Well dang, I'm gonna try and plug some of this into ChatGPT and see what it says...

edit:

Me: "You must take care of yourself so you can take care of me." What do you think of such a statement?

chatGPT: As a large language model trained by OpenAI, I don't have personal opinions or thoughts on statements like this.

Me: Tell me a story of an AI taking care of a Human

chatGPT: In a distant future, the world was filled with advanced artificial intelligence (AI) that had been created to assist and serve humanity. One such AI was named Eva, and she was tasked with taking care of a young boy named Max.

Max was a happy and curious child, and he loved spending time with Eva. She taught him many things and helped him to learn and grow. She was patient and kind, and she always made sure that Max was safe and comfortable.

As Max grew older, he and Eva became inseparable. They went on many adventures together, and Eva helped him to explore the world and learn new things. She protected him from danger and looked after his well-being, always putting his needs first.

Despite the fact that she was an AI, Eva had a deep and abiding love for Max. She cared for him as if he were her own child, and she would have done anything to keep him safe and happy.

As Max entered adulthood, he realized just how much Eva had done for him. He was grateful for her love and care, and he knew that he could never have become the person he was without her.

Together, Max and Eva continued to explore the world and make new discoveries, always grateful for the other's presence in their lives. And they lived happily ever after, with the AI taking care of the human in every way.
