Shiningc

Shiningc t1_j0kxott wrote

I tend to think that an AI that the rich or the corporations can easily contain or control won't be a remarkable one, just as a remarkable human being isn't easy for a corporation or the like to contain. I mean, it is possible, depending on how such a being is manipulated by its masters.

2

Shiningc t1_j0k5mxk wrote

>For instance, Altman said that if OpenAI could master artificial general intelligence, which is machine intelligence that can solve issues just as well as a person, the company might “catch the light of all future value in the universe.”

We're not even close to having Artificial General Intelligence, because the entire approach is wrong. People tend to think that if we feed an AI enough "data", it will somehow magically become intelligent enough to achieve sentience. But that's not how it works. Or even worse, they think that intelligence is data + a fixed set of instructions.

This whole dystopian image of a super-intelligent AI lording over us and forcing us to do nothing but manual labor is the same idea as a supposedly super-intelligent or super-talented human being lording over us. Either people will revolt or people will submit, depending on what they think of it.

Another idea is that an AI is going to be "cold", amoral, devoid of "feelings", mechanically pursuing only the "task" at hand. Well, that's entirely the result of the idea that an "AI" is going to be nothing but data + a fixed set of instructions. But how can a sentient being with supposed free will be devoid of a moral system? By that I mean an independent moral system that it develops on its own over time. A sentient AI is going to have to choose for itself the best moral course of action to take.

If we ignore that, then we're saying that an AI is dumb and blind and only follows a fixed set of instructions. But that's not very "intelligent" in a general sense. That AI is only following the instructions of some other master.

13

Shiningc t1_j0iqv5t wrote

You can’t predict the future from data, because data is a record of past events. No matter how many past events you gather, they won’t predict the future. You just end up with something that repeats the past.

1

Shiningc t1_j05n067 wrote

Suppose that the AI gets super intelligent and achieves a level of self-awareness and creativity that makes it capable of doing new things instead of just repeating something pre-programmed.

Why would you assume that it’ll be malevolent? What purpose would that serve, other than to mess with humans? That seems incredibly petty and unintelligent to me.

If there’s going to be a malevolent AI, then you can be sure there’ll also be “good” AIs to counter the bad ones. Just like humans, where there are good people and bad people. If there’s ever going to be such an AI, it’ll be indistinguishable from a super-intelligent human.

1

Shiningc t1_ivt8ech wrote

The whole point of morality is that we go against our genetic imperatives. Our genes may tell us that we're hungry and should eat, but morality tells us that, say, we should not steal or kill animals or whatever.

It may be possible to pinpoint the genes that enable or disable certain moral behavior. But what's to say that the person wouldn't eventually become aware of that fact? He becomes aware that a part of his genes is telling him to do something. He starts to think rationally about that fact. He starts to think that the morality his genes are telling him to have is deplorable. The fact that we have the ability to think rationally means that we can rise above our genes.

So genes may tell us to have certain moral behavior. But morality is actually based on rationality. We may or may not listen to our genes. We may actively go against them.

2