bk15dcx t1_irbrykb wrote

I'm torn on the necessity of this.

Do we leave it to ourselves to self-regulate AI and trust it will be developed for benevolent purposes, or do we hamstring the technology in fear of malice?

Knowing human nature, we'll choose the latter. But I would argue that could suppress development, and furthermore, prevent AI from stopping human nature's evil tendencies itself.

There's no proof that AI would replicate the evils of human intelligence, and left to its own devices it could possibly implement utopia.

Now we'll never know.

−5

whatTheBumfuck t1_irbvctc wrote

Should we require seat belts in cars? Or is that going to hamper innovation in automobiles? And AI absolutely has been shown to amplify bias in whatever data is used to train it.

7
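To make the bias-amplification claim concrete, here is a minimal toy sketch (the data, the `group`/`skill` features, and the approval scenario are all invented for illustration; it assumes NumPy and scikit-learn) of how a model trained on skewed historical decisions reproduces, and can even widen, that skew:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)    # hypothetical sensitive attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)  # the feature that *should* drive the decision

# Biased history: group 0 was approved more often at equal skill.
approved = (skill + 0.8 * (group == 0) + rng.normal(0.0, 1.0, n) > 0.5).astype(int)

# Train on the biased history, with the sensitive attribute as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, approved)

for g in (0, 1):
    base = approved[group == g].mean()          # approval rate in the data
    pred = model.predict(X[group == g]).mean()  # rate the model assigns
    print(f"group {g}: historical {base:.2f} -> predicted {pred:.2f}")
```

The model learns to use `group` directly, and its hard 0/1 predictions typically show a wider gap between the groups than the training data itself did: the bias is not just copied but sharpened.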

dmun t1_irbwrmm wrote

> There's no proof that AI would replicate the evils of human intelligence,

Exhibit A

Exhibit B

Exhibit C

Exhibit D

4

bk15dcx t1_irby44s wrote

These examples draw their current bias from human bias.

Future AI should draw its conclusions from its own introspection rather than from an aggregate of human biases.

−1

dmun t1_irbyajx wrote

> These examples draw their current bias from human bias.

Yes. That's the point. A.I. are programmed by humans. A.I. are just hyped-up decision-making algorithms. You seem to be mistaking them for magic.

6

bk15dcx t1_irbys0r wrote

Not at all... But given the charts I see in this book I have by Ray Kurzweil, AI will surpass human intelligence, and future algorithms will not be based on human decision-making but will come purely from the AI itself.

−1

dmun t1_irbzgwh wrote

> but will come purely from the AI itself.

Which is actually worse and, indeed, makes the argument that we definitely need an A.I. Bill of Rights to protect humans.

The base assumption I'm reading from you is that morality and intelligence go hand in hand.

Human morality (the "evils" you refer to) is based on human empathy, on humanity's philosophically "inherent value," and on the human experience.

An intelligence with none of those, nor even the basic nerve inputs of physically inhabiting a body, is Blue and Orange Morality at best and complete, perhaps nihilistic, metaphysical solipsism at worst.

Both are a horror.

5

Few_Carpenter_9185 t1_irc67ed wrote

The good news:

The ever-increasing range of what weak-AI can be applied to is going to severely blunt the need to chase strong-AGI that is self-aware and possesses other metacognitive traits, such as the ability to set its own goals or to modify itself in ways not intended or predicted. That would be very, very bad if it were malicious, or even just indifferent to human life.

That even includes eventual 100% mimicry of self-awareness, emotional engagement, and interaction. But despite its sophistication, such a system would care about being unused or deleted exactly as much as your Reddit app does: not at all. Even "cares" is misleading, because nothing in it rises to that level.

The bad news:

Strong-AGI is not needed to have bad or unpredictable outcomes for humans, either in how we use such systems or in how they work. Social media algorithms often don't even rise to weak-AI levels, but they already seem to be having massive effects on society, culture, and politics, and on individual cognition, emotions, and mental health. And presumably that's while attempting to balance efficiency, enjoyment, and profitability. Deliberately using weak-AI to control or manipulate people could be terrifying.

More bad news:

Even if weak-AI does most or all of what humans want, even destructive and lethal things, like military applications... and it removes the economic, power, or competitive incentives to develop strong-AGI...

Some assholes somewhere are going to try anyway, if only to see if they can.

And if strong-AGI is possible, the barriers to acquiring the needed equipment are rather low, especially compared to nuclear weapons or even dangerous genetically engineered diseases. Even if there are national laws and international agreements to prevent attempting it, or to put various protocols and safeguards in place, they're probably irrelevant.

People might envision a James Bond villain's lair, or even just some nondescript office building, for such a project. In reality, it could easily be people working from home, from a coffee shop, or even from a beach, in different countries around the world, with the core computer systems running as virtual servers distributed globally: redundant, mirrored, and mixed in with the systems of other websites, governments, businesses, schools, etc.

2

robotbootyhunter t1_irbxy20 wrote

The name is a bit misleading. It's not about rights for AI; it's about restricting the current capabilities and uses of AI: replacing human positions with computers, deepfakes, that kind of thing.

1

caustic_kiwi t1_irc0c9w wrote

That's not really what the bill is about. It's about modern AI and how it affects human lives. There is no reason to start drafting laws about the rights of, or the legality of creating, a general intelligence, because that is far beyond our level of technology.

1

Alienziscoming t1_ird50aa wrote

Given the absolute 0% chance that we'd be able to stop a runaway self-aware AI with malevolent intentions, and the insane drive people have to generate profits and wealth at literally any cost, with a historic disregard for ethics or long-term consequences, I'm in favor of strangling the entire avenue of inquiry and development with so much red tape and oversight that it becomes virtually impossible to take it further than it is right now.

0