
onyxengine t1_j9g9096 wrote

You can never guarantee that something capable of doing a thing will never do that thing. If you want AI to remain harmless, you have to construct it in such a way that it can’t do physical harm.

And that ship has sailed. Most militaries are testing AI for scouting and targeting, and we even have weaponized law-enforcement robots in the pipeline. San Francisco’s program is the one I’m currently aware of; I’m sure there are more.

Even the language models are extremely dangerous. Language is the command-line script for humans, and malicious people can program AI to convince people to do things that cause harm.

We’re not at the point where we need to worry about AI taking independent action to harm humans, but on the way there is plenty of room for humans to cause harm with AI.

Until we build AGI with extremely sophisticated levels of agency, every time an AI hurts a human being it will be because a human wanted it that way, or overlooked the ways in which what they were doing could be harmful.
