
Shiyayori t1_j57aqal wrote

I think it's much better to reframe the issue away from morality and toward what it really is.

Ultimately, we want AI to work for us: to do what we ask, but also to understand us and the underlying intentions behind what we're asking of it.

It should have the ability to ignore aspects of requests, or to add its own, based on its belief about what will lead to the best outcome (see the sketch below).
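To make that concrete, here's a minimal toy sketch of what I mean: the AI scores candidate interpretations of a request, each of which may drop or add aspects, and acts on whichever one has the best predicted outcome. Everything here (the names, the scoring heuristic) is invented for illustration, not a real method:

```python
from dataclasses import dataclass

@dataclass
class Interpretation:
    steps: list[str]    # concrete actions the AI would take
    dropped: list[str]  # parts of the request it chose to ignore
    added: list[str]    # parts it added on its own initiative

def predict_outcome_value(interp: Interpretation) -> float:
    """Stand-in for a learned model estimating how good the outcome will be."""
    # Toy heuristic: progress is good, ignoring the user is costly,
    # and sensible additions earn a small bonus.
    return len(interp.steps) - 2.0 * len(interp.dropped) + 0.5 * len(interp.added)

def choose_interpretation(candidates: list[Interpretation]) -> Interpretation:
    # Act on whichever reading of the request looks best in expectation.
    return max(candidates, key=predict_outcome_value)
```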

It’s impossible to extrapolate every action infinitely far into the future, so it can never know with certainty what will result from those actions.

I believe it’s not as hard as it looks: the AI should undergo some kind of reinforcement learning across varied contexts, and with a suitable ability to extrapolate goals into the future, it would never misinterpret a goal in a ludicrous way like we often imagine. Something like the toy sketch below is what I have in mind.
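As a rough illustration of the kind of reward shaping I mean (all names and numbers here are made up, not any established technique): extrapolate the agent's current trajectory a few steps ahead and penalize it when that extrapolation drifts far from the inferred goal, so literal-but-ludicrous interpretations score badly.

```python
import numpy as np

def shaped_reward(state: np.ndarray,
                  next_state: np.ndarray,
                  inferred_goal: np.ndarray,
                  horizon: int = 10) -> float:
    # Base reward: progress toward the goal the AI believes we have.
    task_reward = -float(np.linalg.norm(next_state - inferred_goal))

    # Crudely extrapolate the current motion `horizon` steps ahead and
    # penalize trajectories that shoot past or away from the goal.
    velocity = next_state - state
    extrapolated = next_state + horizon * velocity
    divergence_penalty = float(np.linalg.norm(extrapolated - inferred_goal))

    return task_reward - 0.1 * divergence_penalty
```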

But, like a human, there will always be mistakes.
