
magistrate101 t1_j48ilzr wrote

This completely ignores the ways in which neural networks end up with human biases and bigotry trained into them through interactions with actual humans. And since they're intended to mimic human behavior and results, there's no way to give them safeguards that are an innate part of the system's logic. Any safeguards built into the AI's logic would be, by your own definition, "human moral bloatware". So your post doesn't even make sense.

7