
ghostfuckbuddy t1_j4am9mc wrote

It is impossible for GPT systems to not have "moral bloatware", a.k.a. a moral value system. If naively trained on unfiltered data, a model will adopt whatever moral bloatware is embedded in that data, which could be almost anything. If you want an AI that aligns with humanist values, you need either a curated dataset or reinforcement learning to steer it in that direction. But however it is trained, it will always have biases; it's just a matter of which biases you want.
