Submitted by bikeskata t3_10rqe34 in MachineLearning
singularineet t1_j710h53 wrote
No matter how hard they try to whack-a-mole them, the biases of the model will come through, particularly by omission. Example? It's super bad about minimizing Jewish history, or saying awful things about the Holocaust like that it was harmful to both the victims and the perpetrators. It's basically like working with a raging racist who's trying to follow a list of very specifically worded instructions from a woke but low functioning autistic HR dept.