yaosio t1_j7lnkh9 wrote
Reply to comment by st8ic in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
If you look at what you.com does, it cites the claims its bot makes by linking to the pages the data came from, but only sometimes. When it doesn't cite something, you can be fairly sure it's just making things up. In the supposed Bing leak it was doing the same thing: citing its sources.
If they can force it to always provide a source, and to stay quiet when it can't find one, that could help. However, there's still the problem that the model doesn't know what's true and what's false. Just because it can cite a source doesn't mean the source is correct. This isn't something the model can learn by being told, because learning by being told assumes its training data is correct, which can't be assumed. A researcher could tell the model "all cats are ugly," which is obviously not true, but the model would keep repeating that all cats are ugly because that's what it was taught. Models will need a way to determine on their own what is true and what isn't, and to explain their reasoning.
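To make the gating idea concrete, here's a rough sketch of what "only answer when you can cite something, otherwise refuse" might look like. This is not how Bing or you.com actually work; `retrieve_sources` and `supports` are placeholder stand-ins for a real search index and a real entailment/verification model.

```python
# Minimal sketch (hypothetical, not any real product's pipeline): gate answers
# on having a retrievable source, and refuse when none supports the claim.
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    snippet: str

def retrieve_sources(query: str) -> list[Source]:
    # Placeholder: a real system would query a search index here.
    corpus = {
        "capital of france": Source("https://example.org/france",
                                    "Paris is the capital of France."),
    }
    hit = corpus.get(query.lower())
    return [hit] if hit else []

def supports(claim: str, source: Source) -> bool:
    # Placeholder support check; a real system would use a verifier model,
    # not simple substring matching.
    return claim.lower() in source.snippet.lower()

def answer_with_citation(query: str, draft_answer: str) -> str:
    sources = [s for s in retrieve_sources(query) if supports(draft_answer, s)]
    if not sources:
        # No supporting source found: decline instead of guessing.
        return "I can't find a source for that, so I won't answer."
    cites = ", ".join(s.url for s in sources)
    return f"{draft_answer} (sources: {cites})"

print(answer_with_citation("capital of France", "Paris is the capital of France"))
print(answer_with_citation("are all cats ugly?", "All cats are ugly"))
```

Even with that gate in place, the second half of the problem remains: the check only shows that *some* page says it, not that the page is right.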