[D] Will prompting the LLM to review its own answer help reduce the chances of hallucinations? I tested a couple of tricky questions and it seems it might work.
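For reference, the idea can be sketched as a two-pass prompt chain: ask for an answer, then re-prompt the model to review its own draft. This is a minimal sketch; `complete` is a hypothetical stub standing in for whatever LLM API you use, not a real library call.

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for an actual LLM API call."""
    return "stub answer"

def answer_with_self_review(question: str) -> str:
    # First pass: get a draft answer from the model.
    draft = complete(f"Question: {question}\nAnswer concisely.")
    # Second pass: ask the model to check its own draft for errors.
    review_prompt = (
        f"Question: {question}\n"
        f"Proposed answer: {draft}\n"
        "Review the proposed answer for factual errors or unsupported claims. "
        "If it is correct, repeat it verbatim; otherwise output a corrected answer."
    )
    return complete(review_prompt)
```

Whether the second pass actually catches hallucinations depends on the model; in my tests it helped on some tricky questions, but it is not a guarantee.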
Submitted by tamilupk t3_123b4f0 in MachineLearning