/f/MachineLearning
[D] Will prompting the LLM to review its own answer help reduce the chance of hallucinations? I tested a couple of tricky questions and it seems it might work.
Submitted by tamilupk t3_123b4f0
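The idea in the post, generate an answer and then ask the model to critique its own draft in a second pass, can be sketched as a short two-call loop. Below is a minimal sketch assuming the OpenAI Python client; the model name, the prompts, and the `ask()` helper are illustrative choices, not the author's actual setup.

```python
# Minimal self-review sketch: draft an answer, then have the model
# check its own draft for unsupported or hallucinated claims.
# Assumes the OpenAI Python SDK (v1+); model and prompts are illustrative.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Single-turn chat completion helper (hypothetical convenience wrapper)."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do here
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "Which year did the first human land on Mars?"  # a tricky question

# First pass: the model drafts an answer.
draft = ask(question)

# Second pass: the model reviews its own draft before we trust it.
review = ask(
    "Review the following answer for factual errors or hallucinated claims. "
    "If anything is unsupported, give a corrected answer; otherwise repeat "
    f"the answer.\n\nQuestion: {question}\nDraft answer: {draft}"
)

print(review)
```

Running the review as a separate call, rather than appending "check your work" to the original prompt, gives the model a fresh context in which the draft is just text to be verified, which is presumably why the OP's tricky questions fared better on the second pass.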