PriestOfFern t1_jc6x37m wrote
Reply to comment by v_krishna in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Take it from someone who spent a long time working on a davinci support bot: it’s not that easy. No matter how much time you spend working on the prompt, GPT will still find some way to randomly hallucinate something.
Sure, prompting might get rid of the majority of hallucinations, but it won’t bring them down to a reasonable level. Fine-tuning might fix this (citation needed), but I haven’t played around with it enough to say so comfortably.
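For anyone who does want to try fine-tuning: at the time, OpenAI's legacy fine-tuning endpoint for davinci-class base models took training data as JSONL prompt/completion pairs. Here's a minimal sketch of building that file; the support-bot Q/A pairs are made up for illustration.

```python
import json

# Hypothetical support-bot training examples in the prompt/completion
# shape the legacy OpenAI fine-tuning endpoint expected. A trailing
# separator on the prompt and a stop-friendly completion ending were
# the commonly recommended conventions.
examples = [
    {"prompt": "Q: How do I reset my password?\n\nA:",
     "completion": " Go to Settings > Account > Reset Password.\n"},
    {"prompt": "Q: Where can I see my invoices?\n\nA:",
     "completion": " Open Billing > Invoices in your dashboard.\n"},
]

def to_jsonl(records):
    """Serialize training records as JSONL: one JSON object per line."""
    return "\n".join(json.dumps(r) for r in records)

jsonl_data = to_jsonl(examples)
print(jsonl_data)
```

You'd then upload the file and start a job with the old CLI (roughly `openai api fine_tunes.create -t data.jsonl -m davinci`). Whether that actually curbs hallucination better than prompting is exactly the open question above.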