Submitted by Cool_Abbreviations_9 t3_123b66w in MachineLearning
tt54l32v t1_jdyc1h3 wrote
Reply to comment by WarAndGeese in [D] GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
So the second app might fare better leaning towards a search engine rather than an LLM, but some LLM component would ultimately be better, since it allows looser matches than an exact set of searched words.
Seems like the faster and more seamless this gets, the closer we get to AGI. To create and think, a model almost needs to hallucinate and then check itself for accuracy. Is any of this already taking place in any models?
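The "hallucinate, then check against search" idea can be sketched roughly like this. Everything here is hypothetical: `generate_answer` stands in for an LLM call and `search_evidence` for a search-engine lookup, and the "fuzzy" check is just crude word overlap, not what any real system does.

```python
# Sketch of a generate-then-verify loop: draft an answer (which may be
# hallucinated), then check it against retrieved evidence.
# generate_answer and search_evidence are hypothetical stand-ins.

def generate_answer(question: str) -> str:
    # Stand-in for an LLM call; in reality this may hallucinate.
    return "The Eiffel Tower is in Paris"

def search_evidence(claim: str) -> list[str]:
    # Stand-in for a search-engine query returning text snippets.
    return ["Eiffel Tower, wrought-iron lattice tower in Paris, France"]

def supported(claim: str, snippets: list[str]) -> bool:
    # Crude "less precise match": enough of the claim's content words
    # must appear in some snippet, instead of requiring an exact phrase.
    words = {w.lower().strip(".,") for w in claim.split() if len(w) > 3}
    for s in snippets:
        snippet_words = {w.lower().strip(".,") for w in s.split()}
        if len(words & snippet_words) >= max(1, len(words) // 2):
            return True
    return False

def answer_with_check(question: str) -> tuple[str, bool]:
    draft = generate_answer(question)
    return draft, supported(draft, search_evidence(draft))
```

The point of the loose word-overlap check is exactly the trade-off above: a plain search engine wants precise query terms, while an LLM-style matcher can accept paraphrases of the same claim.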