What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] Submitted by Destiny_Knight t3_118svv7 on February 22, 2023 at 8:27 AM in singularity 194 comments 493
Lawjarp2 t1_j9liaa5 wrote on February 22, 2023 at 9:04 PM Reply to comment by VeganPizzaPie in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight No. Once an LLM gets a keyword, a lot of related content comes up in its probability distribution. You can also work backwards from the answer choices instead of reasoning forwards. That makes multiple choice much easier for an LLM to answer if it's trained for this exact scenario.
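For context on the "work backwards from the answer choices" point: multiple-choice benchmarks are commonly scored by comparing the likelihood the model assigns to each candidate answer, rather than having it generate a free-form answer. The sketch below shows that option-scoring idea with Hugging Face transformers; it is not the evaluation code from the linked post, and the model name ("gpt2"), the example question, and the prompt format are placeholders. It also assumes the tokenization of the question is a prefix of the tokenization of question + option, which holds for typical prompts but is not guaranteed.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder small causal LM; swap in whatever model you want to probe.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def option_logprob(question: str, option: str) -> float:
    """Sum of token log-probs the model assigns to `option` following `question`."""
    prompt_ids = tokenizer(question, return_tensors="pt").input_ids
    full_ids = tokenizer(question + " " + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probs at position i are the model's prediction for token i+1.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Only count the tokens belonging to the option, not the question
    # (assumes the question's tokens are a prefix of the full sequence).
    option_start = prompt_ids.shape[1] - 1
    return token_lp[0, option_start:].sum().item()

# Hypothetical example question, not from the benchmark in the post.
question = "Q: Which gas do plants absorb for photosynthesis? A:"
options = ["carbon dioxide", "oxygen", "nitrogen", "helium"]
scores = {opt: option_logprob(question, opt) for opt in options}
print(max(scores, key=scores.get))  # the highest-likelihood option is the model's pick
```

Scoring each option this way is exactly why a model trained on this answer format can look stronger on multiple choice than on open-ended questions: it never has to produce the answer, only to prefer it over three distractors.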