Dapper_Cherry1025 wrote
Reply to comment by KerfuffleV2 in [P] Introducing Vicuna: An open-source language model based on LLaMA 13B by Business-Lead2679
Well, that's probably because I specifically asked it to use an internal monologue. What I'm trying to say is that each part of its response seems to flow logically in a way that I find easy to understand. Heck, when I refined my prompt for GPT-3.5, I was even able to get it to admit that it couldn't come up with a solution when I pushed for a more complicated example.
I also find it very interesting that when ChatGPT starts a sentence with something like "Yes, because...", I know right away that the answer is probably wrong: once it has committed to "Yes", it will keep trying to justify that yes even when it shouldn't. However, if you can get it to investigate a problem as shown in the example, it will actually try different things before arriving at a solution.
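For anyone curious, here's roughly the kind of prompt I mean. This is just a minimal sketch assuming the OpenAI Python client; the prompt wording and the example question are illustrative, not my exact prompt:

```python
# Sketch of an "internal monologue" prompt (assumes the openai Python
# package with OPENAI_API_KEY set in the environment; the prompt text
# here is illustrative, not the exact wording I used).
import openai

SYSTEM_PROMPT = (
    "Before giving a final answer, think through the problem step by step "
    "in a section labeled 'Internal monologue:'. Try more than one approach, "
    "and if none of them work, say you could not find a solution instead of "
    "guessing. Only after the monologue, write 'Answer:' followed by your "
    "conclusion."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        # Hypothetical example question, chosen because it tempts a quick "Yes"
        {"role": "user", "content": "Is 3599 a prime number?"},
    ],
    temperature=0,  # keeps outputs stable so monologues are easier to compare
)
print(response.choices[0].message.content)
```

The point of forcing the monologue before the "Answer:" line is exactly the problem above: it stops the model from committing to "Yes" or "No" first and then rationalizing it afterwards.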