Submitted by Business-Lead2679 t3_1271po7 in MachineLearning
Dapper_Cherry1025 t1_jecz0th wrote
Something about these distillations feels fundamentally different from interacting with the larger models. The responses feel a lot more... I don't really know? Artificial? It's a weird way to phrase it, but I definitely get a sense that this method is missing something fundamental, which isn't to say it couldn't be useful in other cases. To me it lacks some "spark" of intelligence that you can sorta see with GPT-3.5 and definitely see with GPT-4.
That being said, more models to compare and contrast against will always be welcome! And Vicuna does seem able to produce text that is quite amazing for its size! Hell, considering where we were two years ago compared to today, it'll be really exciting to see how far these approaches can go over the next couple of months and years.
EvenAtTheDoors t1_jed5rft wrote
Yeah, I know what you’re talking about. The lower-parameter models output text that doesn’t truly synthesize new information in surprising ways. It’s often shallow and comes off as artificial. Even though it knows a lot, it seems like a sophisticated search engine rather than an actual language model.
KerfuffleV2 t1_jefkhxs wrote
> Something about these distillations feels fundamentally different than when interacting with the larger models.
It may not have anything to do with size. ChatGPT just adds a lot of comfort phrases to its responses instead of simply answering: "Hmm, this is an interesting challenge", "Let's see", etc. Some of that may come from the system prompt, and some of it may come from training aimed specifically at producing more natural-sounding responses.
All the "Hmm" and "interesting challenge" stuff that makes it sound like a person doesn't actually add any information relevant to answering the query, though. (Also, you may be paying for those extraneous tokens.)
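To put a rough number on that last point, here's a quick back-of-the-envelope sketch (my own illustration, not from the thread). It uses the common ~4-characters-per-token rule of thumb for English text rather than a real tokenizer, so treat the figures as ballpark estimates only:

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

# Example comfort phrases of the kind quoted above.
filler_phrases = [
    "Hmm, this is an interesting challenge.",
    "Let's see.",
    "Great question!",
]

# Total estimated tokens spent on filler rather than the actual answer.
filler_tokens = sum(estimate_tokens(p) for p in filler_phrases)
print(f"Estimated extra tokens from filler: {filler_tokens}")
```

A few phrases like these only cost a handful of tokens per response, but on metered API pricing they add up across thousands of calls.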
Dapper_Cherry1025 t1_jefywqj wrote
Well, that's probably because I specifically asked it to use an internal monologue. What I'm trying to say is that each part of its response seems to flow in a logical way that I found easy to understand. Heck, when I refined my prompt for 3.5, I was able to get it to admit that it couldn't come up with a solution when I tried a more complicated example.
I also find it very interesting that when ChatGPT starts a sentence with something like "Yes, because..." I know right away that the answer is probably incorrect, because after it replies "Yes" it will try to justify that yes even if it is wrong. However, if you can get it to investigate a problem, as shown in the example, it can actually try different things before arriving at a solution.
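The prompting pattern described above can be sketched as follows. This is a hypothetical illustration with wording of my own invention (the commenter's actual prompt isn't shown in the thread); the message structure follows the standard OpenAI chat-completions format of role/content pairs:

```python
def build_investigate_prompt(question: str) -> list[dict]:
    """Build a chat message list that asks the model to investigate a problem
    in an internal monologue before committing to a yes/no answer, instead of
    answering first and justifying afterwards."""
    system = (
        "Before giving a final answer, think through the problem step by step "
        "in an internal monologue. Try at least one approach, check whether "
        "it works, and only then state your conclusion. If no approach works, "
        "say that you could not find a solution."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# Example: a question the model might otherwise open with "Yes, because..."
messages = build_investigate_prompt("Is 9797 a prime number?")
```

The idea is simply to move the model's commitment to an answer after its reasoning, so a wrong "Yes" doesn't get locked in by the first token.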