WarmSignificance1 t1_jdv1usr wrote
Reply to comment by LifeScientist123 in [D] GPT4 and coding problems by enryu42
Part of intelligence is the ability to learn in an efficient manner. For example, an expert programmer doesn't need to see hundreds of millions of examples to learn a new programming language. They can read the docs, play around with it a bit, and then apply the experience and mental models they've built up over time to the new language.
LLMs fall over in this same situation.
LifeScientist123 t1_jdvmkkx wrote
>Part of intelligence is the ability to learn in an efficient manner.
Agree to disagree here.
A young deer (a fawn) learns to walk within 15 minutes of birth. Human babies take 8-12 months on average. Are humans dumber than deer? Or maybe human babies are dumber than fawns?
Intelligence is extremely poorly defined; if you look at the scientific literature, it's a hot mess. I would argue that intelligence isn't so much about efficiency as about two things:
- Absolute performance on complex tasks
AND
- Generalizability to novel situations
If you look at LLMs, they perform pretty well on both these axes.
- GPT-4 shows human-level performance in 20+ programming languages AND 20+ human languages, on top of being human-level or superhuman on some legal exams, medical exams, AP chemistry, biology, physics, etc. I don't know many humans who can do all of this.
- GPT-4 is also a one-shot/few-shot learner on many tasks.
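For what it's worth, here's a rough sketch of what few-shot prompting looks like in practice. The task, examples, and prompt wording are made up purely for illustration, and it assumes the pre-1.0 `openai` Python package with an API key set in the environment:

```python
import openai  # pip install "openai<1.0"; assumes OPENAI_API_KEY is set in the environment

# A few worked examples ("shots") of a made-up format-conversion task,
# followed by a new input in the same format.
few_shot_messages = [
    {"role": "system", "content": "Convert each product code to the legacy format shown in the examples."},
    {"role": "user", "content": "AB-1234"},
    {"role": "assistant", "content": "1234/AB"},
    {"role": "user", "content": "XY-0042"},
    {"role": "assistant", "content": "0042/XY"},
    {"role": "user", "content": "QR-7777"},  # new input: the model has to infer the pattern from the two examples
]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=few_shot_messages,
    temperature=0,
)

print(response["choices"][0]["message"]["content"])  # expected: 7777/QR
```

No gradient updates, no fine-tuning; the "learning" happens entirely from the two examples in the prompt. That's the sense in which people call it a few-shot learner.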