LifeScientist123
LifeScientist123 t1_jdvgzkx wrote
Reply to [D] Will prompting the LLM to review it's own answer be any helpful to reduce chances of hallucinations? I tested couple of tricky questions and it seems it might work. by tamilupk
This doesn't even work on humans. Most people, when told they are wrong, will just double down on their mistaken beliefs.
LifeScientist123 t1_jdtd8yn wrote
Reply to [D] GPT4 and coding problems by enryu42
- All this shows is that GPT-4 can't solve some coding problems. Which developer can confidently say they can solve any coding problem in one shot? Does this mean developers/humans don't have AGI?

- I've used ChatGPT (GPT-3.5) to optimize code I had already written, and it came up with several optimizations. I'm 100% sure my code was not part of ChatGPT's training data, and yet it performed perfectly well on a new coding problem. It's possible the training data included something similar to what I gave ChatGPT, but that just means we need to provide more training data, and a future version will solve the problems where it previously failed.

- Isn't this how humans learn? We encounter problems where we don't know the solution, then work at them for a while until we figure out a way to solve them that wasn't obvious at first. Writing off the abilities of GPT-4 based on one failed coding test seems premature.
LifeScientist123 t1_jdiis55 wrote
Reply to comment by nicku_a in [P] Reinforcement learning evolutionary hyperparameter optimization - 10x speed up by nicku_a
I'm also new to this, so forgive me if this is a dumb question. My understanding was that RL is superior to evolutionary algorithms because in evolutionary algorithms "mutation" is random, so you end up evaluating a lot of dud "offspring". In RL algorithms, e.g. MCTS, you also search the tree randomly, but you iteratively pick the best set of actions without evaluating many dud options. Am I wrong? Somehow mixing RL with evolutionary algorithms seems like a step backwards.
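To make concrete what I mean by "random mutation", here is a toy sketch of an evolutionary hyperparameter search. This is just an illustration, not the algorithm from the linked project; the fitness function, hyperparameter names, and ranges are made up for the example:

```python
import random

def fitness(params):
    # Placeholder fitness: in reality this would train and evaluate an RL agent.
    # The "optimum" here is invented purely for illustration.
    return -(params["lr"] - 1e-3) ** 2 - (params["gamma"] - 0.99) ** 2

def mutate(params):
    # Random perturbation of one hyperparameter -- this is the step that can
    # produce "dud" offspring, since nothing guides the direction of the change.
    child = dict(params)
    key = random.choice(list(child))
    child[key] *= random.uniform(0.8, 1.2)
    return child

# Initial random population of hyperparameter sets.
population = [
    {"lr": random.uniform(1e-4, 1e-2), "gamma": random.uniform(0.9, 0.999)}
    for _ in range(8)
]

for generation in range(20):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:4]                                   # keep the fittest half
    children = [mutate(random.choice(parents)) for _ in range(4)]
    population = parents + children                        # next generation

print(max(population, key=fitness))
```

Every child still has to be evaluated before you know whether the mutation helped, which is the cost I was asking about compared to something like MCTS that steers its search toward promising branches.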
LifeScientist123 t1_jdvmkkx wrote
Reply to comment by WarmSignificance1 in [D] GPT4 and coding problems by enryu42
>Part of intelligence is the ability to learn in an efficient manner.
Agree to disagree here.
A young deer (a fawn) learns to walk about 15 minutes after birth. Human babies take 8-12 months on average. Are humans dumber than deer? Or maybe human babies are dumber than fawns?
Intelligence is extremely poorly defined. If you look at the scientific literature, it's a hot mess. I would argue that intelligence isn't so much about efficiency as it is about two things: the breadth of tasks you can perform at a high level,

AND

how quickly you can pick up new tasks from only a few examples.

If you look at LLMs, they perform pretty well on both of these axes.
GPT-4 has human-level performance in 20+ coding languages AND 20+ human languages, on top of being human-level or better on some legal exams, medical exams, AP Chemistry, Biology, Physics, etc. I don't know many humans who can do all of this.

GPT-4 is also a one-shot/few-shot learner on many tasks.