LifeScientist123 t1_jdvmkkx wrote

>Part of intelligence is the ability to learn in an efficient manner.

Agree to disagree here.

A young deer (a fawn) learns to walk within about 15 minutes of birth. Human babies take 8-12 months on average. Are humans dumber than deer? Or maybe human babies are dumber than fawns?

Intelligence is extremely poorly defined; the scientific literature on it is a hot mess. I would argue that intelligence isn't so much about efficiency as about two things:

  1. Absolute performance on complex tasks

AND

  2. Generalizability to novel situations

If you look at LLMs, they perform pretty well on both these axes.

  1. GPT-4 performs at a human level in 20+ programming languages AND 20+ human languages, on top of being human-level or superhuman on some legal exams, medical exams, AP chemistry, biology, physics, etc. I don't know many humans who can do all of that.

  2. GPT-4 is also a one-shot/few-shot learner on many tasks (see the sketch after this list).
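To be concrete about what "few-shot" means here: the worked examples live entirely in the prompt, with no retraining or gradient updates. A minimal sketch in Python (the task, prompt text, and labels are mine, purely for illustration; the actual API call is omitted):

```python
# Minimal illustration of few-shot prompting: the model sees a handful of
# worked examples inline and is expected to continue the pattern, with no
# fine-tuning involved.

few_shot_prompt = """Classify the sentiment of each review.

Review: "The battery died after two days."
Sentiment: negative

Review: "Setup took thirty seconds and it just works."
Sentiment: positive

Review: "Arrived on time, does what it says."
Sentiment:"""

# Sent as-is to an LLM (API call omitted), the expected completion is
# "positive" -- generalizing from two examples is the few-shot behavior
# being claimed above.
print(few_shot_prompt)
```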


LifeScientist123 t1_jdtd8yn wrote

  1. All this shows is that GPT-4 can't solve some coding problems. Which developer can confidently say they can solve any coding problem in one shot? Does that mean developers/humans don't have AGI?

  2. I've used ChatGPT (GPT-3.5) to optimize code I had already written, and it came up with several optimizations. I'm 100% sure my code was not part of ChatGPT's training data, and yet it performed perfectly well on a new coding problem. It's possible the training data included something similar to what I gave it, but that just means we have to provide more training data, and a future version will solve the problems where it previously failed.

  3. Isn't this how humans learn? We encounter problems where we don't know the solution, then we work at them for a while until we figure out some way to solve them that wasn't immediately obvious earlier. Writing off the abilities of GPT-4 based on one failed coding test seems premature.


LifeScientist123 t1_jdiis55 wrote

I'm also new to this, so forgive me if this is a dumb question. My understanding was that RL is superior to evolutionary algorithms because in evolutionary algorithms "mutation" is random, so you evaluate a lot of dud "offspring". In RL algorithms, e.g. MCTS, the tree search also has a random element, but you're iteratively picking the best set of actions rather than evaluating many dud options. Am I wrong? Somehow mixing RL with evolutionary algorithms seems like a step backwards.
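To make the contrast concrete, here's a toy sketch of my own (not from any particular paper or library; the data structures are made up) of the two selection styles as I understand them: an evolutionary step mutates blindly and filters afterwards, while MCTS-style selection uses a UCB1 score to steer rollouts toward promising branches:

```python
import math
import random

def evolutionary_step(population, fitness, mutate):
    """Evolutionary step: offspring are blind random perturbations of
    random parents, so plenty of dud candidates get generated and must
    all be evaluated before selection can discard them."""
    offspring = [mutate(random.choice(population)) for _ in population]
    scored = sorted(population + offspring, key=fitness, reverse=True)
    return scored[:len(population)]  # keep the fittest half

def ucb_select(children, total_visits, c=1.4):
    """MCTS-style selection (UCB1): prefer branches with a high average
    value, but pad the score of rarely visited ones, so exploration is
    directed by accumulated statistics rather than uniform-random."""
    def ucb(child):
        if child["visits"] == 0:
            return float("inf")  # try every branch at least once
        exploit = child["value"] / child["visits"]
        explore = c * math.sqrt(math.log(total_visits) / child["visits"])
        return exploit + explore
    return max(children, key=ucb)
```

If that reading is right, the "duds" in MCTS get visited less and less as the statistics accumulate, whereas random mutation keeps producing them at the same rate every generation.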
