Submitted by Gortanian2 t3_123zgc1 in singularity
qrayons t1_je06jmp wrote
I read Chollet's article since I have a lot of respect for him and read his book on deep learning in Python several years ago.
His main argument seems to be that intelligence is dependent on its environment. That makes sense, but the environment for an AI is already way different than it is for humans. If I lived 80 years and read a book every day from the day I was born to the day I died, I'd have read fewer than 30,000 books. Compare that to GPT models, which are able to read millions of books and even more text. And now that they're becoming multimodal, they'll be able to see more than we'll ever see in our lifetimes. I would say that's a drastically different environment, and one that could lead to an explosion in intelligence.
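For concreteness, here's a back-of-envelope sketch of that comparison in Python. The human figure follows the one-book-a-day assumption above; the model's corpus size is a purely illustrative, assumed book-equivalent count (real training corpora are measured in tokens, not books).

```python
# Back-of-envelope comparison from the comment above (illustrative numbers only).
books_per_day = 1
years = 80
human_lifetime_books = books_per_day * 365 * years   # ~29,200 books, i.e. under 30,000

# Assumed, illustrative figure for a large language model's training corpus,
# expressed as book-equivalents; not a measured value.
model_training_books = 5_000_000

print(f"Human (1 book/day for {years} years): {human_lifetime_books:,} books")
print(f"Model (assumed corpus):              {model_training_books:,} book-equivalents")
print(f"Ratio: roughly {model_training_books // human_lifetime_books}x more text")
```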
I'll grant that eventually even a self-improving AI could hit a limit, which would make the exponential curve look more sigmoidal (and even Chollet mentioned near the end that improvement is often sigmoidal). However, we could still end up riding the steep part of the sigmoidal curve until our knowledge has increased 1,000-fold. I'd still call that a singularity event.
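To illustrate the distinction being made, here's a minimal sketch comparing unbounded exponential growth with logistic (sigmoidal) growth. All parameters (growth rate, ceiling, midpoint) are illustrative assumptions; the point is just that the two curves look nearly identical on the steep early part and only diverge as the sigmoid approaches its ceiling.

```python
import math

def exponential(t, rate=0.5):
    """Unbounded exponential growth."""
    return math.exp(rate * t)

def logistic(t, rate=0.5, capacity=1000.0, midpoint=14.0):
    """Sigmoidal (logistic) growth: looks exponential early, then saturates at 'capacity'."""
    return capacity / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(0, 29, 4):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):8.1f}")

# Early on the two curves are hard to tell apart; the logistic curve only bends
# as it nears its limit. That early, near-exponential stretch is the "steep part
# of the sigmoid" the comment describes riding before improvement levels off.
```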
Gortanian2 OP t1_je0ww5u wrote
You make an excellent point. Even a basic AGI would be able to absorb an insane amount of knowledge from its environment in a matter of weeks. Thank you for your comment; it has altered my perspective.