Submitted by Tea_Pearce t3_10aq9id in MachineLearning
nohat t1_j46fofr wrote
That’s literally just the original bitter lesson.
rafgro t1_j47678z wrote
See, it's not bitter lesson 1.0 when you replace "leverage computation" with "leverage large models that require hundreds of GPUs and the entire internet". Sutton definitely did not write in his original essay that every bitter cycle ends with:
>breakthrough progress eventually arrives by an approach based on scaling computation
lookatmetype t1_j47o3hu wrote
yeah i'm lost because i literally don't understand the distinction
Smallpaul t1_j4a15b8 wrote
The first bitter lesson was "people who focused on 'more domain-specific algorithms' lost out to the people who just waited for massive compute power to become available." I think the second bitter lesson is intended to be robotics-specific, and it is "people who focus on 'robotics-specific algorithms' will lose out to the people who leverage large foundation models from non-robotics fields, like large language models."