
Queue_Bit t1_jdzlxht wrote

I mean that we've used about 1/10th of the high-quality training data.

Which means that even with zero improvement in algorithms or methodology, and assuming improvement scales linearly with data and no new data is created, LLMs could still get about 10x better just from the remaining data. And who knows what that looks like.
