No_Ninja3309_NoNoYes t1_j4z7eyb wrote
Greed is good, right? It turns out OpenAI was afraid of Google and the other big players. They were bad at waiting and hoped the publicity would pay off, so they went all in. Anyone who has played poker knows you only go all in when you are holding aces and have no better idea what to do with them, or when you are bluffing. I think they are bluffing.
There seems to be an obsession with the parameter count matching the brain's. But the amount and type of data, and the actual architecture and algorithms, matter more. IMO, for the amount of data they used they have too many parameters. They did the equivalent of fitting linear data with a cubic function: in the best case the extra parameters end up close to zero, in the worst case you are screwed. This is not only wasteful during training and bad for the environment because of the carbon dioxide emissions, it is also awful at inference time. And we still have to pay for those extra parameters.
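A toy sketch of the "cubic fit to linear data" point above (my own illustration, nothing to do with OpenAI's actual models): fit genuinely linear data with a degree-3 polynomial and the higher-order coefficients come out near zero, so those parameters buy you nothing but still have to be stored and evaluated.

```python
import numpy as np

# Underlying relation is linear: y = 2x + 1, plus a little noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

# Deliberately overparameterized model: a cubic.
coeffs = np.polyfit(x, y, deg=3)  # [c3, c2, c1, c0]
print(coeffs)  # cubic and quadratic terms come out near zero
```

The c3 and c2 coefficients are near zero, while c1 and c0 land near the true 2 and 1 — the extra capacity is dead weight at inference time.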
Why would OpenAI ever achieve AGI this way? They are using a mix of unsupervised, supervised, and reinforcement learning. Unsupervised learning needs a lot of data; it parses it and tries to find patterns, but there isn't enough usable data. Supervised learning has an even bigger problem: it needs labels, so you have to hand it the answers. Reinforcement learning needs some kind of score, like in games, which is also limited. If they want AGI, they would have to look into semi-supervised, self-supervised, and meta-learning. The AI has to be able to learn on its own, preferably by going out and finding its own data.
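The core trick behind self-supervision, in a minimal sketch (my toy example, not any lab's pipeline): the labels are manufactured from the raw data itself, so no human annotation is needed. Next-word prediction is the classic case.

```python
# Raw, unlabeled text.
text = "the cat sat on the mat"
tokens = text.split()

# Self-supervision: each (context, target) training pair is derived
# from the text itself -- the "label" is just the next token.
pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
print(pairs[0])  # (['the'], 'cat')
```

Every extra document yields more training pairs for free, which is why this scales where hand-labeling cannot.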
And of course they hired Kenyans to do their dirty work, which shows you what they actually care about. Greed is good, apparently.
MrEloi t1_j4zdx7j wrote
I think that you underestimate these new transformer models.