
Thatingles t1_j9r6dxz wrote

The most interesting thing about LLMs is how good they are given quite a simple idea. With enough data and some rules, you get something remarkably 'smart'. The implication is that what you need is data + rules + compute, but not an absurd amount of compute. The argument against AGI was that we would need a full simulation of the human brain (which is absurdly complex) to hit the goal. LLMs have undermined that view.

I'm not saying 'it's done', but I do think the SOTA has shown that really amazing results can be achieved by building large data sets, applying some fairly straightforward rules, and throwing sufficient computing power at training.
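To put some flesh on 'data + straightforward rules': the core rule is just next-token prediction. Here's a toy character-level bigram model in Python (my own illustration, not anyone's actual training code; a real LLM swaps the count table for a transformer and scales the data and compute enormously, but the objective is the same):

```python
# Toy sketch of the "simple rule" behind LLMs: predict the next token.
# Here the "model" is just a table of follower counts learned from data.
from collections import defaultdict
import random

text = "the cat sat on the mat. the dog sat on the log."

# "Training": count how often each character follows each other character.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1

def sample_next(prev: str) -> str:
    """Sample the next character in proportion to observed frequency."""
    followers = counts[prev]
    chars, weights = zip(*followers.items())
    return random.choices(chars, weights=weights)[0]

# "Inference": generate by repeatedly predicting the next character.
out = "t"
for _ in range(40):
    out += sample_next(out[-1])
print(out)
```

Everything that makes GPT-class models impressive comes from scaling that same predict-the-next-thing objective, not from a fundamentally more exotic rule.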

Clearly visual data isn't a problem. Haptic data is still lacking. Aural isn't a problem. Nasal (chemical sensory) data is still lacking. Magnetic and gravimetric sensors are already far in advance of human ability, though the data sets might not be coherent enough for training.

What's missing is sequential reasoning and internal fact-checking: the sort of feedback loops we take for granted (we don't try to make breakfast if we know we don't have a bowl to make it in; we don't try to buy a car if we know we haven't learnt to drive yet). But these are not mysteries; they are defined problems.
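The kind of loop I mean, as a toy sketch (the facts and plan names are made up for illustration): check a plan's preconditions against what you know before committing, instead of blindly generating the next step.

```python
# Toy sketch of an internal fact-check loop: a plan only runs if the
# facts it depends on hold. Fact and plan names are purely illustrative.

known_facts = {"has_bowl": False, "has_cereal": True, "can_drive": False}

# Each plan declares the preconditions it depends on.
plans = {
    "make breakfast": ["has_bowl", "has_cereal"],
    "buy a car": ["can_drive"],
}

def attempt(plan: str) -> None:
    # The feedback loop: verify preconditions before acting.
    missing = [f for f in plans[plan] if not known_facts.get(f)]
    if missing:
        print(f"skip '{plan}': missing {missing}")
    else:
        print(f"do '{plan}'")

for p in plans:
    attempt(p)
```

Trivial as code, but wiring that kind of self-check into a generative model is exactly the defined-but-unsolved engineering problem.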

AGI will happen before 2030. It won't be 'human', but it will be something we recognise as our equivalent in terms of competence. Fuck knows how we'll cope with that.
