Snufflepuffster t1_ir22k9m wrote

Yeah, eventually the emergent properties should be mostly contained in the self-supervised training signal. So it's a question of how the model learns, not necessarily its construction. As the bot learns more it can start to identify priority tasks to infer about, and then this process just continues. The thing we're taking for granted is the environment that supplies all the stimulus from which self-awareness could be learned.

2

Snufflepuffster t1_ir1k2dt wrote

I have always considered that something approaching sentience could be made by having a network operating on top of smaller task-specific nets. Operating on the activations of all these smaller nets could give the 'sentient' net a sense of the world around it, because it has access to their information. It can modulate each of the smaller subordinate nets on the fly based on previous experiences to make a decision. It can also identify the most pressing task to decide about in its surrounding environment. That's what LeCun is suggesting in this scholarly op-ed; it's not a new idea, more a question of computing power.
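The controller-over-subnets idea above can be sketched in a few lines. This is a minimal NumPy toy, not LeCun's actual proposal: the number of task nets, their dimensions, and the sigmoid gating are all made-up illustration choices. The top net reads the concatenated activations of the task nets, emits one gain per subnet (the "modulation"), and picks the highest-gain subnet as the most pressing task.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_net(x, W):
    """A tiny task-specific net: one linear layer + ReLU."""
    return np.maximum(W @ x, 0.0)

# Hypothetical setup: three task nets, each mapping a 4-dim input
# to an 8-dim activation vector.
n_tasks, in_dim, hid_dim = 3, 4, 8
task_weights = [rng.standard_normal((hid_dim, in_dim)) for _ in range(n_tasks)]

# The supervisory net reads all subnet activations at once and
# outputs one modulation gain per task net.
W_sup = rng.standard_normal((n_tasks, n_tasks * hid_dim))

def forward(x):
    acts = [task_net(x, W) for W in task_weights]   # run each task net
    summary = np.concatenate(acts)                  # "world state" seen by the top net
    gains = 1.0 / (1.0 + np.exp(-W_sup @ summary))  # sigmoid gain per subnet
    modulated = [g * a for g, a in zip(gains, acts)]  # modulate each subnet on the fly
    priority = int(np.argmax(gains))                # most pressing task right now
    return modulated, priority

x = rng.standard_normal(in_dim)
modulated, priority = forward(x)
```

In a real system the gains would feed back into the subnets' next step (and be learned from experience) rather than just rescaling their outputs, but the control flow is the same: read all activations, modulate, prioritize.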

AFAIK we haven't clearly defined what sentience is yet. If an AI bot can trick you into believing it's sentient, then what else is there? I guess this would just show we have an information-processing limit, and once another entity approaches that limit we are fooled. This is probably a question for the humanities to answer.

5