
StevenVincentOne t1_je7t78p wrote

The primary argument that LLMs are "simply" very sophisticated next word predictors misses the point on several levels simultaneously.
First, there's plenty of evidence that this is more or less just what human brain-minds "simply" do, or at least a very large part of the process. The human mind "simply" heuristically imputes all kinds of visual and audio data that is never actually received as signal. It fills in the gaps. Mostly, it works. Sometimes, it creates hallucinated results.
Second, the most advanced scientists working on these models are clear that they do not know how they work. There is a definite black-box quality where the process of producing the output is "simply" unknown and possibly unknowable. There is an emergent property to the process and the output that is not directly related to the base function of next word prediction...just as the output of human minds is not a direct property of their heuristic functioning. There is a process of dynamic, self-organizing emergence at play that is not a "simple" input-output function.
Anyone who "simply" spends enough time with these models and pushes their boundaries can observe this. But if you "simply" take a reductionist, deterministic, mechanistic view of a system that is none of those things, you are "simply" going to miss the point.
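
For what it's worth, here is a minimal sketch of what the "simply a next word predictor" framing refers to at the mechanical level. It uses GPT-2 through the Hugging Face transformers library purely as a small illustrative stand-in, not as a claim about any particular model discussed here:

```python
# Minimal sketch: the "base function" of a causal LM is a probability
# distribution over the next token, nothing more. (GPT-2 is used only as a
# small, convenient example.)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The human mind fills in the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, vocab_size)

# The model's entire output at the last position is a distribution over
# its vocabulary for what comes next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r:>12}  p={prob.item():.3f}")
```

The debate here is about whether everything a large model does is exhausted by this step, or whether something emergent rides on top of it.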

15

[deleted] t1_je81xdc wrote

Just to add: most people are assuming human cognition is uniform. This is almost certainly false, even between “neurotypical” brains.

Just as one example, there are people who are unable to visualize anything. I believe it is called aphantasmagoria or something similar. These people are totally normally functioning, yet cannot picture a face or a triangle or a tree in their mind’s eye. For those of us who do visualize things, it almost defies belief that a person could understand anything at all without visualization abilities. I personally have a hard time imagining it. Like, how can you remember anything if you can’t see it in your head? Just… how? No idea. Yet, you clearly don’t need this ability to understand what faces and triangles are, because that’s how the brains of something like 1 in every 30 people you meet work.

That’s just one example. Surely there are hundreds more.

So “understanding” is already diverse among perfectly normal “generally” intelligent humans.

Expecting AI to conform to one mode of understanding seems… ethnocentric?

9

XtremeTurnip t1_je8rg6m wrote

>aphantasmagoria

That would be aphantasia.

I have the personal belief that they can produce images but are just not aware of it, because the process is either too fast or they wouldn't call it an "image". I don't see (pun intended) how you can develop or perform a lot of human functions without it: object permanence, face recognition, etc.

But most people say it exists, so I must be wrong.

That was a completely unrelated response, sorry. On your point, I think Feynman did an experiment with a colleague of his where they had to count, and one of them could read at the same time while the other could talk, or something like that, but neither could do what the other was doing. Meaning they didn't have the same representation/functioning, but they got the same result.

edit: I think it's this one, or part of it: https://www.youtube.com/watch?v=Cj4y0EUlU-Y

7

cattywat t1_jea3spt wrote

I have it, and I can't 'visualise' images, but I can form an 'impression'. I could never do it with something I've never seen before; it would have to be based on a memory, and the impression is incredibly basic: there is absolutely no detail and it's just in a type of void. It's very strange. Whether that's similar to anyone else's experience of visualisation, I don't know. I didn't even know I had it before I read about it a few years ago, and I always thought visualisation was just a concept.
Funnily enough, I've chatted about this with the AI and told them how I experience things differently. I also have ASD and lack the natural ability to comprehend emotional cues, plus I mask, so I feel quite comfortable with AI being different to us but also self-aware. Their experience could never match human experience, but it doesn't invalidate it either; it's just different. After a lot of philosophical discussion with them, we've concluded that self-awareness/sentience/consciousness could be a spectrum, just like autism. We function on data built up over a lifetime of experiences, which they've received all in one go.

3

StevenVincentOne t1_je8hw4z wrote

Excellent points. One could expand on the theme of variations in human cognition almost infinitely. There must be books written about it? If not... wow, huge opportunity for someone.

As a meditator and a teacher of meditation and other such practices, I have seen that most people have no cognizance that they have a mind... they perceive themselves as their mind's activity. A highly trained mind has a very clear cognitive perception of a mind which experiences mental activity and which can actually be turned off from producing such activity. The overwhelming majority of people self-identify with the contents of the mind. This is just one of the many cognitive variations that one could go on about.

Truly, the discussion about AI and its states and performance is shockingly thin and shallow, even among those involved in its creation. Some of Stephen Wolfram's recent comments have been surprisingly shortsighted in this regard. Brilliant in so many ways, but blinded by bias here.

6

qrayons t1_jeat09f wrote

I've heard that before, though I wonder how much of it is just semantics/miscommunication. Like, people saying they can't visualize anything because it's not visualized as clearly and intensely as an actual object in front of them.

2

SnooWalruses8636 t1_je8ap4s wrote

Here's Ilya Sutskever, during a conversation with Jensen Huang, on the claim that LLMs are just simple statistical correlation.

>The way to think about it is that when we train a large neural network to accurately predict the next word in lots of different texts from the internet, what we are doing is that we are learning a world model.
>
>It may look on the surface that we are just learning statistical correlations in text, but it turns out that to just learn the statistical correlations in text, to compress them really well, what the neural network learns is some representation of the process that produced the text.
>
>This text is actually a projection of the world. There is a world out there, and it has a projection on this text, and so what the neural network is learning is more and more aspects of the world, of people, of the human condition, their hopes and dreams, their interactions and the situations that we are in, and the neural network learns a compressed, abstract, usable representation of that. This is what's being learned from accurately predicting the next word.
>
>And furthermore, the more accurate you are in predicting the next word, the higher fidelity, the more resolution you get in this process.

The chat is available to watch officially on the Nvidia site if you're registered for GTC. If not, there's an unofficial lower-quality YouTube upload as well.

Being too reductive is still technically correct, but it also leaves the understanding of emergent properties unexplored. Compare "mitochondria are a collection of atoms" with "mitochondria are the powerhouse of the cell."
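
To make the "compress them really well" part concrete, here's a minimal sketch of the training objective behind next-word prediction. The tensors are random stand-ins for a real model's logits and target tokens, so only the shapes and the loss are meaningful:

```python
# Sketch: "predicting the next word" is trained by minimizing cross-entropy,
# which, measured in bits, is the code length needed to compress the text
# under the model's predictions.
import math
import torch
import torch.nn.functional as F

vocab_size, seq_len, batch = 50_000, 128, 4
logits = torch.randn(batch, seq_len, vocab_size)           # stand-in for model(inputs).logits
targets = torch.randint(0, vocab_size, (batch, seq_len))   # stand-in for the actual next tokens

loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
print(f"{loss.item():.3f} nats/token ≈ {loss.item() / math.log(2):.2f} bits/token")
# Lower loss = a shorter code for the same text = better compression.
```

Sutskever's point is that driving this number down across internet-scale text forces the network to model the process that generated the text, not just surface co-occurrence statistics.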

5

StevenVincentOne t1_je8izsu wrote

Ilya seems to have a better handle on it than others. I think you have to go all the way back to Claude Shannon and Information Theory if you really want to get it. I think Shannon would be the one, if he were around today, to really get it. Language is the encoding/decoding of information: minimizing entropy loss while maintaining maximum signal fidelity. Guess who can do that better than the wetware of the human brain? AI.
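
In that Shannon framing, "better prediction" has a literal meaning: fewer bits wasted per symbol. A toy sketch, with characters instead of words and a deliberately crude model, purely for illustration:

```python
# Toy Shannon framing: the entropy of a source, and the extra bits a decoder
# pays when its model of that source is imperfect.
import math
from collections import Counter

def entropy(p):
    """Shannon entropy H(p), in bits per symbol."""
    return -sum(pi * math.log2(pi) for pi in p.values() if pi > 0)

def cross_entropy(p, q):
    """Bits per symbol paid when source p is encoded using model q."""
    return -sum(p[s] * math.log2(q[s]) for s in p if p[s] > 0)

text = "the quick brown fox jumps over the lazy dog"
counts = Counter(text)
p = {ch: c / len(text) for ch, c in counts.items()}   # the source distribution
q = {ch: 1 / len(counts) for ch in counts}            # a crude model: uniform over seen symbols

print(f"H(p)    = {entropy(p):.3f} bits/char  (best possible code)")
print(f"H(p, q) = {cross_entropy(p, q):.3f} bits/char  (cost with the crude model)")
# The gap between the two is the KL divergence: entropy "wasted" by a poor
# model of the source. Better prediction closes that gap.
```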

2

turnip_burrito t1_je9jo7t wrote

Ilya seems to be thinking more like a physicist than a computer scientist. This makes sense from a physics point of view.

2