venustrapsflies t1_j8jp5jv wrote

If I had a nickel for every time I saw someone say this on this sub I could retire early. It’s how you can tell this sub isn’t populated by people who actually work in AI or neuroscience.

It’s complete nonsense. Human beings don’t work by fitting a statistical model to large datasets; we learn by heuristics and explanations. An LLM is fundamentally incapable of logic, reasoning, error correction, confidence calibration, and innovation. No, a human expert isn’t just an algorithm, and it’s absurd that this idea even gets off the ground.

15

JoieDe_Vivre_ t1_j8jt9l7 wrote

The point they’re making is in their second sentence.

If it’s correct, it doesn’t matter where it came from.

ChatGPT is just our first good stab at this kind of thing. As the models get better, they will outperform humans.

It’s hilarious to me that you spent all those words just talking shit, while entirely missing the point lol.

9

xxxnxxxxxxx t1_j8jzb3z wrote

If it’s ever correct, it’s by accident. The limitations listed above negate that point.

−4

JoieDe_Vivre_ t1_j8k184o wrote

It’s literally designed to get the answer right. How is that ever “by accident”?

8

venustrapsflies t1_j8kck2g wrote

No, it's not at all designed to be logically correct; it's designed to appear correct by reproducing patterns from its training data.

On the one hand, it's pretty impressive that it can do what it does using nothing but a statistical model of language. On the other hand, it's a quite unimpressive example of artificial intelligence, because it is just a statistical language model. That's why it's abysmal at even simple math and logic questions, things that computers have historically been quite good at.

Human intelligence is nothing like a statistical language model. THAT is the real point, the one that both you and the OC, and frankly much of this sub at large, aren't getting.

7

xxxnxxxxxxx t1_j8k2m48 wrote

No, you’re missing how language models work. They are designed to guess the next word, and they can’t do any more than that. This works because language is a subjective interface, far from logical correctness.
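For what it’s worth, here is roughly what “designed to guess the next word” looks like in practice. This is a minimal sketch assuming the open-source GPT-2 weights via the HuggingFace transformers library (ChatGPT itself is a much larger, instruction-tuned model, but the core objective is the same flavor): all the model ever produces is a probability distribution over the next token, and generation just samples from it and repeats.

```python
# Minimal sketch of next-token prediction (assumes transformers + GPT-2 weights).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Two plus two equals"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: [batch, seq_len, vocab_size]

# The model's entire output: a probability distribution over the *next* token.
next_token_probs = logits[0, -1].softmax(dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

Note there is no separate “check the arithmetic” step anywhere in that loop; the answer is whatever token the training data made most likely.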

4

MaesterPycell t1_j8ky26c wrote

https://en.m.wikipedia.org/wiki/Chinese_room

This is a thought experiment that addresses the issue, possibly at greater length than I can here.

Additionally, I’d recommend that most people who are interested in AI read The Fourth Age, a philosophy book about AI. It gives a nice, easier-to-read account of what it would mean to be truly AGI and the steps we’ve made so far and will still need to make.

Quick edit: I also don’t think you’re wrong. This AI couldn’t explain what it’s saying; it has just learned to take the machinery behind it and spit out something akin to human language. However garbled or incoherent that output is, the machine behind it doesn’t care, as long as it suits its learning.

2

jesusrambo t1_j8kk7i2 wrote

Lmao, this is how you can tell this sub is populated by javascript devs and not scientists

You can’t claim it’s fundamentally incapable of that, because we don’t know what makes something capable of that or not.

We can’t prove it one way or another. So, we say “we don’t know if it is or isn’t” until we can.

0

venustrapsflies t1_j8kkovy wrote

I am literally a scientist who works on ML algs for a living. Stop trying to philosophize your way into believing what you want to. Just because YOU don’t understand it doesn’t mean you can wave your hands and act like two different things are the same.

1

jesusrambo t1_j8knlv9 wrote

You are either not a scientist, or a bad one. You’re just describing bad science.

3

venustrapsflies t1_j8l4ftf wrote

No, bad science would be pretending that, just because you don’t understand two different things, they are likely the same thing. Despite what you may believe, these algorithms are not some mystery that we know nothing about. We have a good understanding of why they work, and we know more than enough about them to know that they have nothing to do with biological intelligence.

0

jesusrambo t1_j8l57sr wrote

Can you please define exactly what biological intelligence is, and how it’s uniquely linked to logic and innovation?

1

venustrapsflies t1_j8l5t2n wrote

Are you actually interested in learning something, or are you just trying to play stupid semantic games?

0

jesusrambo t1_j8l6y32 wrote

If you can justify your perspective, and you’re interested in discussing it, I would love to hear it. I find this topic really interesting, and I’ve formally studied both philosophy and ML. However, so far nobody’s been able to provide an intelligent response without it devolving into, “it’s just obvious.”

Can you define what you mean by intelligence, how we can recognize and quantify it, and therefore describe how we can identify and measure its absence in a language model?

0

venustrapsflies t1_j8l8o54 wrote

How would you quantify the lack of intelligence in a cup of water? Prove to me that the flow patterns don’t represent a type of intelligence.

This is a nonsensical line of inquiry. You need to give a good reason why a statistical model would be intelligent, for some reasonable definition. Is a linear regression intelligent? The answer to that question should be the same as the answer to whether an LLM is.

What people like you do is conflate multiple very different definitions of a relatively vague concept like “intelligence”. You need to start with why on earth you would think a statistical model has anything to do with human intelligence. That’s an extraordinary claim; the burden of proof is on you.
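To make the linear regression comparison concrete, here is a minimal sketch in plain numpy (illustrative only, and obviously a caricature of an LLM’s scale): both are “fit parameters to data by minimizing a loss, then predict.” The LLM swaps two fitted numbers for billions of weights and squared error for cross-entropy over next tokens, but it is the same kind of object.

```python
# Purely illustrative: a statistical model is parameters fitted to data by
# minimizing a loss, then used to predict.
import numpy as np

rng = np.random.default_rng(0)

# "Training data": noisy samples of y = 3x + 1
x = rng.uniform(-1, 1, size=100)
y = 3 * x + 1 + rng.normal(scale=0.1, size=100)

# "Training": least squares picks the slope/intercept that minimize squared error.
slope, intercept = np.polyfit(x, y, deg=1)

# "Inference": predict the output for an unseen input.
print(slope * 0.5 + intercept)  # roughly 2.5
```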

1

jesusrambo t1_j8laua9 wrote

I can’t, and I’m not making any claims about the presence or lack of intelligence.

You are making a claim: “Language models do not have intelligence.” I am asking you to make that claim concrete, and provide substantive evidence.

You are unable to do that, so you refuse to answer the question.

I could claim “this cup of water does not contain rocks.” I could then measure the presence or absence of rocks in the cup, maybe by looking at the elemental composition of its contents and looking for iron or silica.

As a scientist, you would understand that to make a claim, either negative or positive, you must provide evidence for it. Otherwise, you would say “we cannot make a claim about this without further information,” which is OK.

Is a linear regression intelligent? I don’t know, that’s an ill-posed question because you refuse to define how we can quantify intelligence.

2