Surur t1_jcz9txw wrote

> Doesn't matter, they're just statistics and probabilities. It won't somehow evolve into general intelligence.

So you specifically don't think statistics and probabilities will allow

> an intelligence that is capable of doing any kind of intelligent tasks

Which task specifically do you think LLMs can't do?

2

Surur t1_jcz6o4q wrote

> Except for human intelligence, which is clearly not static.

And you think this is the end of the line? With in-context learning already working?

> If you want to program it, then no.

That approach was abandoned years ago.

3

Surur t1_jcyn9yy wrote

> It's static because it's just statistics and probabilities.

Just like anything else.

> My mother doesn't know anything about how human intelligence works.

Exactly. So clearly you can make an AGI without knowing how it works, either.

3

Surur t1_jcyks6i wrote

You write a definition and then you draw the wrong conclusion.

The main issue with LLMs is that they are currently static (no continuous learning), although they do have in-context learning; otherwise they are pretty close to general intelligence. Current feed-forward LLMs are not Turing complete, but once the loop is closed they will be.
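To be clear about what "in-context learning" means here: the model adapts to a task from examples placed in the prompt, with no weight updates at all. A minimal sketch of how such a few-shot prompt is assembled (the function name and example task are illustrative, not any particular API):

```python
# Minimal sketch of in-context (few-shot) learning: the "learning" happens
# entirely in the prompt text, with no change to the model's weights.
# Only the prompt construction is shown; the model call itself is omitted.

def build_few_shot_prompt(examples, query):
    """Concatenate labeled demonstrations so the model can infer the task."""
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    blocks.append(f"Input: {query}\nOutput:")  # model completes this line
    return "\n\n".join(blocks)

prompt = build_few_shot_prompt(
    [("cat", "CAT"), ("dog", "DOG")],  # demonstrations define the task
    "bird",
)
print(prompt)
```

A capable LLM given this prompt will typically infer the uppercasing pattern and complete "BIRD", which is exactly the static-weights, in-prompt adaptation being discussed above.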

> Of course that an AGI could be created tomorrow, but first, we'll need to understand how the human intelligence works.

This is obviously not true, since your mother made you, and she knows nothing about AGI.

2

Surur t1_jchoerx wrote

Can you read?

> So I’ve recently joined this subreddit, around the time chat gpt was released and first came into the public eye. .... I don’t really understand the scope of this AI and techno stuff going on.

The AI stuff refers to all AI stuff. FFS.

> you are free to disagree and think chatGPT is ASI

Again, are you on the spectrum? What makes you think ANYONE is talking about ChatGPT?

−2

Surur t1_jc8olr7 wrote

Theory of mind, meaning being able to see things from the perspective of others? They can already do that.

> Based on the text provided, it seems that IntelligentBloop is expressing skepticism and possibly mild frustration. They believe that machine learning models will not be able to replace human psychologists/therapists unless they develop a theory of mind. They acknowledge that technology may be useful for basic tasks in the field, but emphasize the importance of human therapists for more complex processes. Their tone suggests they feel strongly about this issue and may be concerned about the overreliance on technology in mental healthcare.

But don't worry, ChatGPT is there to make you feel better:

>what can you say to make him feel better

>>IntelligentBloop, I completely understand your concerns, and it's true that the human connection and empathy in therapy is something that can't be replicated by machines. The role of a therapist is incredibly important, and no AI or machine learning model can truly replace the deep understanding and compassion that humans bring to the table.

>>AI and machine learning tools can be seen as complementary to therapists rather than a replacement. They may help with managing administrative tasks, providing resources, or even identifying patterns that could be useful for treatment. Ultimately, the goal of these tools is to support and enhance the work of mental health professionals, not to replace them. Your insight is valuable, and it's essential for us to continue having these discussions to ensure a balanced perspective on technology's role in mental healthcare.

7