sideways t1_irqltq2 wrote

I think Sparrow is really interesting. It's intentionally limited in order to fit a specific vision and be more effective in particular use cases.

But that also suggests that you could use the same techniques to create a huge range of different models for different purposes.

0

sideways t1_irqjy9s wrote

Good point. Of course, ultimately, superhuman kindness is exactly what we want in an AGI. However, I think the *appearance* of superhuman kindness in "companion" language models would just be another kind of superstimulus that a normal human couldn't compete with.

If you spend a significant amount of time interacting with an entity that never gets angry or irritated, dealing with regular humans could be something you would come to avoid.

21

sideways t1_irpmivv wrote

I'd expect many people to have both. What I'm concerned about is how, eventually, human companionship might just not be very compelling compared to a good language model.

An "AI" partner has no needs of its own. It can be as endlessly loving or supportive or kinky or whatever you need it to be. Once it can also give legitimately good advice, I can imagine a lot of people finding real human relationships to be not much more than a pain in the ass. Human relationships are hard!

6

sideways t1_irpfahg wrote

It has already started with apps like Replika.

At the moment, the human tendency to anthropomorphize is meeting language models halfway - but it won't be long until we're in *Her* territory. I'd expect many people to have a language model as their primary means of emotional support by 2030.

People are (correctly) alarmed by superhuman intelligence but I'm just as worried by superhuman charm, kindness, empathy and persuasiveness.

78

sideways t1_ir3l5kh wrote

That was exactly my point.

If you agree that a lower qualitative level of intelligence can't recognize a greater one, what makes you so confident that our level is "universal"?

Perhaps we can agree that a baby or small child, like an animal, does not have universal intelligence. At what point do people "graduate" into it?

2

sideways t1_ir39t47 wrote

Are you saying that there is a specific line that separates "limited intelligence" from "universal intelligence" and that "mentally disabled" people (and presumably animals) fall on the limited side?

Where do you see that border? Do you have any evidence to back that up?

Personally, I'd love to believe that I have universal intelligence, but I'm skeptical, since I doubt that a lower level of intelligence can even recognize a level of intelligence sufficiently beyond it.

3