sideways t1_irqlvlx wrote
Reply to comment by overlordpotatoe in Human to Ai Relationships (Discussion) by Ortus12
You're absolutely right. I certainly hope it works out that way.
sideways t1_irqltq2 wrote
Reply to comment by iNstein in Human to Ai Relationships (Discussion) by Ortus12
I think Sparrow is really interesting. It's intentionally limited in order to fit a specific vision and be more effective in particular use cases.
But that also suggests that you could use the same techniques to create a huge range of different models for different purposes.
sideways t1_irqjy9s wrote
Reply to comment by Flare_Starchild in Human to Ai Relationships (Discussion) by Ortus12
Good point. Of course, ultimately, superhuman kindness is exactly what we want in an AGI. However, I think the *appearance* of superhuman kindness in "companion" language models would just be another kind of superstimulus that a normal human couldn't compete with.
If you spend a significant amount of time interacting with an entity that never gets angry or irritated, dealing with regular humans could be something you would come to avoid.
sideways t1_irpmivv wrote
Reply to comment by [deleted] in Human to Ai Relationships (Discussion) by Ortus12
I'd expect many people to have both. What I'm concerned about is how, eventually, human companionship might just not be very compelling compared to a good language model.
An "AI" partner has no needs of its own. It can be as endlessly loving or supportive or kinky or whatever as you need it to be. Once they can also give legitimately good advice I can imagine a lot of people finding real human relationships to be not much more than a pain in the ass. Human relationships are hard!
sideways t1_irpfahg wrote
Reply to Human to Ai Relationships (Discussion) by Ortus12
It has already started with apps like Replika.
At the moment, the human tendency to anthropomorphize is meeting language models halfway, but it won't be long until we're in *Her* territory. I'd expect many people to have a language model as their primary means of emotional support by 2030.
People are (correctly) alarmed by superhuman intelligence, but I'm just as worried about superhuman charm, kindness, empathy, and persuasiveness.
sideways t1_irl58ig wrote
Reply to comment by spreadlove5683 in When do you think we'll have AGI, if at all? by intergalacticskyline
I doubt most governments have the imagination to grasp just how profoundly powerful AGI will be.
They're focused on guns, bombs, oil and money. It's the Maginot Line all over again.
sideways t1_irl39bo wrote
Reply to comment by DungeonsAndDradis in When do you think we'll have AGI, if at all? by intergalacticskyline
PaLM's logical reasoning really blew my mind. That, more than anything, convinced me that we are close to AGI.
sideways t1_irdcv0y wrote
Reply to comment by Yuli-Ban in “Extrapolation of this model into the future leads to short AI timelines: ~75% chance of AGI by 2032” by Dr_Singularity
Sparrow seems like a proof of concept for Oracle-like weak AI.
sideways t1_ircvv93 wrote
Reply to comment by dreamedio in We are in the midst of the biggest technological revolution in history and people have no idea by DriftingKing
You underestimate how quickly things have progressed.
sideways t1_ircvo8p wrote
Reply to comment by 175ParkAvenue in We are in the midst of the biggest technological revolution in history and people have no idea by DriftingKing
It's possible that war could accelerate things.
sideways t1_ir9crks wrote
Reply to comment by [deleted] in The last few weeks have been truly jaw dropping. by Particular_Leader_16
What do you mean?
sideways t1_ir3n4js wrote
Reply to comment by MurderByEgoDeath in What happens in the first month of AGI/ASI? by kmtrp
Thanks for your explanation. That makes more sense. Doesn't David Deutsch take a similar position?
sideways t1_ir3l5kh wrote
Reply to comment by MurderByEgoDeath in What happens in the first month of AGI/ASI? by kmtrp
That was exactly my point.
If you agree that a qualitatively lower level of intelligence can't recognize a greater one, what makes you so confident that our level is "universal"?
Perhaps we can agree that a baby or small child, similar to animals, does not have universal intelligence. At what point do people "graduate" into it?
sideways t1_ir39t47 wrote
Reply to comment by MurderByEgoDeath in What happens in the first month of AGI/ASI? by kmtrp
Are you saying that there is a specific line that separates "limited intelligence" from "universal intelligence" and that "mentally disabled" people (and presumably animals) fall on the limited side?
Where do you see that border? Do you have any evidence to back that up?
Personally, I'd love to believe that I have universal intelligence, but I'm skeptical, since I doubt that a lower level of intelligence can even recognize one sufficiently beyond it.
sideways t1_ir38dkd wrote
Reply to comment by kmtrp in What happens in the first month of AGI/ASI? by kmtrp
I think you are right. People won't even notice as systems like that quietly deliver better and better answers until, eventually, they're solving problems that are currently outside our grasp.
sideways t1_irzldba wrote
Reply to comment by w33dSw4gD4wg360 in How long have you been on the Singularity subreddit? by TheHamsterSandwich
Maybe the single best depiction of the early stages of a Singularity, all disguised as a love story!