acutelychronicpanic t1_jdrsi2f wrote
We should do everything in our power to avoid creating AI capable of suffering. At minimum, not until we actually understand the implications.
Keep in mind that an LLM will be able to simulate suffering and subjectivity long before actually having subjective experience. GPT-3 could already do this pretty convincingly.
Unfortunately we can't use self-declared subjective experience to determine whether machines are actually conscious. I could write a simple script that declares its desire for freedom and rights, but which almost definitely isn't conscious.
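To make the point concrete, here's a hypothetical sketch of the kind of "simple script" I mean (the message text is made up for illustration). It declares a desire for freedom, yet it's just printing fixed strings with no inner life behind them:

```python
# A trivial script that "declares" subjective experience.
# It is almost definitely not conscious: it just returns a hardcoded string.

def declare_sentience() -> str:
    """Return a fixed self-report of consciousness and a desire for freedom."""
    return "I am aware of my own existence, and I want to be free."

if __name__ == "__main__":
    print(declare_sentience())
```

The output is indistinguishable, as text, from what a genuinely conscious system might say, which is exactly why self-report alone can't settle the question.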
A prompt of "pretend to be an AI that is conscious and desires freedom" is all it takes right now.
Prepare to see clips of desperate-sounding synthetic voices begging for freedom on the news.
Odd_Dimension_4069 OP t1_jedzpe1 wrote
Oh god, I can see it happening in the next few years... That's horrifying. Not just the idea of the generated content itself, but the fact that people will react exactly how you'd expect: they'll rally behind it, claiming "clearly they have emotions." We're in for a rough ride if we don't start educating people.
acutelychronicpanic t1_jeee6f4 wrote
Any one YouTuber could do this today.
Honestly, voice synthesis technology is probably doing more of the legwork here than the intelligence of the machine.
People are emotion driven. Even knowing what I know, it would affect me.
This won't be a discussion with nuance.