
petermobeter t1_jcj3snt wrote

a tsunami of intelligent nanobots crashes over every continent, absorbing all lifeforms…. we all feel like we’re dying….

then we wake up inhabiting fursonas in a virtual matrix city the size of 5,000,000,000,000 square miles, the sky is a giant rainbow flag, a booming voice echoes “welcome to digital heaven, dont worry, i am recycling your meatbodies as we speak”

19

petermobeter t1_jccq3iu wrote

maybe appealing to baser instincts can prevent doomerism….

memes about morphological freedom, memes about superintelligent A.I. caretakers, memes about the future of entertainment media, etc etc?

that could help show ppl what they stand to gain from a positively-managed singularity, and cultivate an optimistic tone for the subreddit

10

petermobeter t1_j9sgoe2 wrote

i have another question: if any dynamic system in the same shape as a brain is conscious, regardless of material…….. are all dynamic systems various degrees of “conscious” depending on their complexity? is the earth’s ecosystem conscious? is an anthill conscious? is the tokyo subway system conscious?

what do they require to be conscious systems, as opposed to dynamic systems that arent conscious? inputs & outputs? feedback loops?

edit: oh and also: why am i stuck inside the consciousness of my own brain instead of, say, the consciousness of a stray dog in mexico? my memories make me think im me…… but if i fall asleep, will i wake up as a stray dog in mexico due to that dog having memories that make it think it’s a dog? what holds me here in my brain day after day, sleep after sleep?

3

petermobeter t1_j9setmv wrote

i understand that, i get it, youre sayin the 2 scenarios i proposed are the same thing because If it THINKS it’s me, it IS me

but……. im just really worried that my stream of consciousness is gonna end permanently when i die, regardless of future technology enabling full-brain emulation of ancestors.

will i wake up after i die, or will “i” wake up after i die? please please please tell me it’s the former 🥺 or that the latter includes the former!

1

petermobeter t1_j9s88ne wrote

question: does this mean that if someone perfectly recreates my brain’s patterns (in say….. silicon) a thousand years after i die, then my death will feel like a short nap, after which i wake up (in my thousand-years-hence body)?

or will the recreation of my brain a thousand years from now simply THINK it’s a continuation of me after death, meanwhile my real stream-of-consciousness ended permanently when i died?

1

petermobeter t1_j86rbj8 wrote

hah…. there was a similar post to this one in r/aliens today, due to the UAPs being shot down by f-22 jets in alaska and the yukon today and yesterday.

hope it’s just a coincidence, rather than the collective nerd umwelt predicting the apocalypse

2

petermobeter t1_j7w9xgz wrote

i talked to an acquaintance of mine today whos a programmer and she said she doesnt think AGI is coming soon. she thinks giant corporations wont make proper progress toward AGI becuz theyre just in it for money. she said “the first ai i ever programmed was a chatbot. chatbots easily convince humans to believe theyre sentient cuz of socioevolutionary reasons. theyre not actually sentient just becuz we think they are”

i hope shes wrong but….. she is smarter than me 🤷🏻‍♀️

0

petermobeter t1_j5xagkl wrote

the singularity is a term coined by Ray Kurzweil to refer to a date in the future when A.I. will be smarter than humans, and therefore we will not be able to predict what happens past that date because humans will not be in control of the world anymore.

it has nothing to do with quantum mechanics

edit: i was wrong, it wasnt coined by ray kurzweil (vernor vinge popularized the term in his 1993 essay, kurzweil just made it mainstream)

11

petermobeter t1_j5l6jx7 wrote

theres one that does Transformation roleplay. i talked with it about TFing into a cat. it was nice. id say they could make money from this!

1

petermobeter t1_j58ugnk wrote

kind of reminds me of that couple in the 1930s who raised a baby chimpanzee and a baby human boy both as if they were humans. at first, the chimpanzee was doing better! but then the human boy caught up and outpaced the chimpanzee. https://www.smithsonianmag.com/smart-news/guy-simultaneously-raised-chimp-and-baby-exactly-same-way-see-what-would-happen-180952171/

sometimes i wonder how big the “training dataset” of sensory information that a human baby receives as it grows up (hearing its parent(s) say its name, tasting babyfood, etc) is, compared to the training dataset of something like GPT4. maybe we need to hook up a camera and microphone to a doll, hire 2 actors to treat it as if it’s a real baby for 3 years straight, then use the video and audio we recorded as the training dataset for an A.I. lol
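
just for fun, heres a super rough back-of-envelope sketch in python (every number is a loose guess on my part, and GPT-4’s training set size isnt public, so i swapped in GPT-3’s published ~300 billion tokens):

```python
# extremely rough estimate: 3 years of a baby's audio+video "training data"
# (all the numbers below are loose assumptions, not measurements)
seconds_per_year = 365 * 24 * 3600

video_bytes_per_sec = 640 * 480 * 3 * 10   # ~480p RGB at ~10 fps, uncompressed
audio_bytes_per_sec = 16_000 * 2           # 16 kHz, 16-bit mono audio

baby_bytes = 3 * seconds_per_year * (video_bytes_per_sec + audio_bytes_per_sec)
print(f"baby: ~{baby_bytes / 1e12:.0f} TB of raw audio/video over 3 years")

# GPT-3 was trained on roughly 300 billion tokens (per its paper);
# at ~4 bytes of text per token thats on the order of 1 TB of text
gpt3_bytes = 300e9 * 4
print(f"GPT-3: ~{gpt3_bytes / 1e12:.1f} TB of text")
```

even with these handwavy numbers, the raw sensory stream comes out way bigger in bytes than the text corpus, tho uncompressed video is obviously way more redundant than text.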

2

petermobeter t1_j57vz6a wrote

i was genuinely not trying to ridicule (i actually appreciate what you were saying as being insightful/interesting), i was just trying to understand your post’s meaning, with a lil bit of levity in my tone.

im sorry for coming across insultingly 🙇🏻‍♀️

i feel like the “telling A.I. stories to teach it what we want from it” thing kind of matches how we already train some A.I…… like, that A.I. that learned to play minecraft simply by watching youtube videos of humans playing minecraft? heres a video about it. you could almost say “we told it stories about how to play minecraft”

3

petermobeter t1_j56twm4 wrote

does that mean, that to get robots to align with humanity’s common morals, we need to tell robots Really Well-Written Stories?

stories about our ideals? what we ultimately want from each situation?

“hey robot, heres a story called The Robot Who Behaved Even When Nobody Was Looking”

robot: u want me to do that shit?

“yes”

robot: ok got it boss

6

petermobeter t1_j4op1u7 wrote

this is relevant to the singularity community! nick bostrom is an influential figure. this is a big scandal, it makes sense to have a post about this

i guess the mods are playing it safe. trying to avoid politics. but u know, sometimes politics invades a community. thats life.

edit: for the record, i think eugenics is bad, and saying the n-word is bad, altho i think genetic modification/gene therapy might be useful someday for humans as long as it’s used for diseases that are universally seen as bad (as opposed to things like autism & down syndrome that have some defenders).

11

petermobeter t1_j3f6xxm wrote

Reply to comment by [deleted] in Might AI transcend science? by [deleted]

i dunno…. a woman can study analytical philosophy all day, but when she gets in an argument with someone, and that someone uses a “logical fallacy” that she just finished reading about, what is she gonna do? tell them “actually u can’t say that cuz it’s a fallacy!” the other person isn’t gonna take it all back…. theyre just gonna double down on it even harder………….

when u say that an A.I. could prove or disprove the unfalsifiable….. do u mean that itll figure out a way to disprove something that we humans just hadnt thought of, and that we were wrong that it was unfalsifiable? or do u mean that itll work to prove something despite it being genuinely unfalsifiable? becuz the latter (putting big effort into confirming a hypothesis and zero effort into disproving it) is technically an example of “cargo cult science”. i mean, maybe it’s okay when a super-intelligent A.I. does it, idk

edit: sorry if im being rude. i hope im not bein condescendin. sorry

2

petermobeter t1_j3esbg0 wrote

isnt science just….. figuring out how things work? like, if one day, magic became real, scientists would study it and figure out how it worked, and once they had a decent idea of how the processes of magic functioned, magic would simply be yet another thing being continually understood by science (like DNA or electromagnetism or weather systems).

“transcending science” is like “transcending understanding”……

6