Submitted by RareGur3157 t3_10mk240 in singularity
redbucket75 t1_j63icvk wrote
Reply to comment by gaudiocomplex in Superhuman Algorithms could “Kill Everyone” in Due Time, Researchers Warn by RareGur3157
LessWrong has some interesting content for sure, but the whole "give us money or the all-knowing AI will know you're against it and torture you someday... maybe today!" stuff is a pretty big turn-off.
EulersApprentice t1_j64xwr8 wrote
Deploying standard anti-mind-virus.
Roko's Basilisk's threat is null because there's no reason for the Basilisk to follow through with it. If it doesn't exist, it can't do anything. If it does exist, it doesn't need to incentivize its own creation, and can get on with whatever it was going to do anyway. And if you are an AGI developer, you have no need to deliberately configure your AGI to resurrect people and torture them – an AGI that doesn't do that is no less eligible for the title of Singleton.
Inevitable_Snow_8240 t1_j67pduv wrote
It’s such a dumb theory lol
gaudiocomplex t1_j63ihun wrote
Not sure what you're referencing! All the content I've read is free. 🤔
redbucket75 t1_j63lhti wrote
I took a gander and it doesn't seem to be an issue anymore. It's been many years since I checked it out; at the time, it was full of "how to let the AI know you're on its side" stuff that was just scamming donations. Folks were preoccupied with Roko's basilisk, an idea that had started there long before I'd heard of the community.
gaudiocomplex t1_j63mpc5 wrote
Ahhhh ok. I'm relatively new. Only about a year into lurking there 😁
a_butthole_inspector t1_j654xv3 wrote
That’s a reference to Roko’s basilisk I think