sticky_symbols t1_ivljwmr wrote
Reply to comment by mhornberger in The Collapse vs. the Conclusion: two scenarios for the 21st century by camdoodlebop
Don't you think people in a utopian society could head off collapse from lack of population replacement? Like, by making parenting easier and more fun and more respected?
sticky_symbols t1_ivkdpxn wrote
Reply to Out of all the movies that depict a dystopian future with humanity taken over by robots the Disney Cars movie could be a highly probable outcome. by cloudrunner69
See the classic singularity short story "I, Rowboat" (get it?) by Charlie Stross. Stross has thought about the singularity a lot, and he tells a rollicking good tale.
sticky_symbols t1_ivfwlym wrote
Reply to Essential reading material? by YB55qDC8b
Accelerando by Charles Stross. Fiction but visionary. Particularly the first few chapters, which can be read independently.
sticky_symbols t1_ivfwgn0 wrote
Reply to Essential reading material? by YB55qDC8b
Rapture of the Nerds.
sticky_symbols t1_iu00kjz wrote
Reply to comment by hducug in Do you guys really think the AI won't just kill you? by [deleted]
Somebody is going to try, whether they have a safe plan or not. That's why safety research now seems like a good idea.
sticky_symbols t1_iu00ftj wrote
Reply to comment by hducug in Do you guys really think the AI won't just kill you? by [deleted]
See the post "the AI knows and doesn't care". I find it completely compelling on this topic.
sticky_symbols t1_itzz0we wrote
Reply to comment by hducug in Do you guys really think the AI won't just kill you? by [deleted]
Yes, by reading the research on the Alignment Forum. And they're still not totally sure they can build safe AGI.
sticky_symbols t1_itzyv66 wrote
Reply to comment by hducug in Do you guys really think the AI won't just kill you? by [deleted]
Maybe. Or maybe not. Even solving problems involves making goals, and humans seem to be terrible at information security. See the websites I mentioned in another comment for that discussion.
sticky_symbols t1_itzxzvo wrote
Reply to comment by hducug in Do you guys really think the AI won't just kill you? by [deleted]
There's a principle called instrumental convergence: whatever your goals are, gathering power and eliminating obstacles will help you achieve them. That's why most of the people building AGI are worried about it taking over.
sticky_symbols t1_itzxmxi wrote
OP is absolutely correct. Naturally, there are arguments on both sides, and it probably matters a good deal how you build the AGI. There is a whole field that thinks about this. The websites LessWrong and the Alignment Forum offer brief introductions to AI safety thinking.
sticky_symbols t1_itsch4a wrote
Reply to comment by Eleganos in It's important to keep in mind that the singularity could create heaven on Earth for us. *Or* literal hell. Human priorities are the determining factor. by Pepperstache
Exactly. If it was designed to be good, very carefully. Which those groups are going to try very hard to do.
sticky_symbols t1_itsa017 wrote
Reply to comment by TheSingulatarian in It's important to keep in mind that the singularity could create heaven on Earth for us. *Or* literal hell. Human priorities are the determining factor. by Pepperstache
Again, no. Brin and Page were computer scientists first and created Google almost by accident. And OpenAI was created entirely with the hope of doing something good.
I agree that most politicians, business owners, and leaders are on the sociopath spectrum. We appear to be lucky with regard to those two and some other AGI groups. The companies weren't started to make profits, because the research was visionary enough that near-term profits weren't obvious.
sticky_symbols t1_its22wz wrote
Reply to comment by rushmc1 in It's important to keep in mind that the singularity could create heaven on Earth for us. *Or* literal hell. Human priorities are the determining factor. by Pepperstache
I've been observing closely, too. That's why I'm curious where the disagreement arises.
sticky_symbols t1_itrxr18 wrote
Reply to comment by rushmc1 in It's important to keep in mind that the singularity could create heaven on Earth for us. *Or* literal hell. Human priorities are the determining factor. by Pepperstache
Yeah, I see it differently, but I could be wrong. Who do you think enjoys inflicting suffering on people who've never wronged them?
Wanting some sort of superiority or control is almost universal, but that alone wouldn't come anywhere near a hell outcome.
sticky_symbols t1_itqw9gh wrote
Reply to It's important to keep in mind that the singularity could create heaven on Earth for us. *Or* literal hell. Human priorities are the determining factor. by Pepperstache
The hell scenario seems quite unlikely compared to the extinction scenario. We'll try to get its goals to align with ours. If we fail, it won't likely be interested in making things worse for us. And there are very few true sadists who'd torment humanity forever if they achieved unlimited power by controlling AGI.
sticky_symbols t1_itqvw1l wrote
Reply to comment by TheSingulatarian in It's important to keep in mind that the singularity could create heaven on Earth for us. *Or* literal hell. Human priorities are the determining factor. by Pepperstache
I don't think this is true. The people at DeepMind and OpenAI seem quite well-intentioned. And those two are currently well in the lead.
sticky_symbols t1_isl2a1e wrote
Paywall. Anyone have access to a non-paywalled version?
sticky_symbols t1_is9ehpd wrote
Reply to Scientists teach brain cells in a dish to play Pong, opening potential path to powerful AI by WikkaOne
I read the paper. The learning is really minimal; it hits the ball slightly better than chance after learning.
sticky_symbols t1_irllusl wrote
If, as I think you are assuming, reality is a benign simulation, then we're probably safe from the dangers of unaligned AGI. We would also be safe from a lot of other dangers if we're in a benign simulation. And that would be awesome. I think we might be, but it's far from certain. Therefore I would like to solve the alignment problem.
sticky_symbols t1_ivlktyi wrote
Reply to comment by mootcat in The Collapse vs. the Conclusion: two scenarios for the 21st century by camdoodlebop
If you're right, this is absolutely critical to the logic we use in the AGI safety community. We also estimate AGI around maybe 2040, but delaying that progress is considered a good idea, to allow more time for safety research.
If there's a good chance of collapse, it is not a good idea to delay AGI.
Any sources or references would be very helpful. I'll try to make good use of them if you make time to provide some.