sticky_symbols

sticky_symbols t1_ivlktyi wrote

If you're right, this is absolutely critical to the logic we use in the AGI safety community. We also estimate roughly 2040 for AGI, but delaying that progress is generally considered a good idea, to allow more time for safety research.

If there's a good chance of collapse, it is not a good idea to delay AGI.

Any sources or references would be very helpful. I'll try to make good use of them if you make time to provide some.

3

sticky_symbols t1_itzxmxi wrote

OP is absolutely correct. Naturally, there are arguments on both sides, and it probably matters a good deal how you build the AGI. There is a whole field that thinks about this. The websites LessWrong and the Alignment Forum offer brief introductions to AI safety thinking.

2

sticky_symbols t1_itsa017 wrote

Again, no. Brin and Page were computer scientists first and created Google almost by accident. And OpenAI was created entirely with the hope of doing something good.

I agree that most politicians, business owners, and leaders are on the sociopath spectrum. We appear to be lucky with regard to those two and some other AGI groups. The companies weren't started to make profits, because the research was visionary enough that near-term profits weren't obvious.

5

sticky_symbols t1_itrxr18 wrote

Yeah I see it differently, but I could be wrong. Who do you think enjoys inflicting suffering on people who've never wronged them?

Wanting some sort of superiority or control is almost universal, but that alone wouldn't come close to a hell outcome.

6

sticky_symbols t1_itqw9gh wrote

The hell scenario seems quite unlikely compared to the extinction scenario. We'll try to get its goals to align with ours. If we fail, it won't likely be interested in making things worse for us. And there are very few true sadists who'd torment humanity forever if they achieved unlimited power by controlling AGI.

11

sticky_symbols t1_irllusl wrote

If, as I think you are assuming, reality is a benign simulation, then we're probably safe from the dangers of unaligned AGI. We would also be safe from a lot of other dangers if we're in a benign simulation. And that would be awesome. I think we might be, but it's far from certain. Therefore, I'd still like to solve the alignment problem.

6