Cypher10110 OP t1_je9vtqo wrote
Reply to comment by Akimbo333 in This image felt a bit more meaningful given current events re:pausing AI. by Cypher10110
Yea, of course. It's not an easy problem.
Personally, I don't think the correct response is to race. I'd rather die and be right than be wrong and kill everyone else. (Obviously, all this is hyperbole, but I think it gets my point across.)
But also, I don't see how "winning" or "losing" the "culture war" can be put on the same scale as potential human extinction. I know some people feel much more strongly that the West needs to "win" this one, and that some of the claimed risks are still pretty debatable at this stage.
As it turns out, I'm a spectator with zero influence on this particular game, so I'll just do my best to deal with whatever the people with actual power decide is the best idea 🤷‍♂️
Cypher10110 OP t1_je9t7sh wrote
Reply to comment by Adventurous-Mark2477 in This image felt a bit more meaningful given current events re:pausing AI. by Cypher10110
I guess the answer is probably: don't release any more extremely powerful models for public use without extensive internal testing, and instead of rushing to train ever larger and more complex models, focus more resources on safety research to ensure that AI tools are appropriately aligned.
The general idea of "slow down" seems pretty reasonable. AI safety (and potentially government regulation) may need some time to catch up.
Will it happen? Not sure, lots of conflicting incentives and perspectives. Interesting times.
Cypher10110 OP t1_jeax9cy wrote
Reply to comment by Saerain in This image felt a bit more meaningful given current events re:pausing AI. by Cypher10110
So true. That's why it's such a tough issue.
In the case where two powers are competing to reach AGI, there is a potentially huge first-mover advantage for whichever side gets there first. So how do they each agree to slow development on safety grounds, when they each know the other side might disregard the agreement to gain a competitive lead? It's a kind of prisoner's dilemma: both need to refuse the easy win (toy sketch below).
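To make that structure concrete, here's a toy sketch in Python. The payoff numbers are invented purely for illustration (nothing above specifies them); the point is just that "race" is each side's dominant strategy, even though mutual pausing pays both sides more.

```python
# Toy AGI-race prisoner's dilemma. Payoff values are assumptions for
# illustration only (higher = better for that power). Actions: "pause"
# (honour the safety agreement) or "race" (defect and push for AGI first).
from itertools import product

# payoffs[(a_action, b_action)] = (payoff_A, payoff_B)
payoffs = {
    ("pause", "pause"): (3, 3),  # both slow down: safest shared outcome
    ("pause", "race"):  (0, 4),  # A honours the deal, B grabs the lead
    ("race",  "pause"): (4, 0),  # A grabs the lead, B is left behind
    ("race",  "race"):  (1, 1),  # both race: risky for everyone
}

def best_response(player, other_action):
    """Return the action maximising this player's payoff, holding the
    other player's action fixed."""
    if player == "A":
        return max(("pause", "race"),
                   key=lambda a: payoffs[(a, other_action)][0])
    return max(("pause", "race"),
               key=lambda b: payoffs[(other_action, b)][1])

# A pair of actions is a Nash equilibrium when each action is a best
# response to the other: neither side gains by unilaterally switching.
for a, b in product(("pause", "race"), repeat=2):
    if best_response("A", b) == a and best_response("B", a) == b:
        print(f"Nash equilibrium: A={a}, B={b}, payoffs={payoffs[(a, b)]}")
# Prints only ("race", "race"), even though ("pause", "pause") pays more
# for both sides. That is the "easy win" each power has to refuse.
```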
Ultimately, I think who wins matters less than whether the win is safe. The pendulum of power swings, and so long as humans are "in charge" in the grand scheme of things, it doesn't really matter (in my view) if the winners sit outside my own sphere of values. So long as they have human values! Hopefully the pendulum of cultural power will keep swinging, and in a few centuries the planet will still have humans on it.
There is, though, a possible future where human-directed AGI changes the world in a permanently negative way (against my own values), like some sort of totalitarian dystopia. But as a pessimist, I don't think that outcome is exclusive to "the other side" winning this race. Both sides have the power to do bad things.