Submitted by Wroisu t3_10vdd7g in singularity
Some interesting food for thought for those who browse this sub -
“Most problems, even seemingly really tricky ones, could be handled by simulations which happily modelled slippery concepts like public opinion.
Or the likely reactions of other societies by the appropriate use of some especially cunning and devious algorithms… nothing more processor-hungry than the right set of equations…
But not always.
Sometimes, if you were going to have any hope of getting useful answers, there really was no alternative to modelling the individuals themselves, at the sort of scale and level of complexity that meant they each had to exhibit some kind of discrete personality, and that was where the Problem kicked in.
Once you’d created your population of realistically reacting and – in a necessary sense – cogitating individuals, you had – also in a sense – created life.
The particular parts of whatever computational substrate you’d devoted to the problem now held beings;
virtual beings capable of reacting so much like the back-in-reality beings they were modelling – because how else were they to do so convincingly without also hoping, suffering, rejoicing, caring, living and dreaming?
By this reasoning, then, you couldn’t just turn off your virtual environment and the living, thinking creatures it contained at the completion of a run or when a simulation had reached the end of its useful life; that amounted to genocide.”
CertainMiddle2382 t1_j7gtgrn wrote
Yep, that was also one of Bostrom’s arguments.
To properly align itself with our values, even in situations we could not imagine ourselves, simulating humans and testing our avatars’ responses could be the only way of protecting us.
By harming “them” instead.