Submitted by xutw21 t3_ybzh5j in singularity
ReadSeparate t1_ittgzjh wrote
Reply to comment by 4e_65_6f in Large Language Models Can Self-Improve by xutw21
I've never really cared too much about the moral issues involved here, to be honest. People always talk about sentience, sapience, consciousness, and the capacity to suffer, and that all matters, but what I think is far more pressing is: can this model replace a lot of people's jobs, and can it surpass the entire collective intelligence of the human race?
Like, if we did create a model and it did suffer a lot, that would be a tragedy. But it would be a much bigger tragedy if we built a model that wiped out the human race, or if we built superintelligence and didn't use it to cure cancer or end war or poverty.
I feel like the cognitive capacity of these models is the #1 concern by a factor of 100. The other things matter too, and it might turn out that we'll be seen as monsters in the future for enslaving machines or something; that's certainly possible. But I just want humanity to evolve to the next level.
I do agree, though, that it's probably going to be extremely difficult, if not impossible, to get an objective view of the subjective experience of a mind like this unless we can directly inspect it somehow, rather than just asking it how it feels.