VirtualHat t1_j9rqmii wrote
Reply to comment by wind_dude in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
This is very far from the current thinking in AI research circles. Everyone I know believes intelligence is substrate independent and, therefore, could be implemented in silicon. The debate is really more about what constitutes AGI and if we're 10 years or 100 years away, not if it can be done at all.
wind_dude t1_j9rv2vw wrote
Would you admit a theory may not be possible and then devote your life to working on it? Even if you don't believe it, you're going to say it, and eventually come to believe it. And the definitions do keep shifting toward lower bars as the media and companies sensationalise for clicks and funding.