Submitted by Liberty2012 t3_11ee7dt in singularity
Artanthos t1_jaexshl wrote
This sounds like a great first problem for AGI/ASI.
If the task is beyond human intelligence, make solving it one of the fundamental purposes of the AGI/ASI.
The more the AI grows, the better it gets at alignment.
Liberty2012 OP t1_jaeyqu3 wrote
That is a catch-22: asking the AI to essentially align itself. I understand the concept, but it assumes that we can realistically observe what is happening within the AI and keep it in check as it matures.
However, we are already struggling in that regard with even our most primitive AI today.
>“The size and complexity of deep learning models, particularly language models, have increased to the point where even the creators have difficulty comprehending why their models make specific predictions. This lack of interpretability is a major concern, particularly in situations where individuals want to understand the reasoning behind a model’s output”
>
>https://arxiv.org/pdf/2302.03494.pdf
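
To make that interpretability gap concrete: the simplest explanation tools we have, such as input-gradient saliency, only tell you *which* inputs a model was sensitive to, not *why* it weighted them that way. Here is a minimal sketch, assuming PyTorch (my own illustration, not from the paper):

```python
# Minimal sketch: input-gradient saliency on a toy model (assumes PyTorch).
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny stand-in classifier; real language models are vastly larger,
# but the probing technique is the same.
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 2),
)

x = torch.randn(1, 8, requires_grad=True)  # one input example
logits = model(x)
pred = logits.argmax(dim=1).item()         # the class the model "chose"

# Backpropagate the predicted-class score to the input: the gradient
# measures how sensitive the prediction is to each input feature.
logits[0, pred].backward()
saliency = x.grad.abs().squeeze()

print(f"predicted class: {pred}")
print(f"per-feature saliency: {saliency.tolist()}")
# The output ranks feature influence, but it says nothing about *why* the
# model weighted features this way: that is the gap the quote describes.
```

Even when a probe like this runs cleanly, the result is a vector of sensitivities, not a reason, and it only gets harder at billions of parameters.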