SoylentRox t1_j4das0y wrote
Reply to comment by ElvinRath in Does anyone else get the feeling that, once true AGI is achieved, most people will act like it was the unsurprising and inevitable outcome that they expected? by oddlyspecificnumber7
I think this depends on your definition of 'original research'. Some AI systems already do research: they set the equipment parameters for the next run based on the numerical results of all the previous runs. This is used in semiconductor process optimization and fusion energy research. You could argue that this isn't 'original' or 'research', but you could devise a lot of experiments that are "just" having the robots run an experiment similar to a previous one while varying certain parameters in a way the AI 'believes' (based on past data) may yield new information.
The key part in that description is having robots sophisticated enough to set up experiments, something we don't currently have.
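The "set the equipment from the results of past runs" loop can be sketched as a toy active-learning loop. Everything below is made up for illustration: the objective function stands in for a real equipment run, the parameter ranges and the crude acquisition rule (nearest-neighbour result plus a distance-based exploration bonus) are arbitrary; real systems use proper Bayesian optimization over actual instrument data.

```python
import random

random.seed(0)

def run_experiment(params):
    # Hypothetical stand-in for one real equipment run (e.g. a deposition
    # cycle): returns a quality score. The "true" optimum is invented.
    t, p = params
    return -((t - 350.0) ** 2) / 1000.0 - (p - 2.0) ** 2

def propose_next(history, n_candidates=500):
    # Crude acquisition rule: for each random candidate setting, find the
    # nearest previously-tried setting, then trade off that neighbour's
    # observed result (exploitation) against distance from it (exploration).
    def scaled_dist(a, b):
        return abs(a[0] - b[0]) / 300.0 + abs(a[1] - b[1]) / 4.5

    best_cand, best_acq = None, float("-inf")
    for _ in range(n_candidates):
        cand = (random.uniform(200.0, 500.0), random.uniform(0.5, 5.0))
        nearest = min(history, key=lambda h: scaled_dist(cand, h[0]))
        acq = nearest[1] + 10.0 * scaled_dist(cand, nearest[0])
        if acq > best_acq:
            best_cand, best_acq = cand, acq
    return best_cand

# Seed with a couple of arbitrary runs, then let the loop pick each
# next setting from everything observed so far.
history = []
for params in [(250.0, 1.0), (450.0, 4.0)]:
    history.append((params, run_experiment(params)))

for _ in range(30):
    params = propose_next(history)
    history.append((params, run_experiment(params)))

best = max(history, key=lambda h: h[1])
print("best settings found:", best)
```

The missing piece, as noted above, is the physical side: the loop only works because `run_experiment` is a function call rather than a robot that has to physically reconfigure the apparatus.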
ElvinRath t1_j4dw91r wrote
Oh, yeah, sorry.
I was answering the OP and used "original research" because he mentioned it, but I was really thinking "independent" (a term I use later in my post), meaning "without human intervention" (or at least no more intervention than "Hey, go research this").
No human intervention is the requirement for the concept of singularity (well... or augmented humans that can comprehend in seconds what currently takes years, but that's probably not a human anymore... :D )
SoylentRox t1_j4edj94 wrote
I am not sure you'd fail to get a singularity with only a small amount of human involvement — say, 1000 times as much research output as we produce now, with the same number of people working on it.