Submitted by atomsinmove t3_10jhn38 in singularity
Ortus14 t1_j5objyy wrote
Reply to comment by phriot in Steelmanning AI pessimists. by atomsinmove
I see no reason why understanding the human brain would be needed.
We have more than enough concepts and AGI models; we just need more compute, imho. Compute for the same cost increases by roughly a thousand times every ten years, so by Kurzweil's 2045 date, compute for the same cost can be estimated at roughly four million times what we have today.
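A quick back-of-the-envelope check of that figure (the starting year of 2023 and a perfectly steady 1000x-per-decade trend are my assumptions, not numbers stated in the thread):

```python
# Back-of-the-envelope check of the compute-growth claim above.
# Assumptions (mine, not from the thread): start year 2023, and a steady
# 1000x improvement in compute-per-dollar every 10 years.
growth_per_decade = 1_000
years = 2045 - 2023  # 22 years until Kurzweil's date

factor = growth_per_decade ** (years / 10)
print(f"Compute per dollar in 2045 vs today: ~{factor:,.0f}x")  # ~3,981,072x, i.e. roughly 4 million
```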
Even if Moore's law ended, the trend would continue, because server farms are growing at an exponential pace and the cost of solar energy is dropping toward zero. If we have a breakthrough in fusion power, it will accelerate beyond our models.
Today we can simulate vision (roughly 20% of the human brain), and we're simulating it in a way that's far more computationally efficient than the human brain, because we're making the absolute most of our hardware.
It's pretty likely we'll reach superhuman-level AGI well before 2045.
phriot t1_j5ovn3n wrote
I don't think that you have to simulate a human brain to get intelligence, either. I discuss that toward the end of my comment. But the OP asked about counterarguments to the Kurzweil timeline for AGI, and Kurzweil explicitly bases his timeline on those two factors: computing power and a good-enough brain model to simulate in real time. I don't think the neuroscience will be there in six years to meet Kurzweil's timeline.
If we get AGI in 2029, it will likely be specifically because some other architecture works; it won't be because Kurzweil was correct. In some writings, Kurzweil goes further and says we'll have this model of the brain because we'll have really amazing nanotech in the late 2020s that can non-invasively map all the synapses, activation states of neurons, etc. I'm not particularly up on that literature, but I don't think we're anywhere close to having that tech. I expect we'll need AGI/ASI first to get there before 2100.
With regard to your own thinking, you only mention computing power. Do you think that intelligence is emergent given a system that produces enough FLOPS? Or do you think that we'll just have enough spare computing power to analyze data, run weak AI, etc., and that will help us discover how to make an AGI? I don't believe that intelligence emerges from raw processing power, or else today's top supercomputers would already be AGIs, as they surpass most estimates of the human brain's computational capacity. That implies that architecture is important. Today, we don't really have ideas other than a simulated brain that would confidently produce an AGI, but maybe we'll come up with a plan in the next couple of decades. (I am really interested to see what an LLM with a memory, some fact-checking heuristics, the ability to constantly retrain, and some additional modalities would be like.)
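For what it's worth, here's a minimal sketch of what that last idea (an LLM wrapped with a memory and a fact-checking step) might look like as a loop. Every name here (`query_llm`, `MemoryStore`, `passes_fact_check`) is a hypothetical placeholder rather than any real API, and retraining and extra modalities are left out entirely:

```python
# Sketch of the "LLM + memory + fact-checking" idea from the comment above.
# All names are hypothetical placeholders, not a real library or API;
# the point is just the shape of the loop.

class MemoryStore:
    """Naive long-term memory: keeps past exchanges and recalls the most recent few."""
    def __init__(self):
        self.entries: list[str] = []

    def add(self, text: str) -> None:
        self.entries.append(text)

    def recall(self, k: int = 5) -> list[str]:
        return self.entries[-k:]

def query_llm(prompt: str) -> str:
    # Placeholder for whatever model would actually be called.
    return f"[model output for a prompt of {len(prompt)} chars]"

def passes_fact_check(answer: str) -> bool:
    # Placeholder heuristic: e.g. cross-check claims against a retrieval source.
    return True

def respond(user_input: str, memory: MemoryStore) -> str:
    context = "\n".join(memory.recall())
    answer = query_llm(f"Context:\n{context}\n\nUser: {user_input}")
    if not passes_fact_check(answer):
        answer = query_llm(f"Revise this answer; it failed a fact check:\n{answer}")
    memory.add(f"User: {user_input}\nAssistant: {answer}")
    return answer
```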