Submitted by Gortanian2 t3_123zgc1 in singularity
SoylentRox t1_jdyhulw wrote
Fine, let's spend a little effort debunking this:
From:
https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec
Intelligence is situational — there is no such thing as general intelligence.
This is empirically false and not worth debating. Current SOTA AI uses very simplistic algorithms and is nonetheless general, and slight changes to those algorithms produce large increases in intelligence.
This is so wrong I will not bother with the rest of the claims, this author is unqualified
From:
Extraordinary claims require extraordinary evidence
- You could have "debunked" nuclear fission in 1943 with this argument and sat comfortably, unworried, in the nice Japanese city of Hiroshima. Sometimes you're just wrong.
Good ideas become harder to find
This is true but misleading. We have many good ideas: fusion rocket engines, flying cars, genetic treatments to disable aging, nanotechnology. As it turns out, the implementation is insanely complicated and hard. Sometimes AI can do much better than we can.
Bottlenecks
True but misleading. Each bottleneck can be reduced at an exponential rate. For example, if we actually had AGI right now, we'd be building as many robots and AI accelerator chips as we physically could, while also increasing the rate of production exponentially.
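The compounding-production argument above can be sketched with a toy model (the starting capacity and number of doublings are arbitrary illustrative assumptions, not estimates):

```python
# Toy model of exponentially compounding production (illustrative only).
# Assumption: newly built capacity (robots, chip fabs) can itself be used
# to build more capacity, so output doubles every period until some
# resource limit is hit.

def capacity_over_time(initial, doubling_periods):
    """Return capacity after each period under pure doubling growth."""
    return [initial * 2**t for t in range(doubling_periods + 1)]

# Starting from 1,000 units, ten doublings already exceeds a million units.
print(capacity_over_time(1000, 10)[-1])  # 1024000
```

The point is only that a bottleneck shrinks quickly when the thing relieving it grows multiplicatively rather than additively.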
Physical constraints
True but misleading; the solar system has a lot of resources. Growth will stop only when we have exhausted the entire solar system of accessible solid matter.
Sublinearity of intelligence growth from accessible improvements
True but again misleading. Even if intelligence gains are sublinear, we can still build enormous brains, and there are many tasks, mentioned above, that individual humans are too stupid to make short-term progress on, so investors won't pay to develop them.
So even if an AGI system has 1 million times the computational power of a human being but is "only" 100 times as smart, working 24 hours a day it could still produce working examples of many technologies on short timelines: figure out biology and aging in 6 months of frenetic round-the-clock experiments using millions of separate robots, or figure out a fusion rocket engine by building 300,000 prototypes of fusion devices at various scales. And so on.
Human beings are not capable of doing this; no human alive can even hold in their head the empirical results of 300,000 engine builds and field geometries. So various humans have to "summarize" all the information, and they will get it wrong.
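The throughput claim above can be made concrete with toy numbers (every figure here is an illustrative assumption taken from the argument, not a measurement): even with sublinear returns on compute, 100x capability, round-the-clock operation, and a million parallel experiments multiply together.

```python
# Toy comparison of research throughput (all numbers are illustrative
# assumptions from the argument above, not measurements).
human_capability = 1.0            # baseline
human_hours_per_day = 8           # a human workday

agi_capability = 100.0            # "only" 100x as smart despite 1e6x compute
agi_hours_per_day = 24            # no sleep, no breaks
parallel_experiments = 1_000_000  # robots running experiments concurrently

human_throughput = human_capability * human_hours_per_day
agi_throughput = agi_capability * agi_hours_per_day * parallel_experiments

print(agi_throughput / human_throughput)  # 300,000,000x one human
```

Even if each individual factor is argued down by an order of magnitude, the product remains far beyond what any team of humans can match.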
Yomiel94 t1_jdyrrw5 wrote
> This is so wrong I will not bother with the rest of the claims, this author is unqualified
I find these comments pretty amusing. The author you’re referring to is François Chollet, an esteemed and widely published AI researcher whose code you’ve probably used if you’ve ever played around with ML (he created Keras and, as a Google employee, is a key contributor to Tensorflow).
So no, he’s not “unqualified,” and if you think he’s confused about a very basic area of human or machine cognition, you very likely don’t understand his claim, or are yourself confused.
Based on your response, you’re probably a little of both.
SoylentRox t1_jdze8ze wrote
I don't care who he is; his claim doesn't fit measurements he is aware exist.