
Ok_Homework9290 t1_j3p62eo wrote

I've commented this before, and since it's relevant, I'll comment it again (almost verbatim):

Take Metaculus seriously at your own risk. Anyone can make a prediction on that website, and those who do tend to be tech junkies who are generally optimistic about timelines.

To my understanding, most AI/ML expert surveys still put the average AGI arrival year decades from now (mid-century or later), and most individual AI/ML researchers have similar timelines.

Also, I'm a bit skeptical that the amount of progress that's been made in AI the past year (which has been impressive, no doubt) merits THAT much of a shave-off from the February 2022 prediction. Just my thoughts.

25

SoylentRox t1_j3p8ys3 wrote

>most AI/ML expert surveys continue to have an AGI arrival year average of some decades from now/mid-century plus, and the majority of individuals who are AI/ML researchers have similar AGI timelines

You know, when the Manhattan Project was underway, who would you have trusted for a prediction of the first nuke detonation: Enrico Fermi, or some physicist who had merely worked on radioactive materials?

I'm suspicious that any "experts" with valid opinions exist outside of well-funded labs (OpenAI/Google/Meta/Anthropic/Hugging Face, etc.).

They are saying a median of roughly 8 years, which would be 2031.

13

Ok_Homework9290 t1_j3qcmla wrote

>They are saying a median of about ~8 years, which would be 2031.

That's an oddly specific number/year.

Also, remember that people who work at AI companies, as opposed to in academia (for example), tend to hype up their work, which makes their timelines shorter on average. Personally, I give a survey of AI researchers more weight than AI Twitter, which is infested with hype.

1

Thelmara t1_j3s2c1e wrote

> That's an oddly specific number/year.

No, that's the median of a spread, and it's stated with the caveat of "about". That's literally the opposite of "specific".
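
To make that concrete with made-up numbers (purely hypothetical; not the actual figures anyone cited here): the median of a wide spread of guesses can land on a round-sounding value like 8 even when the individual guesses range from 2 to 30 years.

```python
import statistics

# Hypothetical timeline guesses in years (illustrative only;
# not the actual Metaculus or EleutherAI numbers).
predictions = [2, 3, 5, 6, 8, 8, 10, 15, 22, 30]

# The median is the middle of the spread, not a precise forecast.
print(statistics.median(predictions))      # -> 8.0
print(min(predictions), max(predictions))  # -> 2 30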

5

will-succ-4-guac t1_j3rm0sa wrote

Source on that 8-year number? It would certainly be a compelling argument if a random sample of exclusively well-funded AI PhDs had a median prediction of 8 years.

1

SoylentRox t1_j3rovna wrote

It's just the opinions on the EleutherAI Discord. Arguably, weak general AI will be here in 1-2 years.

My main point is that the members I'm referring to all live in the Bay Area and work for Hugging Face and OpenAI. Their opinion is more valid than, say, that of a 60-year-old professor in the artificial intelligence department at Carnegie Mellon.

2

will-succ-4-guac t1_j3rmq84 wrote

> Also, I'm a bit skeptical that the amount of progress that's been made in AI the past year (which has been impressive, no doubt) merits THAT much of a shave-off from the February 2022 prediction. Just my thoughts.

Correct, and if anything, the mere fact that the prediction has changed by over a decade in the span of 12 months is strong evidence of exactly what you’re saying — this prediction is made by people who aren’t really in the know.

If the weatherman told you it was going to be 72 and sunny tomorrow, and then when you woke up he said it's actually going to be -15 and a blizzard, you would probably think: hmmm, maybe this guy doesn't know what the fuck he's talking about.

2

arindale t1_j3tq5wt wrote

I agree with all of your comments. And to add what I believe to be a more important point, the Metaculus question defines weakly general AI as (heavily paraphrased):

- Pass the Turing Test (text prompt)

- Achieve human-level written language comprehension on the Winograd Schema Challenge

- Achieve a human-level result on the math section of the SAT

- Play the Atari game Montezuma's Revenge at a human level

We already have separate narrow AIs that can do these tasks at human or nearly human levels. We even have more general AIs that can do several of these tasks at a near-human level. I wouldn't be overly surprised if, by the end of 2023, we have a single AI that can do all of these tasks (and many other human-level tasks). But even so, many people wouldn't call it general AI.

Not trying to throw shade at Metaculus here. They had to define general AI narrowly, with concrete, measurable objectives. I just personally disagree with where they drew the line.

2

arisalexis t1_j3q2cp0 wrote

Sure, I'll trust the opinion of an unknown redditor with no links. If you do decide to post a link, make sure the survey was conducted after Stable Diffusion and ChatGPT.

0