TemetN

TemetN t1_isiez70 wrote

This is kind of a what it says on the tin case, basically one of the biggest problems being tackled in this field is successfully targeting and integrating treatments. As a result, even if this method turns out to be problematic down the line, it has a better than average chance of being significant.

1

TemetN t1_iryiise wrote

I'd be interested in this too - so far the responses seem to be either short, or from people citing something other than when they joined. You might try going back and reading through the old yearly prediction threads, though; I found them interesting in some ways.


I've only been here since around last Christmas, and the only real changes are a slight update to a nearer date for weak AGI, and an acknowledgement that we're likely to see the ramp up to the singularity before volitional AGI.

7

TemetN t1_irfp5ia wrote

Good question. My timelines for this are much slower than for AGI, simply because I don't see a lot of progress being made (or focus on it), though there have been a lot of arguments for emergent intelligence. I still tend to think we won't see this until we actually start attempting to develop it, but I don't think we can rule it out either way.

4

TemetN t1_iqwuq0b wrote

I'm one of the early predictors of AGI, and I still don't expect a rapid takeoff - even if we do achieve something in a lab, that doesn't mean broad adoption follows, and further, the benefits of creating such things have to cycle through the economy. I will note, though, that modern predictions have been consistently more pessimistic than results (see the ML surveys by Bostrom et al, or various predicted benchmarks, such as the big miss on the MATH dataset).


All that said, the earlier responses to you are right - the modern take on education is unhealthy. It used to be acknowledged that an educated populace was a public good in and of itself. Continued learning should be undertaken simply to improve yourself (and the world around you).

1