sticky_symbols t1_j2cz1ib wrote

Anyone could be wrong. But I'm about as expert as anyone on AI and brain function, and by my estimate the first is catching up to the second very fast.

I'm not sure that's a good thing. At this rate the singularity is likely to wipe us out.

1

sticky_symbols t1_j22m6ap wrote

None.

Nobody cares enough about this sub to create a bot that integrates ChatGPT with an account.

Some time soon, we might see a lot of those on everything, once the code becomes commonplace enough for amateurs to run just for fun.

3

sticky_symbols t1_j1i5gpt wrote

Reply to comment by PoliteThaiBeep in Hype bubble by fortunum

I agree that we don't know when. The point people often miss is that we have high uncertainty in both directions. It could happen sooner than the average guess, as well as later. We now have hardware with around the same processing power as a human brain (depending on what aspects of brain function you measure), so it's all about algorithms.
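For scale, here's a back-of-envelope version of that comparison. All numbers are rough, commonly cited order-of-magnitude estimates, not figures from the comment, and the conclusion hinges entirely on what you choose to count:

```python
# Back-of-envelope comparison of brain "compute" to ML hardware.
# All numbers are rough order-of-magnitude estimates; the answer
# depends heavily on what aspects of brain function you measure.

neurons = 8.6e10            # ~86 billion neurons in a human brain
synapses_per_neuron = 1e3   # order-of-magnitude synapse count per neuron
avg_firing_rate_hz = 10     # rough average spike rate

# Treat each synaptic event as one "operation" per spike.
brain_ops_per_s = neurons * synapses_per_neuron * avg_firing_rate_hz

accelerator_flops = 1e15    # ~1 petaFLOP/s, low precision, modern ML GPU

print(f"brain:   ~{brain_ops_per_s:.0e} synaptic ops/s")
print(f"one GPU: ~{accelerator_flops:.0e} FLOP/s")
# Both land near 1e15 under these assumptions, which is why
# "same ballpark as a brain" claims depend on the accounting.
```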

6

sticky_symbols t1_j1giyvw wrote

Reply to comment by fortunum in Hype bubble by fortunum

I agree with all of this. But the definition of outrageous is subjective. Is it more outrageous to claim that we're on a smooth path to AGI, or to claim that we will suddenly hit a brick wall, when progress has visibly accelerated in recent years? You have to get into the details to decide. I'd say Marcus and co. are about half right. But the reasons are too complex to state here.

10

sticky_symbols t1_j1gi6fn wrote

Reply to comment by fortunum in Hype bubble by fortunum

Here we go. This comment has enough substance to discuss. Most of the talk in this sub isn't deep or well informed enough to really count as discussion.

Perceptual and motor networks are making progress almost as rapidly as language models. If you think those are important (and I agree that they can only help), they are probably being integrated with language models right now, and certainly will be soon.

I've spent a career studying how the human brain works. I'm convinced it's not infinitely more complex than current networks, and the computational motifs needed to get from where we are to brain-like function are already understood by a handful of people; they merely need to be integrated and iterated upon.

My median prediction is ten years to full superhuman AGI, give or take. By that I mean something that makes better plans in any domain than a single human can. That will slowly or quickly accelerate progress as it's applied to building better AGI, and then we have the intelligence-explosion version of the singularity.

At which point we all die, if we haven't somehow solved the alignment problem by then. If we have, we all go on permanent vacation and dream up awesome things to do with our time.
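One way to make "intelligence explosion" precise is a standard toy model (my gloss, not anything the comment itself commits to): let capability C(t) feed back into its own growth rate,

$$\frac{dC}{dt} = k\,C^p.$$

For p = 1 this is ordinary exponential growth, $C(t) = C_0 e^{kt}$. For any p > 1 the solution

$$C(t) = C_0\left(1 - (p-1)\,k\,C_0^{\,p-1}\,t\right)^{-1/(p-1)}$$

diverges at the finite time $t^* = 1/\big((p-1)\,k\,C_0^{\,p-1}\big)$, which is the mathematical sense of "singularity." Whether AGI improving AGI behaves more like p = 1 or p > 1 is exactly the "slowly or quickly" question.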

28

sticky_symbols t1_j179w2x wrote

The AI in the movie Her is fully sentient. It is smarter than humans in every way.

I don't think people can fully fall in love with something way dumber than they are.

So AI will be charming to the very young and the very old. Until the AIs take over.

I could be wrong, though: desperate people may fall in love with non-sentient systems that are just somewhat better than ChatGPT with a long-term memory.

7

sticky_symbols t1_izy81b1 wrote

Interesting thought. I think you might well be right.

I guess having a politician in charge of the world is better than having humanity end. :) But politicians are people who want power, and I find that disturbing. This might be a case where I'd actually prefer a successful actor; they're pretty good at popularity contests, too. For that purpose, being positive and open-minded would matter more than any particular knowledge. And you wouldn't need someone all that smart, just not stupid.

The Rock for god-emperor! Or maybe RDJ can reprise his role as the creator of Ultron...

:)

1

sticky_symbols t1_izy5p7j wrote

I think this is exactly right. And I've read a ton of professional writing on AGI safety.

I think some of the people involved are already thinking this way.

It's actually really nice that both of the clear leaders in AI seem to be genuinely ethical people, the sort you'd like to have ruling the world, if someone has to. However, governments and militaries will probably catch on and take over those labs before they get all the way to AGI.

2

sticky_symbols t1_izy1gup wrote

Sounds like you are discounting the possibility of truly sentient AGI. Most experts are not. The world will change again when it arrives, maybe for the better, maybe for the worse.

You're also not accounting for possible political change. When the rich own the AI that creates everything, I hope we'll legislate to distribute that wealth.

Also, art will become meaningless to some creators, but not to consumers. For us, it will only get better as AI makes it easier to produce. Imagine having more and better AAA games to choose from, to play cooperatively and competitively with other people. I think the meaning of life for many will be based on our accomplishments and communities in virtual worlds.

2