ihateshadylandlords t1_iwh4o9d wrote

I don’t see a lot of those happening, but I’m just guessing too. I think a lot of these predictions will still be in the lab or discontinued by 2030, but the ones that make it out of the lab will be noticeable in the late 2030s.

Edit: I think you can talk to “AI” now. I’ve seen interviews with a GPT3 bot, but it was underwhelming in my opinion.

1

ihateshadylandlords t1_ivov2ns wrote

> Why do people on this sub seem so much more confident in their predictions than everyone else?

I think it’s because this place (like most subreddits) is an echo chamber.

>What is it that people on this sub know that nobody else in STEM is aware of?

Good question. In my opinion, I think people put too much stock into early stage developments. I think there’s a good chance that most of the products/developments that get posted here daily won’t go that far.

1

ihateshadylandlords t1_ivbtle2 wrote

> it seems as if we're approaching the holy grail of regenerative medicine extremely fast.

What is extremely fast in your opinion? I feel like it would take 10 years minimum for this drug to be available to the masses (assuming the FDA approves it and all that other stuff).

12

ihateshadylandlords t1_iv40poj wrote

> I’m not assuming it’ll be sentient, I’m just saying an Oracle ASI is equally as dangerous as one with agency. It MIGHT be sentient. Or it might NOT be sentient, but still dangerous, i.e. the paper clip maximizer scenario.

Meh, the dangers of an ASI can be discussed in another thread. We were initially talking about how an ASI might manifest, so we’re getting off course.

>Like just AGI you mean? Yeah I agree with that of course. But ASI, again, seems short sighted. If Google makes human level AGI, but it’s just as smart as say Einstein, yeah of course they’ll use it to get richer. But if they create something that makes Einstein look like an ant, they’d be foolish to use it in such a way.

Okay. Just don’t be surprised if companies keep doing what they’ve been doing for literally thousands of years and use their products to turn a profit.

1

ihateshadylandlords t1_iv3z7of wrote

> Even if an ASI is an oracle, alignment is still just as much of an issue. It can tell them to do something that sounds completely harmless to even the smartest of humans and even non-ASI AGIs, but in reality lets it out of the box.

You’re assuming the ASI will be sentient. Teams are doing everything to ensure it’s not sentient.

> What do you mean? That's exactly what ASI is. We're talking about something orders of magnitudes more intelligent than Albert Einstein here. A machine like that will be capable of recursively improving its own intelligence at an insane rate and will eventually know how to achieve any goal compatible with the laws of physics in the most efficient way possible for any possible set of constraints. That is basically by definition a magical genie that can do anything in a split second.

Okay. Then the owners will probably use this non-sentient tech to take care of themselves and the rest of us next.

1

ihateshadylandlords t1_iv3rnlh wrote

Who knows if they’ll even let their ASI do the tasks. They might ask it how to do them on their own to ensure the ASI stays an Oracle-like entity and not some runaway genie.

>Why would they want to recoup their investment?

Unless the ASI is a genie that can turn everything around in a split second, they’re most likely going to want to take care of themselves first and everyone else right after that.

2

ihateshadylandlords t1_iv3co1t wrote

>The moment ASI comes online is the moment money loses all of its value

That’s assuming whoever creates it will let it run on its own. There’s a whole subreddit dedicated to why that’s a problem (/r/controlproblem). I really doubt the founders and employees will let their ASI run wild. For anyone to let their product run wild without recouping their investment is silly imo.

1