royalemate357 t1_j9rphfc wrote

I think the biggest danger isn't AIs/AGIs pursuing their own goals/utility functions that involve turning all humans into paperclips. The "predict-the-next-word" AIs that are currently the closest thing to AGI aren't capable of recursively self-improving arbitrarily, nor is there evidence, AFAIK, that they pursue their own goals.

Instead the danger is in people using increasingly capable AIs to pursue their own goals, which may or may not be benign. Like, the same AIs that can cure cancer can also create highly dangerous bioweapons or nanotechnology.

45

wind_dude t1_j9rvfo5 wrote

That's just how tools are used; it has been since the dawn of time. You just want to be on the side with the largest club, warmest fire, etc.

10

SleekEagle t1_j9tttxr wrote

Until the tools start exhibiting behavior that you didn't predict and in ways that you have no control over. Not taking an opinion on which side is "right", just saying that this is a false equivalence with respect to the arguments that are being made.

6

wind_dude t1_j9up1ux wrote

> Until the tools start exhibiting behavior that you didn't predict and in ways that you have no control over.

LLMs already behave in ways we don't expect. But they are much more than a hop, a skip, a jump, and 27 hypothetical leaps away from being out of our control.

Yes, people will use AI for bad things, but that's not an inherent property of AI, that's an inherent property of humanity.

1

SleekEagle t1_j9vl7r3 wrote

I don't think anyone believes it will be LLMs that undergo an intelligence explosion, but they could certainly be a piece of the puzzle. Look at how much progress has been made in just the past 10 years - imo it's not unreasonable to think that the alignment problem will be a serious concern within the next 30 years or so.

In the short term, though, I agree that people doing bad things with AI is much more likely than an intelligence explosion.

Whatever anyone's opinion, I think the fact that the views of very smart and knowledgeable people run the gamut is a testament to how much we need to dedicate serious resources to ethical AI, beyond the disclaimers at the end of every paper noting that models may contain biases.

2

shoegraze t1_j9s22kq wrote

Yep, if we die from AI it will be from bioterrorism well before we get enslaved by a robot army. And the bioterrorism stuff could happen even before "AGI" rears its head.

10

dentalperson t1_j9t6zxx wrote

> can also create highly dangerous bioweapons

The example EY gave in the podcast was a bioweapon attack. Unsure what kind of goal the AI had in this case, but maybe that was the point:

> But if it's better than you at everything, it's better than you at building AIs. That snowballs. It gets an immense technological advantage. If it's smart, it doesn't announce itself. It doesn't tell you that there's a fight going on. It emails out some instructions to one of those labs that'll synthesize DNA and synthesize proteins from the DNA and get some proteins mailed to a hapless human somewhere who gets paid a bunch of money to mix together some stuff they got in the mail in a vial. Like smart people will not do this for any sum of money. Many people are not smart. Builds the ribosome, but the ribosome that builds things out of covalently bonded diamondoid instead of proteins folding up and held together by Van der Waals forces, builds tiny diamondoid bacteria. The diamondoid bacteria replicate using atmospheric carbon, hydrogen, oxygen, nitrogen, and sunlight. And a couple of days later, everybody on earth falls over dead in the same second. That's the disaster scenario if it's as smart as I am. If it's smarter, it might think of a better way to do things. But it can at least think of that if it's relatively efficient compared to humanity, because I'm in humanity and I thought of it.

2

crt09 t1_j9tncbf wrote

"Unsure what kind of goal the AI had in this case"

tbf pretty much any goal that involves doing something on planet Earth can be interrupted by humans, so getting rid of them probably reduces the probability of being interrupted. I think it's a jump to assume it'll be that smart, or that the alignment goal we end up using won't have an easier path to the goal than accepting that interruptibility, but the alignment issue is that it wishes it were that smart and could think of an easier way around it.

3