SoylentRox

SoylentRox t1_j8gypbq wrote

As long as you have some minimum number of people (specialized skills) and enough manufacturing machinery, this won't happen.

I agree there are scenarios where humans might die, but I just don't feel you are arguing in good faith. "Despite many decades of trying"? What are you talking about? There was Biosphere 2. And...

What else? Literally, when has this ever been tried? The ISS is far too small to attempt a closed-loop life support system. So I know of zero examples other than one small cult effort that hit problems: as I recall, CO2 was outgassing from the concrete pad the biosphere was built on, there was no automation (subsistence farming is very labor intensive), and there were no genetically engineered crops to help (they hadn't been invented yet)...

1

SoylentRox t1_j8gydo4 wrote

>Finally, something we can agree on at least.

Yeah. It's quite grim actually if you think about what even just sorta useful AGI would allow you to do. By "sorta useful" I mean "good enough to automate jobs that ordinary people do, but not everything". So mining and trucking and manufacturing and so on.

It would be revolutionary. For warfare. The reason you can't win a world war right now is that you can't dig enough bunkers to house your entire population in separate shelters (limiting the damage any one nuke can do), and you can't build enough anti-missile systems to stop most of the nuclear bombardment from getting through.

And then, well, you fight the whole world. And win. "Merely" having AI able to do ordinary people's tasks gives you essentially exponential amounts of production capacity. You're limited only by how much land you have to cover in factories.

Note by "you" I don't mean necessarily the USA. With weapons like this, anyone can be a winner.

2

SoylentRox t1_j8gx69g wrote

Right. I know I am correct and simply don't think you have a valid point of view.

Anyways it doesn't matter. Neither of us controls this. What is REALLY going to happen is an accelerating race, where AGI gets built basically the first moment it's possible at all. And this may turn into outright warfare. The easiest way to deal with a hostile AI is to build your own controllable AI and bomb it.

0

SoylentRox t1_j8gwv4n wrote

But why can't we just order robots to grow whatever we need?

I just don't see it. Biosphere 2 was small scale and had limited reserves of oxygen, etc. A sealed biodome on Earth can still pull in oxygen from the Earth's atmosphere even if that atmosphere is sterile, full of bioweapons, radioactive, etc.

1

SoylentRox t1_j8gk6d1 wrote

How dystopic? An unfair world, but everyone gets universal health care and food and so on? Or it's not super great, like the video games with lots of habitation pods and nutrient paste? Or S-risk?

Note I don't "think it is". I know there a range of good and bad outcomes, and "we all die" or "we live but are tortured" fit in that area of "bad outcomes". I am just explaining the percentage of bad outcomes that would be acceptable.

Delaying things until the bad outcome risk is 0 is also a bad outcome.

1

SoylentRox t1_j8gjv91 wrote

You think we would just die instead of rapidly genetically engineering our way out of any imbalances? Any protein or nutrient we need, just have bacteria make it.

The bubble boy lived, so it's not like this isn't possible. People have lived for years on meal-replacement drinks with all-synthetic ingredients. What precisely would kill humans?

I am assuming the biosphere collapses and the Earth is as sterile as the Moon, but large numbers of humans still have plenty of money and resources, plus the genetic code, in computer files, for everything that matters.

1

SoylentRox t1_j8gj6go wrote

It's the cause of 90 percent of deaths. But obviously I implicitly meant treatment for all non-instant deaths, plus rapid development of cortical stacks or similar mind-copying technology to at least prevent friends and loved ones from missing those killed instantly.

And again, I said relative risk. I would be willing to accept up to a 0.80 percent increase in the chance of all of humanity dying if it meant AGI 1 year sooner. 10 years sooner? 8 percent extra risk is acceptable, and so on.

Note I consider both humans dying "natural" and a superior intelligence killing everyone "natural" so all that matters is the risk.
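
A tiny Python sketch of what I mean (the 0.8 percent per year figure is the one above; treating the scaling as exactly linear is my reading of "and so on"):

```python
# Acceptable extra extinction risk as a function of years of acceleration.
# 0.8% per year comes from the comment above; exact linear scaling is an
# assumption for illustration.

def acceptable_extra_risk(years_sooner: float) -> float:
    return 0.008 * years_sooner

for years in (1, 5, 10):
    print(f"AGI {years} year(s) sooner -> up to "
          f"{acceptable_extra_risk(years):.1%} extra extinction risk accepted")
```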

1

SoylentRox t1_j8ghast wrote

I am saying it's an acceptable risk to take a 0.5 percent chance of being wiped out if it lets us completely eliminate deaths from natural causes 1 year earlier.

Which is going to happen. Someone will cure aging. (Assuming humans are still alive and still able to accomplish things.) But doing it probably requires beyond-human ability.

2

SoylentRox t1_j8fyxct wrote

Reply to comment by jamesj in Altman vs. Yudkowsky outlook by kdun19ham

Have you considered that delaying AGI also has an immense cost?

Each year, the world loses 0.84% of everyone alive.

So if delaying AGI by 1 year reduces the chance of humanity dying by 0.5%, for example, it's not worth the cost. This is because an extra 0.84% of people have to die while more AGI safety work is done, people who wouldn't have died if advances in medicine and nanotechnology had arrived 1 year sooner, and the expected value of removing an extra 0.5% chance of humanity being wiped out is not enough to offset that.

(since "humanity wiped out" is what happens whenever any human dies, from their perspective)

Note this is true even if it takes 100 years to get from AGI to (aging meds, nanotechnology), because it's still 1 year sooner.
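
To make the arithmetic concrete, a quick Python sketch (the 0.84% death rate and 0.5% risk reduction are the figures above; the world population is an assumed round number):

```python
# Expected-deaths comparison for delaying AGI by one year.
# 0.84% annual death rate and 0.5% risk reduction come from the comment;
# the population figure is an assumed round number.

population = 8_000_000_000          # assumed world population
annual_death_rate = 0.0084          # fraction of everyone alive who dies each year
risk_reduction = 0.005              # extinction-risk reduction bought by a 1-year delay

# Cost of the delay: people who die during the extra year and who, on this
# argument, could have been saved by AGI-derived medicine.
cost_in_expected_deaths = annual_death_rate * population

# Benefit of the delay: reduced probability of losing everyone.
benefit_in_expected_deaths = risk_reduction * population

print(f"expected deaths from delaying: {cost_in_expected_deaths:,.0f}")
print(f"expected deaths averted:       {benefit_in_expected_deaths:,.0f}")
print("delay worth it?", benefit_in_expected_deaths > cost_in_expected_deaths)
```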

15

SoylentRox t1_j8eh4tu wrote

My point is that scale matters. A 3D multiplayer game was "known" to be possible in the 1950s. They had (mostly offline-rendered) computer graphics. They had computer networks. There was nothing in the idea that couldn't be done, but in practice it was nearly completely impossible. The only thing remotely similar cost more than the entire Manhattan Project, and they were playing that 3D game in real life: https://en.wikipedia.org/wiki/Semi-Automatic_Ground_Environment

If you enthused about future game consoles in the 1950s, you'd get blown off. Similarly, we have heard about the possibility of AI for about that long, and suddenly, boom, the dialogue of HAL 9000, for instance, is actually quite straightforward, and we could duplicate EXACTLY the functions of that AI right now, no problem. Just take a transformer network, add some stream control characters to send commands to ship systems, and add a summary of the ship's system status to the memory it sees each frame. Easy. (Note this would be dangerous and unreliable... just like the movie.)
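
Roughly something like this Python sketch; `query_model`, `get_ship_status`, `dispatch_command`, and the `<cmd>...</cmd>` token convention are all hypothetical stand-ins, not any real API:

```python
# Minimal sketch of the "HAL from a transformer" loop described above.
# All functions and the <cmd> convention are placeholders.

import re

def query_model(prompt: str) -> str:
    # Stand-in for a call to an actual transformer/LLM service.
    return "I'm sorry, Dave. <cmd>keep_pod_bay_doors_closed</cmd>"

def get_ship_status() -> str:
    # Stand-in for a text summary of ship telemetry.
    return "O2 nominal | reactor nominal | pod bay doors closed"

def dispatch_command(command: str) -> None:
    # Stand-in for forwarding a parsed command to ship systems.
    print(f"[SHIP] executing: {command}")

def control_step(crew_message: str) -> str:
    # Each "frame": prepend current system status, query the model,
    # strip out any <cmd>...</cmd> spans, and send them to the ship.
    prompt = f"Ship status: {get_ship_status()}\nCrew: {crew_message}\nHAL:"
    reply = query_model(prompt)
    for command in re.findall(r"<cmd>(.*?)</cmd>", reply):
        dispatch_command(command)
    # Whatever is left is HAL's spoken dialogue.
    return re.sub(r"<cmd>.*?</cmd>", "", reply).strip()

if __name__ == "__main__":
    print(control_step("Open the pod bay doors, HAL."))
```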

Also note that in the 1950s there was no guarantee that the number of vacuum tubes you would need to support a 3D game (hundreds of millions) would EVER be cheap enough for ordinary consumers to play one. The transistor had barely been invented.

For decades, humans thought an AGI might take centuries of programming effort.

11

SoylentRox t1_j8efx92 wrote

Many algorithms don't show a benefit unless used at large scales. Maybe "discover" is the wrong word: if your ML researcher pool has 10,000 ideas but only 3 are good, you need a lot of compute to benchmark all the ideas to find the good ones. A LOT of compute.

Arguably you "knew" about the 3 good ideas years ago but couldn't distinguish them from the rest. So no, you really didn't know.
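
As a back-of-the-envelope illustration in Python (the 10,000 and 3 are the numbers above; the per-idea benchmark cost and GPU price are made-up assumptions):

```python
# Rough cost of screening every idea to find the good ones.
# 10,000 ideas / 3 good ones come from the comment; the run cost and
# GPU pricing below are illustrative guesses only.

ideas = 10_000
good_ideas = 3
gpu_hours_per_benchmark = 500      # assumed: one moderate training run per idea
dollars_per_gpu_hour = 2.0         # assumed cloud price

total_gpu_hours = ideas * gpu_hours_per_benchmark
total_cost = total_gpu_hours * dollars_per_gpu_hour

print(f"GPU-hours to screen every idea: {total_gpu_hours:,}")
print(f"Cost at ${dollars_per_gpu_hour}/GPU-hour: ${total_cost:,.0f}")
print(f"Cost per good idea found: ${total_cost / good_ideas:,.0f}")
```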

Also, transformers are a recent discovery (2017); it took compute, and software frameworks that support complex NN graphs, to even develop the idea.

7

SoylentRox t1_j8edo45 wrote

I know this, but I am not sure your assumptions are quite accurate. When you ask the machine to "take this program and change it to do this", often your request is unique, but it is similar enough to previous training examples that the model can emit the tokens of the edited program, and it will work.

It has genuine encoded "understanding" of language or this wouldn't be possible.

The point is, it may all be a trick, but it's a USEFUL one. You could in fact connect it to a robot and request it to do things in a variety of languages, and it will be able to reason out the steps and order the robot to do them. Google has demoed this. It WORKS. Sure, it isn't "really" intelligent, but in some ways it may be intelligent the same way humans are.

You know your brain is just "one weird trick", right? It's a bunch of cortical columns crammed in, plus a few RL inputs from the hardware. It's not really intelligent either.

7

SoylentRox t1_j8e8c8p wrote

Can you go into more detail?

In this case, there is more than 1 input that causes acceleration.

Set 1:

(1) more compute
(2) more investor money
(3) more people working on it

Set 2:

(A) existing AI making better designs of compute

(B) existing AI making money directly (see chatGPT premium)

(C) existing AI substituting for people by being usable to write code and research AI

Set 1 existed in the 2010-2020 era. AI wasn't good enough to really contribute to Set 2, and is only now becoming good enough.

So you have two separate sets of effects leading to an exponential amount of progress. How do you represent this mathematically? It looks like you need several coupled functions.
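
One toy way to write it down, purely as an illustration (every coefficient here is an arbitrary guess): let capability depend on compute, money, and researchers, and let capability feed back into all three. A minimal Euler-step simulation in Python:

```python
# Toy model of the two effect sets above. Set 1 inputs (compute, money,
# researchers) grow on their own; Set 2 is capability feeding back into
# each of them. Every coefficient is an arbitrary illustrative guess.

def simulate(years: int = 20, dt: float = 0.1) -> float:
    compute, money, researchers, capability = 1.0, 1.0, 1.0, 0.1
    for _ in range(int(years / dt)):
        # Set 1: exogenous growth in the inputs.
        d_compute = 0.3 * compute
        d_money = 0.2 * money
        d_researchers = 0.1 * researchers
        # Set 2: capability improves chip design, earns money, writes code.
        d_compute += 0.2 * capability * compute
        d_money += 0.2 * capability * money
        d_researchers += 0.2 * capability * researchers
        # Capability grows with all three inputs.
        d_capability = 0.05 * (compute * money * researchers) ** (1 / 3)
        compute += d_compute * dt
        money += d_money * dt
        researchers += d_researchers * dt
        capability += d_capability * dt
    return capability

if __name__ == "__main__":
    print(f"toy capability after 20 years: {simulate():.2f}")
```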

7