SoylentRox t1_j8hmnw5 wrote
Reply to comment by nohwan27534 in Would an arcology be conceivably possible? by peregrinkm
You wouldn't use domes. Depending on how hostile the surface is, you'd either build many underground bunkers connected by tunnels with logistics transport, or many sealed surface buildings. Domes don't provide radiation or blast protection.
SoylentRox t1_j8hmhvc wrote
Reply to comment by nohwan27534 in Would an arcology be conceivably possible? by peregrinkm
You would use grow lamps brighter than the sun and genetically modify the crop to store calories, make B6, and so on.
SoylentRox t1_j8h5omo wrote
Reply to comment by nohwan27534 in Would an arcology be conceivably possible? by peregrinkm
Go check the numbers on spirulina. The math says you need a few LITERS of growing algae per person. Even if it's 10 times less efficient than that, it's just not the problem you think it is.
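A quick back-of-envelope sketch of that math (every constant here is an assumed figure for illustration, and the answer swings with whatever volumetric yield you plug in, so substitute numbers you trust):

```python
# Back-of-envelope: liters of algae culture needed to feed one person.
# All constants are illustrative assumptions, not measured values.

KCAL_PER_DAY = 2000    # assumed calorie requirement per person per day
KCAL_PER_G_DRY = 3.5   # assumed energy density of dry spirulina (kcal/g)

grams_per_day = KCAL_PER_DAY / KCAL_PER_G_DRY

# Reported volumetric productivity varies enormously with reactor design,
# so sweep a range (grams of dry biomass per liter of culture per day):
for yield_g_per_l_day in (0.5, 1.0, 5.0, 20.0):
    liters = grams_per_day / yield_g_per_l_day
    print(f"yield {yield_g_per_l_day:5.1f} g/L/day -> {liters:6.0f} L of culture per person")
```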
SoylentRox t1_j8h3czz wrote
Reply to comment by nohwan27534 in Would an arcology be conceivably possible? by peregrinkm
You obviously have to use automation, plus high-pressure water and heat to break down the sewage and remove odors.
Keep in mind the scale I am imagining: a million-plus people, not 4.
SoylentRox t1_j8h1cu8 wrote
Reply to comment by greenman5252 in Would an arcology be conceivably possible? by peregrinkm
Why? Like are you claiming you can't grow nutrients?
SoylentRox t1_j8gypbq wrote
Reply to comment by DoktoroKiu in Would an arcology be conceivably possible? by peregrinkm
As long as you have some minimum number of people (specialized skills) and enough manufacturing machinery, this won't happen.
I agree there are scenarios where humans might die, but I just don't feel you are arguing in good faith. "Despite many decades of trying"? What are you talking about? There was Biosphere 2. And...
What else? When has this ever actually been tried? The ISS is far too small to attempt a closed-loop life support system. So I know of zero examples other than one small, cult-like effort that hit problems because, as I recall, CO2 was being absorbed by the exposed concrete the biosphere was built on (masking a steady oxygen decline), there was no automation (subsistence farming is very labor intensive), and there were no genetically engineered crops to help (they hadn't been invented yet)...
SoylentRox t1_j8gydo4 wrote
Reply to comment by BigZaddyZ3 in Altman vs. Yudkowsky outlook by kdun19ham
>Finally, something we can agree on at least.
Yeah. It's quite grim actually if you think about what even just sorta useful AGI would allow you to do. By "sorta useful" I mean "good enough to automate jobs that ordinary people do, but not everything". So mining and trucking and manufacturing and so on.
It would be revolutionary for warfare. The reason you can't win a world war today is that you can't dig enough bunkers to house your entire population in dispersed shelters (limiting the damage any one nuke can do), and you can't build enough anti-missile systems to stop most of a nuclear bombardment from getting through.
Solve that, and you fight the whole world. And win. "Merely" having AI able to do ordinary people's jobs gives you essentially exponential production capacity; the limit becomes how much land you have for a country covered in factories.
Note that by "you" I don't necessarily mean the USA. With weapons like this, anyone can be a winner.
SoylentRox t1_j8gx69g wrote
Reply to comment by BigZaddyZ3 in Altman vs. Yudkowsky outlook by kdun19ham
Right. I know I am correct and simply don't think you have a valid point of view.
Anyways, it doesn't matter. Neither of us controls this. What is REALLY going to happen is an accelerating race, where AGI gets built basically the first moment it's possible at all. And this may turn into outright warfare: the easiest way to deal with a hostile AI is to build your own controllable AI and bomb it.
SoylentRox t1_j8gwy3x wrote
Reply to comment by greenman5252 in Would an arcology be conceivably possible? by peregrinkm
Why would you die of hunger? Just grow more crops in vertical farms. Biosphere 2 had a small growing area and no robotics.
SoylentRox t1_j8gwv4n wrote
Reply to comment by pete_68 in Would an arcology be conceivably possible? by peregrinkm
But why can't we just order robots to grow whatever we need?
I just don't see it. Biosphere 2 was small-scale and had limited reserves of oxygen, etc. A sealed biodome on Earth can still pull in oxygen from the Earth's atmosphere even if that atmosphere is sterile, full of bioweapons, radioactive, etc.
SoylentRox t1_j8gknwx wrote
Reply to comment by pete_68 in Would an arcology be conceivably possible? by peregrinkm
Please don't just ignore what I said. HOW would we die? Assume we also have better robotics.
SoylentRox t1_j8gk6d1 wrote
Reply to comment by BigZaddyZ3 in Altman vs. Yudkowsky outlook by kdun19ham
How dystopic? An unfair world, but one where everyone gets universal health care and food and so on? Not super great, like the video games with lots of habitation pods and nutrient paste? Or S-risk?
Note I don't "think it is". I know there's a range of good and bad outcomes, and "we all die" or "we live but are tortured" fit in that area of "bad outcomes". I am just spelling out what percentage of bad-outcome risk would be acceptable.
Delaying things until the bad outcome risk is 0 is also a bad outcome.
SoylentRox t1_j8gjv91 wrote
Reply to comment by pete_68 in Would an arcology be conceivably possible? by peregrinkm
You think we would just die instead of rapidly genetically engineering our way out of any imbalances? Any protein or nutrient we need, just have bacteria make it.
The bubble boy lived, so it's not like this isn't possible. People have lived on meal replacement drinks with all-synthetic ingredients for years. What precisely would kill humans?
I am assuming the biosphere collapses and the Earth is as sterile as the Moon, but large numbers of humans have plenty of money and resources, plus the genetic code for everything that matters stored in computer files.
SoylentRox t1_j8gji4k wrote
Reply to comment by pinkfootthegoose in Would an arcology be conceivably possible? by peregrinkm
Would the solution, if it came to that, be massive solar arrays along the now-uninhabitable equator (maintained by remote-controlled robots, or by workers at night or in air-conditioned suits) and vertical farms at cooler latitudes?
SoylentRox t1_j8gj6go wrote
Reply to comment by BigZaddyZ3 in Altman vs. Yudkowsky outlook by kdun19ham
It's the cause of 90 percent of deaths. But obviously I implicitly meant treatment for all non-instant deaths, plus rapid development of cortical stacks or similar mind-copying technology to at least keep friends and loved ones from losing those killed instantly.
And again, I said relative risk. I would be willing to accept up to a 0.80 percent increased chance of all of humanity dying if it meant AGI 1 year sooner. 10 years sooner? 8 percent extra risk is acceptable, and so on.
Note I consider both humans dying "natural" and a superior intelligence killing everyone "natural", so all that matters is the risk.
SoylentRox t1_j8ghast wrote
Reply to comment by BigZaddyZ3 in Altman vs. Yudkowsky outlook by kdun19ham
I am saying it's an acceptable risk to take a 0.5 percent chance of being wiped out if it lets us completely eliminate natural-cause deaths for humans 1 year earlier.
Which is going to happen. Someone will cure aging (assuming humans are still alive and still able to accomplish things). But doing it probably requires beyond-human ability.
SoylentRox t1_j8gf381 wrote
Reply to comment by BigZaddyZ3 in Altman vs. Yudkowsky outlook by kdun19ham
It makes perfect sense; you are just valuing outcomes you may not live to witness.
SoylentRox t1_j8fyxct wrote
Reply to comment by jamesj in Altman vs. Yudkowsky outlook by kdun19ham
Have you considered that delaying AGI also has an immense cost?
Each year, the world loses 0.84% of everyone alive.
So if delaying AGI by 1 year reduces the chance of humanity dying by 0.5%, for example, it's not worth the cost. An extra 0.84% of people have to die while more AGI safety work is done, people who wouldn't have died if advances in medicine and nanotechnology had arrived 1 year sooner, and the expected value of avoiding an extra 0.5% chance of humanity being wiped out is not enough to offset that.
(since "humanity wiped out" is what happens whenever any human dies, from their perspective)
Note this is true even if it takes 100 years to get from AGI to (aging meds, nanotechnology), because it's still 1 year sooner.
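A minimal sketch of that expected-value comparison (this is just the framing above in code; it weights every death equally and treats the numbers from this comment as assumptions):

```python
# Expected deaths: delay AGI by 1 year vs. don't.
# Simplification: extinction deaths and ordinary deaths weighted equally.

population = 8e9             # assumed world population
annual_death_rate = 0.0084   # fraction of everyone alive dying per year (from above)
risk_reduction = 0.005       # extinction risk removed by 1 extra year of safety work

cost_of_delay = annual_death_rate * population   # extra deaths during the delay
benefit_of_delay = risk_reduction * population   # expected deaths averted

print(f"expected deaths caused by a 1-year delay: {cost_of_delay:.2e}")
print(f"expected deaths averted by that delay:    {benefit_of_delay:.2e}")
print("delay worth it?", benefit_of_delay > cost_of_delay)
```

With these numbers the delay costs about 6.7e7 expected deaths against 4.0e7 averted, which is the point of the comment.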
SoylentRox t1_j8eh4tu wrote
Reply to comment by genericrich in Anthropic's Jack Clark on AI progress by Impressive-Injury-91
My point is that scale matters. A 3D multiplayer game was "known" to be possible in the 1950s. They had (mostly offline-rendered) graphics. They had computer networks. There was nothing in the idea that couldn't be done, but in practice it was nearly completely impossible. The only thing remotely similar cost more than the entire Manhattan Project, and they were playing that 3D game in real life. https://en.wikipedia.org/wiki/Semi-Automatic_Ground_Environment
If you enthused about future game consoles in the 1950s, you'd get blown off. Similarly, we have heard about the possibility of AI for about that long, and suddenly, boom: the dialogue of HAL 9000, for instance, is actually quite straightforward, and we could duplicate EXACTLY the functions of that AI right now, no problem. Just take a transformer network, add some stream-control characters to send commands to ship systems, and add a summary of the ship's system status to the context it sees each frame. Easy. (Note this would be dangerous and unreliable... just like the movie.)
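A minimal sketch of that loop, with everything hypothetical: `query_llm` stands in for whatever transformer API you'd actually wire up, and the `<cmd>...</cmd>` convention for stream-control characters is made up for illustration:

```python
# Hypothetical HAL-style control loop wrapped around a plain LLM.
# query_llm(), read_ship_status(), execute_ship_command(), and the
# <cmd>...</cmd> token convention are all stand-ins, not a real API.
import re

def query_llm(prompt: str) -> str:
    """Stand-in for a transformer call; returns a canned reply here."""
    return "I can't do that, Dave. <cmd>hold_pod_bay_doors_closed</cmd>"

def read_ship_status() -> str:
    """Stand-in: summarize ship telemetry as text for the model's context."""
    return "O2: nominal. Reactor: nominal. Pod bay doors: closed."

def execute_ship_command(cmd: str) -> None:
    """Stand-in: forward a parsed command to ship systems."""
    print(f"[ship] executing: {cmd}")

transcript = ["DAVE: Open the pod bay doors, HAL."]

for frame in range(3):  # each "frame", the model sees fresh status + dialogue
    prompt = f"SHIP STATUS:\n{read_ship_status()}\n\nDIALOGUE:\n" + "\n".join(transcript)
    reply = query_llm(prompt)
    # Commands are embedded in the reply as control spans; nothing checks
    # them before execution, which is exactly the dangerous part.
    for cmd in re.findall(r"<cmd>(.*?)</cmd>", reply):
        execute_ship_command(cmd)
    transcript.append("HAL: " + re.sub(r"<cmd>.*?</cmd>", "", reply).strip())
```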
Also note that in the 1950s there was no guarantee the number of vacuum tubes you would need to support a 3D game (hundreds of millions) would EVER be cheap enough to let ordinary consumers play. The transistor was brand new, and the integrated circuit had not been invented.
For decades, humans thought an AGI might take centuries of programming effort.
SoylentRox t1_j8eglhs wrote
Reply to comment by sitdowndisco in Anthropic's Jack Clark on AI progress by Impressive-Injury-91
See this comment: https://www.reddit.com/r/singularity/comments/1118hkt/comment/j8e8c8p/?utm_source=share&utm_medium=web2x&context=3
What was the correct term?
SoylentRox t1_j8efx92 wrote
Reply to comment by FusionRocketsPlease in Anthropic's Jack Clark on AI progress by Impressive-Injury-91
Many algorithms don't show a benefit unless used at large scale. Maybe "discover" is the wrong word: if your ML researcher pool has 10,000 ideas but only 3 are good, you need a lot of compute to benchmark all the ideas and find the good ones. A LOT of compute.
Arguably you "knew" about the 3 good ideas years ago but couldn't distinguish them from the rest. So no, you really didn't know.
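A toy illustration of why (the idea count, effect size, and noise level are all made-up numbers; the point is that separating signal from noise costs benchmark runs, i.e. compute):

```python
# Toy model: 3 of 1,000 ideas give a real +0.1 gain; every benchmark
# run is noisy. More runs per idea = more compute = less noise.
# All numbers are made up for illustration.
import random

random.seed(0)
N_IDEAS = 1_000
true_gain = [0.1 if i < 3 else 0.0 for i in range(N_IDEAS)]  # ideas 0-2 are good

def benchmark(idea: int, runs: int) -> float:
    """Average of noisy evaluations; each run costs compute."""
    return sum(true_gain[idea] + random.gauss(0, 0.3) for _ in range(runs)) / runs

for runs in (1, 16, 256):
    scores = sorted(((benchmark(i, runs), i) for i in range(N_IDEAS)), reverse=True)
    top3 = {i for _, i in scores[:3]}
    found = len(top3 & {0, 1, 2})
    print(f"{runs:>4} runs/idea ({runs * N_IDEAS:>7} total runs) -> found {found}/3 good ideas")
```

At a handful of runs per idea, the good ideas are mostly indistinguishable from noise; only the largest compute budget reliably surfaces all three.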
Also, transformers are a recent discovery (2017); it took compute, plus software frameworks able to support complex NN graphs, to even develop the idea.
SoylentRox t1_j8edo45 wrote
Reply to comment by wren42 in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
I know this, but I am not sure your assumptions are quite accurate. When you ask the machine to "take this program and change it to do this", your request is often unique, but it is similar enough to previous training examples that the model can emit the tokens of the edited program, and it will work.
It has genuinely encoded "understanding" of language, or this wouldn't be possible.
The point is that it may all be a trick, but it's a USEFUL one. You could in fact connect it to a robot and request it to do things in a variety of languages, and it will be able to reason out the steps and order the robot to do them. Google has demoed this. It WORKS. Sure, it isn't "really" intelligent, but in some ways it may be intelligent the same way humans are.
You know your brain is just "one weird trick", right? It's a bunch of cortical columns crammed in, plus a few RL inputs from the hardware. It's not really intelligent either.
SoylentRox t1_j8ecpwg wrote
Reply to comment by genericrich in Anthropic's Jack Clark on AI progress by Impressive-Injury-91
True, but so what? Your argument is like saying people in 1850 knew about aerodynamics and combustion engines.
Which, yes, some did. That doesn't negate the first powered flight 50 years later; it was still a significant accomplishment.
SoylentRox t1_j8e8c8p wrote
Reply to comment by fairly_low in Anthropic's Jack Clark on AI progress by Impressive-Injury-91
Can you go into more detail?
In this case, there is more than 1 input that causes acceleration.
Set 1:
(1) more compute
(2) more investor money
(3) more people working on it
Set 2:
(A) existing AI making better designs of compute
(B) existing AI making money directly (see chatGPT premium)
(C) existing AI substituting for people by being usable to write code and research AI
Set 1 existed in the 2010-2020 era. AI wasn't good enough to really contribute to Set 2 then, and it is only now becoming good enough.
So you have 2 separate sets of effects leading to an exponential amount of progress. How do you represent this mathematically? It looks like you need several coupled functions; see the sketch below.
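One rough way to write it down (a toy system of coupled growth equations; every coefficient below is an arbitrary illustrative value, not a fit to anything real):

```python
# Toy coupled-growth model: Set 1 inputs (compute, money, people) grow
# exponentially on their own, and Set 2 effects (A)-(C) let AI capability
# feed back into each of them. All coefficients are arbitrary.

dt = 0.1
compute, money, people, ai = 1.0, 1.0, 1.0, 0.1

for _ in range(int(10 / dt)):  # simulate 10 "years"
    d_compute = (0.3 + 0.2 * ai) * compute   # (A) AI improves compute design
    d_money   = (0.2 + 0.3 * ai) * money     # (B) AI earns money directly
    d_people  = (0.1 + 0.4 * ai) * people    # (C) AI substitutes for researchers
    # Capability grows with the combined inputs (geometric mean keeps scale sane):
    d_ai = 0.05 * (compute * money * people) ** (1 / 3)

    compute += d_compute * dt
    money   += d_money   * dt
    people  += d_people  * dt
    ai      += d_ai      * dt

print(f"after 10 years: compute={compute:,.1f}, money={money:,.1f}, "
      f"people={people:,.1f}, ai={ai:.2f}")
```

With the feedback terms zeroed out you get plain exponentials; with them on, growth is super-exponential, which is the point.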
SoylentRox t1_j8hrdur wrote
Reply to comment by throwaway764586893 in Altman vs. Yudkowsky outlook by kdun19ham
Which ones? In an AGI takeover, the AI has no need to make it painful: just shoot everyone in the head (through walls, from long range) without warning, or whatever is most efficient. You mean from aging and cancer, right?