World_May_Wobble

World_May_Wobble t1_iy3w5ts wrote

Consider commercial aviation. It has seen no fundamental gains in 40 years. In fact, it slid back with the death of Concorde. Sometimes things stagnate because there's a lack of imagination, or the economics are bad, or there's just physically no way to do the thing we envision.

Stagnation has been the norm for most of human history, and we should expect more of it in things that aren't closely linked to some kind of self-reinforcing feedback loop. Smaller transistors help us make smaller transistors. Better AI can help us make better AI. Better VR... is just better VR.

Edit: Airliners have seen some gains in fuel efficiency, and they've obviously become more computerized, but these are not the kind of exponential transformations we have become used to in computing.

14

World_May_Wobble t1_ixefjwe wrote

When I say "sustainable," I don't just mean eco-friendly. For example, it's not sustainable to keep large arsenals of nuclear-armed ICBMs, because even if the probability of them being used in any year is very small, the cumulative probability over long time spans approaches 1. Probably the only way to change this is a radical and global change in governance.
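That cumulative-risk point is just compound probability. A quick sketch, assuming an illustrative (and entirely made-up) 1% independent chance of use per year:

```python
def cumulative_risk(annual_p: float, years: int) -> float:
    """Probability of at least one event in `years` years,
    given an independent `annual_p` chance each year."""
    return 1 - (1 - annual_p) ** years

# With a 1% annual risk, the long-run odds creep toward certainty.
for years in (10, 50, 100, 500, 1000):
    print(years, cumulative_risk(0.01, years))
```

Even a tiny annual probability compounds: at 1% per year, the risk passes 50% within a human lifetime and exceeds 99% over a millennium.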

Then yes, there are environmental issues. We don't have a ready answer to microplastics, and they're making us infertile when we're already heading into a demographic cul-de-sac. We'll need more rare earth metals for those electric cars. Oh, and by the way, those electric cars are still being powered by coal.

Europe is the poster child of renewables, and most of its energy still doesn't come from renewables. Its leading renewable isn't solar or wind; it's wood, and it's not even close. Wider adoption of solar and wind requires better battery technology, but battery technology has improved at a notoriously linear rate. It's not going to be any time soon that we see all of Europe's energy come from renewables, and again, they're the best at this.

I'm not saying there's no progress, but that's kind of the point. We need progress to get ahead of some of the problems in our future.

2

World_May_Wobble t1_ixb9xt0 wrote

It is anthropocentric, which might even be warranted. For example, if the AGI that takes off ends up being an emulated human mind, human psychology is totally relevant.

It really all depends on the contingencies of how the engineers navigate the practically infinite space of possible minds. It won't be a blank slate; it'll have some disposition. The mind we pull out of the urn will depend on the engineering decisions smart people make. If they want a more human mind, they can probably get something that, if nothing else, acts human. But for purely economic reasons, they'll probably want the thing to be decidedly unhuman.

1

World_May_Wobble t1_ixb8tjl wrote

*We're* general intelligences that are content with much less than solving medical problems while we sit idly in states of precarious safety, so I wouldn't make too many uncaveated proclamations about what an AGI will put up with.

Any speculation about the nature of an unbuilt AI's motivations makes unspoken assumptions about the space of possible minds and how we will choose to navigate that space. For all we know, AGI will come in the form of the world's most subservient and egoless grad student having their mind emulated. We can't predict the shape and idiosyncrasies of an AGI without assuming a lot of things.

When I talk about us not surviving an approach to this, I'm pointing at much more mundane things. Look at how narrow algorithms like those behind Facebook, YouTube, and Twitter have inflamed and polarized our politics. Our culture, institutions, and biology aren't adapted to those kinds of tools. Now imagine the degenerating effect something like full dive VR, Neuralink, universal deepfake access, or driverless cars will have. Oh. Right. And they're all happening at about the same time.

Don't worry about the AGI. Worry about all the landmines between here and there.

1

World_May_Wobble t1_ixa232i wrote

Something that passes for AGI 2030-2040.

Full dive VR 2035-2045.

The singularity is a more alien and total transformation though. It's not one innovation; it's all of them, everywhere, all at once. So 2045-2055 on our current trajectory.

We've entered a new paradigm and are rapidly soaking up a lot of low-hanging fruit in the form of language models. A lot of people here are mistaking that sudden progress for a more systemic, sustainable trajectory, but one toy does not a singularity make.

Personally, I doubt we ever get there. Much like an actual singularity, approaching it will kill you. Our civilization is too fragile and too monkey to survive an encounter with this.

7

World_May_Wobble t1_iwcmfw8 wrote

Far centrist here. This division is founded on axiomatic value judgements, almost entirely determined by accidents of one's gestation and history. Almost no one's opinions are justified. No one knows what's best, and there may not even be such a thing.

2

World_May_Wobble t1_iv6zi3l wrote

We have to make a lot of assumptions, and there's very little to anchor those assumptions to. So all we can say is: given a set of assumptions x, you tend toward world y.

One of my assumptions is that, depending on its capabilities, constraints, and speed of takeoff, an ASI may not be in a position to establish a singleton. Even an uploaded human mind is technically superintelligent, and it's easy to imagine a vast ecosystem of those forming.

Even if you imagine a singleton arising, you have to make some assumptions about its activities and constraints. If it's going to be doing things in places that are physically separated, latency may be an issue for it, especially if it's running at very high speeds. It may want to delegate activities to physically distributed agents. Those may be subroutines, or whole copies of the ASI. In either case, you again have a need for agents to exchange resources.

1

World_May_Wobble t1_iv6k0dr wrote

>Why would it need symbols to do that though?

I think bartering has problems besides converting between iPhones and chickens. Even if you know how many chickens an iPhone is worth, what if one ASI doesn't *want* iPhones? Then you can't "just do it directly," you have to find an intermediary agent who wants your iPhone who has something chicken-ASI wants.

Then symbols have other benefits. For example, you can't pay in fractions of an iPhone, but symbols are infinitely divisible, and symbols store value longer than chickens, which die and rot.

>there would not be market forces in such a system

Why not? Agents are (I presume) exchanging things based on their supply and demand. That's a market.

1

World_May_Wobble t1_its5dvj wrote

Lol. $6,250 would be the monthly salary of someone who makes $75,000 annually. That's around the salary I'd expect for an expert-tier professional in something like astrobiology or astrophysics.

That tells me that NASA can't be paying for more than a month of these people's time, and they can't be splurging on any tools besides a whiteboard and some markers.

Granted, they may not be dedicated to this project full-time, so they may spread it out over months, but this amounts to little more than a high profile brainstorming session.

6

World_May_Wobble t1_its07l8 wrote

$100,000 or less [1] [2]

For 16 people, that's $6,250 per person. Probably around a month's salary for each person. "I'm going to take you and 15 other people who don't have relevant expertise and pay you to think about this problem for a month." That's how I read this.
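The back-of-the-envelope arithmetic behind that read (the $75,000 annual salary is my own ballpark for this kind of professional, not a figure from the grant):

```python
# Split the grant evenly across the team.
total_budget = 100_000   # grant ceiling in dollars
team_size = 16
per_person = total_budget / team_size

# Compare to one month of an assumed $75,000/year salary.
monthly_salary = 75_000 / 12

print(per_person, monthly_salary)  # both come out to exactly 6250.0
```

So at best the budget covers roughly one person-month of each member's time, before any equipment or overhead.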

0

World_May_Wobble t1_ita2kc0 wrote

>albeit the lab technicians and assistants doing less innovative work will be far sooner.

I'm one of these people. The amount of automation my company has picked up in the last few years is substantial. All the busy-work I do would take me a week without all the gadgets and machines I'm using.

2

World_May_Wobble t1_it01qm0 wrote

The alternative is too disruptive to imagine. Do you want careful, incremental thinkers to imagine a world where everything we know, all the pillars of our civilization, goes out the window? It's so alien to us, we don't know if our species can even survive in proximity to that paradigm, never mind describing the problems and solutions of that world.

I don't think anything useful can be discussed about a post-AGI world, and to say anything at all about it requires such leaps of imagination that serious thinkers are wary to go near it.

1