World_May_Wobble
World_May_Wobble t1_iy4tufx wrote
Reply to comment by Seek_Treasure in Why is VR and AR developing so slowly? by Neurogence
Likewise airfare seems to have halved at least once.
It's not transformational, and we'd be disappointed if the biggest improvement in VR between now and 2050 was that it was cheaper -- but it is something.
World_May_Wobble t1_iy4qkvi wrote
Reply to comment by Seek_Treasure in Why is VR and AR developing so slowly? by Neurogence
Has it? How many doublings has the range of airliners undergone in the last 40 years?
World_May_Wobble t1_iy3w5ts wrote
Reply to comment by Neurogence in Why is VR and AR developing so slowly? by Neurogence
Consider commercial aviation. It has seen no gains in 40 years. In fact, it slid back with the death of Concorde. Sometimes things stagnate because there's a lack of imagination, or the economics is bad, or there just physically is no way to do the thing we envision.
Stagnation has been the norm for most of human history, and we should expect more of it with things that aren't closely linked to some kind of feedback loop. Smaller transistors help us make smaller transistors. Better AI can help us make better AI. Better VR... is just better VR.
Edit: Airliners have seen some gains in fuel efficiency, and they've obviously become more computerized but these are not the kind of exponential transformations we have become used to in computing.
World_May_Wobble t1_iy1jng2 wrote
Reply to comment by [deleted] in What is one thing you NEED but can't afford right now? by Revolutionary-Net-93
Rude.
World_May_Wobble t1_iy13xuo wrote
Money.
World_May_Wobble t1_ixvyf9d wrote
Reply to For anyone still believing that standalone VR/AR/MR will flourish and popularize in the 2020s, please watch this video and think again. by Quealdlor
I have no opinion about VR, but this video is about Meta, not the technology in principle. We don't know what other implementations will come later in the decade.
World_May_Wobble t1_ixl20if wrote
Reply to comment by curiosityVeil in Over 1,000 songs with human-mimicking AI vocals have been released by Tencent Music in China. One of them has 100m streams. by mutherhrg
Ngl. Those threads are my favorite thing about Reddit.
World_May_Wobble t1_ixk4us9 wrote
Reply to comment by NTIASAAHMLGTTUD in Over 1,000 songs with human-mimicking AI vocals have been released by Tencent Music in China. One of them has 100m streams. by mutherhrg
Am I Chinese? Can I help?
World_May_Wobble t1_ixfsxpj wrote
Reply to what does this sub think of Elon Musk by [deleted]
He has at times been a little important to technological progress, and I don't really think one way or another about him.
World_May_Wobble t1_ixefjwe wrote
Reply to comment by Artanthos in How do you think about the future of AI? by diener1
When I say "sustainable," I don't just mean eco-friendly. For example, it's not sustainable to keep large arsenals of nuclear-armed ICBMs, because even if the probability of them being used in any year is very small, the cumulative probability over long time spans approaches 1. Probably the only way to change this is a radical and global change in governance.
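The "approaches 1" claim follows from compounding independent yearly risks. A minimal sketch, where the annual probability is a made-up illustrative number (nobody knows the real one):

```python
# Cumulative probability of at least one event over many years,
# assuming a constant, independent per-year probability.

def cumulative_risk(p_annual: float, years: int) -> float:
    """P(at least one occurrence in `years` years) = 1 - P(none)."""
    return 1 - (1 - p_annual) ** years

# Even a hypothetical 0.5% annual risk compounds substantially:
for years in (50, 100, 500):
    print(years, round(cumulative_risk(0.005, years), 3))
```

For any fixed p_annual > 0, the result tends to 1 as the number of years grows, which is the whole point of the sustainability argument.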
Then yes, there are environmental issues. We don't have a ready answer to microplastics, and they're making us infertile when we're already heading into a demographic cul-de-sac. We'll need more rare earth metals for those electric cars. Oh, and by the way, those electric cars are still being powered by coal.
Europe is the poster child of renewables, and most of its energy still doesn't come from renewables. Its leading renewable isn't solar or wind; it's wood, and it's not even close. Wider adoption of solar and wind requires better battery technology, but battery technology has improved at a notoriously linear rate. It's not going to be any time soon that we see all of Europe's energy come from renewables, and again, they're the best at this.
I'm not saying there's no progress, but that's kind of the point. We need progress to get ahead of some of the problems in our future.
World_May_Wobble t1_ixbk3mn wrote
Reply to comment by Artanthos in How do you think about the future of AI? by diener1
This is also a nightmare scenario, because without radically new technologies and governance, the stuff we're up to isn't sustainable.
World_May_Wobble t1_ixb9xt0 wrote
Reply to comment by Falkusa in How much time until it happens? by CookiesDeathCookies
It is anthropocentric, which might even be warranted. For example, if the AGI that takes off ends up being an emulated human mind, human psychology is totally relevant.
It really all depends on the contingencies of how the engineers navigate the practically infinite space of possible minds. It won't be a blank slate; it'll have some set of dispositions built in. The mind we pull out of the urn will depend on the engineering decisions smart people make. If they want a more human mind, they can probably get something that, if nothing else, acts human. But for purely economic reasons, they'll probably want the thing to be decidedly unhuman.
World_May_Wobble t1_ixb8tjl wrote
Reply to comment by HongoMushroomMan in How much time until it happens? by CookiesDeathCookies
*We're* general intelligences that are content with much less than solving medical problems while we sit idly in states of precarious safety, so I wouldn't make too many uncaveated proclamations about what an AGI will put up with.
Any speculation about the nature of an unbuilt AI's motivations makes unspoken assumptions about the space of possible minds and how we will choose to navigate that space. For all we know, AGI will come in the form of the world's most subservient and egoless grad student having their mind emulated. We can't predict the shape and idiosyncrasies of an AGI without assuming a lot of things.
When I talk about us not surviving an approach to this, I'm pointing at much more mundane things. Look at how narrow algorithms like Facebook, YouTube, and Twitter have inflamed and polarized our politics. Our culture, institutions, and biology aren't adapted to those kinds of tools. Now imagine the degenerating effect something like full dive VR, Neuralink, universal deepfake access, or driverless cars will have. Oh. Right. And they're all happening at about the same time.
Don't worry about the AGI. Worry about all the landmines between here and there.
World_May_Wobble t1_ixa232i wrote
Reply to How much time until it happens? by CookiesDeathCookies
Something that passes for AGI 2030-2040.
Full dive VR 2035-2045.
The singularity is a more alien and total transformation though. It's not one innovation; it's all of them, everywhere, all at once. So 2045-2055 on our current trajectory.
We've entered a new paradigm and are rapidly soaking up a lot of low hanging fruits in the form of language models. A lot of people here are mistaking that sudden progress for a more systemic, sustainable trajectory, but one toy does not a singularity make.
Personally, I doubt we ever get there. Much like an actual singularity, approaching it will kill you. Our civilization is too fragile and too monkey to survive an encounter with this.
World_May_Wobble t1_iwcmfw8 wrote
Reply to Cultural Profile of r/singularity by Redvolition
Far centrist here. This division is founded on axiomatic value judgements, almost entirely determined by accidents of one's gestation and history. Almost no one's opinions are justified. No one knows what's best, and there may not even be such a thing.
World_May_Wobble t1_iv6zi3l wrote
Reply to comment by ReadSeparate in How do you think an ASI might manifest? by SirDidymus
We have to make a lot of assumptions, and there's very little to anchor those assumptions to. So all we can say is: given a set of assumptions x, you tend toward world y.
One of my assumptions is that, depending on its capabilities, constraints, and speed of takeoff, an ASI may not be in a position to establish a singleton. Even an uploaded human mind is technically superintelligent, and it's easy to imagine a vast ecosystem of those forming.
Even if you imagine a singleton arising, you have to make some assumptions about its activities and constraints. If it's going to be doing things in places that are physically separated, latency may be an issue for it, especially if it's running at very high speeds. It may want to delegate activities to physically distributed agents. Those may be subroutines, or whole copies of the ASI. In either case, you again have a need for agents to exchange resources.
World_May_Wobble t1_iv6k0dr wrote
Reply to comment by ReadSeparate in How do you think an ASI might manifest? by SirDidymus
>Why would it need symbols to do that though?
I think bartering has problems besides converting between iPhones and chickens. Even if you know how many chickens an iPhone is worth, what if one ASI doesn't *want* iPhones? Then you can't "just do it directly," you have to find an intermediary agent who wants your iPhone who has something chicken-ASI wants.
Then symbols have other benefits. For example, you can't pay in fractions of an iPhone, but symbols are infinitely divisible, and symbols store value longer than chickens, which die and rot.
>there would not be market forces in such a system
Why not? Agents are (I presume) exchanging things based on their supply and demand. That's a market.
World_May_Wobble t1_iv4guq0 wrote
Reply to comment by ReadSeparate in How do you think an ASI might manifest? by SirDidymus
Why wouldn't money have value in a post-ASI world? I assume even super-intelligent, digital minds will need to find maximally efficient ways to distribute resources over very large, technically complex networks. Money's one way of doing that.
World_May_Wobble t1_ituvswt wrote
Reply to comment by throwaway23410689 in NASA announces its unidentified aerial phenomena - A 16-people team — including an astronaut, a space-treaty drafter, a boxer, and several astrobiologists — will soon begin its review of unexplained aerial phenomena (UAP) for NASA research team to examine mysterious sightings. by yourSAS
I'm only speaking about one of them.
World_May_Wobble t1_its5dvj wrote
Reply to comment by Thismonday in NASA announces its unidentified aerial phenomena - A 16-people team — including an astronaut, a space-treaty drafter, a boxer, and several astrobiologists — will soon begin its review of unexplained aerial phenomena (UAP) for NASA research team to examine mysterious sightings. by yourSAS
Lol. $6250 would be the monthly salary of someone who makes $75,000 annually. That's around the salary I'd expect for an expert-tier professional in something like astrobiology or astrophysics.
That tells me that NASA can't be paying for more than a month of these people's time, and they can't be splurging on any tools besides a whiteboard and some markers.
Granted, they may not be dedicated to this project full-time, so they may spread it out over months, but this amounts to little more than a high profile brainstorming session.
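The arithmetic above checks out in a few lines. This sketch assumes the $100k total figure implied by $6,250 per head across 16 people, and the hypothetical $75k salary named above:

```python
# Back-of-envelope check on the NASA UAP study figures discussed above.
budget = 100_000          # assumed total study budget, USD
team_size = 16
annual_salary = 75_000    # hypothetical expert salary, USD/year

per_person = budget / team_size        # budget share per team member
monthly_salary = annual_salary / 12    # what one month of that expert costs
months_funded = per_person / monthly_salary

print(per_person, monthly_salary, months_funded)  # 6250.0 6250.0 1.0
```

So the budget covers roughly one person-month per member, which is why it amounts to a brainstorming session rather than sustained research.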
World_May_Wobble t1_its07l8 wrote
Reply to comment by Thismonday in NASA announces its unidentified aerial phenomena - A 16-people team — including an astronaut, a space-treaty drafter, a boxer, and several astrobiologists — will soon begin its review of unexplained aerial phenomena (UAP) for NASA research team to examine mysterious sightings. by yourSAS
World_May_Wobble t1_itr1vsy wrote
Reply to NASA announces its unidentified aerial phenomena - A 16-people team — including an astronaut, a space-treaty drafter, a boxer, and several astrobiologists — will soon begin its review of unexplained aerial phenomena (UAP) for NASA research team to examine mysterious sightings. by yourSAS
This struck me as a joke as soon as I learned the budget for this team. My suspicions have been unambiguously confirmed.
World_May_Wobble t1_ita2kc0 wrote
Reply to Thoughts on Job Loss Due to Automation by Redvolition
>albeit the lab technicians and assistants doing less innovative work will be far sooner.
I'm one of these people. The amount of automation my company has picked up in the last few years is substantial. All the busy-work I do would take me a week without the gadgets and machines I use.
World_May_Wobble t1_it01qm0 wrote
Reply to Since Humans Need Not Apply video there has not much been videos which supports CGP Grey's claim by RavenWolf1
The alternative is too disruptive to imagine. Do you want careful, incremental thinkers to imagine a world where everything we know, all the pillars of our civilization, go out the window? It's so alien to us that we don't know if our species can even survive in proximity to that paradigm, never mind describe the problems and solutions of that world.
I don't think anything useful can be discussed about a post-AGI world, and to say anything at all about it requires such leaps of imagination that serious thinkers are wary to go near it.
World_May_Wobble t1_iyeki7c wrote
Reply to What’s gonna happen to the subreddit after the singularity? by Particular_Leader_16
Reddit might not even be around in another 20 years.