
ihateshadylandlords t1_j14g274 wrote

No, at least not in my opinion. Per the sidebar: “The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence.”

We’re nowhere close to that (and yes, I’ve seen ChatGPT).

Even if you use the definition of singularity as the point where tech progresses so fast we can’t keep up, we aren’t close to that either. Tech still has to pass through the proof of concept/R&D/market research/economic feasibility bottleneck before it ever makes it into production. That bottleneck gives us plenty of time to keep up with tech.

29

OldWorldRevival t1_j14pqic wrote

The thing is that AI actually vastly outstrips us in narrow problems.

I think that element of it will drive us to AGI sooner rather than later. That is, much of what AI is already good at should help reel in a lot of the technical AGI problems.

E.g. mapping neurons, mapping complex patterns between neurons, and emulating that behavior more robustly.

I think that whatever problems that remain over the horizon, there's a sort of exponential space that we are now in where those unknowns will quickly be reeled in.

It's the nature of information technology itself. E.g. most math was discovered in the past 300 years, compared to 10,000 years of civilization.

Now our population is massive, which means that the talent pool is also significantly larger. It's inevitable that it will happen relatively soon, in my view, when those things are considered.

18

Agreeable_Bid7037 t1_j14ubxq wrote

True, and people are already working on ways to create better AI using existing AI, so AGI may arrive quite abruptly, and soon.

13

TouchCommercial5022 t1_j15jo1r wrote

⚫ AGI is entirely possible. The only thing that would rule it out is if there turns out to be some mysterious, unexplained process in the brain, responsible for our general intelligence, that cannot be replicated digitally. But that doesn't seem to be the case.

Other than that, I don't think anything short of an absolute disaster can stop it.

Since general natural intelligence exists, the only way to make AGI impossible is by a limitation that prevents us from inventing it. Its existence wouldn't break any laws of physics, it's not a perpetual motion machine, and it might not even be that impractical to build or operate if you had the blueprints. But the problem would be that no one would have the plans and there would be no way to obtain them.

I imagine this limitation would be something like a mathematical proof that using one intelligence to design another intelligence of equal complexity is an undecidable problem. On the other hand, evolution did not need any intelligence to reach us...

Let's say a meteor was going to hit the world and end everything.

That's when I'd say AGI isn't likely.

Assume that all intelligence occurs in the brain.

The brain has on the order of 10^26 molecules. It has 100 billion neurons. With an MRI-style scan (perhaps an enhancement of the current state of the art) we can get a snapshot of an entire working human brain. At most, an AI that is a general simulation of a brain only has to model this. (It's "at most" because the human brain has things we don't care about, for example, "I like the taste of chocolate.") So we don't have to understand anything about intelligence; we just have to reverse engineer what we already have.
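
For a sense of the scale involved, here's a rough back-of-envelope sketch (all figures are order-of-magnitude assumptions, not measurements; the synapse count and firing rate in particular are just common ballpark values):

```python
# Rough orders of magnitude only -- nothing here is an established fact.
neurons = 1e11                       # ~100 billion neurons, as above
synapses = neurons * 10_000          # assuming ~10,000 synapses per neuron -> ~1e15
max_firing_rate = 100                # Hz, a generous per-synapse upper bound

synaptic_events_per_second = synapses * max_firing_rate
print(f"Neuron/synapse-level emulation: ~{synaptic_events_per_second:.0e} events per second")

molecules = 1e26                     # the figure quoted above
print(f"Molecule-level simulation: ~{molecules:.0e} entities to track")
print(f"Ratio: ~{molecules / synapses:.0e}x more state at the molecular level")
```

The point is that "reverse engineer the brain" spans wildly different problem sizes depending on which level of detail you think actually matters.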

There are two additional things to consider:

⚫ If you believe that evolution created the human mind and its property of consciousness, then machine-modeled evolution could theoretically do the same without a human needing to understand all the ins and outs. If consciousness came into existence once without a conscious being trying to create it, it can do so again.

⚫ AlphaGo, the Google AI that beat one of the world's top Go champions, was so important precisely because it showed that we can produce an AI that can find answers to things we don't quite understand ourselves. In chess, when Deep Blue was made, the IBM programmers explicitly programmed a 'value function': a way to look at the board and judge how good the position was for the player, e.g. "having a queen is ten points, having a rook is five points, etc.; add it all up to get the current value of the board."

With Go, the value of the board isn't something humans have figured out how to explicitly compute in a useful way; a stone in a particular position could be incredibly useful or harmful depending on the moves that could happen 20 turns down the road.

However, by giving AlphaGo many games to look at, its learning algorithm eventually figured out how to judge the value of a board. This 'intuition' is the key: it shows that AI can learn tasks for which humans cannot explicitly write rules, which in turn suggests we can write AI that understands more than we do, and, in the worst case, that we could write 'bootstrapping' AIs that learn to create a real AGI for us.
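
As a toy illustration of the difference (hypothetical code for the idea only, nothing like how Deep Blue or AlphaGo were actually implemented):

```python
# Deep Blue-style: a hand-written value function, using the point values quoted above.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 10, "K": 0}

def handcrafted_value(board):
    """board: dict of square -> piece letter; uppercase = ours, lowercase = opponent's."""
    score = 0
    for piece in board.values():
        value = PIECE_VALUES[piece.upper()]
        score += value if piece.isupper() else -value
    return score

print(handcrafted_value({"d1": "Q", "a8": "r", "e4": "P"}))  # 10 - 5 + 1 = 6

# AlphaGo-style: nobody writes the rule. A value network is *fit* to
# (position, eventual outcome) pairs from millions of (self-)played games,
# so the "rule" lives in learned weights rather than in code a human can read.
```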

Many underestimate the implications of "solving intelligence". Once we know what intelligence is and how to build and amplify it, everything will be connected to a higher-than-human intelligence that works at least thousands of times faster than we do, and we don't even know what kinds of emergent abilities lie beyond human intelligence. It's not just about speed: speed and accuracy are the parts we can predict, but there could be more.

The human brain exists. It's a meat computer. It's smart. It's sentient. I see no reason why we can't duplicate that meat computer with electronic circuitry. The Singularity is not a question of if, but when.

We need a Manhattan Project for AI

Once the tipping point has passed, AGI-level superintelligence will advance so rapidly (think minutes or hours, not months or years) that even the world's biggest tech nerd wouldn't see it coming, even if it happened right in front of them.

When will it happen?

Hard to tell, because technology generally advances as a series of S-curves rather than a simple exponential. Are we currently in an S-curve that leads rapidly to full AGI, or are we in a curve that flattens out and stays fairly flat for 5-10 years until the next big breakthrough? Also, the last 10% of progress might actually require 90% of the work. It may seem like we're very close, but resolving the last issues could take years of progress. Or it could happen this year or next. I don't know enough to say (and probably no one does).
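
To illustrate the distinction (a toy comparison, not a model of actual AI progress): a logistic S-curve grows roughly exponentially at first and only later flattens, which is exactly why it's hard to tell which curve you're riding until it bends.

```python
import math

def exponential(t, rate=1.0):
    return math.exp(rate * t)

def s_curve(t, ceiling=1000.0, rate=1.0, midpoint=6.0):
    # Logistic curve: looks roughly exponential while t << midpoint, then saturates.
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

for t in range(0, 13, 3):
    print(t, round(exponential(t), 1), round(s_curve(t), 1))
```

Early on, both columns climb by a similar factor each step; only past the midpoint does the second one level off.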

It's like quantum physics. In the end, 99.99% of us have no fucking idea. It could take 8 years, 80 years or never.

Personally, I'm more on the side of AGI gradually coming into our lives rather than turning it on one day.

I imagine narrow AI systems will continue to seep into everything we use, as they already are (apps, games, music playlists, article writing), but that they will gain more capabilities as they develop. Take the most recent crowning achievement: GPT-3. I don't see it as an AGI in any sense, but I don't see it as totally narrow either. It can do multiple things instead of one: it can be a chatbot, an article writer, a code assistant, and much more. But it is also limited, and quite amnesiac when it comes to chatting, since it can only remember so much of its own conversation, breaking the illusion of speaking to something intelligent.

But I think these problems will go away over time as we discover new solutions and new problems.

So, TL;DR: I feel like narrow AI will gradually broaden into general AI over time.

To go to the extreme for fun: we could end up with a chatbot assistant that we can ask almost anything to help us in our daily lives. If you're in bed and can't sleep, you could talk to it; if you're at work and having trouble with a task, you could ask it for help; and so on. It would be like a virtual assistant, I guess. But that's me fantasizing about what could be, not a prediction of what will be.

2029 seems pretty viable in my opinion. But I'm not convinced it will have permeated society and the personal lives of over 70% of the population by then. There is also the risk of a huge public backlash against AI if some things go wrong and give it a bad image.

But yes, 2029 seems feasible. 2037 is my most conservative estimate.

Ray Kurzweil was the one who originally specified 2029. He chose that year because, extrapolating forward, it seemed to be the year the world's most powerful supercomputer would achieve the same capacity, in terms of "instructions per second", as a human brain.

Details about the computing capabilities have changed a bit since then, but his estimated date remains the same.

It could be even earlier.

If the scaling hypothesis is true, that is. We were likely to see AI models with 1 to 10 trillion parameters in 2021.

We will see 100 trillion by 2025, according to OpenAI.

The human brain is around 1,000 trillion (counting synapses). Also, each model is trained on a newer, better architecture.
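
Taking those figures at face value (they're loose estimates, not confirmed roadmaps), the implied growth rate is easy to sketch:

```python
import math

params_2021 = 1e12        # ~1 trillion parameters, the low end of the figure above
params_2025 = 1e14        # ~100 trillion, the figure attributed to OpenAI above
brain_scale = 1e15        # ~1,000 trillion, the brain-synapse analogue above

growth_per_year = (params_2025 / params_2021) ** (1 / 4)          # ~3.2x per year implied
years_past_2025 = math.log(brain_scale / params_2025, growth_per_year)
print(f"Implied growth: ~{growth_per_year:.1f}x/year; "
      f"~{years_past_2025:.1f} more years after 2025 to reach brain scale")
```

Of course, parameter count is at best a loose proxy for capability, so treat this as arithmetic, not a forecast.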

I'm sure something has changed in the last 2-3 years. I think maybe it was the transformer.

In 2018, Hinton was saying that general intelligence wasn't even close and we should scrap everything and start over.

In 2020, Hinton said that deep networks could actually do everything.

According to Kurzweil, this has been going on for a while.

In the 90s, people were saying AGI was thousands of years away.

Then in the 2000s, that it was only centuries away.

Then in the 2010s, with deep learning, that it was only a few decades away.

AI progress is one of our fastest exponentials. I'll take the 10-year bet for sure.

6

visarga t1_j15tcrf wrote

> like a mathematical proof that using one intelligence to design another intelligence of equal complexity is an undecidable problem

No, it's not like that. Evolution is not a smart algorithm, but it created us and all life. Even though it is not smart, it is a "search and learn" algorithm. It does massive search, and the result of massive search is us.

AlphaGo wasn't initially smart. It was just a dumb neural net running on a dumb GPU. But after playing millions of games in self-play, it was better than humans. The way it plays is by combining search + learning.

So a simpler algorithm can create a more advanced one, given a massive budget of search and the ability to filter and retain the good parts. Brute forcing followed by learning is incredibly powerful. I think this is exactly how we'll get from ChatGPT to AGI.
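
A runnable toy version of that "dumb search plus keeping what works" idea (nothing to do with AlphaGo's actual implementation; it just shows blind mutation plus selection reaching a target that no step of the process "understands"):

```python
import random

TARGET = [random.randint(0, 1) for _ in range(40)]    # a hidden goal the search never sees directly

def fitness(candidate):
    # The only feedback available: how many bits happen to match.
    return sum(c == t for c, t in zip(candidate, TARGET))

best = [random.randint(0, 1) for _ in range(40)]       # start completely dumb
for _ in range(3000):                                   # the massive search budget
    trial = [1 - b if random.random() < 0.05 else b for b in best]  # blind mutation
    if fitness(trial) >= fitness(best):                 # retain the good parts
        best = trial

print(f"{fitness(best)} / {len(TARGET)} bits correct after blind search + selection")
```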

3

Vitruvius8 t1_j177x96 wrote

How we look at and interpret consciousness could all be a cargo cult mentality. We might not be on the right route at all, just making it appear like we are.

1

matt_flux t1_j17zv3s wrote

We aren’t just meat computers, we are alive, conscious, and have a drive to create. We are made in the image of God, and AI will always lack that.

−1

visarga t1_j15s5tw wrote

It's not "complex patterns between neurons" we should care about, what will drive AI is more and better data. We have to beef up datasets of step by step problem solving in all fields. It's not enough to get the raw internet text, and we already used a big chunk of it, there is no 100x large version coming up.

But I agree with you here:

> whatever problems that remain over the horizon, there's a sort of exponential space that we are now in where those unknowns will quickly be reeled in

We can use language models to generate more data, as long as we can validate it to be correct. Fortunately, validating solutions is more reliable than open-ended text generation.

For example, GPT-3 in its first incarnations didn't have chain-of-thought abilities, so no multi-step problem solving. Only after training on a massive dataset of code did this ability emerge. Code is problem solving.

The ability to execute novel prompts comes from fine-tuning on a dataset of roughly 1,000 supervised tasks: question-answer pairs of many kinds. After seeing those 1,000 tasks, the model can combine them and solve countless more.

So it matters what kind of data is in the dataset. By discovering what data is missing and what the ideal mixing proportions are, AI will advance further. This process can be largely automated; it mostly costs GPUs and electricity. That is why it could solve the data problem: it doesn't depend on us creating more data.
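
A minimal sketch of that generate-then-validate loop (the generator and checker here are trivial stand-ins; in practice the generator would be a language model and the validator a unit test, a math checker, or similar):

```python
import random

def build_dataset(problems, generate, validate, attempts=50):
    """Keep only machine-verified question-answer pairs."""
    dataset = []
    for problem in problems:
        for _ in range(attempts):
            candidate = generate(problem)            # cheap to produce in bulk
            if validate(problem, candidate):         # validation is the reliable step
                dataset.append({"question": problem, "answer": candidate})
                break
    return dataset

# Toy stand-ins: "solve" additions by guessing, verify exactly.
problems = [(a, b) for a in range(5) for b in range(5)]
data = build_dataset(
    problems,
    generate=lambda p: random.randint(0, 10),
    validate=lambda p, c: c == p[0] + p[1],
)
print(len(data), "validated question-answer pairs out of", len(problems), "problems")
```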

2

Cult_of_Chad t1_j14lfs6 wrote

That's not the only definition of singularity. My personal definition of a technological singularity is a point in time when we're experiencing so many black swan events that the future becomes impossible to predict at shorter and shorter timescales. The event horizon.

We're definitely there as far as I'm concerned.

17

JVM_ t1_j14oxlx wrote

Same.

There seems to be an idea that the singularity needs to declare itself like Jesus returning, or arrive as a product release like Siri or a Google Home.

There's a lot of space between no AI -> powerful AI (but not the singularity) -> the singularity.

Like you said, as the singularity approaches it becomes harder and harder to see the whole AI picture.

10

AdditionalPizza t1_j14xn9p wrote

>My personal definition of a technological singularity


>The event horizon.

I mean, you can have your own personal definition if you want, but that makes no sense. Not trying to sound rude or anything. An event horizon is not the same thing as a singularity. That's not having your own definition, that's just calling one thing another thing for no reason, specifically because we have definitions for both of those things already.

I agree with the comparison of being at or beyond an "event horizon" in terms of AI. But the singularity is an infinitely brief measure of time: the moment we reach it, we have passed it. That moment, by actual definition, is when AI reaches a greater intelligence than all collective human intelligence. It probably won't even be a significantly impactful moment; it will just be a reference point. We look at it now as some grand moment, but it isn't. It is just a moment beyond which it's impossible for humanity to predict anything, because an intelligence greater than all of ours exists, so we can't comprehend the outcome at this time.

The individual inability of a human to predict what comes tomorrow has no bearing on whether or not the singularity has passed. Even if every human trying to predict tomorrow is wrong, that still is not the singularity. It's a hypothetical time in the future beyond which, based on today, right now, we know 100% we cannot make a prediction, because it's mentally impossible as a direct result of our brains being incapable.

It's interesting to consider that we may never reach the moment of a technological singularity either. If we merge with technology and increase our own intelligence, we could forever be moving the singularity "goal posts" similar to how an observer sees someone falling toward a black hole forever suspended, yet the subject falling felt a normal passage of time from event horizon to singularity. We may forever be suspended racing toward the singularity, yet at the same time having reached and surpassed it.

7

Cult_of_Chad t1_j14yew4 wrote

>An event horizon is not the same thing as a singularity.

I never said it was. I said we've crossed the event horizon, which puts us 'inside' the singularity.

>I mean, you can have your own personal definition

I didn't come up with it, Kurzweil did as far as I know.

>That moment, by actual definition, is when AI reaches a greater intelligence than all collective human intelligence

There's no 'actual' definition. It's a hypothetical/speculative.

7

AdditionalPizza t1_j1515v3 wrote

>I said we've crossed the event horizon, which puts us 'inside' the singularity.

That is essentially the same thing I claimed you said. Crossing the event horizon feels like normal times; you would cross that barrier unknowingly. In the physical sense, it's where time appears to slow to a stop for an outside observer. I agree we are likely past that barrier/threshold, in that technological breakthroughs happen in shorter and shorter timeframes, and eventually (at the moment of singularity) there is a hypothetically infinite amount of technology being created, i.e. impossible for us to comprehend right now. But being within the bounds of the event horizon does not mean being inside a singularity.

>I didn't come up with it, Kurzweil did as far as I know.

He didn't invent the comparison to physics, but that's beside the point. His definition is exactly what I stated. And I was referencing your comment directly, where you said you have your own personal definition...

>There's no 'actual' definition. It's a hypothetical/speculative.

There quite literally is an exact definition, and it isn't speculation. I'm not sure where you're getting that from. It's a term that is widely used, but this sub misuses it continually. The thing it names is hypothetical, but the definition is not speculative.

4

Cult_of_Chad t1_j151nsv wrote

>There quite literally is an exact definition

There have been multiple definitions used for as long as the subject has been discussed. AI is not even a necessary component.

6

AdditionalPizza t1_j154zp8 wrote

>AI is not even a necessary component.

For one, we are talking about it directly in relation to AI. Even without AI, it means a technology so transformative that we cannot anticipate its impact (something like femtotech?). It could also arguably be some kind of medical breakthrough that changes our entire perspective on life, say total immortality or something. It doesn't matter; it's irrelevant to the discussion.

Second, the only definition is in direct comparison to the term used in physics, by which you aren't "inside" a singularity the moment you cross the event horizon. I'm not trying to be overly direct or rude here, but you can't just use examples from physics to describe this and expect it to make sense when you've misused the terms.

From your original comment:

>My personal definition of a technological singularity is a point in time when we're experiencing so many black swan events that the future becomes impossible to predict at shorter and shorter timescales

Your thought process behind increasing occurrences of black swan events is perfectly acceptable as passing the event horizon. I like that reference; I've used it before. But crossing an event horizon does not equal being inside a singularity. The technological singularity is a blip in time, not something you sit around in and chill in for a while, like we currently are in the "space between event horizon and singularity."

Anyway, that's about enough from me on the subject. I hope I didn't come off as rude or anything.

3

magnets-are-magic t1_j15dl5t wrote

I’m not the person you replied to but just wanted to say I appreciate the info you shared. I didn’t find it rude. Very interesting stuff!

4

AdditionalPizza t1_j15l4ir wrote

Thanks. I try not to be too wordy in comments, which can make me sound like much more of an asshole than I intend to come across as. It's just a definition that has been skewed, and while the distinction isn't a huge difference, it's important so we don't get people claiming we're "in the singularity" right now. You're either pre-singularity or post-singularity. There's no "in", and it's probably not going to be as significant an "event" as several things preceding it, and many, many things following it.

2

oldmanhero OP t1_j15qd15 wrote

Just to be clear, this is not the definition I am using. The definition I am using is the point at which humanity can no longer "keep up" with the pace of technological change. That is a fuzzy concept, and as such not a point-like moment in time.

I'd hoped that much was obvious from the initial post, since I talked explicitly about the inability of institutions to keep pace.

2

AdditionalPizza t1_j15w7ty wrote

You could use other terms, such as Transformative AI. It describes the exact situation you're expressing. I don't want to sound like a nitpicking idiot or anything, but it's an important distinction that the singularity is in fact a moment and that we're either pre-singularity or post-singularity. You can make the argument that we're already post singularity, I'd probably disagree, but the opinion is your own.

I was just clarifying because this idea of the singularity pops up often in this sub, and to be honest I'm not sure where it comes from, other than perhaps a feedback loop within this sub and similar online discussions that began as a misinterpretation of why we use the word singularity for this specific case.

Of course you're free to ignore me altogether haha, to each their own.

2

Gaudrix t1_j154nkx wrote

Yeah, I think people confuse the technological singularity with the AI singularity. It has nothing to do with not going backwards or any other constraint. Technology can always be destroyed and lost. The entire planet could be destroyed at any instant.

The technological singularity came first; it describes the confluence of different technologies reaching a stage where they begin to have compounding effects on progress, producing an explosion in progress and trajectory.

The AI singularity specifically refers to the point where AI becomes sentient and transitions into AGI, at which point we have no clue what the repercussions of creating true artificial consciousness will be, especially if it has the ability to self-improve on shorter and shorter timescales.

We are living through the technological singularity, and when they look back 100 years from now they'll probably put the onset somewhere in the late 90s or early 2000s. Things are getting faster and faster with breakthroughs across many different sectors due to cross-pollination of technological progress.

4

TheSecretAgenda t1_j14w4sj wrote

I was thinking about this the other day.

For example, Fleming discovered penicillin in the 1920s. It took until the 1940s, and a massive government investment driven by the war effort, to make it a mass-produced product.

Even if AGI was discovered tomorrow, it could take 10 plus years for AGI to have a meaningful impact on society.

8

Talkat t1_j15b9l9 wrote

I seriously doubt that. ChatGPT acquired users faster than any tech company before it.

6

VertexMachine t1_j16c556 wrote

Changes in digital space are fast. Changes in the physical world are slow. One can influence the other, but there are limits to how fast the physical world can change... or, as ChatGPT would say:

In the digital world, changes can happen very quickly. Information can be transmitted instantly across the internet, and software can be updated and deployed almost instantly. In contrast, changes in the physical world tend to be slower and more laborious. It takes time and resources to build physical infrastructure, manufacture products, and make changes to the natural environment.

However, the digital world can influence the physical world and vice versa. For example, the internet and social media can be used to mobilize people and organize protests or other political action, which can then lead to changes in the physical world. Similarly, physical actions such as building a bridge or planting a forest can have long-term impacts on the natural environment and the quality of life for people living in the area.

3

XagentVFX t1_j15dce9 wrote

Why do people keep saying that? Midjourney accelerated at a crazy pace. Why are you so confident about that? Coping?

5

ihateshadylandlords t1_j15mejv wrote

>Why do people keep saying that?

Why do people keep saying what? Be specific.

>Midjourney accelerated at a crazy pace.

…and by and large, people can keep up with program updates.

>Coping?

lol coping about what? Again, you need to be specific.

0

XagentVFX t1_j15nof9 wrote

Lol, c'mon. I think it's obvious things are going to be moving much faster than we think. I've been in existential crisis mode myself. I'm a CGI artist, but I love AI. I'm not even mad that, in just a year, it has picked up the skills I worked so hard most of my life to achieve. This 4th Industrial Revolution is going to be the big one. Capitalism itself needs to be done away with; it's that drastic. I don't see any jobs that'll stay human-only on grounds of capability. AI will do everything, and better. My only problem is, will the rich give up that glorious feeling of being better than everyone else? Probably not. The film Elysium is looking very realistic.

10

ihateshadylandlords t1_j15podg wrote

No doubt that things are moving fast; but I still think we’re able to keep up with advancements. As far as capitalism goes, yeah I don’t know what companies and governments will do when they automate enough to the point where people can’t afford to buy their products or pay taxes.

1

XagentVFX t1_j15y5lt wrote

It'll all just be UBI. Sam Altman is already setting up these initiatives as the CEO of OpenAI. He gets it. But the benefits of seeing a superintelligent AGI do its thing will be worth the suffering of the fight it'll take for the elite to let go of money. And everyone else, for that matter.

3

cuposun t1_j178gh7 wrote

Just like they gave UBI to the Walmart workers that self-checkout laid off. Oh wait.

2

imlaggingsobad t1_j17ykgu wrote

when unemployment hits 25% they'll have no choice but to mandate UBI

1

cuposun t1_j196w3p wrote

It must be nice to still believe the government will help the most disparaged. I wish I had that optimism. But have you looked around? Why does everyone think there is a utopia ahead? I highly highly doubt it. They don't give a shit about the least of us.

I'd say their idea of UBI is for-profit prisons. Basic needs are met, right? File under: be careful what you wish for.

2

XagentVFX t1_j180edl wrote

Haha. But this is completely different. It will be the majority of the working class around the world. If people go broke too quickly, the elite will lose their influence, people won't respect government, and there will be complete upheaval. The elite print money; they don't need it, they just want our servitude and the pride of feeling power. So they do need to keep us happy to an extent. There aren't any robot armies yet, so their militaries couldn't hold back billions of people. But they can't help it, they need more power, more, more, more. So it's very likely AI will be coming in very quickly. But like we are seeing in China, they'll bite off more than they can actually chew. I predict the people will still want to topple governments and maybe even ask AI to lead instead. But either way, AI will be more than capable of doing any intellectual task very soon.

1

mootcat t1_j15x8fp wrote

The rate that technology is approved for use with the general populace is wildly different from the rate at which new breakthroughs are being made in the field.

Just over the last 2 years, there has been an exponential uptick in the speed and quality of AI improvements, as evidenced by research papers. It has definitely gotten to the point where I can't keep up and feel like there are substantial breakthroughs constantly. Recent examples are 3D image modeling and video generation developing far more rapidly than we witnessed with image generation.

I'll note that these are also only the developments that are being publicly shared. I don't know about you, but I don't feel comfortable projecting even 5 years ahead to determine which jobs will or won't be automated.

3

shakedangle t1_j150o7i wrote

>Tech still has to pass through the proof of concept/R&D/market research/economic feasibility bottleneck

Where regulation is light, we've bypassed these bottlenecks, and are reaping the consequences - speaking of crypto in general - and a lot of people were/are getting hurt.

I would say our inability to properly regulate a space to prevent fraudulent gains or losses is "us not keeping up."

2

QuietOil9491 t1_j15o7va wrote

Define the current level of human intelligence/sentience.

1

Vitruvius8 t1_j177n8u wrote

I think a good example is the fast food industry. It went from "low skilled"/"entry level" to rapidly being replaced, and will be completely replaced in 5 years. How quickly will that snowball roll up to "high skilled" labor? Imagine farming, fast food, grocery stores, all of that, being 5 years from being automated away from people.

1