AdditionalPizza

AdditionalPizza t1_itmatqp wrote

Here's a few:

First

Second

Third

Fourth

At some point in each of these, they casually mention "5 to 10 years" or so when referring to AGI or transformative AI becoming capable of doing most jobs. There are a few more out there, but these were in my recent history.

I recommend watching some videos from Dr. Alan D. Thompson for a continuous stream of cool language model capabilities he explains. He's not a CEO or anything; he just puts out some interesting videos.

And then there's this one here talking about AI programming. Another here; in this interview he mentions hoping people forget about GPT-3 and move on to something else. Hinting at GPT-4, maybe? Not sure.

6

AdditionalPizza OP t1_itfy45b wrote

Moore's Law is probably still going for a bit; with current technology it should hold until 2024 - 2026. I think Nvidia claimed it's dead, and Intel claimed it was not dead shortly afterward. It depends on the exact definition.

It doesn't really matter though; I find it hard to believe these companies will just pack it in and give up. They'll figure something else out, given the decades of research that have gone into it.

It's also not the be-all and end-all of anything either; it's just an easy-to-digest example of exponential growth in tech. And don't rule out AI assisting in figuring out new architectures.
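To show why Moore's Law is such an easy-to-digest picture of exponential growth, here's a toy projection assuming the classic ~2-year doubling period (the numbers are illustrative, not real chip specs):

```python
# Toy Moore's Law projection: a quantity doubles every `doubling_period` years.

def projected_count(start_count: float, years: float,
                    doubling_period: float = 2.0) -> float:
    """Exponential projection of a quantity that doubles on a fixed period."""
    return start_count * 2 ** (years / doubling_period)

# A decade of 2-year doublings is 5 doublings, i.e. a 32x increase.
print(projected_count(1.0, 10))  # 32.0
```

The point is just that a fixed doubling period compounds fast: a decade is only 5 doublings, yet it's a 32x increase.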

2

AdditionalPizza OP t1_itck3o1 wrote

I agree with that definition. There's always the possibility (we don't know what's on the other side of the singularity) that it could propel us instantaneously into weird tech, but I don't really bother debating that kind of stuff. What's happening now is plenty exciting for my brain.

4

AdditionalPizza t1_itciz3i wrote

>AGI is not ASI. People here need to stop misrepresenting the ability of AGI. It won't be smarter than the average human (by definition)

The important part of AGI is the G, which stands for general. By definition, AGI will have the ability to do whatever humans can do, and the very nature of artificial intelligence suggests it will be able to do everything much, much faster and much more accurately. ASI has a fuzzier, more debatable definition; it's used for something billions of times more intelligent than humans, with processing power >= that of all living human brains collectively. ASI will most likely have abilities beyond ours, but we have no idea at this point.

An AGI could very well plan the logistics of reversing climate change and create the technology to do it effectively. Realistically, humans could stop climate change; we just don't.

9

AdditionalPizza OP t1_itcgkgk wrote

That's what I'm saying when I talk about programming being automated for efficiency. We will have behind-the-scenes transformative AI (we already do, but it will increase over the next 5 years), which is potentially accelerating the rate of progress.

It's hard to gauge, but it's there. I'm at the point where I feel like I'm too "optimistic," but it's not impossible that I'm being conservative in some aspects.

I think robotics and medicine will advance much more quickly than people expect, similar to the LLM gains today. I think LLMs still have a lot of uptick ahead, though. Of course I could be wrong; I'm not a prophet and don't pretend to be. I'm just going by the generally accepted graphs we've all seen.

2

AdditionalPizza t1_itbr8e9 wrote

Reply to comment by Spoffort in U-PaLM 540B by xutw21

There's no point in wasting money/time on a large model right now, I agree.

At least until they're satisfied enough to try another larger model again.

2

AdditionalPizza OP t1_itbdu7b wrote

The current LLMs work in a general way; they just need to be scaled to include a larger pool of abilities.

Don't get me wrong, there are plenty of hurdles in the way, but let's just wait and see what the next generation of models can do. Hopefully within a few months we'll have an idea.

Odds are, with scaling, most jobs will be replaceable within 5 years. All data entry jobs, anyway, which account for a very significant amount of human work. Anything that requires a human to enter information into a computer should be replaceable, and fairly likely most or all jobs that involve logistics and planning.

2

AdditionalPizza OP t1_itbd93c wrote

The problem with a very slow transition, or one where people don't even pay attention, is for the people directly affected by it. You don't care if your neighbour is unemployed, because you got yours.

They will have to try to study a new career path for a couple of years or start from the bottom of a trade, all while being unsure whether their next career of choice will disappear before they advance at all.

A quick, near-instantaneous transition would be amazing, but it's extremely unlikely.

I'm not sure which end of the spectrum it will be, but my fingers are crossed for quick enough to reduce suffering for as many people as possible.

4

AdditionalPizza t1_it8zew3 wrote

Reply to comment by mj-gaia in U-PaLM 540B by xutw21

Basically, look at it this way: scaling works, but we haven't scaled massively again (yet). Also, in ELI5 terms, they're discovering significantly "better ways to scale," in a sense. So it's going to be bonkers when we do a next-generation scale-up.

36

AdditionalPizza OP t1_it8v9c1 wrote

>The number of job positions the economy supports is not hard capped at some maximum value.

No, you're right that it isn't. But I think time plays a large factor here. If enough people's employment is displaced suddenly, and automation is gobbling up enough jobs, then we have more newly unemployed people per month than new human-viable jobs created per month. It may very well settle itself, but if the rate is high enough that won't matter. You can't have a large portion of society unemployed for very long; chaos ensues.

Unless, of course, there are a lot of menial labour jobs to go around, though that would probably result in the same situation. I think by the time we have physical robots able to do labour, we're well past the point where society needs to change.

1

AdditionalPizza OP t1_it7z0r3 wrote

I would suggest trying something that is self-sufficient, more so than an "employable" skill.

Take it up as a hobby now, and if you truly are in the first wave, you'll maybe have some totally unrelated skill you can use for passive income in a market that isn't entirely dictated by IT.

2

AdditionalPizza OP t1_it7hczg wrote

Yeah, we have totally opposite opinions, haha. I mean, we have the same foundation, but we go in different directions.

I believe increasing human productivity with AI will undoubtedly lead to a quicker rate at which we achieve more adequate AI, and then the cycle continues until the human factor is unnecessary.

While I'm not advocating full automation of all jobs right away, I am saying there's a bottom rung of the ladder that will be removed, and when there are only so many rungs, eventually the ladder won't work. As in: chunks of corporations will be automated, and there won't be enough jobs elsewhere for the majority of the unemployed.

2

AdditionalPizza OP t1_it7dt3m wrote

All I can really say is issues like that are being worked on as we speak and have been since inception. Assuming it will take years and years to solve some of them is what I'm proposing we question a little more.

But I'm also not advocating that fully automated systems will replace all humans in a year. I'm saying a lot of humans won't be useful at their current jobs when an overseen AI replaces them, and their skill level won't be able to advance quickly enough in other fields to keep up, rendering them unemployed.

3

AdditionalPizza OP t1_it734wc wrote

>I'm also pretty confident that there will be a transition period where AI will augment, rather than replace

Yeah, don't get me wrong, I don't even mean full automation at first; I mean automation that increases efficiency. Job losses will start to become more and more commonplace starting in 2025, all while LLMs are assisting in breakthrough after breakthrough. We don't need full autonomy of the workforce, just enough that we can't expect our current system to work at all.

5

AdditionalPizza OP t1_it72r5k wrote

Given that robotics is an IT industry (obviously), growth there is going to boom even more than it has in the past few years.

I don't think a lot of people are ready to expect it over the next couple of years. I think 2025 is when it will prove useful enough to most industries that it starts being deployed en masse in different sectors.

6

AdditionalPizza OP t1_it6w0ya wrote

I don't love talking about tech I have no idea about, but I'd argue in that case a sort of cloud computing could be possible: expanding our brains with a "server" or something.

But that's way beyond anything I know about.

3

AdditionalPizza OP t1_it6vmqy wrote

>Automation is coming for everyone, artist, programmer, office worker or physical laborer.

I won't speak for them, but personally, when I talk about this I mean intellectual or digital jobs go first, with robotics not long after. Labour jobs will inevitably take more logistics to replace, as it's not just software a company can install. I won't pretend to be able to predict that, but I think it won't come much later than the point where there's already an unemployment crisis on our hands. It won't really matter at that point.

I don't think full automation of everything will happen that quickly, but it really doesn't need to be full automation. It needs to be 10 to 15% of the workforce jobless with no skills outside of their extinct domain.

5

AdditionalPizza OP t1_it6v5e7 wrote

This is exactly what I'm saying. It's time people stop making excuses based on how we were thinking 5 years ago.

This is happening, and it's happening now. We all waited for this, it's just happening in a way we didn't expect. But in hindsight this makes so much more sense. The digital jobs should be the first to go. Yes they take high human skill, but we should've had the foresight that high human skill != high AI skill. AI are born digital. They are masters of intelligence.

With that being said, robotics is going to feel this effect as well. I think we can agree that when we say intellectual jobs go first, it's not first by a mile; they're first on a scale of months to a year or two. Implementation of robotics in the real world is a challenge we can't really predict at this point, though.

23

AdditionalPizza OP t1_it6t7kh wrote

That's definitely a wild card, politics. I don't foresee politicians giving up the reins easily. But I don't think that's even a possibility aside from some ultimate AGI being let loose and all that.

Realistically, I think policy makers won't be able to move quickly enough to deal with a lot of this, unless they just try to ban AI from automating jobs. People will be unemployed and corporations will be lining their own pockets.

I'm not much of a political person though, and it's hard to predict human nature.

5

AdditionalPizza OP t1_it6snr9 wrote

The tough part about thinking exponentially is that we have to base it on something. When it's a chart over the span of 20 years it's easy: connect the dots and wait. We've been doing that for decades.

When we're at a point that the rate is advancing so quickly, and the timeframe is less than a decade we need to fight linear thinking.

Looking back, 5 years feels like 5 years of progress. But the next "5 years' worth" of progress, measured at the past rate, should arrive in roughly 2.5 years if the rate has doubled.
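The halving arithmetic here can be sketched as a toy calculation (assuming, purely for illustration, that today's rate of progress is double the average rate of the past 5 years):

```python
# Toy model of the "5 years of progress in 2.5 years" claim:
# if today's rate of progress is `speedup` times the old average rate,
# the same amount of progress takes 1/speedup as long.

def time_for_same_progress(past_years: float, speedup: float) -> float:
    """Years needed to repeat `past_years` worth of progress at the new rate."""
    return past_years / speedup

print(time_for_same_progress(5, 2))  # 2.5
```

This is obviously a simplification (the rate isn't a single number, and the doubling factor is a guess), but it's the arithmetic behind fighting linear thinking.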

>out of curiosity, what date you have for AGI?

Not the answer you want, but I have no idea. I don't think it really matters, either. LLMs will likely prove significant enough that we'll be making huge advances in the coming years.

But if I had to guess based on my limited knowledge, I'd say prior to 2029: 2027 to 2028, so long as LLMs either directly lead to AGI or can solve the hurdles we need to clear to get there. We have things like AlphaTensor; who knows what else we'll come up with in a year or two.

8