AdditionalPizza OP t1_it6rg5b wrote

Yeah, I was saying it as an example, and in a multiplicative way: 10% on top of the already 50%, that sort of deal. But I didn't want to use numbers like 1000%.

>Probably not the singularity or AGI, but a massive shift with the narrow AI we already have alone.

Agreed. But that massive shift is going to be a ride before AGI. I know my post sounds very matter-of-fact, and I debated the potential of looking like an idiot in 3 years. But 2022 has been hard to keep up with, and barring some external force causing an immediate halt to technological progress, I'm very certain that in 2025 and beyond we will start seeing societal shifts across industries.

The further past 2025 we get the more disruptions we'll see.

8

AdditionalPizza OP t1_it6qkj8 wrote

Yeah, I saw something roughly like that before. It blows my mind that people in the industry refuse to believe it has much effect and claim nobody is using it.

It's based on Codex, and the LLM trajectory is moving so quickly. Everyone thinks their job is too complex for AI, right up until it isn't. People are underestimating LLMs' abilities.

11

AdditionalPizza OP t1_it6q677 wrote

I think a lot of people equate AGI to a human in artificial form. But 'general' is the key word. It will have the general ability that humans have, but it will be so much faster than a human that whichever companies get close to AGI first are going to shoot up in value at breakneck speed.

Any company that gets to AGI or even close first will start making waves across all industries like energy, medicine, housing, automotive, etc.

I don't think many, if any, significant companies are going to stick to their roots and plug along with humans. Corporations exist to make investors money, period. They don't exist to make investors money by keeping humans on board for the sake of values or "what's right." They make money by any means necessary so their stock price goes up. Small businesses, maybe? But small business is dying anyway.

Any company that doesn't use the advanced productivity that AI will bring will fall into insignificance quickly. And this isn't a case of a board of directors carefully deciding whether to implement AI and lay off all employees at once. It will replace some employee tasks. Then, two months later, another wave. Then, a month after that, more employees. And so on.

9

AdditionalPizza OP t1_it6o96e wrote

They may not be the be-all and end-all, though they sure are looking like a very significant step at the very least.

But as I've said in the comments before, this post is about the time before AGI. We don't need AGI to see massive disruptions in society. I believe LLMs are the way we will get there, and language models are already "good enough" to increase productivity enough, across enough IT sectors, that we will start seeing some really big changes soon.

Advancements like this are going to lead to more powerful LLMs too. I highly suggest reading this article from DeepMind, as the implications are important.

4

AdditionalPizza OP t1_it5f0tw wrote

Hopefully it happens quickly. Some people seem to want to hold onto jobs for as long as possible. But I'd rather most jobs go quickly than slowly and painfully.

If it goes too slowly, policy will lag far too much to ever get ahead of it.

12

AdditionalPizza OP t1_it5erg0 wrote

Oh I gotcha.

I know what you mean, but I disagree to an extent. There aren't a ton of terms for this stuff, really. It'd be confusing if there were, but the ones I know of are pretty useful and will become much more commonly used.

Transformative AI is exactly that: AI that is transformative. It will make huge changes shortly. We need a way to describe AI that's more transformative than Siri but not at the level of AGI — the stuff that automates white-collar workers' jobs.

Proto-AGI is important because there will almost certainly be claims of AGI that aren't full AGI, and that needs to be distinguished somehow. It basically means beta AGI. Arguments for proto-AGI will most likely start coming with some LLMs soon.

But yeah, I feel you.

2

AdditionalPizza OP t1_it4ygqu wrote

I can see the argument here for sure. But it's not up to general society; corporations will do this first. Think of nearly all support chat and calls as a start. When you call now, you get a shitty robot you have to push buttons to get through, or a chat system you have to fight to reach a human. Those would be replaceable today and would save enormous amounts of money. All it takes is a small LLM a corporation could train on its products/services.

Decision makers who see the dollar signs absolutely will. They outsource products overseas at inferior quality because they don't care. They shrink consumable product sizes and charge more money for them because they don't care. As long as their quarterly profits go up, they don't care how the customer feels.

4

AdditionalPizza OP t1_it4x6zl wrote

Do you think LLMs have zero programming involved?

If I'm not making sense to you, it's because you don't want to make sense of it in the first place.

LLMs will help develop and train new LLMs, soon if not already. Whether directly or indirectly doesn't even matter at this point, but they will directly in the near future.

7

AdditionalPizza OP t1_it4w71z wrote

>so long as the courts don't recognize AI legal advice and the public feels more comfortable getting a real lawyer, a good AI lawyer program won't make a big impact.

That's the same point everyone misunderstands. Transformative AI != full automation from the start.

It will replace the lawyer's law clerks. How many law clerks can say, "Well, I'll just use my skills and become a lawyer"? Very few. They will be unemployed. This will happen across all industries, rapidly, and more advanced versions will come out faster and faster.

We have LLMs, released earlier this year, that can nearly do this. There will probably be pushback, don't get me wrong. But the lawyers who choose productivity and money over employing the people below them will take on more cases, earn more money, get better advice, and choose better clients to win more cases.

9

AdditionalPizza OP t1_it4v3ys wrote

Coding productivity is a bottleneck for every IT industry. But that's not the point.

LLMs will target these industries, and LLMs are written by programmers. Programmers who can write code and design LLMs more efficiently will make better LLMs.

LLMs that can help design better LLMs, which in turn are targeted at boosting productivity in every other sector.

11

AdditionalPizza OP t1_it4uiwn wrote

I'm assuming you didn't get the gist of the post, then. I'm not talking about full-dive VR and nanobots building dreams.

I'm talking about office work, research, and programming being disrupted after 2025 and before AGI. Every industry that involves IT will be affected, and the productivity of those sectors will skyrocket. This will inevitably lead to low-skill layoffs at first, then echo up the chain of command.

24

AdditionalPizza OP t1_it4tuqe wrote

That's the sort of thing I expect to start happening in (or around) 2025, followed by new industries within the scope of LLMs. And these LLMs will all be much more impressive than the ones of 2020-2022.

4

AdditionalPizza OP t1_it4tj17 wrote

Between now and 2025 I think we will have 5 years of progress (by 2020 standards). I know that's a weird way of putting it, but I think that's how our attempts at exponential thinking go. If I were talking to someone in the general public, I'd say 10 years of progress between now and 2025 (by 2015 standards).

It will be progress with LLMs, so it will be very exciting. But yes, if I'm right, I hope we are more conscious of its consequences.

8

AdditionalPizza OP t1_it4skby wrote

I think that's not quite correct. I'm not talking about AGI/ASI in the post as being the Transformative AI either; it's too speculative to comment on something like ASI remaining contained or whatever.

But while I agree the bottleneck is production and distribution, software is very easily distributed. We don't need labour jobs being taken over by robots right away. Programmers, accountants, lawyers, researchers, any intellectual career: these can all be very easily disrupted. I'm not talking full automation either; I'm talking about a tipping point for policies and governments to change. Transforming society. An AI that increases efficiency in robotics, distribution logistics, production techniques? Each of these is an overnight email to swathes of employees being laid off, and it will happen more and more frequently. I believe it will start soon: the tech to really automate significant portions of jobs, leading to layoffs, will be created by 2025, and after 2025 the dominoes will fall. That's what I predict, anyway.

We don't need AGI to disrupt everything. I don't think governments and policy makers will catch it in time either.


>DALL-E is cool but until it is used widely in commercial applications

Text-to-image AI is already being used commercially. Like, a lot. Photoshop will soon be mostly replaced by AI image editing as well.

16

AdditionalPizza OP t1_it4l93q wrote

And I think it will happen at a rate faster than people are currently projecting. The assumption that we need AGI, and that "2029" date, is nonsense. So many more jobs can be replaced within a generation or two of LLMs.

I'm not even trying to be optimistic; it might kind of suck for a lot of us. It's like pushing a stalled car off a railroad crossing with an oncoming train: it appears to be moving slowly until it doesn't.

12

AdditionalPizza OP t1_it4ke0n wrote

Every single target that LLMs have had in their scope so far has started out slow, then become useful to the general public and private sectors. A ton of people use Copilot, what are you talking about? And Copilot is powered by Codex, and Codex is being updated with self-correction and testing. It's a matter of time at this point.

19

AdditionalPizza OP t1_it4jut4 wrote

>However, if the same goals are exponential in difficulty then an exponential growth could just be linear.

I agree with you here, and that's part of what I'm saying in the post. Increasing the efficiency of programmers through AI like Codex increases the growth rate of all sectors across the board.


>If a set of goals are linear in difficulty then exponential growth will get us to those goals in exponentially lower times

Maybe you can explain this better, but it makes no logical sense to me, assuming the starting point is the same.
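For what it's worth, here's a toy model of the quoted claim (the growth rate and difficulty curves are entirely my own assumptions, just to make the idea concrete): if capability grows exponentially while goal difficulty grows only linearly, the time between successive goals shrinks; if difficulty also grows exponentially, progress looks linear.

```python
import math

# Toy model: capability grows exponentially, c(t) = e^(r*t).
# A goal is "reached" when capability first equals its difficulty.
r = 1.0  # arbitrary growth rate, chosen for illustration

def time_to_reach(difficulty):
    # Solve e^(r*t) = difficulty  =>  t = ln(difficulty) / r
    return math.log(difficulty) / r

# Linear difficulty: goal n requires capability n.
linear_times = [time_to_reach(n) for n in range(1, 6)]
# Exponential difficulty: goal n requires capability e^n.
exp_times = [time_to_reach(math.exp(n)) for n in range(1, 6)]

# Gaps between consecutive linear-difficulty goals shrink over time,
# so those goals arrive "exponentially" faster...
linear_gaps = [b - a for a, b in zip(linear_times, linear_times[1:])]
# ...while gaps between exponential-difficulty goals stay constant,
# so exponential growth looks like steady linear progress.
exp_gaps = [b - a for a, b in zip(exp_times, exp_times[1:])]
```

Under these (made-up) curves, `linear_gaps` is strictly decreasing while every entry of `exp_gaps` is identical, which is one way to read the two claims being compared.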

5

AdditionalPizza OP t1_it49ija wrote

I think things that are close enough to AGI in almost every aspect will make large-scale disruptions to society and humanity. AGI will probably be claimed before true, full AGI is developed, and at that point it probably won't matter whether something is fully an AGI or not. I think these proto-AGIs will arrive much sooner than we augment ourselves. 5 years, maybe. Possibly 3 or 4. My answer will probably change in 6 months to a year.

24