
AdditionalPizza OP t1_isq6u2z wrote

>You don't ever have to worry about that.

Every single post I've ever submitted here has had that happen. I talked to the mods, and they said my threads are too low-effort/speculative. But anyway.

​

>But it's simply incapable of dispensing with the need for programmers

What do you mean by this? Incapable how?

But yes, if they migrate to low/no-code platforms, that's exactly what I'm getting at. The required skill level will plummet, and there will be a huge pool of people capable of turning text into code. Senior engineers will likely remain highly paid, but their positions will be extremely competitive.

3

AdditionalPizza OP t1_isq65kh wrote

So, you think it will increase the demand for developers initially? At the same pay scale and for how long? A decade? 2 years?

Web devs seem like they're always in demand, and I agree not every programmer will be able to transition to state-of-the-art environments. But I can't imagine large non-tech companies maintaining demand for many programmers when they already tend to operate with too few for the job. I imagine those companies would keep far fewer human programmers, working with something like Codex to get as much done as possible while understaffed.

I also think text-to-code AI will severely reduce the skill needed to be a programmer in the first place (outside of state-of-the-art work), thus reducing wages and demand.

I'm trying to imagine how this won't gravely affect employment in the field, especially as the AI gets better and better at writing code.

5

AdditionalPizza t1_isppjrh wrote

Are we talking full automation, or partial? Full meaning every single job in that sector, partial meaning anything between zero automation and full automation.

In the partial automation category, anything that just requires a brain will be automated first. Anything that requires a body will probably be last. There might be a few people left to do certain tasks, but ultimately everyone else would have to be on some kind of universal income at that point anyway, if money even exists in the same way it does now.

As for in the coming years, I think most of the jobs you listed will be mostly automated by 2030. I think full automation could be a while but who knows.

5

AdditionalPizza t1_ispk568 wrote

>I'm guessing it's more of a search engine type thing)

It isn't. It's fed training data, and then that data is removed; it literally learns from the training data. Much like when I say the word river, you don't just imagine a river you saw in a Google image search. You most likely picture a generic river that could be different the next time someone says the word, or maybe a quick, rough image of a river near your house that you've driven by several times over the years. Really examine what the first thing that pops into your head is. Is it always EXACTLY the same? Is it very detailed? The AI learned what a river is from data sets, and it understands when it "sees" a painting of a unique river, the same as you and I do.
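To picture what "learned, not stored" means, here's a toy sketch in Python (the numbers are made up; real models learn vectors with thousands of dimensions from their training data): a concept like "river" ends up as a point in a vector space, near related concepts, with no source images kept anywhere.

```python
import numpy as np

# Toy, hand-made vectors standing in for what a real model learns
# from millions of examples during training. Nothing here is a
# stored image; a concept is just a point in a vector space.
embeddings = {
    "river":  np.array([0.81, 0.10, 0.65]),
    "stream": np.array([0.78, 0.14, 0.60]),
    "pizza":  np.array([0.05, 0.92, 0.11]),
}

def cosine(a, b):
    """Similarity of two concept vectors (1.0 = same direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["river"], embeddings["stream"]))  # high: related
print(cosine(embeddings["river"], embeddings["pizza"]))   # low: unrelated
```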

​

>It can't 'imagine', it can only 'access what we've given it'.

This is exactly what the OP asked for an answer to. You say it can't imagine something, that it just has access to the data it was given. But how do humans work? If I tell you to imagine the colour "shlupange," you can't. You have no data on that. Again, I will stress: these transformer AIs have zero saved data in the way you're imagining, as if they just search it up and combine it all into an answer. They do not have access to the training data. So how can we say "well, it can't imagine things, because it can't..."

...Can't what? I'm not saying they're conscious or have the ability to imagine. I'm saying nobody actually knows 100% how these AIs come to their conclusions beyond using probability to pick the best answer, which appears similar to how human brains work when you really think about the basic process happening in your head. Transformers are a black box at a crucial step in their "imagination" that isn't understood yet.
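As a rough illustration of the "probability for the best answer" part, here's a minimal, hypothetical sketch of what one generation step looks like from the outside: the model's learned weights turn a prompt into a probability distribution over possible next words, and the output is sampled from that distribution. No lookup into stored training data happens at this step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend a trained model, given the prompt "the cat sat on the",
# produced these scores (logits). The training data itself is long
# gone; only learned weights that produce scores like these remain.
vocab = ["mat", "roof", "moon", "aardvark"]
logits = np.array([3.1, 1.4, 0.2, -1.0])

# Softmax turns the scores into a probability distribution...
probs = np.exp(logits) / np.exp(logits).sum()

# ...and the next word is sampled from it, which is why the output
# can differ from run to run, much like your mental image of a "river".
next_word = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_word)
```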

When you're reading this, you naturally just follow along and understand the sentence. When I tell you something, you instantly know what I'm saying. But it isn't instant; it actually takes a fraction of a second to process. Can you describe what happens in that quick moment? When I say the word cat, what exactly happened in your brain? What about turtle? Or forest fire? Or aardvark? I bet the last one tripped you up for a second. Did you notice your brain trying to search for something it thinks it might be? You had to try to remember your training data, but you don't have access to it, so you probably end up making up some weird animal in your head.

31

AdditionalPizza t1_ispeqhe wrote

I love this debate, it happens over and over about this stuff.

People think it's using a database of images or whatever. But the training data isn't stored like that, and the model doesn't have access to it; it literally learned from it. Others just dismiss it because "we're not there yet," with no real further explanation.

Do I think it's conscious? Probably not; I think it needs more senses for that, to truly understand what "feel" and "see" mean. But even that doesn't necessarily matter. As a human, I am incapable of really understanding another being's experience of consciousness, human or not. It's like the colour red: I can't prove that you and I both see the same colour we call red.

But what we do know is that we don't understand how human consciousness works, so why are we so quick to say AI doesn't have it? I'm not saying it does, just that we aren't 100% sure. Two or three years ago I would've said no way, but at this point I'm starting to think Google (or others) may have achieved far more in the realm of self-awareness/consciousness than what's publicly known about AI now. They're actively working on giving AI those other senses.

6

AdditionalPizza OP t1_isgin0q wrote

Oh, then I'm afraid our signals got mixed up somewhere along the line.

I do wonder whether the singularity will affect those who refuse to take part in the technology before it. As in, if some people choose to live off the grid, will they be left alone? I don't know; topic for another time, I suppose.

1

AdditionalPizza OP t1_isgfk33 wrote

>I don't think you're actually disagreeing with me any more.

Honestly, I don't think we ever were, aside from whether it requires a self-aware AI, the ways a singularity could be achieved, and the definition of it.

We both agreed it's a moment in time we can't predict beyond. I never stated anything less; the original commenter stated something different, and I disagreed with them.

1

AdditionalPizza OP t1_isgcvl7 wrote

The singularity is literally a point in time, though. It's not an ongoing event. It possibly goes: our social structures > singularity > we no longer have our social structures.

I don't think you understand what I'm saying. To be honest, I don't understand what your argument is either. I don't even know what we're debating at this point.

1

AdditionalPizza OP t1_isgcin7 wrote

I think it's correct to assume different countries will attempt different things. It's really hard to say, because while we can compare revolutions in history, that doesn't really help us understand the implications of this one. This isn't the industrial revolution; this is a replacement of the entire workforce, sector by sector. The first sectors to go might find other employment. Then more sectors will go, and then more.

We're waiting on the first big sector to go nearly or fully automated. Graphic designers aren't a sector; they're part of a creative sector. When entertainment and art are mostly automated, we will see. But it could be medicine, could be legal, could be computer sciences, could be retail. It could be one of the sectors that involve labour, though that seems less likely at the moment; if there's a breakthrough in robotics soon, that will spell the end of those sectors too.

One sector will probably knock, I don't know, 5% of the workforce out? Maybe more? That's an immediate crisis. Then another. Then another.

​

>I still think there will be work for people, but it may be "make work" of the FDR New Deal style.

That's probably a solution we'll see attempted, but I don't want to see that. It's a bandage on a giant wound. It might work for a few months, but then more people become unemployed.

OpenAI's Codex is crazy. That tech will accelerate all facets of IT, which means we'd be increasing the rate of exponential growth by orders of magnitude.

But you know, I hope your view on it is more correct than mine, at least for this transition period coming up soon enough.

1

AdditionalPizza OP t1_isg9xwj wrote

You're very set on one sci-fi writer's 1993 version being the absolute one. He didn't even come up with the term; he just wrote a popular theory about it.

If you want to be so concrete about one man's theory, you should probably at least go with the original, not just the most popular. The entire definition was originally a rate of returns on technology that surpasses human comprehension. That's it, and I'm sticking with it.

1

AdditionalPizza OP t1_isg4pbb wrote

>That's basically the foundational document of singularity theory.

Yeah, I know. If you consider it the definition of a word, I can understand not wanting to change it. If you consider it a theory, then, well, theories evolve all the time.

But as I said, I was absolutely not talking about the singularity. I mentioned it because the original comment was referring to it, and I said what they were describing sounded less like post-singularity and more like transformative AI. I was actually mostly avoiding talking about the singularity in this post, more about pre-singularity.

1

AdditionalPizza OP t1_isg1q1y wrote

Yes, I've seen that very old write-up of it. There are several more modern viewpoints on what a technological singularity could be; they include many different ways of achieving it, but they all conclude the same basic thing: unfathomable change and uncontrollable, runaway technological innovation.

Regardless, we can agree to disagree on that point. I was trying to avoid talking about AGI and self-aware AI anyway.

1

AdditionalPizza OP t1_isfw4w7 wrote

Right now, those billionaires. People assume I'm suggesting drugging them to be more empathetic. I'm suggesting a possible optimistic scenario for those who fear billionaires and world leaders will use technology to suppress us further.

1

AdditionalPizza OP t1_isfvw73 wrote

If you have the exact blueprints of how the singularity will go down, then sure. But we have no idea yet whether AGI is absolutely necessary for the singularity to occur. We don't even know if self-awareness will be possible in AI, so it's possible a single entity could control an ASI and use it for whatever they want. We have no idea.

1

AdditionalPizza OP t1_isfv03z wrote

>The same thing will happen over the next 20 years.

The problem is that AI would be cheaper and perform orders of magnitude more efficiently. There just won't be a job that AI can't perform better and for less cost, 24/7. At that point, why would we even be striving for work? We should be striving to be unshackled from working half our lives.

>devalue individual worth

This could be the hard pill to swallow during an upcoming revolution for a lot of people. That kind of "value" and "worth" will cease to exist. People will have to come to terms with "unemployment" no longer being a thing; it's simply that mandatory employment disappears.

The part I worry about is the people who believe employment is essential and who will prolong the suffering of many by dragging others to the bottom before the eventual collapse of the system. UBI is simply a stop-gap.

1

AdditionalPizza OP t1_isfsq7p wrote

I agree. My post is just optimism for the process in which that could have a possibility of happening. Hopefully we don't focus so much on fearing AI alignment that we forget to fear each other. Both are equally important, for the majority of society anyway.

2

AdditionalPizza OP t1_isfqvb1 wrote

Hmm, by that reasoning I would personally say your future self is less separate than the others, since your actions in the present directly affect what that self will feel. But I suppose in this theory the present doesn't even exist, so I don't know.

I can't remember what someone else thinks though, so I don't know how much I agree.

But anyway, what were you getting at?

1

AdditionalPizza OP t1_isfq9ok wrote

>the billionaires you are talking about “force medicating”

I'll start there, just because that's not at all what I said, haha. They would possibly take a cure-all for longevity and healthy-aging reasons. Not capturing billionaires and shoving a pill down their throats. I imagine it as a daily mix of medications and vitamins to make people healthier and prevent diseases, and people would regularly get diagnostics done to stay in top health.

>Sure, one can argue their treatment of workers could be better, but at the end of the day Bezos had to solve problems to get rich.

Sociopathic reasoning. "Eh so what if a ton of people suffer so I can have everything."

And I am fully aware capitalism is what we have, and that it's currently the best solution, or at least appears to be. I'm talking about the future, when this system simply won't work anymore, at least at some point, if we subscribe to the idea of a technological singularity.

0

AdditionalPizza OP t1_isfovt3 wrote

I agree, and that's the optimism of the post. Hopefully we get a sort of revolution long before the singularity. Otherwise it will be a very brutal road toward the singularity. But personally, unless the singularity occurs much quicker than I think it will, there will definitely have to be a transformation of society in the coming years.

1