AdditionalPizza OP t1_ivvr5x4 wrote

I've seen that, and I wouldn't exactly call it close to what I'm talking about in terms of usability, mass adoption, convenience, or ability. I think people are underestimating how powerful it can and will be in the very near future, and the possibility of it replacing most if not all regular internet browsing.

3

AdditionalPizza OP t1_ivv0bqp wrote

>If the vision is to use current models hooked into the more commonplace virtual assistants, wouldn't this exist already, just in a less popular form?

Basically, or at least nearly. But in the grand scheme of things, most people don't know or care about it. Until our smartphones come with them pre-installed, I don't think most people will take notice in a short enough timeframe to call it revolutionary for society.

1

AdditionalPizza OP t1_ivuzq1w wrote

>People already talk to their phones and 'smart home' devices it'd just be bumping up the abilities a notch.

So you think it will be just "a notch" rather than substantially useful? I believe they will become more useful than googling things yourself. When I talk about voice assistants, virtual assistants, or whatever you want to call them, I also mean the ability to type queries. So in that case, they could become, for most people, the "middle man" between user and internet.

On top of that, they could send the productivity and general knowledge of people who use them far beyond those who don't. Compare, say, an elderly person who has never touched a smartphone to a 20-something college student in terms of technology know-how. I think the gap between someone using their future assistant and that college student today will be greater than the gap between the college student and the elderly person. I also think it will likely make the internet much more accessible to people who currently don't use it extensively, and it will have a greater effect on the average person's life than the internet itself did over the past ~20 years. It will hopefully be like conversing directly with the entirety of the internet.

I agree with your last 2 paragraphs. But the business side of things won't really show everyday average people what AI is capable of.

2

AdditionalPizza t1_iu5rm91 wrote

Turing test as in, you wouldn't be able to tell which subject you're conversing with is an AI and which is human? An AI today could probably pass that test if it were programmed and prompted for it. It might need a more robust memory, though. Honestly, I feel like it would be obvious which is the AI because it would "outclass" the human in conversation. You can try to trick them with things like looping back to previous parts of a conversation, telling them they said something they didn't, calling them a liar, all sorts of things. But it'd be pretty easy now to fool most people if someone wanted to create an AI to do that, assuming it's a blind test through text with subject A and subject B on the other side of a wall or whatever. If someone online asked you to prove you're human through text, good luck.

If you mean a test of whether or not the AI is conscious, I don't think that will be absolutely provable, possibly ever, depending on definitive proof in the future. I'm of the belief that when something reaches a certain threshold of intelligence, has one or maybe two different senses, and has total autonomy, you reach consciousness. So long as someone or something can communicate with itself through thought and has the ability to imagine, it should be considered conscious.

3

AdditionalPizza t1_iu048nq wrote

A large language model is a transformer. An LLM works with tokens, which are basically parts of words, like syllables and punctuation or spaces. During training it forms parameters from data. The data isn't saved, just the way it relates tokens to other tokens; if it were a game of connect-the-dots, the dots would be tokens and the parameters would be the lines. You type out a sentence, which is made of tokens, and it spits out tokens. It predicts which tokens to return to you based on the learned probability of one token following another. So it has reasoning based on the parameters from training, and some "policies" it's given during pre-training.

I think that's a valid way to describe it in simple terms.
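A toy sketch of that connect-the-dots picture, assuming nothing about any particular model: count which tokens follow which in a tiny corpus, then predict the most likely next one. A real transformer learns these relations as billions of parameters, not an explicit count table, but the "predict the next token by learned probability" idea is the same.

```python
# Toy next-token predictor (not a real LLM): the "dots" are tokens,
# the counts stand in for the learned "lines" between them.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count how often each token follows another during "training".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the token most often seen following `token`, or None."""
    counts = follows[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Scale the corpus up to most of the internet and swap the count table for a trained network, and you get the gist of what the comment above describes.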

2

AdditionalPizza t1_itz5i2x wrote

I don't think there will be a huge relative difference between the generation or two of AI preceding AGI and the generations directly following it.

The proto-AGI will probably be claimed to be AGI and it will make headlines, but people will argue it isn't. However, it will be more than general enough to displace a lot of jobs. Even the AI long before it, the 2023-2025 models, will be good enough to automate a lot of jobs with specific fine-tuning, but it will take another generation of models before corporations adopt and deploy them at scale, sometime between 2025 and 2027. Models are already working behind the scenes at major companies like Netflix, Meta, Nvidia, Google, Amazon; you name it, they're most likely using them. The 2023 generation will start being used in the background at non-tech-focused companies more. Healthcare breakthroughs will start being realized by 2024/2025, but I can't speak to how long it will take those to trickle down to the public.

When true AGI is created, there will still be people claiming it isn't AGI, but in hindsight we will confirm it. It will be murky, though, because even before AGI our models will be incrementally self-improving. I think we might define AGI as the first model that doesn't require human intervention to train, or possibly the first model with a general agent in a capable robotic body.

I believe predictions beyond 2025/2026 are pretty much impossible to make at this point for the general public.

Everyone (myself included) keeps recycling this notion of creative and intellectual jobs going first because replacing them doesn't necessarily require robotics, but I think that's only partially true. Those jobs will see layoffs first, and already have, but full automation requires robotics anyway. I think we were sort of wrong before in thinking labour and low-skill jobs would go first, but maybe not totally wrong, or at least not off by decades or anything.

Robotics is going to make massive strides after 2025, I don't know how quickly but I think 2025-2026 will be for robotics what 2022 was for language models. Probably after a couple more years robots with AI will be an expensive proposition, but ultimately worth it for large corporations to replace human workers with. I can't imagine predicting details about this though.

4

AdditionalPizza t1_ityza30 wrote

By adding RL algorithms into pre-training, the model is able to learn new tasks without having to fine-tune it offline. So it's combining reinforcement learning with a transformer. Another benefit is that the transformer sometimes produces more efficient RL algorithms than the originals it was trained with.

RL is reinforcement learning, a machine learning technique, which is like giving a dog a treat when it does the right trick.

It's kind of hard to explain simply, and I'm not qualified, haha. But it's a pretty big deal. It makes the model way more "out of the box" ready.

3

AdditionalPizza t1_itwyz3z wrote

This guy does yearly updates on quantum computing on YouTube. He provides all the links for further reading.

It's a pretty confusing subject, but from what I gather, scaling up the number of qubits is going slowly but surely so far. There have been a few cool discoveries, though I don't have enough general knowledge to explain them well. No current sign of them being used for anything super exciting at the moment, that I'm aware of.

14

AdditionalPizza t1_itw9irw wrote

Yeah, this is what I'm saying. People will argue that you can just keep producing more and more output with the extra productivity, but that doesn't make sense economically. Shareholders don't care where the profit comes from each quarter, and paying fewer wages is a good boost to net profit.

0

AdditionalPizza OP t1_itvwfwd wrote

I'm not so sure engineers and CEOs have been this optimistic about AI before, but they certainly have been about other things. I could be wrong though.

What they're saying should, theoretically, get people looking into it themselves and reading the research, and seeing that they're onto something this time. Though I'll admit, presuming anyone would ever do that would be foolish on my part.

I'm just wondering how in-your-face this stuff has to be before people open their eyes, but I think I've come to the conclusion that most people won't open their eyes until it hits them in the face.

2

AdditionalPizza OP t1_itvt9hd wrote

But that's not really the discussion. I don't care so much about what the minority that refuses technology does or doesn't do. They could go start their own low-tech society and pay taxes to their elected officials, but that doesn't help me in a world where I want to live with new technologies and strive to not have to work meaningless jobs ever again.

2

AdditionalPizza t1_itve5j8 wrote

The shitty thing about learning programming now is that by the time you're job-ready, entry-level positions will be either gone or much less skilled, leading to more competition and lower wages. I was relearning it myself, and when Codex was shown to correct its own errors and test itself, I gave up. Maybe I'm wrong and it's foolish to move on, but you only get one shot at life and I'm not wasting that amount of time on something AI has a direct scope on today.

1

AdditionalPizza OP t1_itutn3p wrote

Everyone just a few years ago assumed AI would start at the bottom of the pyramid and we would work up to creating human-equivalent intelligence. But it seems like the opposite is true: the basic functions are more difficult to simulate than the "higher" functions reserved for humans, like creativity, intellect, language, and reasoning. Those seem to be easier than basic traits like fear, motor skills, and other things we think of as less unique to humans.

Humans are special when compared to other biological creatures, but we don't even fully know the intelligence of some other species. We just have the advantage of having evolved with thumbs and the ability to walk upright.

4