AdditionalPizza

AdditionalPizza OP t1_izz0b6x wrote

We won't know until we can do it. I'm sure there are some very forward thinkers already using AI this way, but as long as you study it and use it a lot now, you'll be ahead of those who don't. When the opportunity comes, you will be poised to take advantage of it.

2

AdditionalPizza OP t1_izz00rm wrote

They are easy to use on the surface, but using them for things you didn't think were possible is harder, and so is building the habit of always reaching for them. The more you know about using them now, the more it will help in the future. It's like any skill: the more you use it, the better you get, and there's no real substitute for that. Facebook is the same way. People use it differently, and though it has its limitations because it's not exactly open-ended software, some people are inherently more practiced with it. Grandparents posting on their grandkids' walls, or in random comments, to ask things that should've been a private message, for example.

The window will probably be short, until it becomes much easier to prompt and get what you want. More important is learning what you can do with these models before everyone else catches on, or before you fall behind. There's a whole foundation of knowledge to build around using AI, and in the near future there will probably be more and more emergent abilities that can be harnessed by those who keep up with it. Writing software with it might be possible very soon, where previously you needed a wealth of knowledge to do that. When everyone can do it, the head start diminishes. So you'd rather already be ahead and on to the next emerging thing, instead of learning from the beginning.

edit: had to fix some text

2

AdditionalPizza OP t1_iw3q6ha wrote

No idea, it's too theoretical to really discuss. I would assume that sentience/consciousness would have a major impact on the AI's abilities. It would also probably have a profound impact on the AI's motivations. You're now "gifting" the AI the ability to choose what it wants to do based on its own rationale and emotions.

1

AdditionalPizza t1_iw2hsol wrote

I think people would be happy if UBI comes to fruition, and the best chance of that happening with the least turmoil and suffering is if automation sweeps through quickly. If the rollout is too slow, a lot of people will be stuck in limbo, feeling useless to society and losing their life savings.

I don't think any mentally stable person wishes for that slow scenario at all. Yet there are so many people who defend working full time for a living and insist automation won't take their job. Well, it probably will. I don't know when, but it most likely will, and the best case is sooner rather than later. They can't fathom that; they think the longer they can keep working, the better. In reality, the shorter the timeframe for everyone, the better.

UBI implementation is so unpredictable, though, because we just can't accurately guess how humans, specifically politicians, unions, and luddites, will react. If AI art generation is any sign, it's that artists are going apeshit while, in the broad scope of things, nobody cares about them and everyone just wants that sweet, sweet automation.

So either it's a case of dominoes, where industries get automated one by one because people fight tooth and nail and slow progress, or an entirely generalist AI capable of handling tasks across most industries drops all at once, and we're all left with enough time on our hands to enact change.

I will keep insisting the best thing people can do is simply be prepared: don't get caught off guard, and keep an eye on where AI is advancing so you can stay mentally okay if your career path is disrupted.

As for the scenario of billionaires leaving us to die, I don't really see how that's likely. Sure, I won't deny someone might be in control of it, but outside of total enslavement of humankind (doubtful), they would either need (A) people who can afford to buy the products they make, or (B) to leave us behind and live in their own paradise, in which case we would carry on doing our own thing.

It's hard to imagine one single person being in charge of an ASI that has utter control over the entire world and just wants everyone to starve to death.

11

AdditionalPizza t1_iw2ftl8 wrote

I can't imagine it will be the same as GPT-3, because even small models now outperform GPT-3 by a ton.

While it might be a smaller-scale model, it will still most likely be far more powerful. I do hope we get a full-size model with all the new techniques, though I wouldn't mind a pocket-sized one either.

5

AdditionalPizza t1_iw2fgzl wrote

Honestly, it sounds like they're just trying to appease artists with what the artists think they want, while in reality it does nothing, and DeviantArt definitely knows this. They certainly know AI art creation will outpace handmade work by a longshot, and they want a piece of the web-traffic pie.

Go try to explain how AI art works to all of these complaining artists and they will insist it's stealing their copyrighted work and using it to generate images. They just don't understand the fundamental basics of how training works: the model doesn't store copies of their images to reuse elements of them. So might as well let them think they won a battle while AI training continues on.
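If it helps to see it concretely, here's a minimal sketch of what a training step actually does, assuming a diffusion-style denoiser in PyTorch (the tiny network and sizes here are purely illustrative, not any real product's code). The images only contribute a gradient update to the weights, and the saved model file contains parameters, not pictures:

```python
import torch
import torch.nn as nn

# Illustrative stand-in for a denoising network (real models are far larger).
model = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(64, 3, 3, padding=1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(images: torch.Tensor) -> float:
    noise = torch.randn_like(images)  # corrupt the training images...
    # ...and train the network to predict the noise that was added
    loss = nn.functional.mse_loss(model(images + noise), noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # only the weights change; the images are discarded
    return loss.item()

batch = torch.rand(8, 3, 32, 32)  # a batch of (fake) training images
training_step(batch)

torch.save(model.state_dict(), "model.pt")  # what persists: parameters, not pictures
```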

33

AdditionalPizza OP t1_iw1b3ou wrote

As if people in technical fields aren't notoriously awful at predicting what's best for the general public.

I'm not doing anyone a disservice, and I'm not propagating anything negative here. My post is literally a poll asking for people's opinions, and stating my own.

1

AdditionalPizza OP t1_iw0eblq wrote

>It actually predicts future rewards in order to choose how to act.

I do believe some version of this will ring true. Going beyond prompting for an answer may be required; while prompting can be powerful on its own, I personally think some kind of self-rewarding system will be necessary. Consequences and benefits.

But I left it out of this discussion specifically because I don't think a sort of "pre-AGI" will quite require it. I think the moment we are legitimately discussing AI consciousness being created, we are beyond initial prototypes.

1

AdditionalPizza OP t1_iw0da05 wrote

>your definition seems very wide

Yes, it is very wide. I believe 2023 will reveal a more concrete "road map" toward full AGI. I think a version 1.0 of an AI with sufficiently general capabilities will be released or announced that could arguably be defined as proto-AGI.

4

AdditionalPizza OP t1_iw0ctks wrote

Oh interesting, you think the timeframe between proto-AGI and AGI is shorter than AGI to ASI?

I think by your definition we nearly have that with 2020 language models; we could certainly do it right now. I think 2023 is when we will, but it will require solving a few steps first, at least as far as we in the general public can tell. They're working on it, and I would be surprised if our next-gen models in 2023 haven't at least solved things like memory. Reinforcement learning in the pretraining phase has massive potential to bridge the gap between today's narrow-scope general AI and a full-blown, generally capable AI that I'd define as proto-AGI.
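On the memory point, here's a toy sketch of one common approach: an external memory that embeds past exchanges and retrieves the closest ones into the next prompt. Everything here is illustrative; embed() is a random stand-in for a real sentence-embedding model, so don't expect meaningful retrieval from it:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real sentence-embedding model (returns a unit vector)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

memory: list[tuple[str, np.ndarray]] = []

def remember(text: str) -> None:
    memory.append((text, embed(text)))

def recall(query: str, k: int = 2) -> list[str]:
    """Return the k stored texts whose embeddings are closest to the query."""
    q = embed(query)
    ranked = sorted(memory, key=lambda item: -float(item[1] @ q))
    return [text for text, _ in ranked[:k]]

remember("User's dog is named Biscuit.")
remember("User is learning Rust.")
prompt = "Suggest a small first project."
context = recall(prompt)                          # retrieved "memories"
full_prompt = "\n".join(context) + "\n" + prompt  # what the LLM would see
print(full_prompt)
```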

But I think "strapping" together multiple models would fit the bill too. They aren't narrow AI, they're just not broadly general or capable enough to cover enough bases. We will see how it unfolds though.

1

AdditionalPizza OP t1_iw02859 wrote

I'm saying that in 2023 the concept will be proven; we will see a concrete roadmap toward AGI because of the success that SOTA models will achieve.

But I think our very slight difference over two basically synonymous words is more pedantic than I feel like debating, haha. Precursor and prototype are so similar I see no reason to argue either way.

2

AdditionalPizza OP t1_iw00z6m wrote

>I don't like this definition because you don't need consciousness nor sentience to qualify an AGI or an ASI.

I said exactly that in the definition. I was defining Proto-AGI, not AGI. No consciousness required.

What's your definition of Proto-AGI that will require 5 years? Would you say our current models are too narrow?

1

AdditionalPizza OP t1_iw00jyx wrote

Your definition of prototype isn't the full definition of the word, though. A prototype can simply be the inspiration for later models. As in, we're on the right track and probably only adjustments, tweaking, fine-tuning, compute, and data away from being able to create full AGI. I think memory is a hurdle we will overcome shortly.

1

AdditionalPizza OP t1_ivzza75 wrote

The definition of AGI is an AI that can learn any task a human can. Most people presume that would mean the AI also has to be equal to or better than a human at those tasks.

I don't know where the idea came from that AGI has to be conscious. As far as I'm aware, that's never been the definition. It's a talking point often associated with AGI and mentioned around Turing tests, but contrary to your experience, I've never heard anyone outside this sub claim it's a requirement of AGI.

I also see other mixed-up definitions in this sub. A lot of people refer to the singularity as the years (or decades) leading up to it, rather than the actual moment of the singularity.

7

AdditionalPizza OP t1_ivzavx1 wrote

>A far more interesting question, in my view, is when will algorithms be able to do any productive task that a human can do, at a competent level.

That's what I would personally define AGI as: any task a human can do, at least intellectually, and possibly physically as well, though to me that's more robotics than intelligence. It may require a physical body to achieve true AGI.

I agree with your statement about consciousness, that's why I excluded it from the definition.

But I somewhat disagree about GATO, though only slightly, and that's more the point of my post. I don't know exactly how to define proto-AGI or how many general tasks it must perform at or above human level. But I'd definitely define full AGI as capable of all human intellectual tasks at a level equal to or greater than humans.

So GATO might be proto-AGI today by that definition: it's general, definitely not narrow. But I'm trying to say 2023 will be when we get a general AI able to meet or surpass human ability across most or many intellectual tasks. I think memory and reinforcement learning will be the keys to achieving something that's basically AGI next year, though we'll probably move the goalposts as it gets closer.

2

AdditionalPizza OP t1_ivyznve wrote

To clarify the definition I'm using a little more: simply something between narrow AI and AGI. When it can't be classified as just another narrow AI, or several narrow AIs stitched together, but also hasn't reached the pinnacle of human ability in every task. It's a very broad range, sure, but something undeniably beyond narrow AI.

As for an LLM's ability to learn, I don't have a source on hand without searching for it, but researchers have shown success with reinforcement learning during the pretraining phase of language models, and the resulting models were able to surpass the original algorithms whose data they were pretrained on. I strongly believe RL tied into an LLM will be explored intensively next year, and the results will lead to something most people would call, or that strongly resembles, a proto-AGI. The term isn't official, of course, but that will be the point where people start seriously considering AGI on a shorter timeframe.
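(The work I have in mind is along the lines of DeepMind's Algorithm Distillation, where an ordinary next-token-prediction model is pretrained on the recorded learning histories of an RL agent and ends up imitating the improvement process itself. Here's a toy sketch of the data side; the bandit setup and all names are purely my own illustration:)

```python
import random

def bandit_history(n_arms: int = 3, steps: int = 50, eps: float = 0.2) -> list[str]:
    """Record one epsilon-greedy bandit agent's learning history as tokens."""
    true_p = [random.random() for _ in range(n_arms)]  # hidden payout rates
    counts, values = [0] * n_arms, [0.0] * n_arms
    tokens = []
    for _ in range(steps):
        if random.random() < eps:                      # explore...
            a = random.randrange(n_arms)
        else:                                          # ...or exploit
            a = max(range(n_arms), key=lambda i: values[i])
        r = 1 if random.random() < true_p[a] else 0
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]       # running-mean estimate
        tokens += [f"a{a}", f"r{r}"]                   # flatten into "text"
    return tokens

# Thousands of these histories become a pretraining corpus for a plain
# sequence model; the RL happens in the data, not in the training loss.
corpus = [bandit_history() for _ in range(5)]
print(" ".join(corpus[0][:10]))
```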

I'm not speculating about any public release of this, though, just its existence.

1

AdditionalPizza OP t1_ivyk3u5 wrote

Totally agree. But I think the amount of work needed to make the current (really, past) technology much better was too much effort for too little gain. They're basically just search engines fueled mostly by top results, and a top result is generally a paid position to be in.

Automation is certainly slow too, not sure why. Just not enough customers maybe.

A new wave of digital assistants built on language models is most likely the advancement you envisioned, and I bet it'll happen relatively quickly.

4

AdditionalPizza OP t1_ivyjeoy wrote

Basically, I define proto-AGI as not narrow in scope and not a "few-trick pony" made by sticking a handful of narrow AIs together, but I also think there's a very broad range between that and full AGI. I would call a generalist a proper proto-AGI if its scope is wide enough.

I feel some people use Proto-AGI as a definition for "could be AGI, but it's not definitive."

I think we pretty much have the ingredients for the recipe right now to create a proto-AGI. Funding is an issue, along with a few technical ones I think we will overcome next year. But this is my optimistic take; I definitely think we will have it before 2025.

5

AdditionalPizza OP t1_ivyic1c wrote

There are arguments for it. There's also the argument that sentience simply comes with adding senses like vision and hearing to a sufficiently intelligent model, and that consciousness may just emerge at a certain level of intelligence, meaning we may not have a choice when pursuing AGI.

But who knows.

12