footurist

footurist t1_izmckld wrote

Not really, as the root of the problem lies further up the hierarchy. With capitalism and money and power silos enabled by current governmental structures you can spin up any new fancy technology you want - it's gonna find its way into the hands of centralized power. Until that changes the dance is gonna be the same, even if the tune changes.

1

footurist t1_izmbg49 wrote

That's a bit harsh. The guy actually accomplished a lot; cue the insufferably long list of patents and inventions...

Although it's becoming more obvious these days that it's likely going to take a bunch of conceptual breakthroughs to change course towards AGI from what's currently being developed. Admittedly, he foresaw things quite differently from how they've turned out. Also the broken record stuff...

4

footurist t1_ix7fhfz wrote

For some reason these people aren't willing to accept just how different a continuously learning, efficient, general abstracter like our brain is from these giant, clever data crunchers.

I highly doubt they'll be able to push those to resemble what we have.

−8

footurist t1_ivz26ai wrote

The inadequacy comes with the usage of the term prototype, which has a reasonably well-defined meaning. Basically, it serves as an MVP for one or more concepts that are themselves well-defined, so their feasibility and worth can be demonstrated. In the case at hand, the concept is true generality of learning as we know it, which the current mainstream paradigm is definitively not capable of. As mentioned before, they might achieve a limited imitation thereof, to an extent that is probably quite hard to guesstimate, but never the real thing ( in their current form; evolution can always change the landscape of course, but then they wouldn't be the same thing anymore ).

I recommend some YouTube videos by Numenta. Jeff Hawkins can explain these kinds of things to laymen incredibly well ( he was on Lex's podcast as well ).

2

footurist t1_ivyw1fq wrote

Aggressive TLDR : inadequately defined term

I've read about these "Proto-AGI" definitions before here, but to me these mostly don't make sense.

Perhaps there's debate about the definition of AGI itself, but in general ( heh ) the G in it should imply the ability to learn any task continuously, the way a human would ( within constraints, because total generality isn't really achievable with our current knowledge, I believe, from what I've read ).

The emergence of these definitions lined up chronologically with the rise of transformer-based LLMs, I believe, especially GPT-3. That timeline makes sense.

However, these architectures don't learn like humans do at all. They don't efficiently leverage armadas of extremely subtle abstractions the way our brains do ( the kind that can be demonstrated in simple thought experiments, but which I'm too tired to go through here; think carefully about the stages of assessing the rules of a roundabout for the first time, for example ), and they don't learn continuously. They're more like impressive data crunchers than efficient abstracters like our brains.

To me it's only logical that this ability to potentially learn each and every task that crosses one's mind, and to approach human level at it ( again, within the constraints mentioned above ) while leveraging efficient transfer learning along the way, would be deemed a requirement of this definition; otherwise the agent wouldn't really be a general learner, but merely a sort of wasteful imitator of one. That is especially true for the current LLMs, however impressive they are.

So, in conclusion: if the term at hand were improved, something resembling what's talked about in this post could indeed surface in the coming year. But as it stands, no imo.

2

footurist t1_itkeds6 wrote

Reply to comment by ChronoPsyche in how old are you by TheHamsterSandwich

I'd bet on that too. Usually, once the teens have been "exhilarated futurists" for a while, they'll probably realize the wheels aren't spinning as fast and the mountaintop is a lot foggier than it appeared to be. Then they'll probably progress into a more reserved kind of optimism about the future.

I could be wrong, but that would explain the amount of ( imo ) partly unfounded exhilaration in here.

5

footurist t1_itb31zt wrote

Unfortunately, I think not many meat lovers will opt for this; mostly just the ones who would have already accepted today's common meat alternatives, or even refrained from meat entirely. Not to speak of the purists.

This is a noble attempt but if you really want to eliminate meat consumption for the sake of the planet and the poor animals then you need to come up with something that Gordon Ramsay could not distinguish from a freshly grilled high quality steak.

What a task...

1

footurist t1_irwffnf wrote

If you're thinking of going toward capabilities even remotely approaching Laplace's demon ( even just for tiny chunks of the universe, like the weather of city x ), then sadly ( or not? ) that kind of assurance is way too computationally expensive and requires datasets no one can assemble.

However, much weaker variants may be possible; I don't know enough about that.

SPOILER

>!That said, in the tv show Devs they got it to work, lol.!<

1

footurist t1_irn6v8i wrote

If you've listened to any of Aubrey's talks on the topic, you'll know the man knows A LOT about this. And I think that, despite very likely having been the victim of a coup and character assassination attempt, his new foundation will flourish, since some of the deep-pocketed donors have already sued to get most of their money back and invested it in the new foundation.

If anybody on this list has a clue about this, it's him.

5