MrEloi

MrEloi t1_j5kueub wrote

I've no idea about the AGI timeline.

However, we are very close to quasi AGI.

We could have 99% human-like help desks, human-like doctors, etc. within a year or so.

2

MrEloi t1_j5ku3gu wrote

That is clearly the obvious next step.

I'm not sure how easy that will be, though.

It could be that a large 'frozen' model, combined with some clever run-time code and a modicum of short/medium-term memory, would suffice.

After all, the human brain seems (to me) to be a huge static memory plus relatively little run-time stuff.
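A minimal sketch of what I mean, purely illustrative (the `generate_fn` is a stand-in for a call into whatever frozen model you have, not a real API):

```python
from collections import deque

class MemoryAugmentedAssistant:
    """Frozen model + small run-time memory: the weights never change,
    only the context assembled at inference time does."""

    def __init__(self, generate_fn, max_turns=8):
        self.generate = generate_fn                     # stand-in for a frozen LLM call
        self.short_term = deque(maxlen=2 * max_turns)   # rolling short/medium-term memory

    def ask(self, user_message: str) -> str:
        # The 'clever run-time code' lives here, not inside the model:
        # recent turns are simply prepended to the prompt.
        context = "\n".join(self.short_term)
        prompt = f"{context}\nUser: {user_message}\nAssistant:"
        reply = self.generate(prompt)
        self.short_term.append(f"User: {user_message}")
        self.short_term.append(f"Assistant: {reply}")
        return reply

# Usage with a dummy stand-in for the frozen model:
assistant = MemoryAugmentedAssistant(generate_fn=lambda prompt: "(model reply)")
print(assistant.ask("What is a frozen model?"))
```

All the adaptation happens in the context assembled at run time; the weights stay static, which is the point.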

3

MrEloi t1_j5j2hz1 wrote

Most people are already uniquely identifiable via browser fingerprinting.
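Roughly how that works, as an illustrative sketch (the attribute names are made up for the example; real fingerprinting scripts also hash canvas output, installed fonts, audio stack quirks, etc.):

```python
import hashlib
import json

def browser_fingerprint(attributes: dict) -> str:
    """Hash a bag of ordinary browser attributes into a stable identifier.
    No cookies needed: the combination alone is often close to unique."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Values a site can read without asking permission (illustrative sample):
print(browser_fingerprint({
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "2560x1440x24",
    "timezone": "Europe/London",
    "languages": ["en-GB", "en"],
    "gpu_renderer": "ANGLE (Intel, Mesa Intel Xe Graphics)",
}))
```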

The Powers That Be can find you if they are interested enough.

The Unabomber had the 'right' idea with regard to security - he lived in a basic hut in the woods.

Ironically, he was identified by his writing style ... his brother recognized Ted Kaczynski's prose when the manifesto was published.

1

MrEloi t1_j567l6l wrote

Agreed.

I generally try to make helpful or interesting posts ... but there are always random spotty-faced teenage boys sitting in squalor in their parents' basement who simply have to make a 'clever' and/or abusive reply.

There are also 'special interest groups' out there.
I made a post on MachineLearning critical of Google ... instantly mega-downvoted.
I suppose many DeepMind staff reside there.
They may be geniuses ... but child geniuses.

2

MrEloi t1_j55svcd wrote

>we simply don't have enough data on the collective internet for us to keep scaling it further than that.

Why do we need more data? We already have a lot.

We now need to work more on the run-time aspects, e.g. short- and long-term memories.
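As a toy illustration of the long-term memory side, assuming you already have some embedding model (the character-histogram embedder below is just a stand-in): store what the system has seen as vectors and retrieve the nearest ones at run time, instead of hunting for ever more training data.

```python
import numpy as np

class LongTermMemory:
    """Store past items as vectors; recall the most similar ones at run time."""

    def __init__(self, embed_fn):
        self.embed = embed_fn          # stand-in for a real embedding model
        self.texts, self.vectors = [], []

    def remember(self, text: str):
        self.texts.append(text)
        self.vectors.append(self.embed(text))

    def recall(self, query: str, k: int = 3):
        q = self.embed(query)
        sims = [
            float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
            for v in self.vectors
        ]
        best = np.argsort(sims)[::-1][:k]
        return [self.texts[i] for i in best]

# Toy embedding: a character histogram (a real system would use a learned model).
def toy_embed(text: str) -> np.ndarray:
    vec = np.zeros(128)
    for ch in text.lower():
        vec[ord(ch) % 128] += 1
    return vec

memory = LongTermMemory(toy_embed)
memory.remember("The user prefers short answers.")
memory.remember("The user's project is written in Rust.")
print(memory.recall("What language does the project use?", k=1))
```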

5

MrEloi t1_j55smpe wrote

Some say that ChatGPT is "just a database which spits out the most probable next word".

These naysayers should dive into how transformer systems really work.
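If you have the Hugging Face transformers library installed, a few lines make the 'most probable next word' claim concrete, and show that the single prediction is conditioned on the whole context (GPT-2 used here only because it is small and public):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # one score per vocabulary token

# Probability distribution over the *next* token, conditioned on everything so far.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {float(p):.3f}")
```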

It's clear (to me at least) that these systems embody most/much of what a true AI needs.

That linked article covers the next steps towards AI comprehensively.

5

MrEloi t1_j4z9zid wrote

>I would have figured the billionaires and state leaders would have swooped in

I think that they got caught out by OpenAI dumping ChatGPT into the open.

Perhaps Altman got sick of the secrecy and decided to do something about it?

Anyway, it looks like the secret is out ... and OpenAI is getting smacked about the head. That would explain their sudden reluctance to release GPT-4.

6

MrEloi t1_j4z9kgp wrote

Good analysis.

Closely matches what I thought.

TBH I find his "cuddly innocent research scientist" persona slightly fake.
In reality, CEOs of major firms are tough, really tough. They have to be.

He is clearly being dishonest when discussing Google - he must have a very good idea of what they are doing.

So, if he can smoothly tell a lie there, what else is he lying about?

At the end of the day, the public will only be told, and supplied with, whatever news and software these companies deign to let us have.

10

MrEloi t1_j4w1s7l wrote

Had to happen: the Big Players won't give up their positions easily.

For example, do you really believe that Disney will allow small firms to use AI to generate decent quality films & videos for pennies?
(FYI Disney managed to get the US Govt to modify Copyright law in order to maintain commercial control of Mickey Mouse!)

The huge firms have the money to drain the finances of the small players through litigation, even when there is no real case to answer.

They also have the money to influence Copyright law, and the like, to their advantage.

At the end of the day, it will be Business As Usual, with all the toys being owned by the rich and powerful for their own advantage.

15