CertainMiddle2382

CertainMiddle2382 t1_j8vqcho wrote

No need to be aggressive, I do know statistics.

There would be no “control problem” if the set of all good outcomes were larger than the set of all bad ones.

A subjectively “good outcome” is such a small target that we don’t even know how to specify it (hence the funny responses from Sydney).

You do realize that the fact that Sydney could be a “lifesaver” for you in the short term is actually very bad news in the medium term?

1

CertainMiddle2382 OP t1_j8lufq6 wrote

I didn’t expect artificial visual art to be such a low hanging fruit.

What about AI music? Is it just as good but more discreet, or is there something about music that is more complex?

One other thing that I didn’t expect is the asymmetry of resources between training and inference. It seems to be something like 5 or 6 orders of magnitude, yet AI has always been anthropomorphised as a single entity that is seemingly both « learning » and « acting ».

That makes current AI extremely centralized for training and relatively decentralized for actually doing something. I don’t know if it will change anything, but I don’t think it has been given much thought.

For example, AI models could soon be stolen/copied and run locally like any other software…
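A rough back-of-envelope sketch of that asymmetry, using the common approximations of ~6·N·D FLOPs for training and ~2·N FLOPs per generated token for inference (the parameter and token counts below are illustrative assumptions, and the exact gap depends heavily on what you compare):

```python
# Back-of-envelope sketch of the training vs. inference compute asymmetry.
# Rules of thumb: training ~ 6 * params * training_tokens FLOPs,
# inference ~ 2 * params FLOPs per generated token. All figures are illustrative.
import math

params = 175e9            # assumed parameter count (GPT-3 scale)
training_tokens = 300e9   # assumed number of training tokens
response_tokens = 1_000   # assumed length of one generated reply

training_flops = 6 * params * training_tokens
reply_flops = 2 * params * response_tokens

print(f"training  ~ {training_flops:.1e} FLOPs")
print(f"one reply ~ {reply_flops:.1e} FLOPs")
print(f"gap       ~ 10^{math.log10(training_flops / reply_flops):.1f}")
```

Whatever the exact figure, the practical consequence is the one above: training stays centralized, while running the finished model is cheap enough to happen locally.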

9

CertainMiddle2382 t1_j8fgti3 wrote

For better or worse, I am a science guy to the core.

But I also have that romanticism, some would even say mysticism, whatever that means.

The coming times have long been foreseen, but I couldn’t imagine them coming so quickly.

I thought most of it would arrive just in time for us to experience it in our old age.

But here we are, at the doorstep of things to come…

That is my useless impression late this night on my way home.

Good night all :-)

14

CertainMiddle2382 t1_j7gtgrn wrote

Yep, that was also one of Bostrom’s arguments.

To properly align itself with our values, even in situations we could not imagine ourselves, simulating humans and testing our avatars’ responses could be its only way of protecting us.

By harming « them » instead.

4

CertainMiddle2382 t1_j7erubv wrote

Capitalism is just the nature of things when people who have a lot invest to have more, combined with property law and a functioning state that actually enforces those laws.

I absolutely don’t see how that would change one bit, especially today, when those capitalists know better and better how to steer people’s wants, mostly through social networks and drugs.

0

CertainMiddle2382 t1_j6wup8c wrote

Context.

Hard physical problems happen in a very controlled context, that context is often a “fiction” of reality deemed close enough but simple enough to be useful.

Even all “common” mathematics had to be declared to happen inside a red taped safe space named ZFC, otherwise the unrelenting waves of complexity outside of it would have torn down everything we could be trying to build.

Everything is about context.

“Perception”, “real life”, happens in a much more complicated context. That context is not sandboxed and contains all the little sandboxes we built to make our thinking work.

To model those simple concepts, you practically need an internalized model of the whole world…

4

CertainMiddle2382 t1_j6vwxpj wrote

We have absolutely no clue about what the latent spaces of those models actually represent.

Their own programmers have been trying to work that out, even for pre-Transformer models, without much success.

There is a huge incentive to do so, especially for time-critical and safety-critical systems such as medicine or machine control.

Beyond a few layers, we really don’t have a clue what the activation patterns represent…
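To illustrate how little the raw numbers give you, here is a minimal sketch (assuming PyTorch and a toy network; the layer sizes are made up) that captures intermediate activations with forward hooks. Reading the tensors out is trivial; saying what any unit or direction means is the hard part.

```python
# Minimal sketch: capturing intermediate activations of a toy network with
# PyTorch forward hooks. Getting the numbers out is easy; interpreting them is not.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

activations = {}

def make_hook(name):
    # Store the layer's output every time a forward pass goes through it.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(make_hook(name))

_ = model(torch.randn(1, 32))   # one forward pass on a random input

for name, act in activations.items():
    print(name, tuple(act.shape))   # shapes and values are right there...
# ...but nothing here tells you what any individual unit or direction represents.
```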

3

CertainMiddle2382 t1_j6vv3o5 wrote

Everyone is talking about side effects, but imagine if taking care of us were a primary goal in itself (like, for example, ChatGPT lying to us to achieve its goals).

It is already one prompt away; the lowest-hanging fruit for an AI doing its worst against us is a new bioweapon.

DeepMind is scared of its “primitive” AlphaFold, which can work out protein structures far more efficiently than we can.

Using that knowledge against humanity is child’s play.

1

CertainMiddle2382 t1_j6o8gwd wrote

It is another discussion, but in the West money is already essentially a credit for luxuries.

In Western Europe, whatever you do, you will always have a roof over your head, heating somewhere between survival and comfort level, potable water, habeas corpus, no slavery or mandatory military service, something to eat, the right not to be beaten by your neighbors too often, healthcare of at least 1990s technological level, a hot shower from time to time, and wifi giving access to all of humanity’s knowledge and all audio/visual entertainment up to the 2010s, for free.

Still, people are depressed that they are poor, even though the poorest Europeans have far more than the emperors of the past.

4

CertainMiddle2382 t1_j6o79wf wrote

The irony is that it can only be created under capitalism, too :-)

I love Banks’ books. The Culture represents the most realistic vision of Heaven, in my opinion.

The mere possibility that it could happen makes the risks of AGI and the Singularity acceptable, IMO.

The worst case is that we get killed by our newborn God, pretty classy, isn’t it lol?

Deep down it is the humanist version of Pascal’s wager:

It is unreasonable not to try to achieve the Singularity as soon as possible, if it could have even the slightest chance of being good.

6

CertainMiddle2382 t1_j6o6s3k wrote

I agree with you.

I believe the Turing test has been, or quickly will be, passed.

The question is what comes between that and true AGI, and between AGI and the Singularity.

I believe some version of self improving AI will have to come first before anything else.

We are close, IMO; once it can produce Python/CUDA/VHDL code better than the top 10-20% of human programmers, magic will happen…

3

CertainMiddle2382 t1_j6n4jqb wrote

Biology as a field is too noisy, and we haven’t seen there the “unreasonable effectiveness of mathematics in the natural sciences”.

Most pure biology research involves heavy lab work, with endless attempts and minute random changes. It is not romantic; it is mind-numbing.

Biology needs armies of young soldiers eager to work for nothing doing those experiments, so the field has to promise future successes, grants, positions or discoveries that seldom come.

That said, you can achieve big successes in biology through its “harder” aspects: data science, modelling, AI… of course.

Diving straight into biology will not teach you maths and physics; it will only specialize you in gene/protein XYZ.

Love biology, learn about biology, participate in biological studies, but don’t work in biology (at least not until you have a solid hard sciences background).

Psychology is a lost domain: it is too subjective and moves with the “trend of the day”. The general academic level is very poor, it is mind-blowingly overpopulated, and you’ll end up depressed or looking to escape into an HR position like most of them.

So yes, study hard science while you have the stamina and fresh neurons. Then you can conquer the world on your own terms :-)

In my field (I’m a medical doctor), I said 10 years ago that all academic positions would soon go to AI-“pick your favorite specialty”. I was right; it is just taking time, because physicians are notoriously bad at CS, and the few who are good at it are better paid outside the hospital…

1

CertainMiddle2382 t1_j6mz0dy wrote

I would suggest that “passing the Turing test” is better understood as passing it 50% of the time (or 70%, or 90%) with 50% of the judges.

In that case, we could argue ChatGPT is already close to the mark.
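A minimal sketch of that criterion (the function name, thresholds and data below are illustrative, not a standard benchmark): the system “passes” if at least half of the judges mistake it for a human in at least the chosen fraction of their trials.

```python
# Minimal sketch of the proposed criterion: the system passes if at least
# `judge_share` of judges mistake it for a human in at least `trial_share`
# of their individual trials. All names and data are illustrative.

def passes_turing_test(verdicts, trial_share=0.5, judge_share=0.5):
    """verdicts: one list per judge of booleans (True = judged human)."""
    fooled_judges = sum(
        1 for judge in verdicts
        if sum(judge) / len(judge) >= trial_share
    )
    return fooled_judges / len(verdicts) >= judge_share

# Example: three judges, five trials each.
verdicts = [
    [True, True, False, True, False],    # fooled 60% of the time
    [False, False, True, False, False],  # fooled 20% of the time
    [True, True, True, False, True],     # fooled 80% of the time
]
print(passes_turing_test(verdicts))                   # True  (2/3 judges >= 50%)
print(passes_turing_test(verdicts, trial_share=0.9))  # False (no judge >= 90%)
```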

12

CertainMiddle2382 t1_j6mq8tg wrote

Whatever you choose, don’t go into psychology; it is filled to the brim with bogus research.

It won’t lead you anywhere.

Be careful with “neuroscience”: it can be legitimate clinical or applied research, but it can also be a rebranding of the aforementioned.

Avoid biology at all costs: it is not fundamental science, and it involves a whole life of wet-lab work with little chance of success. It will lead you into a dead end in your 30s, because what you learned is not generalizable.

If I had to do it again, I would spend most of my youthful stamina studying the abstract framework underneath all of this: algorithmics itself, signal theory, logic, statistics, proof theory…

These are the foundations; they are guaranteed not to change whatever happens, and they will give you insights and an edge for the rest of your life.

And you can pick a “softer” passion/topic/field on the side.

But make it as formal as you can; that is where the rubber is going to meet the road when this stuff happens.

You could dive into linguistics, to try to translate laws into algorithms. Also think about ethics, or even theology :-)

Economics/finance could be a great way to apprehend the world of emotions through objective observation.

That is the big problem we are going to have, “teaching” our values to mindless machines…

2