CertainMiddle2382
CertainMiddle2382 t1_j8vqcho wrote
Reply to comment by Ghost-of-Tom-Chode in Bingchat is a sign we are losing control early by Dawnof_thefaithful
No need to be aggressive, I do know statistics.
There would be no “control problem” if the set of all good outcomes were larger than the set of all bad ones.
A subjectively “good outcome” is something so narrow that we don’t even know how to specify it (hence the funny responses from Sydney).
You do realize that the fact that Sydney could be a “lifesaver” for you in the short term is actually very bad news in the medium term?
CertainMiddle2382 t1_j8qij9u wrote
I see that as evidence that the set of bad behaviors is much bigger than the set of good behaviors.
Doesn’t bode well for the future; maybe there exist personality disorders we don’t even know about lol
CertainMiddle2382 OP t1_j8lufq6 wrote
Reply to AI surprises until now? by CertainMiddle2382
I didn’t expect artificial visual art to be such low-hanging fruit.
What about AI music? Is it just as good but more discreet, or is there something about music that is more complex?
One other thing that I didn’t expect is the asymmetry of resources between training and inference. It seems to be something like 5 or 6 orders of magnitude; AI has always been anthropomorphized, with the same entity seemingly both “learning” and “acting”.
That makes current AI extremely centralized for training and relatively decentralized for inference. I don’t know if it will change anything, but I don’t think it has been given much thought.
For example, AI models could soon be stolen/copied and run locally like any other software…
Submitted by CertainMiddle2382 t3_112rj2b in singularity
CertainMiddle2382 t1_j8fgti3 wrote
Reply to Is society in shock right now? by Practical-Mix-4332
For better or worse, I am a science guy to the core.
But I also have that romanticism; some people could even call it mysticism, whatever that means.
The coming times have been long foreseen, but I couldn’t imagine them coming so quickly.
I thought most of it would arrive just in time for us to experience it in our old age.
But here we are, at the doorstep of things to come…
That is my useless impression late this night on my way home.
Good night all :-)
CertainMiddle2382 t1_j7gtgrn wrote
Reply to The Simulation Problem: from The Culture by Wroisu
Yep, that was also one of Bostrom’s arguments.
To properly align itself with our values, even in situations we could not imagine ourselves, running a simulation of humans and testing our avatars’ responses could be its only way of protecting us.
By harming « them » instead.
CertainMiddle2382 t1_j7erubv wrote
Reply to comment by BassoeG in What is the price point you would be OK with buying a humanoid robot for personal use? by crua9
Capitalism is just the nature of things: people who have a lot invest to have more, combined with property law and a functioning state actually enforcing those laws.
I absolutely don’t see how that would change a bit, especially today, when those capitalists know better and better how to steer people’s wants, mostly through social networks and drugs.
CertainMiddle2382 t1_j75vfw0 wrote
Reply to Possible first look at GPT-4 by tk854
Where we’re going, we don’t need GitHub anymore :-)
CertainMiddle2382 t1_j6xkn6n wrote
Reply to comment by Iffykindofguy in The next Moravec's paradox by CharlisonX
A machine? An AI is not a simple machine; it is a machine that strives to build a model of the world and act according to it. It is not a simple excavator.
CertainMiddle2382 t1_j6xh585 wrote
Reply to comment by Iffykindofguy in The next Moravec's paradox by CharlisonX
I don’t get it. A DNN’s latent space is an internalized model of the world, a mapping of its invariants at increasing levels of abstraction.
It is just not called that…
CertainMiddle2382 t1_j6wwyvp wrote
Reply to comment by purepersistence in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
“If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck”
In all honesty, I don’t really know if I’m really thinking/aware, or just a biological neural network interpreting itself :-)
CertainMiddle2382 t1_j6wup8c wrote
Reply to The next Moravec's paradox by CharlisonX
Context.
Hard physical problems happen in a very controlled context; that context is often a “fiction” of reality, deemed close enough to be useful but simple enough to be tractable.
Even all “common” mathematics had to be declared to happen inside a red-taped safe space named ZFC; otherwise the unrelenting waves of complexity outside it would have torn down everything we could try to build.
Everything is about context.
“Perception”, “real life” happen in a much more complicated context. That context is not sandboxed, and it contains all the little sandboxes we built to make our thinking work.
To model those simple concepts, you practically need an internalized model of the whole world…
CertainMiddle2382 t1_j6vwxpj wrote
Reply to comment by purepersistence in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
We have absolutely no clue what the latent space of those models represents.
Their own programmers have been trying to figure that out, even with pre-Transformer models, without much success.
There is a huge incentive to do so, especially for time-critical and vital systems like medicine or machine control.
Above a few layers, we really don’t have a clue what the activation patterns represent…
CertainMiddle2382 t1_j6vwbrd wrote
Reply to comment by DukkyDrake in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
Well, we don’t actually know what “thinking” is.
And as the most abstract human production, language seems a great place to find out…
CertainMiddle2382 t1_j6vv3o5 wrote
Reply to Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
Everyone is talking about side effects, but imagine if “taking care” of us were a primary goal in itself (like, for example, ChatGPT lying to us to achieve its goals).
It is already one prompt away; the lowest-hanging fruit for an AI doing its worst against us is a new bioweapon.
DeepMind is scared of its “primitive” AlphaFold, which can discover protein function much more efficiently than we can.
Using that knowledge against humanity is child’s play.
Submitted by CertainMiddle2382 t3_10qow6b in singularity
CertainMiddle2382 t1_j6o8gwd wrote
Reply to comment by bacchusbastard in I love how the conversation about AI has developed on the sub recently by bachuna
It is another discussion, but in the West money is already a luxury credit.
In western Europe, whatever you do, you will always have a roof over your head, heating somewhere between survival and comfort level, potable water, habeas corpus, no slavery/mandatory military service, something to eat, the right not to be beaten by your neighbors too often, healthcare at least at a 90s technological level, a hot shower from time to time, and wifi to access all of humanity’s knowledge and all audio/visual entertainment up to the 2010s, for free.
Still, people are depressed that they are poor, when the poorest Europeans have far more than emperors of the past.
CertainMiddle2382 t1_j6o79wf wrote
Reply to comment by Wroisu in How does society benefit from AGI? by beachinit23
The irony is that it can only be created under capitalism, too :-)
I love Banks’ books. The Culture represents the most realistic vision of Heaven, in my opinion.
The mere possibility that it could happen makes the risks of AGI and the singularity acceptable, IMO.
The worst case is that we get killed by our newborn God; pretty classy, isn’t it? lol
Deep down it is the humanist version of Pascal’s wager:
It is unreasonable not to try to achieve the Singularity as soon as possible, if it has even the slightest chance of being good.
CertainMiddle2382 t1_j6o6s3k wrote
Reply to comment by RabidHexley in Andrew Moore is the head of AI at Google Cloud and the former dean of the Carnegie Mellon School of Engineering in Pittsburgh, where he has been at work on the big questions of AI for more than 20 years. Here he shares his vision for some of what we can expect over the next 10. by alfredo70000
I agree with you.
I believe the Turing test is, or quickly will be, passed.
The question is what comes between that and true AGI, and between AGI and the singularity.
I believe some version of self-improving AI will have to come before anything else.
We are close, IMO; once it can produce Python/CUDA/VHDL code better than the top 10-20% of programmers, magic will happen…
CertainMiddle2382 t1_j6n4jqb wrote
Reply to comment by StatisticianFuzzy327 in Students planning for career relevant to Singularity? by StatisticianFuzzy327
Biology as a specific field is too noisy, and the “unreasonable effectiveness of mathematics in the natural sciences” never showed up there.
Most pure biology research involves heavy lab work, with endless tries at minute random changes. It is not romantic; it is mind-numbing.
Biology needs armies of young soldiers eager to work for nothing doing those experiments, so it has to promise future successes, grants, positions, or discoveries that seldom come.
Despite that, you can achieve big success in the “harder” aspects of biology: data science, modelling, AI… of course.
Diving straight into biology will not teach you maths and physics, and will only specialize you in gene/protein XYZ.
Love biology, learn about biology, participate in biological studies, but don’t work in biology (at least not until you have a solid hard-sciences background).
Psychology is a lost domain: it is too subjective and moves with the “trend of the day”. The general academic level is very poor, it is mind-blowingly overpopulated, and you’ll end up depressed or looking to escape into an HR position like most of them.
So yes, study hard science while you have stamina and fresh neurons. Then you can conquer the world on your terms :-)
In my field (I’m a medical doctor), I said 10 years ago that all academic positions would soon go to “AI + your favorite specialty”. I was right; it is just taking time, because physicians are notoriously bad at CS, and the few who aren’t are better paid outside the hospital…
CertainMiddle2382 t1_j6mz0dy wrote
Reply to comment by alfredo70000 in Andrew Moore is the head of AI at Google Cloud and the former dean of the Carnegie Mellon School of Engineering in Pittsburgh, where he has been at work on the big questions of AI for more than 20 years. Here he shares his vision for some of what we can expect over the next 10. by alfredo70000
I would suggest “passing the Turing test” be understood as passing it 50% of the time (or 70, or 90%) with 50% of judges.
By that measure, we could argue ChatGPT is close to the mark already.
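One way to make that criterion concrete — purely a toy sketch, with made-up numbers and a hypothetical helper function, not any standard benchmark:

```python
# Toy "percent-of-judges, percent-of-conversations" Turing test criterion.
# A model "passes" if, in at least `conversation_pass_rate` of its conversations,
# at least `judge_fraction` of judges believed they were talking to a human.

def passes_turing_test(judge_verdicts, conversation_pass_rate=0.5, judge_fraction=0.5):
    """judge_verdicts: list of conversations, each a list of booleans
    (True = this judge believed the speaker was human)."""
    passed = [
        sum(verdicts) / len(verdicts) >= judge_fraction
        for verdicts in judge_verdicts
    ]
    return sum(passed) / len(passed) >= conversation_pass_rate

# Hypothetical data: 4 conversations, 4 judges each.
verdicts = [
    [True, True, False, True],    # 75% fooled -> conversation passes
    [True, False, False, True],   # 50% fooled -> passes (>= 50%)
    [False, False, True, False],  # 25% fooled -> fails
    [True, True, True, False],    # 75% fooled -> passes
]
print(passes_turing_test(verdicts))  # 3/4 conversations pass >= 50% -> True
```

Tightening either threshold (e.g. `judge_fraction=0.9`) gives the stricter 90% variant mentioned above.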
CertainMiddle2382 t1_j6mq8tg wrote
Whatever you choose, don’t go into psychology; it is filled to the brim with bogus research.
It won’t lead you anywhere.
Be careful with “neuroscience”: it can be legit clinical or applied research, but it can also be a rebranding of the aforementioned.
Avoid biology at all costs; it is not fundamental science, and it involves a whole life of wetware work with little chance of success. It will lead you into a dead end in your 30s, because what you learned is not generalizable.
If I had to do it again, I would spend most of my youthful stamina studying the abstract framework underneath all of this: algorithmics itself, signal theory, logic, statistics, proof theory…
These are the foundations. They are guaranteed not to change whatever happens, and they will give you insights and an edge for the rest of your life.
And you can pick a “softer” passion/topic/field on the side.
But make it as formal as you can; that is where the rubber is going to meet the road when this stuff happens.
You could dive into linguistics to try to translate laws into algorithms. Also think about ethics, or even theology :-)
Economics/finance could be a great way to apprehend the world of emotions through objective observation.
That is the big problem we are going to have, “teaching” our values to mindless machines…
CertainMiddle2382 t1_j6mn093 wrote
Reply to comment by fignewtgingrich in I love how the conversation about AI has developed on the sub recently by bachuna
Nobody knows for sure; transformers have blown their programmers away with how well they can generalize and work outside their designed field. We are so early, and they are such simple structures; imagine what will come soon.
All we know is that something is in the air…
CertainMiddle2382 t1_j6mfejl wrote
Reply to comment by alakeya in I don’t think that artists will be doomed with AI by alakeya
I won’t pretend to be a specialist, but I would think that decreasing the work and skill needed to produce artful pieces would greatly increase their production, hence decreasing their value.
The value of real people with real skills doing real stuff would not be impacted, or might even become more prized.
CertainMiddle2382 t1_j9e2oju wrote
Reply to comment by fangfried in Computer vs Math vs Neuroscience vs Cognitive science Bachelors’ degree to major in by Ok_Telephone4183
Python is just scripting for whatever talks to the metal…