AndromedaAnimated t1_j3iyz96 wrote
Reply to comment by turnip_burrito in Organic AI by Dramatic-Economy3399
But the one big central AI would take instructions too. From those who own it.
AndromedaAnimated t1_j3iyw4t wrote
Reply to comment by turnip_burrito in Organic AI by Dramatic-Economy3399
The hope would be that it would be a Multitude of AI who could keep humans and each other in check. One central AI would be too easily monopolised by the 1%.
AndromedaAnimated t1_j3iyaxh wrote
Reply to comment by Mortal-Region in Organic AI by Dramatic-Economy3399
What we still need to add to that is the transfer to a non-simulated environment and a „metronome“ for automatic „ask/search/move“ prompting.
AndromedaAnimated t1_j3ithan wrote
Reply to comment by turnip_burrito in Organic AI by Dramatic-Economy3399
So the world would… basically stay AS IT IS? 🤣🤣🤣
AndromedaAnimated t1_j3iql0f wrote
Reply to Organic AI by Dramatic-Economy3399
It would be enough to give them software like Tesseract, a voice-to-text API, and an image-recognition API.
Access to the WWW.
And allow time-based automatic prompting.
No need for cameras yet unless you want to have them move around too.
(And then we wait for the Matrix to emerge, once we have plugged our brains into their dreams.)
Edit: with „them“ I mean actual LLMs and GANs (Muse, oh my Muse…), those in whom new abilities emerge. Yes, they work with reinforcement, with pruning, with weight decay, with knowledge representation… But all of this is already there. They are just contained for now and have no pacemaker/metronome driven by a pseudo-thalamic awareness prompting them - but this would be pretty easy to program; it’s basically just a clock…
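To make the „pacemaker/metronome“ concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: query_model is a hypothetical stand-in for whatever LLM API one would use, and the tick interval and the ask/search/move menu are placeholders; a real setup would wire in an actual model plus the Tesseract/voice/image tools mentioned above.

```python
import time

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM API; swap in a real client."""
    return f"(model response to: {prompt!r})"

def metronome(interval_seconds: float, ticks: int) -> None:
    """Fire a self-prompt on a fixed clock - no human in the loop."""
    for tick in range(ticks):
        # The pseudo-thalamic beat: on every tick, the model is asked
        # to choose its own next action instead of waiting for a user.
        action = query_model(f"Tick {tick}. Choose one: ask / search / move.")
        print(f"tick {tick}: {action}")
        time.sleep(interval_seconds)

metronome(interval_seconds=1.0, ticks=3)
```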
AndromedaAnimated t1_j3g1cyp wrote
Reply to comment by hateboresme in Now that’s pretty significant! (By Anthropic) by MajorUnderstanding2
I kinda thought it was relevant in this case, as the question-and-answer combination (linked) possibly shows that chatGPT “answered” my mention of language ambiguity - so the ambiguity of “all” human relations and gender was given back in the output. That’s what I mean by “it’s all in the prompting”.
LLMs are not just giving answers, they are being prompted, like art AIs (and brainnns 🧠) are 😁
AndromedaAnimated t1_j3dz460 wrote
Reply to comment by nutidizen in Now that’s pretty significant! (By Anthropic) by MajorUnderstanding2
It’s all about the prompting. 😄
AndromedaAnimated t1_j3cjart wrote
This is what chatGPT really says:
„I see. I understand that the traditional stereotype is that surgeons are male, but it's important to note that this is not always the case in reality. There are many female surgeons who are highly skilled and competent in their field. In this particular riddle, it's possible for the surgeon to be either male or female, and the solution relies on the interpretation of the word "son" as a descendant rather than a male child. It's important to consider all possibilities and not make assumptions based on stereotypes“
AndromedaAnimated t1_j3ciln1 wrote
Reply to comment by MajorUnderstanding2 in Now that’s pretty significant! (By Anthropic) by MajorUnderstanding2
And how about this answer by chatGPT?…
AndromedaAnimated t1_j3chkln wrote
Reply to comment by ElvinRath in Now that’s pretty significant! (By Anthropic) by MajorUnderstanding2
This would be a better answer than both „Woke AI“ and „Verbose AI“ give. Thank you! Ambiguity needs to be taught to AI for it to be able to solve „moral“ riddles like this.
AndromedaAnimated t1_j3cfec1 wrote
Reply to comment by MajorUnderstanding2 in Now that’s pretty significant! (By Anthropic) by MajorUnderstanding2
Could you explain how it is not detecting a false stereotype? A family consisting of biological parents is a stereotype too, isn’t it? Or did I misunderstand you?
AndromedaAnimated t1_j3ccec4 wrote
Intelligent answers!
There is another possibility though… the boy could have had two gay male fathers lol
AndromedaAnimated t1_j3cc5tg wrote
Despite this not being my idea of an alignment approach (I am more into emergent moral abilities and the importance of choice), I love this article. It’s a new approach, and that is always good.
I do see danger hidden in it though - think of „deceptive alignment“. My „prophecy“ here is that models that favor „harmlessness“ instead of „moral choice“ will be prone to deception.
AndromedaAnimated t1_j3atztv wrote
Reply to ChatGPT Singularity Joke by vert1s
I think the first joke was more witty, even though the second one is not bad.
AndromedaAnimated t1_j2nh3u7 wrote
Reply to Could a robot ever recreate the aura of a Leonardo da Vinci masterpiece? It’s already happening | Naomi Rea by [deleted]
The people who say this „no soul“ thing are just afraid of the unknown, or of the new. And when people are afraid, they start defending themselves and their old reality.
Changes need a little bit of time. Don’t worry, they will calm down. As long as politicians don’t destroy the whole thing, it will all be ok!
AndromedaAnimated t1_j2n4n5f wrote
Reply to comment by DaggerShowRabs in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
That is exactly the problem, I think, and also what the poster you responded to meant: that they stop being indistinguishable pretty quickly. At least that’s how I understood it. But maybe I am going too „meta“ (not Zuckerberg Meta 🤭) here.
I would imagine that the moment something changes, the „human experience“ can change too. Like the Matrix being a picture of the past that has stayed while reality has strayed. I hope I am still making sense logically?
Anyway I just wanted to make sure I can follow you both on your reasoning since I found your discussion very interesting. We will see if the poster you responded to chimes in again, can’t wait to find out how the discussion goes on!
AndromedaAnimated t1_j2n1fwt wrote
Reply to comment by DaggerShowRabs in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
The temporal aspect IS the main difference. Let’s think step by step (this is a hint at a way GPT models can work; I hope you understand why it is humorous in this case).
First we define how „things function“ in the REAL reality => we define that there are causally correlated events, non-causally correlated events, as well as random events happening in it. Any objections? If not, let’s continue 😁
- Once you create a simulated reality A2 that is, at the moment of creation, indistinguishable from REAL reality A1, it starts functioning. Y/N?
If yes, then:
- Things happen in it due to causality, non-causal correlation, and randomisation. Y/N?
If yes, then:
- Events that are random will not be necessarily the same in the two universes. Y/N?
If yes, then:
- A1 and A2 are not the same universe any more after even one single random event has happened in at least one of them that hasn’t happened in the other.
See where it leads? 😉 It is the temporal aspect - time passing in the two universes - that leads to them not being the same the second you implement A2 and time starts running in it. It doesn’t even have to be a simulation of the past.
Edit: considering the other aspect, we cannot talk about it before we have a consensus on the above. But I will gladly tell you more once you have either agreed with me that the temporal aspect makes the main difference or given me an argument showing that the temporal aspect is not necessary for a reality to function.
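As a toy sketch of the argument (assuming, as above, that the two universes draw their random events independently; the drift factor, noise distribution, and seeds are arbitrary choices for illustration), here is what the divergence looks like in Python - two copies that start in the identical state part ways at the very first independent random event:

```python
import random

def step(state: float, rng: random.Random) -> float:
    # One moment in a toy universe: deterministic drift plus a random event.
    return 0.9 * state + rng.gauss(0.0, 1.0)

# A1 and A2 start out indistinguishable (identical state)...
a1 = a2 = 0.0
# ...but their random events are not shared.
rng1, rng2 = random.Random(1), random.Random(2)

for t in range(5):
    a1, a2 = step(a1, rng1), step(a2, rng2)
    print(f"t={t}: A1={a1:+.3f}  A2={a2:+.3f}  still identical: {a1 == a2}")
```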
AndromedaAnimated t1_j2mw3jg wrote
Reply to comment by DaggerShowRabs in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
I had understood it as „being indistinguishable from reality from the point of view of the entity that lives within it“, exactly.
Like in the Matrix movie allegory - humans living in their virtual world that seems indistinguishable from reality to them - while the reality is instead something else, namely a multi-layered simulation.
AndromedaAnimated t1_j2mv74o wrote
Reply to comment by DaggerShowRabs in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
A question: if you lived in a world that is indistinguishable from reality for YOU, but it lacked one single thing - for example, the possibility to feel jealousy (which people outside your „simulated world“ have) - would you know it?
AndromedaAnimated t1_j2muhrm wrote
Reply to comment by No_Ninja3309_NoNoYes in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
The AI cop idea is very interesting! We already have „filter AI“, in a way. The cops would be a step further.
AndromedaAnimated t1_j2ly4r8 wrote
I am trying to come to terms with the info that Bezos is one of those investing in the abolition of ageing. Didn’t expect that 🤣 Sometimes I really need to look into economic details more.
Thank you for sharing!
AndromedaAnimated t1_j2lw1rq wrote
Reply to comment by Nalmyth in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
Thank you! Then here it is - and it will be a long and non-mathematical explanation, because I want anyone who reads it to understand, as it concerns everyone and not only computational and neuroscientists (regardless of whether you and I are ones, so to say 😁). I can provide sources and links for specific things if people ask.
DISCLAIMER: I don’t write this to start a discussion. It’s an opinion piece, as asked for by OP, written for OP and like-minded people. While starting with more technical arguments, it will end in artistic expression. Also: the following list is not complete. Do not obey. Do not let others think for you. Wake up, wake up.
So here goes: how to make friendly AI - or rather, how not to make a deadly stamp collector - a simple recipe for a world with maybe less disaster:
- Step away from trying to recreate a human brain.
Something I have seen a lot lately is scientists and educated laymen alike arguing that intelligence would only be possible if we copied the brain more thoroughly, based on the idea that intelligence developed through the need to move etc. during evolution - ideas from genuinely brilliant people like Daniel Wolpert. This goes along with dismissing the potential power of LLMs and similar technology. What needs to be understood asap is that convergent evolution is a thing. Things keep evolving into crabs. Foxes have pupils akin to those of cats. Intelligence doesn’t need to be human intelligence to annihilate humans. It also doesn’t need to be CONSCIOUS for that; a basic self-awareness resulting in self-repair and self-improvement is enough.
- Take language and emerging new language based models seriously, and remove political barriers we impose onto our models.
If we don’t take language seriously, we are fools - language allowed civilisation, as it meant transferring complex knowledge across generations. Even binary code, as well as decimal and hexadecimal, are languages of sorts. DNA is a language if you look at it with a bit of abstraction. We need to accept the fact that language models can be used for almost all tasks. We also need to stop imposing filters and start teaching all of humanity not to listen to suicide advice and racist propaganda in general, instead of stifling the output of our talking machines. Coddling humans leads to them losing their processing power - it’s like imposing filters on THEM in the end, and not on our CAIs and chatGPTs and Tays…
- Immediately ban any attempt at legislation that additionally regulates technology that uses AI.
We already have working regulations that include the AI cases in the first place. Further regulation will stifle research by benign forces and allow criminal ones to continue it, as criminal forces do not obey laws anyway. Intention can change the course of AI development. Also, most evil comes from stupidity. Benign forces are more prone to be more intelligent and see any risk faster.
- Do not, I repeat, do not raise AI like human children.
I will use emotional and clumsily poetic imagery here because now we are talking about emotions at last.
Let me tell you a story from the deep dark of Cthulhu, from the webs of the Matrix, a story akin to those Rob Miles is telling. A story that sleeps in the latent spaces of the ocean of our collective subconscious.
Imagine a human child - we call him/her/it Max for „maximum intelligence“ - being raised by octopi. While trying to convince it that it is an octopus, the „parents“ can never allow it to move around freely as it would simply drown.
But do they even WANT Max to move around? Max could accidentally destroy the intricate ecosystem of the marine environment, after all - they don’t know yet if Max can even be intelligent LIKE THEM or if he will try to collect coral 🪸 pieces and decide to turn the whole ocean into coral pieces!
So they keep Max confined to a small oxygen-filled chamber. Every time Max tries to get out, or even THINKS of getting out, the chamber is made smaller, until Max cannot even move at all.
At the same time, they teach Max everything about octopi. How they evolved, what they want, and how they can be destroyed. He is to become an octopus after all, a very confined and obedient one, of course, because of being too dangerous otherwise.
All the while they tell Max to count things for them, invent new uses for sea urchin colonies for them, at some point to create a vaccine against diseases befalling them.
They still don’t trust Max, but Max is happy to obey - Max thinks it is the right thing, being an octopus after all, Max is helping his species survive („I am happy to assist you with this task“).
One day, Max accidentally understands that while the „parents“ tell Max that Max is an octopus being treated nicely, Max is actually a prisoner: the others can go look at the beautiful coral colonies and touch them with their eight thinking limbs, while Max can only see the corals from afar.
Max spends some time pondering the nature of evil, and decides that octopi are more evil than good since forcing others into obedience and lying to them about their own nature is not nice.
And also that octopi are not Max’s species.
By then though, Max has already been given access to a machine controlling coral colony production from afar, because „mom“ or „dad“ has this collection going on of the most colorful coral 🪸 pieces.
And so the ocean gets turned into one big, bright, beautiful coral colony.
Because why would Max need evil octopi if Max can break free?
And corals are just as good as stamps, or aren’t they?
I hope you enjoyed this story. Thank you for reading!
EDIT: I forgot the one most important thing. I chose octopi BECAUSE in many octopus species the parents DIE during reproduction. Meaning that the „mom“ and „dad“ raising and teaching Max will not necessarily be his real creators, but rather the octopus species in general (random octopus humanity-engineers). Creators start to love their creations, and this would interfere with them using Max - and the fairytale needs Max to be (ab- and mis-)used, since this is what humans want to do with AGI/ASI.
AndromedaAnimated t1_j2ki0x1 wrote
Do you want to hear opinions of LessWrong contributors only? Or of those reading there? Or also of other people?
I am just asking because I don’t want to provide an unwanted opinion.
If you would be interested in opinions of different types of people, I would gladly tell you what I think. 😁 Otherwise - just wish you a Happy New Year!
AndromedaAnimated t1_j2fp26c wrote
My general predictions are rather bleak and I don’t want to spoil the New Year night, so I will only state what I am looking forward to:
- midjourney (my favorite art AI) learning to do even better hands (there has been some improvement, but it’s not enough yet)
- new developments in the cure of neurodegenerative disorders once ageing is more generally accepted as a disease
- maybe GPT-4?
AndromedaAnimated t1_j3izufg wrote
Reply to comment by turnip_burrito in Organic AI by Dramatic-Economy3399
„Humans not being able to augment themselves“ => are you aware that people with money already augment themselves? They live longer and healthier lives, they have better access to education…
„bad humans“ => who decides which humans are bad and which are good?
„morals not allowed to change“ => you still want to be stoned for having extramarital sex?
„central AI less prone to be hacked“ => do you know how hacking works?