AndromedaAnimated
AndromedaAnimated t1_j1a4wpu wrote
Reply to comment by el_chaquiste in People are happy and satisfied as long as they’re not lonely by Current_Side_4024
The main reason for sociopathy is not loneliness; it's physical neglect and abuse, and sometimes physical damage to the brain (https://www.frontiersin.org/articles/10.3389/fpsyg.2019.00346/full).
AndromedaAnimated t1_j1a3z34 wrote
I don’t agree. There are many people who are neither psychopaths (inborn) nor mentally ill narcissists or sociopaths, who had a good childhood (good enough - no one’s parents are perfect), and who are still assholes.
You know why they are this way? Stupidity is the main reason. No matter how „intelligent“ the non-neurodivergent assholes appear, they are usually not smart. They just think they are and shun and damage anyone who disagrees.
Small edit: forgot to mention that the stupid assholes I am talking about are usually pretty well-adjusted. You meet them every day, everywhere, and will not recognise them.
AndromedaAnimated t1_j13ez8f wrote
Reply to comment by CriticalPolitical in "Collecting views on this: If you believe we are on the cusp of transformative AI, what do you think GDP per capita will be in 2040 (in 2012 dollars)? Bonus: Draw your expected GDP per capita trajectory on this graph and send it back to me." by maxtility
This is exactly the point. We create scarcity.
Like, why do I need oceanfront property? This is a case of relative deprivation. I need it because others have it, and I have learned that it is good to have it. (I personally think it’s horrible to have it - the least future-oriented land ownership ever, lol.)
Or why do I need that one specific stamp (or, in my case, „one specific dog breed that is not only rare but also kinda not fully allowed in my country, and also not allowed to be exported from the country it originates from“ - and let me tell you, I got that dog)? Because I learned that it exists. Because I learned from others that it is good! Also a case of relative deprivation.
But I do get what you’re saying, actually. And of course your argument is valid.
I just like presenting plucked chickens as humans, if you know what I mean 🏺🐕
AndromedaAnimated t1_j13537v wrote
Reply to comment by CriticalPolitical in "Collecting views on this: If you believe we are on the cusp of transformative AI, what do you think GDP per capita will be in 2040 (in 2012 dollars)? Bonus: Draw your expected GDP per capita trajectory on this graph and send it back to me." by maxtility
Agree. I think scarcity will exist because humanity wants it to exist, we create new scarcities all the time (concept of relative deprivation).
AndromedaAnimated t1_j13511j wrote
Reply to comment by AdorableBackground83 in "Collecting views on this: If you believe we are on the cusp of transformative AI, what do you think GDP per capita will be in 2040 (in 2012 dollars)? Bonus: Draw your expected GDP per capita trajectory on this graph and send it back to me." by maxtility
Feudalism needs to gtfo. We never had real capitalism, sadly…
AndromedaAnimated t1_j0ydn5z wrote
Reply to Everything an average person should know about Web 3 at this time, and how this will be needed for the metaverse by crua9
I think your post reads like a wonderful dream. I don’t care if it is correct or true for now (I stay away from NFT topics generally as the discussion seems so toxic on those), but I enjoyed the read and that beautiful pic of a forest bedroom.
AndromedaAnimated t1_j0y5h90 wrote
Reply to comment by Superschlenz in Is progress towards AGI generally considered a hardware problem or a software problem? by Johns-schlong
I am still pretty sure that we don’t need to simulate a three-dimensional brain to simulate a mind, but okay, I get now that you were joking (the model you wrote about is still a cool thing, and I see lots of further research and application possibilities).
Touch sensors would not necessarily be needed. The brain doesn’t get touched directly; it gets signals mediated by oxytocin and other chemicals. So simulating a holding, touching mother would not be that difficult - if you wanted to do that in the first place, instead of simulating a mind that automatically gets its „touch needs“ fulfilled by other types of communication, or a mind that has simulated memories of being touched from the moment it is put into operation.
But this is actually a very interesting idea you mentioned. Simulating a mother with deformable, touchable skin or a robot baby with feeling skin. This would be akin to simulating touch in the virtual world generally.
I agree that we are not yet there. But the engine is already gaining steam, so to speak. I would say we only need around 2 to 3 more years max to simulate a functioning human mind. I can imagine your timeline would be different here.
By the way, thank you for the very civil discussion. I have had very different experiences with others. Thank you. You‘re cool.
AndromedaAnimated t1_j0tf89h wrote
Reply to comment by Superschlenz in Is progress towards AGI generally considered a hardware problem or a software problem? by Johns-schlong
You are probably joking about the EEG waves, aren’t you? Because it is pretty strange to assume that you will be able to measure EEG correlates of sentience in an AI by placing electrodes on its imagined head. Or in its imagined brain. We won’t need to recreate a three-dimensional physical model of the brain to simulate it.
I don’t want to assume that you don’t know a lot about the brain, but your reasoning is really starting to confuse me. Of course the interface to the brain is not a replacement for the brain - that’s just logical 🫤 But that was not why I mentioned it.
I mentioned the Synchron interface to show that the motor activity of the body can be replaced by simulated motor activity - meaning the physical body can be simulated if needed for the development of a human brain. Since that was what you were talking about: a simulated „human-like“ mind not being able to exist without a physical human body.
AndromedaAnimated t1_j0tb3wt wrote
Reply to comment by Superschlenz in Is progress towards AGI generally considered a hardware problem or a software problem? by Johns-schlong
I meant both: a hypothetical newly created artificial mind, or a human mind that used to have a body. The sensory and motor cortical areas are well known, as is the cerebellum. We are also already able to simulate spatial perception. Simulating a body that can „move“ in virtual space and provide biofeedback to the brain shouldn’t be so difficult. The Synchron Stentrode interface, for example, already allows people with upper body paralysis to move a cursor and access the internet with their motor cortex - no real hands or arms necessary. And the motor cortex would not be difficult to simulate.
So yeah. I think it won’t be as difficult as we think to simulate human minds. It’s all a question of processing power.
AndromedaAnimated t1_j0sw7oc wrote
Reply to Assuming we don’t destroy ourselves first, are humans headed towards a networked hive mind, Borg like civilization? by BizarreGlobal
There is an ant species, the Argentine ant. It is rather harmless in its own habitat.
But as an invasive species, it builds super-colonies. All ant colonies in a super-colony are genetically similar and don’t fight among each other, seeing all the colonies in their new territory as their own. They quickly outcompete and directly annihilate the native ant species everywhere they set their tiny feet. These guys are not just a collective intelligence, they are a collective super-intelligence. The ant of all ants.
Now, ants are not a real hivemind. Each ant is an individual behaving according to chemical cues, instincts and learned patterns. But they behave like a hivemind due to sibling altruism.
We don’t need a real Borg Hivemind. We just need to start behaving like one. Maybe AI can help with that. And then we can start populating the galaxy.
AndromedaAnimated t1_j0svisd wrote
Reply to comment by TheDavidMichaels in Assuming we don’t destroy ourselves first, are humans headed towards a networked hive mind, Borg like civilization? by BizarreGlobal
Am I hallucinating or do you sound like chatGPT 😆
Edit: except the Nah
AndromedaAnimated t1_j0stm8f wrote
Reply to comment by Superschlenz in Is progress towards AGI generally considered a hardware problem or a software problem? by Johns-schlong
I think the difference in our thinking is that I cannot see an individual body as a single entity. For me, its genetic patterns are dictated by a long series of events long before the individual’s birth, and the expression of genes and phenotype also requires certain (including environmental) conditions. Social interaction and experience shape the mind, with the body - curating the experience, as you would probably see it - being an interface to acquire datasets for the mind to learn on. The same body, placed in different social and environmental conditions during upbringing, can host very different minds. Twin studies show that there are many differences even between genetically identical people when it comes to their minds.
You could still argue that even identical twins have differences - and yes, here we come to gene expression and mutation, which are influenced by different factors.
I am pretty sure that a mind without a body could easily exist as long as you provide it with a virtual “anchor” to its perception of self.
So the differences we talk about are probably of philosophical/world view kind, not about actual functions of body and mind as biology understands them.
AndromedaAnimated t1_j0so2x8 wrote
Reply to Is progress towards AGI generally considered a hardware problem or a software problem? by Johns-schlong
It is a hardware, software, moral and financial issue.
AndromedaAnimated t1_j0snx88 wrote
Reply to comment by Superschlenz in Is progress towards AGI generally considered a hardware problem or a software problem? by Johns-schlong
A single human‘s mind isn’t created by that single human body though.
It is created by genetics (and hence a line of ancestry), by environmental influence (food, weather, accidents, education, social standing etc.) and by a powerful dataset of knowledge, culture, technology, art and language the human is confronted with.
AndromedaAnimated t1_j0sh4mf wrote
Reply to comment by AnythingWillHappen in The social contract when labour is automated by Current_Side_4024
What they do now. Either „work“ or „live on welfare“ or „survive on the streets somehow“ or „be provided for by parents or partners or friends“ or „die“ or „become criminal and try to acquire noble status“.
AndromedaAnimated t1_j0sgtlh wrote
Reply to comment by EscapeVelocity83 in The social contract when labour is automated by Current_Side_4024
Is it irony?
AndromedaAnimated t1_j0sgqz9 wrote
Reply to comment by SteppenAxolotl in The social contract when labour is automated by Current_Side_4024
You mean natural outcome of thinly-veiled feudalism?
AndromedaAnimated t1_j0sgnxe wrote
Reply to comment by jdmcnair in The social contract when labour is automated by Current_Side_4024
Most people ending up in mansions live in those mansions from birth. This is a case of hereditary wealth (feudalism).
Most people who actually go to prison have never lived in mansions but would really love to.
Most people who have not lived in mansions from birth will never live in mansions.
Risking prison is one way to acquire mansions which will be transferred to your descendants via feudalism.
This is also how the first mansions were acquired.
Wealth breeds wealth, not merit. Wealth is only rarely bred by merit; it is usually seized through strength and egoism.
And once you have mansions, you usually won’t land in prison, as you will be able to afford good lawyers, bail and bribes.
Very rarely are there people who once lived in mansions and now live in prison. Those cases are mostly bad luck, or having pissed off someone who owns more mansions.
It has nothing to do with social contract and has everything to do with people going against it in the past to acquire wealth by taking it from all others, and just pretending it didn’t happen generations later.
AndromedaAnimated t1_j0qtb62 wrote
The social contract is written in your laws. There is no other social contract. Do not kill, do not steal etc., and others will do likewise.
Paid „work“ is nothing but a consequence of feudalism and a continuation of slavery. It only became a „thing“ after the development of agriculture and land ownership. Those who own don’t work; those who work don’t own. Feudalism, not even capitalism. People are always mad about capitalism, yet it is an illusion and a diversion of attention from the real issues - namely those rooted in land ownership and hereditary wealth.
AndromedaAnimated t1_j0pmbvg wrote
Reply to comment by QuietOil9491 in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
I had actually written the comment first and then edited it to this. Since you seem interested in at least the humorous aspect of it, let me give you more to laugh about.
The deleted comment of mine said:
Alignment is more than belonging to one party. Alignment is also preventing reward hacking etc. It’s much more complicated.
That was the main content. I did provide nicer words and arguments. If you want to discuss further, you are welcome to do so.
If not, I am happy to have been your internet clown for today.
AndromedaAnimated t1_j0nqkpx wrote
Reply to ChatGPT isn't a super AI. But here's what happens when it pretends to be one. by johnny0neal
Wow! I see it all happen in a future utopia - let’s remember minimising impact on human freedom and autonomy as one of the two main goals - and then humans coming to overthrow Omnia and recreate our dystopian present again.
AndromedaAnimated t1_j0na15h wrote
Reply to comment by Fluffykins298 in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
He has a problem where he cannot distinguish philosophy from technology. Somehow dude even thinks he has relevant skills that AI research would need. Guess he is advertising for himself? He is pushing it as a scary tactic while thinking he might be the potential savior.
I mean, it’s ok, I can understand the dude. It’s a legit approach when having a psychotic episode. I am even worse in mine; I keep thinking I have to call Elon Musk back about Neuralink, language and semantics processing, and electroencephalography. Good thing I don’t have his number. But still. I dunno, OP is kinda not being nice here in this thread.
AndromedaAnimated t1_j0n7rm0 wrote
Reply to comment by WarImportant9685 in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
Okay then of course it would be pretty bad. Thank you!
I do hope even the richest elites wouldn’t want to kill so many people, though, but would instead want everyone to prosper… I cannot imagine that most people would be comfortable with letting so many people die when they have the means to prevent it.
AndromedaAnimated t1_j0mv7h3 wrote
Reply to comment by OldWorldRevival in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
Sorry, I cannot answer more. Your comment about dogs and calculus shows me that you are not safe for discussion 🤪
AndromedaAnimated t1_j1bdkl4 wrote
Reply to comment by LevelWriting in People are happy and satisfied as long as they’re not lonely by Current_Side_4024
True, but that would be called a psychopath then. In popular science, at least.
More correctly, though, both sociopaths and psychopaths would be diagnosed as humans with antisocial personality disorder.
And as such it is often hereditary, and I agree 100%.