sumane12 t1_j9d1gul wrote
Reply to Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
I think there's a line to be drawn between constant, torturous fear and the ordinary panic of losing points. If I'm playing paintball, for example, I'm afraid of getting shot because it might hurt a little and I'll lose points for my team. That level of fear is good because it drives you towards productivity. But if an AI is sentient enough to experience fear, it deserves human rights and should be given the option to choose which games it wants to play.
Your question is reminiscent of plantation owners forcing enslaved people to work their fields without ever considering their feelings on the subject.
sumane12 t1_j7th7xg wrote
Reply to comment by hucktard in Based on what we've seen in the last couple years, what are your thoughts on the likelihood of a hard takeoff scenario? by bloxxed
I'm of a similar mind. We already have narrow superintelligent AI. I don't think a godlike super AI will appear instantly either, but I do think the first AGI will be an ASI. How could it not? Speed-of-light thinking, the ability to search the web instantly, no need to eat or sleep, the ability to copy itself multiple times to work on multiple tasks. I think a fast takeoff is inevitable; we already have a superintelligent assistant in the form of ChatGPT, and that will only improve.
That being said, I don't think the recursive self-improvement will be immediate. I think it will be quick, but it will still take a few years from AGI to the end of human invention and the godlike AI we expect to result. It's also not clear to me at what point we will merge with AI, or what the outcome of that will be; it may well be that we become the ASI.
sumane12 t1_j78xcac wrote
Speciation
sumane12 t1_j6kob8n wrote
Reply to comment by T51bwinterized in What jobs will be one of the last remaining ones? by MrCensoredFace
Yes very true.
sumane12 t1_j6jpzsg wrote
Reply to comment by ihateshadylandlords in What jobs will be one of the last remaining ones? by MrCensoredFace
Agree with everything apart from doctors and surgeons
sumane12 t1_j6i244k wrote
Reply to comment by ziplock9000 in Meta's chief AI scientist says "ChatGPT is not innovative". by ZaKodiak
Why is bitcoin number 1 by market cap? Its technology is arguably worse than that of other crypto coins.
Here's the definition of innovate:
>make changes in something established, especially by introducing new methods, ideas, or products: "the company's failure to diversify and innovate competitively"

>introduce (something new, especially a product): "we continue to innovate new products"
By definition, if something is popular it has to be innovative; people don't jump en masse onto the bandwagon of stuff that's been done for years.
sumane12 t1_j6hnmhp wrote
Reply to comment by bacchusbastard in Meta's chief AI scientist says "ChatGPT is not innovative". by ZaKodiak
Lol
Reaching a million users in its first week is the fastest adoption of any technology ever. It hit the million-user mark faster than Twitter, Facebook, YouTube, or any other social media platform. So it's a pretty big deal to investors.
sumane12 t1_j6hj4z5 wrote
It had a million users in its first week; it's impossible to call it "not innovative", because if it weren't, someone would already have done it.
This is a case of "it's not my product, therefore it's a bit shit"
It might be using outdated tech (although I'm not sure what timescale we're working with to call it outdated), but it is, 100%, by definition, innovative.
sumane12 t1_j681ecd wrote
Reply to comment by Rogue_Moon_Boy in I don't see why AGI would help us by TheOGCrackSniffer
👆 this guy gets it.
sumane12 t1_j62pb03 wrote
Reply to If given the chance in your life time, will join a theoretical transhumanist hive mind? by YobaiYamete
Arguably we already form a very rudimentary one when we converse with someone. We don't actually experience their consciousness, but we listen to the words they say and try to form a simulation of what they are experiencing (empathy). The thing is, it will always be warped through the lens of our own previous experiences, and we have the option to turn it off (stop conversing).
A hive mind would simply be an extension of this, although I'm not sure whether one would lose oneself in the hive. I think we would need a few decades to experiment with it; I know there are plenty of people whose consciousness I would certainly not want to share.
sumane12 t1_j5taf4k wrote
Reply to comment by Air_Holy in This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November by _dekappatated
Yes, but only when you consider the alternative.
sumane12 t1_j57mpeg wrote
Reply to comment by ImoJenny in When you imagine the future of technology, is it grim or is it hopeful? by ForesightInstitute
Sorry I was just answering the question. I didn't read the rest of the post
sumane12 t1_j57df96 wrote
Reply to When you imagine the future of technology, is it grim or is it hopeful? by ForesightInstitute
Hopeful.
Anything else is an extinction event so fingers crossed 🤞
sumane12 t1_j567fqu wrote
Reply to comment by BadassGhost in AGI by 2024, the hard part is now done ? by flowday
I agree: short-term memory and long-term learning will avoid hallucinations. GPT-3 + WolframAlpha does seem to have solved this problem; it's not a perfect solution, but it will do for now.
I'm very much an immediate-takeoff proponent when it comes to ASI. Not only can it think at light speed (humans tend to think at about the speed of sound), it has immediate access to the internet, it can duplicate itself over and over as long as there is sufficient hardware, and it can expand its knowledge indefinitely as long as you have more hard drive space.
With these key capabilities, and again I'm assuming an agent that can act and learn like a human, I just don't see how it would not be immediately superhuman in its abilities. Its self-improvement might take a few years, but as I say, I think its ability to outclass humans would be immediate.
sumane12 t1_j4pfswb wrote
Reply to comment by arachnivore in Singularity Mods removed this post about Nick Bostrom defending eugenics by arachnivore
>The fact that you and Nick Bostrom and aparently every other person on this sub can't be bothered to understand the not so subtle difference between curing diseases and breading a fucking UBERMENCH indicates to me the dire need for such a conversation instead of letting the mods shut it down.
FUCKING YES!!!
sumane12 t1_j4nlmfo wrote
Reply to comment by HurricaneSalad in Researchers develop an artificial neuron closely mimicking the characteristics of a biological neuron by MichaelTen
Get ChatGPT to explain it to you like you are 5
sumane12 t1_j4ngvw5 wrote
Reply to comment by sideways in How long until an AI is able to write a book? by Educational_Grab_473
This is the way.
I've said it before, and I'll say it again: ChatGPT reminds me of a 13-year-old who can never accept being wrong, has full access to the internet by thought, and is very forgetful.
sumane12 t1_j4ngfsy wrote
Arguably it can be done now: take the architecture of GPT-3's davinci model and allow it to process enough tokens to write a full book without forgetting things from the start. I actually worked with it to create a concept for a game idea I had, but after a certain number of words it gets confused. I was able to adjust for this by giving it summaries of where the story was up to and reminding it of points we had already incorporated.
So, tl;dr: 1) allow it to process more tokens, or 2) compensate for this with human guidance.
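The summary trick I describe above can even be automated. Here's a rough sketch, assuming a hypothetical `call_model` function standing in for whatever LLM API you're using: instead of feeding the whole draft back (which blows past the context window), you carry forward only a short running summary and the next chapter prompt.

```python
# A sketch of the "rolling summary" workaround: the model never sees the
# full draft, only a compressed summary of everything written so far plus
# the instruction for the next chapter. `call_model` is a placeholder for
# any text-completion API call (prompt in, text out).

def write_book(call_model, chapter_prompts, summary_limit=500):
    """Generate chapters one at a time, carrying a compressed summary forward."""
    summary = ""
    chapters = []
    for prompt in chapter_prompts:
        # Generate the next chapter from the summary, not the whole draft.
        chapter = call_model(
            f"Story so far: {summary}\nWrite the next chapter: {prompt}"
        )
        chapters.append(chapter)
        # Compress everything so far back under the limit for the next pass.
        summary = call_model(
            f"Summarize in under {summary_limit} characters: {summary} {chapter}"
        )[:summary_limit]
    return chapters
```

It's basically what I did by hand: remind the model of where the story was up to before each new section, so the prompt stays a constant size no matter how long the book gets.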
sumane12 t1_j4k7ex8 wrote
Reply to When will humans merge with AI by [deleted]
We already have. If you want a better interface, it's not clear to me if we will get neuralink type devices first, or nanobots that will gradually replace our neurons with artificial ones, making us post human. I guess time will tell, but I'm guessing 20 years.
sumane12 t1_j4hmefg wrote
Reply to I've stopped letting anxiety about the future control my life and it's done wonders for my mental health. by [deleted]
This is the way. Gl my friend.
sumane12 t1_j4gph80 wrote
15 people put never, hahaha that's hilarious. WHY ARE YOU HERE????
sumane12 t1_j4gkqva wrote
Reply to comment by [deleted] in What void are people trying to fill with transhumanism? by [deleted]
All seems reasonable. I hope you find what you're searching for.
sumane12 t1_j4gg8q5 wrote
Reply to comment by [deleted] in What void are people trying to fill with transhumanism? by [deleted]
>Full disclosure I am a former nihilist and technically agnostic believer. Though that agnosticism just seems like a technical trifle at this point.
Yeah, I kinda got that energy lol. I mean, I'm all for searching for a supernatural creator, but however far up those levels you go, you still face the point of nihilism. Also, any search for god through religion needs to recognise that at least 99% of religions are wrong, created solely to exploit the lower ranks of the religion and to modify societal behaviour.
I think meaning is whatever we make of it, that's the beauty of conscious subjectivity. How do we know that someone else doesn't have a completely different subjective experience than we do? And I think that question is beautiful.
I think if you can't generate meaning from your life today, how will finding god change that? Perhaps you are looking for a purpose greater than yourself? I don't know, I'm not a psychologist, but I would also say that just because you don't see meaning somewhere doesn't mean others don't, and it can be worthwhile dedicating yourself in service to those less fortunate.
sumane12 t1_j4fjweg wrote
Ouch that's nihilistic 😂
So I think this is a very deep philosophical question, but I'll try my best to engage with it, though I might not answer it directly.
When we look back through history, we see a lot of wasted potential in terms of people being stuck in a life of servitude, not able to explore beyond their own town, having no knowledge of what may be in store for humanity. That's not to say they were unable to appreciate the world around them but that appreciation was limited by their circumstances.
The singularity, and transhumanism by extension, seeks to remove those limitations one step at a time. For example, our biggest concern at the moment is scarcity of energy and natural resources. That is very much a problem that can be solved in the near future, which would let people spend more time appreciating life, enjoying the love of family and friends, and exploring the planet and the beauty it offers. The next stage would be mind uploading: the gradual replacement of biological neurons with artificial nanobot neurons. That opens countless possibilities. Apart from being able to travel anywhere at the speed of light, we could upload our consciousness into robots that are accurate replicas of our human bodies, and those robots could be placed on any planet in the solar system and potentially beyond. And then, as you alluded to, there's experiencing artificial worlds.
Now let's extrapolate a trillion trillion years into the future. We have broken the speed-of-light limit by warping spacetime and explored the entire universe; we have become a collective hive mind, collectively know everything about everything, and have experienced every possible reality we can simulate. The last black hole is about to evaporate via Hawking radiation, and the end of the universe, and of all life, is imminent. Does that mean it was all pointless? Should we not be interested in expanding our abilities and transcending our current situation just because we know it will eventually end?
I don't think so. In fact, I'd argue our lives are more meaningful than those of people in the past who were destined to live forever in their own little village and never experience anything new. So too, I think, the lives of transhumanity will be more meaningful than our own. Another analogy is our own life: ultimately we will die, but that doesn't make our life now meaningless, nor does it mean we should just kill ourselves. Instead we recognise how beautiful our life is and embrace it, because ultimately everything is temporary.
I hope that answers the question, at least from my perspective.
sumane12 t1_j9e8ewh wrote
Reply to Does anyone else have unrelenting hope for the technological singularity because they’ve lost faith in everything else? by bablebooee
No. This isn't supposed to be a religion.