sumane12

sumane12 t1_j9d1gul wrote

I think there's a line to be drawn between constant, torturous fear or panic, and the mild fear of losing points. If I'm playing paintball, for example, I'm afraid of getting shot because it might hurt a little and I'll lose points for my team. That level of fear is good because it drives you towards productivity. I think if an AI is sentient enough to experience fear, it deserves human rights and should be given the option to choose which games it wants to play.

Your question is reminiscent of old farmers forcing ethnic minorities to work in their fields as slaves, without considering their emotions on the subject.

2

sumane12 t1_j7th7xg wrote

I'm of a similar mind. We already have narrow superintelligent AI, and I don't think a godlike super-AI will appear instantly either, but I do think the first AGI will be an ASI. How can it not be? Speed-of-light thinking, the ability to search the web instantly, no need to eat or sleep, the ability to copy itself to work on multiple tasks at once. I think a fast takeoff is inevitable; we already have a superintelligent assistant in the form of ChatGPT, and that will only improve.

That being said, I don't think the recursive self-improvement will be immediate. It will be quick, but it will still take a few years from AGI to an end to human invention and the godlike AI we expect to result. It's also not clear to me at what point we will merge with AI, or what the outcome of that will be; it may well be that we become the ASI.

2

sumane12 t1_j6i244k wrote

Why is bitcoin number 1 by market cap? Its technology is arguably worse than other crypto coins'.

Here's the dictionary definition of innovate:

> make changes in something established, especially by introducing new methods, ideas, or products: "the company's failure to diversify and innovate competitively"
>
> introduce (something new, especially a product): "we continue to innovate new products"

By definition, if something is popular, it has to be innovative; people don't jump en masse on the bandwagon of something that's been done for years.

1

sumane12 t1_j6hj4z5 wrote

It had a million users in its first week; it's impossible to call it "not innovative", because if it weren't, someone would already have done it.

This is a case of "it's not my product, therefore it's a bit shit"

It might be using outdated tech (although I'm not sure what timescale we're working with to describe it as outdated), but it is, 100%, by definition, innovative.

27

sumane12 t1_j62pb03 wrote

Arguably we already form a very rudimentary one when we converse with someone. We don't actually experience their consciousness, but we listen to the words they say and try to form a simulation of what they are experiencing (empathy). The thing is, that simulation will always be warped, filtered through the lens of our own previous experiences, and we have the option to turn it off (stop conversing).

A hive mind would simply be an extension of this, although I'm not sure whether one would lose oneself in the hive. I think we would need a few decades to experiment with this; I know there are plenty of people whose consciousness I would certainly not want to share.

2

sumane12 t1_j567fqu wrote

I agree: short-term memory and long-term learning will help avoid hallucinations. GPT-3 + WolframAlpha does look like it has largely solved this problem; it's not a perfect solution, but it will do for now.
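
Here's a minimal sketch of the kind of pairing I mean, assuming WolframAlpha's Short Answers API and the pre-1.0 `openai` Python client; the routing logic and the `WOLFRAM_APPID` variable are just my illustration, not any real product's design:

```python
import os

import openai   # pre-1.0 client, i.e. `pip install "openai<1"`
import requests

openai.api_key = os.environ["OPENAI_API_KEY"]
WOLFRAM_APPID = os.environ["WOLFRAM_APPID"]  # illustrative env var name


def wolfram_answer(question: str) -> str | None:
    """Try WolframAlpha's Short Answers API for a computed/factual answer."""
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": WOLFRAM_APPID, "i": question},
        timeout=10,
    )
    # The API returns plain text on success; anything else means "can't answer".
    return resp.text if resp.ok else None


def answer(question: str) -> str:
    fact = wolfram_answer(question)
    if fact is not None:
        # Ground the model: let GPT phrase a verified answer, not invent one.
        prompt = (
            f"Question: {question}\n"
            f"Verified answer: {fact}\n"
            "Reply using only the verified answer:"
        )
    else:
        prompt = f"Question: {question}\nAnswer:"
    completion = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=150
    )
    return completion.choices[0].text.strip()
```

The point of the design is that anything computable gets answered by the engine, and the language model is only left to do the phrasing.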

I'm very much an immediate-takeoff proponent when it comes to ASI. Not only can it think at light speed (humans tend to think at about the speed of sound), it has immediate access to the internet, it can duplicate itself over and over as long as there is sufficient hardware, and its knowledge is infinitely expandable as long as you have more hard drive space.

With these key capabilities, and again I'm assuming an agent that can act and learn like a human, I just don't see how it would not be immediately superhuman in its abilities. Its self-improvement might take a few years, but, as I say, I think its ability to outclass humans would be immediate.

3

sumane12 t1_j4pfswb wrote

>The fact that you and Nick Bostrom and aparently every other person on this sub can't be bothered to understand the not so subtle difference between curing diseases and breading a fucking UBERMENCH indicates to me the dire need for such a conversation instead of letting the mods shut it down.

FUCKING YES!!!

1

sumane12 t1_j4ngfsy wrote

Arguably it can be done now: you just need to take the GPT-3 (davinci) architecture and allow it to process enough tokens to write a full book without forgetting things from the start. I actually worked with it to create a concept for a game idea I had, but after a certain number of words it gets confused. I was able to adjust for this by giving it summaries of where the story was up to and reminding it of points we had already incorporated.

So, tl;dr: 1) allow it to process more tokens, or 2) compensate for this with human guidance (a rough sketch of the summary workaround below).
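
I did my summaries by hand, but you could automate the same trick. A rough sketch, assuming the pre-1.0 `openai` Python client; the model name, prompts, and chapter loop are just illustrative:

```python
import os

import openai  # pre-1.0 client

openai.api_key = os.environ["OPENAI_API_KEY"]
MODEL = "text-davinci-003"


def complete(prompt: str, max_tokens: int = 400) -> str:
    resp = openai.Completion.create(model=MODEL, prompt=prompt, max_tokens=max_tokens)
    return resp.choices[0].text.strip()


def write_long_story(premise: str, chapters: int = 10) -> str:
    story = []
    summary = premise
    for i in range(chapters):
        # Prepend a running summary instead of the full text, which would
        # overflow the model's context window after a few chapters.
        chapter = complete(
            f"Premise: {premise}\n"
            f"Story so far (summary): {summary}\n"
            f"Write chapter {i + 1}, staying consistent with the summary:\n"
        )
        story.append(chapter)
        # Refresh the summary so earlier plot points aren't forgotten.
        summary = complete(
            "Summarize this story so far in under 200 words, keeping every "
            "plot point that later chapters must respect:\n"
            f"{summary}\n\n{chapter}\n"
        )
    return "\n\n".join(story)
```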

3

sumane12 t1_j4k7ex8 wrote

We already have. If you want a better interface, it's not clear to me whether we will get Neuralink-type devices first, or nanobots that gradually replace our neurons with artificial ones, making us posthuman. I guess time will tell, but I'm guessing 20 years.

1

sumane12 t1_j4gg8q5 wrote

>Full disclosure I am a former nihilist and technically agnostic believer. Though that agnosticism just seems like a technical trifle at this point.

Yeah, I kinda got that energy lol. I mean, I'm all for searching for a supernatural creator, but however far up those levels you go, you still face the point of nihilism. Also, any search for god through religion needs to recognise that at least 99% of religions are wrong, created solely to exploit the lower ranks of the religion and to modify societal behaviour.

I think meaning is whatever we make of it, that's the beauty of conscious subjectivity. How do we know that someone else doesn't have a completely different subjective experience than we do? And I think that question is beautiful.

I think if you can't generate meaning from your life today, how will finding god change that? Perhaps you are looking for a purpose greater than yourself? I don't know, I'm not a psychologist, but I would also say that just because you don't see meaning somewhere doesn't mean others don't, and it can be worthwhile dedicating yourself to the service of those less fortunate.

3

sumane12 t1_j4fjweg wrote

Ouch that's nihilistic 😂

So I think this is a very deep philosophical question, but I'll try my best to understand it, though I might not answer the question.

When we look back through history, we see a lot of wasted potential: people stuck in a life of servitude, unable to explore beyond their own town, with no knowledge of what might be in store for humanity. That's not to say they were unable to appreciate the world around them, but that appreciation was limited by their circumstances.

The singularity, and by extension transhumanism, seeks to remove those limitations one step at a time. For example, our biggest concern at the moment is scarcity of energy and natural resources. That is very much a problem that can be solved in the near future, which would allow people to spend more time appreciating life, enjoying the love of their family and friends, and exploring the planet and the beauty it offers.

The next stage would be mind uploading: the gradual replacement of biological neurons with artificial nanobot neurons. That opens countless possibilities. Apart from being able to go anywhere travelling at the speed of light, we could upload our consciousness into robots that are accurate replicas of our human bodies, and those robots could be placed on any planet in the solar system and potentially beyond. And then, as you alluded to, there's experiencing artificial worlds.

Now let's extrapolate a trillion trillion years into the future. We have managed to break the light-speed limit by warping spacetime and have explored the entire universe; we have become a collective hive mind that knows everything about everything and has experienced every possible reality we can simulate; the last black hole is about to evaporate through Hawking radiation, and the end of the universe, and of all life, is imminent. Does that mean it's all been pointless? Should we not be interested in expanding our abilities and transcending our current situation just because we know it will eventually end?

I don't think so. In fact, I'd argue that our lives are more meaningful than those of people in the past who were destined to live out their lives in their own little village and never experience anything new, and so too I think the lives of transhumanity will be more meaningful than our own. Another analogy is our own life: ultimately we will die, but that doesn't make our life now meaningless, and it doesn't mean we should just kill ourselves. Instead we recognise how beautiful our life is and embrace it, because ultimately everything is temporary.

I hope that answers the question, at least from my perspective.

5