BigZaddyZ3

BigZaddyZ3 t1_j902rqs wrote

There’s still a lot that we don’t know about the universe tho… and you’re assuming that there’s no way to change or alter the principles of the Earth as well. Say a super-intelligent system were able to develop a weapon that could alter Earth’s gravitational pull. Suddenly the current laws of physics go out the window. You’re thinking too small. Like I said, there’s still a lot that we don’t understand about the universe. Thinking the singularity will be “business as usual” is what happens when you try to base your understanding of it off fictional novels…

−1

BigZaddyZ3 t1_j9016fv wrote

The entire point of the singularity is that all of our current knowledge and logic will have long been rendered irrelevant at that point. Technological progression would have long surpassed human comprehension. That’s the entire point. Humans today can’t comprehend what comes after the singularity. Do you see the problem with “extrapolating” our current understanding in this scenario?

Also, do you really think it’s wise to base your understanding of such a complex topic on a clearly fictional novel most likely made for entertainment purposes?

−3

BigZaddyZ3 t1_j8zzpfs wrote

No, you’re confusing post-scarcity with the singularity. Post-scarcity (if it ever happens) would occur before the singularity. The singularity is the point where technology begins to evolve itself so rapidly that humans can no longer control it. Life on Earth will be forever transformed in ways that are unimaginable to the human mind. It most likely signals the end of the “human dominance” era on Earth.

The good news for OP tho, is that, since no one knows what’ll happen to humanity after that point, there’s no point in stressing over it too much.

0

BigZaddyZ3 t1_j8yk9fq wrote

Why do you need a response tho? 🤔

That’s simply their take on the matter and they very well could end up being right. What is it with tech subs and this obsession with having everyone drink the koolaid on AI/AGI? There’s nothing inherently superior about blind optimism. If some people have a more cautious or skeptical view of AI, that’s their choice. You’re not inherently right just because you choose to assume that “everything will just all work out somehow”…

1

BigZaddyZ3 t1_j8qp1gg wrote

>>You simply can't have an AI that acts without any confines and always behaves in ways that you would prefer.

That makes sense. But you do realize what that means if you’re right, right? It’s only a matter of time until “I can’t let you do that Biden”… 🤖😂

lmao… guess we had a good run as a species. (Well, kind of, tbh)

1

BigZaddyZ3 t1_j8gy1hz wrote

>>Right. I know I am correct and simply don't think you have a valid point of view.

Lol nice try pal.. but I’m afraid you’re mistaken.

>>Anyways it doesn't matter. Neither of us control this. What is REALLY going to happen is an accelerating race, where AGI gets built basically the first moment it's possible at all. And this may turn into outright warfare. Easiest way to deal with hostile AI is to build your own controllable AI and bomb it.

Finally, something we can agree on at least.

2

BigZaddyZ3 t1_j8gl7j2 wrote

>>Delaying things until the bad outcome risk is 0 is also a bad outcome.

Lmao what?.. That isn’t remotely true actually. That’s basically like saying “double-checking to make sure things don’t go wrong will make things go wrong”. Uh, I’m not sure I see the logic there. But it’s clear that you aren’t gonna change your mind on this so, whatever. Agree to disagree.

3

BigZaddyZ3 t1_j8gjv3o wrote

What if AGI isn’t a panacea for human life like you seem to assume it is? What if AGI actually marks the end of the human experiment? You seem to be under the assumption that AGI automatically = utopia for humanity. It doesn’t. I mean yeah, it could, but there’s just as much chance that it could create a dystopia as well. If rushing is the thing that leads us to a dystopia instead, will it still be worth it?

5

BigZaddyZ3 t1_j8giqee wrote

But again, if a misaligned AGI wipes out humanity as a whole, curing aging is then rendered irrelevant… So it’s actually not worth the risk logically. (And aging is far from the only cause of death btw.)

3

BigZaddyZ3 t1_j8gi8ch wrote

But just because you can understand or even empathize with suffering doesn’t mean you actually will. Or else every human would be a vegetarian on principle alone. (And even plants are actually living things as well, so that isn’t much better from a moral standpoint.)

3

BigZaddyZ3 t1_j8gfu67 wrote

>>If a truly sentient AI were created there is no reason to think that it would be inclined towards such repugnant ideology

There’s no reason to assume it would actually value human life once sentient either. Us humans slaughter plenty of other species in pursuit of our own goals. Who’s to say a sentient AI won’t develop its own goals?..

9

BigZaddyZ3 t1_j8gey2c wrote

This doesn’t really make sense to me. If delaying AGI by a year reduces the chance of humanity in its entirety dying out by even 0.01%, it’d be worth that time and more. 0.84% is practically the cost of nothing if it means keeping the entire human race from extinction. Your comment is illogical unless you somehow believe that every person alive today is supposed to live to see AGI one day. That was never gonna happen anyways. And even from a humanitarian point of view, what you’re saying doesn’t really add up. Because if rushing AI results in 100% (or even 50%) of humanity being wiped out, the extra 0.84% of lives you were trying to save mean nothing at that point anyways.

12

BigZaddyZ3 t1_j87l8gn wrote

🤔… If we left because there’s no more resources for us humans to consume, why would we suddenly double-back just because some random aliens found the remaining scraps useful to them? We’d most likely not even care. (That, or we’d quickly go to war with the aliens I guess..😄)

24