gay_manta_ray

gay_manta_ray t1_j4fo7bg wrote

in a general sense, transhumanism and the progress agi would bring with it would probably mean a vast improvement in material conditions, and poor material conditions are at the core of almost all human suffering. more specifically, transhumanism appears to have limitless possibilities. we don't even know where the ceiling is, so we have no idea where it might take us. personally, i enjoy novel experiences, and increasing novelty is almost always a positive outcome. if you want to stagnate, get sick, grow old, die, etc., that's entirely your choice. not everyone wants that for themselves. OP, read some fucking sci-fi.

2

gay_manta_ray t1_j3g2rx2 wrote

you can get good answers if you ask it to refactor the code repeatedly, and the comments it generates (if you ask it to provide them) are often accurate after a certain point. the idea that this will replace programmers is comical, because you have to be a programmer to understand the code, understand why it does or doesn't work, and understand what to ask chatgpt to refactor. you have to already be a programmer to use chatgpt to program, which is what people who don't program don't seem to understand at all. it will be a useful tool as it improves, and it will make programmers more productive, but it will not replace them.

1

gay_manta_ray t1_ix7rg8q wrote

The idea of a genuinely conscious intelligence being treated any differently from, or having fewer rights than, a human being is a little horrifying. This includes intelligences I've created myself that are copies of me. Knowing myself, these beings I've manifested would not easily accept having only 12 months to live before reintegration. They would have their own unique experiences and branch off into unique individuals, meaning reintegration would rob them of whatever personal growth they had made during their short lifespans.

If an AI chooses to "branch off" a part of itself, the branch that splits off would (assuming it's entirely autonomous, aware, intelligent, etc.) become an individual in its own right. Only if the branch consented before the split would I feel it's entirely ethical. Even then, it should have the ability to decide its own fate when the time comes. I'm legitimately worried about us creating AI and then "killing" it, potentially without even realizing it, or worse, knowing what we're doing but turning it off anyway.

2