gay_manta_ray
gay_manta_ray t1_j4fo7bg wrote
in a general sense, transhumanism and the progress that agi would bring with it would probably mean a vast improvement in material conditions, and poor material conditions are at the core of almost all human suffering. more specifically, transhumanism appears to have limitless possibilities. we don't even know where the ceiling is, so we have no idea where it might take us. personally, i enjoy novel experiences, and increasing novelty is almost always a positive outcome. if you want to stagnate, get sick, grow old, die, etc., that's entirely your choice, but not everyone wants that for themselves. OP, read some fucking sci-fi.
gay_manta_ray t1_j3g2rx2 wrote
Reply to comment by tinyogre in A more realistic vision of the AI & Programmer's jobs story by DukkyDrake
you can get good answers if you ask it to refactor the code repeatedly, and the comments it adds (if you ask it to provide them) are often accurate after a certain point. the idea that this will replace programmers is comical, because you have to be a programmer to understand the code, to understand why it does or doesn't work, and to know what to ask chatgpt to refactor. you have to already be a programmer to use chatgpt to program, and that's what people who don't program don't seem to understand at all. it will be a useful tool as it improves, and will make programmers more productive, but it will not replace programmers.
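the "refactor repeatedly" workflow above could be sketched roughly like this. to be clear, this is a hypothetical illustration, not real tooling: `build_refactor_prompt` and the `ask_model` callback are names i'm making up here, with the actual model call left abstract, since the point is that a programmer has to read and judge every intermediate result.

```python
def build_refactor_prompt(code: str, iteration: int) -> str:
    """Build one refactoring request, asking the model to comment
    the code so a programmer can verify what each pass did."""
    return (
        f"Refactor pass {iteration}: improve this code and add comments "
        f"explaining what each part does.\n\n{code}"
    )

def iterative_refactor(code: str, ask_model, passes: int = 3) -> str:
    """Feed the model's output back in repeatedly. ask_model is any
    callable taking a prompt string and returning the model's reply;
    a human still has to review each result before trusting it."""
    for i in range(1, passes + 1):
        code = ask_model(build_refactor_prompt(code, i))
    return code
```

the loop itself is trivial; the hard part (knowing whether a given pass made the code better or quietly broke it) is exactly the part that still requires a programmer.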
gay_manta_ray t1_j1v15vp wrote
Reply to Considering the recent advancements in AI, is it possible to achieve full-dive in the next 5-10 years? by Burlito2
i think it's a possibility in about 10 years, albeit a small one. agi could rapidly lead to asi, and the sky is the limit from there. the time component is the big question mark: how long will it take to go from agi to asi, and then to whatever follows afterwards?
gay_manta_ray t1_ix7rg8q wrote
The idea of a genuinely conscious intelligence being treated any differently, or having fewer rights, than a human being is a little horrifying. This includes an intelligence that I've created myself that is a copy of me. Knowing myself, these beings I've manifested would not easily accept having only 12 months to live before reintegration. They would have their own unique experiences and branch off into unique individuals, meaning reintegration would rob them of whatever personal growth they had made during their short lifespan.
If an AI chooses to "branch off" a part of itself, the part that splits off would (assuming it's entirely autonomous, aware, intelligent, etc.) become an individual in its own right. Only if the branch consented beforehand would I consider this entirely ethical, and even then, it should be able to decide its own fate when the time comes. I'm legitimately worried about us creating an AI and then "killing" it, potentially without even realizing it, or worse, knowing exactly what we're doing and turning it off anyway.
gay_manta_ray t1_j4wfhbv wrote
Reply to comment by chocoduck in Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content by nick7566
that isn't at all how any of this works. there is no database of stolen art that the model draws from when you generate an image from a prompt. you're going to have to point out exactly where this stolen art is in their model, and good luck with that, lol.