OldWorldRevival OP t1_j0mrnb6 wrote
Reply to comment by gantork in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
> By your logic all artists are unethical and exploitative.
"All."
That's an absolutely asinine conclusion that you stated just to be inflammatory because you're probably kind of an asshole troll type.
Some artists are absolute asshats who do rip people's ideas off rapidly, or collage other people's work together in Photoshop and paint over it (which is highly frowned upon). Doing stuff like that can destroy your reputation and cost you your professional career.
AI art is basically an automated version of what is already considered bad form.
You seem like you'd be one of these asshats if you actually took up art.
OldWorldRevival OP t1_j0mfjh1 wrote
Reply to comment by DaggerShowRabs in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
I think you might not be up to date on the topic.
Corridor Crew did an excellent video where they showcased the new tech and credited the artist whose name they used in the prompt, but the output was very, very much like that artist's images.
I think this may more be an awareness issue.
It is absolutely able to copy styles, and with high accuracy. I think artists on ArtStation are particularly angry because they're very up to date on the styles and trends of other artists, so many of them can also see whose work is being used.
OldWorldRevival OP t1_j0md448 wrote
Reply to comment by Wassux in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
> It's not exploiting. Are humans exploiting when they learn from others?
Let's say someone is a very technically talented artist, but isn't very visionary. There are a good number of people like this: they paint pretty boring subject matter, and do it well, but in a very derivative way.
Now say that this artist is friends with someone very creative who is developing a powerful, unique art style.
But then the technically talented artist copies the style and becomes famous for developing it, even though their friend did.
This is what AI art does at scale - it does something that is equally unethical when a person does it. It's just that, for the human version, people are usually protected by a de facto copyright system: you can trace who originated an art style through publishing dates, posting online, that sort of thing. Reputation, basically. AI gives people the ability to steal a style before its originator has built up any reputation.
So, yes, sometimes humans are exploiting others when they learn from them.
OldWorldRevival OP t1_j0mculy wrote
Reply to comment by Wassux in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
This is really an ignorant take.
I find that people who take this perspective really don't understand how derivative the works are, or how AI web crawling basically destroys people's ability to develop an artistic style and get credit for it, because the AI will gobble up their style and spit it back out with lightning speed without crediting them at all.
To say that this is exactly how humans do it too is absolutely insane. We have so many different types of much more complex, highly developed mental functions and a conscious experience.
You just want to play with this tool because you don't have the impulse control to wait a few months for more ethically developed tools to come around.
Supporting this nonsense is exactly how people will exploit legal loopholes to take advantage of you.
OldWorldRevival OP t1_j0mcbzs wrote
Reply to comment by AsheyDS in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
One of the potential scenarios I envision is that the only good way we end up discovering to solve the control problem is to tie the AI to a single person, with the AI constantly modeling that person's thoughts through a variety of methods, some that exist (such as language) and others that don't yet.
Then it continuously runs scenarios with this person.
The reason it's one person rather than two is that two makes the complexity and nuances of the problem a lot more difficult to handle from a human perspective.
The key to understanding AI is to understand that its abilities are lopsided. It's very fast at certain things and cannot do others, and it is not organized the way we are mentally (and organizing it that way would be dangerous, because we're dangerous).
OldWorldRevival OP t1_j0mbtee wrote
Reply to comment by WarImportant9685 in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
These people are going to lead us into slavery or being made irrelevant because they think that technology is magic.
Magic-minded fools.
Most of them probably aren't even in technical areas of study...
OldWorldRevival OP t1_j0lo52s wrote
Reply to comment by Wassux in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
Also... people seem to want to exploit others' data - such as artwork - and then assume that such an attitude won't come around to bite them in the ass.
OldWorldRevival OP t1_j0lnvgo wrote
Reply to comment by Wassux in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
Perception of public opinion is powerful.
People think that they don't matter, that their voice doesn't matter, but it absolutely does.
North Korea and China will forever be behind the USA on the AI front, even if we do it more ethically. North Korea is a joke.
Submitted by OldWorldRevival t3_zo9v5x in singularity
OldWorldRevival t1_j07qalb wrote
Reply to comment by CarlPeligro in Billy Corgan says AI systems will completely dominate music. by Aljanah
I think predicting what such a superintelligence will discover is fruitless.
Our biological systems are, in a sense, error-reduction systems like AI is. This is a simplification, of course. But the way this manifests is that whatever our present ideology is gives us no internal error tells. So we see the world through that lens.
Likewise, we project our belief systems onto these superintelligent AIs. "It will discover/prove X."
What AI will lack in art are two very important things: conscious experience and limitation.
That is, unless we make human 2.0 with suffering and limitation like we have, AI will not produce real art. It will produce eye candy, and it will push our buttons. But it won't be real, not in the way that matters.
This is going to create chaos and existential crises in people until they eventually understand this.
I do have some suspicions about what AI is going to demonstrate. I think it is going to psychologically unmake people.
See Derek Parfit's discussion of identity, as well as Buddhism, which more or less noticed the same thing before Parfit formalized the illusory nature of identity.
Also, consider the experience of ego death that people experience when taking psychedelics.
That is, I suspect the thing AI will show people, the thing it discovers to be true, is their own illusory nature as individuals. It will render unto people the understanding that they are nothing.
And people will seek this out because humans are curious.
OldWorldRevival OP t1_izynvth wrote
Reply to comment by sticky_symbols in An Odious but Plausible Solution to the Alignment Problem. by OldWorldRevival
Lmao Dwayne the Rock God Emperor of the Universe Johnson
OldWorldRevival OP t1_izy77m5 wrote
Reply to comment by sticky_symbols in An Odious but Plausible Solution to the Alignment Problem. by OldWorldRevival
I think people will get wind of this basically being the plan and we might end up picking someone democratically.
That is, I don't see AI researchers being the ones in control. Politically intelligent people have the highest chance.
Political intelligence tends to follow social intelligence, which in turn follows general intelligence, but it seems to run counter to technical intelligence. I.e., the more technically adept someone is, the less social they tend to be, and the degree to which they can be both reflects their general intelligence. That's my hypothesis anyway...
OldWorldRevival OP t1_izy65n6 wrote
Reply to comment by sticky_symbols in An Odious but Plausible Solution to the Alignment Problem. by OldWorldRevival
To me it seems like a sort of inevitable solution, really.
It's like, no matter what we do, we will fail, so the goal is to fail as optimally as possible.
OldWorldRevival OP t1_izxpd17 wrote
Reply to comment by AsheyDS in An Odious but Plausible Solution to the Alignment Problem. by OldWorldRevival
This realization has made me think that this is also how it's inevitably going to pan out.
Just as Mutually Assured Destruction (MAD) was the odious solution that kept nuclear warfare from happening, Singularly Assured Dominion is going to be the plan for AI, unless we can be really clever in a short time span.
People's optimism hasn't worn off yet because these systems are only just getting to a point where people realize how dangerous they are.
I'm planning to write a paper on this topic... probably with the help of GPT-3 to make the point.
OldWorldRevival OP t1_izxficr wrote
Reply to comment by turnip_burrito in An Odious but Plausible Solution to the Alignment Problem. by OldWorldRevival
I also believe that AI takeover is not only plausible but inevitable, whether it's a machine or a person at the helm.
It is inevitable because it is fundamentally an arms race. The more real, scary, and powerful these tools get, the more resources militaries will put into them.
A treaty banning killer robots is simply a nonstarter because, unlike with nuclear weapons, there is no stalemate game.
We still have nukes. We stopped developing new ones, but we still have nukes precisely because of this stalemate.
AI has no such stalemate. There will be no stalemate in AI.
I find it funny that we announced net-positive energy output from fusion just as AI starts getting scary... unlimited power for machines.
Submitted by OldWorldRevival t3_zjplm9 in singularity
OldWorldRevival t1_ixdppbx wrote
Reply to comment by JustAPerspective in TIL singer songwriter Leonard Cohen claimed to have written approximately 150 draft verses of his most famous song "Hallelujah", a claim substantiated by his notebooks containing manifold revisions and additions, and by contemporary interviews. by big_macaroons
No need to be hostile.
I was merely making the point that despite the existence of delete and undo, the process is still iterative. People still use paper to write lyrics; now it's just that different versions of melodies are kept in DAWs. But there's still a history of a song's development that gets kept.
OldWorldRevival t1_ixdjpyk wrote
Reply to comment by JustAPerspective in TIL singer songwriter Leonard Cohen claimed to have written approximately 150 draft verses of his most famous song "Hallelujah", a claim substantiated by his notebooks containing manifold revisions and additions, and by contemporary interviews. by big_macaroons
That's not the point I was making - I was just saying that iteration is still very much a part of the process.
OldWorldRevival t1_ixd53z4 wrote
Reply to comment by JustAPerspective in TIL singer songwriter Leonard Cohen claimed to have written approximately 150 draft verses of his most famous song "Hallelujah", a claim substantiated by his notebooks containing manifold revisions and additions, and by contemporary interviews. by big_macaroons
Nah.
I have drafts and drafts of a single thing I am working on, on my computer. Same process, different storage medium.
OldWorldRevival OP t1_j0msct1 wrote
Reply to comment by Fluffykins298 in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
I am actually not against AI at all - in fact, I considered going into it at one time (and still might, especially because the danger seems to be growing and there are some philosophical and technical talents I might be able to apply to a lot of specific AI problems).
> Criticism is great when it's well founded and comes from genuine concern, rather than people attacking the whole concept of AI because it lacks a "soul" or is "stealing" their job
So... it's going to replace all of our jobs. The other thing we need to get ahead on is actually getting UBI pushed through.
I'd be willing to fear-monger to get UBI pushed, especially with the way that conservatives tend to act.