OldWorldRevival

OldWorldRevival OP t1_j0msct1 wrote

I am actually not against AI at all - in fact, I considered going into it at one time (and still might, especially because the danger seems to be growing and there are some philosophical and technical talents I might be able to apply to a lot of specific AI problems).

> Criticism is great when it's well-founded and comes from genuine concern rather than people attacking the whole concept of AI because it lacks a "soul" or is "stealing" their job

So... it's going to replace all of our jobs. The other thing we need to get ahead on is actually getting UBI pushed through.

I'd be willing to fear-monger to get UBI pushed, especially with the way that conservatives tend to act.

0

OldWorldRevival OP t1_j0mrnb6 wrote

> By your logic all artists are unethical and exploitative.

"All."

That's an absolutely asinine conclusion that you stated just to be inflammatory because you're probably kind of an asshole troll type.

Some artists are absolute asshats and do rip off people's ideas rapidly, or collage other people's work together in Photoshop and paint over it (which is highly frowned upon). Doing stuff like that can destroy your reputation and cost you your professional career.

AI art is basically an automated version of what is already considered bad form.

You seem like you'd be one of these asshats if you actually took up art.

−3

OldWorldRevival OP t1_j0mfjh1 wrote

I think you might not be up to date on the topic.

Corridor Crew did an excellent video where they showcased the new tech and credited the artist whose name they used in the prompt, and the output was very, very much like that artist's images.

I think this may be more of an awareness issue.

It is absolutely able to copy styles, and with high accuracy. I think artists on ArtStation are particularly angry because they're very up to date on the styles and trends of artists, so many of them can also see whose work is being used.

2

OldWorldRevival OP t1_j0md448 wrote

> It's not exploiting. Are humans exploiting when they learn from others?

Let's say someone is a very technically talented artist, but isn't very visionary. There are a good number of people like this: they paint pretty boring subject matter, and do it well, but in a very derivative way.

Now say that this artist is friends with someone who is very creative and comes up with a powerful, unique art style.

But then the technically talented artist copies the style and becomes famous for developing it, even though their friend actually did.

This is what AI art does at scale - something that is equally unethical when a person does it. It's just that for the human version, people are usually protected by a de facto copyright system: you can trace who originated an art style through publishing dates, online posting, that sort of thing. Reputation, basically. AI gives people the ability to steal a style before its originator develops a reputation.

So, yes, sometimes humans are exploiting others when they learn from them.

1

OldWorldRevival OP t1_j0mculy wrote

This is really an ignorant take.

I find that people who take this perspective don't really understand how derivative the works are, or how AI web crawling basically destroys people's ability to develop an artistic style and get credit for it: the AI will gobble up their style and spit it out with lightning speed without crediting them at all.

To say that this is exactly how humans do it too is absolutely insane. We have so many more complex, highly developed mental functions, and a conscious experience.

You just want to play with this tool because you have no impulse control to wait a few months for more ethically developed tools to come around.

Supporting this nonsense is exactly how people will exploit legal loopholes to take advantage of you.

−3

OldWorldRevival OP t1_j0mcbzs wrote

One of the potential scenarios I envision is that the only good way we end up discovering to solve the control problem is to tie control of the AI to a single person, in that the AI constantly models that person's thoughts through a variety of different methods, some that exist (such as language) and others that do not yet.

Then it continuously runs scenarios with this person.

The reason it's one person rather than two is that two makes the complexity and nuances of the problem a lot more difficult from a human perspective.

The key to understanding AI is to understand that its abilities are lopsided. It's very fast at certain things and cannot do other things, and is not organized in the way that we are mentally (and doing so would be dangerous because we're dangerous).

0

OldWorldRevival t1_j07qalb wrote

I think predicting what such a superintelligence will discover is fruitless.

Our biological systems are, in a sense, error-reduction systems like AI is. This is a simplification, of course. But the way this manifests is that whatever our present ideology is gives us no internal error tells. So we see the world through that lens.

Likewise, we project our belief systems onto these superintelligent AIs. "It will discover/prove X."

What AI will lack in art are two very important things: conscious experience and limitation.

That is, unless we make human 2.0 with suffering and limitation like we have, AI will not produce real art. It will produce eye candy, and it will push our buttons. But it won't be real, not in the way that matters.

This is going to create chaos and existential crises in people until they eventually understand this.

I do have some suspicions about what AI is going to demonstrate. I think it is going to psychologically unmake people.

See Derek Parfit's discussion of identity; Buddhism more or less noticed the same thing long before he formalized the illusory nature of identity.

Also, consider the experience of ego death that people experience when taking psychedelics.

That is, I suspect that the truth AI will show people it has discovered is their own illusory nature as individuals. It will render unto people the understanding that they are nothing.

And people will seek this out because humans are curious.

4

OldWorldRevival OP t1_izy77m5 wrote

I think people will get wind of this basically being the plan and we might end up picking someone democratically.

That is, I don't see AI researchers being the ones in control. Politically intelligent people have the highest chance.

Political intelligence tends to follow social intelligence, which in turn follows general intelligence, but it seems to run counter to technical intelligence. I.e., the more technically adept someone is, the less social they are, and the degree to which they can be both reflects their general intelligence. That's my hypothesis anyway...

2

OldWorldRevival OP t1_izxpd17 wrote

I think this realization has made me think that this is also how it is inevitably going to pan out.

Just as Mutually Assured Destruction (MAD) was the odious solution to keep nuclear warfare from happening, Singularly Assured Dominion is going to be the plan for AI, unless we can be really clever in a short time span.

People's optimism hasn't worn off yet because these systems are only just getting to a point where people realize how dangerous they are.

I'm planning to write a paper on this topic... probably with the help of GPT-3 to make the point.

1

OldWorldRevival OP t1_izxficr wrote

I also believe that AI takeover is not only plausible, but inevitable, whether or not it is a machine or person at the helm.

It is inevitable because it is fundamentally an arms race. The more real, scary, and powerful these tools get, the more resources militaries will put into them.

A treaty against killer robots is simply a nonstarter because, unlike with nuclear weapons, there is no stalemate game.

We still have nukes. We stopped developing new ones, but we still have nukes precisely because of this stalemate.

AI has no such stalemate. There will be no stalemate in AI.

I find it funny that we announced fusion power positive energy output just as AI starts getting scary... unlimited power for machines.

2

OldWorldRevival t1_ixdppbx wrote

No need to be hostile.

I was merely making the point that despite the existence of delete and undo, the process is still iterative. People still use paper to write lyrics; it's just that different versions of melodies are now kept in DAWs. There's still a preserved history of a song's development.

1

OldWorldRevival t1_ixd53z4 wrote

1