LoquaciousAntipodean OP t1_j57mxo0 wrote
Reply to comment by kimishere2 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
What exactly is 'problematic' about anthropomorphising AI? That is literally what it is designed to do, to itself, all the time. I think a bit more anthropomorphising is actually the solution, and not the problem, to ethical alignment. That's basically what I'm trying to say here.
Reject Cartesian solipsism, embrace Ubuntu collectivism, basically. I'm astonished so many sweaty little nerds in this community are so offended by the prospect. I guess individualist libertarians are usually pretty delicate little snowflakes, so I shouldn't be surprised 😅
kimishere2 t1_j57ofpo wrote
Interesting take, friend
AsheyDS t1_j57tzsx wrote
>That is literally what it is designed to do
I would like to know more about this design if you're willing to elaborate.
LoquaciousAntipodean OP t1_j58k3kc wrote
I don't know enough about the actual mechanisms of synthetic neural networks to venture that kind of qualified opinion; I'm a philosophy crank, not a programmer. But I do know that the whole point of generative AI is to take vast libraries of human culture and distill them down into models that can generate new, similar artwork by algorithmically reversing Gaussian noise, the way diffusion models do.
That seems to me like a machine designed to anthropomorphise itself; is there something that I have missed?
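(For anyone unfamiliar with the mechanism being gestured at: "reversing Gaussian noise" is roughly what diffusion-model image generators do; they learn to undo noise that was progressively added to training images. A toy Python sketch of one forward/reverse step, with hypothetical names and the true noise standing in for what a trained model would have to predict:)

```python
# Toy illustration of diffusion-style "reversing Gaussian noise".
# All names here are hypothetical; a real model would *predict* the noise.
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x, alpha_bar):
    """Forward step: blend a clean sample with Gaussian noise."""
    noise = rng.standard_normal(x.shape)
    return np.sqrt(alpha_bar) * x + np.sqrt(1 - alpha_bar) * noise, noise

def denoise(x_noisy, predicted_noise, alpha_bar):
    """Reverse step: subtract the predicted noise to recover an estimate
    of the original sample -- the 'reversing' part of the process."""
    return (x_noisy - np.sqrt(1 - alpha_bar) * predicted_noise) / np.sqrt(alpha_bar)

x = rng.standard_normal(8)                      # stand-in for an image patch
x_noisy, true_noise = add_noise(x, alpha_bar=0.5)
x_recovered = denoise(x_noisy, true_noise, alpha_bar=0.5)
print(np.allclose(x, x_recovered))              # True: perfect noise prediction recovers the sample
```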
turnip_burrito t1_j5841mx wrote
You're acting like an asshole, and that makes people less likely to listen to you. If your goal is to convince people, then your tone is actively working against that.
LoquaciousAntipodean OP t1_j58ngs3 wrote
I'm acting like an arsehole? Really? Gosh, I was doing my best not to, sorry. 😰 I just don't react well to libertarian fools trying to gaslight the hell out of me.