turnip_burrito t1_j19tp2n wrote
The companies have a moral obligation to avoid introducing a new technology that magnifies the presence of certain kinds of undesirable content (Nazi-sympathetic, conspiratorial, violence-inciting, nonconsensual imagery, etc.) on the Internet. They are just trying to meet that moral obligation, or appear to.
alexiuss OP t1_j1bv8lg wrote
I understand why they're doing it.
This article is just an explanation of WHY Google's LaMDA, OpenAI's GPT-3 chat, and Character.AI's chatbots keep breaking down. The way they're filtering things is simply not a sustainable strategy.