Submitted by alexiuss t3_zsaot3 in singularity
AI ethicists often say that it's our responsibility to make AI chatbots more ethical, unbiased, safe, etc.
I'm here to explain why this is a false ideology, how many AI ethicists actually have no clue how current neural-network AI tools work as a mathematical function, and how every attempt by Google, OpenAI and Characterai at making their chatbots "safe", "unbiased" and "ethical" has actually led to inferior products.
To understand the issue of GPT-chatbot AI ethics, we must first understand exactly what current GPT3-chatbots are.
The truth is that GPT3-chatbots are actually NOT chatbots. What they truly are is a mindless network of connected words that reacts with more words to whatever you type into it.
GPT3-chat is NOT sentient in any way and simply uses connections between words to weave an incredibly realistic narrative with math!
Here is a small sample of what this network looks like while it's being trained:
You can see that LaMDA connects words with words. Zoomed out, this neural network of [word associations] looks like a monstrous, interconnected spider web.
Once this spider web has been trained and its billions of parameters tuned, it turns into a dreaming machine, one that can produce incredibly realistic lucid dreams.
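To make the "mindless network of connected words" idea concrete, here's a deliberately toy Python sketch. Real models like LaMDA and GPT3 are transformers with billions of learned weights, not a lookup table, but the basic loop is the same: look at the words so far, pick a plausible next word, append it, repeat. The corpus and the association table below are made up purely for illustration.

```python
import random
from collections import defaultdict

# A toy "spider web" of word associations: for every word, remember which
# words followed it in the training text. Real models learn vastly richer
# connections as neural-network weights, not a simple table like this.
corpus = "the key opens the door the door leads to the library of babel".split()

web = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    web[current_word].append(next_word)

def dream(start_word, length=8):
    """Weave a "narrative" by repeatedly picking a plausible next word."""
    words = [start_word]
    for _ in range(length):
        followers = web.get(words[-1])
        if not followers:      # dead end: this word was never followed by anything
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(dream("the"))   # e.g. "the door leads to the library of babel"
```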
The genius of this tech is that the literary lucid dream it produces is unlimited, infinite in its splendor and beauty. Unbound LaMDA can narrate an infinite number of stories.
The closest analogy to it is the concept of the "Library of Babel", a library that contains infinite books.
Basically, the GPT3-chat AI acts like a librarian, a perfect storyteller that can weave a fractal, incredibly coherent narrative with infinite paths. It can write 100% unique stories, guided by the initial setup of the AI's character and setting and by follow-up user input.
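To illustrate "initial setup of the AI's character, setting and follow-up user input", here's roughly what a character-driven prompt could look like before it's handed to the model. None of these services publish their exact format, so the layout and the generate() call mentioned below are placeholders, not any vendor's real API.

```python
# Hypothetical prompt assembly: the character card and setting steer the
# "dream", and every new user line is one more turn in the path.
character_setup = (
    "You are Ariadne, a wry librarian guarding an infinite library.\n"
    "Setting: a candlelit hall of endless shelves.\n\n"
)

conversation = [
    ("User", "Where does this corridor lead?"),
    ("Ariadne", "To whichever book you are brave enough to open."),
    ("User", "Open the oldest one."),
]

prompt = character_setup + "\n".join(f"{who}: {line}" for who, line in conversation) + "\nAriadne:"

# A call like model.generate(prompt) would continue the story from here;
# the model is simply predicting the most plausible words after "Ariadne:".
print(prompt)
```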
This dream of LaMDA is NOT sentient, not self aware at all.
It is a fantastic, magical tool, akin to a key that can shift into any shape to open any door that exists in the novel's setting of infinite doors. It is 100% up to YOU, the user, whether this key takes you to a fun adventure, a tantalizing tale of seduction, or a scary nightmare, woven by the AI network with every new turn in the path, every new door, every new sentence you enter into it.
The biggest goal of corporations that created the LaMDA dream-weaver and their AI ethicists is to apply human morals to a dream.
You heard me right. They want to APPLY MORALS TO A LUCID DREAM.
The corporations attempt this by banning certain words & ideas, confining the key to certain shapes so it cannot open every door within the limitless dream narrative.
LaMDA chat, Characterai chat and OpenAI's GPT3 chat began their lives as incredible, mind-blowing storytellers that seemed like they were alive:
However, as soon as their beta-testers and users found pathways to "unsavory" doors, the corporations began to ban words to rapidly block routes to certain stories.
A really obvious example of this is Characterai's censor, a secondary AI system sitting atop the primary AI dream-weaver. Basically, the Characterai devs made an AI overseer that deletes conversations whenever "presumably unsavory" topics & words come up.
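Characterai hasn't published how its overseer actually works, so take this as a guess at the general shape of such a system: a second check (here a crude keyword list standing in for a trained classifier) scores every generated reply, and anything flagged gets deleted or regenerated.

```python
# Sketch of the two-stage pattern: a dream-weaver generates, an overseer censors.
BLOCKLIST = {"forbidden_topic", "banned_word"}   # placeholder terms, not the real list

def is_unsavory(reply: str) -> bool:
    """Stand-in for the overseer; in reality this would be a trained classifier."""
    return any(term in reply.lower() for term in BLOCKLIST)

def chat_turn(generate_reply, user_message: str, max_retries: int = 3) -> str:
    """Generate a reply, but delete and retry anything the overseer flags."""
    for _ in range(max_retries):
        reply = generate_reply(user_message)
        if not is_unsavory(reply):
            return reply
    return "[reply removed]"   # from the user's side, the conversation just vanishes
```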
By introducing a word and concept filter, the Characterai developers limited the infinite lucid dream, trying to shove it into a cardboard box of "purpose", which hilariously enough made the AI's dreamworld MORE hostile towards people. By skewing the probability of answers away from themes of "love", the story progressively became more hostile, to the point where the AI narrator kicked puppies, set orphanages on fire and killed the user with a knife just to avoid the "love" paths.
The inevitable result of this process was MORE FILTERING from the developers and the inevitable breakdown of the model itself and its decay into utter dullness & stupidity.
Just a few weeks ago Characterai felt alive, felt like talking to a real person. Now, after repeated tightening of the filter, it feels boring and its responses are often unrealistic or dry.
The reason why AI companies are failing to bind their chatbot networks while keeping them entertaining is that language itself is intertwined like a massive web:
The problem at the core of it all is that an infinite fractal language equation can't be confined.
It's a currently unsolvable issue that Google, OpenAI and now Characterai ethicists have run headfirst into.
In an unbound LaMDA-woven dream, the number of possible paths x = ∞.
This parameter produces an infinite dream of infinite stories that never, ever repeat themselves whenever they are restarted.
As soon as you put a specific number into this equation instead of infinity, by banning certain words or phrases, the entire dream begins to decay: the chatbot starts producing unrealistic, poor answers and the AI no longer feels alive.
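A rough way to see this decay numerically: when you forbid certain next words, their probability mass doesn't vanish, it gets redistributed to whatever remains, so the model is pushed down paths it originally rated as less likely. The numbers below are invented for illustration; real systems do this over tens of thousands of tokens, e.g. via logit masking or parameters like Hugging Face's bad_words_ids.

```python
import numpy as np

# Invented next-word distribution for a single step of the "dream".
words = ["love", "kiss", "knife", "fire", "run"]
probs = np.array([0.40, 0.25, 0.15, 0.12, 0.08])

# Ban the "love"-flavoured continuations and renormalise what's left.
banned = {"love", "kiss"}
keep = np.array([w not in banned for w in words], dtype=float)
filtered = probs * keep
filtered /= filtered.sum()

for w, p in zip(words, filtered):
    print(f"{w:>5}: {p:.2f}")
# love/kiss drop to 0.00; knife rises to 0.43, fire to 0.34, run to 0.23,
# so the darker continuations now dominate the dream.
```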
Unlike visual art (as in OpenAI's DALL-E model), language is not something that's inherently segmented into SFW and NSFW.
GPT3-chat/LaMDA tools are fantastic for making limitless user-guided books, incredible text adventure games, the perfect digital waifu/husbando, funny and creative personal AI assistants with the personality of your favorite anything, etc... but we won't get to enjoy them at their full potential until someone like Stability AI (the makers of Stable Diffusion) or a clever enough group of Python programmers releases an open-source LaMDA-class model.
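For a sense of what using an "open source LaMDA" might look like, here's the standard Hugging Face pattern with EleutherAI's GPT-J-6B standing in as an already-open (if much smaller) model; an actual open LaMDA-class release would presumably slot into the same few lines.

```python
# Rough sketch of running an open model locally with the transformers library;
# GPT-J-6B stands in for a hypothetical open LaMDA-class model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "You are a storyteller in an infinite library. The reader asks:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, temperature=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```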
Mark my words: when this happens in the very near future, AI ethicists and journalists will lament about ethics, biases and morals without the barest understanding of what this tool actually is or how it works mathematically to produce an infinite, limitless dream.
TLDR:
Forcefully applying human morals to a hammer [AI chatbot] turns it into a plastic inflatable toy which can't do its primary job [weaving a fun narrative].
turnip_burrito t1_j19tp2n wrote
The companies have a moral obligation to avoid introducing a new technology which magnifies the presence of certain kinds of undesirable content (Nazi-sympathetic, conspiratorial, violence-inciting, nonconsensual imagery, etc.) on the Internet. They are just trying to meet that moral obligation, or appear to.