KDamage t1_j5plf7r wrote
Reply to comment by ---nom--- in "By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it."- Eliezer Yudkowsky. by KiwiTechCorp
While I get your point, Artificial Intelligence doesn't mean perfect (or humanly equal) intelligence; it just means a relatively independent, artificially created form of intelligence, as in being able to decide its own direction, and what it produces or outputs, by itself. William Gibson, for example, likes to call today's internet a kind of artificial intelligence, as it has an inertia of its own. Which is very different from the classic sci-fi narrative.
On top of that, it is also the ability to learn by itself (Machine Learning should be the real name instead of AI, since it is based on the tools, or algorithms, it has been given)
Around that concept, there are indeed varying degrees of autonomy, with its (abstract) tipping point being the singularity. ChatGPT, Dall-E, etc. are, technically, organically growing, but for now their models are just in their infancy compared to what they'll become with time.
showturtle t1_j5qa1a9 wrote
I don’t know about the others you mentioned, but I wouldn’t necessarily call ChatGPT an “organically growing AI”. Its architecture and hyperparameters are pretty restricted, and it is entirely incapable of real-time “learning” or of incorporating new data into its decision-making paradigm as a language model. It actually has not been “trained” on any new data sets since 2021.
Regardless, I love ChatGPT and I think what it can accomplish as a language model is amazing. What I think truly restricts it from real “organic growth/learning” is that it is not “aware” or “present” - it has no perception of circumstances and therefore no ability to acquire and incorporate new data to fill the gaps in its incomplete understanding. It can’t handle ambiguity, period. Once it is capable of real-time incorporation of data from its environment, THEN organic growth and true learning are possible.
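The "no real-time learning" point above can be sketched in a few lines. This is a toy illustration (all names here are hypothetical, not OpenAI's actual code): a deployed model's weights are fixed at training time, and answering a prompt only reads them, so nothing a user says during a conversation is retained in the model itself.

```python
# Toy sketch of a deployed language model: parameters are set once at
# "training" time and are read-only at inference. (Hypothetical names;
# this is an illustration of the idea, not any real implementation.)

class FrozenLanguageModel:
    def __init__(self, weights):
        self._weights = dict(weights)  # fixed when the model is built

    def respond(self, prompt):
        # Inference reads the weights but never writes them, so nothing
        # the user says is retained once this call returns.
        return f"response based on {len(self._weights)} frozen parameters"

model = FrozenLanguageModel({"w1": 0.5, "w2": -0.3})
before = dict(model._weights)
model.respond("Teach me something new!")
assert model._weights == before  # no update: no real-time learning
```

Whatever the prompt, the parameters after the call are identical to the parameters before it; "learning" would require a separate training run that rewrites them.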
KDamage t1_j5qha1p wrote
I see what you mean, which is true for the dataset. Are we sure OpenAI has not incorporated some sort of auto-annotator based on user interaction? Something like Cleverbot, which grew its dataset from user-to-bot conversations? Modern chatbots all do this, which was feeding my assumption about ChatGPT. There is actually room for two models: one for the knowledge database, which has stopped training, and potentially another one for the interaction, which keeps growing.
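The two-part split being speculated about here could look something like this minimal sketch. To be clear, this is the commenter's hypothesis, not OpenAI's confirmed design: a frozen knowledge model is served as-is, while user exchanges are merely logged as candidate data for some future training run.

```python
# Hypothetical sketch of the speculated split: serving never modifies the
# frozen knowledge model, but each exchange can be logged for possible
# later annotation and retraining. Illustration only.

frozen_knowledge = {"cutoff": "2021"}   # stands in for the trained model
interaction_log = []                    # grows with every conversation

def chat(user_message):
    # Serving only reads the frozen side.
    answer = f"answer drawn from data up to {frozen_knowledge['cutoff']}"
    # Each exchange is recorded as candidate training data for a
    # *future* model version, not folded into the current one.
    interaction_log.append((user_message, answer))
    return answer

chat("Hello")
chat("Tell me more")
assert len(interaction_log) == 2               # the interaction side grows
assert frozen_knowledge == {"cutoff": "2021"}  # the knowledge side does not
```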
showturtle t1_j5qi0j1 wrote
It can use some of the information provided within the current conversation to contextualize its responses and make them more appropriate, but it does not store that information in its knowledge base or incorporate any new data from the discussion into its decision-making. It simply recognizes patterns in the conversation so that it can give more appropriate responses.
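That distinction can be sketched as follows (a simplified illustration, not the actual API): the conversation history is re-sent with every turn, so the model can reference earlier messages within one conversation, but nothing is ever written back into the fixed knowledge base.

```python
# Sketch of short-term context vs. stored knowledge. The history travels
# with each request; the knowledge base is read-only. Hypothetical names.

KNOWLEDGE_BASE = {"training_cutoff": "2021"}  # frozen, stands in for weights

def reply(history, user_message):
    history = history + [("user", user_message)]
    # The reply may reference anything in `history`, but nothing is
    # persisted back into KNOWLEDGE_BASE.
    answer = f"I see {len(history)} messages so far."
    return history + [("assistant", answer)], answer

history = []
history, a1 = reply(history, "My name is Ada.")
history, a2 = reply(history, "What did I just tell you?")
# The second turn "remembers" only because the history was re-sent.
assert len(history) == 4
assert KNOWLEDGE_BASE == {"training_cutoff": "2021"}  # untouched
```

Start a new conversation with an empty history and the apparent "memory" vanishes, while the knowledge base is unchanged either way.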
KDamage t1_j5qighl wrote
Ok, good to know, thanks!
MEMENARDO_DANK_VINCI t1_j5ppp61 wrote
Yeah, tell any human on the planet to do some free-form writing assignment and you’ll see a lot of the problems the commenter above you listed