Viewing a single comment thread. View all comments

beezlebub33 t1_j9t4s2s wrote

Flaws. Note that they are partly interrelated:

  • Episodic memory. The ability to remember a person / conversation history.
  • Not a life-long learner. If it makes a mistake and someone corrects it, it will make the same or a similar mistake next time.
  • Planning / multi-step processes / scratch memory. In both math and problem solving, it gets confused and makes simple mistakes because it can't break a problem down into simple pieces and then reconstruct the solution (see the well-known arithmetic issues).
  • Neuro-symbolic reasoning. Humans don't do arithmetic in their heads, so why would an AI? Or solve a matrix problem in its head. Understand the problem, pass it off to a calculator, get the answer back, convert it back into text. (See what Meta did with Cicero, the AI that plays Diplomacy, for a highly specific example of what to do.)
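The last bullet's "pass it off to a calculator" hand-off can be sketched in a few lines. Everything here is hypothetical: the regex routing heuristic, the `calculate` / `answer` names, and the idea of splicing the exact result back into the reply are illustrative, not how any particular model does it.

```python
import ast
import operator
import re

# Map AST operator nodes to exact arithmetic, evaluated outside the "model".
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculate(expr: str) -> float:
    """Evaluate a simple arithmetic expression exactly (the 'calculator')."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

def answer(question: str) -> str:
    """Toy router: offload anything that looks like arithmetic
    instead of letting the language model do it 'in its head'."""
    m = re.search(r"\d[\d\.\s\+\-\*/\(\)]*", question)
    if m:
        result = calculate(m.group().strip())
        return f"The answer is {result}."
    return "No arithmetic found; answer with the language model."
```

The point is the division of labor: the language side only has to recognize and extract the sub-problem and phrase the result, while the symbolic side guarantees the arithmetic is right.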
6

Nukemouse t1_j9thvgk wrote

Pardon me, but isn't the life-long-learning one intentional, since they limit its ability to learn? My understanding was that after the initial training it doesn't simply use all of its conversations as training data, to prevent a new Tay.

2

beezlebub33 t1_j9ue4ov wrote

Those are slightly different things. That's more about episodic memory.

For life-long learning: no system gets it right all the time; if there is a mistake it makes, like misclassifying a penguin as a fish (it doesn't actually make this one), there is no way for it to get fixed. Similarly, countries, organizations, and the news change constantly, so it quickly becomes out of date.

It can't do incremental training. There are ways around this: some AI/ML systems do incremental training (there was a whole DARPA program about it). Alternatively, the AI/ML system (which stays stable) can reason over a dynamic data set / database, or go fetch new information; this is the Bing Chat approach. It works better, but anything embedded in the model's own logic is stuck there until re-training.
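The "stable model over a dynamic store" idea can be sketched as below. `FactStore` and the lookup flow are illustrative inventions under that assumption, not Bing Chat's actual architecture:

```python
# Sketch: the model's weights stay frozen, but facts live in an updatable
# store, so a correction lands immediately with no retraining pass.

class FactStore:
    def __init__(self):
        self._facts: dict = {}

    def update(self, key: str, value: str) -> None:
        # A human correction (or a fresh web fetch) goes here instantly.
        self._facts[key] = value

    def lookup(self, key: str):
        return self._facts.get(key)

def answer(question_key: str, store: FactStore) -> str:
    fact = store.lookup(question_key)
    if fact is not None:
        return f"According to the current data: {fact}"
    # Fall back to whatever is baked into the frozen weights;
    # that part stays stuck until the next retraining run.
    return "Answering from (possibly stale) trained knowledge."
```

So the penguin example above becomes fixable: `store.update("penguin", "a bird")` corrects the answer on the spot, whereas anything answered from the frozen weights keeps the old mistake until retraining.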

1