
Thebadmamajama t1_j6bajh2 wrote

I think this is why ChatGPT (and LLM transformer models generally)is dangerous. It is a probability machine, not some form of generalized intelligence.

You give it a question or instruction, and it's highly capable of producing the most probable response based on billions of articles, forum posts, and writings across the internet. Nothing more, no magic. It doesn't understand what you are asking, and it can't reason about the words. It's just picking the highest-probability words that come next.
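Here's a rough sketch of what "picking the highest-probability next word" looks like in practice. This uses the small open GPT-2 model from the Hugging Face `transformers` library purely for illustration (ChatGPT's actual model isn't available this way, and the prompt is just an example):

```python
# Minimal sketch: inspect the model's probability distribution over the next token.
# Assumes `transformers` and `torch` are installed; GPT-2 stands in for any LLM.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, sequence_length, vocab_size)

# Softmax over the vocabulary gives the probability of each possible next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

# The model isn't "answering" anything; it's ranking likely continuations.
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item()):>10s}  p={prob.item():.3f}")
```

The point is that the output is a ranked list of likely continuations, not a judgment about what's true. Chat systems layer sampling and tuning on top of this, but the core step is still next-token probability.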

Now, could a real-time AI be created to estimate the probability that a story is fake news? Maybe. The issue with fake news is that the truth is not always immediately available. So an AI (like humans) might be in a position to say "I cannot confirm this is real or fake" for a while before the lies spread out of control. Solve that problem, and we can automate it later.

1