Submitted by deadlyklobber t3_10klfwj in singularity
I get that we should be cautious about hyping up ChatGPT and LLMs too much. They obviously aren't AGI or anything even approaching it. However, it seems like the pendulum has swung in the opposite direction. Any time someone tries to talk about how impressive this technology is, they're met with a chorus of "it's just a glorified autocomplete/text predictor" or "it's not 100% accurate so it's useless". First of all, these models aren't simple sequential next-word predictors in the way a phone keyboard is: each token is predicted from the entire preceding context via attention; and even if "text predictor" were the whole story, wouldn't it still be very impressive for a text prediction algorithm to do everything ChatGPT is capable of? And as for factual accuracy, people seem to be setting an almost impossibly high standard: if it's not 100% accurate in all contexts, it's useless. It just seems like we're seeing another example of the AI effect.
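To make that concrete, here's a minimal sketch (assuming PyTorch, the Hugging Face `transformers` library, and the public GPT-2 checkpoint) of what "next-token prediction" actually involves: at every step the model re-reads the whole prompt plus everything it has generated so far, not just the last word.

```python
# Minimal sketch of autoregressive decoding with GPT-2.
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The capital of France is"
input_ids = tokenizer.encode(text, return_tensors="pt")  # shape: [1, seq_len]

with torch.no_grad():
    for _ in range(5):  # greedily generate 5 more tokens
        logits = model(input_ids).logits        # scores for every position in the context
        next_id = logits[0, -1].argmax()        # most likely next token, conditioned on
                                                # the ENTIRE context via self-attention
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

So yes, generation happens one token at a time, but each of those predictions is computed over the full context with learned representations, which is a far cry from "glorified autocomplete".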
Practical-Mix-4332 t1_j5rgat9 wrote
Good. Fewer people to compete against my chatbot business