Submitted by Andriyo t3_11wc24a in singularity
Hey fellow Redditors - first post here! :)
Isn't it mind-blowing how easy it turned out to be to mimic human intelligence and creativity with models like LLMs? I didn't think much of them myself - just some statistical fancy for Twitter sentiment analysis or other limited use cases. Really, if you think about it, an LLM is just a lossy compression algorithm for text, trained on a lot of data, right? Yet it works surprisingly well. It identifies statistical patterns to produce responses that seem remarkably human-like. When faced with an unusual prompt, LLMs even show creativity by sampling lower-probability tokens (that still make sense in context). Or they can hallucinate if they pick up on some pattern in the prompt that isn't obvious even to us.
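That "lower probability tokens" bit is usually controlled by a sampling temperature. Here's a minimal sketch of how temperature sampling works (toy logits, not from any real model - just to show the mechanism):

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Sample a token index from raw logits using temperature scaling."""
    # Dividing logits by temperature: higher T flattens the distribution,
    # so lower-probability ("creative") tokens get picked more often;
    # lower T sharpens it toward the single most likely token.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]         # softmax over scaled logits
    # Draw one index according to those probabilities
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Three hypothetical next-token scores
logits = [2.0, 1.0, 0.1]
token = sample_token(logits, temperature=1.2)
```

At a very low temperature this degenerates into always picking the top token; cranking it up is what makes the output feel surprising.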
The fact that LLMs can generate human-like text suggests we're getting closer to cracking the nature of human art too. It's like how a person "trains" on books, art, and personal experiences, and then a real-life "prompt" (say, unrequited love) inspires the creation of art. We could even define beauty as just the patterns our brains pick up when we look at a work of art: the more patterns we recognize (relate to), the more we like it.
I expect there will be a lot of psychology/sociology papers that will use LLMs to essentially model human behaviors for experiments (kind of like how we do modeling in physics for things like star explosions). So it will be like using AIs backward - to study humans and societies.
Lately, I've been feeling like I'm an LLM, really, with some added evolutionary drivers for survival. I mean, what if we could record our entire lives - conversations, sights, everything - and use it to train an AI model? Wouldn't it be essentially the same as us? No need for complex brain scans to achieve digital immortality, or at least some form of it (like those brains in jars in Futurama).
And I managed to run an LLM on my home PC tonight - scary and exciting!
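For anyone curious, here's one common way to run a small model locally using the Hugging Face transformers library - this is just an illustrative sketch with an example model name (distilgpt2), not necessarily what I ran:

```python
# Minimal local text generation with Hugging Face transformers.
# Downloads a small GPT-2 variant on first run; CPU is fine for a model this size.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
out = generator("Isn't it mind-blowing how", max_new_tokens=20)
print(out[0]["generated_text"])
```

A small model like this fits comfortably in a few GB of RAM, which is why home-PC inference is suddenly so accessible.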
Edit: I recognize that "easy" is the wrong word here. I didn't mean to diminish all the hard work by individuals, academia, the industrial R&D complex, and governments that went into this.
I should have said that LLMs specifically, and the bottom-up approach to AI in general, are relatively simple compared to a top-down approach, where we would have to write a bazillion IF statements to get the same results. In that sense, it's surprising to me that LLMs produce remarkably good conversations if we just feed them enough data and fine-tune them a bit (again, compared to writing it all by hand).