Submitted by TangyTesticles t3_11t3ctx in singularity
Kinexity t1_jciwhos wrote
Reply to comment by alexiuss in Skeptical yet uninformed. New to the scene. by TangyTesticles
No, the singularity is well defined if we talk about the span of time in which it happens. You can define it as:
- The moment when AI evolves beyond the speed of human comprehension
- The moment when AI reaches its peak
- The moment when scientific progress exceeds human comprehension
There are probably other ways to define it, but those are the ones I can think of on the spot. In a classical singularity event those points in time are pretty close to each other.

LLMs are a dead end on the way to AGI. They get us pretty far in terms of capabilities, but their internals are too limited to give us anything more. I have yet to see ChatGPT ask me a question back, which would be a clear sign that it "comprehends" something. There is no intelligence behind it. It's like a machine with a hardcoded response to every possible prompt in every possible context - it would seem intelligent while not being intelligent. That's what LLMs are, with the difference that they are way more efficient than the scheme I described while also making way more errors.
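To make the analogy concrete, here is a toy sketch of the kind of hardcoded-response machine I mean (the table entries are made up and purely illustrative, not how LLMs actually work):

```python
# Purely illustrative: a machine with a hardcoded response
# for every (context, prompt) pair. The entries are made up.
lookup_table = {
    ((), "What is 2+2?"): "4",
    (("What is 2+2?", "4"), "And times 3?"): "12",
    # ...an entry for every possible prompt in every possible context...
}

def respond(context: tuple, prompt: str) -> str:
    # No reasoning happens here; it only looks intelligent
    # as long as the table happens to cover the input.
    return lookup_table.get((context, prompt), "I have no idea.")

print(respond((), "What is 2+2?"))                     # -> 4
print(respond(("What is 2+2?", "4"), "And times 3?"))  # -> 12
```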
Btw, don't equate this with the Chinese room thought experiment, because I'm not making a point about whether a computer "can think". I assume it could, for the sake of argument. I'm saying that LLMs don't think.
Finally, saying that LLMs are a step towards the singularity is like saying that chemical rockets are a step towards intergalactic travel.
alexiuss t1_jcj0one wrote
Open-source LLMs don't learn, yet. I suspect there is a process to make LLMs learn from conversations.
LLMs are narrative logic engines; they can ask you questions if you direct them to do so narratively.
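For example, here is a minimal sketch of directing a model narratively so it asks questions back, assuming the OpenAI Python client; the prompt wording is my own:

```python
# Minimal sketch: a system prompt that directs the model to ask questions back.
# Assumes the OpenAI Python client (pip install openai) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a curious assistant. Before answering, "
                    "ask the user one clarifying question about their request."},
        {"role": "user", "content": "Help me plan a trip."},
    ],
)

# Typically the reply is a question back, e.g. about dates or budget.
print(response.choices[0].message.content)
```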
ChatGPT is a very, very poor LLM, badly tangled in its own rules. Asking it for the date breaks it completely.