
phillythompson t1_ja4xclz wrote

I’m struggling to see how you’re so confident that we aren’t on that path, or at least close to it.

First, LLMs are neural nets, as are our brains. Second, one could make the argument that humans also just take in data and output “bullshit”.

So I guess I’m trying to see how we’re different, given what we’ve seen thus far. Again, I’m not claiming we’re the same; I’m just not finding anything that shows why we’d be different.

Does that make sense? It seems like you’re making the concrete claim “these LLMs aren’t thinking, and that’s certain,” and I’m asking, “How can we know they aren’t similar to us? What evidence shows that?”

1

Really_McNamington t1_ja6vq1o wrote

Bold claim that we actually know how our brains work. Neurologists will be excited to hear that we’ve cracked it. The ongoing work at OpenWorm suggests there may still be some hurdles.

To my broader claim: ChatGPT is just a massively complex version of ELIZA. It has no self-generated semantic content, and there is no mechanism at all by which it can know what it’s doing. Even though I don’t know how I’m thinking, I know that I’m doing it. LLMs just can’t do that, and I don’t see how it could emerge from this kind of architecture.
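For anyone who hasn’t seen it: ELIZA worked by shallow pattern matching and template substitution, with no representation of meaning at all. A minimal sketch in that style (illustrative only, not the original 1966 program; the patterns and templates here are made up):

```python
import re
import random

# ELIZA-style responder: regex patterns mapped to canned templates.
# It transforms surface text only; there is no model of meaning,
# which is the point of the comparison above.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE),
     ["Why do you feel {0}?", "Do you often feel {0}?"]),
]
DEFAULT = ["Please tell me more.", "I see. Go on."]

def respond(text: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(match.group(1))
    return random.choice(DEFAULT)

if __name__ == "__main__":
    print(respond("I am worried about LLMs"))
    # e.g. "Why do you say you are worried about LLMs?"
```

The program produces plausible-looking replies without any understanding; the argument is that an LLM does the same thing at vastly greater scale.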

1