
strongaifuturist OP t1_j9uo718 wrote

Well, to your first point: if it’s unclear whether these systems lack sentience (and I’m not saying your position is unreasonable), a big part of that lack of clarity comes from the difficulty of knowing exactly what sentience is.

1

jdmcnair t1_j9usu83 wrote

True. The meaning of the word "sentience" is highly subjective, so it's not a very useful metric. I think it's more useful to ask whether LLMs (or other varieties of AI models) are having a subjective experience during the processing of responses, even if only intermittently. They are certainly shaping up to model the appearance of subjective experience in a pretty convincing way. Whether that means they are actually having that experience is unknown, but I think simply answering "no, they are not" would be premature.

1

strongaifuturist OP t1_j9v08bt wrote

You can’t even be sure I’m having subjective experiences, and I’m a carbon-based life form! It’s unlikely we’ll make much progress answering the question for LLMs; it quickly becomes philosophical. Anyway, even if a model were conscious, it’s not clear what you would do with that knowledge. I’m conscious most of the time, but I don’t mind going to sleep or being put under anesthesia. So who knows what a conscious chatbot would want (if anything).

1

jdmcnair t1_j9v5fet wrote

Of course. Yeah, we have no way of knowing anything outside of our own individual existence, when it comes right down to it.

But even though I don't have ironclad certainty that you actually exist and are having an experience like mine from your own perspective, the decent thing to do in the absence of certainty is to treat you as though you are. And that distinction is not merely philosophical: to behave otherwise would make you a psychopath. I'm just saying that until we know more, it'd probably be wise to tread lightly and behave as though these systems are capable of experience in a way similar to ours.

1