williamfwm
williamfwm t1_j35h7s5 wrote
Reply to comment by eve_of_distraction in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
This is the Other Minds Problem. You don't have to be a solipsist to recognize the Problem of Other Minds (though that's one position you could take).
But consider this: it's common to suppose that consciousness is "caused by some physical, biological process", yes? Well, take a good look at the way nature operates... we constantly find that, for any feature you can imagine, biology will sometimes fail to give it to some organisms. People are born without the expected physical features all the time, and if consciousness is caused by some physical bit of biology, everybody consistently receiving it is the LEAST likely outcome. The more likely consequence of that assumption, the more reasonable expectation, is that some people have consciousness and some people don't, as an accident of birth.
Furthermore, if people without consciousness are nearly identical to the rest of us except for holding a different philosophy, then they probably have the same fitness (or close to it), with little selection pressure working against them. A large segment of the population could be p-zombies; they could even be the majority.
williamfwm t1_j35an7m wrote
Reply to comment by ArgentStonecutter in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
Sorry to hear that you might be a zombie, but at least for me, I definitely have a kind of subjective experience that transcends all possible external description; even a total accounting of the state of my brain, all 100T synapses at a particular nanosecond, wouldn't allow you to penetrate into my experiences. Consciousness - real consciousness, Hard Problem consciousness - is a first-person phenomenon, and words are a third-person tool. It's just a logical impossibility (nonsensically incoherent) for this third-person thing to pierce into the first-person, so a satisfactory third-person description can never be given. But suffice it to say: seeing actually looks like something (it's not merely informational, not merely knowledge I get access to when I see), hearing actually sounds like something, and pain actually hurts. If you don't experience it yourself, then you'll just never know what I mean by those seemingly hopelessly ineloquent statements.
(And lest you think I'm some kind of wishy-washy woo-woo lover... nope! I'm a diehard atheist with a mile-long list of "supernatural" things I don't believe in. But consciousness is... just there. I can't shake it even if I want to... except, perhaps, by dying. But maybe not even then.)
It's actually computationalism that is "nonsense". To suggest that computation can give rise to consciousness is to suggest that you can "hop off the number line". Computation means "thing you can implement on a Turing machine", and a Turing machine is an imaginary infinite tape, which can be thought of as one big number (always an integer, if you make that interpretive choice), so any time you do a computation, you are simply transitioning from one (usually very, very large) number to another. Proposing that computation gives rise to consciousness is proposing that certain integers are privileged and cause internal experience disjoint from the Turing machine. Certain integers are conscious. And if there are infinitely many distinct conscious experiences, then there are infinitely many conscious integers. But when are the integers conscious, and for how long? Integers are just ideas... are they conscious all the time, within the abstract integer realm? Or do they have a kind of Platonic "real" existence, where they are conscious? If I utter a long integer, does consciousness happen? Does it happen when I finish uttering the whole integer, or is the conscious experience spread ever-so-slowly over the entire utterance?
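To make the tape-as-integer point concrete, here's a minimal sketch (my own illustration, not the commenter's; the toy machine and the packing scheme are arbitrary choices made for brevity): a tiny Turing-style machine whose entire configuration - tape, head, state - is encoded as a single integer, so that a run of the machine is literally just a sequence of integers.

```python
# Toy illustration: a computation is a sequence of integer-to-integer
# transitions. We pack the full configuration of a trivial machine
# (tape bits + head position + state) into one integer per step.
# The machine here is a made-up "bit flipper": flip the bit under the
# head, move right, halt when the head runs off the tape.

def encode(tape, head, state):
    """Pack (tape bits, head position, state) into one integer
    using simple positional packing (an arbitrary Goedel-style scheme)."""
    tape_num = int("".join(map(str, tape)), 2) if tape else 0
    return ((tape_num * 1000) + head) * 10 + state

def step(tape, head, state):
    """One step of the toy machine."""
    tape = list(tape)
    tape[head] ^= 1            # flip the bit under the head
    head += 1                  # move right
    state = 1 if head >= len(tape) else 0  # state 1 = halted
    return tape, head, state

tape, head, state = [1, 0, 1], 0, 0
trace = [encode(tape, head, state)]
while state == 0:
    tape, head, state = step(tape, head, state)
    trace.append(encode(tape, head, state))

print(trace)  # [50000, 10010, 30020, 20031] - the whole run, as integers
```

The computationalist claim, on this framing, is that some such sequences of integers are the ones that "have" experiences, which is the point being pressed above.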
And most importantly: how does the Universe know where to put the consciousness? When I utter integers, I'm using a whole system that's meaningful only relative to others, who understand certain sounds as certain symbols, etc. Language is a mostly-arbitrary construction of mutual agreement. How does the universe objectively know that those are integers, and that they're computation-integers, and that consciousness should go along with them?
But maybe you think all the above is too abstract and you want to stick to talking about transistors (I mean, you're wrong to think that, since computation as understood by the Church-Turing thesis is abstract and transistors are in no way privileged, but fine, I'll humor you)
Again, how does the Universe know where to put the consciousness? How many silicon atoms does the Universe recognize as a proper transistor? And you may be aware of "universal gates" - NAND and NOR - either of which, alone, is all you need to build a machine that can do all conceivable computations. How does the Universe know when I've built a gate? I can build it from so many different chunks of atoms of different sizes - Moore's Law, ongoing miniaturization, etc. - and the thing that makes it a gate is its function within the circuit, its relation to what I've defined as the inputs and the outputs. How does the Universe know it should honor my intentions? And what if I build gates out of other materials - water (fluidic computing is a real field), dominoes, Legos, etc.? How does the Universe peer into the molecules of plastic or porcelain and know that it's looking at a gate constructed out of such material, and place consciousness inside?
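For concreteness on the universal-gate claim, here's a minimal sketch (my own, assuming nothing beyond standard Boolean logic): NAND alone suffices to build NOT, AND, OR, and XOR. Note that nothing in the code is intrinsically a "gate" - the labels exist only in our interpretation of how the NANDs are wired together, which is exactly the relational worry raised above.

```python
# NAND is functionally complete: every Boolean function can be wired
# up from NAND alone. Each derived "gate" below is just NANDs in a
# particular arrangement; calling it AND or XOR is our interpretation.

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)                      # NAND with itself inverts

def and_(a, b):
    return not_(nand(a, b))                # invert NAND to get AND

def or_(a, b):
    return nand(not_(a), not_(b))          # De Morgan: OR from NANDs

def xor_(a, b):                            # the standard 4-NAND XOR
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_(a, b), or_(a, b), xor_(a, b))
```

The same truth tables could be realized by transistors, water valves, or dominoes; only the input/output relations we stipulate make any of them "the same gate".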
(as an aside: How does it know to put consciousness in neurons, for that matter? For that reason, I'm sympathetic to Lucas-Penrose, and neurons may indeed be non-privileged too, but that's derailing too much....)
If you're an eliminativist, this all means nothing. It's a non-challenge. Consciousness is just a high-level label for a physical process, a word like "concert" or "government".
But I'm sorry to inform you that consciousness is a real thing all its own, and if you don't believe in it, you may not be in the club.
And, it being a real thing, computationalism is an incoherent non-answer that doesn't explain anything.
williamfwm t1_j350zmd wrote
Reply to comment by eve_of_distraction in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
That's because Dan Dennett is a p-zombie. He's never experienced consciousness, so he can't fathom what it is. Same goes for a number of other eliminative materialists such as the Churchlands, Graziano, Blackmore, etc.
Interestingly, Richard Dawkins, the mega-reductionist uber-atheist, is not one, and neither is Kurzweil, who believes in computationalism (functionalism); you'd be hard-pressed to find it in his books, but he slipped and all but admitted that consciousness is something that transcends reductionism in a reply he wrote to Jaron Lanier's "One Half a Manifesto" in the early 2000s.
It would help the discussion if we could steal the terminology back, because it's been zombified by Dennett (continuing what his mentor Ryle started) and his ilk. I think we ought to distinguish "Dennettian consciousness" (where 'consciousness' is just a convenient, abstract label for the bag of tricks the brain can perform) from "Chalmersian consciousness" (the real kind: the reduction-transcending, ineffable kind, for people who believe in the Hard Problem).
williamfwm t1_j34zoq1 wrote
Of course GPT is an eliminative materialist
williamfwm t1_j3xqqb1 wrote
Reply to comment by eve_of_distraction in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
> As a side note, just out of interest, do you really believe there are humans without subjective inner experience?
I do. I also believe the other minds problem is unsolvable in principle, so I can't ever be certain, but I've come to lean strongly on that side.
I haven't always thought so. It was that essay by Jaron Lanier I linked above that started me down that path. I read it a few years ago and started to warm to the idea. Lanier once described that 1995 essay as having been written "tongue firmly in cheek", saying he believes in consciousness "even for people who say it doesn't exist", and he also has a sort of moral belief that it's pragmatically better if we fall on the side of assuming humans are special. But he has also teased over the years[1] since that deniers may just not have it, so it's hard to tell exactly where he falls. I may be the only one walking around today taking the idea seriously...
For me, it feels like the culmination of things that have been percolating in the back of my mind since my teen years. Taking that position brings clarity.
The main point for me, as referenced, is that it clarifies the "talking past" issue. People do mental gymnastics to rationalize that both sides in consciousness debates are talking about the same thing, even though the sides appear to be talking at cross-purposes. They always start these discussions by saying "We all know what it is", "It couldn't be more familiar", etc. But do we all know? What if some don't, and they lead us into these doomed arguments? Sure, one can take up any goofy position for the sake of argument and defend it as sport, but people like Dennett are so damn consistent over such a long time. He himself is saying "I don't have it" [and nobody does], so maybe we should just believe him? Maybe it is true for him?
I also can't wrap my head around why it doesn't bother some people! I've been plagued by the consciousness problem since my teen years. And before that, I recall first having the epiphany that there was a problem of some sort in middle school; I remember catching up with a friend in the halls during a break between classes and telling him how I'd come to wonder why pain actually hurts (and him just giving me a what-are-you-talking-about look). I'm sure it was horribly ineloquently phrased, being just a kid, but the gist was... why should there be the "actual hurt" part and not just... information, awareness, data to act on?
Some people just don't think there's more, and don't seem to land on the same page about what the "more" even is, even after long, drawn-out discussions trying to drill down to it. It would make a lot of sense if they can't get it because it isn't there for them.
I also realized that we take the consciousness of others as axiomatic, and we do this due to various kinds of self-reinforcing circular arguments, and also due to politeness; it's just mean and scary to suggest that some might not have it (back to Lanier's pragmatism). I call it "The Polite Axiom". I think we're free to choose a different axiom; after all, axioms are simply... chosen. I choose to go the other way, to some-people-don't-have-it, based on my equally foundationless gut feelings and circular, self-reinforcing observations and musings.
Lastly, I'm basically a Mysterian a la McGinn, because I don't see any possible explanation for consciousness that would be satisfactory. I can't even conceive of what form a satisfactory explanation would take[2]. I also came to realize in the past few years that even neurons shouldn't have authority on this issue. Why should it be in there rather than anywhere else? (Why do sloshing electrolytes make it happen? If I swish Gatorade from glass to glass, does it get conscious?) And, unlike McGinn, I don't think we know that it's in there and only there. Nope! We know[3] only that it's one mechanism by which consciousness expresses itself, and if we're being disciplined, that's the most we can say.
Bonus incredibly contentious sidenote: Penrose's idea, though often laughed off as quantum woo-woo, has the advantage that it would solve the issue of Mental Privacy in a way that computationalism fails at (the difficulty of maintaining coherence would keep minds confined to small regions).
[1] One example: I absolutely love this conversation here from 2008, the bit from about 20:00 to 30:00, where Lanier at one point taunts Yudkowsky as being a possible zombie. A lot of the commenters think he's a mush-mouthed idiot saying nothing, but I think it's just brilliant. On display is a nuanced understanding of a difficult issue from someone who's spent decades chewing over all the points and counterpoints. "I don't think consciousness 'works' - it's not something that's out there", and the number-line analogy is fantastic, so spot-on re: computationalism/functionalism... there's just so much packed into those ten minutes that I agree with. Suppose people like Yudkowsky gravitate to hardnosed logical-positivist approaches because they don't have the thing, and so don't think there's anything to explain?
[2] The bit in the video where Lanier just laughs off Yudkowsky's suggestion that "Super Dennett or even Super Lanier 'explains consciousness to you'". It is "absurd [....] and misses the point". There's just no form such an explanation could even take. There's certainly no Turing machine with carefully chosen tape and internal state-transition matrix that would suffice (nor, equivalently, any deeply-nested jumble of lambda calculus; I mean, come on).
[3] "Know" under our usual axiom, at that! We assume it's there, then see the "evidence" of it there, but we've chosen axiomatically, in a circular manner, that certain observations should constitute evidence...