Submitted by timscarfe t3_yq06d5 in MachineLearning
trutheality t1_ivq6265 wrote
Reply to comment by Nameless1995 in [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
>"Why not?" Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.
This, I think, again falls into the same fallacy of trying to elicit understanding from the rule-following machinery. The rule-following machinery operates on meaningless symbols, just as humans operate on meaningless physical and chemical stimuli. If understanding arises in a program, it is not going to happen at the level of abstraction to which Searle is repeatedly returning.
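To make the levels-of-abstraction point concrete, here is a minimal illustrative sketch (not from the original comment; the `run` interpreter and the `gcd_program` below are invented for the example). The interpreter loop only matches opaque tokens and updates a table according to fixed rules; "computing a greatest common divisor" is a property of the program it happens to be running, visible only at the higher level of description.

```python
def run(program, registers):
    """Execute a list of (op, *args) tuples over a dict of registers.

    This loop knows nothing about what the program "means": it just
    matches token patterns and updates a table, following fixed rules.
    """
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "halt":
            break
        elif op == "jump_if_zero":   # branch to target when a register is 0
            reg, target = args
            pc = target if registers[reg] == 0 else pc + 1
        elif op == "mod":            # registers[dst] = registers[a] % registers[b]
            a, b, dst = args
            registers[dst] = registers[a] % registers[b]
            pc += 1
        elif op == "swap":           # exchange the contents of two registers
            a, b = args
            registers[a], registers[b] = registers[b], registers[a]
            pc += 1
        else:
            raise ValueError(f"unknown op: {op}")
    return registers


# Euclid's algorithm, written as tuples that are meaningless to the interpreter.
gcd_program = [
    ("jump_if_zero", "b", 4),      # if b == 0, the answer is in "a"
    ("mod", "a", "b", "a"),        # a <- a mod b
    ("swap", "a", "b"),
    ("jump_if_zero", "zero", 0),   # "zero" is always 0, so this loops back
    ("halt",),
]

print(run(gcd_program, {"a": 48, "b": 18, "zero": 0})["a"])  # -> 6
```

Asking whether the `while` loop "understands" Euclid's algorithm is a question posed at the wrong level of description, which is roughly the separation being drawn here.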
>The point Searle is trying to make is that understanding is not exhaustively constituted by some formal relations but also depends on how the formal relations are physically realized
This also seems like an exceptionally weak argument, since it suggests that a sufficiently accurate physics simulation of a human would, after all, hit all of Searle's criteria and be capable of understanding. Again, even here, it is important to separate levels of abstraction: the physics engine is not capable of understanding, but the simulated human is.
One could, of course, stubbornly insist that without us recognizing the agency of the simulated human, it is just another meaningless collection of symbols that follows the formal rules of the simulation, but that would be no different from viewing a flesh-and-blood human as a collection of atoms that meaninglessly follows the laws of physics.
Ultimately in these "AI can't do X" arguments there is a consistent failure to apply the same standards to both machines and humans, and, as you point out, a failure to provide falsifiable definitions for the "uniquely human" qualities being tested, be it understanding, qualia, originality, or what have you.
Nameless1995 t1_ivq9tkh wrote
> If understanding arises in a program, it is not going to happen at the level of abstraction to which Searle is repeatedly returning.
There is a bit of nuance here.
What Searle is trying to say by "programs don't understand" is not that there cannot be physical instantiations of "rule following" programs that understand (Searle allows that our brains are precisely one such physical instantiation), but that there would be some awkward realizations of the same program that don't understand. So the point is actually relevant at a higher level of abstraction.
> Ultimately in these "AI can't do X" arguments there is a consistent failure to apply the same standards to both machines and humans, and, as you point out, a failure to provide falsifiable definitions for the "uniquely human" qualities being tested, be it understanding, qualia, originality, or what have you.
Right. Searle's point becomes even more confusing because on the one hand he explicitly allows that "rule following machines" can understand (he explicitly says that instances of appropriate rule-following programs may understand things, and also that we are machines that understand), while at the same time he doesn't think that merely simulating the functions of a program with an arbitrary implementation of rule-following is enough. But then it becomes hard to tease out what exactly "intentionality" is for Searle, and why certain instances of rule-following realized through certain causal powers can have it, while the same rules simulated otherwise in the same world do not.
Personally, I think he was sort of thinking in terms of the hard problem (before the hard problem was formulated as such; it existed earlier in different forms). He was possibly conflating understanding with having phenomenal "what it is like" consciousness of a certain kind.
> consistent failure to apply the same standards to both machines and humans
Yeah, I notice that. While there are possibly a lot of things we don't completely understand about ourselves, there also seems to be a tendency to overinflate ourselves. As for myself, if I reflect first-personally, I have no clue what it is I actually do when I "understand". There were times, when I was younger, when I thought that I don't "really" "understand" anything. Whatever happens, happens on its own; I can't even specify the exact rules of how I recognize faces, how I process concepts, or even what "concepts" are in the first place. Almost everything involved in "understanding anything" is beyond my exact conscious access.