Anyone up for the Chinese Room argument?



yasashiku
September 20th, 2010, 08:08 PM
So this is originally a concept from philosophy about artificial intelligence, but I think it would have an interesting application if we were to bring it into the religious field.

I'm tempted to start a thread with this idea, but I'm not totally sure how I'd want to go about wording it, and I'm looking for feedback as to how to make it a good debate topic (i.e., keep people interested, maybe discover a thing or two as we discuss...)


Background on the Chinese Room argument:

The Chinese room is a thought experiment (http://en.wikipedia.org/wiki/Thought_experiment) by John Searle (http://en.wikipedia.org/wiki/John_Searle) which first appeared in his paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences (http://en.wikipedia.org/wiki/Behavioral_and_Brain_Sciences) in 1980.[1] It addresses the question: if a machine can convincingly simulate an intelligent conversation, does it necessarily understand? In the experiment, Searle imagines himself in a room acting as a computer (http://en.wikipedia.org/wiki/Computer) by manually executing a program (http://en.wikipedia.org/wiki/Computer_program)[2] that convincingly simulates the behavior of a native Chinese speaker. People outside the room slide Chinese characters under the door and Searle, to whom "Chinese writing is just so many meaningless squiggles", is able to create sensible replies, in Chinese, by following the instructions of the program; that is, by moving papers around. The question arises whether Searle can be said to understand Chinese in the same way that, as Searle says, "according to strong AI, . . . the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."[3]
The experiment is the centerpiece of Searle's Chinese Room Argument, which holds that a program cannot give a computer a "mind" or "understanding", regardless of how intelligently it may make it behave.[1] He concludes that "programs are neither constitutive of nor sufficient for minds."[4] "I can have any formal program you like, but I still understand nothing."[5] The Chinese room is an argument against certain claims of leading thinkers in the field of artificial intelligence (http://en.wikipedia.org/wiki/Artificial_intelligence),[6] and is not concerned with the level of intelligence that an AI program can display. (Wikipedia Article (http://en.wikipedia.org/wiki/Chinese_room))
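
To make the setup concrete, here is a toy sketch of the room as a program, in Python. The rulebook is just a lookup table pairing input squiggles with output squiggles; the entries here are invented for illustration, and a real rulebook would be astronomically larger. The point is that nothing in it involves knowing what any character means.

# Toy sketch of Searle's room: the "operator" matches shapes against
# a rulebook and copies out the paired reply. No step below involves
# knowing what any character means. Entries are invented; a real
# rulebook would be astronomically larger.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def room_reply(symbols: str) -> str:
    # Pure symbol manipulation: look up the squiggles, return the
    # squiggles the rulebook pairs with them.
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(room_reply("你好吗？"))  # prints: 我很好，谢谢。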


A common criticism of the Chinese Room Argument is that if our brains are strictly biological (i.e., there is no "soul", dark matter, etc. involved in their function), then by Searle's own reasoning no human could be said to have a mind, either.


I'm tempted to go a few ways with this:
Is it possible that our brains are influenced by some currently unknown force? (i.e., given that spirits exist, how could they have any relation to or control over our bodies - or is this something supernatural?) (likely too specialized, and would involve a lot of speculation about neuroscience)
We could debate the argument itself (we would likely repeat criticisms that have been made elsewhere, and I'm not sure people would be very interested)
If we are simply biological robots with purely stochastically behaving brains, then what are the ethical - or even religious - ramifications of, say, destroying an equally intelligent mechanical robot (given that it could be created)? (again, very speculative and probably too hypothetical)


My question is: does this idea spark anyone's fancy? Would this be a discussion you would participate in? And does anyone have an idea of how to make this into a meaningful debate?

Lukecash12
September 20th, 2010, 08:31 PM
I would probably debate on the definition of a human, from a religious standpoint.

Dr Gonzo
September 20th, 2010, 10:11 PM
It's an interesting topic, to be sure.

What I have a problem grasping in the AI argument is the "understanding" part - does the AI really understand? Let's switch our Chinese characters out for words printed on a card in a language with the Latin alphabet, and a native Chinese speaker who has only learned how to phonetically pronounce words in this language (not necessarily their meaning). Now, if the word is spoken, there are many, many different cues - some verbal, some tonal, some with basic pronunciation, even some body language and eye contact cues - that could alter the "conversation" being had with the speaker/reader. For example, if I were sitting across from this guy, and he were reading off responses, I would be able to tell, through various cues, that he was not a native speaker of the language and could probably not understand what I was saying in response. I would change my delivery, or perhaps even the responses themselves.

While that guy may not understand our conversation, he would understand other things during our interaction, such as noticing my reaction and the subsequent change in my delivery. If a computer could be encoded with such reactions, and even more senses (to be able to pick up on all the different cues), and a way to integrate those senses into the "response program," then it's quite possible it would "understand" what was going on, every bit as well as I would in that situation. I mean, that's the basic function that's going on in our brains, isn't it?
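
A minimal sketch of what "integrating cues into the response program" might look like; the channel names, weights, and threshold are all invented for illustration, not a claim about how a real system would do it.

# Hypothetical sketch: several input channels are reduced to one
# estimate of the listener's comprehension, which then steers the
# reply. Channel names and weights are invented for illustration.

CUE_WEIGHTS = {"verbal": 0.4, "tonal": 0.3, "body_language": 0.2, "eye_contact": 0.1}

def comprehension_estimate(cues: dict) -> float:
    # Weighted blend of cue scores (each 0.0 = totally lost, 1.0 = following).
    return sum(CUE_WEIGHTS[name] * score for name, score in cues.items())

def respond(message: str, cues: dict) -> str:
    # Adjust delivery when the integrated cues suggest the listener
    # isn't following - the "change my delivery" move described above.
    if comprehension_estimate(cues) < 0.5:
        return f"Let me put that more simply: {message}"
    return message

print(respond("The rulebook pairs inputs with outputs.",
              {"verbal": 0.2, "tonal": 0.4, "body_language": 0.3, "eye_contact": 0.5}))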

Lukecash12
September 20th, 2010, 11:25 PM
Well, it's much more complex than that, actually. The way you actually go about it isn't as straightforward as you are depicting. Here's a list of things that are highly variable in that situation (and I am confident you would get a distinctly different response from each individual if you were to test this scenario):

1. Level of confusion.
2. Interpretation of verbal cues.
3. Recognition of the problem, and the time taken to recognize it.
4. Consistency of response (you would have more than one thought running through your head, so whatever you are trying to project to the person won't come out consistently).
5. Ultimate effectiveness.

Now, that was actually a very basic diagnostic of the situation. Suffice it to say, a computer would have difficulty handling that much in a humanlike fashion - that is, using its own, completely unique variation of those five factors. If I could remember everything I learned in sociology, I probably would have been able to construct a 12-point list. And that is still rudimentary compared to what is actually going on. So, it would take a remarkable computer to emulate a human in this specific situation.
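
A rough sketch of that per-individual variability: each simulated person draws their own profile across the five factors, so no two handle the situation identically. All numbers here are invented for illustration.

import random

# Each simulated person gets an independently drawn profile over the
# five factors listed above, 0.0 to 1.0 per factor.

FACTORS = ["confusion", "cue_interpretation", "recognition_time",
           "consistency", "effectiveness"]

def individual_profile(rng: random.Random) -> dict:
    # Draw each factor independently for this person.
    return {factor: rng.random() for factor in FACTORS}

for person in range(3):
    rng = random.Random(person)  # seeded so each "person" is distinct but repeatable
    profile = individual_profile(rng)
    summary = ", ".join(f"{k}={v:.2f}" for k, v in profile.items())
    print(f"person {person}: {summary}")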

Dr Gonzo
September 21st, 2010, 01:04 AM
Yeah, kind of what I was getting at... only, "If a computer could be encoded with such reactions, and even more senses (to be able to pick up on all the different cues), and a way to integrate those senses into the "response program," then it's quite possible it would "understand" what was going on, every bit as well as I would in that situation. I mean, that's the basic function that's going on in our brains, isn't it?"

I mean, yeah, that would be a remarkable computer. I'm just saying, if it were possible (and it may not be, I haven't a clue) to create a program sophisticated enough to deal with all of that input, that would basically emulate the human brain on some level. As it is, we respond to stimuli. We just happen to respond to much, much more of it than any computer is able to do at present.

Lukecash12
September 21st, 2010, 06:31 PM
However, one trait would be missing: individualism. For a computer to engage in such thinking, it would have to be programmed to do so. Now, if one were to get a computer to engage in complex reasoning of its own accord - that is, to create a machine that can be both introspective and retrospective - then that would make it "human", more or less.
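
One toy way to picture the introspective/retrospective distinction: an agent that can report on its current state (introspection) and review its own decision history (retrospection). The class and its rules are invented for illustration; whether such a thing would count as acting "of its own accord" is exactly the open question here.

# Toy illustration only. The agent reports on its current state
# (introspect) and reviews its own past decisions (retrospect).

class ReflectiveAgent:
    def __init__(self):
        self.history = []          # record of past decisions
        self.mood = "neutral"      # stand-in for current internal state

    def decide(self, question: str) -> str:
        answer = "yes" if len(question) % 2 == 0 else "no"  # arbitrary rule
        self.history.append((question, answer))
        return answer

    def introspect(self) -> str:
        # Report on the current internal state.
        return f"mood={self.mood}, decisions so far={len(self.history)}"

    def retrospect(self) -> list:
        # Look back over the record of past decisions.
        return list(self.history)

agent = ReflectiveAgent()
agent.decide("Is the room conscious?")
print(agent.introspect())
print(agent.retrospect())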

Dr Gonzo
September 21st, 2010, 10:03 PM
I'm glad to see we are in agreement. That is what I have been saying.

SharmaK
September 23rd, 2010, 04:48 AM
Dennett's Consciousness Explained (http://www.amazon.com/Consciousness-Explained-Daniel-C-Dennett/dp/0316180661/ref=pd_sim_b_3) is a great book that has a more convincing argument about this. He explains that our reactions, our reactions to our reactions, and all that brain baggage are an emergent property of all the neuronal connections. The different parts all operate sort of independently until some tipping point is reached when something external to the brain happens.
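
A loose illustration of that tipping-point picture: many units updating independently, with a global response appearing only when their combined activity crosses a threshold. No single unit holds the decision. The thresholds and numbers are invented, not taken from Dennett.

import random

# 100 independent "parts"; an external stimulus nudges a random subset
# each step, and a global response emerges only at the tipping point.

rng = random.Random(0)
activations = [0.0] * 100
THRESHOLD = 60.0  # global tipping point (arbitrary)

for step in range(1, 201):
    # External stimulus nudges 20 randomly chosen units this step.
    for i in rng.sample(range(100), 20):
        activations[i] = min(1.0, activations[i] + rng.uniform(0.0, 0.2))
    total = sum(activations)
    if total >= THRESHOLD:
        print(f"tipped at step {step} (total activation {total:.1f})")
        break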

It's a much better idea than the Chinese room because it also explains multilingualism and how people can get words and even grammar mixed up together - like my kids did when they were growing up.

yasashiku
September 29th, 2010, 11:22 AM
Thanks for the input, everybody.

I would probably debate on the definition of a human, from a religious standpoint.
Nice. I like it a lot - there are a lot of ways someone could come at it. I think I'd like to pin down the variables as much as possible to make the discussion interesting, but not too much to keep it open to a lot of perspectives. Maybe if we could find a controversial definition somewhere and go from there...
Consciousness Explained looks really interesting. And controversial. The problem with a book, though, is that it's not generally accessible (unless there's a free version somewhere I'm unaware of), and so debate would be a little harder.