A question commonly raised when analyzing the methodology behind the Turing Test is whether a computer's "thinking" can involve both syntax and semantics, and whether both are required in order to "think". John Searle's "Chinese Room" thought experiment sets out to show that although a merely syntactic computer may be able to pass the Turing Test, its understanding of the questions posed to it is nonexistent. In this essay I shall argue that Searle illustrates his argument quite effectively, as well as raise objections to it. I shall conclude that Searle's objection to artificial intelligence rests on nothing more than chauvinism, and that a computer or robot could possess understanding and intelligence in the future.
In recent years, as computer technology has become more advanced, many believe that the gap between a human's ability to "think" and a computer's is beginning to close. The concepts presented in the influential Turing Test are often cited by those who hold this belief as reason to expect that computers will one day have a mind similar to a human's.
In this widely cited test, designed by Alan Turing to determine whether or not computers can "think", a human interrogator plays the "Imitation Game", trying to distinguish a human's replies to questions from a computer's within a specified amount of time. Turing argued that if the interrogator is unable to distinguish the computer from the human, then the computer has achieved the ability to "think".
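The structure of the game just described can be sketched in code. The sketch below is illustrative only: the function names, the single-guess scoring, and the transcript format are assumptions for the sake of the example, not details Turing specified.

```python
import random

def imitation_game(interrogator, human_reply, machine_reply, questions):
    """Run one simplified round of the Imitation Game.

    The interrogator poses each question to two hidden respondents,
    one human and one machine, labeled anonymously "A" and "B",
    and must then name the label it believes is the machine.
    Returns True if the machine "passes", i.e. goes unidentified.
    """
    # Randomly assign the two respondents to the anonymous labels.
    labels = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        labels = {"A": machine_reply, "B": human_reply}

    # Collect (question, answer_A, answer_B) triples.
    transcript = [(q, labels["A"](q), labels["B"](q)) for q in questions]

    # The interrogator inspects only the transcript, never the
    # respondents themselves, and guesses "A" or "B".
    guess = interrogator(transcript)
    machine_label = "A" if labels["A"] is machine_reply else "B"
    return guess != machine_label  # the machine passes if not identified
```

A machine with obviously mechanical answers will be caught by any attentive interrogator, so `imitation_game` returns False for it; a machine whose answers the interrogator cannot tell apart from the human's will sometimes pass, which is exactly the criterion Turing proposed.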
One of Searle's key arguments against the Turing Test is his claim that "Brains cause minds". Thinking cannot exist without an organic brain, whose purpose is to ensure the body's proper functioning and survival. All animals born with brains are capable of "understanding" in order to...