Philosophical Analysis
As technology advances at a rapid pace, many fear that digital computers may come to replicate human brains. In "Can Computers Think?", philosopher John Searle argues that no matter how much computers advance, they are simply syntax machines, incapable of thinking or of making meaning out of their programming. To frame his argument, Searle highlights the distinction between syntax (structures of words or code, attached to no meaning) and semantics (the meaning drawn from those words). He claims that humans are capable of both syntax and semantics, for they can obviously think and make meaning, whereas computers can only execute syntactic programs. To further his point, he notes that a computer may run a series of 0s and 1s that produces the images on its screen, yet the computer does not understand or make meaning out of those 0s and 1s; all it "knows" is the code, the syntax, not the semantics. Searle's premises are as follows: first, computers cannot have a "mind," "understanding," or "consciousness," no matter how human-like a program's behavior may seem, because their lack of consciousness prevents them from attaching true meaning to their code. Second, semantics cannot be derived from syntax, a point he illustrates with the Chinese Room experiment. He imagines a person in a room who receives slips of paper with Chinese symbols and has a rule book, written in English, that tells him which symbols to send back in response, without ever explaining what any of the symbols mean. As he gets better and better at manipulating the Chinese symbols, he eventually memorizes the rules and knows almost instantly what to send out. Searle proposes that this person is mimicking a computer: memorizing syntax and outputting symbols without truly understanding the language, its grammar, or its meaning. The Chinese Room experiment suggests that a computer may appear to know a language (by using rules of syntax to manipulate strings of symbols) when in reality it has no grasp of semantics. Ultimately, Searle concludes that since computers have no semantics, they cannot think.
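To make this syntax-only picture concrete, one might imagine the room's English rule book as nothing more than a lookup table. The short Python sketch below is purely illustrative (the symbols and the pairing rules are invented for this example, not taken from Searle): it produces replies that could look fluent from the outside, yet nothing in it represents what any symbol means.

```python
# A toy illustration of Searle's point: the "room" maps incoming symbols to
# outgoing symbols by pure pattern matching. The pairings below are invented
# for illustration; nothing in this table encodes what any symbol means.
RULE_BOOK = {
    "你好吗": "我很好",          # if this string of shapes comes in, send that one out
    "你叫什么名字": "我叫王伟",
}

def chinese_room(incoming: str) -> str:
    """Return whatever reply the rule book dictates, or a default string.

    The function "knows" only syntax: which shapes pair with which shapes.
    It has no access to semantics, yet its output can look fluent from outside.
    """
    return RULE_BOOK.get(incoming, "对不起")  # default reply for unknown input

if __name__ == "__main__":
    print(chinese_room("你好吗"))  # prints 我很好 without understanding either string
```

The point of the sketch is simply that the mapping from inputs to outputs is pattern matching all the way down; swapping in entirely different symbols would change nothing about how it works.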
Searle builds his case that computers cannot think most vividly through the Chinese Room experiment. He designs a scenario in which a human brain replicates the program of a digital computer, manipulating Chinese symbols that are analogous to computer code. The scenario serves as an analogy that lets readers see his premise play out in a concrete example, strengthening his claim that computers cannot think. One shortcoming I see, however, is that the person in the Chinese Room is technically still thinking. The fact that he has the capacity to learn the rules and match incoming symbols to the correct responses proves that he can think. He gets faster and faster at the task until it seems automatic, but the initial steps of learning and applying the rules demonstrate a conscious ability and self-awareness that a computer program would not have. Whereas a human brain works through these tasks gradually and improves over time, computers are fully automated and process code instantly, even from the start. While Searle's scenario lays the groundwork for his premise, he could have thought more critically about what it truly means to "think," and about how accurately this person's actions mimic a computer program.
If he argues that this person is analogous to a computer, then he should reframe his position as "computers cannot make meaning" rather than "computers cannot think." It is not as though the person in the Chinese Room was empty of thoughts. There is a difference between thinking, on the one hand, and assigning meaning and understanding to codes and words, on the other, and Searle seems to muddle the two together. If he had clarified this distinction and described the person as "incapable of meaning and understanding" rather than incapable of "thinking," it would have been a more accurate depiction of what syntax machines are like. To complete the analogy, he could then explain that computers execute code without making meaning out of it, just like the person in the Chinese Room. Perhaps this is what he intended, but the word "thinking" may come across as ambiguous to readers, and that ambiguity could weaken his argument depending on how they interpret it.