Strong AI in John Searle’s “Minds, Brains, and Programs” Essay

John Searle is an American philosopher, known for creating the Chinese Room thought experiment to challenge the notion of strong AI. Searle’s paper, “Minds, Brains, and Programs,” introduces the Chinese Room and responds to many of the replies that arose when the thought experiment was first presented to the public. According to Searle, AI is a rigorous problem-solving tool, one that can be more precise than any human. Strong AI, however, is not just a tool.

Rather, “the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states,” (Jacobsen, 147). Searle’s Chinese Room is meant to refute the claim that the programs a strong AI runs can be used to explain human cognition; the thought experiment was created to refute both the idea of strong AI and the functionalist theory of mind. Imagine a computer program that can be told a story and can then answer questions about implicit details within the story.

For example: “Kyle ran into his house, slamming the door behind him. He threw his book bag on the floor and plopped on the couch. After six hours of playing Grand Larceny VII, he ate some pizza and fell asleep with a slice on his belly and his feet on his book bag. When Kyle came home from school the next day, he was noticeably distraught. He balled up his report card and placed it inside of a soup can in the garbage. He then flipped the soup can upside down and relocated garbage from other parts of the can, arranging it over the soup can.

He then plopped down on the couch and picked up his controller.” (E-Reading Worksheet, Inferences Worksheet 2) If you are asked “Why did Kyle hide the report card in the soup can?” your answer will be something like “Kyle has some bad grades on his report card and he doesn’t want his parents to find out.” A person will presumably know why Kyle hid his report card. Like a regular person, this program will be able to answer questions about stories even though the answers are never explicitly given in the stories.

Its answers will be indistinguishable from those of any other person given the same story: “Partisans of strong AI claim that in this question and answer sequence the machine is not only simulating a human ability but also (1) that the machine can literally be said to understand the story and provide the answers to questions, and (2) that what the machine and its program do explains the human ability to understand the story and answer questions about it,” (Jacobsen, 148). Searle believes that both claims are false and challenges them using his Chinese Room thought experiment.

Suppose that a man is placed in a room with no windows or doors. The room has a slit through which he can receive and return letters, but no one can see inside. Now, suppose he is given three batches of Chinese writing. Assume also that he knows no Chinese and cannot even distinguish Chinese from Japanese or any other logographic script. The first batch is plain Chinese script. The second and third batches each come with a set of rules written in English, which he can read and understand as well as any native English speaker.

The rules accompanying the second batch allow him to correlate the second batch with the first, identifying each symbol purely by its shape. The rules accompanying the third batch allow him to correlate it with the first two batches and tell him how to respond with written Chinese symbols. Suppose he becomes extremely proficient at manipulating the symbols, and the rules grow so extensive that he can respond to any query; his responses become indistinguishable from those of a native Chinese speaker.
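As a minimal sketch of what the man’s procedure amounts to, consider the following toy program: input symbols are matched against a rule book and a reply is produced, while nothing anywhere in the program represents what any symbol means. The two-entry rule book and the particular symbols are, of course, hypothetical stand-ins for Searle’s vastly larger set of rules.

```python
# A toy "Chinese Room": replies are produced purely by matching the
# shapes of input symbols against a rule book. Nothing in the program
# represents the meaning of any symbol.

RULE_BOOK = {
    # "If you receive this squiggle sequence, return that squoggle sequence."
    ("你", "好", "吗"): ("我", "很", "好"),
    ("你", "饿", "吗"): ("我", "不", "饿"),
}

DEFAULT_REPLY = ("不", "知", "道")  # fallback rule for unmatched input

def respond(symbols: tuple) -> tuple:
    """Follow the rules by shape alone; no symbol is ever interpreted."""
    return RULE_BOOK.get(symbols, DEFAULT_REPLY)

print("".join(respond(("你", "好", "吗"))))  # prints 我很好
```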

Examining the claims made by supporters of strong AI, we can see that the man inside does not understand Chinese at all, and thus does not understand the story either. He is simply taking an input, following instructions written in a language he does understand, and producing an output according to the rules given to him. He himself understands neither the story, nor the questions, nor even his own answers. Therefore, the first claim is false. The second claim, the belief that the program explains the human ability to understand, is also false.

Searle argues against this claim because “a human will be able to follow the formal principles without understanding anything,” (Jacobsen, 149). Although it is true that a person following the same procedure as the man in the room will not come to understand Chinese, I believe the claim is false because an entirely different process takes place in our minds. The claim was that the program explains how we understand and our ability to understand.

Therefore, the program should explain the thought process that goes on inside our minds or, at the very least, produce understanding if we were to go through the same process ourselves. Searle focused on the absence of understanding; I will focus on the difference between the processes by which it is achieved. Computers and people are different. Our minds are complicated, unique, and vary from person to person. A program cannot explain how people understand when it uses a completely different method from ours, and one that, on top of that, produces no grasp of the concepts involved.

Computers solve problems differently than we do, and that difference means they cannot explain the way we think.

Consider the game of chess. A chess engine, or chess AI, looks at a board and evaluates what the next move should be by examining sequences of moves, then chooses the path that leads to the best possible position. However, the AI looks at the board differently than we do. It evaluates the board using a score. It does not know strategy; it just knows that following a certain path is better than all the others because it returns the best possible score.

It is true that the program knows which moves are possible and how the pieces work, but only because those rules were programmed into it as restrictions. In terms of understanding, it knows that a queen is more valuable than a pawn only because losing a queen costs more points. Depending on the engine and its method of evaluation, the program calculates a score for the state of the board and checks which move leads to the best possible situation, repeating this several moves deep depending on the strength of the engine.
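To make this concrete, here is a minimal sketch of the scoring-and-lookahead procedure just described, written with the third-party python-chess library (pip install chess) and a bare material count as the evaluation. Real engines use far more elaborate evaluations and searches; the point of the sketch is that the program “prefers” a queen to a pawn only because of the numbers assigned in PIECE_VALUES.

```python
import chess

# Material values: the engine's entire "understanding" of the pieces.
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> int:
    """Score a position as White's material minus Black's. No strategy."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def minimax(board: chess.Board, depth: int) -> int:
    """Look `depth` half-moves ahead; each side assumes the best score."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    scores = []
    for move in board.legal_moves:
        board.push(move)                     # try the move...
        scores.append(minimax(board, depth - 1))
        board.pop()                          # ...then undo it
    return max(scores) if board.turn == chess.WHITE else min(scores)

def best_move(board: chess.Board, depth: int = 2) -> chess.Move:
    """Return the move whose lookahead score is best for the side to move."""
    best, best_score = None, None
    for move in board.legal_moves:
        board.push(move)
        score = minimax(board, depth - 1)
        board.pop()
        better = best is None or (score > best_score if board.turn == chess.WHITE
                                  else score < best_score)
        if better:
            best, best_score = move, score
    return best

# From the start, no material can be won within two half-moves, so every
# move scores 0 and the engine simply returns the first legal move.
print(best_move(chess.Board()))
```

Notice that at depth 2 from the starting position every move scores 0, so the engine has no basis to choose between them; nothing resembling an opening plan exists anywhere in the code.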

The more moves it considers, the stronger the AI plays. However, this does not explain how people play chess. We do not calculate scores for the board to determine our next move; we think in terms of pieces, strategies, traps, and sacrifices. Of course we consider what our opponent can do, but rarely more than two or three moves ahead, because the number of possibilities is too great for a person to hold in mind. At the beginning of a chess game there are 20 possible first moves, and once each player has moved there are already 400 possible positions. As the game progresses, the possibilities multiply so quickly that thinking more than a couple of moves ahead becomes almost impossible.
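These counts are easy to verify with the same python-chess library used in the sketch above (a small illustrative check, not part of Searle’s argument):

```python
import chess

board = chess.Board()
first_moves = list(board.legal_moves)
print(len(first_moves))  # 20 legal first moves for White

# Count positions after one move by each side: 20 * 20 = 400.
total = 0
for move in first_moves:
    board.push(move)
    total += len(list(board.legal_moves))
    board.pop()
print(total)  # 400
```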

The systems reply is one of the most renowned attempts to challenge the Chinese Room. It states that although “the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story,” (Jacobsen, 150).

Searle rebuts this point by modifying his thought experiment so that the man memorizes all the rules, converts audible Chinese words to symbols in his head, and responds in spoken Chinese: “Let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system… We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of Chinese… neither does the system, because there isn’t anything in the system that isn’t in him,” (Jacobsen, 150).

Searle’s rebuttal to the systems reply is apt. Take away the room and the ledger, add the ability to convert audible Chinese speech to symbols and vice versa, and you have a system that is functionally equivalent to a native Chinese speaker yet understands nothing. Take, for example, Nigel Richards, a man from New Zealand who memorized the French Scrabble dictionary before the Francophone Scrabble World Championships despite not knowing how to speak French (Willsher, 2015).

If he could learn to respond to questions in French with the same proficiency as a native French speaker, he would still not understand French, despite being functionally equivalent to someone who does. Searle effectively refutes strong AI using his Chinese Room thought experiment. Although two systems may be functionally equivalent, strong AI is not possible: a program will never be able to understand, or to explain, the human ability to grasp concepts the way we do. Our minds are too complex to be replicated by a computer program.