"Minds, Brains, and Programs" by John R. Searle discusses the idea of AI, specifically Strong AI. I will argue that his claim that Strong AI cannot be achieved is incorrect. Searle makes many points, but he underestimates what could make up the intentionality he defends. His argument can be summarized in five points. The first and second points contrast the 'intentionality' of living animals, which he grounds in the causal features of the brain, with computer programs, which he holds are never sufficient for 'intentionality'. Searle illustrates this with his "Chinese Room" thought experiment.
The room contains a person who speaks only English and knows no Chinese. If he is handed Chinese characters and asked to respond, he cannot. But if he is given English instructions for manipulating the characters, he can produce appropriate responses by following those instructions, while understanding no more Chinese than before. This thought experiment is the basis of Searle's argument, and I believe it has problems, which I will discuss later. The third point is the claim that the brain does not produce intentionality by instantiating a program.
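The rule-following Searle describes can be caricatured in a few lines of code, which makes his point concrete: the program below maps input symbols to output symbols with no access whatsoever to what the symbols mean. This is a hypothetical sketch in Python; the symbols and replies are invented placeholders, not real Chinese.

```python
# A caricature of the Chinese Room: the "person" is a pure lookup
# over a rule book, pairing input symbols with output symbols.
# Nothing here represents the meaning of any symbol.
RULE_BOOK = {
    "symbol_A": "symbol_X",   # "if you see A, write back X"
    "symbol_B": "symbol_Y",
}

def chinese_room(symbol):
    """Return the scripted reply, or a default if no rule matches."""
    return RULE_BOOK.get(symbol, "symbol_?")

print(chinese_room("symbol_A"))  # → symbol_X
```

The lookup produces fluent-looking output while "understanding" nothing, which is exactly the intuition Searle's experiment trades on.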
I think this is largely because Searle gives no credit to programs, and his lack of a definition of what counts as a program is a problem in its own right. This brings him to the fourth point, which holds that for a mechanism to create intentionality it must have causal powers equal to those of a brain; therefore Strong AI cannot be created merely by designing programs. He elaborates on this in his fifth point, concluding that whatever produces a mind must be a brain and not something merely like one, just as a simulation of photosynthesis, unlike actual chlorophyll, converts no solar energy.
Strong AI is possible, and not only as a copy of the human brain, as Searle claims. Human cognition, or intentionality, can be described as the ability to go beyond a fixed input and response. For example, when you are given an apple and told to eat it, you do not simply comply; you think through whether and why you should eat it. Searle would argue (in his first point) that any machine short of a human, or a biological copy of one, cannot do more than eat or not eat according to an "apple → good → eat" string of directions, just as the man in the Chinese Room merely deciphers symbols.
The difference is that the human brain has evolved to learn not just a specific set of instructions but to overlay them: generations of accumulated knowledge and personal experience both play a large role. For the same reason, the second point, the reverse argument for why machines cannot express cognitive traits, is also flawed. The ability itself is not the biggest hurdle for Strong AI; the hurdle is the skill of the creators in overcoming it.
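The contrast between bare rule-following and overlaid learning can be sketched as a toy example (hypothetical Python; the factors, memories, and outputs are all invented for illustration). The first function follows the bare "apple → eat" rule Searle imagines; the second lets stored experience and current state override the same instruction.

```python
def rule_follower(instruction):
    # Searle's machine: the instruction alone determines the output.
    return "eat" if instruction == "eat the apple" else "do nothing"

def overlaid_agent(instruction, memory, hungry):
    # Overlaying: the same instruction is weighed against stored
    # experience and current bodily state before acting.
    if instruction == "eat the apple":
        if "apples made me sick" in memory:
            return "refuse"
        if not hungry:
            return "save for later"
        return "eat"
    return "do nothing"

print(rule_follower("eat the apple"))                                   # → eat
print(overlaid_agent("eat the apple", ["apples made me sick"], True))   # → refuse
```

The same input produces different outputs depending on history, which is the overlaying of information the essay argues a brain, and a sufficiently built machine, performs.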
When we follow rules or programs, those programs can be called senses that report overlaid information, which is then exchanged with our memory archive. In the robot reply, Searle imagines a robot with a camera and claims it still lacks understanding unless an organic brain replaces the AI. He does not acknowledge that true Strong AI requires a learning process, including mobility and the creation of new ways to obtain information. To say that information can arrive only along a single path reduces the process to one possible method of exchange and thereby biases what AI can be made to do.
Once the brain receives its information, it must exhibit cognitive action. A machine can create such action too, given the appropriate tools and knowledge base. In support of his third point, Searle raises the objection that once a robot is sufficiently similar to us, we may not know it is a robot because it functions and acts so much like we do. But what he is effectively saying is that the robot is still responding to a more complex rule book, and that once we knew the rule book, it would lose its perceived intentionality.
But the human brain is no different; it is still a matter of the complexity of the program. We simply have very fragmented and divided programs. In our society we seek out psychiatrists, who do exactly this: they try to understand our individual rule books. One difference between Searle's robot and the one I am describing is that ours must not be aware of all the possibilities in its rule book; it must combine information in order to create new outputs. A recent experimental way to learn Chinese is to dress up the characters according to what they mean.
If the character for tree looks like a tree, then we know it means tree; if the character for fire is wrapped in flames, we may assume it means fire or burning. So instead of knowing English, an AI can know things, objects, and images, and use them to learn Chinese. In fact, this is how Google's image search works. While this is not an expression of AI comparable to us, since it shows intentionality but not cognitive action, it does demonstrate a path for learning information and disproves the Chinese Room as the greatest possible accomplishment of a machine.
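This meaning-from-appearance learning can be sketched as a toy lookup (hypothetical Python; the descriptions and meanings are invented placeholders, and real systems learn such associations statistically rather than from a hand-written table). The point is that the grounding runs through images, not through English translations.

```python
# Toy symbol grounding: the system attaches a meaning to an
# unfamiliar character by matching the character's visual
# description against labels it already has for images.
IMAGE_LABELS = {
    "looks like a tree": "tree",
    "wrapped in flames": "fire",
}

def ground_symbol(visual_description):
    """Guess a character's meaning from what it looks like."""
    return IMAGE_LABELS.get(visual_description, "unknown")

print(ground_symbol("looks like a tree"))  # → tree
```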
The causal powers mentioned in the fourth point appear at the step where the machine can make correlations, between tree and fire, say, and know that one can burn the other. That last step can be achieved only if the machine has a memory of fire, just as we, as toddlers, are told that fire burns or experience its pain ourselves. So it comes back to overlaid information built from inputs and history. In conclusion, Strong AI is possible; Searle has reduced it to a few very simple lines of code and used that reduction as the basis for all his arguments.
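The correlation step just described can be sketched as a toy program that combines two separately stored memories into an output that neither rule states on its own (hypothetical Python; the "memories" are hand-entered stand-ins for learned experience).

```python
# Two separately stored pieces of "memory"; neither one mentions
# the conclusion directly.
MEMORY = {
    "fire": {"burns": "flammable things"},
    "tree": {"is": "flammable"},
}

def correlate(a, b):
    """Combine two memories into a claim neither rule states alone."""
    if MEMORY.get(a, {}).get("burns") == "flammable things" \
            and MEMORY.get(b, {}).get("is") == "flammable":
        return f"{a} can burn {b}"
    return "no known relation"

print(correlate("fire", "tree"))  # → fire can burn tree
```

No single entry in the rule book says "fire can burn tree"; the output exists only because stored history was overlaid, which is the correlation-making the essay identifies with causal powers.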
Our causal response relies on our ability to learn, absorb, and respond to information. The depth and range of this ability is so large that our brains engage it only for the task at hand, where sensory input and memory combine to exhibit a cognitive response. AI can reach such a state without needing to be a brain, by using a similar method: sensors, a knowledge base, and the ability to bring indirectly related information into a decision. Therefore Strong AI is possible, but it must be built up as we were built up as organisms.