AI - Artificial Intelligence or Automated Imbecility?
Can machines think and feel?

Valdemar W. Setzer
Dept. of Computer Science, University of São Paulo, Brazil
vwsetzer@usp.br - www.ime.usp.br/~vwsetzer
Written in Nov. 2002 - current version 3.2, Oct. 7, 2016
An enlarged version with the same title was published as a Kindle book on Jan 27, 2020, edited by Frederick Amrine, of the Univ. of Michigan at Ann Arbor

Summary

The standard current scientific view of the world is materialistic: its fundamental hypothesis is that the universe is purely physical, and all its phenomena are due solely to physical processes. This paper assumes the opposite hypothesis: there are also non-physical processes. This is proposed as an enlargement of current scientific views, without indulging in religious thinking, mysticism, dogma or sectarianism. Evidence is given in support of this enlarged view. Some of this evidence is universal, such as the origin of matter and energy; some is personal, such as the unity of sensory perception, the dependence of perception on associating percepts with concepts (considered as non-physical "objects" in the Platonic world of ideas), the possibility of self-determining one's next thought, free will, the subjectivity of feelings and the particularities of human memory. The mind and its main inner activities of thinking, feeling and willing are examined using their general characteristics as well as the main hypothesis, expounding why they cannot be fully inserted into a machine. The paper also covers the essential role of the physical brain in the thinking process, and the dependency of consciousness on thinking and feeling, with emphasis on self-consciousness. New classifications of intelligence and of Artificial Intelligence are proposed. Searle's Chinese Room, Turing's Test, electronic chess and Kurzweil's prophecies are described and discussed. Two recent films in which robots are played by human actors are also discussed, as well as their influence upon laymen and children. The paper concludes by arguing that regarding humans and living beings as machines represents a great danger to humanity and the world, and shows how the main hypothesis may help to reverse this trend.


1. Introduction

The increasing computational power of modern computers has permitted the implementation of tasks that would have seemed almost impossible ten years ago. Some examples are the defeat of the world chess champion by IBM's Deep Blue and the recognition of voice, handwriting and patterns. This has raised the question of the limits of computers: are they going to replace every intellectual, and perhaps every manual, human activity? Are they going to display intelligent behavior and replace humans in creative tasks? Will computers exercise the same kind of thinking and feeling that humans do? Will robots perform every task that humans do? Will they become indistinguishable from humans? These questions left the domain of academia with the exhibition of two recent films, The Bicentennial Man and Artificial Intelligence.

Much has been written on these questions. What I do here is to introduce a different way of approaching them. Many readers will find my arguments very strange. I want to make it very clear that they are not based upon any mystical or religious thinking. The reader will recognize that my arguments are conceptual and not emotional, and are directed to common understanding.

I have a recommendation for the interested reader. When facing new ideas, one should take one attitude and three actions: 1. Having no prejudice, no matter how strange the ideas or information may be. For instance, if someone had told me before Sept. 11 that the WTC in New York had just been hit by two commercial planes, I would have thought: "This seems very strange; it has never happened before in the world. But I will investigate." This also means being completely open to receiving the information, leaving criticism for a later step, after realizing what the whole picture of the new ideas is. 2. Verifying that the new ideas are consistent, that is, that there are no logical contradictions among them. 3. Verifying that the new ideas do not contradict what can be observed in the world, outside and inside the observer. For instance, if the new ideas include the claim that an object left in still air does not fall, they contradict what everyone can observe. If someone tells me that I cannot decide what my next thought is going to be, this contradicts my own inner experience. 4. Verifying that the new ideas seem attractive, that is, that they "ring a bell". This last condition is very important, because it is impossible to construct a physical theory that is absolutely encompassing and explains everything about something, a complete theory.

A special comment on item 3 above: it is very important to separate scientific facts from scientific judgements. For instance, it is a fact that one sees the sun moving across the sky during a clear day. The conclusions that the Earth remains still and the Sun moves around it, or that the Sun is still and the Earth rotates around its axis, are judgements (the latter corresponding to very "strong" scientific theories and experiments).

If the four items are fulfilled, then one should take the new ideas as working hypotheses, and not as questions of faith or dogma. This is what I expect from readers who have the courage to read my ideas up to the end, with an unprejudiced mind.

Artificial Intelligence is based on modern computers. So in chapter 2 I describe what a computer is from a logical point of view, characterizing its data processing features and the fact that this processing is syntactic, not semantic. Thinking is the central point when speaking about intelligence, and is covered in chapter 3. In connection with thinking I examine intuition, sensory perception, the role the brain may play in the thinking process, understanding and learning, and the question of whether the brain is a computer. Chapter 4 deals with intelligence, expounding what is generally understood by this term and various types of intelligence, as well as my own classification. The question "can machines be intelligent?" is covered in chapter 5. I describe the Turing Test and its extensions, and the traditional types and aims of research in Artificial Intelligence, expounding my own additional types. Chapter 6 deals with the question of whether machines can have feelings. For this, I compare feeling with thinking, showing that the former is always subjective and individual, whereas the latter may be objective and universal. The question of whether machines can have consciousness is handled in chapter 7, and chapter 8 deals with the central question of whether humans - and living beings in general - are machines. Chapter 9 criticizes two recent successful films depicting robots having feelings, Artificial Intelligence and The Bicentennial Man, and their possible influence upon the way people regard themselves. Finally, in chapter 10 I present my conclusions, expounding my concern with the view, current among many scientists doing research in Artificial Intelligence, that humans are machines, and that we will therefore be able to introduce into machines all human capabilities.

2. What is a computer?

Modern digital computers are mathematical, logical-symbolic, algorithmic machines. This means that the processing and effect of any machine-language instruction interpreted by the computer (rigorously, a computer never executes an instruction, but interprets it) can be mathematically described, that is, it represents a mathematical function. Moreover, the mathematics involved is a restricted one: it only deals with symbols taken from a finite, discrete set, to which one may always assign a numbering system.

A program is a set of instructions, and can always be associated with a mathematical function which maps elements of a set of input data into elements of a set of output data. I define data as a representation of quantified or quantifiable symbols. By quantifiable I mean that when some object is quantified and a representation is made out of this quantification, it is not possible to distinguish this representation from the original object. For example, if a printed picture is scanned into a computer, which then prints it, and the resulting figure looks the same as the original, the latter is quantifiable, because inside a computer every object is represented using a numbering system, that is, through quantities. Other examples of data are texts, recorded sounds and animations.
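To make this mathematical character concrete, here is a minimal sketch in Python (my own illustration; the names and values are hypothetical). A "program" is shown as a pure function mapping input data - symbols from a finite, discrete set, here gray levels represented as integers from 0 to 255 - into output data:

    # A "program" as a mathematical function over quantified symbols.
    # Input and output are integers in 0..255, as every object inside
    # a computer is ultimately represented through quantities.
    def negative(pixels):
        return [255 - p for p in pixels]

    scanned = [0, 128, 255]     # a (tiny) quantified picture
    print(negative(scanned))    # [255, 127, 0]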

I have said that computers are algorithmic machines. An algorithm is a finite sequence of mathematically defined actions that ends its execution for any set of values of the input data. A computer program may be a sequence of well-defined, that is, valid instructions; but if during its execution it enters an infinite loop of actions in which there is no input of new data, then it is not an algorithm. Thus, not every program is a description of an algorithm.
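This distinction can be illustrated in code (a sketch of mine, using Python as notation). Both fragments below are valid programs, but only the first describes an algorithm; the second, once started, loops forever without any input of new data:

    def sum_up_to(n):
        # An algorithm: it terminates for every input value n >= 0.
        total = 0
        for i in range(n + 1):
            total += i
        return total

    def loop_forever():
        # A valid program, but not an algorithm: an infinite loop
        # with no input of new data, so execution never ends.
        while True:
            pass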

Therefore, a program is a set of mathematical rules on how to transform, transport and store data. Inside the computer, data is represented as strings of quantified symbols. Thus, the rules may be regarded as a syntax that is applied to those strings. The strings themselves always follow a certain structure. For instance, an address string is composed of three parts: street and number, city, and Zip code. The Zip code must follow a certain pattern - a string of a certain number of decimal digits. As programs and data follow syntax rules, one may say that a computer is a syntax machine.
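As an illustration of purely syntactic processing, the following sketch (my own hypothetical example, not taken from any real system) tests whether a string has the form of a five-digit Zip code. The test succeeds or fails according to the structure of the string alone; the machine has no notion of what an address is:

    import re

    # A purely syntactic rule: exactly five decimal digits.
    ZIP_PATTERN = re.compile(r"^\d{5}$")

    def looks_like_zip(s):
        # True if s has the *form* of a Zip code; whether such a
        # place exists is a semantic question, inaccessible here.
        return bool(ZIP_PATTERN.match(s))

    print(looks_like_zip("04511"))   # True: right structure
    print(looks_like_zip("tree"))    # False: wrong structure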

2.1 Data and information

I have defined what data is. I characterize information as an abstraction that exists only in some human mind and has significance to that person (for further details, see my paper "Data, information, knowledge and competency" at my web site). This is not a definition: it is not possible to define what "abstraction", "mind" and "significance" are. An example may help to clarify the difference between data and information.

Suppose we have a table with two columns representing names of cities and their current temperatures. There is a header line with titles, texts in the cells of the first column, and numbers in the cells of the second column. Suppose the titles and texts are written in some language which uses special symbols for its alphabet, say Chinese. For a person who does not know Chinese and Chinese ideograms, that table is pure data. If there is no grid, the person may not even recognize that it is a table. This does not prevent the person from formatting the table, such as changing the ideograms' font - a data processing action. Recognizing that it is a table, but still not understanding what it means, and given a collating sequence for the ideograms, this person may order the lines according to the text column or the number column - also data processing actions. All these actions follow exact, structural - and thus syntactical - rules.

Now suppose the person understands Chinese. In this case, she will recognize which cities are being described, and whether it is cold, mild or hot in each one. The table has significance to that person; she attributes semantics to the table contents. I say that the person has incorporated the data, the table representation, as information. So the same piece of data may be just data for one person, but may represent information for another.
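The contrast can be made concrete with a short sketch (a hypothetical illustration of mine; the cities and temperatures are invented). The program below orders the lines of such a table by the number column, producing a correct result whether or not anyone - person or machine - understands the city names:

    # City names as opaque symbols, temperatures as numbers.
    table = [("北京", -2), ("上海", 8), ("广州", 21)]

    # Ordering by the number column is a purely structural
    # operation; the program "understands" nothing.
    for city, temp in sorted(table, key=lambda row: row[1]):
        print(city, temp)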

Given this definition of data and characterization of information, we may say that computers are data processing machines. They are not information processing machines, because they have absolutely no understanding of what they process. John Searle has developed an interesting "thought experiment" to illustrate this point.

2.2 Searle's Chinese Room

Searle [1991, p. 32] describes a room with a person in it, the operator. Many baskets with Chinese ideograms are in the room, as well as a book of rules, written in English, on how to combine Chinese ideograms. The person receives a sequence of ideograms and, using the book of rules, combines the incoming ideograms with some of those from the baskets, composing a new sequence, which is then passed out of the room. The operator does not know what he is doing, but in fact he is answering questions in Chinese. Searle argues that there is a sharp distinction between such an operator and another person who reads and understands Chinese, and answers the questions without using a book of rules. The former is just following syntax rules, but the latter is associating semantics with what he is doing. Searle states that the second person is doing more than the first one, because he understands what each question and its answer mean. He correctly says that computers are purely syntax machines, combining symbols according to predetermined rules; thus a computer could replace the room's operator. But humans do more: they may associate significance, semantics, with what they observe and think. As he says, "There is more to having a mind than having formal or syntactical processes." [p. 31.] Thus, computers will never be able to think, because thinking involves semantics. Programs are not sufficient to ascribe minds to computers. Unfortunately, he takes significance and semantics in a naïve way, and does not elaborate on them. I don't agree with one of his premises: he says that "brains cause minds" [p. 39], that is, that minds are pure outcomes, consequences of our physical brains. We will see that, once we depart from this point of view, it is possible to further elaborate on what understanding, significance and semantics may be. The important point now is that this premise does not invalidate his Chinese Room argument. According to this argument, computers will never be able to think. Thinking is, thus, a central activity for determining whether machines will be able to do whatever a human is capable of doing, including having intelligence.

3. Thinking

The current scientific view of the world states that there are only physical and chemical processes in the universe, and that they happen according to purely physical laws. Let us call this view materialism. Searle represents this view very well. He says:

"Suppose we ask the question that I mentioned at the beginning: 'Could a machine think?' Well, in one sense, of course, we are all machines. We can construe the stuff inside our heads as a meat machine. And of course, we can all think. So, in one sense of 'machine', namely that sense in which a machine is just a physical system which is capable of performing certain kinds of operations, in that sense, we are all machines, and we can think. So, trivially, there are machines that can think." [p. 35, my emphasis]

There is a big linguistic problem here. When we say "machine", we mean a physical/chemical device that has been designed and constructed by humans (possibly using other machines or their products). But humans - and plants and animals, for that matter - have not been designed and constructed by humans! (DNA manipulation is neither designing nor constructing a whole living being.) So I consider the phrasing "humans are machines" absolutely inaccurate; a more proper phrasing along the desired line would be that humans are purely physical beings ("systems", for Searle). In chapter 8 I will give explicit reasons for not considering humans to be machines or purely physical beings. Anyhow, what matters here is that it is clear, from his words and from his premise that brains cause minds, that Searle is essentially a materialist.

Another materialist is Antonio Damasio. His main argument is that brain plus body, on the one hand, and mind, on the other, are just the same, that is, human beings have only brains and bodies and no separate minds:

"What I am suggesting is that the mind arises from activity in neural circuits..." [1994, p. 226]. "The truly embodied mind I envision, however, does not relinquish its most refined levels of operation, those constituting its soul and spirit. From my perspective, it is just that soul and spirit, with all their dignity and human scale, are now complex and unique states of an organism." [p. 252].

John L. Pollock is absolutely clear about this position:

"My general purpose in this book is to defend the conception of man as an intelligent machine. Specifically, I will argue that mental states are physical states and persons are physical objects." [1989, p. 1].

John Haugeland writes:

"The fundamental goal of this research [Artificial Intelligence] is not merely to mimic intelligence or produce some clever fake. Not at all. 'AI' wants only the genuine article: machines with minds, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, computers ourselves. That idea - the idea that thinking and computing are radically the same - is the topic of this book." [1987, p. 2, his emphases].

Here is what evolutionist Richard Dawkins says:

"We are surviving machines - robots which are blindly programmed to preserve the egotistical molecules known as genes. This is a truth the makes me full of admiration." [1989, p. 23, my translation; see also pp. 55, 75, 105]. "The argument of this book is that we, and all other animals, are machines created by our genes." [p. 29]

These are just five examples; I conjecture that most current scientists, in all fields, follow the materialist view of the world. There are honorable exceptions, though. For instance, no less than Roger Penrose admits the existence of a non-physical, Platonic world of mathematical ideas [1991, pp. 97, 428]. One may hear many scientists saying, "I believe in God". I don't consider such a statement as characterizing the person as a non-materialist. What matters is the way a person thinks. If her whole reasoning is based upon physical/chemical concepts, then I classify her as a materialist. Along this line, I also consider many, probably most, religious people to be materialists. Just look at how many of them attach great importance to their special garments or haircuts, or to a physical place of worship. Any religious fundamentalism, with its typical lack of physical respect for people outside the faith, the extreme case being the killing of heretics, can only be based upon the crassest materialism.

One of the consequences of the materialist view of the world is that thinking is considered to be a consequence of, and determined by, neural processes inside the brain. It is important to recognize that the only knowledge we presently have of thinking processes is that certain areas of the brain are more active than others when certain kinds of thoughts, feelings, perceptions or memories are exercised. There is no idea of how memories are stored, or of how we perform (or remember the result of) 2+3. So there is no sound scientific basis for considering thinking a process generated by the brain.

3.1 Points of view on the mind-machine problem

As we have already seen, Searle is against the idea that computers will ever have the full capabilities of mind. His main argument is that understanding is an essential feature of minds, and his Chinese Room allegory has shown that complex processing is done by computers in a purely syntactical way. Being syntactical machines, computers will never have a mind.

There are other points of view against the mind being a machine. One of them is represented by James Fetzer, who bases his considerations on Peirce's semiotics, the theory of signs [2001, pp. 117, 137, 156]. He does not agree with the conception that the mind is a computational device [pp. 125, 176].

Another important author who considers that minds are not machines is Roger Penrose. His main argument is that "there seems to be something non-algorithmic about our conscious thinking" [1991, p. 412; see also p. 439].


3.2 My point of view regarding thinking

My point of view is that thinking is a non-physical process, that is, it cannot be reduced to purely physical-chemical processes. There are special features of the thinking process that lead me to this conclusion. Many of them were pointed out by the thinker Rudolf Steiner, in his masterpiece The Philosophy of Spiritual Activity [1963]. This title in English was given by himself, the literal translation of the original in German being "The Philosophy of Freedom", initially published at the end of the 19th century, as an extension of his doctoral dissertation [1962]. Steiner was the first to assign to the activity of thinking the most fundamental importance in modern human life. In that book, he makes a deep analysis of this process. For instance, he calls attention to the fact that thinking is self-reflective: it is possible to think about thinking [1963, p. 66]. All our other actions involve some other object, as for instance digestion: we digest food, and not digestion itself. We see some object, but not the process of vision itself. In general, we do not think about our thinking process, that is, we are not aware of our thinking. This comes from the fact that our thoughts are generally directed to the objects we perceive with our senses, or we are associating concepts [p. 61].

Thinking is self-sustained: it is not necessary to use any other inner activity when thinking about thinking:

"All other things, all other events are present independent of me. Whether they are there as truth or illusion or dream I know not. Only one thing do I know with absolute certainty, for I myself bring it to its sure existence: my thinking." [p. 65]

According to Steiner, it was this feature that led Descartes to formulate his famous "cogito, ergo sum":

"Thinking is the inner activity which is absolutely independent of any other, and is a firm point ... from which ... one can seek for the explanation of the rest of the world's phenomena" [p. 64]. "The simplest assertion I can make about something is that it is, that it exists." [p. 65]

Based upon the self-reflectivity and self-sustainability of thinking, I have formulated another one of its unique features, self-determination: the fact that we may decide what our next thought is going to be, and in fact exercise it. Everyone must come to this conclusion through her own experience: it is not possible to prove it, because nobody can know what another person is thinking. For this, I propose the following thinking experiment. Sit in a quiet place and close your eyes. Try to produce an inner calm, that is, without being attracted by external sensory perceptions, ignoring urgent problems and immersing inwardly in yourself. There is a special kind of feeling related to this inner state of calm. Once you feel this inner calm, imagine a display in which red numbers can be "seen". Now imagine the number 100 displayed there, then inwardly say "hundred"; then 99, saying "ninety-nine"; then 98, and so on down to 0. The exercise consists of concentrating on imagining and "saying" these numbers, and avoiding any other inner image or sound. A counter-example could be to imagine the numbers down to 97, and suddenly remember that in 1997 something very important happened in your life: you changed your job. Then you could think how awful you felt in your previous job, and how satisfying your present job is: your salary has increased, so you decided to buy your present house, where you had plenty of space to install your home study with a couple of nice paintings, and so on. One sees that the initial objective, concentrating only on those numbers, has disappeared. The nice feature of this exercise is that one may repeat it with some frequency and notice how one's mental concentration improves, and with it the ability to control one's thoughts.

At this point, the main reason for trying this exercise is to recognize that it is possible to control, at least for some seconds, one's own thoughts, that is, to determine one's next thought. Then we become aware of the fact that in thinking we possess self-determination. There is no machine, not even an abstract one, that has this self-determining feature. Machines inexorably follow their programs or mechanisms - otherwise they would not do what we expect from them.

Abstract machines are formal, mathematical, digital machines. All of them can be reduced to (or simulated by) a Turing machine (TM), a machine that can be in one of a finite number of different states. It uses an infinite tape divided into cells. Each cell may be blank or contain a symbol from a finite alphabet. The machine has a read/write head which scans the tape. The head reads the symbol at the tape cell under it; based upon this symbol and the machine's present state, the machine writes with the head another (or the same) symbol on the cell under it. Then the machine moves the head (or the tape) to the right or to the left, and changes to another state. According to the so-called Church-Turing Thesis, any effective procedure - that is, the calculation of any effectively computable function - may be executed by a Turing machine. Any real computer may be simulated by a TM, that is, a TM may be programmed to accept a program in the computer's machine language and some input data, and produce exactly the same output that the computer would generate for that data. Furthermore, there are TMs that can simulate any other TM; such a TM is called a "universal machine".
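For concreteness, here is a minimal TM simulator in Python (a sketch of mine; textbook formulations differ in details). The transition table maps a (state, symbol) pair to a new symbol, a head movement and a next state; the example machine inverts a string of binary digits:

    def run_tm(transitions, tape, state, halt_state, max_steps=10000):
        # The tape is modeled as a dict from cell index to symbol;
        # absent cells are blank ("_"), imitating the infinite tape.
        cells = dict(enumerate(tape))
        head = 0
        for _ in range(max_steps):
            if state == halt_state:
                break
            symbol = cells.get(head, "_")
            new_symbol, move, state = transitions[(state, symbol)]
            cells[head] = new_symbol   # write under the head
            head += move               # move one cell left or right
        return "".join(cells[i] for i in sorted(cells))

    # A machine that inverts the bits of its input and halts on blank.
    flip = {
        ("scan", "0"): ("1", +1, "scan"),
        ("scan", "1"): ("0", +1, "scan"),
        ("scan", "_"): ("_", +1, "halt"),
    }
    print(run_tm(flip, "0110", "scan", "halt"))   # prints 1001_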

Obviously a TM, or any other digital machine, abstract or concrete, follows its program, that is, it is not self-determined. One could object that, when we mentally perform the number display exercise, we are also following a program. The answer to this objection is that it is not possible to say that there is a program stored in the brain - we do not even know how we "store" a number such as 2 - and, most important of all, we do not feel as if we were following a pre-determined program. Our sensation is that we have decided to perform the mental concentration exercise, and that at each moment we decide to stick to it, determining our next thought. There is nothing that forces us to follow the pattern of the exercise, because there is absolutely no need to perform it.

So, recognizing that we are able to determine our next thought, we have to conclude that our thinking is not performed by some purely physical system, because such a system would have to follow its physical principles, leading to determinism or, at most, to some random effects. But, as we have seen, we may recognize that we are not always determined, but may self-determine some of our inner activities; moreover, we do not have the sensation that our thinking is always random - in that case we would not be able to choose our next thought. There are then three possibilities: either the brain produces our thoughts but is not a machine, or the brain is a machine but does not produce our thoughts, or the brain is not a machine and does not produce our thoughts. In my opinion, the last hypothesis is the most probable one: considering that our thinking is not physical and is not generated by the brain, but obviously influences its activity, the brain has to be influenced by some non-physical process, and thus cannot be fully subjected to physical laws. In section 3.5 I will expound why the brain is necessary at all.

The three possibilities contradict Searle's statements above on the brain being a purely physical system. But there is a further argument that I may use to come to the same conclusion.

I hope the reader has performed the mental exercise, and has convinced herself that in thinking we can be free, at least for a couple of seconds. This sensation of freedom comes precisely from the fact that we recognize our ability to determine our next thought. If our brain is a machine, it should be subject to physical laws. But physical laws are inexorable: they cannot lead to freedom. So we conclude again that our thoughts are not produced by the brain, or the brain is not a machine, or both.

Freedom requires self-awareness. A drunkard is not free, because he is not fully conscious of what he is thinking and doing [Steiner 1963, p. 39]. Careful observation may show that animals have consciousness, but do not have self-consciousness. Only humans can be self-aware, due to their capacity for thinking. Humans are constantly introducing novelties into the world; animals and plants just follow their inner "programs" and the conditioning of their environment. No bee has ever stopped to think that it could try a different form for the beehive cells, other than the hexagonal one.

The mental concentration exercise shows us another important point: the freedom we may experience in our thinking does not come from thinking itself, but from the decision to think about something. Decisions come from our will, so what we may have is in fact free will. It may be clearly experienced using our thinking, that is, thinking can be used as an instrument for experiencing free will. We are not free to jump 4 m above the ground, but we are free to think what we decide to think.

The essence of our present question is what thinking is. Let us see how it manifests itself.

3.3 Electronic chess and intuition

In 1997, the IBM Deep Blue machine (DB) won a chess match against world champion Garry Kasparov (K). DB won two games, K won one game, and there were three draws (a total of 3.5 vs. 2.5 points). Many people celebrated, saying that machines had supplanted humans. But if one examines the conditions of the match, a quite different conclusion must be drawn. For further details, see my essay on computer chess [Setzer 1999].

The first consideration is that chess is a mathematical game. The board, the pieces' positions and the rules of the game can all be mathematically described. In fact, everyone is familiar with the notation for the pieces' positions, which uses a two-dimensional mathematical coordinate system.

DB was a highly parallel machine, specialized in playing chess, which could test 36 billion moves in the three minutes generally allotted for each move. Furthermore, it had Kasparov's previous games recorded in its storage, and tried to match each board position with those games, then making moves which had been planned ahead. Usually, computers play games like chess by calculating all possible moves the machine can make, then testing all possible answers the opponent may make to each of those moves, then its possible moves for each of these answers, and so on. Such games may be described by a tree; from the root (the present position) the possible machine moves come out, represented as branches. The end of each such branch represents the game situation after the single move described by the branch. From this end, further branches are drawn, representing the opponent's possible moves after that machine move, etc. This tree grows exponentially, so the problem is to use some strategy to "prune" it, eliminating branches that lead to clearly losing moves, so that fewer combinations have to be tested. DB was so powerful that its designers decided not to use pruning strategies and let the machine test all possible moves. The reasoning behind this decision was that the machine was so fast that it would be better to let it test every combination than to allow it to avoid (prune) some moves which could turn out to be good ones later on.
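The standard pruning technique is alpha-beta search, which according to this account DB's designers chose to forgo. The skeleton below (my own simplified sketch; a real chess program is vastly more elaborate, and the four function parameters are hypothetical stand-ins for the rules and evaluation of a particular game) shows how branches that cannot influence the final choice are cut off without being explored:

    import math

    def alphabeta(position, depth, alpha, beta, maximizing,
                  moves_of, apply_move, evaluate):
        # Generic game-tree search with alpha-beta pruning.
        moves = moves_of(position)
        if depth == 0 or not moves:
            return evaluate(position)
        if maximizing:
            value = -math.inf
            for m in moves:
                value = max(value, alphabeta(apply_move(position, m),
                            depth - 1, alpha, beta, False,
                            moves_of, apply_move, evaluate))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break   # prune: the opponent will avoid this branch
            return value
        value = math.inf
        for m in moves:
            value = min(value, alphabeta(apply_move(position, m),
                        depth - 1, alpha, beta, True,
                        moves_of, apply_move, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break       # prune: we would avoid this branch
        return value

Even with pruning, the tree still grows exponentially with the search depth; pruning only reduces the base of the exponent.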

How many moves could K mentally test before making a move? Perhaps twenty, maybe fifty. How was it possible that a mathematical machine, playing a mathematical game, able to look ahead zillions of moves more than a human, could lose a game and obtain just a draw in three others? The answer is clear: K, as a grand master, was not testing each possible move. He was not calculating. He was using his intuition to rapidly figure out a couple of good moves, and then tested them mentally.

Intuition is an insight - an inner view of a situation that is not being seen in the physical world. One of the interesting aspects of intuition is that it cannot be described. It corresponds to having some thoughts coming from nowhere. In this sense, intuition is absolutely anti-scientific - in the common understanding of this word.

3.4 Sensory perception and thinking

Everybody may notice that sensory perception is always accompanied by thinking. But the connection between the two is deeper than appearances may show. I would like to suggest that the reader perform an experiment. Please look at the entrance of your room. What do you perceive with your eyes? Please try to answer this question before proceeding to the next paragraph (I am leaving some blank lines before it). It could be interesting to write down your answer before reading the sequel.

 

 

 

 

When I give lectures on the subject of this paper, I point to the entrance of the room and ask the audience to answer that question. Invariably, everyone answers that s/he is perceiving a door. I repeat the question, and ask if anybody is not perceiving a door. Nobody raises her/his hand. Then I tell the audience that unfortunately everybody was wrong. Nobody is perceiving a door; what everyone is perceiving are light impulses, different colors, maybe some differences in depth due to stereoscopic vision. "Door" is a concept. And concepts cannot be perceived with sensory organs, because they do not exist in the physical world. They are perceived with thinking. It was Rudolf Steiner who called attention to the fact that thinking may be regarded as a bridge between our inner perception (representation) of external objects in the physical world and concepts in the world of ideas [1963, p. 112]. But the latter is not a physical world. This is absolutely clear for mathematical concepts, as recognized by Roger Penrose [1989, p. 428]. For instance, nobody has ever seen a perfect circle (i.e. the locus of all points equidistant from a given point, the center). But this concept is the same for all people; in fact it does not depend on anybody, and is eternal. It objectively resides in the Platonic world of ideas, and is perceived by different people with the same objectivity that they perceive some object in the physical world. In fact, it seems that the objectivity in the perception of ideas may be even greater than in the perception of objects in the physical world because, for instance, sensory organs differ from one person to another (e.g. my two eyes perceive slightly different colors). But the concept of a perfect circle is exactly the same for everybody that has acquired the ability of perceiving mathematical concepts.

Many people, certainly all materialists, would say that concepts are not realities in the Platonic world of ideas, but are "stored" or "generated" somewhere in the brain. For instance, Ray Jackendoff puts it this way:

"... I think of word meanings as instantiated in large part in a particular subsystem of the brain's combinatorial organization that I call conceptual structure." [1993, p.54, his emphasis]

Unfortunately for materialists, this is not a scientific fact; it is speculation, because they cannot show where and how a simple concept such as "two" is stored in the brain. There is much evidence for the existence of that Platonic world. For instance, how could Darwin and Alfred Russel Wallace, who were almost at antipodes (the latter was living in the Malay Archipelago), have developed the idea of natural selection in the same time span? Such "coincidences" may be explained by the fact that both were perceiving the same idea in the world of concepts. Rupert Sheldrake, acknowledging such phenomena, uses the notion of a "morphogenetic field", which pervades everything, from atoms to organs in living beings [1987, pp. 12, 76]. This field would be, for instance, responsible for organic forms, in what he called "formative causation" [pp. 13, 88, 116]. I appreciate his effort in departing from the traditional paradigms of current science, but I don't agree with his basic principle that his morphogenetic field is physical [p. 115], a result of the materialistic view of the world expressed in the cited book.

I am aware that materialists will argue that my supposition of the existence of a "real" Platonic world of ideas is also speculation. Fortunately, neither side can prove that the other is incorrect; otherwise people with a spiritual view of the world could prove the existence of the non-physical world, or materialists could prove that the non-physical world does not exist. I think this involves a mystery, connected to what I call the fundamental existential hypothesis: assuming either the hypothesis that there exists a "real" non-physical world, or the hypothesis that this world does not exist. I used the word "fortunately" above because we are free to choose either one; this choice seems to me to be the most fundamental existential hypothesis to be made by everyone. To me, the evidence for the first hypothesis - much of which is confirmed by my own inner experiences, and everybody has to make her own - is overwhelming. Please recall what I wrote in the introduction about how one should face new ideas.

According to Steiner, thinking can be regarded as an organ for perceiving the non-physical, Platonic world of concepts or ideas [1963, p. 148]. Thinking completes the instantaneous and partial perception we have of external objects [p. 109], connecting us to Kant's "das Ding an sich", the thing in itself, his "noumenon", the essence of the perceived object as opposed to the observed phenomenon. Kant wrote that we can never attain the "Ding an sich", because our thinking is a mechanical and therefore limited process:

"The critique of the pure understanding ... does not permit us to create for ourselves a new field of objects beyond those which are presented to us as phenomena, and to stray into intelligible worlds; nay, it does not even allow us to endeavor to form so much as a conception of them. ... There remains a mode of determining the object by mere thought, which is really but a logical form without content, which, however, seems to us to be a mode of the existence of the object in itself (noumenon), without regard to intuition which is limited by our senses." [Kant 1952, p. 107] "... speculative reason can never ... pass the bounds of possible experience ... it ought not to attempt to soar above the sphere of experience, beyond which there lies nought for us but the void inane." [p. 209]

Steiner undid the limitations imposed on thinking, calling attention to the fact that it can perceive the essence, the concept of every object, and that this essence is not in the physical world. But, for this, it is necessary to admit the hypothesis that there is a non-physical world. Unfortunately, materialism has prevented scientists and academics from formulating this simple hypothesis, which would enormously enlarge scientific research, the interpretation of history, theoretical and applied sciences, and so on. Many scientists have preconceptions - which is contrary to the true scientific spirit. Others fear that by leaving the trough of materialism they could lose their objectivity and become subject to faith and dogmas. Steiner has shown that this is not the case. His monumental conceptual work has resulted in practical applications that have been exercised for more than 100 years in many aspects of everyday life (education, curative education, medicine, therapy, agriculture, art, architecture, social organization, etc.). It is worth leaving aside materialist prejudice and trying to immerse oneself in his vast work and its applications, in order to better judge the validity of his ideas, mainly in terms of what I wrote in the introduction.

In the non-physical world of concepts, some concepts are linked to others; our thinking is also able to perceive this connection, through association.

Back to sensory perception: it is important to note that even if our senses are transmitting something to us, we do not perceive anything if we are not able to associate the perception with some concept. The fascinating book on light by Arthur Zajonc describes the fact that a person who has never seen does not, after an operation restoring sight, see any object [1995, pp. 3, 183]: this person has to learn to make the association between the inner perception and the corresponding concept. He tells of a published case of S.B., a fifty-year-old male, blind since he was ten months old, who worked repairing boots and was an independent and intelligent man. On Dec. 9, 1958 and Jan. 1, 1959 he received cornea transplants.

"Examining him about a month after his operations, [the authors, psychologists] Gregory and Wallace asked him about his first visual experience following the operation. S.B. responded by saying that he had heard a voice, the voice of his surgeon, coming from in front of him and to one side. Turning toward the sound, he saw a "blur". ... Faces, even long after the operation, were "never easy", S.B. reported. Gregory and Wallace's research with S.B. (and similar research before and since) has made it clear that learning to see as an adult is not easy at all. After his release from the hospital, Gregory and Wallace took S.B. to a museum of technology and science. S.B. had a long-standing interest in tools and was clearly excited at the prospect of seeing what he had until now only handled or heard described. They took him to a fine screw-cutting lathe and asked him to tell them what stood before him. Obviously upset, S.B. could say nothing. He complained that he could not see the metal being worked. Then he was brought closer and allowed to touch the lathe. He ran his hands eagerly over the lathe with his eyes shut tight. Then he stood back a little and, opening his eyes, declared, 'Now that I've felt I can see it.' In the case of S.B. the slow process of learning to see continued for the next two years, until his death." [3]

In my lectures on this subject, I use the well-known example of the figure of a perfect hexagon divided by its three diagonals into six equilateral triangles. I show it to the audience and ask what everybody is seeing. Usually, people answer "a hexagon" or "six triangles"; some recognize a pyramid seen from above. I call attention to the fact that these are all concepts, and that once a concept is acquired, one can easily "see" the corresponding figure. In general, only those who know the "trick" say that they see a cube. Then, changing slides, I show them one of the two possible cubes, and then the other. I ask them to do an exercise of looking at the original hexagon figure and shifting from one cube to the other. Performing this exercise, the reader will also notice a very peculiar phenomenon. It is necessary to imagine, through thinking, one of the cubes in order to "see" it; then, one has to make an inner effort to imagine the other, and only then can one "see" it. It becomes clear that there is a thinking process of "switching" from one cube to the other, that is, between the concepts of a cube seen in two different positions. A student of mine was so enthusiastic about it that he drew a large hexagon and glued it to the ceiling above his bed, so he could repeat the exercise many times, whenever he went to bed.

Zajonc also mentions the fact that the perception of linear perspective in pictures gives an impression of reality due to a purely cultural effect (linear perspective began to be studied and widely used at the beginning of the fifteenth century) [p. 64]. I use the example of railroad tracks. If an adult draws a pair of train rails, she draws two lines that are not parallel, but converge closer and closer; horizontal line segments, decreasing in length, unite the rails to give the impression of cross ties. The widest distance between the rails in the drawing is in general at its bottom.

Why do most adults draw rails in perspective? Because this is the way they see them. It is an optical illusion. The reality is that the tracks never meet; if one walks between them, one sees that they keep their constant distance forever. In order to draw the tracks as converging lines, it is necessary to abstract oneself from this experienced reality, or from its mental image, and stick to the optical illusion. But this requires a certain capacity for abstraction. A small child who is not precociously intellectualized, or a person from a primitive tribe, will not draw converging tracks, but parallel ones. Conversely, such people are often not able to recognize from a drawing in perspective what it represents. Zajonc describes an interesting experiment of showing to primitive people in Africa a drawing of an elephant seen from above [p. 63]. The animal was only recognized if the drawing showed the elephant's legs extended to its sides (a "split" elephant). Before the 15th century one rarely sees linear perspective in paintings - see for instance the eastern icons, or medieval paintings with roads, rooms, houses, tree alleys, etc. Linear perspective requires the ability to associate the optical illusion of the drawing with the true concept of the reality being represented. This ability is not innate; it is acquired through observation of pictures and drawings using perspective, and also by drawing in perspective under the guidance of somebody who knows the technique. In other words, as we have seen with the cube, it requires developing the ability to associate the perception with the true concept.

So we see that sensory perception has to be accompanied by a mental perception of the true concepts that are the essence of what is being perceived, and this is done by our thinking.

Our most important hypothesis is that thinking is not a physical process - if it were, it would be impossible to reach with it the non-physical world of ideas. But what is, then, the role of the physical brain?

3.5 The role of the brain

The fact that people who have some damage to their brain lose certain mental abilities has led to the conclusion that the brain produces these abilities. The well-known story of a young nineteenth-century railroad worker who was hit in the head by an iron rod and lost part of his brain is perhaps the most famous and most studied case, described at length by A. Damasio [1994, p. 3]. The young man's social behavior changed completely. Clearly, there is some connection between mental functions and the brain.

Steiner gives an interesting analogy to explain that connection [1968, p. 162]. Suppose a lady is looking at her face in a mirror. Through the mirror, she becomes aware of her form. Without some mirroring image, nobody can be aware of what her own face looks like. Now break the mirror. Her face is still there, but she is no longer aware of it. Similarly, the brain works as a mirror. Due to it, we become aware of our own thoughts, and thus we can control and direct them. Without the brain, we would still think, but we would not be conscious of what we were thinking, and could not choose what to think. The wisdom of our natural language reveals an ancient, intuitive knowledge of this fact: "reflecting" is a synonym for "thinking".

According to this point of view, brain activity is a consequence, and not a cause. If scientists adopted this hypothesis, research on brain activity would be enormously expanded. According to my hypothesis, looking for the origin of thought processes in the brain is like the drunkard looking for his keys in the illuminated area under a lamppost, and not where he really lost them. Scientists are using the limited lampposts of materialism, and refuse to look at other areas, building new lampposts and other means of investigation.

Now the question is: how can a non-physical activity like thinking interact with the physical world? This is the old mind-matter question.

I have a couple of possible explanations for this effect. One of them is that in the brain there may be lots of elements, maybe inside the neurons, that are in an unstable equilibrium. In such a state, an infinitesimal amount of energy would be sufficient to produce a change of state. Penrose has made the hypothesis that quantum effects at the atomic level, in microtubules, may be responsible for that effect [Kurzweil 1999, p. 117].

Physicist Amit Goswami tries to explain the action of the non-physical upon the physical using quantum non-locality [see, for instance, Goswami 1995]. Unfortunately, he seems to mix the physical with the non-physical, using for the latter the same reasoning he uses for the former. But maybe his explanation could be another possibility for our problem.

There is another field where the action of the non-physical upon the physical may perhaps be better understood: the growth and regeneration of living tissues. Given a living cell, there are three possibilities for it: it remains as it is, serving the purpose of tissue differentiation; it divides (mitosis or meiosis); or it dies (apoptosis). I conjecture that the decision as to which of these three paths to take does not consume energy. The non-physical model followed by the tissue regulates this process. That there is a model behind every living being seems clear; just observe, for example, how a pine tree regulates the growth of its branches to preserve its conical form. Another example is the symmetry of our hands, ears, etc., permanently preserved during growth and regeneration. An extreme case is the reasonably permanent form of the fingerprint, even when the skin is damaged ("Injuries such as superficial burns, abrasions, cuts and blisters, which affect only the epidermis, do not alter the ridge structure, and the original pattern is duplicated in the new skin." [E. Britannica 1966, Vol. 9, p. 277.]) Living cells are very imprecise elements, so if the process were controlled from within, e.g. by their DNA, the result would not show a fairly precise symmetry. To preserve the symmetry, it would be necessary for one ear to send a message to the other telling how much it has grown in some direction, and wait for the other ear to catch up - which does not make sense. A much more reasonable explanation is that the growing process of all parts of living beings whose form we may recognize, as well as that of symmetric elements in the body, is regulated from without, by their model, that is, by a non-physical concept. Obviously, this non-physical model interacts with the physical constituents, which are also subject to environmental differences. Richard Lewontin called this third factor "developmental noise" [2000, p. 36]; to me the process is not random, but controlled by the active model, which is in the Platonic world of ideas.

3.6 Vision

The process of vision gives further indications that there are non-physical processes going on during perception and cognition.

The eye divides the field of vision into four parts at the retina's fovea, corresponding to the four quadrants; let us call them a, b (right side of the image), c and d (left side). The parts a and b formed by each eye are combined in the optical nerve (which, as far as I remember, transmits complex electrical signals that do not correspond to the images in electrical form), as are c and d, each pair going to one brain hemisphere, where a and b (respectively c and d) are separated into regions divided by the sulcus calcarinus. Furthermore, in the visual cortex there is a separation of functions: space perception, movements in the visual space and optical remembering activate different areas in the right hemisphere, while form perception and color perception activate areas in the left hemisphere [Rohen 2000, pp. 17, 19]. How, then, does a person see just one field of vision, as a complete image? My explanation is that perception itself is not a purely physical process. Somewhere along the optical nerve and in the brain, non-physical processes take place, and the complete sensation one has of a perceived object is produced by some of our non-physical constituents.

3.7 Understanding and learning

Supposing that thinking is a non-physical organ for observing the world of ideas, it is possible to grasp what understanding means. Let us take the case of the perception of a certain object through our senses. We understand this object if we are able to use our thinking to make the bridge between the perception of the object and its essence, its "Ding an sich". This is the case when, for instance, we look at some object at a distance and cannot distinguish what it is. We are conscious that our perception is not good enough for a complete understanding. We get closer and recognize what the object is; that is, now our perception is clear enough to permit our making, through thinking, the association between the perception of the object and its essence.

This applies also to perceived phenomena. For instance, we are looking at a tree, and suddenly we notice that one of its branches is moving, while all the other branches are still. We become uneasy, because we recognize that there is no wind. Suddenly, we see a bird flying from the tree; now we realize that the bird must have been resting on the branch, and that when it flew away it made the branch move. This example also illustrates the fact that understanding may involve the association of various perceptions and thoughts.

John Searle was not able to characterize what understanding means (cf. 2.2). I conjecture that, from a materialistic point of view, it will never be possible to make this characterization.

It is possible to make, with thinking, incorrect associations between perceptions and concepts - associations which do not correspond to the essence of objects - or to make incorrect mental associations. This may come from imperfect sense perceptions or from incorrect thinking. The former case is in general clear: for instance, if we see a person at a distance and cannot recognize whether it is a man or a woman, we become aware that our eyes are not sharp enough to see clearly at that distance. In a person with healthy sensory organs, the senses are extremely faithful, and she recognizes when her perception is not clear. Otherwise, we would not be able to trust our senses, and we would all be schizophrenic or mad.

Incorrect thinking often comes from not examining all possible associations and not choosing the one with the most reasonable evidence. This was the case with the heliocentric planetary model. Everybody sees the sun and stars moving across the sky. Considering the earth as remaining still and the sun and stars rotating around it, or the sun and stars as being still and the earth rotating on its axis, is a question of judgement, of correct thinking, and not of perception. This example illustrates a very important point: correct thinking depends on its development. For millennia humans simply did not have enough capacity for abstraction to get rid of the strong sensory impression of the sun and stars moving around the sky. It is important to observe that the heliocentric model became generally accepted when Newton established a sound theory, gravitation, in 1687 to explain the movement of the planets and their elliptical orbits around the sun. But this happened much earlier than the first experimental demonstration of the earth's rotation, with Foucault's pendulum in 1851. Until then, the acceptance of the heliocentric model was a matter of accepting the most convincing abstract theory. A fascinating account of this development of the capacity of human thinking was given by Arthur Koestler [1964]. It shows how humanity acquired the ability to reach correct concepts of the planetary system.

Learning may also be understood with this model. It just means developing the ability to perceive certain concepts through thinking, making the correct associations between various thoughts that are inherently connected in the world of ideas, or making the correct bridge between the perception of an object or phenomenon and its concept. Memorizing perceptions and concepts is obviously an essential part of the learning process.

As a matter of fact, one should never say that a computer program "learns". What it does is calculate parameters (as in the case of the wrongly named "neural nets"; the only similarity between them and our neurons is that the latter also form a net) or store some data. There is no scientific knowledge of how our brain learns something, so I conjecture that learning is also a non-physical process. This does not contradict known scientific facts (though it certainly contradicts the judgement of most scientists working on cognition).
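What such a program actually does can be shown in a few lines (a minimal sketch of mine, with invented data). It fits the single parameter w of the model y = w·x to observed pairs by gradient descent - the essence of what "training" a neural net amounts to: iterated arithmetic on parameters, with no understanding involved:

    # Fit y = w * x to observed pairs by gradient descent:
    # pure parameter calculation, not "learning" in any human sense.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # roughly y = 2x

    w = 0.0
    learning_rate = 0.02
    for _ in range(1000):
        # Gradient of the squared error with respect to w.
        gradient = sum(2 * (w * x - y) * x for x, y in data)
        w -= learning_rate * gradient

    print(round(w, 2))   # approximately 2.04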

3.8 Is the brain a computer?

As I have already said, there is no scientific basis for stating that the brain works as a digital computer. On the contrary, there are many indications that this is not the case.

Digital devices only work if the signals that traverse the various circuits are synchronized. A logical gate simply does not work if there is no synchronization of the incoming signals. This synchronization is obtained by a central pulse generator, the so-called "clock" (when it is said that a computer has a 2 GHz clock, it means that its central pulse generator produces two billion pulses per second). Clearly, there is no central pulse generator and signal synchronization in the brain. Recently, novel designs of asynchronous logical circuits, that is, circuits without a central clock, have been developed (up to now, only small parts of the digital apparatuses on the market use this technique):

"Without a clock to govern its actions, an asynchronous system must rely on local coordination circuits instead. These circuits exchange completion signals to ensure that the actions at each stage begin only when the circuits have the data they need." [Sutherland 2002, p. 50]

Therefore, even without a synchronizing pulse generator, it is necessary to have local synchronization and exchange of synchronizing signals, but neurons do not seem to follow this pattern. Apparently, neurons may fire (that is, produce an electric output signal at their axons) or not fire under the same circumstances ("The same stimulus does not always produce the same result." [Penrose 1991, p. 396]). So, as with most, if not all, processes of living beings, there is in the brain no strict determinism such as exists in digital machines. Determinism is one of the essential features that give computers their power and make them useful. It would be a disaster if, for a certain program and a certain set of input data, the resulting data differed from one run to the next. Where non-deterministic processes do occur, such as in computer networks (e.g. the path a message is going to traverse in a large network is unpredictable), for each individual machine everything still happens as if there were strict determinism.
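
To make this contrast concrete, here is a minimal sketch in Python (my own illustration, not taken from any of the cited works): a logical gate is strictly deterministic, always yielding the same output for the same synchronized inputs, while a toy "neuron" that fires probabilistically, as Penrose's observation suggests real neurons may, is not:

    import random

    def and_gate(a: int, b: int) -> int:
        # A digital gate: strictly deterministic; the same inputs always
        # yield the same output, assuming the clock has synchronized them.
        return a & b

    def noisy_neuron(stimulus: float, threshold: float = 0.5) -> int:
        # Toy "neuron": under the very same stimulus it may fire or not
        # ("The same stimulus does not always produce the same result").
        return 1 if stimulus + random.gauss(0.0, 0.2) > threshold else 0

    print([and_gate(1, 1) for _ in range(5)])     # always [1, 1, 1, 1, 1]
    print([noisy_neuron(0.5) for _ in range(5)])  # varies from run to run

The gate illustrates the determinism computers depend upon; the toy neuron is of course not a model of a real neuron, only a reminder that its behavior is not reproducible in that strict sense.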

Connectionism (see, for instance, [Ramsey 1991]) has introduced the idea that the brain is a highly distributed system, and is based upon the conception that its activity arises from the connections among the neurons. Leaving aside the fact that the so-called neural nets have almost nothing in common with the net formed by neurons (e.g., the latter do not have a fixed topology [Penrose 1991, p. 396], the parameters for combining the inputs of each neural net cell are calculated through very complex algorithms, these inputs have to be synchronized, etc.), there is a strong argument against the distributed model: we do not have the impression that our mental activities are separated into many parts [p. 398]. On the contrary, as we have seen in the case of vision, there seems to be a distribution of functions across the brain, but our perception is of a whole. Unconscious or semi-conscious activities may be done in parallel. Thus, we may do different things with our limbs, for instance using our hands while walking. But conscious thinking is always concentrated on just one thought.
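
To illustrate what "calculating parameters" means in these so-called neural nets, here is a minimal sketch (my own, and of course not a model of real neurons): a single artificial cell whose entire "learning" of the logical AND function is the repeated arithmetic adjustment of two weights and a bias:

    # A one-cell "neural net" learning the AND function.
    # Its entire "learning" is the numerical adjustment of parameters.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias
    lr = 0.1         # learning rate

    for _ in range(20):                    # repeated passes over the data
        for (x1, x2), target in data:
            out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            error = target - out
            w[0] += lr * error * x1        # parameter adjustment, nothing more
            w[1] += lr * error * x2
            b += lr * error

    print(w, b)  # small numbers such as 0.2, 0.1 and -0.2: parameters, not understanding

Nothing here resembles learning in the human sense discussed above; there are only numbers being adjusted.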

Thus, any comparison between the functioning of the brain and a digital machine is absolutely improper. I also consider it dangerous, because it reduces the image the human being makes of herself to that of a machine. The danger comes from the fact that there is no ethics involving our relation to the machines themselves, only to how we use them in relation to other people and the world. There is no sense in having compassion towards a machine, for instance feeling pity when switching it off. Unfortunately, children have been conditioned to have such feelings, as was the case with the awful Tamagotchi.

3.9 Criticizing Kurzweil

Ray Kurzweil is one of the exponents of the idea that humans are machines, and thus machines will be able to do whatever humans do. His best-selling book [1999] is full of prophecies, based upon the following statement:

"The human brain has about 100 billion neurons. With an estimated average of one thousand connections between each neuron and its neighbors, we have about 100 trillion connections, each capable of a simultaneous calculation. That's rather massive parallel processing, and one key to the strength of human thinking. A profound weakness, however, is the excruciatingly slow speed of neural circuitry, only 200 calculations per second." [p. 103]

This statement is absolutely unjustified. He does not say what kind of calculations are done by each neuron connection, and, as we have pointed out before, he cannot even say how data are stored in the brain. Based upon the figures above, he multiplies the 100x10^12 connections existing in the brain by the 200 calculations per second, coming to the conclusion that we are able to perform 20x10^15 "calculations" per second. He does not even consider the possibility that there may be different functions for different connections; for him this capacity to perform calculations is the most important factor. He uses the same type of reasoning to come to the conclusion that our memory has 10^15 bits.

In his classical book, John von Neumann writes: "the standard receptor would seem to accept 14 distinct digital impressions per second". He supposes that there are "10^10 nerve cells", each one of them working as "an (inner or outer) receptor". Then, "assuming further that there is no true forgetting in the nervous system", and a normal lifetime of 60 years, or about 2x10^9 seconds, he comes to the conclusion that our memory capacity is 2.8x10^20 bits [1958, p. 63].
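
Just to make the arithmetic of both estimates explicit - the following lines merely reproduce their multiplications and endorse nothing:

    # Kurzweil's estimate [1999, p. 103]:
    connections = 100e9 * 1000            # 10^11 neurons x 1000 = 10^14
    calcs_per_second = connections * 200  # = 2 x 10^16 "calculations"/s
    print(f"{calcs_per_second:.0e}")      # 2e+16, i.e. 20 x 10^15

    # von Neumann's estimate [1958, p. 63]:
    bits = 14 * 1e10 * 2e9                # impressions/s x receptors x seconds
    print(f"{bits:.1e}")                  # 2.8e+20 bits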

It is astonishing that such brilliant people can do these sorts of calculations without knowing how our memory works, and without taking into account that our nervous system is not a digital machine, etc. However, let us proceed with Kurzweil.

Extrapolating the increasing capacity of new computers, he comes to the conclusion that in 2020 a US$1,000 computer will have the same calculation capacity as the human brain, and in 2023 the same memory capacity. He prophesies that in 2019 there will be "reports of computers passing the Turing Test", and that in 2029 they will in fact have passed it.
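
His extrapolations are of the following kind (a sketch with figures I have assumed for illustration - the 1999 baseline and the twelve-month doubling period are my choices, not his exact data):

    # Doubling-law extrapolation of the kind Kurzweil uses.
    # Assumed: US$1,000 of hardware delivers 1e9 calc/s in 1999 and
    # doubles yearly; the target is his 2e16 calc/s brain estimate.
    capacity, year = 1e9, 1999
    while capacity < 2e16:
        capacity *= 2
        year += 1
    print(year)  # 2024 under these assumed figures

Whether the baseline, the doubling period or the target have any relation to what the brain actually does is, of course, precisely what is in question. To criticize these statements, we now have to turn to intelligence and the Turing Test.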

4. Intelligence

Up to now I have spoken about perception and thinking. In order to cover the old question "can machines be intelligent?" we have to deal with the question of what intelligence is and what it means to consider a machine intelligent.

4.1 What is intelligence?

The answer to this old question depends on what is understood by "intelligence". For instance, if someone thinks that playing a good chess game is a demonstration of intelligence, then we have to ascribe intelligence to computers. But not every intelligent person plays chess, so this characterization is not broad enough.

It is important to realize that it is not possible to formally define what intelligence is. The American Heritage Dictionary, fourth edition, has the following explanations for "intelligence":

a. The capacity to acquire and apply knowledge.
b. The faculty of thought and reason.
c. Superior powers of mind.

One sees that these characterizations are quite vague, the third one being of almost no utility. The same dictionary says that knowledge, among other things, is "specific information about something" (see my paper "Data, information, knowledge and competency", on my web site, for characterizations of these concepts). As we have seen in section 2.1, I make a clear distinction between data and information. In my sense, information requires significance, understanding. Computers do not process information, but just data. In this sense, computers cannot be intelligent. But let's extend this characterization of intelligence, covering its various manifestations.

4.2 Types of intelligence

Howard Gardner pioneered what he called a "pluralistic view of the mind" [1995, p. 13], classifying various types of intelligence. He initially recognized seven types [pp. 15, 22]:

a) Linguistic
b) Logical-mathematical
c) Spatial
d) Musical
e) Corporal-kinesthetic
f) Interpersonal
g) Intrapersonal

He says that schooling and testing usually emphasize only the first two [p. 107], but all of them are important for a healthy, fruitful life. (c) has to do with "forming a mental model of the spatial world, maneuvering and operating", and is typical of seamen, engineers, surgeons, sculptors and painters. (e) is the "capacity of solving problems and making products using the body or its parts", typical of dancers, athletes, surgeons, craftsmen and artists. (f) is the capacity of understanding other people and working in groups, typical of salespeople, politicians, teachers, therapists, etc. (g) deals with "forming an accurate model of oneself and being able to use it to operate in life".

Later on, he expanded the seven types to twenty [Goleman 1995, p. 51], for instance subdividing (f) into leadership, maintaining social relations and preserving friendships, solving conflicts, and being able to do social analysis.

Daniel Goleman has concentrated on Gardner's interpersonal intelligence, calling it "emotional intelligence" [1995]. He says that this type of intelligence is the most important one in social and professional life [p. 165].

Now I am going to give my own classification.

4.3 Incorporated and creative intelligence

I classify intelligence into two types: incorporated and creative.

If we examine our body we can notice an infinite intelligence, an infinite wisdom. In fact, I consider the human body the most marvelous thing in the physical world, its primary marvel. I think it is impossible to imagine something physically superior to our body. I call this type of intelligence incorporated intelligence. But we are not the only ones to possess such intelligence. Animals also have it, not just in their bodies, but also in their innate physical abilities. (Probably the only innate abilities we have are crying and sucking, maybe also rolling and crawling - all others have to be acquired through social relations, such as walking, speaking and thinking.)

Plants also have an incorporated intelligence. The way they are born, grow and reproduce reveals an enormous intelligence. But I also consider that minerals have this type of intelligence. If the Earth were not the way it is, life would not be possible. For instance, if our planet were closer to or farther from the Sun, or if its axis did not have its inclination with respect to the ecliptic, we would not be here. We would also not be here if water did not have its highest density at four degrees Celsius, if the atmosphere did not have its present balance of oxygen and nitrogen, if the soil were not as it is, adequate for the plant world, etc.

Machines also have lots of incorporated intelligence: part of the intelligence of their designers, manifested through their form and functionality, as well as part of the intelligence of their constructors. In particular, a computer programmer has incorporated parts of his intelligence into the program he developed. In other words, any computer program reveals an incorporated intelligence.

My second type of intelligence is what I call creative intelligence. To characterize it, I have to expound on what creativeness may be. For this, I am going to use a characterization I heard in a lecture by the Italian sociologist Domenico di Masi. He said that creativeness is composed of fantasy and concreteness (transliterating his original Italian, it would be "concretivity"). I understand fantasy as the ability to have new ideas; they may be absurd, "castles in the air", or may correspond to reality or truth. Di Masi characterized concreteness as the ability to produce something concrete, to accomplish something socially or personally useful. With humor, he said that a person who keeps fantasizing without realizing anything useful with his ideas is a dilettante. On the other hand, a person who keeps producing concrete things or actions without any fantasy is a bureaucrat.

Therefore, creative intelligence is the type of intelligence that involves having new ideas and doing something useful with them.

Only humans have creative intelligence. No other living beings have it. Animals follow their instincts, a kind of "program". Their instincts or "program" may change under the influence of the environment. But at any given time, an animal is automatically following its instincts. A human being is able to recognize an inner driving force to do something, but she may think about the consequences of acting according to that force, and simply not do what her drive was pushing her to do. For instance, a person may recognize that he is a bit fat, and decide to go on a diet for esthetic purposes. He fantasizes himself thinner, more elegant. His hunger instinct will drive him to eat, but he may refrain from doing so, just to lose weight. No animal goes on a diet for esthetic purposes. Observing the world, one may see that humans, not animals, are changing it (unfortunately, in general for the worse).

Computers cannot fantasize. They merely generate new data from previously given or calculated data, in a process of combination. I do not consider that humans have new ideas merely as a combination of previously known facts or ideas. Maybe the zipper is an example of such a genuinely new idea. Sticking just to technical achievements, probably the transistor and the laser were also completely new ideas, because they were theoretically devised before their construction. Social situations present lots of new ideas which are not based upon previous experience.

On the other hand, concreteness requires common sense. This is another capacity that computers cannot have, because it transcends mere abstractions and theories.

5. Can machines be intelligent?

I will cover what some people have said about this subject, and then I will give my own analysis.

5.1 The Turing Test and extensions

Alan Turing was the first to characterize when a computer could be called "intelligent". In 1950, he devised what became known as the famous Turing Test (TT) [1950]. He modified a social game involving three people in different rooms: a man (A), a woman (B), and an interrogator (C) who puts questions to the first two. A fourth person may act as the go-between, transmitting questions formulated by C to A or B and returning the given answers to C; alternatively, he recommended using a "teleprinter". The game consists of C discovering who is the woman and who is the man. Furthermore, "It is A's object in the game to try and cause C to make the wrong identification. ... The object of the game for B is to help the interrogator" [p. 434].

In what he called "the imitation game", Turing substituted a computer for A and a person of any gender for B. In his own words,

"We now ask the question, 'What will happen when a machine takes the part of A in this game?' Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, 'Can machines think?' ... The new problem has the advantage of drawing a fairly sharp line between the physical and the intellectual capacities of a man." [p. 434]

So, Turing replaces the question of the possibility of machines being intelligent by the question "Can machines think?", and then replaces the latter by his "imitation game", the TT. Interestingly enough, besides the title of his paper, I have found only one spot where he mentions the word "intelligent", and another with "intelligence".

"Intelligent behaviour presumably consists in a departure from the completely disciplined behaviour involved in computation, but a rather slight one, which does not give rise to random behaviour, or to pointless repetitive loops." [p. 459]

His conviction was that computation would be enough for intelligent behavior, because he was sure that through computations machines would some day be able to pass his test:

"I believe that in about fifty years time it will be possible to programme computers with a storage capacity of about 109 to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after a five minutes of questioning. The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." [p. 442]

Not all of his prophecies came true. Indeed, we now have computers with that much storage capacity. But there is no computer that is close to passing his test. There is a yearly TT contest at the University of Manchester, and the best programs presented there are far from passing the test: with just a few questions it is possible to detect which participant is the machine. One may even doubt that such a program will ever exist. Kurzweil has absolutely no firm ground for his prophecy about the year 2029 (cf. section 3.9).
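
The gap may be appreciated with a toy example (mine, and not a description of any actual contest entry): a program that answers from canned patterns is exposed by the first question falling outside them.

    # A toy "conversation program" in the spirit of pattern-matching chatbots.
    CANNED = {
        "how are you": "Fine, thanks. And you?",
        "what is your name": "My name is Alan.",
    }

    def reply(question: str) -> str:
        key = question.lower().strip("?! .")
        # No understanding anywhere: a lookup plus a default evasion.
        return CANNED.get(key, "That's interesting. Tell me more.")

    print(reply("How are you?"))
    print(reply("Why did you answer 'fine' just now?"))  # evasion exposed

Real contest programs are enormously more elaborate, but the principle - syntactic manipulation without understanding - is the same.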

Turing had an intuition that our thinking is a very special inner activity, and that it might eventually prove impossible to describe its process scientifically:

"May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection." [p. 435]

His hopes were not just for his test:

"We may hope that machines will eventually compete with men in all purely intellectual fields." [p. 460]

This is already true for many fields, like playing chess, recognizing written and spoken texts, etc. Turing mentions that for many people playing chess is the best starting point. But he did not stop there:

"It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc." [p. 460]

It is astonishing that Turing does not recognize that what the computer would do in such cases would be to syntactically (that is, structurally) associate a string of letters to a stored image. At best, this association could be made to a description of common features of similar images. As we have seen, according to our working hypothesis what we do is associate a name to the essence of an object, and not to a particular image, set of images or objective description of common features. Another objection is that when we recall the image of something we have experienced, we make a mental representation which is never as clear as the perceived thing - except for geometric figures, whose drawn representation is very close to their essence. In general, we do not stick to the details, because what matters is the essence. The fact that computers can store and work with a detailed picture of any object, while our memory concentrates on the essence of the object, is an indication that we are not machines. I will return to this subject in chapter 8.

An essential part of learning is perfecting the perception of the concept which is that essence. I conjecture that if Turing's procedure were followed, the machine would soon not be able to recognize anything, because it would have stored an enormous amount of conflicting data. Many people think that just by storing every written text in the world, a computer would acquire our "knowledge" - and more. I am absolutely sure that the result would be catastrophic. It has been said that there is no position so absurd that some philosopher has not held it [Fetzer 2001, p. 8]. For instance, how would a computer reconcile my ideas with the current scientific view of the world? Maybe it would statistically dismiss mine, in favor of those that appear more often. Novel ideas would never be accepted... Suppose that, counting from antiquity up to now, more people have written on the geocentric planetary model than have written on the heliocentric model since Copernicus. The computer would then statistically conclude that the former is the correct model.
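
A toy sketch of the "statistical reconciliation" I am imagining (my illustration; it describes no real system): if acceptance is decided by counting texts, the model written about most often wins, whatever the truth.

    from collections import Counter

    # Hypothetical tally of a corpus: centuries of geocentric texts
    # against post-Copernican heliocentric ones (made-up counts).
    claims = ["geocentric"] * 7000 + ["heliocentric"] * 3000

    accepted = Counter(claims).most_common(1)[0][0]
    print(accepted)  # "geocentric" - frequency is not truth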

Furthermore, giving names is just part of learning a language. What about actions, qualities, etc.? This leads to the need to extend the TT.

Turing's is a behavioristic, linguistic test, and thus poses stringent limits to what one may regard as thinking and intelligence.

On the other hand, Searle's Chinese room (see 2.2) is not behavioristic. The behavior of the room's operator is supposedly the same as that of a person who knows Chinese and understands the questions. The difference is based upon the concept of "understanding", which is an inner activity.

James Fetzer calls attention to the fact that the TT is also reductionistic [2001, p. 19], because it deals only with the capacity of manipulating symbols [p. 10]. He describes Stevan Harnad's extensions:

a) The Total Turing Test (TTT) [p. 8] - TT plus non-verbal behavior (robotic capacities):

"Harnad introduces the TTT as a test of non-symbolic as well as symbolic behavior, where symbols can be grounded by means of the verbal and non-verbal behavior that the system displays. When two systems exhibit similar behavior in identifying, classifying, sorting, and labeling objects and properties of things, that provides powerful evidence that those systems mean the same thing by the marks they use." [p. 9]

One sees that there is an inference about "meaning". I consider that Searle's Chinese room could also be extended to robotic behavior. The robot would still be following strict syntax rules, without understanding what is going on. Thus, Searle's objections can also be extended to the robot. Because of this and other objections, Harnad proposed yet another extension:

b) The Total Total Turing Test (TTTT): TTT plus bodily indistinguishability.

There is no evidence that a computer will ever pass the TT, much less the TTT, so imagining that it will be possible to construct a machine absolutely indistinguishable from a human seems to be pushing the philosophical discussion too far into the realm of science fiction. But even if this were possible, as Fetzer puts it,

"If two machines were TTTT indistinguishable, that would not show that they had minds. And even if a robot and a human were TTTT distinguishable, that would not show that they did not." [p. 14, his emphasis].

One sees here the problem with considering humans as purely physical systems. There is no satisfactory theory that can guarantee that computers will not have intelligent behavior.

5.2 Artificial Intelligence or Automated Imbecility?

The field of Artificial Intelligence (AI) is usually divided into two categories: strong and weak AI.

John Searle has characterized strong AI as follows:

"The prevailing view in philosophy, psychology, and artificial intelligence is one which emphasises the analogies between the functioning of the human brain and the functioning of digital computers. According to the most extreme vision the brain is just a digital computer, and the mind just a computer program. One could summarise this view - I call it 'strong artificial intelligence' or 'strong AI' - by saying that the mind is to the brain, as the program is to the computer hardware." [1991, p. 28].

This view is represented, e.g., by Allen Newell, "who claims that we have discovered ... that intelligence is just a matter of physical symbol manipulation." [p. 29]. Newell and Herbert Simon, in a famous statement, said in 1976 that a physical symbol system is both necessary and sufficient for what they called "general intelligent action" [Fetzer 2001, pp. 43, 74, 156].

For James Fetzer,

"Most students of artificial intelligence tend to fall into two broad (but heterogeneous) camps. One camp maintains the 'strong' thesis that AI concerns how we do think. The other maintains the 'weak' thesis that AI concerns how we ought to think. And there are grounds to believe that the predominant view among research workers today is that the strong thesis is correct." [p. 74] "According to (what is known as) the thesis of strong AI, computers actually possess mentality when they are executing programs. ... According to (what is known as) the thesis of weak AI, by comparison, computers are simply tools, devices or instruments that are or may be useful, helpful or valuable in the study of mentality but do not possess minds, even when they are executing programs." [p. 159, his emphasis]

John Pollock's view deals with constructing machines that have human capacities:

"Strong AI is the thesis that one can construct a person (a thing that literally thinks, feels and is conscious) by building a physical system endowed with appropriate 'artificial intelligence'." [1989, p. ix]

According to the ideas expounded before, I do not agree with these two categories or theses. The first one was sufficiently rebutted by Searle and Fetzer. As we have seen, following my working hypotheses, it does not make sense because intelligence and thinking are not physical processes. To the second, one may apply one of my "laws": "A computer program which simulates some human behavior is a demonstration that humans do not 'function' that way." (See other funny "laws" on my web site.) That is, in my opinion computers may be valuable as counter-examples, but not as a basis for studying intelligence or thinking, because the latter are not digital or algorithmic processes.

I add to these two categories two others: practical and humble AI. Practical AI deals with the simulation of complex human actions, for instance for the replacement of human labor by machines. I totally agree with this view, as long as the labor being replaced degrades the human being and a more dignified occupation is provided for the displaced people. On the other hand, I am totally against any use of computers in the education of small children, for many reasons [Setzer 1989, 2001]. Practical AI just applies my notion of incorporated intelligence. Humble AI is just a collection of interesting techniques and algorithms in computer science, such as game simulation, pattern recognition, logical programming, "expert" systems, etc. In this spirit I organized the program of the first course on AI for the B.Sc. degree in computer science at my university, almost 30 years ago.

The problem of machines having intelligence, that is, of whether there may exist an artificial intelligence, obviously depends on the adopted concept of intelligence. Let us first consider my two types of intelligence. Obviously computers, like any machine, have an incorporated intelligence. But they cannot have creative intelligence. As far as the latter is concerned, computers are absolute idiots: they do only what they are commanded to do by their programs; they are the utmost in bureaucracy, in di Masi's characterization (see section 4.3). Here one should more properly speak of automated imbecility. Considering Howard Gardner's multiple intelligences (cf. 4.2), it is clear that the only incorporated intelligence computers may have is the logical-mathematical one, and even that should be restricted to algorithmic, discrete mathematics. Some of the other types may be simulated to some extent, but always through the former. In particular, as we will see in the next chapter, every intelligence that requires feelings, like the interpersonal (Goleman's emotional) and the musical (related to art), cannot be incorporated into a computer. The same applies to the intelligences that require self-consciousness (see chapter 7). So I conjecture that a machine will never pass the TTT (5.1).

6. Can a machine have feelings?

This question has become an important one because it has left academia, through the films that will be discussed in chapter 9. Many scientists have expressed their view that machines can have - or even already have - feelings. For instance, John McCarthy, the inventor of the expression "Artificial Intelligence" and the one who introduced into computer science the field of formal "semantics" of programming languages, has written the following:

"Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance." [1979, Introduction]

He characterizes "beliefs" in the following way:

"'The room is too cold', 'The room is too hot', and 'The room is OK' - the beliefs being assigned to states of the thermostat in the obvious way." [His section 4.1; see also Searle 1991, p. 30]

I used this example because for a human the three phrases represent feelings, or rather the special kind of feelings usually called sensations. John Haugeland presents an interesting, detailed taxonomy of feelings, sensations being the first category [1987, p. 232]. He is careful about the question of assigning feelings to machines:

"It is surprisingly difficult to gauge the bearing of these matters [the various kinds of feelings] on Artificial Intelligence. Even sensation, which ought somehow to be the easiest case, is deeply perplexing. There's no denying that machines can "sense" their surroundings, if all that means is discrimination - giving symbolic responses in different circumstances. Electric eyes, digital thermometers, touch sensors, etc. are all commonly used as input organs in everything from electronic toys to industrial robots. But it's hard to imagine that these systems actually feel anything when they react to impinging stimuli. Though the problem is general, the intuition is clearest in the case of pain: many fancy systems can detect internal damage or malfunction and even take corrective steps; but do they ever hurt? It seems incredible; yet what exactly is missing? The more I think about this question, the less I'm persuaded I even know what it means (which is not to say what I think it's meaningless)." [p. 235]

So let's begin with the question: what is "feeling"?

6.1 What does it mean to have feelings?

Why does Haugeland - and probably everyone - have so much difficulty in characterizing what "feeling" means? Let us compare this inner activity with thinking. If I think of a concept, for instance that of a circle as the locus of all points equidistant from a given point (its center), this concept and the corresponding geometrical figure are absolutely clear in my mind. If I look at the entrance of my room, I see some colors accompanied by some visual sense of depth, and I recognize the perception as a "door". This concept comes to my mind also in an absolutely clear way - if I were not able to associate those perceptions with this concept, I would be or would become schizophrenic. On the other hand, the least I can say about my feelings is that they are "fuzzy". For instance, at this moment of writing I am quite enthusiastic about the formulation I am giving to this subject; I'm excited. But how can I perceive these feelings? They are not clear. To begin with, notice how every mentally healthy adult recognizes that thinking has something to do with the head and the brain. On the other hand, what part of our body do we associate with our feelings? I'm speaking here about general feelings, like joy, sadness, compassion for another person's suffering, etc., and not about pain in some organ, like a headache. My excitement at the moment I'm writing these lines is a general feeling; it seems to me that it has something to do with my thorax, my lungs and heart. Rudolf Steiner associated thinking with a waking consciousness, and feeling with a dream consciousness [1968a, lecture of Aug. 27, p. 98]. In fact, it is possible to be fully conscious of one's thoughts, and control them - at least for a while, as demonstrated by the exercise I proposed in section 3.2. But feelings are not so clear: one may, for instance, confuse the feelings of frustration, depression and sadness. Moreover, it is impossible to control them: if I eat something, either I find it tasty or distasteful, and I cannot immediately change this feeling - it takes time to learn to like something I presently dislike, and the result is not guaranteed. What we can control are our actions based upon our feelings, but not the feelings themselves.

Clearly, feeling means having an inner reaction. It may be accompanied by some physiological change, like blushing, smiling, an accelerated heartbeat, etc. But a facial expression is certainly not the feeling; it is one of its consequences. Probably many people would say that feelings are felt by the brain; in fact, it is possible to detect increased neural activity in some parts of the brain when someone is having certain kinds of feelings. But present scientific knowledge does not permit the sure statement that feelings are generated by the brain; as with thinking, certain brain activities may also be a consequence of feelings, and not their cause (see chapter 3).

It is very important to recognize that this inner reaction is of an absolutely different kind from a physical reaction, such as a piece of metal undergoing dilation due to an increase in temperature. A piece of iron just doesn't feel anything. In the same way, the thermostat - which may be constructed with a piece of metal that dilates and contracts when subjected to different temperatures, or with a thermocouple which generates current when its two ends are at different temperatures, or with a resistor which changes its resistance with temperature, etc. - also doesn't feel anything. Along these lines, McCarthy is absolutely wrong in saying that thermostats have beliefs. He is distorting the concepts of belief and feeling. Furthermore, if we interpret "belief" as signifying "opinion", it has to do with understanding, with semantics. As we have seen, machines cannot have semantics; in particular, computers are purely syntactical machines.
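
A thermostat with McCarthy's three "beliefs" fits in a few lines (a sketch of my own): the "beliefs" are nothing but labels attached to the branches of a numerical comparison.

    def thermostat(temp: float, setpoint: float = 20.0, band: float = 1.0) -> str:
        # The three "beliefs" are three labeled branches of a comparison;
        # there is no inner experience anywhere in this code.
        if temp < setpoint - band:
            return "too cold -> heater on"
        if temp > setpoint + band:
            return "too hot -> heater off"
        return "OK -> no action"

    for t in (15.0, 20.0, 25.0):
        print(t, thermostat(t))

Calling the first branch a "belief" that the room is too cold adds nothing to the code; it only distorts the word.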

There are situations where feelings are not due to physical changes - at least, it is not possible to detect what kind of change it would be, and where it would be occurring. A typical situation occurs when someone blushes out of shame. The cause here is a moral one, not a physical one (such as feeling pain due to an injury). If one follows the chain of causes for the physical changes, one gets to something that is not physical: blushing is caused by the dilation of blood vessels, due, e.g., to some hormone, which was produced by some gland, excited by some electrical impulse, which was generated somewhere in the brain - but why did this part of the brain generate this impulse? My conjecture is that moral reactions are not physical reactions.

As for feelings clearly generated by physical changes, like pain, it is important to remember that there is no precise idea of why we feel pain, or of exactly how painkillers work.

So I may formulate the hypothesis that, similar to thinking (cf. 3.2), feeling is not a physical process. Thus a machine, a purely physical system, will never have feelings. This metaphysical reasoning is similar to the one I used for thinking. Now I'm going to give another, purely logical, Aristotelian reasoning based upon self-observation that everyone can do.

6.2 Subjectivity and individuality of feelings

When we think of concepts (not images) like "circle" and "door", we may describe them to other people. In the case of mathematical concepts, the description is exact. The other person is then able to think the same concepts I thought. But this is not the case with feelings. Typical feelings are those of joy and sorrow. I may state that I am joyful, and perhaps also why I am having such a feeling, but neither statement is precise. Furthermore, the same reason may make a person who hears me become sad; in any case, she won't be able to feel my feeling. For example, compassion is the social ability of suffering when we see another person suffering. But each one feels her own suffering.

Obviously, everyone has to exercise her own thinking. But thinking may be directed to something universal. Feelings are not universal; each person has to feel her own. Steiner puts it this way:

"It is characteristic of the nature of thinking that it is an activity directed solely upon the observed object and not upon the thinking personality. This can be seen from the way we express our thoughts, as distinct from the way we express our feelings or acts of will in relation to objects. When I see an object and recognize it as a table, generally I would not say: I am thinking of a table, but: This is a table. But I would say: I am pleased with the table. In the first instance I am not at all interested in pointing out that I have entered into any relationship with the table, whereas in the second it is just this relationship that matters." [1963, p. 60] "Thinking is the element through which we take part in the universal process of the cosmos; feeling, that through which we can withdraw into the narrow confines of our own life. Our thinking unites us with the world. Our feeling leads us back into ourselves, and this makes us individuals. If we were merely thinking and perceiving beings, our whole life would flow along in monotonous indifference. If we could only cognize ourself as a self, we would be totally indifferent to ourself. Only because with self-knowledge we experience self-feeling, and with the perception of objects pleasure and pain, do we live as individual beings whose existence is not exhausted by the conceptual relations in which we stand to the rest of the world, but who have a special value for themselves as well." [p. 125, his emphases]

Thus, thinking may be absolutely objective. The act of thinking is personal, but the object of thinking may be universal: everybody can see the table I'm seeing and think of it. But feeling is always subjective. I may like the table I'm seeing, another person may dislike it. But we will both agree that it's a table. Thinking of that table, I'm being universal. Feeling my sympathy or antipathy for the table, and how I feel it, I'm being absolutely individual.

The conclusion is that feelings are totally subjective and individual.

Computers are not individual and subjective. There is no individuality in a computer: another computer may emulate its machine language instructions, and interpret a certain program in exactly the same way as the former. Computers, given enough storage capacity and time, are, like Turing Machines, universal machines (see 3.2). Similar analog machines, by contrast, are certainly slightly different from each other. For instance, if I set the thermostat of my empty refrigerator to a certain position, another empty refrigerator of the same brand and type, with its thermostat set to the same position, will produce a slightly different internal temperature. But one should not ascribe individuality to such machines. It is possible to describe each part of a machine and its function, as well as the function of the whole machine. It is possible to foresee how a machine will react in each condition, and this gives it a universal character. As we have seen in chapter 3, every machine has a designer. A living being has no human designer, and it is impossible to fully describe the functioning of a part of it, because the whole influences each part. If a part is taken from the whole, it does not function as it did when it was part of the living being (e.g. a cell under the microscope is not the same as it was in the body). For instance, it is impossible to foresee how much a plant is going to grow [Lewontin 2000, p. 20]. Each plant has a sort of individuality, an animal has a much higher degree of individuality, and the highest degree, incomparable to any animal, is that of a human.
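
Returning to machines: the computer's lack of individuality can be made concrete (a minimal sketch of my own): two independent runs of the same program on the same data are identical bit for bit, which is exactly what one cannot expect from two "identical" analog devices such as the refrigerators above.

    import hashlib

    def computation(data: bytes) -> str:
        # Any well-functioning digital machine executing this program
        # yields exactly the same result, bit for bit.
        return hashlib.sha256(data).hexdigest()

    run1 = computation(b"the same input")
    run2 = computation(b"the same input")  # as if on a "second machine"
    print(run1 == run2)  # True: no individuality to be found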

Subjectivity also cannot be ascribed to any machine. When an animal or a human has a perception, there is an inner reaction which is not just a physical transformation. Something else occurs, which cannot be precisely described. The reactions of a machine can always be precisely described (exactly, from a logical point of view, in the case of well-functioning computers).

Thus, any machine is an objective and a universal construct. We have seen that feelings are subjective and individual. Therefore, I conclude that machines will never have feelings. Only animals and humans can have feelings.

6.3 Thinking, feeling and willing

As I have covered thinking and feeling, it may be interesting to complete the picture with the third inner activity present in humans: willing. Actions are always a consequence of conscious or unconscious willing. Our limb movements are one type of action; when we speak, we have to move our jaw, lips and tongue. But when we concentrate our thinking, e.g. in the exercise I gave in section 3.2, we are also exercising our will.

As we have seen (6.1), we cannot say where our feelings reside in us. But willing is even more obscure. Generally, we move our limbs unconsciously. Just imagine a pianist thinking of each finger, hand and arm movement she needs in order to play each note: their position, the muscles involved, how fast she has to perform each movement, etc. She wouldn't be able to play anything. But even simple movements such as those necessary to walk are usually done unconsciously.

Obviously, when we feel hunger and want to eat, this willing impulse has something to do with our stomach. But how can we locate in ourselves some inner impulses, for instance that of pursuing a certain study, a certain kind of life? My wife's grandmother had the impulse to leave Germany in 1936, and her family took the last ship from Holland to Brazil in 1939. Her husband, like many people, just didn't think the situation for the Jews could become as dangerous as she somehow felt it would - but she convinced him to abandon the excellent economic status they had in the city of Hameln, in Germany. Almost all of those Jews who did not feel the same impulse and remained were killed, as was my grandfather's whole family in Poland (he had already emigrated in the '20s).

We mentioned that Steiner associates thinking with a state of waking consciousness, and feeling with a dream consciousness (see 6.1). He also associates willing with a deep sleep consciousness [1968a, lecture of Aug. 27, p. 98]. In fact, we may become conscious of our will if we think of it. But the will itself comes from the deep unconscious.

Steiner's considerations on these three inner activities have wide applications, for instance in education - the main goal of the cited series of lectures, given to the teachers of the first Waldorf school [1968a].

It's interesting to observe that willing is indirectly connected to freedom. Wishing to will is not a tautology: we may wish to will something, and educate our willing. For instance, suppose someone is a smoker, and recognizes that smoking is not good for his health, or for the other people who are forced to inhale the smoke he produces. Furthermore, suppose he recognizes that it is not fair to disturb non-smokers, who in general are very sensitive to the fine smoke produced by cigarettes. Consequently, he decides to quit smoking. For a while, he will have great impulses to smoke. But he refrains from smoking and tries to forget the impulse. His desire is not to have that impulse, that willing. With time the willing to smoke will subside; he may even come to find smoking distasteful. He has then freed himself from the impulse, which came perhaps from his physiological condition. Steiner says:

"A free being is someone who is able to will what he considers right. One who does something other than what he wills, must be driven to it by motives which do not lie within himself. Such a man is unfree [sic] in his action." [1963, p. 215, his emphases]

Thus, there are two types of will: the one that comes from our unconscious, and the one which comes from a conscious decision ("... what he considers right."). Conscious decisions stem from our thinking. In fact, it is of utmost importance to recognize that we are not free in our feelings, nor in the will impulses that come from our unconscious. It is not possible to control a feeling. For instance, either we like some food, or we dislike it, or we are indifferent to it. It is impossible to suppress a will that comes from the deep unconscious. For example, when we feel thirsty we have a will to drink. It is not possible to avoid it. What we can do is become conscious of our feelings and willing, and prevent some action due to them. Thus, we may force ourselves to eat what we dislike, or not to drink if the available liquid is not healthy. But we prevent or perform these actions after we have thought about the feelings and willing. On the other hand, we may choose and produce our next thought, as I expounded in section 3.2. Thus, we are free to think what we decide, or what we "consider right". After having chosen what to think in complete freedom, we may decide to do something in the world based upon that thinking; that is, we have created a will - but one that came from our conscious mind, and not from our unconscious. This is, I think, what Steiner wanted to say with his first sentence. It is also necessary to understand what he wanted to express with "motives which do not lie within himself". I think he is not referring here to our body, or to our memories, feelings or unconscious will. He is referring to what could be briefly described as our "superior I" or "self", our true essence - our non-physical constituent that makes it possible for us to reach the world of Platonic ideas, to be individuals beyond the influences of our genes and the environment, to be self-conscious and free, to exercise unselfish love, and to be creative (see chapter 8 below).

Many people don't recognize that thinking, feeling and willing are separate, different inner activities. This comes from the fact that in normal life they always come intermingled. When we think of something, we immediately react with our feelings. If we feel something, like hunger, thoughts immediately arise connected to it, like our last meal. An impulse of will immediately makes us think of the consequences of our actions if we follow that impulse, and so on.

As we have seen, willing is connected to actions: we wish to do something, we wish to become something - and we have to act to accomplish those wishes. Humans can control their actions because they may figure out the consequences of their actions before performing them. Animals cannot; they are impelled by their desires, by their instincts, and by the conditioning which has affected them from outside. If we act out of a sense of duty, or out of fear, we are not free. Steiner called attention to the fact that we only act in freedom if we act out of love for the action itself [1963, p. 176]. A machine will never be able to act this way. Machines act out of their physical construction or are inexorably directed by their programs. As I've already said, machines cannot be free (see 3.3). Obviously, if the notion of willing is distorted, as McCarthy did with feeling, then it will be possible to assign willing to machines.

7. Can a machine have consciousness?

Making machines become conscious is considered one of the hardest problems of Artificial Intelligence.

It is necessary to distinguish two different kinds: consciousness and self-consciousness. Animals can be conscious: if an animal is hit, it becomes conscious, aware of its pain, and reacts accordingly. But only humans can be self-conscious. Careful observation leads to this difference. Self-consciousness requires thinking. We can only be self-conscious when we are fully awake, and think of what we perceive, think, feel or wish. Animals aren't able to think. If they could, they would be as creative as humans are. As I have already mentioned, no bee tries a different shape than the hexagon for its honeycomb. Animals just follow their instincts and conditioning, and act accordingly. Due to their thinking ability, humans may reflect on the consequences of their future actions, and control their actions. As I have mentioned (see 3.2), a drunkard may be conscious, but he certainly is not fully self-conscious - he cannot control his thinking and actions even if he wishes to do so. He then acts impulsively.

Thus, animal or human consciousness depends on feelings and human self-consciousness depends on conscious thinking. As I have already mentioned, machines cannot have feelings, and can only simulate a very restricted type of thinking: logical-symbolic thinking. One should never say that a computer thinks. Thus, I conclude that machines will never be conscious, much less self-conscious.

It is interesting to note that in general one reads about machines and consciousness, and very seldom about self-consciousness. Maybe this comes from the fact that most scientists regard humans as mere animals - or, still worse, as machines.

8. Is the human being a machine?

As I expounded in chapter 3, it is linguistically incorrect to say that humans are machines, because the concept of a machine does not apply to something that has not been designed and built by humans or by machines. But let's use this incorrect popular denomination, instead of the more proper "physical system".

There is much more evidence that humans are not machines. I've already mentioned some of it, such as the fact that humans may self-determine their next thought. Fetzer argues against the mind being a machine using the fact that we have other types of thinking than the logical-symbolic, such as dreams and daydreams and the exercise of imagination and conjecture [2001, p. 105], and shows that logical symbols are a small part of the signs we use, in Peircean terms [p. 60]. He also agrees with Searle that minds have semantics, and computers do not [p. 114]. To me, the fact that we feel and have willing is also evidence that we are not machines. Another strong indication is the fact that we have consciousness and self-consciousness, as explained in the last chapter.

In particular, the evidence that we are not digital machines is overwhelming, as we have seen in section 3.8. I'll give here some more, regarding our memory. If it were digital, why would we remember what we see in a way that is not as clear as our original perception? If our memory were digital, there would be no reason for forgetting - or losing - the details. There is also an evolutionary argument in this direction. Certainly the people who think that humans are machines also believe in Darwinian evolution. But if we were machines, there would be no evolutionary reason for not storing - at least for some time - all the details perceived by our senses, similarly to the capacity computers have of storing images, sounds, etc. It seems to me that storing and retrieving details would certainly enhance the chances of surviving and dominating. It follows, then, that from a Darwinian perspective our imperfect memory makes no sense. This means that either the concept of Darwinian evolution is wrong, or we are not machines - or both.

Furthermore, how is it possible to "store" something, forget it, and then suddenly, without "consulting" our memory, remember it? This is not a question of access time. A machine either has access to some data or it hasn't, and this status can only be changed by a foreseen, programmed action. Accesses may be interrupted either due to random effects or on purpose, directed by the program. This is not our case. Often we make an effort to remember and we can't - yet we certainly memorize, in our unconscious, every experience we have. Some people could say that our unconscious has an independent "functioning", and does the "search" for us. But here we come again to the question of consciousness and unconsciousness. Certainly all machines are unconscious, as we explained in the last chapter. The reaction of a thermostat is not due to consciousness.

Finally, apparently our memory is infinite; there is no concrete machine with infinite memory.

The capacity for learning is to me also an indication that we are not machines. As I said before, computers don't learn; they store data, either through some input or as the result of data processing. If we knew how we learn, medical studies in Brazil would not take six years. The fact that you are reading this paper shows that you have learned how to read. But notice that while reading you don't go through the whole process you had to follow in order to learn to read. Somehow, just a technique, an end-result of the learning process, remains. And this is not a question of having stored some calculated parameters, as in the case of a (wrongly named) neural net.

We share with all living beings an extraordinary capacity for growing and for regenerating tissues and organs. As I explained in section 3.5, clear observation shows that both processes follow models. Models are not physical; they are ideas. The non-physical model is permanently acting upon living beings, so they cannot be purely physical systems.

The fact that we can think of mathematical concepts is an indication that we reach the Platonic world of ideas (see chapter 3). As Spinoza says, "If two things have nothing in common with one another, one cannot be the cause of the other." (Ethics, first part, Prop. 3). Therefore, we must have something in us of the same quality of that non-physical world in order to reach it. That is, we are not purely physical systems.

I respect those who are materialists (cf. chapter 3), as long as they are consistent. It is very important to realize that if we are just physical systems, we have no freedom. Such systems are subject only to physical laws, and from physical laws or randomness it is impossible to get a free organism. Without freedom, there is no human dignity or responsibility. This was a great dilemma for Einstein, who considered humans as purely physical, deterministic systems possessing no freedom [Jammer 2000, pp. 62, 71, 173], and thus could not assign them responsibility [pp. 71, 105]. But he had to change his mind when he became conscious of the horrors perpetrated by the Nazis. He even ended up assigning responsibility to the whole German people [p. 71]. So, if a materialist assumes that humans may be free, have human dignity and responsibility, and may exercise unselfish love, or that there is some purpose in life, he should change fields and become a spiritualist; that is, he has to assume the hypothesis that there are also active non-physical processes. These processes take place in a non-physical world, and may influence the physical universe.

Many scientists assume that there must have been something non-physical acting at the beginning of the material universe. In fact, from the materialist point of view, the origin of matter or energy and the boundaries of the universe do not make sense. From a materialist point of view, not even matter makes sense: why not subdivide it indefinitely? (For example, today the electron is assumed to be unstructured.) Unfortunately, materialism has so influenced scientists that many, if not most, of those who admit some original "creation" of matter cannot admit that there is still an acting non-physical universe "behind" the physical one. One such example was Einstein [Jammer 2000, p. 97].

It is very important to realize that assuming the spiritualist hypothesis does not lead to mysticism or "bad science". If the basic spirit of modern science is maintained, what happens is that this assumption expands the research space. For instance, one could assume that some evolutionary mutations are non-random, and are directed by the non-physical constituents of certain living beings. This way one would have a proper superset of materialist, Darwinian evolution. Another example is assuming that thinking is not produced by the brain (see 3.2). This would enormously enlarge the field of research in cognition; instead of examining neuron activity as the origin of thinking, we would investigate, for example, how thinking influences neural activity, without discarding the possibility that under certain circumstances neurons may act independently of the non-physical thinking activity, and induce some kinds of thoughts.

Maybe I should be more specific. From a spiritualist point of view, the differences among the various kingdoms of nature are easily justified. Minerals are purely physical systems; let us call their physical body "a member of the first kind". Plants obviously have a physical body, a member of the first kind. But according to the hypothesis we are formulating, plants are not just physical systems. Let us call their non-physical constituent "an individual (non-physical) member of the second kind". This constituent, acting upon the first (the physical body), is responsible for organic form, tissue regeneration, growth, reproduction, etc. - in brief, all vital processes.

Animals show more than just life processes - they have something more than plants. Let's call it "an individual (non-physical) member of the third kind". It is responsible for movement, instincts, hollow organs, respiration, consciousness, feelings, willing, etc. Obviously, animals have all sorts of life manifestations, so they also have a member of the second kind, besides a physical body. Humans have essential differences from animals, due to yet another non-physical constituent, besides the lower two: let's call it "an individual (non-physical) member of the fourth kind". It is responsible for our erect posture, thinking, speaking, self-consciousness, unselfish love, freedom, ideals, etc. It is also responsible for what could be called our "higher individuality", "superior I" or "self", something more than just our personal physical form, customs, instincts, prejudices, temperament, remembrances, etc. The member of the fourth kind is of the same non-physical "substance" as thoughts - that's why we may reach through our thinking the Platonic world of ideas, and animals and plants cannot (see Spinoza's citation above). This is the reason they are not creative in the human sense.

The acting presence of each non-physical member modifies the lower members, including the physical body. That's why the forms of plants differ so much from mineral forms, those of animals from plants, and those of humans from animals. As a matter of fact, when observing the four kingdoms looking for differences, one should concentrate on species typical of each kingdom, and not on transition species.

When a living being dies, its non-physical members "abandon" the physical body; that is, a connection among them is cut. The physical body becomes fully subjected to the laws of nature, and decay immediately follows. In this sense, life is a permanent struggle against the physical world, against death. When animals and humans are in a state of deep sleep, there has been a certain separation between the first two members and the higher ones. Life processes continue, but there is no consciousness. According to this hypothesis, death and sleep are not just physical processes. I conjecture that a materialist view of the world will never make us understand what death and sleep really mean. By the way, an ancient saying is that "death is the big sister of sleep".

Please note that the preceding classification consists purely of concepts. It is not based on religion, faith or mysticism; it is just a way of expanding our common concepts to embrace non-physical ones through conceptual hypotheses. But admitting them would have tremendous consequences for scientific research, for human relations (including our relations with the other three kingdoms of nature), and for morality.
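Since the nesting of these members is easy to lose in prose, here is a minimal sketch in Python - purely my own illustration, with invented labels, not part of the hypothesis itself - of the classification just described: each kingdom possesses all the members attributed to the kingdoms below it, plus one more.

    # A hypothetical model of the four-members classification described above.
    # The member names are labels invented only for this illustration.
    MEMBERS = [
        "physical body (first kind)",     # minerals and above
        "life member (second kind)",      # plants and above: vital processes
        "sentience member (third kind)",  # animals and above: movement, instincts, feelings
        "self member (fourth kind)",      # humans only: thinking, self-consciousness, freedom
    ]

    # Each kingdom has the first n members of the list.
    KINGDOMS = {"mineral": 1, "plant": 2, "animal": 3, "human": 4}

    def members_of(kingdom):
        """Return the members attributed to a kingdom: the first n entries."""
        return MEMBERS[:KINGDOMS[kingdom]]

    for kingdom in KINGDOMS:
        print(kingdom + ": " + "; ".join(members_of(kingdom)))

Running this prints, for instance, that a plant has the first two members while a human has all four - exactly the cumulative structure assumed above.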

For someone who admits that there is a non-physical "world", and that plants, animals and humans have non-physical constituents, considering that humans (or animals, or plants) are machines makes absolutely no sense, because machines are purely physical systems.

For those who have recognized these non-physical members from my extremely brief description of them, I should say that I did not use some of the names currently given to them in the specialized literature because I wanted to avoid unnecessary connotations. For those interested in knowing more about this subject, I recommend my paper on it, which has been used as the text of an introductory course given by my wife and me in recent years [Setzer 2000]. Unfortunately, it is in Portuguese, but if I receive enough requests I will translate it into English.

9. The films The Bicentennial Man and Artificial Intelligence

These recent films, directed by Chris Columbus (1999) and Steven Spielberg (2001) respectively, show robots that act like humans, played by real actors. I am going to criticize them because, in my opinion, they exercise a pernicious influence upon the public.

In brief, The Bicentennial Man (BM) tells the story of a robot that lasts for about 200 years. Its creator and his descendants modify this robot so that it acquires more and more human features. Robin Williams plays the robot. In the beginning he wears a kind of armor, looking like a machine; at night he plugs himself into an electric outlet to recharge his batteries. Gradually, his appearance becomes more and more human, to the point where Williams acts like any normal human, with thinking, feeling and willing; that is, the robot would have passed the TTT (cf. 5.1). It is not clear whether it would have passed the TTTT. At the end, the woman with whom the robot is in love is about to die, so he decides that he cannot bear her absence and should also "die".

Artificial Intelligence (A.I.) begins with the story of a family whose boy is in a coma. The father works for a robot factory, which produces robots so similar to humans that a special test is needed to distinguish them: a device very similar to a bar-code reader is pointed at a person's forehead, emitting a kind of laser beam. If one sees circuitry behind the skin, the "person" is a robot. That is, those robots also pass the TTT, but clearly do not pass the TTTT. The factory decides that its robots lack an essential feature: they cannot have feelings. So it does some research and programs the first robot able to have feelings, even to love: a boy-robot. As a test, he is given as a surrogate son to the family whose boy is in a coma. They receive the boy-robot, which initially behaves in a completely cold way. Then the lady says a series of words in front of the robot, which thereby acquires the ability to have feelings. It becomes very attached to the lady, but at a certain point her real son returns from the hospital, cured. With time, there are clashes between the real boy and the robot, and the lady has to get rid of the robot. She does so with extreme pain. The robot experiences longing for the lady. He ends up on board a kind of combined air- and water-craft, which crashes in New York City; the robot enters a kind of sleep state. A thousand years pass, and one sees the robot again inside the craft in the same place, now under water. Due to the greenhouse effect, the oceans have covered most cities. Humans have disappeared. The boy-robot "wakes up", still longing for the lady. He is received by some strange people, long and thin, apparently extraterrestrial. They tell the robot that they can reproduce the lady, but only for one day, and only if he has some part of her. He finds some of her hair, and from it the ETs reproduce her.

9.1 Absurdities

I never watch science fiction films; their absurdities irritate me too much. But the members of a weekly study group I participate in asked me to watch A.I. and comment on it, so I did. Later I was told about BM and decided to watch it too. Let us first look at some trivial absurdities. In BM, the robot initially recharges itself, but later on it no longer does. Maybe it has acquired the ability to digest food. In A.I., the robot clearly has no such ability, because in one scene it decides to imitate the real boy and starts to swallow noodles, clogging its circuits. But one never sees it recharging its batteries or getting power from anywhere. Furthermore, it remains inactive for 1,000 years, does not corrode, and "wakes up" having retained its energy all that time.

But there are deeper absurdities. For instance, in A.I., from a hair of the lady, her whole body, with memories, temperament, etc., is reproduced 1,000 years after she died. DNA alone does not determine the development of an organism [Lewontin 2000, Setzer 2001a], much less the memories and temperament of a human being. But in my opinion the most serious absurdity is the notion that robots will be able to have feelings. I have already argued that machines will never have feelings.

9.2 Messages

I said that those films exercise a pernicious influence upon the public. What I meant is that they convey an incorrect and absurd image of machines, influencing the way people regard the world, and machines and humans in particular. I have collected the following messages transmitted by the films; when an item applies to both films I make no indication; otherwise, I mention the film it applies to.

1) Robots will be able to behave physically and mentally like human beings (in our terms, they will pass the TTT, cf. 5.1).
2) It will be possible to make machines have human feelings.
3) Robots may have ideals, in the human sense.
4) Human beings will love robots, in the same way as they love each other.
5) Robots having feelings is a tragedy for "themselves", because they don't "die".
6) Robots will need no energy to function.
7) There will be machines that will last indefinitely, without corrosion or malfunction.
8) Humanity will disappear, and only robots will remain on Earth (A.I.).
9) From the DNA of a hair, it is possible to recreate a whole human being, including her memories and temperament (A.I.).

In both films, the suggestion that all human abilities may be introduced into machines may give laymen the impression not only that machines will become human, but also that we ourselves must be machines (otherwise we would have some non-machine feature that could not be inserted into a machine).

The reader may think that my concerns are exaggerated, because people do not mix fantasy with reality. There are some problems with those films, though. That reasoning applies only to adults; children live (or should live) in fantasy, so they cannot be expected to distinguish it precisely from reality. Moreover, laymen - that is, people who have very little scientific education, or who know computers only as users - cannot correctly judge the messages transmitted by the films. This is aggravated by the fact that many scientists really think that those science-fiction scenes will become reality. A.I. was presented as an avant-première at the MIT Media Lab. An article written by a science journalist who attended that session described the opinions some scientists had of the film. He reports that Ray Kurzweil said the film contains "nothing fantastic":

"In 2030 there will be no clear distinction between robots and us. Emotions, especially love, are the deepest and most complex things we are able to do. But, in 25 years, we will know everything about the human brain and we will be able to reproduce it in machines with perfection. They will be able to do everything we do, including loving." [Burgierman 2001, p. 50, my translation].

This is consistent with his book [Kurzweil 1999]. So one sees that the undue scientific forecasts presented in those films are in full agreement with the point of view of many scientists. In fact, those films help some scientists pass on to the public their view that we are machines.

9.3 Influences on views of the world

My main concern with those films is that they present an idea - and images - that every human function and characteristic may be inserted into machines. Still worse, machines have some superior features, such as speed, strength and exact memory. So if we insert our characteristics into machines, they will be superior to us. This was recently expressed in a phrase by physicist Stephen Hawking published in the newspapers, when he said something like "we should genetically improve the human being, otherwise it will be supplanted by machines". Even if he didn't say this, it doesn't matter: it fits very well with the view of the world adopted by many, probably most, scientists. The problem is that, being materialists, they cannot specify what they understand by "improvement": should we be three meters tall, never have to sleep, never break the law? Maybe we would be perfect if we acted like robots.

Secondly, if every human characteristic may be inserted into machines, this implies that we are machines ourselves.

I fear that if humans regard themselves as machines, a very dark future awaits us and our descendants. But I will return to that in the conclusions.

10. Conclusions

I have tried to show in this paper that the prevalent view in the field of AI is that we will be able to (1) insert into machines all our functions and characteristics, because (2) we are just machines or, more precisely, purely physical systems. I gave many reasons for concluding that both points of view are false. For this, on the one hand I used facts that can be accepted by everyone, and on the other I used the hypothesis that all living beings have non-physical constituents. I gave many indications for considering both approaches adequate.

I have to confess that I like fighting - against windmills, according to some; against real monsters, in my own view. The first of these fights began publicly in 1972, against TV. Then came computers in elementary education, then video games, and now it is AI. Or, rather, strong and weak AI, because I have nothing against what I called practical and humble AI (cf. 5.2). My battle against the current philosophical trends in AI is due to the fact that they regard humans as purely physical systems or, popularly, machines. I consider this view extremely dangerous because, if held consistently, it must negate human freedom, responsibility and dignity, as well as the possibility of unselfish love. When the latter is admitted, it is generally considered a feature programmed into us by evolution, as stated by Darwin himself and elaborated in speculative genetic terms by Richard Dawkins [1989, p. 23]. I regard real unselfish love as an act practiced in full consciousness and freedom, so it is inconsistent with the Darwinian view that we are just animals or, according to strong and weak AI, something still worse: just physical systems. As I said, a physical system is completely subjected to physical laws, so it cannot have freedom or dignity.

The idea that humans are animals has produced tremendous human catastrophes. This was clearly Hitler's view of the world: he treated people like animals, transporting them in cattle cars, caging them, gassing them, subjecting them to extreme suffering, performing lab experiments on them, etc. One of his ideologies was that the survival of the fittest should be applied to humans: just one nation should dominate the world [Haffner 1990, p. 79]. After the invasion of the Soviet Union failed during the winter of 1941-42, he realized that he would not win the war, and decided to destroy his own German people, making way for the appearance of another nation that would dominate the world [pp. 118, 142-157; see also Toland 1992, pp. 707, 849]. Other catastrophes along the same line were perpetrated by the Turks against the Armenians, by Stalin, Mao and Pol Pot against their own peoples, at Hiroshima and Nagasaki, and so on. The 20th century was really what I call "the century of barbarism". But all those tragedies were, I think, based upon the idea that humans - or at least part of humanity: the enemies, or those of another ethnic group, faith or ideology - were just animals.

What will happen if people largely embrace the current idea, advocated by many if not most scientists - especially in the AI field - that we are merely machines? I fear that the future will be much worse than the catastrophes cited in the previous paragraph. In fact, what we are presently seeing everywhere in terms of social and individual decay may very well be a consequence of that view of the world. One sees an increasing overall disrespect for people (with honorable exceptions, see below), increasing psychological distress in individuals, and so on. In a July 2002 issue, Newsweek magazine published an article saying that 70 million Americans suffer from insomnia (reported in the newspaper O Estado de São Paulo, July 22, 2002, p. A8). Religions, which were largely set aside because they did not follow our development, mainly in the last two centuries, were replaced by faith in science and technology. Generally speaking, traditional religions provided positive moral impulses in social relations - at least among their own members; current science and technology, on the other hand, do not deal with morality and ethics. They deal with theories, objects and machines; they view living beings as machines. A second factor, tradition, also helped to keep society fairly stable, but at the beginning of the 20th century traditions began to disappear and lost their socially cohesive power. A third factor, intuitive social sensitivity, also seems to be disappearing. Nowadays humans are abandoned, left alone to decide what to do with themselves. This is the difficult path we had to be given in order to acquire our own freedom. But unfortunately we are failing miserably, mainly because we are not recognizing what a living being really is, what being human really means, and what human development should mean. I will give an example of this situation.

The current scientific view of sicknesses is that they have to be eradicated, all of them. But another view of the human being could tell a completely different story: sicknesses are needed for true individual development. This does not mean that we should go around inoculating every possible sickness into every individual we encounter. But it does mean that we should understand that many sicknesses may be pedagogical processes, acquired by each individual when she needs them. The wisdom of natural language shows this very well: we don't say "the cold caught me", but "I caught a cold" - the person had the predisposition, maybe the need, to catch it. Doctors should help their patients overcome their sicknesses by going through those processes. Instead, they just prescribe medicines and perform surgeries, as if patients were machines with defective parts in need of repair. I have the impression that modern humans without these needed sicknesses will not be human anymore - they will be automata.

I said that there are honorable exceptions to the increasing disrespect towards people. One of them is the campaign against smoking: what right does an individual have to disturb other people who don't smoke? (Cigarette smoke, being very fine, is highly irritating to non-smokers.) Human rights, respect towards handicapped people, anti-racism and anti-sexism are other positive signatures of our times. With respect to all living beings, the ecological movement is also very positive. All of these show that humans have also evolved in positive directions, indicating a higher sensitivity, at least in some respects, towards nature and towards other humans.

Unfortunately, we have not developed new, broader, positive social abilities, based upon consciousness, self-consciousness, freedom, individuality, and social sensitivity, responsibility and action. For example, in many places, such as supermarkets and restaurants, one is forced to hear background, "canned" music. Very few people seem to notice that this music impairs freedom in a way similar to smoking. None of those abilities will be correctly developed if, from the start, we have a wrong, partial view of what humanity and the other kingdoms of nature really are. This wrong concept is that all living beings are machines, and that we will produce machines that are much better than living beings.

There is a strong reason for the idea that we are just machines, which was explained in chapter 3: materialism. Materialism was developed mainly during the last two centuries, as a necessity for mankind. It has made it possible for us to immerse ourselves in matter to a degree that would have been impossible without it. Without this immersion we would not have developed our capacity for being free and self-conscious. But I think it is now time to consciously overcome this view of the world, without losing everything we have developed; otherwise we will see social misery continue to increase. We have to develop new ways of organizing the economy, departing from the unfortunate principle that has governed capitalism since the 18th century: Adam Smith's idea that if we satisfy individual ambitions and egotism - in a purely material sense - a mysterious "invisible hand" will regulate society and everybody will be happy. But new social organizations based upon materialism cannot produce the essential changes we urgently need, because materialism ignores the essence of the human being: the fact that we have non-physical constituents (cf. 8). As a consequence, it must ignore the possibility of exercising unselfish love, which is obviously socially constructive, whereas egotism is destructive.

Unfortunately, there are forces that want to prevent the recognition that materialism has to be overcome. Strong and weak AI are among their manifestations. I hope I have shown that they are wrong. Man is not a purely physical system; our thinking, feeling and willing activities do not originate in our physical parts. So it will be impossible to introduce real human mentality into machines, and studying and developing machines will never reveal our real essence; on the contrary, they divert our attention from it. The degree of depletion of natural resources, including air, water and agricultural soil (what a paradox: our materialistic age is destroying matter), and the increasing social and economic instability and misery everyone can observe, make it absolutely urgent that we change something. I think this change has to begin by radically revising the view humans have of themselves and of living beings: the view that they are machines. Unfortunately, academic AI has not contributed to that revision; on the contrary, it has contributed to denigrating the image humans have of themselves. It has contributed to the elimination of our human dignity and social responsibility. Films like the ones mentioned above go in the same direction, now in popular terms.

Other scientists have also come to the conclusion that we are endangered, but for other reasons. For example, Bill Joy fears that, given the way AI (robotics), genetic engineering and nanotechnology are being developed, self-reproducing machines may be introduced that will destroy the world [2000]. I don't agree with him; I don't think these areas will be able to reach that point. But his concerns are those of a scientist who fears that science and technology have gotten out of control. I don't agree with his solution either, that is, declaring a moratorium on research in those fields. My point of view is that the means have to be consistent with the ends: it is not by restricting freedom that we will attain freedom. I think the solution lies in the individual decision and action of each scientist and technician - they should individually decide what to investigate and produce. I hope these lines have helped those who are searching for a more responsible science to become conscious that strong and weak AI are not fields that should be pursued in order to improve humanity. On the contrary, if pursued, those fields will only accelerate our increasing misery. Our main problems are not material problems. Only by solving our main problem, derived from what I characterized as the "fundamental existential hypothesis" (cf. 3.4) - that is, the way we regard ourselves and the world - will we be able to reverse our increasing individual and social decay, and the world's downfall.

References

Burgierman, D.R. Artificial Intelligence (in Portuguese). In Super Interessante. No. 166. São Paulo: Editora Abril, July 2001, pp. 49-54.

Damasio, A. Descartes' Error - Emotion, Reason, and the Human Brain. New York: Grosset/Putnam 1994.

Dawkins, R. The Selfish Gene. Oxford: Oxford University Press, 1976. My citations are from O Gene Egoísta, trans. A. P. Oliveira, Lisboa: Gradiva, 1989.

Encyclopaedia Britannica. Chicago: Encyclopaedia Britannica, 1966.

Fetzer, J. H. Computers and Cognition: Why Minds are not Machines. Dordrecht: Kluwer Academic Publishers, 2001.

Gardner, H. Multiple Intelligences - The Theory in Practice. Basic Books, 1993. My citations are from Gardner, H. Inteligências Múltiplas: A Teoria na Prática, trans. M. A. V. Veronese. Porto Alegre: Ed. Artes Médicas Sul, 1995.

Goleman, D. Emotional Intelligence. Brockman, 1994. My citations are from Inteligência Emocional: A Teoria Revolucionária que Redefine o que é Ser Inteligente, trans. M. Santarrita. Rio de Janeiro: Ed. Objetiva, 1995.

Goswami, A. Death and the Quantum: A New Science of Survival and Reincarnation. 1995. Available at http://www.swcp.com/~hswift/swc/Essays/death.html.

Haffner, S. Anmerkungen zu Hitler. Frankfurt a.M.: Fischer Taschenbuch Verlag, 1990.

Haugeland, J. Artificial Intelligence: The Very Idea. Cambridge: MIT Press, 1987.

Jackendoff, R. Languages of the Mind: Essays on Mental Representation. Cambridge: MIT Press, 1993.

Jammer, M. Einstein and Religion: Physics and Theology. Princeton: Princeton University Press, 1999. My citations are from Einstein e a Religião: Física e Teologia, transl. V. Ribeiro. Rio de Janeiro: Contraponto Editora, 2000.

Joy, B. Why the future doesn't need us. Wired 8.04, April 2000. Available at http://www.wired.com/wired/archive/8.04/joy.html.

Kant, I. The Critique of Pure Reason. In Great Books of the Western World Vol. 42, R.M.Hutchins (ed.). Chicago: Encyclopaedia Britannica, 1952, pp. ix-209.

Koestler, A. The Sleepwalkers - A History of Man's Changing Vision of the Universe. Harmondsworth: Penguin Books, 1964.

Kurzweil, R. The Age of Spiritual Machines - When Computers Exceed Human Intelligence. New York: Penguin Books, 1999.

Lewontin, R. The Triple Helix - Gene, Organism, and Environment. Cambridge, MA: Harvard University Press, 2000.

McCarthy, J. Ascribing mental qualities to machines. 1979. Available at http://www-formal.stanford.edu/jmc/ascribing.html.

Neumann, J. v. The Computer and the Brain. New Haven: Yale University Press, 1958.

Penrose, R. The Emperor's New Mind - Concerning Computers, Minds and the Laws of Physics. New York: Penguin, 1991.

Pollock, J. L. How to Build a Person: A Prolegomenon. Cambridge: MIT Press, 1989.

Ramsey, W., S. Stich and D. Rumelhart (eds). Philosophy and Connectionist Theory. Hillsdale: Lawrence Erlbaum Ass., 1991.

Rohen, J. W. Morphologie des menschlichen Organismus. Stuttgart: Verlag Freies Geistesleben, 2000.

Searle, J. R. Minds, Brains and Science - the 1984 Reith Lectures. London: Penguin Books, 1991.

Setzer, V. W. Computers in Education. Edinburgh: Floris Books, 1989.

Setzer, V. W. Reflections on electronic chess. In The Southern Cross Review. No. 2, Nov.-Dec. 1999, electronic magazine available at http://www.southerncrossreview.org. Also on my web site.

Setzer, V. W. An Anthroposophical introduction to the human constitution (in Portuguese). 2000. Available at http://www.sab.org.br/antrop/const1.htm.

Setzer, V. W. and Monke, L. Challenging the Applications: An Alternative View on Why, When and How Computers Should Be Used in Education. In Muffoletto, R. (Ed.), Education and Technology: Critical and Reflective Practices. Cresskill, New Jersey: Hampton Press, 2001, pp. 141-172. Also available at my web site.

Setzer, V. W. Considerations about the DNA hype. Available on my web site, 2001a.

Sheldrake, R. A New Science of Life - The Hypothesis of Formative Causation. Los Angeles: J.P.Tarcher, 1987.

Steiner, R. Die Philosophie der Freiheit - Grundzüge einer modernen Weltanschauung (GA - general catalogue - 4). Dornach: Verlag der Rudolf Steiner-Nachlassverwaltung, 1962.

Steiner, R. The Philosophy of Spiritual Activity - Fundamentals of a Modern View of the World. Transl. R. Stebbing. West Nyack: Rudolf Steiner Publications, 1963.

Steiner, R. Von Jesus zu Christus (GA 131). 11 lectures held in Karlsruhe, Oct. 4-14, 1911. Dornach: Verlag der Rudolf Steiner-Nachlassverwaltung, 1968.

Steiner, R. Allgemeine Menschenkunde als Grundlage der Pädagogik (GA 293). 14 lectures held in Stuttgart, Aug. 21-Sept. 5 1919. Dornach: Verlag der Rudolf Steiner-Nachlassverwaltung, 1968a.

Sutherland, I. E. and J. Ebergen. Computers without clocks. In Scientific American, Vol. 287, No. 2, Aug. 2002, pp. 46-53.

Toland, J. Adolf Hitler. New York: Anchor Books, 1992.

Turing, A. M. Computing machinery and intelligence. In Mind - a Quarterly Review of Psychology and Philosophy, Vol. LIX No. 236, Oct. 1950, pp. 433-460, available at http://www.abelard.org/turpap/turpap.htm. Also in E. Feigenbaum and J. Feldman, eds., Computers and Thought. New York: McGraw-Hill, 1963, pp. 11-35.

Zajonc, A. Catching the Light - The Entwined History of Light and Mind. New York: Bantam, 1995.

Acknowledgement

I thank Frank Thomas Smith, the editor of the excellent electronic magazine Southern Cross Review, for many corrections concerning grammar issues and useful remarks regarding some of the contents.