REVIEW ARTICLE
Published in Pragmatics and Cognition Vol. 9, No. 2 (2001), pp. 293-312

PEOPLE ARE NOT MACHINES
Valdemar W. Setzer

Dept. of Computer Science, University of São Paulo
www.ime.usp.br/~vwsetzer


James H. Fetzer, Computers and Cognition: Why Minds are not Machines. Dordrecht: Kluwer Academic Publishers, xix + 323 pp., 2001, ISBN 0-7923-6615-8.

This excellent book consists of a foreword, a prologue, three main parts and an epilogue, comprising 11 chapters in all. Eight of these chapters were papers previously published in academic journals and three were book chapters. This is a fine collection of Fetzer's ideas on the subject of minds and machines, and should be studied by anyone interested in this area. Unfortunately, none of the chapters could be classified as directed to the general public, which is being strongly influenced by recent films (e.g., The Bicentennial Man [Columbus, 1999] and Artificial Intelligence [Spielberg, 2001]) and bestsellers like Kurzweil's The Age of Spiritual Machines (1999). As will be seen, I consider it absolutely essential that this subject be discussed at all levels, from all possible angles. After summarizing each chapter and adding some comments, I will contribute a brief, nonstandard discussion of my own.

Fetzer titled the book's three main parts "Semiotic systems", "Computers and cognition" and "Computer epistemology". The foreword briefly describes these parts and presents three arguments: 1) Concerning part I, a "static difference" (between computers and minds), that is, the difference between computers, which operate on and associate mere marks (the syntactical aspect), and minds, which operate on signs that stand for other things (the semantic aspect). 2) Concerning part II, a "dynamic difference": "Computers are governed by algorithms, but minds are not" (p. xv). And 3) the simulation of thought processes does not make digital machines "thinking things".

1. Detailed description with comments

The Prologue, consisting of Chapter 1, "Minds and machines: Behaviorism, Dualism and Beyond", covers the Turing Test, with the extensions proposed in 1992 and 1993 by Harnad: the "Total Turing Test" (not just linguistic indistinguishability, but additional kinds of human behavior) and the "Total Total Turing Test" (bodily indistinguishability). He also mentions Searle's "Chinese Room" allegory, which shows that if a person who does not speak Chinese answers, in Chinese, questions formulated in that language, using a book of rules in English that directs how to combine the ideograms, she does exactly what computers do. Fetzer agrees with Searle that "syntax processing alone does not appear sufficient to mentality" (p. 6). He calls attention to the fact that Harnad's tests also do not guarantee mentality, uses Peirce's three kinds of signs to speak about "three kinds of minds", and concludes that they provide an "account for the nature of consciousness and cognition" (p. 19).

Part I, "Semiotic systems", begins with Chapter 2, "Primitive concepts: Habits, Conventions and Laws", which expounds the relationship between "kinds of signs" and "kinds of minds". Chapter 3, "Signs and minds: An Introduction to the Theory of Semiotic Systems", elaborates on semiotic systems, and introduces Newell and Simon's conception of "physical symbol systems". He compares their concepts of symbol to Peirce's, and claims that "symbol systems and semiotic systems of Type III [Peirce's 'Symbols'] are not the same thing" (p. 59). One of his conclusions is that "Newell and Simon's conception may be adequate for digital computers, but remains completely inadequate for other things" (p. 63).

Chapter 4, "Language and mentality: Computational, Representational and Dispositional Conceptions", covers aspects of language, such as syntax and semantics, and the problem of the latter being or not reducible to the former. It also introduces the computational, representational and dispositional conceptions of minds, corresponding to syntactical, semantical and pragmatical conceptions. The first, "even when complemented by the parsing criterion, adopts an extreme implausible theory of the relation of form to content" (p. 93). The second, using an "inferential network", "cannot resolve the underlying difficulty of fixing the meaning of primitive language" (p. 93). He makes an interesting criticism of Fodor's "universal language of thought". Fetzer prefers the dispositional conception, elaborating his interpretation in terms of Peirce's kinds of signs. In the conclusion, he claims that

Just as human beings and digital machines qualify as different types of physical systems, so too may they qualify as different sorts of semiotic systems. ... If, after all, human beings and digital machines are 'fundamentally different' as we have discovered above, then what grounds remain in support of the view that digital machines and human beings can process knowledge, information or data in similar ways? The great debate between the 'strong' and the 'weak' conceptions of AI, it appears, may rest upon a misconception... (p. 94).

Part II, "Computers and Cognition" starts with Chapter 5, "Mental algorithms: Are minds computational systems?". It begins with the question whether

human thought [requires] the execution of mental algorithms ... to provide a foundation for research programs in cognitive science. ... This conception implies that the boundary of computability thus defines the boundaries of thought (p. 101).

Fetzer's intention is to show that the computational conception "cannot be sustained" and that "a semiotic approach appears to provide a more adequate conception" (p. 102).

In his explanation of what algorithms are, I miss the essential feature that each step of an algorithm has to be mathematically well defined. This is implied by the Church-Turing thesis, because each instruction of a Turing machine has this property. In many introductory courses on programming, instructors introduce the notion of an algorithm through "daily life" examples, like a description of how to change a flat tire. However, in such cases each step is not mathematically well defined. In my opinion, this leads to many misconceptions, among them that our minds use algorithms. Note that there is at present no scientific knowledge of how a human performs even a simple operation like 2+3. (I conjecture that, unless science changes its present paradigms, we will never know.)
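To make the contrast concrete, consider a minimal sketch in Python (my illustration, not Fetzer's) of an algorithm in the strict sense, in which addition is reduced to repetitions of a single, completely defined primitive step:

    # Addition of natural numbers reduced to the successor operation, in the
    # spirit of a Turing machine that appends one mark to the tape at a time.
    def successor(n: int) -> int:
        # The single primitive step: mathematically defined for every natural n.
        return n + 1

    def add(m: int, n: int) -> int:
        # Apply the successor step exactly n times to m.
        result = m
        for _ in range(n):
            result = successor(result)
        return result

    print(add(2, 3))  # prints 5

Every step here is as precise as a Turing machine instruction; a step such as "loosen the bolts" in the flat-tire "algorithm" admits no comparable definition.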

Fetzer describes the computational conception as incorporating the following claims: "that all thinking is reasoning; that all reasoning is reckoning; that all reckoning is computation; and that the boundaries of computability are the boundaries of thought" (p. 105). But he finds them untrue, because "[t]he boundaries of thought are vastly broader than those of reasoning, as the exercise of imagination and conjecture demonstrates. Dreams and daydreams are conspicuous examples of non-computational thought processes" (p. 105). In this respect, it is worth citing a phrase by Penrose (1991: 412): "It has, indeed, been an underlying theme of earlier chapters [of his book] that there seems to be something non-algorithmic about conscious thinking".

Here, and in other chapters, I miss at least a mention of "intuition". In 1997, Kasparov lost the match against Deep Blue-2 by 3.5-2.5: he won one game, and three games ended in draws. Many people celebrated the fact that a machine had defeated the world champion of a game considered to require great intelligence. But very few people paused to think what it could possibly mean that a human could win or draw a game, playing a mathematical game against a mathematical machine that could test some 36 billion moves in the allotted time of 3 minutes per move. How many moves could Kasparov possibly have tested, maybe 200? When Kasparov defeated Deep Blue-1, one of its builders, Hoane, said: "The lesson is that masters such as Kasparov are doing some mysterious computation that we can't figure out" (Scientific American, May 1996, p. 10). Maybe he was not doing computations at all, but rather using what is naively called "intuition" of the right move. For further considerations, please see my essay "Reflections on computer chess" on my web site.

Another point I miss in Fetzer's arguments is "creativity". In a lecture given at the São Paulo State Technological Institute (IPT), the Italian sociologist Domenico de Masi characterized it as "fantasy" plus "concretivity", the latter being the ability to produce something socially useful in the world. Interestingly, he characterized fantasy without "concretivity" as "dilettantism", and "concretivity" without fantasy as "bureaucracy". I think fantasy is the ability to have genuinely novel ideas, including ideas that are not just combinations of previous ones (see, for instance, Penrose's (1991) interesting chapter "Inspiration, insight, and originality"). In this sense, it is a kind of intuition. A simple example would be figuring out how to bring together again two common friends who had a misunderstanding and are no longer on speaking terms. Many social situations are absolutely new (each individual is different from all others), and past experiences do not help in solving them.

Back to Fetzer. In this important chapter he covers many issues, e.g., the difference between programs and algorithms (which I don't think is so fundamental), determinism, minds as semiotic systems, syntax and semantics, and a crucial point that is rarely covered in the literature: "thinking about thinking" – to which I'll come back in the last section. Almost at the end of the chapter he writes:

From this perspective, the computational conception appears to have arisen from the most irresistible temptation to appeal to a domain about which a great deal is known (formal systems and computability theory) as a repository of answers to questions in a domain about which very little is known (the nature of mentality and cognition). The train of thought from thinking as reasoning to reckoning as computation and computability as cognition exerts tremendous attraction. It has motivated most of what passes for cognitive science today. Yet this view appears to represent a profound misconception in thinking about thinking (p. 125).

Then he calls attention to the fact that the incompleteness results established by Gödel for higher-order logic are not addressed by the computational conception. It is important to note that, as I heard from Prof. Newton C.A. da Costa, the introducer of paraconsistent logic, Gödel's theorem is valid in classical logic, but may be invalid in other types of logic.

Some comments on the section "Is thinking deterministic?" are in order. Following D.I.A. Cohen (1986), Fetzer associates non-determinism with the fact that a machine in a certain state needs intervention from the operator in order to follow one of several possible paths. The fact is that if it is an abstract machine, it could follow all the different paths defined by non-deterministic transitions simultaneously. Non-determinism may be simulated on a deterministic computer by recording every non-deterministic transition that was taken and then, after the program stops (if it does!), backtracking to the situation at the moment that transition was taken and choosing another possible transition. This requires the input and any internal variables to be reset to the positions and values they had at the instant of the non-deterministic transition. Obviously, for a machine with many non-deterministic transitions, this method becomes intractable. E.W. Dijkstra (1976) introduced the idea that non-deterministic programs, using what he called "guarded commands", can be much simpler than deterministic ones. Fetzer believes that our thinking "seems to provide plausible examples of mental indeterminism" (p. 112), that is, probabilistic causation. This section could benefit from a discussion of self-determinism, which is to me what really characterizes thinking (see the last section below).
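The backtracking scheme can be sketched in a few lines of Python (my illustration; the toy machine and its names are invented for the example). Recursion saves each choice point, and returning from a call restores the configuration, which is exactly the "reset" just described:

    def simulate(config, transitions, accepting, depth=0, limit=20):
        """Deterministically explore every path of a non-deterministic machine."""
        if accepting(config):
            return True
        if depth == limit:               # crude guard, since a path may not halt
            return False
        for nxt in transitions(config):  # each element is one possible choice
            if simulate(nxt, transitions, accepting, depth + 1, limit):
                return True              # some sequence of choices accepts
        return False                     # all branches exhausted: reject

    # Toy machine: from the number n, non-deterministically add 2 or triple it.
    print(simulate(1, lambda n: [n + 2, n * 3], lambda n: n == 25))
    # True: the path 1, 3, 5, ..., 25 chooses "+2" twelve times

The exponential growth of the number of branches with the number of choice points is what makes the method intractable in general.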

Chapter 6, "What makes connectionism different?", is a critical review of the book Philosohy and Connectionist Theory, edited by Ramsey, Stich and Rumelhart. His remarks on the various chapters of this book are enlightening. Fetzer's conclusion is that criticisms against connectionism "are either unsound or unthreatening to the connectionist program" (p. 150). He stresses that "if we consider connectionism as envisioning cognition as computation over distributed representations ..., then no doubt what is most important about the program is its commitment to 'distributed' representations" (p. 151). Here we have two problems. As we have seen, Fetzer is against considering minds as computational systems, and distributed representations are a great mystery in cognition. For example, visual perception is a great unknown. Each eye divides the field of vision in four parts at the fovea, corresponding to the four quadrants; let's call them a, b (right side of the image), c and d (left side). Parts a and b formed by each eye are combined in the vision nerve (which, as far as I remember, transmit complex electric signals, which do not correspond to the images in electrical form), as well as c and d, each pair going to a brain hemisphere, where a and b (c and d) are separated in regions divided by the "Sulcus calcarinus". In the visual cortex, there is a separation of space perception, movements in the visual space, and optical remembering which activate different areas in the right hemisphere, and form perception and color perception in the left hemisphere. How then does a person see just one field of vision, as a complete image (Rohen 2000: 16)? The fascinating book on light by Zajonc presents other astonishing characteristics of vision. For example, once operated, a person who has never seen does not see any object (Zajonc 1995: 3, 183): vision (as well as other senses) depends on the ability of associating the perception to the concept. Zajonc (1995: 64) also mentions the fact that the perception of linear perspective in pictures gives an impression of reality due to a purely cultural effect (linear perspective began to be studied and largely used in the beginning of the 15th century). So we see that just considering cognition as being distributed needs further complementation on how perception and concepts are sensed as wholes and intermingle with each other.

I would also like to point out that so-called "neural nets" are algorithmic structures, and there is no proof that our neurons form a structure with a similar way of functioning. Moreover, the structure of neural nets is not dynamic, whereas the structure of the neuron nets in the brain is in permanent change (Penrose 1991: 389). The supposition that our neurons form something like a computational neural net is the basis of Kurzweil's prophetical book. All his predictions (for the years 2009, 2019, 2029 and 2099) are based upon the unjustified, unscientific claim that "with 100 trillion [neuron] connections, each computing at 200 calculations per second, we get 20 million billion calculations per second" (Kurzweil 1999: 103). What counts for him as one of these "calculations" is not described. In the light of his forecast of the exponential growth of computer power, he reaches the conclusion that "we get the year 2025 to achieve human brain capacity in a US$1,000 device" (p. 103). As for memory, he concludes that "we can effectively match human memory for $1,000 sooner than 2023" (p. 103). Does he know that nobody knows how our memory works, or how large it is? My supposition is that our memory is infinite. For Kurzweil, machines "will embody human qualities and will claim to be human. And we'll believe them" (p. 53); this was very well represented in Columbus' (1999) and Spielberg's (2001) films. It is important to demystify such dangerous words and images, which try to convey the idea that we are machines; otherwise people will wrongly believe it possible to introduce into a machine each of our capacities, including feelings, ideals, compassion, etc. Fetzer's book is an important contribution in this direction. Unfortunately, as already mentioned, it is not accessible to non-academic readers.
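For the record, Kurzweil's multiplication itself checks out; what is unjustified is the premise that a synaptic event is a "calculation". A one-line check (my sketch, using his figures):

    connections = 100e12       # Kurzweil's 100 trillion neuron connections
    rate = 200                 # his alleged "calculations" per second for each
    print(connections * rate)  # 2e+16, i.e., 20 million billion per second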

Chapter 7 has, for me, the most appealing title of all the chapters: "People are not computers: (Most) Thought Processes are Not Computational Procedures", which I would extend to "People are not machines". The chapter begins with considerations on deductive rules of inference, which "are acceptable if and only if, when applied to true premises, only true conclusions follow" (p. 155), with the observation that only in this case does syntax correspond to semantics. This reminds me of the classical example of a syllogism, "every human is mortal, Socrates is a human, therefore Socrates is mortal". In set-theoretical terms this corresponds to "for all sets H and M, if H ⊆ M and s ∈ H, then s ∈ M". This should only be applied to well-defined mathematical sets. It so happens that it is not possible to define "human", "mortal" and "Socrates" mathematically. So in my opinion this example should not be used as an example of a syllogism, just as changing a tire should not be used as an example of an algorithm. At most, one should call the former a "fuzzy syllogism". The problem lies in mixing mathematical symbolic logic with natural-language constructs referring to real-world objects or concepts that are not mathematical. This leads to the confusion of ill-defined human processes with well-defined digital machine processes (programs). Remember that it is not known how we perform 2+3. Therefore, we should not say that we "process" this operation – or anything else. And vice-versa: if this human operation is called an addition, then computers do not add; they combine symbols (the numerals), providing an outcome that coincides with the expected result. It could be objected that the addition of large numbers could be implemented in a computer simulating exactly our way of performing it (from right to left, one column at a time, with carries, etc.), as sketched below. The fact is that for each column containing a pair of digits (plus possibly a carry), we do not know how we perform the addition, so our way of adding large numbers should also not be called an algorithm.
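For illustration, here is a sketch (mine, in Python) of the column-by-column procedure mentioned above. Note the asymmetry: for the machine every step below is fully specified, while for us the performance of a single column, say 7+5, remains scientifically unexplained:

    def column_add(x: str, y: str) -> str:
        """Add two decimal numerals, right to left, one column at a time."""
        x, y = x.zfill(len(y)), y.zfill(len(x))     # pad to equal length
        carry, digits = 0, []
        for a, b in zip(reversed(x), reversed(y)):  # rightmost column first
            total = int(a) + int(b) + carry         # one digit pair plus carry
            digits.append(str(total % 10))          # the digit written below
            carry = total // 10                     # the carry for the next column
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits))

    print(column_add("478", "356"))  # prints 834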

In the section "Computation is not sufficient", Fetzer criticizes the strong and weak AI theses:

According to (what is known as) the thesis of weak AI, computers are simply tools, devices and instruments that are or may be useful, helpful or valuable in the study of mentality, but do not possess minds, even when they are executing programs (p. 159).

This reminds me of one of "Setzer's Laws": "A computer program which simulates some human behavior is a demonstration that humans do not 'function' that way". (See other funny "laws" on my web site.) That is, in my opinion computers may be valuable as counter-examples, but not as a basis for studying mentality. In fact, what Fetzer does with his wonderful book is to demonstrate that various models of mind and cognition based on computers do not hold.

The section "Computation is not necessary" reveals his hope that it could be shown that the conditions imposed by the computational conception "are not merely insufficient for mentality, as we have already discovered, but are unnecessary for mentality, as well" (p. 160). As I said, we perform 2+3 but there is no proof that this is done algorithmically. However, can it be algorithmically proved that some procedure is not performed algorithmically? Here we touch one of the problems of current scientific research paradigms. Interestingly, Fetzer apparently does not criticize in his book another basic scientific paradigm: mathematical modeling using calculations. What if our mind just does not calculate, that is, its functioning cannot be reduced to symbolic or numeric manipulations? Would we not be scientists, then?

The section "Disciplined step satisfaction" deals with the related problem of the stepwise interpretation of algorithms or programs. My phrasing is correct: one should not say that computers execute a program; they always interpret it, even at machine language level. I conjecture that many times humans do execute "something" (which should not be called a program); this would be another essential difference between humans and digital machines. Fetzer cites Dietrich (1991), and Cummins and Schwarz (1991) who "go much further, however, and define 'cognition' as the computation of functions" (p. 163) – which implies the execution of procedures. The following section, "Human thought processes" points out that many thinking processes – e.g., dreaming and day-dreaming – fail to satisfy those [computations of functions] conditions. Fetzer notes that perception varies tremendously with age: "Perception is a fallible activity, which makes perception a second kind of non-computational thought process" (p. 164). (See my remarks above about visual perception.)

Other sections of this chapter cover important issues, such as consciousness and laws of thought. Interestingly, Fetzer does not mention self-awareness. Adult humans may be aware of what they think, and of some of the causes that move them. Without covering these aspects, any treatment of consciousness is in my opinion incomplete.

Part III, "Computer epistemology", contains important papers on program verification, an issue that stirred the Computer Science community, but don't touch directly the main subject of the book. I will however comment on them, after describing the chapters on this part, since this was one of my areas of research.

Chapter 8, "Program verification: The very idea", is the famous Communications of the ACM 1988 paper that criticized DeMillo, Lypton and Perlis (1979) who argued that the situation with program verification was worse than with mathematics. In the latter, there is a social process of many mathematicians agreeing that, say, a theorem proof is correct (recall Andrew Wiles's drama of proving Fermat's Last Theorem [see Singh 1998]). Program verification, however, is in general an isolated activity, because programs are complex and boring. Fetzer's main point is that programs belong to applied mathematics (he considers computers to be applied mathematical machines), "where propositions …, unlike those in pure mathematics, run the risk of observational and experimental disconfirmation" (p. 204). Fetzer concludes that "the difference between proving a theorem and verifying a program does not depend upon the presence or absence of any social process during their production, but rather upon the presence or absence of causal significance" (p. 213).

Chapter 9, "Philosophical aspects of program verification", repeats parts of the previous one, elaborating upon Hoare's (1969) thesis that programming is a mathematical activity. Further, he discusses whether it is better to treat a program as an abstract, formal mathematical entity, or as a running program:

When the model is an abstract model (the axioms of which can be given as stipulations) that does not represent a specific physical machine, a formal proof of correctness can then guarantee that a program will perform as specified (unless mistakes have been made in its construction). Otherwise, it cannot (p. 234).

Thus, Fetzer correctly insists that there is a fundamental difference between a program as a text and a program controlling the functioning of a computer, that is, a program as a cause. His last section is quite aggressive:

If the entire debate has brought into the foreground some indefensible assumptions that have been made by some influential figures in the field, then it will have been entirely worthwhile. It should be obvious by now that pure mathematics provides an unsuitable paradigm for computer science (p. 243).

He goes on to remark that the issue of program correctness has deep social and ethical implications, since "we can afford to run a system that has been only formally assessed if the consequences of mistakes are relatively minor, but not if they are serious" (p. 244), and suggests that formal proofs be complemented by prototyping and testing.

Chapter 10, "Philosophy and computer science – reflections on the program verification debate", contains a fascinating account of how he got involved with the question of program verification. The account includes the long editorial process to have the 1988 paper accepted, the letters criticizing his ideas, and how the issue of program verification became questions of the philosophy of computer science.

Regarding this part of the book, I will comment on the reasons why I think formal program verification never became a standard procedure in program development.

1. Program verification requires advanced mathematical reasoning, which many programmers have not mastered. Figuring out the proper "loop invariants", i.e., assertions that are valid at every iteration of a loop of instructions, is not a trivial task (see the first sketch after this list). Total correctness, that is, proving not only that the program gives the correct results but also that it stops for every input, requires even more advanced mathematics.

2. Computers permit lousy programs to give satisfactory results. What I mean is that a program can be developed without method and without proper documentation. In general, such a program initially does not work properly. Then the programmer enters a phase of testing and modification, frequently also without any method or planning. After many modifications (in general, more than one modification is made before the next test) and runs, the program starts giving the expected results. Typically, if more than one programmer is working on the program, one of them shouts to the others "don't touch it anymore!" – because in general nobody knows precisely why the program has started to work. Planning and documenting a program's development and testing require enormous self-discipline, because the computer does not interpret the documentation, but only the code. The result is that most programs are lousy. This is why changing a program is so expensive. Lousy programming was the deep reason for the gigantic costs of the "Year 2000 Bug" problem: in badly documented programs, it was extremely difficult to discover where dates were being used.

3. One additional problem is that nowadays programs display lots of "cosmetics". Such additions did not exist when the only results were printed reports. So customers may become fascinated with the multimedia effects produced by a program and never mind its robustness or the ease and speed of introducing modifications. If a program can be badly developed and still run and satisfy the customer, why should a programmer bother to go through elaborate proofs of correctness? Nobody will look at the proofs; everybody will look at the cosmetics displayed.

4. Another factor discouraging the dedication of much time to good documentation and proofs of correctness is that programs tend to be replaced quite fast.

5. If a program is incorrect, formal correctness proofs may give no results. In other words, correctness proofs can show that a program is correct, but may reveal nothing if the program is incorrect. The discussion of proofs of program correctness should take into account "program development by transformations" (see e.g. Setzer 1979, and the second sketch below). From this point of view, programs are initially specified through assertions. Then transformation rules that have previously been proven correct are applied, possibly using a computer, until one gets a final, correct program. Obviously, to conform to Fetzer's important consideration that running a program on a computer is different from considering it a textual abstraction, the transformation rules should be verified taking each machine into consideration.
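As an illustration of point 1, here is a minimal Python sketch (mine) of a loop invariant for a trivial summation program. The asserts state the property that a formal verification would prove from the code itself instead of checking at run time:

    def array_sum(a):
        s, i = 0, 0
        while i < len(a):
            # Loop invariant: s equals the sum of a[0..i-1] at every pass.
            assert s == sum(a[:i])
            s += a[i]
            i += 1
        # Invariant plus the exit condition i == len(a) give partial correctness;
        # termination holds because i strictly increases toward len(a).
        assert s == sum(a)
        return s

    print(array_sum([3, 1, 4, 1, 5]))  # prints 14

And as an illustration of point 5, a toy instance of development by transformations: an initial version that directly transcribes the specification is turned into the final program by a transformation (linear recursion into a loop with an accumulator) that has been proven correct once and for all. The example and its names are my own, not taken from Setzer (1979):

    def fact_spec(n):
        # Initial version: a direct transcription of the defining assertions.
        return 1 if n == 0 else n * fact_spec(n - 1)

    def fact_final(n):
        # After applying the proven rule "linear recursion -> accumulator loop".
        acc = 1
        for k in range(1, n + 1):
            acc *= k
        return acc

    # The transformation guarantees equivalence; a spot check:
    assert all(fact_spec(n) == fact_final(n) for n in range(10))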

There are rigorous mathematical techniques for developing programs, belonging to the field called software engineering, a field mentioned by Fetzer in his next chapter. Unfortunately, these methods are so mathematically complicated, requiring the definition of multiple recursive functions, that their specifications tend to be more complex than the final programs they produce.

The Epilogue, "Computer reliability and public policy: Limits of knowledge of computer-based systems", starts with considerations on "expert systems" and the difficulties of specifying what is naively understood as the "knowledge" possessed by some person. Then he discusses the problems of computers eventually having design flaws, the source of information leading to the construction of an eventually inaccurate system, and the programmer's eventual programming errors. This characterizes data processing systems as being eventually quite unreliable. Fetzer concludes that

no matter what solutions a society might adopt to enable its citizens to better cope with computer systems, such as software standards and liability protection, the full dimensions of the problem must be understood. Our confidence in this technology should not be merely an article of faith (p. 308).

Regarding such important concepts as "data", "information" and "knowledge", I refer the reader to a recent paper where I define the first and characterize the other two, as well as "competency", leading to quite practical results (Setzer 2001). There I draw distinctions between the various concepts and also review some of the literature on the subject. But what is really important in Fetzer's Epilogue is that it calls attention to the unjustified confidence people have in computers. It is a pity he does not relate this subject to the book's main theme. The fact that people regard themselves as machines gives machines an undue importance in individual and social life. If a person does not regard herself as a machine, she tends to be much more careful when using machines. For instance, she would probably be more concerned about children using computers than a person whose Weltanschauung is that she herself and the whole world are just machines. For the latter, a machine (a child) being taught by another machine (a computer) is not strange and does not raise any special concern (Setzer 1989: 61).

2. Beyond Fetzer's critique

I would now like to make some delicate considerations. I appreciate the efforts of great people like Searle, Penrose and Fetzer. However, I think their efforts are never going to be satisfactory. In some sense, they use the same paradigms as the people who advocate that minds are machines. One exponent of such a view is Antonio Damasio:

The facts I have presented about feelings and reason, along with others I have discussed about the interconnection between brain and body proper, support the most general idea with which I introduced the book: that the comprehensive understanding of the human mind requires an organismic perspective; that not only must the mind move from a nonphysical cogitum to the realm of biological tissue, but it must also be related to a whole organism possessed of integrated body proper and brain and fully interactive with a physical and social environment. The truly embodied mind I envision, however, does not relinquish its most refined levels of operation, those constituting its soul and spirit. From my perspective, it is just that soul and spirit, with all their dignity and human scale, are now complex and unique states of an organism (Damasio 1994: 251).

Note that John Searle is exactly of this opinion. His first premise is "Brains cause minds" (Searle 1991: 39), and he writes:

Suppose we ask the question that I mentioned at the beginning: 'Could a machine think?'. Well, in one sense, of course, we are all machines. We can construe the stuff inside our heads as a meat machine. And of course we can all think. So, in one sense of 'machine', namely that sense in which a machine is just a physical system which is capable of performing kinds of operations, in that sense, we are all machines that can think (Searle 1991: 35).

Searle's argument is that we have semantics and computers do not, so we are more than computers. We are mysterious kinds of machines. Fetzer's arguments are based mainly on his Peircean view of signs; that is, for him mind processes are abstractions: he is not able to say how humans associate signs.

Now, let us make a very special hypothesis: humans, and in fact all living beings, have a physical body, which can be seen and touched and which is the sole subject of contemporary scientific research; but they have nonphysical constituents as well. A nonphysical constituent cannot be directly observed in any way, through our senses or other physical processes. We may, though, observe its action upon the physical (how this may be possible will be discussed below). So I am suggesting that we abandon what I like to call "the central dogma of contemporary science" (CDCS), namely that there is only a material universe and there are only material, physico-chemical processes in the universe. I am not alone in suggesting the abandonment of this dogma. Penrose (1991: 96, 428) acknowledges the existence of a Platonic world of mathematical concepts. As we will see, to me this world contains more than mathematical concepts.

In the physical world, substances have different degrees of substantiality. The ancient Greeks had already recognized this when they spoke about earth, water, air and fire, which for them represented the qualities inherent in the corresponding physical states. Analogously, let us assume that nonphysical constituents have different levels of "being". Thus, "above" its physical body, each typical plant has the lowest level of nonphysicality, which provides it with what are generally called life processes: nutrition, growth from the inside (minerals grow by sedimentation), tissue regeneration, reproduction, organic form. It is known that changes in the DNA of a seed may produce changes in the form of a flower, but it is not known how the form is produced from the DNA. To me, it is clear that the form of a leaf of a plant of a certain species follows a model. However, a model is not a physical entity; it is a thought-entity. This model, the nonphysical constituent of the first level, interacts with the physical part of the plant, including the genes, imposing upon it the organic forms that characterize a certain species. It also interacts with the environment, providing, for example, for different sizes according to altitude and climate – a fact that surprised Goethe on his trip to Italy and prompted him to develop his notion of the "Urpflanze" (see, for instance, his essay "Glückliches Ereignis"). So a plant is a consequence of its genes, of the environment (including the inner environment), and of a third factor. The biologist Richard Lewontin, examining research done with plant clones planted at various altitudes, recognizes this third factor very well, but he calls it "noisy development" (Lewontin 2000: 35). Just like Searle, he is not willing to abandon the CDCS. However, he writes: "If we had the complete DNA sequence of an organism and unlimited computational power, we could not compute the organism, because the organism does not compute itself from its genes" (id.: 17; my italics).

In addition to a nonphysical constituent of the kind that acts "behind" each plant, each typical animal would have a nonphysical constituent of a second level of nonphysicality. This constituent would be responsible for what plants do not have: movement, instincts, consciousness (if you hurt an animal, it reacts; this shows that it has consciousness), pain, sleep, breathing. Note that a holistic approach considers typical, developed species, and not transitions, i.e., animals that are almost like plants. Transitions between plants and minerals and between animals and plants should be investigated taking the typical species as models; this is the top-down, general-to-particular paradigm of Goethean science. The existence of a nonphysical constituent of the second level changes the constituents at the lower levels, including the physical body; that is why animals have forms different from those of plants, such as hollow organs, limbs which permit movement, a nervous system, etc. You may see here another kind of explanation, different from the common one: animals have a certain form because of a thought-model, and this permits them to move. Their movement is not due primarily to their form, that is, to the fact that they have limbs. The archetype of their movement is already in their model – their constituent of the second level, which shapes the physical body.

Humans would have a third nonphysical constituent, which is unique to each individual. And this makes them absolutely different from animals: I never call a human being a rational animal. According to the view of living beings proposed here, humans are not animals. Obviously, humans have much in common with animals, but animals have much in common with plants (organic tissues, growth, regeneration, reproduction, etc.) and we never call them "moving plants". The presence of the third constituent gives humans characteristics that do not exist in animals: conscious thinking, self-consciousness, individuality transcending mere genetic individuality and conditioning by the environment (recall the case of the Dionne quintuplets: they had quite different lives, despite having the same DNA and the same environment as children), and freedom.

I cannot prove that humans can be free, but I may suggest an experiment that anyone can perform in order to convince herself that this is possible. Sit in a quiet place and close your eyes. Try to produce an inner state of calm, avoiding inner anxieties, preoccupations, etc. Choose two numbers between 0 and 9 which mean nothing to you (e.g., don't choose the first digit of your phone number or the age of your child or grandchild); then choose one of them. Imagine this number in front of you, on a large display. Concentrate on this image for some time, without letting other images or thoughts enter your mental representation. Untrained people will be able to concentrate for some seconds, others for a longer time. It is easy to recognize that nothing forced you to choose the number you chose, and that nothing forces you to keep imagining it rather than other things – at least for some time. If you perform this experiment, you will have the inner experience of self-determination.

In Chapter 5 Fetzer discusses determinism, indeterminism and non-determinism. But he and others never refer to self-determinism, an essential feature of being free. Note that freedom begins in our thinking, and not in our physical actions. I am not free to jump 4 meters above the ground, but I am free to choose my next thought, any thought. If humans were machines, they would be entirely subject to physical laws, and there would be no possibility whatsoever of being free, not even in thinking. Without freedom, it is impossible to assign responsibility, dignity, and a sense to human life. Most scientists will probably say that freedom of thinking is an illusion, and that our thoughts are determined by our "circuits". But, as I hope you have concluded, this is not what one experiences. If you can determine your next thought, you are not a machine. Note that explanations appealing to quantum-mechanical processes, as advanced by Penrose with his microtubules (Kurzweil 1999: 117), would lead to randomness. However, we do not experience our acts or thoughts, when self-controlled, as being random.

In my opinion, Einstein was in the grip of a tremendous inner conflict because, following Spinoza, he considered the world to be absolutely deterministic (Jammer 2000: 62, 66, 69, 75) and could not admit free will (ibid.: 61, 179). He said: "Man should conduct his moral life as if he were free" (ibid.: 71). However, when in 1941 the horrors of the German concentration camps became known, he had to assign responsibility to the Nazis. He also assigned responsibility to the German people for having elected Hitler: "Germans, as a whole nation, are responsible for these mass killings and should be punished as a people, if there is justice on Earth" (ibid.). He considered himself religious, and coined the expression "cosmic religion", in which God is not personal and does not punish or reward (ibid.: 64). Like many reasonable people, he thought he had to admit that matter or energy had to be created at the beginning of the physical universe. But had he admitted the existence of a nonphysical constituent in each human being, he would have been able to remain religious without the inner conflicts he had to face, and without embracing a religion of fear or of reward and punishment. Although Einstein considered himself religious, I have the strong impression that he was dominated by the CDCS, which obviously leads to determinism or randomness (which he disliked).

If science got rid of its main prejudice, its main dogma, and accepted the hypothesis of the existence of a nonphysical universe, much research would change completely. For instance, instead of looking at neurons as the origin of our thoughts, one could suppose that their physical activity is a consequence of the thinking activity. Experiments would then be quite different. Or, instead of looking at the genes as the origin of every manifestation of life, one would look for the outcomes of interactions between the nonphysical constituents of the first level, the genes and the environment. This leads to the central, old question: how may a nonphysical process interact with the physical world? I have some possible explanations for such a phenomenon. Firstly, suppose a physical system is in a state of unstable equilibrium. Then an infinitesimal amount of energy could produce a change of state. Maybe neuron functioning is based upon this scheme. Secondly, take a living cell in a living organism. There are three possibilities: either the cell stays as it is, being used to differentiate or to form tissues; or it subdivides (mitosis), being then used for growth and regeneration; or it dies. The decision as to which of these three paths to take does not require energy. Obviously, subdivision itself requires energy, but not the decision to subdivide. A third possibility would be quantum non-determinism at the atomic or molecular level; choosing one change among various possible changes does not require extra energy.

I am sure that many readers are now shaking their heads, considering that I am not being scientific, if not simply classifying my last paragraphs as "pure rubbish". Probably they have, consciously or unconsciously, fallen into the grip of the CDCS. I want to make it clear that what I want is not to embrace mysticism and become unscientific, but to enlarge scientific activity by accepting the hypothesis of the existence of nonphysical constituents in plants, animals and humans. This is not a matter of faith but a working hypothesis, formulated through clear concepts rather than through fuzzy feelings or ideas. This enlargement has to begin by getting rid of the CDCS, which is nothing but a limiting and pernicious prejudice.

If we admit the existence of a Platonic world of ideas (encompassing also mathematical concepts), then we may consider that our third nonphysical constituent is able to reach that world. Look at the entrance of your room. What are you perceiving with your eyes? Probably everyone will say: "I am perceiving a door". However, the fact is that you are not perceiving a door. "Door" is a concept. What you are perceiving are differences in color. (I used 'perceiving' instead of 'seeing' to eliminate possible misinterpretations of 'seeing' as involving concepts.) Ask the students in your classroom: everyone will agree that s/he is perceiving a door. Nobody has the faintest doubt about it. Now we reach a fundamental question: how is it possible that we are all so sure of what we "see" if we only perceive differences in color? A possible explanation is that we are able to complete our perception with the nonphysical concept that exists behind each perceived object. This is done by our thinking. The person who introduced the idea that thinking is a perception of the Platonic world of ideas was Rudolf Steiner, at the end of the 19th century; he elaborated it in his masterpiece Die Philosophie der Freiheit (Steiner 1963). According to him, thinking works as a bridge between inner perception and the nonphysical world of concepts (id.: 109). Through thinking, we complete with perceived concepts the partial and instantaneous world presented by our senses, and reach the totality of perceived objects (ibid.: 124). In other words, he associates this activity with what I loosely characterized as our third nonphysical constituent.

The reader might ask: "What about our physical brain? Why is it necessary?" Steiner gives an interesting analogy to answer this question. He describes a person looking at herself in a mirror. She becomes conscious of her figure. Now break the mirror. She is still there, but she is no longer conscious of her own figure. Analogously, the brain works as a mirror: it enables us to be conscious of our own thoughts. This is why the wisdom of our language has also called thinking "reflecting". Without this consciousness, we would not be able to control our thinking process. And we would not be free. Thus, in Steiner's Weltanschauung, physical existence is absolutely essential. Elsewhere, he describes how in each child the third nonphysical constituent undergoes a continuous process of "penetrating", dominating and shaping the lower-level constituents, including, indirectly, the physical body. This is why a newborn looks so round, so universal (and everyone in the family is able to recognize her own traits in the baby's face). As the child grows, her third constituent turns her into an individual being, not just in appearance, but also in feelings, impulses, ideals, etc. Obviously, genes and the environment also play a role, but this higher individuality helps to explain, for example, why identical twins with the same environment sometimes develop completely different impulses and lives.

I am sorry for having had to go through all these details. Unfortunately, I had to expound briefly (and roughly!) my nonstandard view of the world in order to justify my claim that, unless they get rid of the CDCS, Fetzer and others will never be able to satisfactorily explain and defend their (correct) intuition that humans are not machines. Their arguments, which appeal either to a mysterious "semantics" (in my model, following Steiner's, there is nothing mysterious about it: we understand something if we are able to associate, through thinking, the perception of a phenomenon with its essence, its correct and real concept) or to Penrose's non-algorithmic processes (p. 110) (obviously our nonphysical constituents do not follow algorithms, much less perform discrete calculations), do not reach the essence of our being. Humans are not machines because they are not just physical beings. And it is not only cognition that will never be understood through mere physical processes, or through abstractions that have nothing to do with (physical and nonphysical) reality. I conjecture that the CDCS will never lead to a good explanation of life, life processes, instincts, sleep, dreams, death, birth, consciousness and self-consciousness, good and evil, freedom, human dignity, history, and so on. Not even the form of a leaf will be well explained that way. And science will remain condemned to stay in its present Platonic cave of shadows, serving technology and capital rather than humankind.

References

Cohen, D.I.A. 1986. Introduction to Computer Theory. New York: John Wiley and Sons.

Columbus, C. (director). 1999. The Bicentennial Man. Film.

Cummins, R. and Schwarz, G. 1991. "Connectionism, computation and cognition". In T. Horgan and J. Tienson (eds), Connectionism and the Philosophy of Mind. Dordrecht: Kluwer, 60-73.

Damasio, A. 1994. Descartes' Error – Emotion, Reason, and the Human Brain. New York: Grosset/Putnam.

De Millo, R., Lipton, R., and Perlis, A. 1979. "Social processes and proofs of theorems and programs". Comm. ACM 22(5), 271-280.

Dietrich, E. 1991. "Computationalism". Social Epistemology 4, 135-154.

Dijkstra, E.W. 1976. A Discipline of Programming. Englewood Cliffs: Prentice Hall.

Hoare, C.A.R. 1969. "An axiomatic basis for computer programming". Comm. ACM 12, 576-580, 583.

Jammer, M. 2000. Einstein e a Religião – Física e Teologia, trans. V. Ribeiro. Rio de Janeiro: Contraponto. [English original: Einstein and Religion: Physics and Theology.]

Kurzweil, R. 1999. The Age of Spiritual Machines - When Computers Exceed Human Intelligence. New York: Penguin.

Lewontin, R. 2000. The Triple Helix - Gene, Organism, and Environment. Cambridge, MA: Harvard University Press.

Penrose, R. 1991. The Emperor's New Mind - Concerning Computers, Minds and the Laws of Physics. New York: Penguin.

Rohen, J.W. 2000. Morphologie des menschlichen Organismus. Stuttgart: Verlag Freies Geistesleben.

Searle, J.R. 1991. Minds, Brains and Science - the 1984 Reith Lectures. London: Penguin Books.

Setzer, V.W. 1979. "Program development by transformations applied to relational database queries". Proceedings of the 5th International Conference on Very Large Databases. Menlo Park, CA: Morgan Kaufmann, 436-443.

Setzer, V.W. 1989. Computers in Education. Edinburgh: Floris Books.

Setzer, V.W. 2001. "Data, information, knowledge and competency". On my web site.

Singh, S. 1998. Fermat's Enigma – The Epic Quest to Solve the World's Greatest Mathematical Problem. New York: Anchor Books.

Spielberg, S. (director). 2001. Artificial Intelligence. Film.

Steiner, R. 1963. The Philosophy of Spiritual Activity – Fundamentals of a Modern View of the World; Results of Introspective Observations According to the Method of Natural Science, with an introduction by Hugo S. Bergman, trans. R. Stebbing. West Nyack, NY: Rudolf Steiner Publications.

Zajonc, A. 1995. Catching the Light – The Entwined History of Light and Mind. New York: Bantam.