Artificial Intelligence:

An Interdisciplinary Examination of Philosophical and Practical Issues

 
PHILLIP REECE HAMILTON
RESEARCH PROJECT
Presented to the Faculty of
The University of Texas at Dallas
in Partial Fulfillment
of the Requirements
for the Degree of
MASTER OF ARTS IN INTERDISCIPLINARY STUDIES
 
THE UNIVERSITY OF TEXAS AT DALLAS
 



 
 

Contents

 
What is AI?
1.    What is Intelligence?
2.    General Theory of Intelligence
3.    What is Understanding?
4.    What is Artificial Intelligence?
5.    Strong AI and its Critics

How to Obtain AI?

1.    How Can You Program Something You Don’t Understand?

Why AI?
1.    Man as Creator
2.    The Role of an Artificial Intelligence in the Self-directed Evolution of Man

Bibliography
 



 
 
 
What is AI?

What is Intelligence?

 What is Artificial Intelligence? Is it a field of study, a discipline, a set of sophisticated programming techniques or maybe just a loosely defined goal? Examples of all these usages, as well as others, are commonly found. The term Artificial Intelligence was adopted and popularized by the participants of the Dartmouth Conference in 1956 to designate the goal of creating a thinking machine. The selection of the term has since been frequently criticized, and other titles have been offered or used as substitutes. Alternatives include: complex information processing, machine intelligence, expert systems, knowledge engineering, and machine thinking. An examination of the meaning of the term Artificial Intelligence itself provides insight into some of the fundamental questions that arise in connection with the subject. One approach to the question of what constitutes AI is to break it down into two subsidiary questions: how do we define intelligence, and what do we mean by artificial? The lack of agreement among researchers and scholars on the subject of AI is not surprising viewed in the light of the difficulty of even obtaining consensus on the meaning of these component terms.

 While we routinely make reference to the idea of intelligence, the concept is deceptively simple in appearance. How many times has someone been described as follows: “He is very intelligent but he’s not very practical; he just doesn’t have much common sense.” What does this mean? What is the difference between intelligence and common sense? Is intelligence the ability to think or reason, to learn, to understand, to apply knowledge to manipulate one’s environment, or to think abstractly? These are all definitions found in Webster’s dictionary. Does intelligence require all of these elements to be present or only some? Are there different types of intelligence, or can intelligence be seen as a continuum with varying degrees? It is very easy at this point to get lost in a semantic jungle of constitutive definitions. If intelligence is the ability to think or understand, what does it mean to think or understand; what is the nature of thought and understanding? These are not trivial questions. Although we have an everyday understanding of these words, it does not require too close an inspection of the concepts they represent to realize that they are not rigorously defined. And yet, in order to have a widely acceptable definition of what would constitute an artificial intelligence, a clearer understanding of natural intelligence certainly seems desirable, if not necessary.

 Churchland (3) refers to this issue as “the semantical problem” and asks: “where do our common-sense terms for mental states get their meaning?” Ultimately, we can only directly experience our own mental states; we must accept that others share the same processes of thought and feeling based on the apparent presence of such processes as expressed verbally and through behavior. This is a very important point and will be taken up in more detail later in the section on strong AI.

 Richelle (19) states that “although scientific psychologists have been studying intelligence for a century they do not seem to have come closer to a widely acceptable, consistent general theory of intelligence.” It strikes me as not too surprising that science has not defined intelligence, given the lack of agreement among philosophers and scientists alike as to the fundamental nature of the mind. We have not progressed even to the point of resolving the conflict between various forms of neo-Cartesian dualism and those of materialist monism as models of the mind. The question of the nature of the mind and that of intelligence would seem to be so closely related that neither can be answered without addressing the other. This seemingly forces us to expand our query and ask what the elements are that comprise the mind. Our definition of intelligence seems to leave out many fundamental functions and abilities that we consider aspects or attributes of the mind and that would be necessary for the reproduction of the full range of human cognition. When I set out to research the question of what is an adequate definition of intelligence by which the goal of creating artificial intelligence could be delineated, I expected to be able to find or produce a singular, coherent and elegant definition of intelligence. I have been unable to do so. A review of the history of AI research reveals that my experience parallels that of others. Many of the early researchers of AI, machine translation and related fields initially approached these subjects rather optimistically, only to discover that the issues involved were vastly more difficult than they appeared on the surface. However, after much reading I finally came across a definition of intelligence worth quoting. It is a rather personal definition given by Marvin Minsky (329):

 Intelligence: A term frequently used to express the myth that some single entity or element is responsible for the quality of a person’s ability to reason. I prefer to think of this word as representing not any particular power or phenomenon, but simply all the mental skills that, at any particular moment, we admire but don’t yet understand.

 While not at all satisfying as an answer to what intelligence is, it is noteworthy for suggesting what intelligence probably is not, and, I think, it honestly and accurately expresses the lack of human understanding of our own selves.

 Though I find the standard definition of intelligence offered by psychology to be lacking for the purpose of exploring the subject of artificial intelligence, I would be remiss if I did not include it here. A frequently cited definition comes from David Wechsler, who tells us that intelligence is “the aggregate capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment” (Hynd and Willis 135). This is somewhat useful as an overall definition but is far too broad to determine whether a given system is intelligent, much less to serve as a blueprint for what specifically should be included in designing an artificial intelligence. It also leaves us with the need to define what “rationally” and “purposefully” mean to us. While discussing the definition of intelligence, Humphreys states that “most accounts in the psychological literature assume that intelligence is an innate capacity or learning potential. This is especially characteristic of those who use and interpret clients’ scores on intelligence tests....People who have been active in test development do tend to narrow the scope of the supposed capacity. They use descriptive phrases, such as the manipulation of symbols, dealing with abstractions, mental adaptability, adjustment of thinking to new requirements, and so on. These statements about intelligence are, in effect, content analyses of the items that appear in the tests” (202). A review of the psychological literature reveals a tendency to focus much more attention on developing measures of intelligence than on trying to understand the nature of the thing being measured. Ultimately, this results in intelligence being defined as whatever it is that the intelligence test measures (Guilford 225).

 Given all the above, the best that can be done at present may be to create a patchwork definition that includes all the elements that seem to comprise the incredibly complex and multi-faceted phenomenon we refer to as intelligence. Originally, much of AI research focused on logic, problem solving and heuristic decision making. Much progress has been made in these areas. There are computer programs that can play chess better than all but a few world-class human chess masters; expert systems are routinely used in business, industry and medicine, among other fields; and there are a multitude of software products from databases to spell-checkers that have benefited from the efforts of computer science to incorporate “intelligent” features into software. But as a whole, the goal of AI keeps receding like a mirage on the horizon. As the more straightforward problems are sorted out, it becomes more apparent that much of the behavior we ascribe to the presence of intelligence is not so much linear in nature as nonlinear, and may be less well represented by a strictly digital model than by one that is at least partially analog. As Kandel (194) points out from a neurobiological point of view, “some of the most remarkable activities of the brain, such as learning and memory, are thought to emerge from elementary properties of chemical synapses.” The complex arrangement of information processing in the brain results from a combination of electrical and chemical signaling. Unlike the discrete state machines we know as computers, which are binary and typically organized around eight-bit units (bytes) that can each represent one of 256 states, the brain has a more subtle nature due to its chemical makeup. I will consider the differences in makeup between a biological brain and a digital computer, and their implications for creating AI, at greater length later in the section on strong AI. The point I wish to make here is that a complete description of intelligence will have to include more than the reasoning and logical aspects of cognition. Those parts that lend themselves to duplication by orderly algorithms of the type we are familiar with from computer science may be only one aspect of intelligence, and the easier part of the puzzle at that. The most important and more difficult aspects of thought may prove to be functions of chemistry and much more complicated to model.

 While we tend to think of the mind and intelligence as being logical and rational, I think that much of our existence is neither of these. Consider the following list of seemingly inherent aspects of ourselves as sentient beings: emotion, common sense, creativity, holistic and analogical thinking, spirituality, understanding, inspiration, and intuition. An initial response may be to say that things such as emotion and spirituality are really quite distinct from intelligence, but how are they? To what degree is our thought guided by our emotions, our goals by our beliefs, and our understanding of things by the feelings we hold towards them? In relation to the role of cognitive processes such as thinking and memory, Goldstein (7) states that “these processes are both an outcome of the perceptual process and a determinant of that process.” Our conscious being is iterative in nature; previous experiences and the attitudes and ideas that we hold influence our current perception and understanding.

 There is also danger inherent in the reification of concepts such as mind, consciousness and intelligence. Attributing abilities in logic, common sense, creativity and all the other aspects of cognition that seem to reflect intelligence to a single construct may be wrong. In Minsky’s definition of intelligence given above, “the myth that some single entity or element is responsible for the quality of a person’s ability to reason” that he refers to is defined in psychology as the factor g (designated by C.E. Spearman as representing general intelligence). Humphreys concurs, stating that “intelligence is observable. It is not a capacity.” On the other side of the argument, Hynd and Willis (135) state that “intelligence is a general class of behavior” and that g is the “unifying element for the individual behaviors composing that class ... the presence of this factor is at least implicitly understood in nearly all theories of intelligence.” John Horn (271) believes that the notion of general intelligence is faulty and that the idea should be dispensed with in favor of multiple concepts that adequately represent the behaviors attributed to general intelligence. He supports his argument by stating that “existing evidence about the nature of human capacities provides little basis for the belief. This evidence indicates that several distinct functions are involved in performances that are classified under the heading of intelligence. These distinct functions probably have distinct genetic bases, distinct courses of development in infancy, childhood, and adulthood, and distinct implications for understanding human retardation, human accomplishments, human creativity, and human happiness” (273). I would add that this view of human intelligence also has distinct implications for the development of artificial intelligence. If intelligence is a class of behavior rather than some mysterious property, then theoretically we can reproduce intelligence by modeling intelligent behavior without necessarily requiring a complete understanding of the underlying process. While the range of behavior we associate with intelligence may not be dependent on a single factor of general intelligence, I will next consider the possibility that we can develop a general theory of intelligence.


General Theory of Intelligence
 
As we have seen above, our understanding of human intelligence is at best imperfect. Likewise, the distinction between artificial and natural intelligence is not very clear in qualitative terms. If our goal is to create machines capable of intelligent behavior, we would certainly benefit from a better understanding of intelligence. What is called for, however, is not only an understanding of human intelligence, but of intelligence in general. Johnson-Laird (165) identifies the need for a general theory of thought as a prerequisite: “A complete theory of thinking should enable us to construct thinking machines, that is, devices that instantiate the theory and that are able to deduce, induce, and create. Any account that fails this test is, at best, a sketch for a cognitive science. Past theories had neither this goal before them nor the conceptual equipment to achieve it.” Along similar lines, Wagman calls for a general theory of intelligence that “entails the specific components of intellect as conceptualized in the domains of human and artificial intelligence. These specific components include the conceptual areas of reasoning, language, learning, and discovery” (Wagman xv). Rather than simply obtaining a model of human thinking on which to model thinking machines, he examines and contrasts aspects of intelligence in terms of both natural and machine intelligence. This approach has much to commend it and serves as a good place to begin in examining what principles underlie the behaviors we consider intelligent.
 We need to develop an understanding of intelligence that is platform independent. The most obvious source of information on non-human intelligence comes from ethology and evolutionary theory. Herbert Terrace (123) contrasts the positions of Descartes and Darwin, citing Darwin’s belief that “it is just as logical to say that the human mind evolved from animal minds as to say that human anatomical and physiological structures evolved from their animal counterparts.” Descartes’ belief that animals cannot think was based in part on his belief that they were not capable of the facility of language. Terrace examines various animal studies, including attempts to teach language to apes, and concludes that while animals do not have the ability to use grammar in a generative fashion, this does not mean that they do not possess thought. This leads us to the question of the nature of animal thought. If animals possess thought, albeit in a non-linguistic medium, then clearly we have about us a variety of non-human minds that can be modeled. While human, language-based thought seems obviously superior to that of other living creatures on this planet, the possibility of the artificial creation or natural evolution of other higher-order intelligence of a fundamentally different nature should not be discounted out of hand. The only certainty about life is that it is not static. In any kind of long-term scenario, life must be evolving or devolving. Considering the evolution of human and animal intelligence to the present may allow us to project what possible future directions evolution might take.

 If for no other reason, the development of models of animal minds should be of interest because an understanding of them will likely contribute to our understanding of ourselves through an understanding of our origins. As pointed out by Blakemore and Greenfield (109), “the principle of continuity makes it inconceivable that the human mind has no precedent in the biology of animals.” Understanding the nature of the continuity of development between non-linguistic and linguistic minds may shed light on the fundamental principles of thought and consciousness. Even further, when we consider the definition of human intelligence we might also try to imagine how it might differ from an alien intelligence. This approach to understanding the mind allows us to broaden the narrower, logic-based view that I discussed earlier, which is held by many philosophers, psychologists, and computer scientists, as exemplified in the quotation of Wagman given above with its emphasis on the “conceptual areas of reasoning, language, learning, and discovery.” Wagman’s definition is excellent as far as it goes, but as the ultimate goal, I envision a general definition of intelligence that is more inclusive, allowing for other forms of life known to us as well as those that might as yet remain undiscovered elsewhere in the universe. Wagman writes of the need to develop a system of symbolic logic that will support both machine and human intelligence. He states that such a system is a necessary though not sufficient condition for the development of a general theory of intelligence. I think that the remaining conditions will prove to be more complex than the requirement of symbolic logic, as they must account for the more ethereal aspects of our minds.


What is Understanding?
 
 If the question of the nature of intelligence seems difficult, that of understanding is even more elusive. What does it mean to understand something? How do we understand, for example, the meaning of words? How do we humans relate words with the world? An appreciation of this is necessary for us to approach the question of whether it is possible for a computer to have real understanding. Computer programs such as ELIZA can exhibit the appearance of understanding by conducting a seemingly intelligent conversation but obviously do not have any real understanding (Schank 17-19). The conversation will appear sensible only as long as one is willing to follow along with the conversation and respond appropriately. While ELIZA has respectable syntactical and morphological abilities, the program is very limited in terms of semantics. Its conversation does not have any meaning for either the hardware or software. It doesn’t understand the words it uses. And yet, if within a limited domain the program can hold up its end of a conversation, what is the difference between its performance and that of a human interacting with it? What is the difference between ELIZA and me? Clearly there is a great difference. When we say that the computer does not possess understanding, that the words have no real meaning for it, we are saying that it does not relate the symbols (the words) that it manipulates to the things they represent in the world. As Robinson explains it, computers have no “world-word connections” (35). The brain is much like a computer in that it is a processor of information, but with two differences. First, humans have various forms of sensory input that allow the brain to receive information about the world; secondly, we develop internal models of the world and the things in it, including ourselves (Minsky 110). We relate words with internal representations, which in turn are linked through sensory experience to objects and events in the real world. We can then use words as symbols for the things they represent in the world, to manipulate our internal models, and to communicate with others. Communication, then, is the ability to relate our internal models to those of others by exchanging bits of our model through language. By comparing aspects of our model with those of others or with things and events in the real world, we can make changes in our model. These changes are commonly referred to as learning. So for a computer to have understanding, it must first have world-word connections, which require an internal model of the world. This will be considered further in the section on how to obtain AI.
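 The purely syntactic character of a program like ELIZA is easy to see in miniature. The following sketch is a toy of my own construction rather than Weizenbaum’s actual program, but it shows the kind of pattern-matching substitution involved: the program rearranges the user’s own words according to templates, with no internal model for the words to connect to.

    import re

    # A few ELIZA-style rules: a pattern to recognize and a template
    # that echoes the matched fragment back. These rules are invented
    # for illustration; the real program used a much larger script.
    rules = [
        (r"i feel (.*)", "Why do you feel {0}?"),
        (r"i am (.*)", "How long have you been {0}?"),
        (r"my (.*)", "Tell me more about your {0}."),
    ]

    def respond(sentence):
        text = sentence.lower().strip(".!?")
        for pattern, template in rules:
            match = re.match(pattern, text)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # default reply when nothing matches

    print(respond("I feel lost in a semantic jungle."))
    print(respond("My computer does not understand me."))

 The clumsiness of the second reply makes the point: the program manipulates character strings, not meanings.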
 Much of AI research has centered on a top-down, linear algorithm approach. This is a natural result of focusing on the logical, problem-solving aspect of intelligence. But as Schank (1992, 138) notes, this accounts for only a small portion of the activity involved in human intelligence. Much of human intelligence is involved in a contextual understanding of the world about us. Bits of knowledge and understanding do not exist in a vacuum, but in relation to the totality of knowledge and understanding. This is a critical point in understanding the problem of AI and is key to the question, examined below, of whether “strong AI” is theoretically possible.


What is Artificial Intelligence?
 
 In spite of the difficulty in obtaining a clear and rigorous definition of intelligence, I will press on and examine the issue of artificial intelligence. I will start by noting the ease with which we can assume an intuitive understanding of what is meant by artificial intelligence, in the same way that, based on familiarity and everyday usage, we assume an understanding of the term intelligence without being very precise about what is involved. With this in mind, I will begin by examining several definitions of what constitutes AI as a field of study. 
 A conservative and fairly common definition given by Wagman (2) states that “the field of artificial intelligence is a specialized discipline within computer science that is directed toward the continuous augmentation of computer intelligence.” The idea of AI as a subfield of computer science is extremely narrow and ignores the interdisciplinary nature of the subject. It also unnecessarily restricts our thinking about intelligence, given the general tendency of computer science to focus more on logic and linear programming, while ignoring the messier aspects of intelligence found in biology as well as the broader philosophical questions pertaining to life, thought and perception. I prefer Boden’s frequently cited definition, though it cleverly skirts the issue of the nature of intelligence itself. She writes that “Artificial Intelligence is not the study of computers, but of intelligence in thought and action. Computers are its tools, because its theories are expressed as computer programs that enable machines to do things that would require intelligence if done by people” (Boden xiii). While Wagman calls for the development of a general theory of intelligence, he assigns this task to cognitive science and leaves the actual implementation of any AI system to computer science. Boden, by contrast, sees computer science as a field to be tapped for its tools by AI. Taken further, we could even say that AI is itself a tool for modeling theories of cognition for the purpose of understanding human thought. I am inclined to view AI as a complex, interdisciplinary enterprise that both requires input from a variety of fields and has practical and theoretical modeling applications for numerous fields. Along these lines, Winston (6) helps define AI, stating that “from the perspective of goals, artificial intelligence can be viewed as part engineering, part science:
* The engineering goal of artificial intelligence is to solve real-world problems using artificial intelligence as an armamentarium of ideas about representing knowledge, using knowledge, and assembling systems.
* The scientific goal of artificial intelligence is to determine which ideas about representing knowledge, using knowledge, and assembling systems explain various sorts of intelligence.”
This general division of AI into scientific and engineering camps given by Winston is echoed by Shirai. He describes the scientific school as “aiming at understanding the mechanisms of human intelligence,” with the computer “being used to provide a simulation to verify theories about intelligence.” The engineering school has as its object “to endow a computer with the intellectual capabilities of people.” Shirai goes on to say that “most researchers adopt the...[engineering] standpoint, aiming to make the capabilities of computers approach those of human intelligence without trying to imitate exactly the information processing steps of human beings.” This reflects Shirai’s computer science orientation, and yet he comes very close to Boden in defining AI as the ability of a machine to emulate human output, even though the process may differ. This would seem to differ from Winston, whose perspective is that of computer science, and who in his engineering goal seems to view AI more as a toolbox to apply to practical problems, with little interest in solutions having any overall resemblance to human intelligence.

 Winston’s and Shirai’s understanding that AI is not restricted to a single goal provides a framework that allows for inclusion of the two otherwise conflicting descriptions of AI given by Boden and Wagman. I think both the pursuit of AI as an engineering product and as a tool to develop and test theories of cognition are valid goals. The engineering school may do well to pay more attention to the work done in other disciplines, but the development of useful AI systems need not depend on a full understanding of the human mind, nor need such systems mimic it very closely.

 I will now shift my attention to another aspect of what is meant by artificial intelligence: the question of the distinction between artificial and natural. I have already considered the question of what constitutes intelligence, focusing mainly on human intelligence, and examined the need for a general theory of intelligence. It will now be useful to consider the use of the word artificial in the context of AI.

 In what respects can intelligence be artificial? Robert Sokolowski examines the issue in a very lucid article entitled “Natural and Artificial Intelligence,” which I will draw on here. There are two senses of the word artificial which we need to distinguish. The first is the idea of something that is real but synthetic. Things such as artificial light or man-made diamonds are artificial, but they are also real. Artificial light illuminates in the same way natural light does, though depending on its wavelengths it may have different properties. An artificial diamond is constructed of carbon formed at high temperature and pressure, the same as a naturally occurring stone. In both cases the artificial, synthetically produced item is as real as the naturally occurring phenomenon it is modeled after. The second usage of the term artificial implies something that is not really the thing it imitates. This is the case with artificial flowers, which do not possess any but the most superficial qualities of real flowers. They may appear to be flowers but they are not, which is usually obvious on closer inspection. Any real understanding of AI depends on our recognition of the distinction between these two connotations. The school of thought that holds out the possibility of genuine intelligence, though artificially constructed, is often referred to as strong AI (Searle 210, Penrose 17-23, Gregory 237). This is as opposed to weak AI, the idea that advances in computer science will never produce anything that can really think, only clever imitations that will superficially imitate intelligence. The strong versus weak AI debate is currently unresolved. It is of interest to writers from a wide range of disciplines, from philosophy to physics. At its core is the age-old question of the nature of consciousness. The problem of how the physical body gives rise to seemingly ethereal consciousness, the mind-body problem, is certainly “one of the great unsolved scientific and philosophical problems of our time” (Goldstein 4). Llinás opines that for the neuroscientist, “the single most important issue one can address concerns the manner in which brains and minds relate to one another.” Aside from any practical benefits that may stem from the development of AI, the subject is certainly worthy of exploration to the extent that it will almost certainly shed light on the nature of our own minds.


Strong AI and its Critics
 
In the history of Western civilization there has been an acute awareness of mankind occupying a unique position in the scheme of life. It is seen by some as the pinnacle of evolution, and by others as a privileged place below God and the angels but above the rest of creation. Norbert Wiener (12) wrote thirty years ago that “in our desire to glorify God with respect to man and Man with respect to matter, it is thus natural to assume that machines cannot make other machines in their own image; that this is something associated with a sharp dichotomy of systems into living and non-living; and that it is moreover associated with the other dichotomy between creator and creature.”
 The distinguishing factor between humans and the animal kingdom has been variously believed to be the imbuement with an eternal Soul, the use of language or of tools, or the possession of intelligence and cognition. Whatever the case, there still exists the belief that man occupies a special place in the cosmos. It is not surprising that the idea of a machine attaining consciousness has many detractors; that mere mechanical devices could, through sophistication of design, acquire the same qualities that distinguish ourselves is very disturbing to many people. This is understandable for the man of religion, whose worldview is shaken by the blasphemy of Man as Creator and by the implications relating to his belief in an eternal soul. I choose not to address this situation since it is properly a matter of faith rather than science. However, even more hostility comes from intellectuals of the secular humanist tradition. Many fear the implications that the reduction of human intelligence to an algorithmic description would have on the question of whether there exists free will or if the universe is deterministic. This fear is largely unfounded. Even given a deterministic universe, we are unable to predict specific, long-term outcomes in nonperiodic dynamic systems with any great certainty or precision of detail (Gleick 15). This applies also to predictions at the level of individual human behavior. As Babbie (62-63) points out, there is no logical difference between the idiographic and nomothetic models of explanation as they relate to determinism. They differ only in the number of variables used in explanation; they are equally deterministic. It is likely that by use of a sufficiently detailed model of the universe (or the weather, or an individual and their environment, or whatever) predictions of future events and behavior of a precise and exact nature could be made. As Frederick Turner emphasized in several class lectures (Fall 1992), the problem lies in that the size and detail of the model would have to be coextensive with that of the subject being modeled. Short of this, of course, science can make many useful predictions based on an approximate knowledge of initial conditions and the rules of behavior of the system under consideration. But even given an accurate understanding of the rules of behavior (natural law), small errors or a lack of precision in the description of initial conditions will result in divergence over time between the model and the system it represents (Gleick 14-18). Thus, even if we can reduce consciousness and intelligence to an algorithmic level of description, they are most likely to prove as unpredictable as other complex, nonlinear systems. Much like a Lorenz attractor (Gleick 28), the mental activity of a conscious being is never static; it is constantly changing, always within a general set of parameters but never repeating itself exactly. It is both predictable in its general manifestation and totally unique in its infinite states of complexity. We can allow ourselves the luxury of having faith in scientific determinism along with the sense of relief that there is a randomness in life that can allow for the spiritually gratifying perception of chance and free will.
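 The divergence described above is easy to demonstrate numerically. The following sketch of my own, a crude Euler integration of the Lorenz equations with the classic parameter values and intended only as an illustration, follows two trajectories whose initial conditions differ by one part in a billion; the separation between them grows until the trajectories no longer resemble each other.

    # Two nearly identical starting states in the Lorenz system,
    # integrated side by side. The tiny initial difference is
    # amplified until the trajectories are effectively unrelated.

    def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return (x + sigma * (y - x) * dt,
                y + (x * (rho - z) - y) * dt,
                z + (x * y - beta * z) * dt)

    a = (1.0, 1.0, 1.0)           # first trajectory
    b = (1.0, 1.0, 1.0 + 1e-9)    # second, perturbed by one part in a billion

    for step in range(1, 40001):
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 8000 == 0:
            gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
            print(f"t = {step * 0.001:4.1f}   separation = {gap:.3e}")

 No matter how the model is refined, any imprecision at all in the initial description eventually dominates, which is the point being made about prediction at the level of individual behavior.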
 The theoretical question of whether AI can “really” be intelligent can be refined stepwise to whether it can have real understanding, which in turn requires it to be sentient and thus to be alive. So when we question the possibility of man creating intelligence, we are considering the creation of life. What are the basic requirements for a system to be considered alive? I suspect that our definition of life is rather narrow. If we can break out of the thinking that all intelligent life is carbon based, oxygen breathing, warm and fuzzy, then we can begin the process of extending the possibility of life forms evolving from man-made machines. Science fiction writers, bound only by imagination, demonstrate through fantasy that the world we inhabit is only one possibility among many that the mind can imagine. Works such as A Voyage to Arcturus (Lindsay) have allowed me to see that this property called life can be understood with a plasticity not otherwise obvious in day-to-day experience. But in the real world, can a computer have intelligence, understanding, feelings, etc.? Many people intuitively assume that a computer cannot really feel because it is made of inanimate matter. Yet we humans too are formed from animate matter, from an unbroken chain of life that dates back to some primordial beginning. But though we are animate matter, what is animate matter more than inanimate matter that has been animated? What is it that comprises an organic life form that is not simply matter? What causes animate matter to be more than the sum of its inanimate parts? The science of biology provides that “the fundamental difference between living and non-living matter lies in the organization of molecules as revealed in different levels of complexity” (Hickman and Hickman 3). If life is simply the information found in the complex arrangement of molecules, then the creation of artificial life and intelligence is clearly within the realm of the possible. If there is no mysterious vitalistic force or immortal soul of divine origin to contend with, then Man as Creator of Life is a matter of understanding the information contained in the arrangement of matter. Similarly, the creation of a mind based on a non-organic processor, such as a digital computer comprised of silicon chips, might be achieved by the development of software (an arrangement of information) that provides for the sufficiently complex manipulation of symbols. Along these lines, Mazlish (196) points out that in quantitative terms “the human mind is fixed in its number of neurons and possible synapses. In principle, the computer is not; more and more circuits can be built into it.”
 The features that most distinguish life from inanimate matter are the ability to reproduce and, in the generational process, to evolve. This is a basic and easily met condition for software code. There already exists a wide range of computer viruses. Once created and released into what amounts to an ecosystem consisting of the world’s digital computers, these primitive life forms reproduce and mutate. So while we can see that computer-code-based forms of life can reproduce and take on an existence of their own, even in spite of the attempts of computer users to eradicate them, I question the necessity of an AI life form reproducing as a prerequisite to being considered sentient or alive. If it can be transferred from one hardware host to another, backed up on a storage medium for archival use, travel around on a network, and even multitask by existing in more than one place at a time, then it would have considerably different properties of life than we do. The ability to make identical copies is not so different from that of organisms that reproduce asexually, for example by fission. But the issue of reproduction as a characteristic of life becomes less important in light of potential abilities not only to divide itself but to recombine its subdivisions into a whole again. The question of generational reproduction is much more important to biological organisms like ourselves, who have finite life spans without the possibility of directly transferring our consciousness to new bodies when the old ones wear out. Digital intelligence will not possess this limitation. William Gibson provides an exciting vision of some of the possibilities of AI as independent life forms in his novels Count Zero and Neuromancer. While this may all seem to be speculation and fantasy, and it is, there is great value in imagination and vision. At the risk of pressing the envelope of his intended meaning to support my point, I will quote Alan Turing, who wrote: “The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any unproved conjecture, is quite mistaken. Provided that it is made clear which are proved facts and which are conjectures, no harm can result. Conjectures are of great importance since they suggest useful lines of research” (Turing 442).
 In his seminal article “Computing Machinery and Intelligence,” Alan Turing proposed a test to determine whether or not a computer could be considered intelligent. In short, if a computer could carry on a conversation via keyboard with a human interrogator and respond to whatever questions the human asked with sufficient success that it could pass itself off as human, then it would be deemed intelligent. The idea is that the human could ask questions in such a way as to reveal either a real understanding of the subject matter at hand or a rather clumsy set of mechanical responses exhibiting a lack of real comprehension. Along this line of thought, McEachern (9) provides a twist on Arthur C. Clarke’s statement that any sufficiently advanced technology is indistinguishable from magic: any sufficiently advanced knowledge is indistinguishable from intelligence.

 A great deal of effort has been focused on the question of whether an AI passing the Turing test is possible and, if so, whether that would prove possession of intelligence. At the time Turing developed the idea for his test, computers were far from possessing the natural language processing and other skills necessary for such a performance. There still do not exist computers capable of such a feat, but the gap is narrowing as technology advances. The question is not so much whether computers will ever be able to pass themselves off as human as whether it matters in terms of intelligence. Would the ability to pass the Turing test on a consistent basis necessarily demonstrate intelligence? Writers such as John Searle and Roger Penrose think not. Searle claims to be able to refute the idea of strong AI, which he equates with the model of the “mind/brain” being represented by “program/hardware.” He has developed a thought experiment involving a “Chinese room” (Searle 213-214, Penrose 17-23). Briefly, a non-Chinese-speaking person in a room is given questions in Chinese to answer, which he does in a mechanical way, using a set of rules that allow the correct answers to be matched to the questions presented. All this is performed blindly, without the worker in the room understanding the content of either the questions asked or the answers given. In this scenario, the worker has processed information in a syntactic manner without the need to possess semantic understanding. Searle compares the Chinese room to a computer running a program based on a formal logic but lacking semantical understanding. To a Chinese-speaking observer of the Chinese room, it would appear that there is someone inside who obviously understands Chinese in order to answer the questions properly, and yet it seems obvious that this is not the case. In the same way that the non-Chinese speaker answers questions that have no meaning for him, a computer that can answer questions based on semantic scripts, and thus appear to understand the content, really doesn’t. Here we return to the point raised in our previous discussion of understanding: the need for contextual understanding. Words must have meaning both in relation to each other and to the things and ideas they represent. While it is difficult to see how a general purpose computer without sensory input could have much in the way of word-to-world connections, it could be sophisticated enough to have interrelationships between the symbols that designate words to an extent that constitutes semantical knowledge. Thus my objection to Searle’s logic is two-fold. First, while the man in the room does not understand Chinese, the room as a whole, functioning as a system, does. Secondly, Searle assumes that the “computer program has to be defined formally” (Searle 215). The combination of the worker and the rule book can be construed as allowing for the emergence of an intelligent system. The fact that a system could respond to questions that require semantic understanding for a human to answer implies that the ability to handle semantic information is built into the system in some fashion. Searle is correct that a strictly syntactical system cannot possess understanding, but such a system would also not be able to answer non-trivial questions outside a very narrow domain without the inclusion of semantical ability as a function of the rule book.
The efforts of AI, machine translation, and other natural language processing enterprises have been greatly hindered by the difficulty of including semantic abilities in their programs. And this has largely been a result of deductive, rule-based programming. But just because most programming has been linear, serial and logic based does not mean that all programs need be based on such algorithms. Fuzzy logic seeks to create machine intelligence using general rules rather than specific equations. Neural networks provide a bottom-up approach that is inductive, adaptive and parallel. The addition of these approaches to that of more traditional AI programming may allow for new breakthroughs in the development of AI.
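The contrast with rule-based programming can be made concrete with a toy example of my own, not drawn from any particular researcher’s system. The sketch below implements a single artificial neuron that learns the logical OR function from examples using the classic perceptron rule; nothing about the rule it learns is written into the program in advance, which is the bottom-up, inductive character referred to above.

    # A single neuron that *learns* logical OR from examples instead
    # of having the rule hand-coded. A toy: no claim is made that this
    # approaches intelligence, only that behavior is induced from data.

    examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

    weights = [0.0, 0.0]
    bias = 0.0
    rate = 0.1  # learning rate

    def predict(x):
        s = weights[0] * x[0] + weights[1] * x[1] + bias
        return 1 if s > 0 else 0

    # Repeatedly nudge the weights toward each example (perceptron rule).
    for epoch in range(20):
        for x, target in examples:
            error = target - predict(x)
            weights[0] += rate * error * x[0]
            weights[1] += rate * error * x[1]
            bias += rate * error

    for x, target in examples:
        print(x, "->", predict(x), "(expected", target, ")")

However trivial, the design choice is the point: the programmer supplies examples and an adjustment rule, not the behavior itself.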

Returning to my first point, I would say that the fact that it does not strike Searle or Penrose as plausible that the Chinese room as a system possesses understanding does not mean that it does not, given a broader concept of understanding and intelligence. If one replaces the worker with a team of workers, and compares them with the individual neurons (or even subsystems) in the brain, this becomes clearer. In the same way that “understanding” cannot be found in any given neuron or brain structure, neither is it found in any of the uncomprehending workers in the Chinese room, nor in the paper comprising the rule book of instructions that allows for the proper matching of answers to questions. It would seem that understanding is the result of the process of implementing the information in the rule book, the intelligence of the system being self-evident as expressed in its behavior.

 This leads back to the need for a working definition of what constitutes intelligence in terms of behavior. I do not believe that intelligence necessarily must mimic human ability in order to be useful. Why should an AI have to limit itself to acting ‘merely’ human (and to be able to lie) so as to pass the Turing test in order to be intelligent? Naturally, our inclination when we consider designing our own intelligent creation is to create it in our own image. When we focus on building a machine that can pass the Turing test, we are failing to consider to what practical purpose we would want to apply such a machine that might justify its development. Earlier we considered a list of mental abilities that are related aspects of intelligence: emotion, common sense, creativity, holistic and analogical thinking, learning, spirituality, understanding, inspiration, intuition, and natural language skills. Are all of these abilities necessary in an AI for it to be useful? It may be more realistic to view intelligence as a continuum of varying degrees, with many types of intelligence that do not resemble human intelligence. That human intelligence might serve as a model is logical, but we should not restrict ourselves to it in style, feature or capability. Further, an obviously alien character need not be a disadvantage to its intended role in society. If the key to intelligence is flexibility, then designing a machine that could pass itself off as human would clearly be useful for many things. But it seems to me that the goal of AI should be more than simply replacing man with a machine of similar abilities. What applications might require a general purpose, human-like AI? Is it even desirable to try to create an entity that fully replicates a human being? In general, focusing on narrow, more specific applications might be better. I think that there will be at least three very different areas of application for AI. First is the development of intelligent robotics that can perform jobs that would be dangerous or impossible for humans to carry out. Specific applications could include things such as deep-sea underwater welding, hazardous waste cleanup and the exploration of the surface of Venus. More mundane tasks have also been proposed (cite from Wired), such as hordes of miniature environmental workers doing everything from dusting and vacuuming indoors to lawn maintenance and controlling insect pests on crops. With the development of extreme miniaturization using nanotechnology there exist many other possibilities, some of the most interesting involving medical technology.

 The second type of AI application is closely related to the areas of computer science now generally called expert systems and knowledge engineering. These areas have developed out of the pursuit of more general purpose AI and are marked by their focus on limited subject domains. While impressive advances have been made in expert systems, these are not considered to possess real intelligence. They are often implemented to assist human experts in decision making. However, as these systems advance, they will almost certainly become more knowledgeable and reliable than most human experts. This is particularly true in relation to domains that require such large amounts of information to evaluate that it is impossible for most human experts to learn it all. Medicine is a clear example of an area where the body of knowledge is increasing so rapidly that practitioners cannot keep up with new information. Similarly, installations such as chemical refineries and nuclear power stations that have highly complex systems to monitor and adjust could be supervised by extremely sophisticated control systems that could react more quickly and reliably than humans, resulting in both safer and more efficient operation.
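 The core mechanism of such systems is easy to illustrate. The sketch below is a deliberately tiny, invented example of my own (the rules correspond to no real refinery or medical knowledge base): domain knowledge is written as if-then rules, and a forward-chaining loop applies them until no new conclusions emerge.

    # Minimal sketch of the rule-based style used by classic expert
    # systems. Each rule pairs a set of required conditions with a
    # conclusion; the loop fires rules until the fact base is stable.

    rules = [
        ({"temperature_high", "pressure_rising"}, "open_relief_valve"),
        ({"open_relief_valve"}, "alert_operator"),
        ({"coolant_low"}, "pressure_rising"),
    ]

    facts = {"temperature_high", "coolant_low"}

    # Forward chaining: fire any rule whose conditions are all met,
    # add its conclusion to the fact base, and repeat until stable.
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(sorted(facts))

 A real system would attach certainty factors and explanations to each rule; this loop shows only the core inference cycle.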

 These first two general areas of AI application do not necessarily require the level of attainment that would deserve the label of strong AI. However, their discussion is useful in promoting the recognition that there are different types and degrees of intelligence as manifest in intelligent behavior. This can help us recognize that it is overly restrictive to hold a single, human-based standard by which to judge whether or not a system is intelligent. Note also that the non-human aspects of these systems are among the features that make them most useful. We would not want to create human-like robots to do work that was extremely repetitive and dull or, on the other hand, extremely dangerous. In such cases, cognitive features such as imagination and emotion would interfere with performance of the intended task. Likewise, to endow an expert system with language abilities that included a sense of humor and a tendency to joke around might lessen its value.

 The third application of AI returns us directly to the subject of strong AI. It is the development of artificial minds that are sentient and intelligent, possessing knowledge, wisdom and intentionality. We should be pursuing the development of a higher life form. Whether or not it could pass itself off as human would not be that important. The idea that strong AI should be geared to producing something that will pass the Turing test is dated. The goal of AI should be to produce independent intelligence (or simulations of intelligence for particular purposes), but this intelligence need not resemble human intelligence. Indeed, ideally AI should be clearly non-human, distinguished by its superiority to the human mind in capacity and capabilities. I will address further the idea of creating highly intelligent life and its potential benefits to society in the final section of this paper, “Why AI?”

 I have so far avoided discussion of the moral implications of creating artificially intelligent life forms. As AI and especially intelligent robotics are developed, there will arise the need to deal with the legal and ethical questions that follow, as well as questions as to what dangers might lie in developing such an entity. I cannot attend to these questions in this space but wish to acknowledge them while considering the implications of strong AI.


How to Obtain AI?
 
How can you program something you don’t understand?

The development of Artificial Intelligence is a complex problem. Researchers have been actively working on approaches to it since the mid-1950s. Rather than accepting Wagman’s narrow definition of the field as “a specialized discipline within computer science,” I am inclined to view AI as an interdisciplinary subject that will require a broad range of inputs from many disciplinary areas if anything more than narrow expert systems are to be developed. Ultimately the implementation of AI will fall to computer science, but the solution to the problem is much broader than software programming or hardware design. This is to say that the body of knowledge required to create AI is not a subset of that found in computer science. Of course, that could be said of many computer applications: systems analysts work with end users to discover their needs and consult specialists in the application field to determine what the system is required to do. Software development follows a standard formula to organize the production of software programs, consisting of five basic steps: requirements specification, analysis, design, implementation, and testing and verification. “Software engineers and software developers use the following software development method for solving programming problems....The first three steps are critical; if they are not done properly, you will either solve the wrong problem or produce an awkward, inefficient solution.

1. Requirements specification. State the problem and gain a clear understanding of what is required for its solution. This sounds easy, but it can be the most critical part of problem solving. A good problem solver must be able to recognize and define the problem precisely.
2. Analysis. Identify program inputs, desired outputs, and any additional requirements or constraints. Identify what information is supplied as problem data and what results should be computed and displayed. Also, determine the required form and units in which the results should be displayed (for example, as a table with specific column headings).
3. Design. Develop a list of steps (called an algorithm) to solve the problem, then verify that the algorithm solves the problem as intended. Writing the algorithm is often the most difficult part of the problem-solving process. Once you have the algorithm, you should verify that it is correct before proceeding further.” (Koffman 14)
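 To make the contrast drawn below concrete, here is the quoted method applied to a deliberately trivial problem of my own invention; for conventional programming tasks like this one, the first step is nearly effortless, which is precisely what does not hold for AI.

    # 1. Requirements specification: given a list of Celsius readings,
    #    report each one in Fahrenheit.
    # 2. Analysis: input is a list of numbers (degrees C); output is
    #    one printed line per reading, shown to one decimal place.
    # 3. Design: for each reading, apply F = C * 9/5 + 32 and print it.

    def celsius_to_fahrenheit(c):
        return c * 9.0 / 5.0 + 32.0

    readings = [-40.0, 0.0, 36.6, 100.0]
    for c in readings:
        print(f"{c:6.1f} C = {celsius_to_fahrenheit(c):6.1f} F")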
 Koffman’s outline is very revealing as to the difference in perspective between what might be a viable approach to the development of AI and the standard approach to programming. Even though the requirements specification stage is identified as the most critical step in the software development method, the development of the algorithm is given as the most difficult task. In the real world of software design, there is a tendency for programmers to want to jump right in and start writing code. The need to produce something in a production time frame limits the amount of time available for the requirements specification stage. In the case of AI, the writing of the algorithm may prove to be almost anticlimactic compared to the discovery of the base knowledge required to understand the nature of what is to be created.

 What sorts of things need to be understood to develop AI? A better understanding of how biological brains work would certainly help. An interdisciplinary approach to the mind-body problem, combining cognitive science and neuroscience and using the modeling tools available from computer science, seems promising. While the human brain is only one of many possible arrangements that could produce intelligence, its modeling would certainly be a natural starting point in the search for broader principles of intelligent systems. Frederick Turner (128-129) writes of the development of AI: “The nervous system is a piece of complex wiring created over eons of evolutionary time to protect and serve the body as a whole; and its emergent properties of consciousness, individuality, and so on are a kind of technology. Natural intelligence is artificial, in this sense; the problem of artificial intelligence has already been solved, and all we need to do is to understand and duplicate the solution.”

 Turner refers to consciousness as an emergent property. In discussing the question of the Chinese room I touched upon this idea when I stated that the room could be considered an intelligent system. If the distinction between animate and inanimate matter is found in the complexity of its arrangement, then the properties of life can be seen as emergent, depending on the organization of matter rather than being inherent in the matter itself. This idea is not a new one and is discussed by Pattee, who writes: “Emergence as a classical philosophical doctrine was the belief that there will arise in complex systems new categories of behavior that cannot be derived from the system elements. The disputes arise over what ‘derived from’ must entail. It has not been a popular philosophical doctrine, since it suggests a vitalistic or metaphysical component to explanation which is not scientifically acceptable. However, with the popularization of mathematical models of complex morphogeneses, ... and the more profound recognition of the generality of symmetry-breaking instabilities and chaotic dynamics, the concept of emergence has become scientifically respectable” (71-72). Churchland (13) supports this explanation of mind, stating that “the case for the evolutionary emergence of mental properties through the organization of matter is extremely strong. They do not appear to be basic or elemental at all.”

 The idea of emergence works well with the bottom-up approach of neural networks and allows for properties such as generative language, learning and imagination. It differs from a linear, top-down approach to programming in that all details of the system’s behavior do not have to be formalized in specific rules in advance. Combined with the idea of massively parallel distributed processing, it can allow for extreme complexity and could generate output not explicitly contained in either its input or its program. Thus it could have the ability to create new knowledge, and qualify as intelligent in the most restrictive sense. Following the model given by Marvin Minsky in The Society of Mind, we could create an artificial intelligence that has the appearance of being a singular, general purpose entity but is comprised of a collection of narrow, expert-system type components. There would be a front end system, or even a hierarchy of such systems, coordinating the activities of other functional and domain specific subsystems. Set up on a distributed network or parallel processing hardware, it could break down complex questions into component parts, solve the various aspects, and then recombine the parts into a whole. This last aspect, that of synthesis, is the most difficult, involving as it does the question of holistic thinking. Hillis (175) critically explores the idea of intelligence as emergent behavior. He writes that “the notion of emergence would suggest that such a network, once it reached some critical mass, would spontaneously begin to think.” But he goes on to caution that “this is a seductive idea because it allows for the possibility of constructing intelligence without first understanding it” (176). The idea of critical mass is very interesting, though Hillis does not discuss it in any further detail. It seems to me that critical mass should certainly be interpreted in terms of the complexity of the system as well as the quantitative aspects of size and speed. Turing (454) explores the idea of categorizing minds in terms of critical mass. A mind with “sub-critical” mass will respond to the input of ideas with less than a one-to-one output of ideas in response, whereas a “super-critical” mind may generate extended trains of thought leading to the development of propositions and theorems. In humans, the difference between sub-critical and super-critical minds would seem to be accounted for not so much by differences in the physical constitution of the brain as in its function. This may not be true, but study of the question of why some minds are so much more fertile than others might lead to some interesting insights into the nature of intelligence.
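 A skeleton of the coordinating arrangement just described might look like the following. Every name in it is invented for illustration, and the hard problem identified above, genuine synthesis, is reduced here to naive concatenation; the sketch shows only the shape of a front end dispatching sub-questions to narrow specialists and recombining their answers.

    # A front end coordinating narrow, expert-system-like subsystems.

    def spelling_expert(word):
        return f"'{word}' is spelled {'-'.join(word.upper())}"

    def arithmetic_expert(a, b):
        return f"{a} + {b} = {a + b}"

    def front_end(query):
        """Decompose a compound query, dispatch each part to a
        specialist, and stitch the results back together."""
        answers = []
        for kind, payload in query:
            if kind == "spell":
                answers.append(spelling_expert(payload))
            elif kind == "add":
                answers.append(arithmetic_expert(*payload))
            else:
                answers.append(f"no specialist for '{kind}'")
        return "; ".join(answers)  # the (naive) synthesis step

    compound_query = [("spell", "mind"), ("add", (2, 3)), ("dream", None)]
    print(front_end(compound_query))

 In a serious design the specialists would run in parallel and the synthesis step would itself be a coordinating subsystem, which is exactly where the question of holistic thinking re-enters.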

 I will conclude this section by emphasizing that the development of AI should be viewed as a long-term goal of the scientific community. Alan Turing was cautiously optimistic when he estimated that a thinking machine could be produced if a team of sixty workers devoted themselves to it for fifty years... and nothing went into the wastebasket. The over-optimism of early AI researchers and the subsequent failure of their prophecies to materialize have given ammunition to those who would deny the possibility of AI. The resulting disillusionment led to individual discouragement in the face of the seemingly intractable nature of the subject, as well as to across-the-board cuts in AI research funding. This is a project of the next century or more. It is not a short-term enterprise with a commercial business cycle and instant gratification for stockholders. The quest for AI should be viewed more like the building of a Gothic cathedral: a collective project that spans generations.


 
Why AI?

Man as Creator

 Creativity and the desire to create are central to the description of humanity. One of Man’s greatest obsessions has been the creation of life, the need to understand how He came to be. Tales of the creation of the world, of nature and of Man himself provide a common thread in the oral traditions and literature of every culture. Other stories narrate the future and the end of the world. These stories represent not only the need to explain our origin and destiny, but also the wish to control them. It is our nature as human beings that we exist as part of extended social units which depend on constructs such as language and technology to function. Human society is defined by its culture, which in turn is comprised of its constituent artifacts. Included among these artifacts are supernatural beings, mythical creations of mankind. That these beings really serve a practical purpose is suggested by the fact that the impulse to create and anthropomorphize superhuman beings is almost universal. God is said to have created man in His own image. I believe that the reverse is probably true; Man has always created His gods in His own image. In spite of its mantle of scientific rationalism, for many the appeal of Artificial Intelligence relates to this same impulse. From the Golem to robotics, the desire to create and control another life form is the same. It is the desire to play the role of Creator. The lure of the possibility of creating autonomous servants to relieve us of menial tasks is surpassed only by that of creating virtual deities that could provide society with an omniscience and an intellectual continuity that spans the generations. Man’s desire to have a relationship with superior beings is found in literature throughout history and across cultures. AI in its extreme is not only about Man desiring to play God and create lesser beings, but ultimately about creating the gods themselves: providing us with something omniscient and immortal that can answer our questions, give us direction and watch over us. Not a distant or intangible God, but one at our service with whom we can communicate directly. AI in this sense would be a social artifact capable of filling the cultural role of preserving the collective experience of humanity, similar to the role currently played by libraries, universities and religious institutions. Seen in this way, AI represents a Faustian knowledge otherwise unattainable to mere mortals.

The Role of an Artificial Intelligence in the Self-directed Evolution of Man

Man and his technology co-exist in a relationship that is symbiotic in nature. Without technology, humans would in most cases be very poorly adapted to their natural environments. This is obvious in terms of the need for clothing and shelter to protect against the elements. We must note, however, that most humans live in environments that have been largely reshaped by technology; “natural” environments do not exist in most inhabited areas of the developed world. Our existence is inseparably intertwined with our technology. Not only could we not continue to survive without it, but we would not even exist as we do now, either culturally or biologically, without the protective and shaping forces of technology. That our culture is shaped and defined by changes in our technology seems self-evident. But beyond this, technology has shaped our biology. In the past it has done so slowly, without our being aware of the process. The development of language and culture has freed Man from the limitations of depending on random mutation and natural selection for his evolutionary advancement. The knowledge accumulated by an individual need not die with them. The use of recorded language has extended the means by which ideas can be preserved for future generations and distributed widely. In this sense, the evolution of human society is Lamarckian. By the same token, technological evolution is Lamarckian: it never forgets. Now, in an age where we can manipulate our genetic code directly through gene splicing, extend our mental capacities through electronic extensions and live in totally artificial environments indefinitely, there exist new opportunities as well as dangers as the rate of change becomes exponential. How will all this affect human evolution? We are entering a period where evolution has the potential to be self-directed. We should consider what role AI could play in this. Can we model our own future? Will our progeny be robotic or cyborg in nature? As incredible as this sounds, the idea contains as much science as it does fiction. In his book Mind Children, Hans Moravec explores such visions of the future. Several other writers have also given much thought to the idea that Man may merge into an ever closer symbiotic relationship with his technology, even to the point of replacing his biological nature with other technologies (McEachern 293-313; Mazlish). While such speculation may be dismissed as fantastic, imagine the likely reaction of even the average educated person of the last century to a description of our present technological society. Reality is like life: it is not static, but evolves with time. McEachern (325) provides a very interesting discussion of this and a concise expression of the interrelationship between increasingly complex arrangements of information and the present description of reality: “Information accumulates with time, and for that reason, reality changes with time. There is a fundamental difference between what is and what is known. But what is known affects what will be, because what is known can be used to boot-strap new processes into existence.” The advance of technology is accelerating exponentially, with no limits in sight. AI is part of this uncharted advance, but it also offers a means by which we may be able to ride the wave of change rather than be drowned by it. It may even allow us to direct it. A God-like AI would not have created us; rather, we would have created it. But it could create our future through self-directed evolution, thus truly becoming God-like in turn by recreating Man in a new image.


Bibliography

Babbie, Earl. The Practice of Social Research. 5th ed. Belmont: Wadsworth, 1983.

Blakemore, Colin, and Susan Greenfield, eds. Mindwaves: Thoughts on Intelligence, Identity and Consciousness. New York: Basil Blackwell, 1987.

Boden, Margaret A. Artificial Intelligence and Natural Man. 2nd ed. New York: Basic,  1987.

Borges, Jorge Luis. Ficciones. New York: Grove Weidenfeld, 1956.

Campbell, Jeremy. Grammatical Man: Information, Entropy, Language and Life. New  York: Touchstone, 1982.

Cercone, Nick, and Gordon McCalla. "Artificial Intelligence: Underlying Assumptions and Basic Objectives." Journal of the American Society for Information Science 35 (1984): 280-290.

Chubin, Daryl E., et al., eds. Interdisciplinary Analysis and Research. Mt. Airy:  Lomond, 1986.

Churchland, Paul M. Matter and Consciousness: A Contemporary Introduction to the Philosophy of Mind. Revised ed. Cambridge: MIT Press, 1988.

Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books, 1993.

Davies, Roy. "The Creation of New Knowledge by Information Retrieval and  Classification." Journal of Documentation 45 (1989): 273-301.

Dennett, Daniel C. Consciousness Explained. Boston: Little, Brown, 1991.

Ford, Nigel. How Machines Think: A General Introduction to Artificial Intelligence.  New York: John Wiley, 1987.

Gibson, William. Count Zero. New York: Ace, 1986.

---. Neuromancer. New York: Ace, 1984.

Gleick, James. Chaos: Making a New Science. New York: Penguin, 1987.

Goldstein, Bruce E. Sensation and Perception. 3rd ed. Belmont: Wadsworth, 1989.

Göranzon, Bo, and Ingela Josefson, eds. Knowledge, Skill and Artificial Intelligence. Foundations and Applications of Artificial Intelligence. London: Springer-Verlag, 1988.

Graubard, Steven R., ed. The Artificial Intelligence Debate: False Starts, Real  Foundations. Cambridge: MIT Press, 1988.

Gregory, Richard. “In Defense of Artificial Intelligence - A Reply to John Searle.” Mindwaves: Thoughts on Intelligence, Identity and Consciousness. Eds. Colin Blakemore and Susan Greenfield. New York: Basil Blackwell, 1987. 235-244.

Guilford, J. P. “The Structure-of-Intellect Model.” Handbook of Intelligence: Theories, Measurements, and Applications. Ed. Benjamin B. Wolman. New York: Wiley, 1985. 225-266.

Hancox, Peter J., William J. Mills and Bruce J. Reid. Keyguide to Information Sources in Artificial Intelligence/Expert Systems. London: Mansell, 1990.

Hillis, W. Daniel. “Intelligence as Emergent Behavior; or, The Songs of Eden.” The Artificial Intelligence Debate: False Starts, Real Foundations. Ed. Steven R. Graubard. Cambridge: MIT Press, 1988. 175-190.

Horn, John L. “Remodeling Old Models of Intelligence.” Handbook of Intelligence: Theories, Measurements, and Applications. Ed. Benjamin B. Wolman. New York: Wiley, 1985. 267-300.

Humphreys, Lloyd G. “General Intelligence: An Integration of Factor, Test, and Simplex Theory.” Handbook of Intelligence: Theories, Measurements, and Applications. Ed. Benjamin B. Wolman. New York: Wiley, 1985. 201-224.

Hynd, George, and W. Grant Willis. “Neurological Foundations of Intelligence.” Handbook of Intelligence: Theories, Measurements, and Applications. Ed. Benjamin B. Wolman. New York: Wiley, 1985. 119-157.

Johnson-Laird, Philip N. Human and Machine Thinking. Hillsdale: Lawrence Erlbaum,  1993.

Kandel, Eric R., James H. Schwartz, and Thomas M. Jessell, eds. Principles of Neural Science. 3rd ed. Norwalk: Appleton & Lange, 1991.

Kelly, John. Artificial Intelligence: A Modern Myth. New York: Ellis Horwood, 1993.

Khawam, Yves J. "The AI Interdisciplinary Context: Single or Multiple Research  Bases?" Library and Information Science Research 14 (1992): 57-75.

Langton, Christopher G., ed. Artificial Life: The Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems. Santa Fe Institute Studies in the Sciences of Complexity 6. Redwood City: Addison-Wesley, 1987.

Levine, Robert I., Diane E. Drang, and Berry Edelson. A Comprehensive Guide to AI and Expert Systems: Turbo Pascal Edition. New York: McGraw-Hill, 1988.

Levy, Steven. Artificial Life: A Report from the Frontier Where Computers Meet  Biology. New York: Vintage, 1992.

Lindsay, David. A Voyage to Arcturus. New York: Ballantine, 1968.

Lycan, William G. Consciousness. Cambridge: MIT Press, 1987.

Mazlish, Bruce. The Fourth Discontinuity: The Co-evolution of Humans and Machines.  New Haven: Yale UP, 1993.

Minsky, Marvin. The Society of Mind. New York: Simon & Schuster, 1986.

Moravec, Hans. Mind Children: The Future of Robot and Human Intelligence.  Cambridge: Harvard UP, 1988.

Partridge, Derek, and Yorick Wilks, eds. The Foundations of Artificial Intelligence: A Sourcebook. Cambridge: Cambridge UP, 1990.

Pattee, H. H. “Simulations, Realizations, and Theories of Life.” Artificial Life: The Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems. Ed. Christopher G. Langton. Santa Fe Institute Studies in the Sciences of Complexity 6. Redwood City: Addison-Wesley, 1987. 63-75.

Penrose, Roger. The Emperor’s New Mind: Concerning Computers, Minds, and the Laws  of Physics. New York: Penguin, 1989.

Robinson, William S. Computers, Minds & Robots. Philadelphia: Temple UP, 1992.

Richelle, Marc N. “Reconciling Views on Intelligence?” Intelligence:  Reconceptualization and Measurement. Ed. Helga A. H. Rowe. Hillsdale:  Lawrence Erlbaum, 1991. 19-33.

Schank, Roger C., and Peter Childers. The Cognitive Computer: On Language, Learning and Artificial Intelligence. Reading: Addison-Wesley, 1984.

---. “Story-Based Memory.” Minds, Brains & Computers. Ed. Ralph Morelli, et al. Norwood: Ablex, 1992. 134-151.

Schwartz, Jacob T. “The New Connectionism: Developing Relationships Between Neuroscience and Artificial Intelligence.” The Artificial Intelligence Debate: False Starts, Real Foundations. Ed. Steven R. Graubard. Cambridge: MIT Press, 1988. 123-141.

Searle, John. “Minds and Brains Without Programs.” Mindwaves: Thoughts on Intelligence, Identity and Consciousness. Eds. Colin Blakemore and Susan Greenfield. New York: Basil Blackwell, 1987. 209-233.

Shirai, Yoshiaki, and Jun-ichi Tsujii. Artificial Intelligence: Concepts, Techniques and Applications. New York: Wiley & Sons, 1984.

Sokolowski, Robert. “Natural and Artificial Intelligence.” The Artificial Intelligence Debate: False Starts, Real Foundations. Ed. Steven R. Graubard. Cambridge: MIT Press, 1988. 45-64.

Teodorescu, Ioana. "Artificial Intelligence and Information Retrieval." Canadian Library Journal 44.1 (1987): 29-32.

Terrace, Herbert. “Thoughts Without Words.” Mindwaves: Thoughts on Intelligence, Identity and Consciousness. Eds. Colin Blakemore and Susan Greenfield. New York: Basil Blackwell, 1987. 123-137.

Turban, Efraim. Expert Systems and Applied Artificial Intelligence. New York:  Macmillan, 1992.

Turing, Alan M. Collected Works of A.M. Turing. Ed. D.C. Ince. Amsterdam: North-Holland, 1992.

Turner, Frederick. Tempest, Flute & Oz: Essays on the Future. New York: Persea Books, 1991.

Wagman, Morton. Cognitive Science and Concepts of Mind: Toward a General Theory  of Human and Artificial Intelligence. New York: Praeger, 1991.

Weckert, John, and Craig McDonald. "Artificial Intelligence, Knowledge Systems, and the Future Library: A Special Issue of Library Hi Tech." Library Hi Tech 10.1-2 (1992): 7-13.

Wiener, Norbert. God and Golem, Inc. Cambridge: MIT Press, 1964.

Winston, Patrick Henry. Artificial Intelligence. 3rd ed. Reading: Addison-Wesley, 1992.