What is Artificial Intelligence? Is it a field of study, a discipline, a set of sophisticated programming techniques, or maybe just a loosely defined goal? Examples of all these usages, as well as others, are commonly found. The term Artificial Intelligence was adopted and popularized by the participants of the Dartmouth Conference in 1956 to designate the goal of creating a thinking machine. The choice of term has since been frequently criticized, and other titles have been offered or used as substitutes, among them complex information processing, machine intelligence, expert systems, knowledge engineering, and machine thinking. An examination of the meaning of the term Artificial Intelligence itself provides insight into some of the fundamental questions that arise in connection with the subject. One approach to the question of what constitutes AI is to break it down into two subsidiary questions: how do we define intelligence, and what do we mean by artificial? The lack of agreement among researchers and scholars on the subject of AI is not surprising when viewed in light of the difficulty of obtaining consensus on the meaning of even these component terms.

While we routinely make reference to the idea of intelligence, the concept is simple only in appearance. How many times has someone been described as follows: “He is very intelligent, but he’s not very practical; he just doesn’t have much common sense.” What does this mean? What is the difference between intelligence and common sense? Is intelligence the ability to think or reason, to learn, to understand, to apply knowledge to manipulate one’s environment, or to think abstractly? These are all definitions found in Webster’s dictionary. Does intelligence require all of these elements to be present, or only some? Are there different types of intelligence, or can intelligence be seen as a continuum with varying degrees? It is very easy at this point to get lost in a semantic jungle of constitutive definitions. If intelligence is the ability to think or understand, what does it mean to think or understand; what is the nature of thought and understanding? These are not trivial questions. Although we have an everyday understanding of these words, it does not require too close an inspection of the concepts they represent to realize that they are not rigorously defined. And yet, in order to have a widely acceptable definition of what would constitute an artificial intelligence, a clearer understanding of natural intelligence certainly seems desirable, if not necessary. Churchland (3) refers to this issue as “the semantical problem” and asks: “where do our common-sense terms for mental states get their meaning?” Ultimately, we can only directly experience our own mental states, and we must infer that the processes of thought and feeling are the same for others from the way such processes appear to be expressed verbally and through behavior. This is a very important point and will be taken up in more detail later in the section on strong AI.
Richelle (19) states that “although scientific psychologists have been studying intelligence for a century they do not seem to have come closer to a widely acceptable, consistent general theory of intelligence.” It strikes me as not too surprising that science has not defined intelligence, given the lack of agreement among philosophers and scientists alike as to the fundamental nature of the mind. We have not progressed even to the point of resolving the conflict between the various forms of neo-Cartesian dualism and those of materialist monism as models of the mind. The question of the nature of the mind and that of intelligence would seem to be so closely related that neither can be answered without addressing the other. This seemingly forces us to expand our query and ask what elements comprise the mind. Our definition of intelligence seems to leave out many fundamental functions and abilities that we consider aspects or attributes of the mind and that would be necessary for the reproduction of the full range of human cognition. When I set out to research the question of what would be an adequate definition of intelligence by which the goal of creating artificial intelligence could be delineated, I expected to be able to find or produce a singular, coherent and elegant definition of intelligence. I have been unable to do so. A review of the history of AI research reveals that my experience parallels that of others. Many of the early researchers in AI, machine translation and related fields initially approached these subjects rather optimistically, only to discover that the issues involved were far more difficult than they appeared on the surface. However, after much reading I finally came across a definition of intelligence worth quoting. It is a rather personal definition given by Marvin Minsky (329):
“Intelligence A term frequently used to express the myth that some single entity or element is responsible for the quality of a person’s ability to reason. I prefer to think of this word as representing not any particular power or phenomenon, but simply all the mental skills that, at any particular moment, we admire but don’t yet understand.”
While not at all satisfying as an answer to what intelligence is, it is noteworthy for what it suggests intelligence probably is not, and, I think, it honestly and accurately expresses our lack of understanding of our own selves.
Though I find the standard definition of intelligence offered by psychology to be lacking for the purpose of exploring the subject of artificial intelligence, I would be remiss if I did not include it here. A frequently cited definition comes from David Wechsler, who tells us that intelligence is “the aggregate capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment” (Hynd and Willis 135). This is somewhat useful as an overall definition, but it is far too broad to determine whether a given system is intelligent, much less to serve as a blueprint of what specifically should be included in designing an artificial intelligence. It also leaves us with the need to define what “rationally” and “purposefully” mean to us. While discussing the definition of intelligence, Humphreys states that “most accounts in the psychological literature assume that intelligence is an innate capacity or learning potential. This is especially characteristic of those who use and interpret client’s scores on intelligence tests....People who have been active in test development do tend to narrow the scope of the supposed capacity. They use descriptive phrases, such as the manipulation of symbols, dealing with abstractions, mental adaptability, adjustment of thinking to new requirements, and so on. These statements about intelligence are in effect, content analyses of the items that appear in the tests” (202). A review of the psychological literature reveals a tendency to focus much more attention on developing measures of intelligence than on trying to understand the nature of the thing being measured. Ultimately, this results in intelligence being defined as whatever it is that the intelligence test measures (Guilford 225).
Given all the above, the best that can be done at present may be to create a patchwork definition that includes all the elements that seem to comprise the incredibly complex and multi-faceted phenomenon we refer to as intelligence. Originally, much of AI research focused on logic, problem solving and heuristic decision making, and much progress has been made in these areas. There are computer programs that can play chess better than all but a few world-class human chess masters; expert systems are routinely used in business, industry and medicine, among other fields; and there is a multitude of software products, from databases to spell-checkers, that have benefited from the efforts of computer science to incorporate “intelligent” features into software. But as a whole, the goal of AI keeps receding like a mirage on the horizon. As the more straightforward problems are sorted out, it becomes more apparent that much of the behavior we ascribe to the presence of intelligence is not so much linear in nature as nonlinear, and may be less well represented by a strictly digital model than by one that is at least partially analog. As Kandel (194) points out from a neurobiological point of view, “some of the most remarkable activities of the brain, such as learning and memory, are thought to emerge from elementary properties of chemical synapses.” The complex arrangement of information processing in the brain results from a combination of electrical and chemical signaling. Unlike the discrete state machines we know as computers, which are binary and typically organized around eight-bit bytes of 256 possible states each, the brain has a more subtle nature due to its chemical makeup. I will consider the differences in makeup between a biological brain and a digital computer, and their implications for creating AI, at greater length later in the section on strong AI. The point I wish to make now is that a complete description of intelligence will have to include more than the reasoning and logical aspects of cognition. Those parts that lend themselves to duplication by orderly algorithms of the type we are familiar with from computer science may be only one aspect of intelligence, and the easier part of the puzzle at that. The most important and more difficult aspects of thought may prove to be functions of chemistry and much more complicated to model.
While we tend to think of the mind and intelligence as being logical and rational, I think that much of our existence is neither of these. Consider the following list of seemingly inherent aspects of ourselves as sentient beings: emotion, common sense, creativity, holistic and analogical thinking, spirituality, understanding, inspiration, and intuition. An initial response may be to say that things such as emotion and spirituality are really quite distinct from intelligence, but are they? To what degree is our thought guided by our emotions, our goals by our beliefs, and our understanding of things by the feelings we hold towards them? In relation to the role of cognitive processes such as thinking and memory, Goldstein (7) states that “these processes are both an outcome of the perceptual process and a determinant of that process.” Our conscious being is iterative in nature; previous experiences and the attitudes and ideas that we hold influence our current perception and understanding.
There is also danger inherent in the reification of concepts such as mind, consciousness and intelligence. Trying to attribute abilities in logic, common sense, creativity and all the other aspects of cognition that seem to be a reflection of intelligence to a single construct may be wrong. In Minsky’s definition of intelligence given above, “the myth that some single entity or element is responsible for the quality of a person’s ability to reason” that he refers to is defined in psychology as the factor g (designated by C. E. Spearman as representing general intelligence). Humphreys concurs, stating that “intelligence is observable. It is not a capacity.” On the other side of the argument, Hynd and Willis (135) state that “intelligence is a general class of behavior” and that g is the “unifying element for the individual behaviors composing that class ... the presence of this factor is at least implicitly understood in nearly all theories of intelligence.” John Horn (271) believes that the notion of general intelligence is faulty and that the idea should be dispensed with in favor of multiple concepts that can adequately represent the behaviors attributed to general intelligence. He supports his argument by stating that “existing evidence about the nature of human capacities provides little basis for the belief. This evidence indicates that several distinct functions are involved in performances that are classified under the heading of intelligence. These distinct functions probably have distinct genetic bases, distinct courses of development in infancy, childhood, and adulthood, and distinct implications for understanding human retardation, human accomplishments, human creativity, and human happiness” (273). I would add that this view of human intelligence also has distinct implications for the development of artificial intelligence. If intelligence is a class of behavior rather than some mysterious property, then theoretically we can reproduce intelligence by modeling intelligent behavior without necessarily requiring a complete understanding of the underlying process. While the range of behavior we associate with intelligence may not be dependent on a single factor of general intelligence, I will next consider whether we can develop a general theory of intelligence and what its value would be.
Winston and Shirai’s understanding that AI is not restricted to a single goal provides a framework that allows for the inclusion of the two otherwise conflicting descriptions of AI given by Boden and Wagman. I think both the pursuit of AI as an engineering product and its use as a tool to develop and test theories of cognition are valid goals. The engineering school may do well to pay more attention to the work done in other disciplines, but the development of useful AI systems need not depend on a full understanding of the human mind, nor need it seek to mimic the mind very closely, in order to produce useful products.
I will now shift my attention to another aspect of what is meant by artificial intelligence: the question of the distinction between the artificial and the natural. I have already considered the question of what constitutes intelligence, focusing mainly on human intelligence. I will later examine the need for a general theory of intelligence, but first it will be useful to consider the use of the word artificial in the context of AI.
In what respects can intelligence be artificial? Robert Sokolowski examines the issue in a very lucid article entitled “Natural and Artificial Intelligence,” which I will draw on here. There are two senses of the word artificial which we need to distinguish. The first is the idea of something that is real but synthetic. Things such as artificial light or man-made diamonds are artificial, but they are also real. Artificial light illuminates in the same way natural light does, though depending on its wavelengths it may have different properties. An artificial diamond is constructed of carbon formed at high temperature and pressure, just as a naturally occurring stone is. In both cases the artificial, synthetically produced item is as real as the naturally occurring phenomenon it is modeled after. The second usage of the term artificial implies something that is not really the thing it imitates, as in the case of artificial flowers. Artificial flowers do not possess any but the most superficial qualities of real flowers. They may appear to be flowers, but they are not, which is usually obvious on closer inspection. Any real understanding of AI depends on our recognition of the distinction between these two connotations. The school of thought that holds out the possibility of genuine intelligence, though artificially constructed, is often referred to as strong AI (Searle 210; Penrose 17-23; Gregory 237). This is as opposed to weak AI, the idea that advances in computer science will never produce anything that can really think, only clever imitations that superficially mimic intelligence. The debate between strong and weak AI is currently unresolved, and it is of interest to writers from a wide range of disciplines, from philosophy to physics. At its core is the age-old question of the nature of consciousness. The problem of how the physical body gives rise to seemingly ethereal consciousness, the mind-body problem, is certainly “one of the great unsolved scientific and philosophical problems of our time” (Goldstein 4). Llinás opines that for the neuroscientist, “the single most important issue one can address concerns the manner in which brains and minds relate to one another.” Aside from any practical benefits that may stem from the development of AI, the subject is certainly worthy of exploration to the extent that it will almost certainly shed light on the nature of our own minds.
A great deal of effort has been focused on the question of whether an AI could pass the Turing test and, if so, whether that would prove the possession of intelligence. At the time Turing developed the idea for his test, computers were far from possessing the natural language processing and other skills necessary for such a performance. There still do not exist computers capable of such a feat, but the gap is narrowing as technology advances. The question is not so much whether computers will ever be able to pass themselves off as human as whether it matters in terms of intelligence. Would the ability to pass the Turing test on a consistent basis necessarily demonstrate intelligence? Writers such as John Searle and Roger Penrose think not. Searle claims to be able to refute the idea of strong AI, which he equates with the model of the “mind/brain” being represented by “program/hardware.” He has developed a thought experiment involving a “Chinese room” (Searle 213-214; Penrose 17-23). Briefly, the idea is that a person who speaks no Chinese sits in a room and is given questions in Chinese to answer, which he does in a mechanical way, using a set of rules that allow for the matching of the correct answers to the questions presented. All this is performed blindly, without the worker in the room understanding the content of either the questions asked or the answers given. In this scenario, the worker has processed information in a syntactic manner without the need to possess semantic understanding. Searle compares the Chinese room to a computer running a program based on formal logic but lacking semantic understanding. To a Chinese-speaking observer of the Chinese room, it would appear that there is someone inside who obviously understands Chinese in order to answer the questions properly, and yet it seems obvious that this is not the case. In the same way that the non-Chinese speaker answers questions that have no meaning to him, a computer that can answer questions based on semantic scripts, and thus appear to understand the content, really doesn’t. Here we return to the point raised in our previous discussion of understanding: the need for contextual understanding. Words must have meaning both in relation to each other and to the things and ideas they represent. While it is difficult to see how a general purpose computer without sensory input could have much in the way of word-to-world connections, it could be sophisticated enough to have interrelationships between the symbols that designate words to an extent that constitutes semantic knowledge. Thus my objection to Searle’s logic is two-fold. First, while the man in the room does not understand Chinese, the room as a whole, functioning as a system, does. Secondly, Searle assumes that the “computer program has to be defined formally” (Searle 215). The combination of the worker and the rule book can be construed as allowing for the emergence of an intelligent system. The fact that a system could respond to questions that require semantic understanding for a human to answer implies that the ability to handle semantic information is built into the system in some fashion. Searle is correct that a strictly syntactical system cannot possess understanding, but such a system would also not be able to answer non-trivial questions outside a very narrow domain without the inclusion of semantic ability as a function of the rule book.
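To make the purely syntactic matching at the heart of the Chinese room argument concrete, here is a minimal sketch in Python (the questions, answers, and rule book are invented for illustration): a program that produces sensible Chinese answers by table lookup alone, with no representation of what any symbol means.

    # A minimal sketch of the Chinese room as pure symbol matching.
    # The "rule book" pairs question strings with answer strings; the
    # program manipulates symbols with no semantic understanding.
    RULE_BOOK = {
        "你叫什么名字?": "我没有名字。",      # "What is your name?" -> "I have no name."
        "你会说中文吗?": "会,我说得很好。",   # "Do you speak Chinese?" -> "Yes, very well."
    }

    def chinese_room(question):
        # Match the incoming symbols against the rule book; the worker
        # (this function) never knows what the symbols mean.
        return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

    print(chinese_room("你会说中文吗?"))  # prints a fluent reply, understands nothing

Of course, such a finite table could never sustain open-ended conversation; that is precisely why any rule book adequate to the task would have to encode semantic relationships in some fashion.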
Efforts in AI, machine translation, and other enterprises related to natural language processing have been greatly hindered by the difficulty of including semantic abilities in their programs, and this has largely been a result of deductive, rule-based programming. But just because most programming has been linear, serial and logic based does not mean that all programs need be based on such algorithms. Fuzzy logic seeks to create machine intelligence using general rules rather than specific equations. Neural networks provide a bottom-up approach that is inductive, adaptive and parallel. The addition of these approaches to the more traditional AI programming may allow for new breakthroughs in the development of AI.
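As a rough illustration of the bottom-up, inductive approach, the following sketch (my own toy example, not drawn from any system cited here) shows a single perceptron, the simplest neural network element, learning the logical AND rule from examples rather than having the rule specified in advance:

    # A minimal sketch of inductive learning: a perceptron adjusts its
    # weights from examples instead of following explicitly coded rules.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

    w = [0.0, 0.0]  # connection weights
    b = 0.0         # bias
    rate = 0.1      # learning rate

    for _ in range(20):                 # sweep repeatedly over the examples
        for (x1, x2), target in examples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out          # adapt only when the output is wrong
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            b += rate * err

    print([1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
           for (x1, x2), _ in examples])  # expect [0, 0, 0, 1]

The rule is never written down anywhere in the program; it emerges in the weights through adaptation, which is the sense in which the approach is inductive rather than deductive.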
Returning to my first point, I would say that the fact that it does not strike Searle or Penrose as plausible that the Chinese room as a system possesses understanding does not mean that it does not, given a broader concept of understanding and intelligence. If one replaces the worker with a team of workers and compares them with the individual neurons (or even subsystems) in the brain, this becomes clearer. In the same way that “understanding” cannot be found in any given neuron or brain structure, neither is it found in any of the uncomprehending workers in the Chinese room, nor in the paper comprising the rule book of instructions that allows for the proper matching of answers to questions. It would seem that understanding is the result of the process of implementing the information in the rule book, the intelligence of the system being self-evident as expressed in its behavior.
This leads back to the need for a working definition of what constitutes intelligence in terms of behavior. I do not believe that intelligence must necessarily mimic human ability in order to be useful. Why should an AI have to limit itself to acting ‘merely’ human (and be able to lie) so as to pass the Turing test in order to be intelligent? Naturally, our inclination when we consider designing our own intelligent creation is to create it in our own image. When we focus on building a machine that can pass the Turing test, we fail to consider to what practical purpose we would want to apply such a machine that might justify its development. Earlier we considered a list of mental abilities that are related aspects of intelligence: emotion, common sense, creativity, holistic and analogical thinking, learning, spirituality, understanding, inspiration, intuition, and natural language skills. Are all of these abilities necessary in an AI for it to be useful? It may be more realistic to view intelligence as a continuum with varying degrees and many types of intelligence that do not resemble human intelligence. That human intelligence might serve as a model is logical, but we should not restrict ourselves to it in style, feature or capability. Further, that an AI’s character was obviously alien need not be a disadvantage to its intended role in society. If the key to intelligence is flexibility, then designing a machine that could pass itself off as human would clearly be useful for many things. But it seems to me that the goal of AI should be more than simply replacing man with a machine of similar abilities. What applications might require a general purpose human-like AI? Is it even desirable to try to create an entity that fully replicates a human being? In general, focusing on narrow, more specific applications might be better. I think that there will be at least three very different areas of application for AI. First is the development of intelligent robotics that can perform jobs that would be dangerous or impossible for humans to carry out. Specific applications could include such things as deep-sea underwater welding, hazardous waste cleanup and the exploration of the surface of Venus. More mundane tasks have also been proposed (cite from Wired), such as hordes of miniature environmental workers doing everything from dusting and vacuuming indoors to lawn maintenance and controlling insect pests on crops. With the development of extreme miniaturization using nanotechnology there exist many other possibilities, some of the most interesting involving medical technology.
The second type of AI application is closely related to the areas of computer science now generally called expert systems and knowledge engineering. These areas have developed out of the pursuit of more general purpose AI and are marked by their focus on limited subject domains. While impressive advances have been made in expert systems, these are not considered to possess real intelligence; they are often implemented to assist human experts in decision making. However, as these systems advance they will almost certainly become more knowledgeable and reliable than most human experts. This is particularly true in relation to domains that require such large amounts of information to evaluate that it is impossible for most human experts to learn it all. Medicine is a clear example of an area where the body of knowledge is increasing so rapidly that practitioners cannot keep up with new information. Similarly, facilities such as chemical refineries and nuclear power stations that have highly complex systems to monitor and adjust could be supervised by extremely sophisticated control systems that could react more quickly and reliably than humans, resulting in both safer and more efficient operation.
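The core mechanism of many such systems can be suggested with a small sketch (the rules below are toy examples of my own, not taken from any deployed system): forward chaining over if-then rules elicited from a human expert, applied to known facts until no new conclusions follow.

    # A minimal sketch of a forward-chaining expert system: if-then
    # rules over a set of known facts, applied until nothing new follows.
    rules = [
        ({"fever", "rash"}, "measles_possible"),
        ({"measles_possible", "not_vaccinated"}, "refer_to_physician"),
        ({"fever", "stiff_neck"}, "urgent_referral"),
    ]

    def infer(facts):
        facts = set(facts)
        changed = True
        while changed:                  # keep applying rules until stable
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)   # fire the rule
                    changed = True
        return facts

    print(infer({"fever", "rash", "not_vaccinated"}))

Within its narrow domain such a system can be made very reliable, but it has no knowledge whatever outside the rules it has been given, which is why these systems are not considered to possess real intelligence.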
These first two general areas of AI application do not necessarily require the level of attainment that would deserve the label of strong AI. However, their discussion is useful in promoting the recognition that there are different types and degrees of intelligence as manifested in intelligent behavior. This can help us recognize that it is overly restrictive to hold a single, human-based standard by which to judge whether or not a system is intelligent. Note also that the non-human aspects of these systems are among the features that make them most useful. We would not want to create human-like robots to do work that was extremely repetitive and dull or, on the other hand, extremely dangerous. In such cases, cognitive features such as imagination and emotion would interfere with performance of the intended task. Likewise, to endow an expert system with language abilities that included a sense of humor and a tendency to joke around might lessen its value.
The third application of AI returns us directly to the subject of strong AI: the development of artificial minds that are sentient and intelligent, possessing knowledge, wisdom and intentionality. We should be pursuing the development of a higher life form. Whether or not it could pass itself off as human would not be that important. The idea that strong AI should be geared to producing something that will pass the Turing test is dated. The goal of AI should be to produce independent intelligence (or simulations of intelligence for particular purposes), but that this intelligence must resemble human intelligence is not required. Indeed, ideally AI should be clearly non-human, distinguished by its superiority to the human mind in capacity and capabilities. I will address further the idea of creating highly intelligent life and its potential benefits to society in the final section of this paper, “Why AI?”
I have so far avoided discussion of the moral implications of creating artificially intelligent life forms. As AI, and especially intelligent robotics, develops, the need will arise to deal with the attendant legal and ethical questions. There are also questions as to what dangers might lie in developing such an entity. I cannot attend to these questions in this space, but I wish to acknowledge them while considering the implications of strong AI.
The development of Artificial Intelligence is a complex problem, and researchers have been actively working on approaches to it since the mid-1950s. Rather than accepting Wagman’s narrow definition of the field as “a specialized discipline within computer science,” I am inclined to view AI as an interdisciplinary subject that will require a broad range of inputs from many disciplinary areas if anything more than narrow expert systems are to be developed. Ultimately the implementation of the solution to AI will revert back to computer science, but the solution to the problem is much broader than software programming or hardware design. This is to say that the body of knowledge required to create AI is not a subset of that found in computer science. Of course, that could be said of many computer applications: systems analysts work with end users to discover their needs and consult specialists in the application field to determine what the system is required to do. Software development follows a standard formula to organize the production of software programs, consisting of five basic steps: requirements specification, analysis, design, implementation, and testing and verification. “Software engineers and software developers use the following software development method for solving programming problems....The first three steps are critical; if they are not done properly, you will either solve the wrong problem or produce an awkward, inefficient solution.”
The idea of emergence works well with the bottom-up approach of neural networks and allows for properties such as generative language, learning and imagination. It differs from a linear, top-down approach to programming in that all details of the system’s behavior do not have to be formalized in specific rules in advance. Combined with the idea of massively parallel distributed processing, it can allow for extreme complexity and could generate output not implicit in either its input or its program. Thus it could have the ability to create new knowledge and qualify as intelligent in the most restrictive sense. Following the model given by Marvin Minsky in The Society of Mind, we could create an artificial intelligence that has the appearance of being a singular, general purpose entity but is comprised of a collection of narrow, expert-system type components. There would be a front-end system, or even a hierarchy of such systems, coordinating the activities of other functional and domain-specific subsystems. Set up on a distributed network or parallel processing hardware, it could break down complex questions into component parts, solve the various aspects, and then recombine the parts into a whole (a sketch of such an arrangement follows below). This last aspect, that of synthesis, is the most difficult, involving as it does the question of holistic thinking. Hillis (175) critically explores the idea of intelligence as emergent behavior. He writes that “the notion of emergence would suggest that such a network, once it reached some critical mass, would spontaneously begin to think.” But he goes on to caution that “this is a seductive idea because it allows for the possibility of constructing intelligence without first understanding it” (176). The idea of critical mass is very interesting, though Hillis does not discuss it in any further detail. It seems to me that critical mass should be interpreted in terms of the complexity of the system as well as the quantitative aspects of size and speed. Turing (454) explores the idea of categorizing minds in terms of critical mass. A mind with “sub-critical” mass will respond to the input of ideas with less than a one-to-one output of ideas in response, whereas a “super-critical” mind may generate extended trains of thought leading to the development of propositions and theorems. In humans the difference between sub-critical and super-critical minds would seem to be accounted for not so much by differences in the physical constitution of the brain as in its function. This may not be true, but study of the question of why some minds are much more fertile than others might lead to some interesting insights into the nature of intelligence.
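Here is the promised sketch of such a coordinating architecture (the subsystems and the routing scheme are invented for illustration, not drawn from Minsky): a front end dispatches already-decomposed sub-questions to narrow specialists and recombines their partial answers.

    # A hypothetical sketch of a "society" of narrow agents: a front end
    # routes sub-questions to specialists and synthesizes their answers.
    def arithmetic_agent(q):            # narrow specialist: toy addition
        a, b = q.split("+")
        return str(int(a) + int(b))

    def geography_agent(q):             # narrow specialist: toy fact lookup
        return {"capital of France": "Paris"}.get(q, "unknown")

    SPECIALISTS = {"math": arithmetic_agent, "geo": geography_agent}

    def front_end(subtasks):
        # Decomposition is assumed already done; each subtask names its domain.
        parts = [SPECIALISTS[domain](q) for domain, q in subtasks]
        return "; ".join(parts)         # synthesis, trivialized here

    print(front_end([("math", "2+3"), ("geo", "capital of France")]))  # "5; Paris"

The hard problems, decomposing the question in the first place and genuinely synthesizing the parts into a whole, are exactly the steps this sketch leaves out, which reflects the point made above that synthesis is the most difficult aspect.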
I will conclude this section by emphasizing that the development of AI should be viewed as a long term goal of the scientific community. Alan Turing was cautiously optimistic when he estimated that a thinking machine could be produced if a team of sixty workers devoted themselves to it for fifty years...and nothing went into the wastebasket. The over-optimism of early AI researchers and the subsequent failure of their prophecies to materialize have given ammunition to those who would deny the possibility of AI. The subsequent disillusionment resulted in individual discouragement in the face of the seemingly intractable nature of the subject, as well as cuts in AI research funds across the board. This is a project of the next century or more. It is not a short term enterprise with a commercial business cycle and instant gratification for the stockholders. The quest for AI should be viewed more like the building of a Gothic cathedral: as a collective project that spans several generations.
Man and his technology co-exist in a relationship that is symbiotic in nature. Without technology humans would in most cases be very poorly adapted to their natural environments. This is obvious in terms of the need for clothing and shelter to protect against the elements. We must note, however, that most humans live in environments that have been largely reshaped by technology; “natural” environments do not exist in most inhabited areas of the developed world. Our existence is inseparably intertwined with our technology. Not only could we not continue to survive without it, but we would not even exist as we do now, either culturally or biologically, without the protective and shaping forces of technology. That our culture is shaped and defined by changes in our technology seems self-evident. But beyond this, technology has shaped our biology. In the past it has done so slowly, without our being aware of the process. The development of language and culture has freed Man from the limitations of depending on random mutation and natural selection for his evolutionary advancement. All the knowledge accumulated by an individual does not have to die with him. The use of recorded language has extended the means by which to record ideas for future generations as well as to distribute them widely. In this sense, the evolution of human society is Lamarckian. By the same token, technological evolution is Lamarckian; it never forgets.

Now, in an age where we can manipulate our genetic code directly through gene splicing, extend our mental capacities through electronic means and live in totally artificial environments indefinitely, there exist new opportunities as well as dangers as the rate of change becomes exponential. How will all this affect human evolution? We are entering a period where evolution has the potential to be self-directed, and we should consider what role AI could play in this. Can we model our own future? Will our progeny be robotic or cyborg in nature? As incredible as this sounds, the idea contains as much science as it does fiction. In his book Mind Children, Hans Moravec explores such visions of the future. Several other writers have also given much thought to the idea that man may merge into an ever closer symbiotic relationship with his technology, even to the point of replacing his biological nature with other technologies (McEachern 293-313; Mazlish). While such speculation may be dismissed as fantastic, imagine the likely reaction of even the average educated person of the last century to a description of our present technological society. Reality is like life: it is not static, but evolving with time. McEachern (325) provides a very interesting discussion of this and a concise expression of the interrelationship between increasingly complex arrangements of information and the present description of reality: “Information accumulates with time, and for that reason, reality changes with time. There is a fundamental difference between what is and what is known. But what is known affects what will be, because what is known can be used to boot-strap new processes into existence.”

The advance of technology is exploding exponentially with no limits in sight. AI is part of this uncharted advance, but it also offers a means by which we may be able to ride the wave of change rather than be drowned by it. It may even allow us to direct it. God-like AI would not have created us; rather, we would have created it. But it could create our future through self-directed evolution, and thus it would really become God-like in turn by recreating Man in a new image.
Babbie, Earl. The Practice of Social Research. 5th ed. Belmont: Wadsworth, 1983.
Blakemore, Colin, and Susan Greenfield, eds. Mindwaves: Thoughts on Intelligence, Identity and Consciousness. New York: Basil Blackwell, 1987.
Boden, Margaret A. Artificial Intelligence and Natural Man. 2nd ed. New York: Basic, 1987.
Borges, Jorge Luis. Ficciones. New York: Grove Weidenfeld, 1956.
Campbell, Jeremy. Grammatical Man: Information, Entropy, Language and Life. New York: Touchstone, 1982.
Cercone, Nick, and Gordon McCalla. "Artificial Intelligence: Underlying Assumptions and Basic Objectives." Journal of the American Society for Information Science 35 (1984): 280-290.
Chubin, Daryl E., et al., eds. Interdisciplinary Analysis and Research. Mt. Airy: Lomond, 1986.
Churchland, Paul M. Matter and Consciousness: A Contemporary Introduction to the Philosophy of Mind. Rev. ed. Cambridge: MIT Press, 1988.
Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books, 1993.
Davies, Roy. "The Creation of New Knowledge by Information Retrieval and Classification." Journal of Documentation 45 (1989): 273-301.
Dennett, Daniel C. Consciousness Explained. Boston: Little, Brown, 1991.
Ford, Nigel. How Machines Think: A General Introduction to Artificial Intelligence. New York: John Wiley, 1987.
Gibson, William. Count Zero. New York: Ace, 1986.
---. Neuromancer. New York: Ace, 1984.
Gleick, James. Chaos: Making a New Science. New York: Penguin, 1987.
Goldstein, Bruce E. Sensation and Perception. 3rd ed. Belmont: Wadsworth, 1989.
Göranzon, Bo, and Ingela Josefson, eds. Knowledge, Skill and Artificial Intelligence. Foundations and Applications of Artificial Intelligence. London: Springer-Verlag, 1988.
Graubard, Steven R., ed. The Artificial Intelligence Debate: False Starts, Real Foundations. Cambridge: MIT Press, 1988.
Gregory, Richard. “In Defense of Artificial Intelligence - A Reply to John Searle.” Mindwaves: Thoughts on Intelligence, Identity and Consciousness. Eds. Colin Blakemore and Susan Greenfield. New York: Basil Blackwell, 1987. 235-244.
Guilford, J. P. “The Structure-Of-Intellect Model.” Handbook of Intelligence: Theories, Measurements, and Applications. Ed. Benjamin B. Wolman. New York: Wiley, 1985. 225-266.
Hancox, Peter J., William J. Mills, and Bruce J. Reid. Keyguide to Information Sources in Artificial Intelligence/Expert Systems. London: Mansell, 1990.
Hillis, Daniel W. “Intelligence as Emergent Behavior; or, The Songs of Eden.” The Artificial Intelligence Debate: False Starts, Real Foundations. Ed. Steven R. Graubard. Cambridge: MIT Press, 1988. 175-190.
Horn, John L. “Remodeling Old Models of Intelligence.” Handbook of Intelligence: Theories, Measurements, and Applications. Ed. Benjamin B. Wolman. New York: Wiley, 1985. 267-300.
Humphreys, Lloyd G. “General Intelligence: An Integration of Factor, Test, and Simplex Theory.” Handbook of Intelligence: Theories, Measurements, and Applications. Ed. Benjamin B. Wolman. New York: Wiley, 1985. 201-224.
Hynd, George, and W. Grant Willis. “Neurological Foundations of Intelligence.” Handbook of Intelligence: Theories, Measurements, and Applications. Ed. Benjamin B. Wolman. New York: Wiley, 1985. 119-157.
Johnson-Laird, Philip N. Human and Machine Thinking. Hillsdale: Lawrence Erlbaum, 1993.
Kandel, Eric R., James H. Schwartz, and Thomas M. Jessell, eds. Principles of Neural Science. 3rd ed. Norwalk: Appleton & Lange, 1991.
Kelly, John. Artificial Intelligence: A Modern Myth. New York: Ellis Horwood, 1993.
Khawam, Yves J. "The AI Interdisciplinary Context: Single or Multiple Research Bases?" Library and Information Science Research 14 (1992): 57-75.
Langton, Christopher G., ed. Artificial Life: The Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems. Santa Fe Institute Studies in the Sciences of Complexity 6. Redwood City: Addison-Wesley, 1987.
Levine, Robert I., Diane E. Drang, and Barry Edelson. A Comprehensive Guide to AI and Expert Systems: Turbo Pascal Edition. New York: McGraw-Hill, 1988.
Levy, Steven. Artificial Life: A Report from the Frontier Where Computers Meet Biology. New York: Vintage, 1992.
Lindsay, David. A Voyage to Arcturus. New York: Ballantine, 1968.
Lycan, William G. Consciousness. Cambridge: MIT Press, 1987.
Mazlish, Bruce. The Fourth Discontinuity: The Co-evolution of Humans and Machines. New Haven: Yale UP, 1993.
Minsky, Marvin. The Society of Mind. New York: Simon & Schuster, 1986.
Moravec, Hans. Mind Children: The Future of Robot and Human Intelligence. Cambridge: Harvard UP, 1988.
Partridge, Derek, and Yorick Wilks, eds. The Foundations of Artificial Intelligence: A Sourcebook. Cambridge: Cambridge UP, 1990.
Pattee, H. H. “Simulations, Realizations, and Theories of Life.” Artificial Life: The Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems. Ed. Christopher G. Langton. Santa Fe Institute Studies in the Sciences of Complexity 6. Redwood City: Addison-Wesley, 1987. 63-75.
Penrose, Roger. The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics. New York: Penguin, 1989.
Robinson, William S. Computers, Minds & Robots. Philadelphia: Temple UP, 1992.
Richelle, Marc N. “Reconciling Views on Intelligence?” Intelligence: Reconceptualization and Measurement. Ed. Helga A. H. Rowe. Hillsdale: Lawrence Erlbaum, 1991. 19-33.
Schank, Roger C., and Peter Childers. The Cognitive Computer: On Language, Learning and Artificial Intelligence. Reading: Addison-Wesley, 1984.
---. “Story-Based Memory.” Minds, Brains & Computers. Ed. Ralph Morelli, et al. Norwood, NJ: Ablex, 1992. 134-151.
Schwartz, Jacob T. “The New Connectionism: Developing Relationships Between Neuroscience and Artificial Intelligence.” The Artificial Intelligence Debate: False Starts, Real Foundations. Ed. Steven R. Graubard. Cambridge: MIT Press, 1988. 123-141.
Searle, John. “Minds and Brains Without Programs.” Mindwaves: Thoughts on Intelligence, Identity and Consciousness. Eds. Colin Blakemore and Susan Greenfield. New York: Basil Blackwell, 1987. 209-233.
Shirai, Yoshiaki and Tsujii Jun-ichi. Artificial Intelligence: Concepts, Techniques and Applications. New York: Wiley & Sons, 1984.
Sokolowski, Robert. “Natural and Artificial Intelligence.” The Artificial Intelligence Debate: False Starts, Real Foundations. Ed. Steven R. Graubard. Cambridge: MIT Press, 1988. 45-64.
Teodorescu, Ioana. "Artificial Intelligence and Information Retrieval." Canadian Library Journal 44.1 (1987): 29-32.
Terrace, Herbert. “Thoughts Without Words.” Mindwaves: Thoughts on Intelligence, Identity and Consciousness. Eds. Colin Blakemore and Susan Greenfield. New York: Basil Blackwell, 1987. 123-137.
Turban, Efraim. Expert Systems and Applied Artificial Intelligence. New York: Macmillan, 1992.
Turing, Alan M. Collected Works of A.M. Turing. Ed. D.C. Ince. Amsterdam: North-Holland, 1992.
Turner, Frederick. Tempest, Flute & Oz: Essays on the Future. New York: Persea Books, 1991.
Wagman, Morton. Cognitive Science and Concepts of Mind: Toward a General Theory of Human and Artificial Intelligence. New York: Praeger, 1991.
Weckert, John, and Craig McDonald. "Artificial Intelligence, Knowledge Systems, and the Future Library: A Special Issue of Library Hi Tech." Library Hi Tech 10.1-2 (1992): 7-13.
Wiener, Norbert. God and Golem, Inc. Cambridge: MIT Press, 1964.
Winston, Patrick Henry. Artificial Intelligence. 3rd ed. Reading: Addison-Wesley, 1992.