Russell and Norvig (1995) report that, along with modern genetics, AI is regularly cited as the "field I would most like to be in" by scientists in other disciplines. Symbolic AI is a particularly attractive area of AI with its focus on the mind's symbolic processing. Perhaps this is because our minds, besides being very important to us, are both exceedingly familiar and yet scientifically mysterious. The challenge of building mental systems that behave intelligently is an ultimate challenge for many scientists.
Computer scientists use their minds (some of the time at least!) to program the computer with instructions so it can execute tasks. AI programmers write programs that enable the computer to work out for itself, to varying degrees, what to do. They aim to leave more of their minds on the machine than with 'ordinary' programs. This sounds cool, but is it really feasible?
Symbolic AI people are touchy about defining their subject. This is due to Symbolic AI having been more of a craft arising from a technology than a science with a philosophy. Whereas a science would be concerned with principle, and in particular with definitions, Symbolic AI has grabbed concepts from where it can find them and put them to work in its techniques.
Until recently, and perversely enough, definitions of Symbolic AI have been about avoiding a principled definition:
(a) (Winston, 1984, p1) "Artificial Intelligence is the study of ideas that enable computers to be intelligent."
From this we glean the notion that AI is to do with artefacts called computers. Intelligence remains undefined.
(b) (Minsky, 1968, pv): "Artificial intelligence is the science of making machines do things that would require intelligence if done by men".
People's natural ability to recognise intelligent things 'defines' intelligence without any reference to any principles. The classic AI definition.
(c) (Rich, 1983, p1): "Artificial Intelligence (A.I.) is the study of how to make computers do things at which, at the moment, people are better."
A somewhat offbeat definition which is nevertheless important. The reference to the current moment seems a little awry. It was probably introduced to help, like (b), with labelling certain activities as intelligent. However, do AI programs cease to be AI when they reach a human level of performance?
The more important point is the extra reference, compared to (b), to the activity being something people do better. The motivation for this seems to come from section 1.4, where Rich describes associative learning, which for people requires intelligence, as uninteresting since a computer can simply employ a look-up table <beware neural nets!>. The essential point is that AI should be intelligence as far as the artefact is concerned. <This raises the question of whether intelligence is a universal or a relative norm.>
(3) The physical symbol system hypothesis
P.J. Hayes' definition of AI gives the flavour of this section (1973, p.40):
"the study of intelligence as computation".
A physical symbol system has been defined by Newell & Simon, 1976, p.116:
"A physical symbol system consists of a set of entities, called symbols, which are physical patterns that can occur as components of another type of entity called an expression (or symbol structure). Thus, a symbol structure is composed of a number of instances (or tokens) of symbols related in some physical way (such as one token being next to another). At any instant of time the system will contain a collection of these symbol structures. Besides these structures, the system also contains a collection of processes that operate on expressions to produce other expressions: processes of creation, modification, reproduction and destruction. A physical symbol system is a machine that produces through time an evolving collection of symbol structures. Such a system exists in a world of objects wider than just these symbolic expressions themselves."
I interpret the key terms in Newell and Simon's definition to be as follows:
(i) token: a (physical) instantiation of a symbol
(ii) symbol: an entity having a determinable meaning within a formal symbol system
(iii) formal symbol system: a system made up of symbols and expressions which are manipulable according to (formal) rules
(iv) expression: a sequence of symbols
Example: electronic calculator
The symbols are '0', '1', ... '9', '.', '+', '=', ... (at the moment the marks on the page are not tokens of the symbols - that's why quotes are included). The tokens are the particular electronic pulses representing the symbols at any moment. The formal symbol system is the number system base 10 together with the laws of arithmetic. An expression would be 3 + 24 / (5 x 2). Note that the two tokens of '2' have different meanings in the expression: the '2' in '24' contributes twenty, while the '2' on its own denotes two.
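The calculator example can be sketched in a few lines of code. This is only an illustration, not a definition: the symbol set and the names below are my own choices, and the 'laws of arithmetic' are borrowed from Python's own expression evaluator rather than implemented from scratch.

```python
# A minimal sketch of a formal symbol system in the calculator's style:
# a set of symbols, a test for well-formed expressions (sequences of
# symbol tokens), and a process that transforms one expression into another.

SYMBOLS = set("0123456789.+-*/()= ")

def is_expression(s: str) -> bool:
    """An expression is a sequence of tokens of the system's symbols."""
    return all(ch in SYMBOLS for ch in s)

def evaluate(expr: str) -> str:
    """A process operating on an expression to produce another expression.
    The laws of arithmetic are delegated to Python's evaluator."""
    assert is_expression(expr)
    return str(eval(expr))

print(evaluate("3 + 24 / (5 * 2)"))  # -> 5.4
```

Note that the process maps expressions to expressions, as in Newell and Simon's definition; the intermediate electronic states of the machine running it are the tokens.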
The computer is able to simulate any physical symbol system due to its capacity to emulate any formal symbol system. As you should recall, this in turn is because the computer's operators are functionally complete, enabling it to transform any bit pattern into any other by processing through appropriate binary operators. The computer is a universal formal symbol manipulator.
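As a toy illustration of the last point (my own example, not from the original): a single binary operator, XOR, suffices to transform any bit pattern into any other, given a suitably chosen mask.

```python
# Any bit pattern can be turned into any other by XOR with one mask,
# since XOR-ing a bit with 1 flips it and XOR-ing with 0 leaves it alone.

def xor_mask(source: int, target: int) -> int:
    """The mask m such that source XOR m == target."""
    return source ^ target

source, target = 0b1011, 0b0110
mask = xor_mask(source, target)      # 0b1101
assert source ^ mask == target       # 1011 XOR 1101 = 0110
```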
Newell and Simon use their idea of a physical symbol system in the following hypothesis (p.116):
"The Physical Symbol System Hypothesis. A physical symbol system has the necessary and sufficient means for general intelligent action."
Note firstly that Newell and Simon should refer to a more powerful system, e.g. a universal physical symbol system, unless they wish electronic calculators to be seen as intelligent.
This hypothesis can be used to distinguish a particular branch of AI - which I shall call symbolic AI. If the hypothesis is not true, then the basis for symbolic AI, in as much as it is tied to the computer, is undermined. AI done using neural networks may be affected to a far lesser degree as it is subsymbolic.
Rich says that since the hypothesis cannot be logically proven or disproven, it awaits experimental verification. To my mind, though, it awaits a principled understanding of general intelligent action (which experiments may aid).
(4) Weak and Strong AI
Searle has made explicit a widely held distinction between those aspects which are technological and those which are scientific and philosophical (1980, p.417):
"According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. . . But according to strong AI, the computer is not merely a tool in the study of the mind; rather the appropriately programmed computer really is a mind, in the sense that computers given the right programs can literally be said to understand ... etc"
In this course our interests will be confined to the weak AI approach, that is, seeing AI programming as powerful programming. We shall nonetheless be modelling the real thing: intelligence, reasoning, knowing, etc. It will be useful to have a feel for whether, or in what way, the models are complete enough to become reality when put into software or hardware <wetware?!>.
(5) Cognitive Science and modelling
Cognitive Science is a term that embraces many disciplines: Psychology, Computer Science and Linguistics are three often mentioned. The common aim is to establish a science of cognition in entities both human and artificial. Cognition is a general term covering all the modes of knowing: perceiving, imagining, remembering, conceiving, judging, and reasoning (after George (1962)).
One part of the activities going on in Cognitive Science is to establish computational models of the mind. The emphasis on modelling comes from being primarily concerned with understanding the mind by pushing the latest relevant technological metaphor, in this case the computational metaphor, to its limits. The acknowledgement is that the metaphor should teach us something, but maybe not everything, about the mind, and so the computation has a modelling rather than a replication status.
If AI is placed as a sub-discipline of Cognitive Science we now have, for example, Charniak and McDermott (1985, p6):
"AI is the study of mental faculties through the use of computational models."
A problem with this definition is that it takes AI away from Computer Science and focusses on what may at the moment be seen by many as an exclusively human feature, albeit a crucial one, to the exclusion of techniques for artificial purposes. There is also the problem that intelligence has simply been replaced by mentation as something requiring definition.
(6) AI technique
Bench-Capon (1990, p7):
"The ... way of defining the area of AI ... is ... by reference to the techniques employed ... two techniques are of particular concern ... the technique of search ... involving heuristics, ... and the separating of a knowledge base from a program which operates on that knowledge base. An AI program is simply one which works by having a declarative representation of some body of knowledge relevant to a problem and by manipulating this knowledge."
This is fine as far as it goes and is particularly suited to symbolic AI. A question arises though as to where the techniques come from.
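Bench-Capon's second technique - a declarative knowledge base kept separate from the program that operates on it - can be sketched as follows. The facts and rules here are made up for illustration; the point is that the inference program is generic and knows nothing about the domain.

```python
# Declarative knowledge base: facts plus if-then rules, as plain data.
facts = {"has_feathers", "lays_eggs"}
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),   # premises -> conclusion
    ({"is_bird"}, "can_fly"),
]

def forward_chain(facts, rules):
    """Generic program that manipulates the knowledge base: repeatedly
    fire any rule whose premises all hold, until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))  # includes 'is_bird' and 'can_fly'
```

Changing the behaviour means editing the facts and rules, not the program - which is exactly the separation Bench-Capon has in mind.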
(7) The design of rational agents
In your main textbook by Russell and Norvig, the authors say that a system is rational if it does the right thing (p4). They go on to say what the right thing is by defining acting rationally as acting to achieve one's goals, given one's beliefs (p7).
This approach places AI somewhere other than pure logic or human behaviour and thought. The correct inferences of pure logic are not always involved in AI, since some actions in the world are not provably correct or arrived at using correct inference, e.g. pulling a hand away from a hot stove. The behaviour and thought of an AI agent need not be modelled on humans; non-human goals and beliefs may be used.
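"Acting to achieve one's goals, given one's beliefs" can be sketched very simply. The beliefs and action names below are invented for illustration - note there is no logical proof anywhere, just a lookup against what the agent believes.

```python
def choose_action(beliefs: dict, goal: str):
    """Pick an action whose believed outcome matches the goal.
    beliefs maps each available action to the outcome the agent
    believes that action will produce."""
    for action, outcome in beliefs.items():
        if outcome == goal:
            return action
    return None  # no believed means to the goal

# The hot-stove case: rational action without correct inference.
beliefs = {"keep_hand_on_stove": "burn", "withdraw_hand": "no_burn"}
print(choose_action(beliefs, "no_burn"))  # -> withdraw_hand
```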
Their emphasis on rational agents is a fair summary of a broad theme running implicitly through the development of AI. It is also a broader theme than was made explicit in previous textbooks or definitions.
However, be warned that in its pure form it leaves out important aspects as well. For example, another important feature of AI is that processing is heuristic rather than algorithmic. A heuristic is a procedure that is not necessarily always going to achieve its goal. This clashes with the notion of rationality, so that Russell and Norvig are forced to talk about limited rationality, where behaviour is imperfect and limited attempts are made rather than achievements. Another point is that intelligence seems to have disappeared from the menu. If goals are achieved given certain beliefs, does this make the system intelligent?
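Hill climbing is a standard example of a heuristic that is not guaranteed to achieve its goal; the landscape below is made up to show the failure. The heuristic is simply "always move uphill", and it can strand the search on a local peak.

```python
def hill_climb(landscape, start):
    """Follow the heuristic 'always move to the higher neighbour';
    stop when neither neighbour is higher (a local peak)."""
    pos = start
    while True:
        neighbours = [n for n in (pos - 1, pos + 1)
                      if 0 <= n < len(landscape)]
        best = max(neighbours, key=lambda n: landscape[n])
        if landscape[best] <= landscape[pos]:
            return pos  # local peak: the goal may not have been reached
        pos = best

heights = [1, 3, 2, 5, 9, 4]       # the global maximum (9) is at index 4
print(hill_climb(heights, 1))      # -> 1: stuck on the local peak
print(hill_climb(heights, 3))      # -> 4: from here it does find the top
```

Whether the goal is reached depends on where the search starts - limited attempts, in Russell and Norvig's phrase, rather than guaranteed achievements.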
Also, you should be aware that Rationalism is quite a spooky philosophy. People who are Rationalists try to act rationally all the time; they believe there is only one right thing to do - the right thing - for a given goal and set of beliefs. You may spot one or two people who are Rationalists - they stand out! Are we being sensible in creating only rational computers? You have been warned!
(8) The definitive version
The final version:
"The design of rational agents through the incorporation of mental models of intelligence in computational techniques".
Note that the model may be of a human or artificial aspect of intelligence and may be symbolic or subsymbolic. You may not think that this is what AI ought to be. It is though, my attempt to portray what Symbolic AI currently is.
Last updated at 11:54 on 27th September 2000