instance, is an ideal AI problem. There is no formal algorithm for its
solution, i.e., given a starting state and a goal state, one cannot determine,
prior to execution, the sequence of steps required to reach the goal from the
starting state. Such problems are called ideal AI problems. The well-
known water-jug problem [35], the Travelling Salesperson Problem (TSP)
[35], and the n-Queen problem [36] are typical examples of the classical AI
problems. Among the non-classical AI problems, the diagnosis problem and
the pattern classification problem deserve special mention. For solving an AI
problem, one may employ both AI and non-AI algorithms. An obvious
question is: what is an AI algorithm? Broadly speaking, an AI algorithm
means a non-conventional, intuitive approach to problem solving.
The key to the AI approach is intelligent search and matching. In an intelligent
search problem / sub-problem, given a goal (or starting) state, one has to reach
that state from one or more known starting (or goal) states. For example,
consider the 4-puzzle problem, where the goal state is known and one has to
identify the moves for reaching the goal from a pre-defined starting state.
Now, the fewer the states one generates for reaching the goal, the better
the AI algorithm. The question that then naturally arises is: how can the
generation of states be controlled? This, in fact, can be achieved by suitably designing
some control strategies, which would filter a few states only from a large
number of legal states that could be generated from a given starting /
intermediate state. As an example, consider the problem of proving a
trigonometric identity, something children routinely do during their schooldays.
What would they do at the beginning? They would start with one side of the
identity, and attempt to apply a number of formulae there to find the possible
resulting derivations. But they won’t really apply all the formulae there.
Rather, they identify the right candidate formula that fits there best, such that
the other side of the identity seems to be closer in some sense (outlook).
Ultimately, when the decision regarding the selection of the formula is over,
they apply it to one side (say the L.H.S) of the identity and derive the new
state. Thus they continue the process and go on generating new intermediate
states until the R.H.S (goal) is reached. But do they always select the right
candidate formula at a given state? From our experience, we know the answer
is “not always”. But what do we do if we find that, after generation of a
few states, the resulting expression seems to be far away from the R.H.S of
the identity? Perhaps we would prefer to move back to some old state, which is
more promising, i.e., closer to the R.H.S of the identity. The above line of
thinking has been realized in many intelligent search problems of AI. Some of
these well-known search algorithms are:
a) Generate and Test
b) Hill Climbing
c) Heuristic Search
d) Means and Ends analysis
(a) Generate and Test Approach:
This approach concerns the
generation of the state-space from a known starting state (root) of the problem
and continues expanding the reasoning space until the goal node or the
terminal state is reached. In fact, after generation, each new state is
compared with the known goal state. When the goal is
found, the algorithm terminates. In case there exist multiple paths leading to
the goal, then the path having the smallest distance from the root is preferred.
The basic strategy used in this search is only the generation of states and
their testing against the goal; it does not allow filtering of states.
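The strategy above can be sketched as a breadth-first generate-and-test loop. The toy state space below (integer states with hypothetical "add 1" and "double" moves) is an illustrative assumption, not part of the problems discussed in the text; the point is only the generate-then-test structure, with the shortest path preferred.

```python
from collections import deque

def generate_and_test(start, goal, successors):
    """Breadth-first generate-and-test: expand states level by level,
    testing each newly generated state against the known goal.  Because
    states are generated in order of distance from the root, the first
    path found to the goal is also the shortest one."""
    frontier = deque([[start]])          # each entry is a path from the root
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:                # test step
            return path
        for nxt in successors(state):    # generate step
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                          # goal not reachable

# Toy usage: from state 1, legal moves add 1 or double the state.
path = generate_and_test(1, 10, lambda s: [s + 1, s * 2])  # [1, 2, 4, 5, 10]
```

Note that nothing here filters the generated states; every legal successor is queued, which is exactly the weakness the next approaches address.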
(b) Hill Climbing Approach:
Under this approach, one has to first
generate a starting state and measure the total cost for reaching the goal from
the given starting state. Let this cost be f. While f ≤ a predefined utility value
and the goal is not reached, new nodes are generated as children of the current
node. However, in case all the neighborhood nodes (states) yield an identical
value of f and the goal is not included in the set of these nodes, the search
algorithm is trapped at a hillock or local extremum. One way to overcome this
problem is to select randomly a new starting state and then continue the above
search process. While proving trigonometric identities, we often use Hill
Climbing, perhaps unknowingly.
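A minimal sketch of this approach, including the random restart used to escape a hillock, is given below. The cost function and the integer state space are assumptions chosen purely for illustration.

```python
import random

def hill_climb(f, neighbours, random_state, restarts=10):
    """Hill climbing on a cost f (lower is better).  When no neighbour
    improves on the current state, the search is trapped at a hillock
    (local extremum), so a new random starting state is selected."""
    best = None
    for _ in range(restarts):
        state = random_state()
        while True:
            nxt = min(neighbours(state), key=f)
            if f(nxt) >= f(state):       # no improving neighbour: stuck
                break
            state = nxt
        if best is None or f(state) < f(best):
            best = state
    return best

# Toy usage (illustrative assumption): minimise f(x) = (x - 7)^2 on 0..20.
random.seed(0)
x = hill_climb(f=lambda x: (x - 7) ** 2,
               neighbours=lambda x: [max(0, x - 1), min(20, x + 1)],
               random_state=lambda: random.randint(0, 20))  # x == 7
```

On this convex toy cost every restart reaches the optimum; on a rugged cost surface, only the restarts rescue the search from local extrema.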
(c) Heuristic Search:
Classically, ‘heuristic’ means a rule of thumb. In
heuristic search, we generally use one or more heuristic functions to determine
the better candidate states among a set of legal states that could be generated
from a known state. The heuristic function, in other words, measures the
fitness of the candidate states. The better the selection of the states, the fewer
will be the number of intermediate states for reaching the goal. However, the
most difficult task in heuristic search problems is the selection of the heuristic
functions. One has to select them intuitively, so that in most cases
they can prune the search space correctly. We will discuss many of
these issues in a separate chapter on Intelligent Search.
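As a preview, heuristic selection of candidate states can be sketched as a best-first search that always expands the state the heuristic rates fittest. The one-dimensional state space and the distance heuristic below are illustrative assumptions only.

```python
import heapq

def best_first_search(start, goal, successors, h):
    """Best-first search: among all open states, always expand the one
    the heuristic h rates closest to the goal, thereby pruning states a
    blind generate-and-test would have explored."""
    frontier = [(h(start), start, [start])]   # min-heap ordered by h
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in successors(state):
            if nxt not in visited:
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

# Toy usage: integer states, moves +/-1, heuristic = distance to the goal.
path = best_first_search(0, 5, lambda s: [s - 1, s + 1],
                         h=lambda s: abs(5 - s))  # [0, 1, 2, 3, 4, 5]
```

With this heuristic the search never wastes effort on the states below 0, which blind expansion would also have generated.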
(d) Means and Ends Analysis:
This method of search attempts to
reduce the gap between the current state and the goal state. One simple way to
explore this method is to measure the distance between the current state and
the goal, and then apply an operator to the current state, so that the distance
between the resulting state and the goal is reduced. In many mathematical
theorem-proving processes, we use Means and Ends Analysis.
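The operator-selection step described above can be sketched as follows; the grid world, the unit-move operators, and the city-block distance are assumptions made up for the illustration.

```python
def means_ends(start, goal, operators, distance, max_steps=100):
    """Means-ends analysis sketch: at each step, apply the operator that
    most reduces the measured distance between the current state and the
    goal; stop when no operator narrows the gap any further."""
    state = start
    trace = [state]
    for _ in range(max_steps):
        if state == goal:
            break
        best = min((op(state) for op in operators),
                   key=lambda s: distance(s, goal))
        if distance(best, goal) >= distance(state, goal):
            break                        # no operator reduces the gap
        state = best
        trace.append(state)
    return trace

# Toy usage: move a point in the plane toward (3, 2) with unit moves.
ops = [lambda s: (s[0] + 1, s[1]), lambda s: (s[0] - 1, s[1]),
       lambda s: (s[0], s[1] + 1), lambda s: (s[0], s[1] - 1)]
dist = lambda s, g: abs(s[0] - g[0]) + abs(s[1] - g[1])
trace = means_ends((0, 0), (3, 2), ops, dist)  # ends at (3, 2)
```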
Besides the above methods of intelligent search, there exist a good
number of general problem solving techniques in AI. Among these, the most
common are: Problem Decomposition and Constraint Satisfaction.
Problem Decomposition:
Decomposition of a problem means breaking
a problem into independent (de-coupled) sub-problems and subsequently sub-
problems into smaller sub-problems and so on until a set of decomposed sub-
problems with known solutions is available. For example, consider the
following problem of integration:

I = ∫ (x² + 9x + 2) dx,

which may be decomposed into

∫ x² dx + ∫ 9x dx + ∫ 2 dx,

where, fortunately, none of the three resulting sub-problems needs further
decomposition, as their integrals are known.
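The decomposition above can be mimicked in a few lines: each term of the polynomial becomes an independent sub-problem with a known solution, and the sub-results are simply combined. The coefficient-list representation is an assumption chosen for the sketch.

```python
def integrate_poly(coeffs):
    """Decompose the integration of a polynomial (coeffs[k] is the
    coefficient of x**k) into one sub-problem per term, each with the
    known solution  ∫ a·x**k dx = a/(k+1) · x**(k+1).
    Returns the coefficient list of the antiderivative (constant = 0)."""
    # Each sub-problem is solved in isolation; the results are combined
    # simply by concatenating the coefficients.
    return [0.0] + [a / (k + 1) for k, a in enumerate(coeffs)]

# ∫ (x² + 9x + 2) dx  →  2x + (9/2)x² + (1/3)x³  (+ constant)
antideriv = integrate_poly([2, 9, 1])   # coefficients of 2 + 9x + x²
```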
Constraint Satisfaction:
This method is concerned with finding the
solution of a problem by satisfying a set of constraints. A number of
constraint satisfaction techniques are prevalent in AI. In this section, we
illustrate the concept by one typical method, called hierarchical approach for
constraint satisfaction (HACS) [47]. Given the problem and a set of
constraints, the HACS decomposes the problem into sub-problems; and the
constraints that are applicable to each decomposed problem are identified and
propagated down through the decomposed problems. The process of re-
decomposing the sub-problems into smaller problems and propagating the
constraints through the descendants of the reasoning space is continued until
all the constraints are satisfied. The following example illustrates the principle
of HACS with respect to a problem of extracting roots from a set of
inequality constraints.
Example 1.2:
The problem is to evaluate the variables X1, X2 and X3 from
the following set of constraints:

{ X1 ≥ 2; X2 ≥ 3; X1 + X2 ≤ 6; X1, X2, X3 ∈ I }.
For solving this problem, we break each ‘≥’ into ‘>’ and ‘=’ and propagate the
sub-constraints through the arcs of the tree. On reaching the end of the arcs,
we attempt to satisfy the propagated constraints in the parent constraint and
reduce the constraint set. The process is continued until the set of constraints
is minimal, i.e., they cannot be broken into smaller sets (fig. 1.3).
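The branching used in fig. 1.3 can be sketched as a small recursive procedure that splits each ‘≥’ constraint into an ‘=’ branch and a ‘>’ branch, discarding branches that can no longer satisfy X1 + X2 ≤ 6. The function below is an illustrative reconstruction, not the HACS algorithm of [47]; X3, being constrained only to be an integer, is omitted.

```python
def split_solutions(lo1, lo2, total):
    """Split-and-propagate sketch for Example 1.2: branch on X1 = lo1
    versus X1 > lo1 (and likewise for X2), pruning any branch in which
    X1 + X2 <= total can no longer hold.  Returns all (X1, X2) pairs."""
    solutions = []

    def branch(x1_min, x2_min):
        if x1_min + x2_min > total:
            return                       # dead branch: 'no solution'
        # '=' branch for X1: enumerate the X2 values it still permits.
        for x2 in range(x2_min, total - x1_min + 1):
            solutions.append((x1_min, x2))
        branch(x1_min + 1, x2_min)       # '>' branch: X1 > x1_min

    branch(lo1, lo2)
    return solutions

# { X1 >= 2; X2 >= 3; X1 + X2 <= 6; X1, X2 in I }
sols = sorted(split_solutions(2, 3, 6))  # [(2, 3), (2, 4), (3, 3)]
```

The three solutions agree with the leaves of the constraint tree in fig. 1.3.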
There exist quite a large number of AI problems that can be solved
by a non-AI approach. For example, consider the Travelling Salesperson
Problem. It is an optimization problem, which can be solved by many non-AI
algorithms. However, the Neighborhood search AI method [35] adopted for
this problem is useful for the following reason. The design of the AI
algorithm should be such that the time required for solving the problem is a
polynomial (and not an exponential) function of the size (dimension) of the
problem. When the computational time is an exponential function of the
dimension of the problem, we say the problem suffers from combinatorial explosion.
Further, the number of variables used for solving an AI problem should
also be kept to a minimum and should not grow with the dimension of the
problem. A non-AI algorithm for an AI problem can hardly satisfy the above
two requirements and that is why an AI problem should be solved by an AI
approach.
{X1 ≥ 2; X2 ≥ 3; X1 + X2 ≤ 6; X1, X2, X3 ∈ I}
├─ X1 = 2: {X1 = 2; X2 ≥ 3; X1 + X2 ≤ 6; Xj ∈ I, ∀j}
│    ├─ X2 = 3: {X1 = 2, X2 = 3}
│    └─ X2 > 3: {X1 = 2, X2 = 4}
└─ X1 > 2: {X1 = 3; X2 ≥ 3; X1 + X2 ≤ 6; Xj ∈ I, ∀j}
     ├─ X2 = 3: {X1 = 3, X2 = 3}
     └─ X2 > 3: no solution
Fig. 1.3: The constraint tree, where the arcs propagate the constraints, and
the nodes down the tree hold the reduced set of constraints.
1.4 The Disciplines of AI
The subject of AI spans a wide horizon. It deals with the various kinds
of knowledge representation schemes, different techniques of intelligent
search, various methods for resolving uncertainty of data and knowledge,
different schemes for automated machine learning and many others. Among
the application areas of AI, we have expert systems, game playing,
theorem proving, natural language processing, image recognition, robotics
and many others. The subject of AI has been enriched with a wide body
of knowledge from Philosophy, Psychology, Cognitive Science, Computer
Science, Mathematics and Engineering. Thus in fig. 1.4, they have been
referred to as the parent disciplines of AI. An at-a-glance look at fig. 1.4 also
reveals the subject area of AI and its application areas.
Fig. 1.4: AI, its parent disciplines (Philosophy & Cognitive Science,
Psychology, Computer Science and Mathematics), the subjects covered under
AI (reasoning, learning, planning, perception, knowledge acquisition,
intelligent search, uncertainty management and others) and its application
areas (game playing, theorem proving, language & image understanding,
robotics & navigation).
1.4.1 The Subject of AI
The subject of AI originated with game-playing and theorem-proving
programs and was gradually enriched with theories from a number of parent
disciplines. As a young discipline of science, the significance of the topics
covered under the subject changes considerably with time. At present, the
topics which we find significant and worthwhile to understand the subject are
outlined below:
Fig. 1.5: Pronunciation learning of a child from his mother.
Learning Systems:
Among the subject areas covered under AI, learning
systems need special mention. The concept of learning is illustrated here
with reference to the natural problem of a child learning pronunciation
from its mother (vide fig. 1.5). The hearing system of the child receives the
pronunciation of the character “A” and the voice system attempts to imitate it.
The difference between the mother’s and the child’s pronunciations, hereafter
called the error signal, is received by the child’s learning system through the
auditory nerve, and an actuation signal is generated by the learning system
through a motor nerve for adjustment of the pronunciation of the child. The
adaptation of the child’s voice system is continued until the amplitude of the
error signal is insignificantly low. Each time the voice system passes through
an adaptation cycle, the resulting tongue position of the child for speaking
“A” is saved by the learning process.
The learning problem discussed above is an example of the well-known
parametric learning, where the adaptive learning process adjusts the
parameters of the child’s voice system autonomously to keep its response
close enough to the “sample training pattern”. The artificial neural networks,
which represent the electrical analogue of the biological nervous systems, are
gaining importance for their increasing applications in supervised (parametric)
learning problems. Besides this type, the other common learning methods,
which we employ unknowingly, are inductive and analogy-based learning. In
inductive learning, the learner makes generalizations from examples. For
instance, noting that “cuckoo flies”, “parrot flies” and “sparrow flies”, the
learner generalizes that “birds fly”. On the other hand, in analogy-based
learning, the learner, for example, learns the motion of electrons in an atom
analogously from his knowledge of planetary motion in solar systems.
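The error-driven adaptation cycle of fig. 1.5 can be sketched as a one-parameter learning loop. The scalar "tongue position" parameter, the proportional gain, and the tolerance are all assumptions made up for the illustration; a real voice system would of course have many coupled parameters.

```python
def adapt(target, p0=0.0, gain=0.5, tol=1e-3, max_cycles=100):
    """Parametric-learning sketch of fig. 1.5: the learner's output p is
    compared with the 'sample training pattern' (target); the resulting
    error signal drives a proportional adjustment of the parameter until
    the error amplitude is insignificantly low."""
    p = p0
    cycles = 0
    for cycles in range(max_cycles):
        error = target - p         # error signal via the 'auditory nerve'
        if abs(error) < tol:
            break
        p += gain * error          # actuation via the 'motor nerve'
    return p, cycles

# Toy usage: the mother's pronunciation is the fixed target 1.0.
p, cycles = adapt(target=1.0)
```

Each pass of the loop corresponds to one adaptation cycle of the child's voice system, with the final value of p playing the role of the saved tongue position.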
Knowledge Representation and Reasoning:
In a reasoning
problem, one has to reach a pre-defined goal state from one or more given
initial states. So, the fewer the transitions needed to reach the goal
state, the higher the efficiency of the reasoning system. Increasing the
efficiency of a reasoning system thus requires minimization of intermediate
states, which indirectly calls for an organized and complete knowledge base.
A complete and organized storehouse of knowledge needs minimum search to
identify the appropriate knowledge at a given problem state and thus yields
the right next state on the leading edge of the problem-solving process.
Organization of knowledge, therefore, is of paramount importance in
knowledge engineering. A variety of knowledge representation techniques are
in use in Artificial Intelligence. Production rules, semantic nets, frames,
slots and fillers, and predicate logic are only a few of them. The selection of a
particular type of representational scheme of knowledge depends both on the
nature of applications and the choice of users.
Example 1.3:
A semantic net represents knowledge by a structured
approach. For instance, consider the following knowledge base:
Knowledge Base: A bird can fly with wings. A bird has wings. A bird has
legs. A bird can walk with legs.
The bird and its attributes here have been represented in figure 1.6 using a
graph, where the nodes denote the events and the arcs denote the relationship
between the nodes.
Fig. 1.6: A semantic net representation of "birds".
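The semantic net of fig. 1.6 can be encoded directly as a set of (node, relation, node) triples; querying the net then amounts to filtering the triples. The triple representation and the helper function below are one possible encoding, assumed for illustration.

```python
# The knowledge base of Example 1.3 as labelled arcs between nodes.
triples = {
    ("bird", "can", "fly"),  ("fly", "with", "wings"),
    ("bird", "has", "wings"), ("bird", "has", "legs"),
    ("bird", "can", "walk"), ("walk", "with", "legs"),
}

def attributes(net, node, relation):
    """All nodes reachable from `node` over arcs labelled `relation`."""
    return {t for (s, r, t) in net if s == node and r == relation}

abilities = attributes(triples, "bird", "can")   # {"fly", "walk"}
body_parts = attributes(triples, "bird", "has")  # {"wings", "legs"}
```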
Planning:
Another significant area of AI is planning. The problems of
reasoning and planning share many common issues, but have a basic
difference that originates from their definitions. The reasoning problem is
mainly concerned with the testing of the satisfiability of a goal from a given
set of data and knowledge. The planning problem, on the other hand, deals
with the determination of the methodology by which a successful goal can be
achieved from the known initial states [1]. Automated planning finds
extensive applications in robotics and navigational problems, some of which
will be discussed shortly.
Knowledge Acquisition:
Acquisition (elicitation) of knowledge is
as hard for machines as it is for human beings. It includes generation of
new pieces of knowledge from a given knowledge base, setting dynamic data
structures for existing knowledge, learning knowledge from the environment
and refinement of knowledge. Automated acquisition of knowledge by
machine learning approach is an active area of current research in Artificial
Intelligence [5], [20].
Intelligent Search:
Search problems, which we generally encounter in
Computer Science, are of a deterministic nature, i.e., the order of visiting the
elements of the search space is known. For example, in depth first and breadth
first search algorithms, one knows the sequence of visiting the nodes in a tree.
However, search problems, which we will come across in AI, are