
A Framework for Linguistic Logic Programming

Tru H. Cao,1,∗ Nguyen V. Noi2,†
1 Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology, Viet Nam
2 Faculty of Engineering, Tien Giang University, Viet Nam

Lawry's label semantics for modeling and computing with linguistic information in natural language provides a clear interpretation of linguistic expressions and thus a transparent model for real-world applications. Meanwhile, annotated logic programs (ALPs) and their fuzzy extension AFLPs have been developed as extensions of classical logic programs, offering a powerful computational framework for handling uncertain and imprecise data within logic programs. This paper proposes annotated linguistic logic programs (ALLPs), which embed Lawry's label semantics into the ALP/AFLP syntax, providing a linguistic logic programming formalism for the development of automated reasoning systems involving soft data, i.e., the vague and imprecise concepts that occur frequently in natural language. The syntax of ALLPs is introduced, and their declarative semantics is studied. The ALLP SLD-style proof procedure is then defined and proved to be sound and complete with respect to the declarative semantics of ALLPs. © 2010 Wiley Periodicals, Inc.

1. INTRODUCTION
Linguistic rules with vague and imprecise concepts frequently occur in uncertainty reasoning systems. For example, in the design for safe navigation of autonomous planetary rovers,1 like NASA's Mars Exploration Rovers, many linguistic rules with vague linguistic labels are used in the reasoning modules, such as
if Ct is low, then v is slow
if Ct is medium, then v is moderate
if Ct is high, then v is fast
if Rc is few and Rs is small, then β is smooth
if Rc is few and Rs is large, then β is rough
if Rc is many and Rs is small, then β is rough
if Rc is many and Rs is large, then β is rocky

where Ct is traction coefficient, v is rover speed, Rc is rock concentration, Rs is
rock size, and β is terrain roughness.
The problem is how to model such linguistic rules with vague linguistic labels and then how to reason with them. For a computer to deal with the semantics of vague linguistic labels, a framework for linguistic modeling is required, together with a logical formalism for automated reasoning.

A general methodology for computing with words was proposed by Zadeh2,3 on the basis of fuzzy set theory and fuzzy logic and, in particular, the idea of linguistic variables. A linguistic variable is one that takes natural language terms as values, where the meaning of each term is given by a fuzzy set on some underlying domain of discourse. However, the main problem in applying that methodology is that membership functions do not have a clear interpretation, and consequently it is difficult to see how to obtain them.
Recently, Lawry4,5 introduced a new framework for linguistic modeling, where labels are assumed to be chosen from a finite predefined set and the set of appropriate labels for a value is defined as a random set from a population of individuals into the set of subsets of labels. The appropriateness degree, in contrast to the membership degree, of a value to a label is then derived from that probability distribution, or mass assignment, on the set of label subsets. The framework also provides a coherent calculus for linguistic expressions composed by logical connectives on linguistic labels. However, it still lacks a formalism for the development of linguistic logic programs to build automated reasoning systems for soft computing.
In Ref. 6, for instance, a formalism was proposed to embed fuzzy terms, i.e., fuzzy sets as values, into logic programs to create linguistic logic programs. In Ref. 7, a sound and complete fuzzy linguistic logic programming system was developed using a hedge algebra of linguistic truth variables. However, the semantics of those logical formalisms are still based on classical fuzzy sets and fuzzy logic. Meanwhile, annotated logic programs (ALPs)8 offer a powerful computational formalism for quantitative reasoning. The key idea of annotated logic programming is to annotate usual atomic formulas with multivalued truth-values in a truth-value lattice. An annotated logic program is a set of Horn-like clauses of the form A:μ ← B1:μ1 & . . . & Bn:μn, where A:μ and the Bi:μi are annotated atoms. A Herbrand-like interpretation is then a mapping from a conventional Herbrand base, i.e., the set of all ground atoms without annotations, to an annotation lattice. In annotated fuzzy logic programs (AFLPs),9 the framework was extended so that both atoms and terms are considered as objects that can be annotated by fuzzy sets on different domains.
The main purpose of this paper is to propose the annotated linguistic logic program (ALLP) formalism that embeds Lawry's label semantics into the annotated logic program syntax, for automated reasoning with vague and imprecise data expressed as linguistic expressions. The syntax and the declarative semantics of ALLPs were first introduced and studied in Refs. 10 and 11. Section 2 gives an overview of
Lawry’s framework for linguistic modeling. Sections 3 and 4, respectively, present
the syntax and the declarative semantics of a logic programming language, such as
interpretations and satisfaction relations, for ALLPs, and study their fixpoint semantics, which is the bridge between the declarative and the procedural semantics of
logic programs. In Section 5, the ALLP SLD-style proof procedure is developed and
proved to be sound and complete with respect to (wrt) the declarative semantics of
ALLPs. Section 6 briefly presents an implementation for ALLPs. Finally, Section 7
is for conclusions and suggestions for future research.
2. LAWRY’S FRAMEWORK FOR LINGUISTIC MODELING
The framework starts with the intuition that a proposition such as Bill is tall means that tall is an appropriate label for Bill's height. Its main assumption is then that the appropriateness degree of a value to a linguistic expression is obtained from a probability distribution on a set of subsets of linguistic labels for that value. The following definitions are given in Ref. 5.
2.1. Label Semantics
Let Bill's height be represented by a variable x in a domain of discourse Ω, let LA be a fixed finite set of possible labels such as short, medium, tall, . . . for labeling values of x, and let V = {I1, I2, . . . , Im} be a set of individuals who make or interpret statements regarding Bill's height. All labels in LA are assumed to be known to, and understood identically by, every individual I ∈ V. For each a ∈ Ω, each individual I ∈ V identifies a set DaI ⊆ LA that stands for the description of a given by I, i.e., the set of words I considers appropriate to label a. For example, an expression like Bill is tall, as asserted by individual I, is interpreted to mean tall ∈ DhI, where h denotes the value of Bill's height. Letting I vary across the population of individuals V, a random set Dx is obtained as a mapping from V to the power set of LA, where Dx(I) = DxI. The random set Dx can be viewed as the description of variable x in terms of the labels in LA. The definition of the mass assignment associated with Dx then depends on a prior distribution PV over V.
DEFINITION 2.1. Let x ∈ Ω and Dx be a random set from V to 2^LA. A mass assignment for Dx is defined by

mDx(S) = PV({I ∈ V | DxI = S}), ∀x ∈ Ω, ∀S ⊆ LA.

A higher level measure associated with mDx quantifies the degree of appropriateness of a particular word L ∈ LA as a label for x.
DEFINITION 2.2. For L ∈ LA and x ∈ Ω, the appropriateness degree of label L to x is defined by

μL(x) = Σ_{S ⊆ LA, L ∈ S} mDx(S), ∀x ∈ Ω.

As such, μL is a mapping from Ω to [0, 1] and thus can technically be viewed as the membership function of a fuzzy set. Lawry used the term “appropriateness
degree” to reflect the underlying semantics more accurately and to highlight the
distinct calculus for μL .
Example 2.3 (cf. Ref. 12). Students are categorized as being weak, average, or good on their marks, graded between 1 and 5, by three professors. In this case LA = {weak, average, good}, Ω = {1, 2, 3, 4, 5}, and V = {I1, I2, I3}. A possible assignment of appropriate labels is as follows:
D1I1 = D1I2 = D1I3 = {weak}, D2I1 = D2I2 = {weak}, D2I3 = {weak, average},
D3I1 = {average}, D3I2 = {weak, average}, D3I3 = {average, good},
D4I1 = {average, good}, D4I2 = D4I3 = {good}, D5I1 = D5I2 = D5I3 = {good}.
Assuming a uniform distribution PV, this generates the following mass assignments:

mDx(S) = PV({I ∈ V | DxI = S}) = |{I ∈ V | DxI = S}| / |V|

mD1({weak}) = 1, mD2({weak}) = 2/3, mD2({weak, average}) = 1/3,
mD3({average}) = 1/3, mD3({weak, average}) = 1/3, mD3({average, good}) = 1/3,
mD4({good}) = 2/3, mD4({average, good}) = 1/3, mD5({good}) = 1.

The appropriateness degrees for weak, average, and good are then evaluated by

μweak(1) = μweak(2) = 1, μweak(3) = 1/3, μaverage(2) = 1/3, μaverage(3) = 1,
μaverage(4) = 1/3, μgood(3) = 1/3, μgood(4) = μgood(5) = 1.
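For concreteness, the following Python sketch (an illustration only, not part of the original formalism; the dictionary D and the helper names are our own encoding) computes the mass assignments of Definition 2.1 and the appropriateness degrees of Definition 2.2 from the description sets of Example 2.3.

from collections import Counter
from fractions import Fraction

# Description sets DxI from Example 2.3: for each mark x in Omega = {1,...,5},
# the label set each of the three professors considers appropriate.
D = {
    1: [{"weak"}, {"weak"}, {"weak"}],
    2: [{"weak"}, {"weak"}, {"weak", "average"}],
    3: [{"average"}, {"weak", "average"}, {"average", "good"}],
    4: [{"average", "good"}, {"good"}, {"good"}],
    5: [{"good"}, {"good"}, {"good"}],
}

def mass_assignment(desc_sets):
    """mDx(S) under a uniform prior PV over the individuals (Definition 2.1)."""
    counts = Counter(frozenset(s) for s in desc_sets)
    return {s: Fraction(c, len(desc_sets)) for s, c in counts.items()}

def appropriateness(label, desc_sets):
    """muL(x): the total mass of the label subsets containing L (Definition 2.2)."""
    return sum((m for s, m in mass_assignment(desc_sets).items() if label in s),
               Fraction(0))

print(mass_assignment(D[3]))             # three subsets, each with mass 1/3
print(appropriateness("weak", D[3]))     # 1/3
print(appropriateness("average", D[3]))  # 1

Running it reproduces mD3 and the degrees μweak(3) = 1/3 and μaverage(3) = 1 listed above.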
2.2. Linguistic Expressions
For more general linguistic reasoning, a mechanism is required for evaluating compound label expressions built up using logical connectives such as ∧, ∨, →, and ¬, so as to interpret an expression like Bill is not tall. This statement means that tall is not an appropriate label for x, i.e., {tall} ⊄ Dx. That is, negation is
used to express the nonsuitability of a label. Conjunction and disjunction are then
taken as having the obvious meanings so that Bill is tall and medium, for instance,
is interpreted as saying that both tall and medium are appropriate for x (i.e., {tall,
medium} ⊆ Dx ), and Bill is tall or medium is interpreted as saying that either tall
is an appropriate label for x or medium is an appropriate label for x (i.e., {tall} ⊆ Dx or {medium} ⊆ Dx). In the case of implication, for instance, very tall implies
tall means that whenever very tall is an appropriate label for x so is tall. Formal
definitions are presented below.
DEFINITION 2.4. The logical connectives on labels are interpreted as follows:
(i) ¬L1 means that L1 is not an appropriate label.
(ii) L1 ∧ L2 means that both L1 and L2 are appropriate labels.
(iii) L1 ∨ L2 means that either L1 or L2 is an appropriate label.
(iv) L1 → L2 means that L2 is an appropriate label whenever L1 is.

DEFINITION 2.5. Let LA = {L1, L2, . . . , Ln} be a set of possible labels. Then the set LE of linguistic expressions of LA is recursively defined as follows:
(i) Li ∈ LE for i = 1..n.
(ii) If ϕ, ρ ∈ LE then ¬ϕ, ϕ ∧ ρ, ϕ ∨ ρ, ϕ → ρ ∈ LE.

Each linguistic expression identifies a set of subsets of LA that captures its
meaning as defined below.
DEFINITION 2.6. The appropriate label set of ϕ ∈ LE is a set of subsets of LA, denoted by λ(ϕ) and recursively defined as follows:
(i) λ(Li) = {S ⊆ LA | Li ∈ S}.
(ii) λ(¬ϕ) = 2^LA \ λ(ϕ), the complement of λ(ϕ).
(iii) λ(ϕ ∧ ρ) = λ(ϕ) ∩ λ(ρ).
(iv) λ(ϕ ∨ ρ) = λ(ϕ) ∪ λ(ρ).
(v) λ(ϕ → ρ) = λ(¬ϕ) ∪ λ(ρ).

Example 2.7. For LA = {weak, average, good}, the appropriate label sets are
evaluated as follows:
λ(weak) = {{weak}, {weak, average}, {weak, good}, {weak, average, good}}
λ(average) = {{average}, {average, weak}, {average, good}, {weak, average, good}}
λ(weak ∧ average) = {{weak, average}, {weak, average, good}}
λ(weak ∨ average) = {{weak}, {average}, {weak, average}, {weak, good},
{average, good}, {weak, average, good}}
λ(weak → average) = {{weak, average}, {weak, average, good}, {average, good},
{average}, {good}, Ø}
λ(¬weak) = {{average}, {good}, {average, good}, Ø}.

Then the following definition generalizes Definition 2.2 for the appropriateness
degree of a linguistic expression to a value on the domain of discourse:
DEFINITION 2.8. For ϕ ∈ LE and x ∈ Ω, the appropriateness degree of the linguistic expression ϕ to x is defined by

μϕ(x) = Σ_{S ∈ λ(ϕ)} mDx(S)

Example 2.9. (See Examples 2.3 and 2.7.)
For ϕ = weak ∧ average, one has

μweak∧average(1) = mD1({weak, average}) + mD1({weak, average, good}) = 0
μweak∧average(2) = μweak∧average(3) = 1/3
μweak∧average(4) = μweak∧average(5) = 0.

For ϕ = weak ∨ average, one has

μweak∨average(1) = μweak∨average(2) = μweak∨average(3) = 1
μweak∨average(4) = 1/3
μweak∨average(5) = 0.
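The evaluation of compound expressions in Definitions 2.6 and 2.8 can be sketched similarly in Python (again an illustration only; the tuple encoding of expressions and the helpers lam and mu are our own assumptions):

from fractions import Fraction
from itertools import combinations

LA = ["weak", "average", "good"]
FULL = {frozenset(c) for r in range(len(LA) + 1) for c in combinations(LA, r)}

# A linguistic expression is a label string or a nested tuple, e.g. ("and", "weak", "average").
def lam(expr):
    """Appropriate label set lambda(expr) as a set of label subsets (Definition 2.6)."""
    if isinstance(expr, str):                      # a single label Li
        return {S for S in FULL if expr in S}
    op, *args = expr
    if op == "not":
        return FULL - lam(args[0])
    if op == "and":
        return lam(args[0]) & lam(args[1])
    if op == "or":
        return lam(args[0]) | lam(args[1])
    if op == "implies":
        return lam(("not", args[0])) | lam(args[1])
    raise ValueError(op)

def mu(expr, mass):
    """Appropriateness degree of expr for a value whose mass assignment is mass (Definition 2.8)."""
    ls = lam(expr)
    return sum((m for S, m in mass.items() if S in ls), Fraction(0))

# Mass assignment mD3 from Example 2.3.
m3 = {frozenset({"average"}): Fraction(1, 3),
      frozenset({"weak", "average"}): Fraction(1, 3),
      frozenset({"average", "good"}): Fraction(1, 3)}

print(mu(("and", "weak", "average"), m3))   # 1/3, as in Example 2.9
print(mu(("or", "weak", "average"), m3))    # 1

Printing lam directly for weak, average, and their combinations reproduces the appropriate label sets of Example 2.7.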

2.3. Defuzzification
In some cases, one may want to obtain a single real value from a linguistic expression. For example, one may want to know what the fact Bill is tall tells us about Bill's height. Lawry introduced a defuzzification technique to estimate a real value for a linguistic expression. For a linguistic expression ϕ on a domain of discourse Ω, Bayes's theorem gives the following posterior distribution:
Pr(x = a | ϕ) = Pr(ϕ | x = a) P(x = a) / Σ_{x ∈ Ω} Pr(ϕ | x) P(x),  ∀a ∈ Ω
According to the label semantics, one has
Pr(ϕ | x = a) = Pr(Dx ∈ λ(ϕ) | x = a) = Pr(Da ∈ λ(ϕ)) = Σ_{S ∈ λ(ϕ)} mDa(S) = μϕ(a)

⇒ Pr(x = a | ϕ) = μϕ(a) P(x = a) / Σ_{x ∈ Ω} μϕ(x) P(x)

Then the mean value for ϕ can be obtained from this distribution by

aϕ = Σ_{a ∈ Ω} a · Pr(x = a | ϕ) / Σ_{a ∈ Ω} Pr(x = a | ϕ) = Σ_{a ∈ Ω} a · μϕ(a) P(x = a) / Σ_{a ∈ Ω} μϕ(a) P(x = a)
In the continuous case, such a defuzzified value is derived similarly, using integration instead of summation.
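In the discrete case, the mean-value defuzzification above is just a weighted average, as in the short sketch below (a hypothetical helper of ours; the appropriateness degrees are those of weak ∧ average from Example 2.9 and the prior is assumed uniform):

from fractions import Fraction

def defuzzify(mu_values, prior=None):
    """Mean-value defuzzification over a finite domain (Section 2.3).

    mu_values maps each value a to the appropriateness degree mu_phi(a);
    prior maps a to P(x = a) and defaults to the uniform distribution."""
    domain = list(mu_values)
    if prior is None:
        prior = {a: Fraction(1, len(domain)) for a in domain}
    weights = {a: mu_values[a] * prior[a] for a in domain}
    total = sum(weights.values())
    return sum(a * w for a, w in weights.items()) / total

# mu_{weak and average} from Example 2.9 is nonzero only at marks 2 and 3.
print(defuzzify({1: 0, 2: Fraction(1, 3), 3: Fraction(1, 3), 4: 0, 5: 0}))  # 5/2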
3. SYNTAX OF ALLPS
3.1. Annotation Base
As for ALPs8 and AFLPs,9 an annotation base provides the values to be annotated to objects in ALLPs. These values are linguistic expressions on different domains of discourse. A set of linguistic expressions based on Lawry's label semantics forms a complete lattice.10 However, given a possible label set LA, the computational complexity of computing the appropriate label set and appropriateness degrees of a linguistic expression ϕ is O(2^n), where n = |LA|. Meanwhile, there exist label sets S ⊆ LA such that mDx(S) = 0 for all x ∈ Ω. Removing those label sets from the appropriate label set λ(ϕ) reduces the computational complexity. Therefore, we introduce the restricted label semantics as defined below.
DEFINITION 3.1. Let Z = {S ⊆ LA | mDx(S) = 0, ∀x ∈ Ω} and F = 2^LA \ Z. The restricted appropriate label set of ϕ ∈ LE is denoted and defined by λr(ϕ) = λ(ϕ) \ Z.
In Ref. 5, F in Definition 3.1 was called a set of focal elements.
PROPOSITION 3.2. Let ϕ, ρ ∈LE:
(i) λr (Li ) = {S ∈ F |Li ∈ S}.
(ii) λr (¬ϕ) = F \λr (ϕ).
(iii) λr (ϕ ∧ ρ) = λr (ϕ) ∩ λr (ρ).
(iv) λr (ϕ ∨ ρ) = λr (ϕ) ∪ λr (ρ).
(v) λr (ϕ → ρ) = λr (¬ϕ) ∪ λr (ρ).

Proof. This proof is straightforward from Definition 2.6 and Definition 3.1.
PROPOSITION 3.3. For ϕ ∈ LE and x ∈ Ω, the appropriateness degree of the linguistic expression ϕ to x is

μϕ(x) = Σ_{S ∈ λr(ϕ)} mDx(S)

Proof. This proof is straightforward from Definition 2.8 and Definition 3.1.
To evaluate the complexity of computing the appropriate label set and appropriateness degrees of a linguistic expression with respect to the restricted label
semantics, we define k-overlap possible label sets as follows:
DEFINITION 3.4. A possible label set LA = {L1, L2, . . . , Ln} for a domain Ω is k-overlap if there exists k (1 ≤ k ≤ n) such that for every j > k:

∀i = 1..n − j + 1, ∀x ∈ Ω: mDx({Li, Li+1, . . . , Li+j−1}) = 0.

PROPOSITION 3.5. Let LA be k-overlap. Then |F|max = 1 + k(2n − k + 1)/2, where n = |LA|.

Proof. Suppose that a focal set of cardinality j is of the form {Li, Li+1, . . . , Li+j−1} such that ∃x ∈ Ω: mDx({Li, Li+1, . . . , Li+j−1}) > 0. Clearly, there are at most n − j + 1 such focal sets. Meanwhile, according to the definition of a k-overlap possible label set, the maximum cardinality of focal sets is k. So the maximum number of focal sets is the sum of the maximum numbers of focal sets of cardinality j, for j from 1 to k, plus 1 for the empty set. Hence

|F|max = 1 + Σ_{j=1}^{k} (n − j + 1) = 1 + k(2n − k + 1)/2

As a result of Definition 3.4 and Proposition 3.5, for a k-overlap possible label set, the complexity of computing the appropriate label set and appropriateness degrees of a linguistic expression ϕ is O(n) instead of O(2^n).
DEFINITION 3.6. The set of linguistic expressions constructed from a finite possible label set LA on a domain Ω forms a complete lattice T, where
(i) The partial order, denoted by "≤ι", is defined by ∀ϕ, ρ ∈ T: ϕ ≤ι ρ if and only if (iff) λr(ρ) ⊆ λr(ϕ).
(ii) For a subset S of T, lub(S) and glb(S) are the linguistic expressions defined by lub(S) = ∧ϕ∈S ϕ and glb(S) = ∨ϕ∈S ϕ.
(iii) The greatest element ⊤ and the least element ⊥ of T are such that λr(⊤) = Ø and λr(⊥) = F.

Example 3.7. Let LA = {small, medium, large} be 2-overlap on Ω = [0, 10], with mass assignments defined as in Example 23 in Ref. 5. Then Z = {{small, large}, {small, medium, large}} and F = {Ø, {small}, {small, medium}, {medium}, {medium, large}, {large}}. Since n = |LA| = 3 and k = 2, one has

|F|max = 1 + k(2n − k + 1)/2 = 6.

The restricted appropriate label sets are evaluated as follows:
λr (small) = {{small}, {small, medium}}
λr (medium) = {{medium}, {medium, small}, {medium, large}}
λr (small ∧ medium) = {{small, medium}}
λr (small ∨ medium) = {{small}, {medium}, {small, medium}, {medium,
large}}
λr (small → medium) = {{small, medium}, {medium, large}, {medium},
{large}, Ø}
λr (¬small) = {{medium}, {large},{medium, large}, Ø}
λr (¬large) = {{small}, {medium}, {small, medium}, Ø}.
Therefore,
small ∨ medium ≤ι small ≤ι small ∧ medium
¬large ≤ι small ∧ medium.
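As a small illustration of the restricted semantics, the sketch below evaluates λr over the focal elements of Example 3.7 and checks the two orderings stated above (the list F, the tuple encoding of expressions, and the helper names lam_r and le_iota are our own assumptions):

# Focal elements F of Example 3.7 for LA = {small, medium, large}.
F = [frozenset(), frozenset({"small"}), frozenset({"small", "medium"}),
     frozenset({"medium"}), frozenset({"medium", "large"}), frozenset({"large"})]

def lam_r(expr):
    """Restricted appropriate label set lambda_r(expr) over the focal elements F."""
    if isinstance(expr, str):                     # a single label
        return {S for S in F if expr in S}
    op, *args = expr
    if op == "not":
        return set(F) - lam_r(args[0])
    if op == "and":
        return lam_r(args[0]) & lam_r(args[1])
    if op == "or":
        return lam_r(args[0]) | lam_r(args[1])
    if op == "implies":
        return lam_r(("not", args[0])) | lam_r(args[1])
    raise ValueError(op)

def le_iota(phi, rho):
    """phi <=_iota rho iff lambda_r(rho) is a subset of lambda_r(phi) (Definition 3.6)."""
    return lam_r(rho) <= lam_r(phi)

print(le_iota(("or", "small", "medium"), ("and", "small", "medium")))  # True
print(le_iota(("not", "large"), ("and", "small", "medium")))           # True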

DEFINITION 3.8. An annotation base comprises a set of complete lattices of linguistic
expressions defined by Definition 3.6. An annotation is either a linguistic expression
(i.e., constant annotation, or c-annotation for brevity) of a lattice in an annotation
base, an annotation variable (v-annotation), or an annotation term (t-annotation).
An annotation term is recursively defined to be of either of the following forms:
(i) A c-annotation.
(ii) A v-annotation.
(iii) τ (τ1 , τ2 , . . . ,τm ), where each τi (1 ≤ i ≤ m) is an annotation term and τ ( ) is a
continuous and monotonic computable function whose value is a linguistic expression.

Annotation terms of the first two forms are called simple annotation terms.
Following Ref. 8, the notion of ideals of a complete lattice defined below is
used to define the general semantics of ALLPs.
DEFINITION 3.9. An ideal of a complete lattice T is any subset S of T such that
(i) S is downward closed, i.e., if a ∈ S, b ∈ T , and b ≤ι a then b ∈ S, and
(ii) S is closed under finite least upper bounds, i.e., if a, b ∈ S then lub(a, b) ∈ S.

The classical set intersection of two ideals is also an ideal, whence the least
ideal containing all ideals in a given set of ideals is unique. Thus, the set of all ideals
of T forms a complete lattice with the classical subset relation as the partial order,
also denoted by ≤ι . That is, given two ideals s and t, s ≤ι t iff s ⊆ t. The glb and
the lub of a set of ideals are, respectively, their set intersection and the least ideal
containing them. The greatest element is T itself and the least element is the empty
set Ø.
Since linguistic expressions on a domain form a complete lattice wrt the restricted label semantics, ALLPs can be developed in the framework of ALPs8 and
its fuzzy extension AFLPs.9 In the rest of this paper, the definitions for ALLPs are
adapted from the corresponding ones for ALPs and AFLPs, and the proofs for the
stated propositions are similar to those for the corresponding propositions therein.
3.2. Annotated Objects

An ALLP language consists of a conventional first-order language,13 an annotation base, and a conformity relation that maps each predicate and function symbol
of the first-order language to a lattice of the annotation base. As for AFLP,9 in an
ALLP language, both atoms and terms of its first-order language are called objects.
DEFINITION 3.10. An annotated object of an ALLP language is Obj: ϕ, where Obj
is an object and ϕ is an annotation, satisfying the object-annotation conformity
relation of the language.
DEFINITION 3.11. An ALLP clause is a Horn-like clause of the form Obj:ϕ ← Obj1:ϕ1 & Obj2:ϕ2 & . . . & Objn:ϕn, where ϕ is a t-annotation, each ϕi is a
c-annotation or a v-annotation, and every annotation variable (if any) occurring in
ϕ also occurs in at least one of ϕi s. An ALLP is a finite set of ALLP clauses.
Example 3.12. (cf. Ref. 14). Let SAL be an employee's salary defined on the domain ΩSAL = [0, 10] and LASAL = {low, moderate, good, very good} with the following mass assignments:
mD0 ({low}) = mD1 ({low}) = 1
mD2 ({low, moderate}) = 0.5
mD3 ({moderate}) = mD4 ({moderate}) = 1
mD5 ({moderate, good}) = 0.5
mD6 ({good}) = mD7 ({good}) = 1
mD8 ({good, very good}) = 0.5
mD9 ({very good}) = mD10 ({very good}) = 1.
Let YRS be an employee's number of years of experience defined on the domain ΩYRS = [0, 40] and LAYRS = {junior, experienced, senior} with the following partial mass assignments:
mD0 ({junior}) = 1; mD1 ({junior}) = 0.8;
mD2 ({junior}) = 0.6; mD3 ({junior}) = 0.4; mD4 ({junior}) = 0.2;
mD1 ({junior, experienced}) = 0.2; mD2 ({junior, experienced}) = 0.4; . . . .
mD5 ({junior, experienced}) = 1; mD6 ({junior, experienced}) = 0.8; . . . .
mD6 ({experienced}) = 0.2; mD7 ({experienced}) = 0.4;
mD8 ({experienced}) = 0.6; mD9 ({experienced}) = 0.8;
mD10 ({experienced}) = 1; mD11 ({experienced}) = 0.8; . . . .
The appropriateness degrees for SAL and YRS are illustrated in Figure 1.
Figure 1. Appropriateness degrees for SAL and YRS.

Linguistic rules can be expressed by ALLP clauses as follows:

"Senior employees have good or very good salaries."

sal(x): good ∨ very good ← yrs(x): senior                                  (1)

"Experienced employees have moderate salaries."

sal(x): moderate ← yrs(x): experienced                                     (2)

"Junior employees have low salaries."

sal(x): low ← yrs(x): junior                                               (3)

"John has worked for about fifteen years."

yrs(john): experienced ∧ senior                                            (4)
4. DECLARATIVE SEMANTICS OF ALLPS
Let L be an ALLP language of discourse, AL be the set of all linguistic expressions of its annotation base, and ideal(AL) be the set of all ideals of the complete lattices forming AL.
4.1. General and Restricted Semantics
DEFINITION 4.1. The Herbrand object base of L, denoted by BL, is the set of all ground objects that can be formed out of the predicate, function, and constant symbols of L's first-order language.
DEFINITION 4.2. A general Herbrand interpretation (g-interpretation) is a mapping Ig : BL → ideal(AL ). A restricted Herbrand interpretation (r-interpretation)
is a mapping Ir : BL → AL . The mappings have to satisfy the object-annotation
conformity relation of L.
DEFINITION 4.3. The set of all g-interpretations forms a complete lattice with the
partial order denoted by “≤ι ” and defined as follows:
(i) ∀ I1 , I2 : I1 ≤ι I2 ⇔ ∀ Obj ∈ BL : I1 (Obj) ≤ι I2 (Obj).
(ii) The lub and glb of a set S of g-interpretations are defined by
∀Obj ∈ BL : lub(S)(Obj) = lubI ∈S {I (Obj)}
∀Obj ∈ BL : glb(S)(Obj) = glbI ∈S {I (Obj)}.
(iii) The greatest element and the least element are the g-interpretations that map every object
of BL to the greatest element and the least element, respectively, of its corresponding
lattice among those forming ideal(AL ). We denote the least element of g-interpretations
by IØ , which maps every object to the empty set Ø.

DEFINITION 4.4. The set of all r-interpretations forms a complete lattice with the
partial order denoted by “≤ι ” and defined as follows:
(i) ∀I1 , I2 : I1 ≤ι I2 ⇔ ∀ Obj ∈ BL : I1 (Obj) ≤ι I2 (Obj).
(ii) The lub and glb of a set S of r-interpretations are defined by
∀Obj ∈ BL : lub(S)(Obj) = lubI ∈S {I (Obj)}
∀Obj ∈ BL : glb(S)(Obj) = glbI ∈S {I (Obj)}.
(iii) The greatest element and the least element are the r-interpretations that map every object
of BL to the greatest element and the least element, respectively, of its corresponding
lattice among those forming AL . We denote the least element of r-interpretations by I⊥ ,
which maps every object to ⊥.

DEFINITION 4.5. Let I be a g-interpretation, ϕ be a linguistic expression, and Obj be a ground object. Then the general satisfaction relation ⊨ is defined as follows:
(i) I ⊨ Obj:ϕ iff ϕ ∈ I(Obj).
(ii) I ⊨ F1 & . . . & Fn iff I ⊨ Fi ∀i = 1..n.
(iii) I ⊨ (F ← F1 & . . . & Fn) iff I ⊨ F or I ⊭ F1 & . . . & Fn.

I is a g-model of a formula F iff I ⊨ F. I is a g-model of an ALLP P iff I ⊨ C for every clause C in P.

DEFINITION 4.6. Let I be an r-interpretation, ϕ be a linguistic expression, and Obj be a ground object. Then the restricted satisfaction relation ⊨ is defined as follows:
(i) I ⊨ Obj:ϕ iff ϕ ≤ι I(Obj).
(ii) I ⊨ F1 & . . . & Fn iff I ⊨ Fi ∀i = 1..n.
(iii) I ⊨ (F ← F1 & . . . & Fn) iff I ⊨ F or I ⊭ F1 & . . . & Fn.

I is an r-model of a formula F iff I ⊨ F. I is an r-model of an ALLP P iff I ⊨ C for every clause C in P.

A program Q is a logical consequence of a program P wrt the general semantics (or restricted semantics) iff, for every g-interpretation (or r-interpretation) I, if I ⊨ P then I ⊨ Q.
4.2. Interpretation Mappings
As for ALPs and AFLPs, each ALLP P is associated with two functions TP and
RP that, respectively, map g-interpretations to g-interpretations and r-interpretations
to r-interpretations.
DEFINITION 4.7. Let I be a g-interpretation and Obj ∈ BL. Then TP(I)(Obj) is defined to be the least ideal containing {τ(ϕ1, ϕ2, . . . , ϕn) | Obj:τ(ϕ1, ϕ2, . . . , ϕn) ← Obj1:ϕ1 & Obj2:ϕ2 & . . . & Objn:ϕn is a ground instance of a clause in P and I ⊨ Obj1:ϕ1 & Obj2:ϕ2 & . . . & Objn:ϕn}.

DEFINITION 4.8. Let I be an r-interpretation and Obj ∈ BL. Then RP(I)(Obj) is defined to be lub{τ(ϕ1, ϕ2, . . . , ϕn) | Obj:τ(ϕ1, ϕ2, . . . , ϕn) ← Obj1:ϕ1 & Obj2:ϕ2 & . . . & Objn:ϕn is a ground instance of a clause in P and I ⊨ Obj1:ϕ1 & Obj2:ϕ2 & . . . & Objn:ϕn}.
The following results are similar to those of classical and annotated logic
programs.8,9,13
PROPOSITION 4.9. TP and RP are monotonic, that is,
(i) If I1 ≤ι I2 then TP(I1) ≤ι TP(I2), where I1, I2 are g-interpretations.
(ii) If I1 ≤ι I2 then RP(I1) ≤ι RP(I2), where I1, I2 are r-interpretations.


PROPOSITION 4.10. Let P be an ALLP and Ig and Ir be a g-interpretation and an
r-interpretation, respectively. Then
(i) Ig is a g-model of P iff TP (Ig ) ≤ι Ig .
(ii) Ir is an r-model of P iff RP (Ir ) ≤ι Ir .

PROPOSITION 4.11. Let P be an ALLP. Then
(i) lfp(TP) = glb{I | TP(I) = I} = glb{I | TP(I) ≤ι I} = MgP, where lfp(TP) and MgP are the least fixpoint of TP and the least g-model of P, respectively.
(ii) lfp(RP) = glb{I | RP(I) = I} = glb{I | RP(I) ≤ι I} = MrP, where lfp(RP) and MrP are the least fixpoint of RP and the least r-model of P, respectively.

PROPOSITION 4.12. TP is continuous. That is, TP (lubi {Ii }) = lubi {TP (Ii )}, where
Ii s are g-interpretations.
As for ALPs,8 unlike TP , RP may not be continuous.
4.3. Fixpoint Semantics
DEFINITION 4.13. The upward iterations of TP and RP are defined as follows:
(i) TP ↑ 0 = I∅ (∀Obj ∈ BL: I∅(Obj) = ∅),
TP ↑ α = TP(TP ↑ (α − 1)) if α is a successor ordinal,
TP ↑ α = lub{TP ↑ β | β < α} if α is a limit ordinal.
(ii) RP ↑ 0 = I⊥ (∀Obj ∈ BL: I⊥(Obj) = ⊥),
RP ↑ α = RP(RP ↑ (α − 1)) if α is a successor ordinal,
RP ↑ α = lub{RP ↑ β | β < α} if α is a limit ordinal.

Example 4.14. Let P be the ALLP in Example 3.12. One has

            Rules    yrs(john)                 sal(john)
RP ↑ 0               ⊥                         ⊥
RP ↑ 1      (4)      experienced ∧ senior      ⊥
RP ↑ 2      (1)      experienced ∧ senior      good ∨ very good
RP ↑ 3      (2)      experienced ∧ senior      moderate ∧ (good ∨ very good)
RP ↑ ω               experienced ∧ senior      moderate ∧ (good ∨ very good)

PROPOSITION 4.15. Let P be an ALLP. Then
(i) TP ↑ ω = lfp(TP) = MgP.
(ii) P ⊨ Obj:ϕ iff ϕ ∈ TP ↑ ω(Obj) for any annotated object Obj:ϕ.
(iii) For any annotated object Obj:ϕ, if P ⊨ Obj:ϕ then there is an integer n such that ϕ ∈ TP ↑ n(Obj).

So, for the ALLP P in Example 3.12, P ⊨ sal(john): moderate ∧ (good ∨ very good). If needed, a real value of sal(john) can be derived as a defuzzified value of the fuzzy set corresponding to the linguistic expression moderate ∧ (good ∨ very good), as presented in Section 2.3.
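To make the fixpoint computation of Example 4.14 concrete, the following Python sketch iterates an RP-like operator on the ground program of Example 3.12, representing each annotation directly by its restricted label set λr, so that lub is set intersection and ⊥ is the whole focal set. The focal sets F_YRS and F_SAL are reconstructed from the mass assignments of Example 3.12 and, like all helper names here, are assumptions of this illustration rather than part of the implementation described in Section 6.

def S(*labels):                                   # shorthand for a focal element
    return frozenset(labels)

F_YRS = {S(), S("junior"), S("junior", "experienced"), S("experienced"),
         S("experienced", "senior"), S("senior")}
F_SAL = {S(), S("low"), S("low", "moderate"), S("moderate"),
         S("moderate", "good"), S("good"), S("good", "very good"), S("very good")}

def lam_r(label, focal):                          # lambda_r of a single label
    return frozenset(s for s in focal if label in s)

senior = lam_r("senior", F_YRS)
experienced = lam_r("experienced", F_YRS)
junior = lam_r("junior", F_YRS)
good_or_very_good = lam_r("good", F_SAL) | lam_r("very good", F_SAL)
moderate = lam_r("moderate", F_SAL)
low = lam_r("low", F_SAL)

# Ground clauses of Example 3.12: (head object, head annotation, body as (object, annotation) pairs).
program = [("sal(john)", good_or_very_good, [("yrs(john)", senior)]),              # clause (1)
           ("sal(john)", moderate,          [("yrs(john)", experienced)]),         # clause (2)
           ("sal(john)", low,               [("yrs(john)", junior)]),              # clause (3)
           ("yrs(john)", lam_r("experienced", F_YRS) & lam_r("senior", F_YRS), [])]  # fact (4)

bottom = {"yrs(john)": frozenset(F_YRS), "sal(john)": frozenset(F_SAL)}

def RP(I):
    """lub (= intersection of lambda_r sets) of the heads of all clauses whose bodies
    are satisfied: I satisfies Obj:phi iff lambda_r(I(Obj)) is a subset of lambda_r(phi)."""
    J = dict(bottom)
    for head, ann, body in program:
        if all(I[obj] <= phi for obj, phi in body):
            J[head] = J[head] & ann
    return J

I = dict(bottom)
while RP(I) != I:                                 # iterate to the least fixpoint
    I = RP(I)
print(I["yrs(john)"])   # the single focal element {experienced, senior}
print(I["sal(john)"])   # the single focal element {moderate, good}

The resulting least r-model annotates yrs(john) with experienced ∧ senior and sal(john) with the single focal element {moderate, good}, i.e., moderate ∧ (good ∨ very good), in agreement with RP ↑ ω above.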
5. PROCEDURAL SEMANTICS OF ALLPS
As for ALPs and AFLPs, in the rest of this paper, we assume the general
semantics so that finite proofs of logical consequences of ALLPs are guaranteed
to exist. Also, the SLD-style proof procedure for ALLPs, which is similar to SLDresolution for classical logic programs, selects reductants rather than clauses of an
ALLP in resolution steps and involves solving constraints on annotation terms.
5.1. Reductants and Constraints
DEFINITION 5.1. Let P be an ALLP and C1, C2, . . . , Cn be different clauses in P. Suppose that no pair Ci and Cj (i ≠ j) share common variables, and each Cr (1 ≤ r ≤ n) is of the form

Objr:ρr ← Objr1:ϕr1 & Objr2:ϕr2 & . . . & Objrmr:ϕrmr

Suppose further that Obj1, Obj2, . . . , Objn are unifiable via an mgu θ (most general unifier13) and ρ = lub{ρ1, . . . , ρn}. Then the clause

[Obj1:ρ ← Obj11:ϕ11 & Obj12:ϕ12 & . . . & Obj1m1:ϕ1m1 & Obj21:ϕ21 & Obj22:ϕ22 & . . . & Obj2m2:ϕ2m2 & . . . & Objn1:ϕn1 & Objn2:ϕn2 & . . . & Objnmn:ϕnmn]θ

is called a reductant of P.
PROPOSITION 5.2. Let P be an ALLP and C be a reductant of P. Then P ⊨ C.

PROPOSITION 5.3. Let P be an ALLP and Obj:ϕ be an annotated object. If P ⊨ Obj:ϕ, then there is a reductant of P having the form Obj:ρ ← Obj1:ϕ1, . . . , Objm:ϕm such that ϕ ≤ι ρ and P ⊨ Obj1:ϕ1, . . . , Objm:ϕm.
DEFINITION 5.4. An ALLP constraint is defined to be of the form: ρ1 ≤ι ϕ1 &
ρ2 ≤ι ϕ2 &. . . & ρm ≤ι ϕm , where for each i from 1 to m, ρi and ϕi are two
annotation terms on the same domain. The constraint is said to be normal iff, for
each i from 1 to m, ρi is a simple annotation term and if ρi contains a variable then
this variable does not occur in ϕ1 , ϕ2 ,. . . ,ϕi .
DEFINITION 5.5. A solution for an ALLP constraint C is a substitution ψ for
annotation variables in C such that every ground instance of Cψ holds. An ALLP
constraint is said to be solvable iff there is an algorithm to decide whether the
constraint has a solution or not, and to identify a solution if it exists.

PROPOSITION 5.6. Any normal ALLP constraint is solvable.

5.2. ALLP SLD-Style Proof Procedure
DEFINITION 5.7. An ALLP goal G is defined to be of the form QG || CG , where QG
is the query part which is of the form Obj1 : ϕ1 & . . . . & Objm : ϕm with ϕi s being
simple annotation terms, and CG is the constraint part, which is an ALLP constraint.
The goal is said to be normal iff CG is a normal ALLP constraint.

DEFINITION 5.8. Let P be an ALLP and G be an ALLP goal. An answer for G wrt
P is a pair <θ, ψ >, where θ is a substitution for object variables in G, and ψ is a
substitution for annotation variables in G. The answer is said to be correct iff ψ is
a solution for CG and every annotation variable-free instance of QG θψ is a logical
consequence of P .

DEFINITION 5.9. Let G be a goal O1 : ϕ1 &. . . & Oi−1 : ϕi−1 & Oi : ϕi & Oi+1 : ϕi+1
&. . . & Om : ϕm || CG and C be a reductant Obj: ρ ← Obj1 : ρ1 & Obj2 : ρ2 & . . . &
Objr : ρr of an ALLP (G and C have no variable in common). Suppose that Oi and
Obj are unifiable via an mgu θ. The corresponding resolvent of G and C is a new
ALLP goal, denoted by Rθ (G, C), and defined to be
[O1 : ϕ1 &. . . & Oi−1 : ϕi−1 & (Obj1 : ρ1 & Obj2 : ρ2 & . . . & Objr : ρr ) & Oi+1 :
ϕi+1 &. . . & Om : ϕm ]θ|| (ϕi ≤ι ρ) & CG .

When θ is required to be a unifier but not necessarily an mgu, Rθ (G, C) is
called an unrestricted resolvent.

PROPOSITION 5.10. Let G and C be, respectively a goal and a reductant of an ALLP.
If G is a normal ALLP goal, then any unrestricted resolvent of G and C is also a
normal ALLP goal.

DEFINITION 5.11. Let P be an ALLP and G be an ALLP goal. A refutation of G and P is a finite sequence

G <C1, θ1> G1 <C2, θ2> . . . Gn−1 <Cn, θn> Gn

such that
(i) For each i from 1 to n, Gi = Rθi(Gi−1, Ci), where G0 = G and Ci is a reductant of P, and
(ii) QGn is empty, and
(iii) CGn is solvable and has a solution.

When the θi s are required to be unifiers but not necessarily mgus, the refutation is called an unrestricted refutation.

PROPOSITION 5.12 (Soundness). Let P be an ALLP and G be an ALLP goal. If
G <C1, θ1> G1 <C2, θ2> . . . Gn−1 <Cn, θn> Gn is a refutation of G and

P and ψ is a solution for CGn , then <θ1 θ2 . . . θn , ψ> is a correct answer for G
wrt P .

PROPOSITION 5.13 (Mgu Lemma). Let P be an ALLP and G be an ALLP goal. If
there exists an unrestricted refutation of G and P, then there exists a refutation of G
and P .

PROPOSITION 5.14 (Lifting Lemma). Let P be an ALLP, G be a normal ALLP goal,
and < θ, ψ > be an answer for G wrt P. If there exists a refutation of Gθψ and P,
then there exists a refutation of G and P .

PROPOSITION 5.15 (Completeness). Let P be an ALLP and G be a normal ALLP
goal. If there exists a correct answer for G wrt P, then there exists a refutation of G
and P .

Example 5.16. Let P be the ALLP in Example 3.12. Some reductants of P are
(C1 ) sal(x): (good ∨ very good) ∧ moderate ← yrs(x): senior & yrs(x):
experienced
(C2 ) sal(x): good ∨ very good ← yrs(x): senior
(C3 ) sal(x): moderate ← yrs(x): experienced
(C4 ) yrs(john): experienced ∧ senior.
Goal: ?- sal(john): S. /* What is John's salary? */
G0 = G = sal(john): S.
C1 = sal(x): (good ∨ very good) ∧ moderate ← yrs(x): senior & yrs(x):
experienced
θ1 = {x/john}
G1 = yrs(john): senior & yrs(john): experienced || (S ≤ι (good ∨ very good)
∧ moderate)
C4 = yrs(john): experienced ∧ senior
θ2 = {}

G2 = yrs(john): experienced || (S ≤ι (good ∨ very good) ∧ moderate) &
(senior ≤ι experienced ∧ senior).
C4 = yrs(john) : experienced ∧ senior.
θ3 = {}.
G3 = (S ≤ι (good ∨ very good) ∧ moderate) & (senior ≤ι experienced ∧
senior) &
(experienced ≤ι experienced ∧ senior).
So G3 is satisfied with the least specific solution for S being (good ∨ very
good) ∧ moderate.
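The two ground constraints collected in G3 can be checked directly through λr inclusion, as in the brief sketch below (the focal set for YRS is the same assumption as in the sketch of Section 4):

F_YRS = [frozenset(s) for s in ({"junior"}, {"junior", "experienced"},
                                {"experienced"}, {"experienced", "senior"}, {"senior"})]

def lam_r(label):
    return {S for S in F_YRS if label in S}

exp_and_senior = lam_r("experienced") & lam_r("senior")   # lambda_r(experienced ∧ senior)

# phi <=_iota rho iff lambda_r(rho) is a subset of lambda_r(phi) (Definition 3.6)
print(exp_and_senior <= lam_r("senior"))       # True: senior <=_iota experienced ∧ senior
print(exp_and_senior <= lam_r("experienced"))  # True: experienced <=_iota experienced ∧ senior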
6. IMPLEMENTATION
6.1. XSB and Annotated Logic Programs
XSB15 is a research-oriented logic programming system developed by the XSB Group, Department of Computer Science, SUNY at Stony Brook (.sunysb.edu/∼sbprolog). XSB provides all the functionalities of Prolog. It also contains several features not usually found in logic programming systems, such as preprocessors and interpreters, HiLog, annotated logic programs, and so on.
ALPs8 have been implemented as a metainterpreter and integrated as a package in XSB. To use this package for reasoning, users must define upper semilattices with their partial order, least upper bound function, and least element. Here is an ALP example for the shortest path problem in XSB:
shortest_path(X,Y): [min, D1] <- shortest_path(X,Z): [min, D2], edge(Z,Y,D), D1 is D2 + D.
shortest_path(X,Y): [min, C] <- edge(X,Y,C).
edge(a, b, 1).  edge(b, b, 1).  edge(b, c, 1).
edge(b, d, 1).  edge(c, e, 2).  edge(d, e, 1).
join(min, A, B, Min) :- min(A, B, Min).  /* Definition for the least upper bound */
bottom(min, infinity).                   /* Definition for the least element */
gt1(min, A, B) :- min(A, B, A).          /* Definition for the partial order */
min(One, infinity, One) :- !.
min(infinity, Two, Two) :- !.
min(One, Two, Min) :- One > Two -> Min = Two ; Min = One.

Query: ?- gapmeta(shortest_path(a, e): [min, D])
The answer is D = 3, which is the length of the shortest path from node a to node e of the graph.
6.2. ALLP Implementation
We have implemented ALLPs in XSB as a metainterpreter by defining the linguistic lattices in Definition 3.6 as follows:
1. The least element ⊥ of Lattice:
bottom(Lattice, true).
2. The partial order ϕ ≤ι ρ iff λr(ρ) ⊆ λr(ϕ):
gt(Lattice, X, Y) :- lamdar(Lattice, X, SetX), lamdar(Lattice, Y, SetY), ord_subset(SetX, SetY).
3. The least upper bound lub(ϕ, ρ) = ϕ ∧ ρ:
lub(Lattice, X, Y, X ∧ Y).

Besides this, an ALLP contains
1. Declarations for label sets, la(Lattice, LabelSet), such as:
la(salary, [low, moderate, good, very good])
la(year, [junior, experienced, senior]).
2. Declaration for mass assignments ma(Lattice, LabelSubset, ElementOfUniverse, Value),
such as:
ma(salary, [low], 1, 1)
ma(salary, [low, moderate], 2, 0.5).
3. Declarations for conformity relations, cr(Object, Arity, Lattice), such as:
cr(sal, 1, salary).
cr(yrs, 1, year).

Example 6.1. The ALLP in Example 3.12 can be written in XSB as follows:
%% la/2 - Lattice declaration: la(Lattice, LabelSet).
la(salary, [low, moderate, good, very good]).
la(year, [junior, experienced, senior]).
%% ma/4 - Mass assignment declaration: ma(Lattice, LabelSubset,
ElementOfUniverse, Value).
ma(salary, [low], 0, 1).
ma(salary, [low], 1, 1).
ma(salary, [low, moderate], 2, 0.5).
ma(salary, [], 2, 0.5).
ma(salary, [moderate], 3, 1).
ma(salary, [moderate], 4, 1).
ma(salary, [moderate, good], 5, 0.5).
ma(salary, [], 5, 0.5) . . .
ma(year, [senior],35, 1).
ma(year, [senior],40, 1) . . . .
%% cr/3 - Conformity relations: cr(Object, Arity, Lattice).
cr(sal, 1, salary).
cr(yrs, 1, year).

%% Clauses
/* Senior project managers have good or very good salaries. */
sal(X): [good ∨ very good] <- yrs(X): [senior].
/* Experienced project managers have moderate salaries. */
sal(X): [moderate] <- yrs(X): [experienced].
/* John has worked as a project manager for about fifteen years. */
yrs(john): [experienced ∧ senior] <- true.
/* - - - - - - - - - - - - - - - - End of file testa.p - - - - - - - - - - - - - - - - */
Then, it can be executed in XSB as follows:
F:\code>xsb
XSB Version 2.6 (Duff) of June 24, 2003
?- [allp]. /* Load the meta-interpreter for ALLP */
[allp loaded]
ANNOTATED LINGUISTIC LOGIC PROGRAMS – ALLPs
By Nguyen Van Noi - 2003
?- [testa]. /* Load example file testa.p */
[testa loaded]
?- allp(sal(john): [S]). /* What is John's salary? */
S = moderate ∧ (good ∨ very good)
?- allpdfu(sal(john): [S]). /* Defuzzification of John’s salary */
S = moderate ∧ (good ∨ very good): 5

?- allp(yrs(john): [Y]). /* How many years has John worked? */
Y = experienced ∧ senior
?- allpdfu(yrs(john): [Y]). /* Defuzzification of John's working years */
Y = experienced ∧ senior: 15
?- halt.

7. CONCLUSION
The ALLP framework has been developed as a linguistic logic programming
formalism for development of automated reasoning systems dealing with linguistic expressions as soft data. The syntax of ALLPs has been introduced and their
declarative semantics based on Lawry’s label semantics has been studied. As in
classical and annotated logic programming, the fixpoint semantics of ALLPs is the
basis for defining and proving the completeness of the procedural semantics. The
ALLP SLD-style proof procedure has also been defined and proved to be sound
and complete wrt the declarative semantics of ALLPs. We have also implemented
ALLPs as a metainterpreter in XSB.
Future work for ALLPs is to consider the fuzzy matching problem, where a fact does not exactly match the body of a clause under the partial order relation, e.g., in the following ALLP with a linguistic rule and a linguistic fact:
/*Tall people have large shoe sizes. */
shoes(x): large ← height(x): tall.
/* John is rather tall. */
height(john): rather tall.
Since rather tall and tall do not fully match, the current ALLP semantics does not yield a result for John's shoe size. This semantic unification problem for ALLPs is among the topics that we are investigating.
References
1. Zilouchian A, Jamshidi M. Intelligent control systems using soft computing methodologies. Boca Raton, FL: CRC Press; 2001.
2. Zadeh LA. The concept of linguistic variable and its application to approximate reasoning. Inform Sci 1975;8:199–251 (part I), 301–357 (part II), 43–80 (part III).
3. Zadeh LA. Fuzzy logic = computing with words. IEEE Trans Fuzzy Syst 1996;4:103–111.
4. Lawry J. Modelling and reasoning with vague concepts. Studies in computational intelligence, Vol. 12. Berlin: Springer-Verlag; 2006.
5. Lawry J. A framework for linguistic modeling. Artificial Intell 2003;2036:1–39.
6. Virtanen HE. Linguistic logic programming. In: Martin TP, Fontana FA, editors. Logic programming and soft computing. Bristol, PA: Research Studies Press; 1998. pp 91–110.
7. Le HV, Liu F, Tran DK. Fuzzy linguistic logic programming and its applications. Theory Pract Logic Program 2009;9:309–341.
8. Kifer M, Subrahmanian VS. Theory of generalized annotated logic programming and its applications. J Logic Program 1992;12:335–367.
9. Cao TH. Annotated fuzzy logic programs. Fuzzy Sets Syst 2000;113:277–298.
10. Noi NV, Cao TH. Annotated linguistic logic programs for soft computing. In: Proc 2nd Int Conf on Vietnam & Francophone Informatics Research, 2004. pp 187–192.
11. Noi NV. Annotated linguistic logic programming. Master Thesis, Ho Chi Minh City University of Technology, Ho Chi Minh City, Vietnam, 2003.
12. Lawry J. A new calculus for linguistic prototypes in data analysis. In: Soft methods in probability, statistics, and data analysis. Berlin: Springer-Verlag; 2002. pp 116–125.
13. Lloyd JW. Foundations of logic programming. Berlin: Springer-Verlag; 1987.
14. Lawry J. Alternative interpretation of linguistic variables and computing with words. In: Proc 8th Int Conf on Information Processing and Management of Uncertainty, 2000. pp 1743–1750.
15. Swift T, Warren DS, Kifer M. The XSB system—Programmer's manual, version 2.5. 2003.
16. Baldwin JF, Lawry J, Martin TP. A mass assignment theory of the probability of fuzzy events. Fuzzy Sets Syst 1996;83:353–367.
