
(10) (a) Mary     NP           Mary
     (b) apples   NP           apples
     (c) left     S\NP         λx(left'(x))
     (d) ate      (S\NP)/NP    λxλy(ate'(x, y))
The categories of Mary and apples should be self-evident, but the
categories of the predicates left and ate are less transparent. The
category of an intransitive verb, S\NP, indicates that the predicate
represents a function that looks to the left (indicated by the backslash \)
for an NP, and results in a clause (S). (S\NP)/NP represents
a transitive verb that looks to the right (/) for an NP object. Satisfaction
of that part of the function outputs another function (S\NP), which
represents a verb phrase, which, like an intransitive verb, looks to the
left for an NP and results in an S. The material to the right of the
categories in (10) represents the semantics of the word. (10a) and (10b)
represent the entities Mary and apples respectively. Examples (10c)
and (10d) are functions (the lambdas (λ) indicate this; they mark these
forms as incomplete or unsaturated, requiring missing information
(x, y)). Application of left to some NP, such as Mary, will mean
that Mary left. Application of ate to two NPs indicates that the entity
characterized by the first NP ate the second.
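The lexicon in (10) can be sketched as Python data; the encoding here is my own, not the book's. Each word pairs a category string with a meaning, where the functional meanings are curried lambdas. Following the proofs later in the section, the object argument (y) is consumed first, so that verb plus object yields the VP meaning.

```python
# A hedged sketch of the lexicon in (10); all encoding choices are mine.
lexicon = {
    "Mary":   ("NP", "Mary"),
    "apples": ("NP", "apples"),
    "left":   ("S\\NP", lambda x: f"left'({x})"),
    # curried: the object (y) is supplied first, then the subject (x)
    "ate":    ("(S\\NP)/NP", lambda y: lambda x: f"ate'({x}, {y})"),
}

print(lexicon["left"][1]("Mary"))  # left'(Mary)
```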
The fact that the meaning of an expression such as Mary left is in fact
left'(Mary) can be shown by means of a proof. Such proofs use rules
such as the rules of forward and backward application shown in (11).
(11) (a) X/Y: f   Y: a   →   X: f(a)   forward application (>A)
     (b) Y: a   X\Y: f   →   X: f(a)   backward application (<A)
It should be noted that although these rules look like phrase structure
rules, they are not. They are rules of inference—that is, they are rules
that allow one to prove that the meaning of a complete expression can
be determined from the categorial combination of the words. They do


not create constituent structures, although these might be derived
from the proofs if necessary. Because of this, these proofs are more
like the derivations of early generative grammar and very unlike the
projection rules of current practice.
The rule in (11a) says that given a forward-slash category (X/Y),
which is a function f that is followed by a Y with a semantic value of a,
we can deduce a category X, with the semantic interpretation of the
function f applied to a (f(a)). Example (11b) is the same rule but
applying backwards, with the argument Y on the left.
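The rules in (11) can be sketched as executable functions. This is an illustrative encoding of my own, not the book's formalism: categories are plain strings like "S\NP", meanings are Python atoms or curried functions, and the helpers split a category at its outermost slash.

```python
# A minimal sketch of forward and backward application (11a, b).
# Category strings and helper names are assumptions for this example.

def split_cat(cat):
    """Split a complex category at its outermost slash.
    Returns (result, slash, argument), or None for an atomic category."""
    depth = 0
    for i in range(len(cat) - 1, -1, -1):   # scan right to left
        c = cat[i]
        if c == ")":
            depth += 1
        elif c == "(":
            depth -= 1
        elif depth == 0 and c in "/\\":
            return cat[:i], c, cat[i + 1:]
    return None

def strip_parens(cat):
    """Drop one pair of parentheses if they wrap the whole category."""
    if cat.startswith("(") and cat.endswith(")") and split_cat(cat) is None:
        return cat[1:-1]
    return cat

def forward(fn, arg):
    """(11a)  X/Y: f   Y: a   =>   X: f(a)"""
    res, slash, y = split_cat(strip_parens(fn[0]))
    assert slash == "/" and y == arg[0], "forward application does not apply"
    return (res, fn[1](arg[1]))

def backward(arg, fn):
    """(11b)  Y: a   X\\Y: f   =>   X: f(a)"""
    res, slash, y = split_cat(strip_parens(fn[0]))
    assert slash == "\\" and y == arg[0], "backward application does not apply"
    return (res, fn[1](arg[1]))

# Mary left, by backward application:
mary = ("NP", "Mary")
left = ("S\\NP", lambda x: f"left'({x})")
print(backward(mary, left))  # ('S', "left'(Mary)")
```

Note that the rules only check that the argument category matches; the semantic side is ordinary function application, which is exactly the point the text makes about rules of inference.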
180 controversies
To see how this works, consider the proofs in (12) and (13). In (12),
on the top line we have the words in the sentence Mary left. We want
to prove that the categories and their semantic values add up to a
sentence with the semantic value of the (backward) application of the
open function represented by the predicate to the argument. The
application of this rule is indicated by the underscore followed by
the <A symbol. This rule, in essence, takes the NP on the left and
substitutes it for the NP in the S\NP category, canceling out the NPs,
resulting in category S. The entity Mary is substituted in for the x in
the formula for the open function represented by the predicate,
canceling out the lambda, and resulting in the semantic value
left'(Mary).
(12) Mary        left
     NP: Mary    S\NP: λx(left'(x))
     ------------------------------ <A
     S: left'(Mary)
Example (13) shows a more complicated proof, involving a transitive verb.7
(13) Mary        ate                           apples
     NP: Mary    (S\NP)/NP: λxλy(ate'(x, y))   NP: apples
                 ---------------------------------------- >A
                 S\NP: λx(ate'(x, apples))
     ---------------------------------------- <A
     S: ate'(Mary, apples)
The individual words and their contributions are on the first line of
(13). Using the rule of forward application, we cancel out the outermost
NP and substitute its denotation in for the variable y, which
results in the open function representing the traditional VP. The next
underscore indicates the application of backward application to the
preceding NP, substituting its denotation in for the variable x. The
resulting line indicates that these words can compose to form a
sentence meaning that Mary ate apples.
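The two steps of proof (13) can be replayed directly with curried lambdas. The encoding is mine, and, per footnote 7, the object is wired to y and the subject to x so the intermediate result is the VP meaning.

```python
# A sketch of proof (13): forward application to the object, then
# backward application to the subject. Names and encoding are illustrative.

mary   = ("NP", "Mary")
apples = ("NP", "apples")
ate    = ("(S\\NP)/NP", lambda y: lambda x: f"ate'({x}, {y})")

# >A: (S\NP)/NP : λxλy(ate'(x, y))  NP : apples  =>  S\NP : λx(ate'(x, apples))
vp_cat, vp_sem = "S\\NP", ate[1](apples[1])

# <A: NP : Mary   S\NP : λx(ate'(x, apples))  =>  S : ate'(Mary, apples)
s_cat, s_sem = "S", vp_sem(mary[1])

print(s_cat, s_sem)  # S ate'(Mary, apples)
```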
7 I've slightly simplified how the semantic structure is calculated here for expository
purposes. In particular, I've ignored the procedures for ensuring that the object NP is
substituted in for the y variable and the subject NP for the x variable.
dependency and constituency 181
Like the derivations of early phrase structure grammars, it is a
relatively trivial matter to translate these proofs into trees. If we take
(13) as an example, we need simply turn the derivation upside down, so
that the last line represents the root node, connected by branches to
each of the elements involved in forming it, and do the same for
the middle and top lines:
(14) S: ate'(Mary, apples)
     ├── NP: Mary
     └── S\NP: λx(ate'(x, apples))
         ├── (S\NP)/NP: λxλy(ate'(x, y))
         └── NP: apples
Such trees are common, for example, in the Montague Grammar variant
of categorial grammar, but they are also present in the work of
type-logical semanticists who work in parallel with generative syntacticians
(see for example the semantic system described in Heim and Kratzer
1997) and in the proofs and representations in HPSG. However, it should
be noted that ontologically speaking the proofs (and any resultant trees)
are not a constituency representation per se in traditional categorial
grammar. The reason for this lies in the fact that the rules of inference
in this system include rules that would create a structure that does not
correspond to our usual understanding of the clause. For example, we
have the rule of swapping (associativity), which allows the system to
combine a transitive verb with the subject before the object:
(15) (X\Y)/Z: λv_z λv_x [f(v_x, v_z)]  →  (X/Z)\Y: λv_z λv_x [f(v_x, v_z)]
This rule takes a predicate that looks first rightwards for an object and
then leftwards for a subject, and turns it into a predicate that looks
leftwards first for a subject, then rightwards for the object. This rule
creates a ''structural'' ambiguity between the proofs, but because the
semantics remain the same on each side of the rule in (15), this does not
correspond to a semantic ambiguity. After swapping, a valid proof for
our sentence would be:
(16) Mary        ate                           apples
     NP: Mary    (S/NP)\NP: λxλy(ate'(x, y))   NP: apples
     --------------------------------------- <A
     S/NP: λy(ate'(Mary, y))
     --------------------------------------- >A
     S: ate'(Mary, apples)
This corresponds to the tree in (17):
(17) S: ate'(Mary, apples)
     ├── S/NP: λy(ate'(Mary, y))
     │   ├── NP: Mary
     │   └── (S/NP)\NP: λxλy(ate'(x, y))
     └── NP: apples
The reason for positing such rules as swapping is that they allow
accounts of, for example, non-constituent conjunction such as the
right-node raising example in (18).
(18) John loves and Mary hates semantics.
With swapped functions, we can combine loves with John, and hates
with Mary, first, conjoin them into a composite function, and then satisfy
the requirement that the verbs have an object second. Such sentences
are largely mysterious under a phrase structure analysis.8 On the other
hand, without additional mechanisms, subject–object asymmetries are
unexplained; nor can non-constituent conjunction be explained in
languages that put both arguments on the same side of the verb (VOS,
SOV), since there is no obvious way to compose the verb with the subject
before the object in such languages. The net effect of these rules is that
the proofs cannot be equated straightforwardly with constituency
diagrams.
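The swapping-based account of (18) can be sketched in a few lines. The encoding is mine: swap flips the category as in (15) and, because the meanings here are curried, also flips the order in which the arguments are consumed, leaving the resulting predicate-argument formula unchanged.

```python
# A sketch of swapping (15) feeding the right-node-raising derivation (18):
# each swapped verb combines with its subject first, the two S/NP results
# are conjoined, and the shared object is supplied last. Encoding is mine.

def swap(verb):
    """(X\\Y)/Z => (X/Z)\\Y, semantics unchanged; handles only (S\\NP)/NP here.
    The curried order flips so the subject argument is consumed first."""
    cat, sem = verb
    assert cat == "(S\\NP)/NP", "sketch only handles transitive verbs"
    return ("(S/NP)\\NP", lambda x: lambda y: sem(y)(x))

def conjoin(a, b):
    """Coordinate two like-category functions into one composite function."""
    assert a[0] == b[0] == "S/NP"
    return ("S/NP", lambda y: f"and'({a[1](y)}, {b[1](y)})")

loves = ("(S\\NP)/NP", lambda y: lambda x: f"loves'({x}, {y})")
hates = ("(S\\NP)/NP", lambda y: lambda x: f"hates'({x}, {y})")

john_loves = ("S/NP", swap(loves)[1]("John"))   # <A with the subject
mary_hates = ("S/NP", swap(hates)[1]("Mary"))
both = conjoin(john_loves, mary_hates)
s = ("S", both[1]("semantics"))                 # >A with the shared object
print(s[1])  # and'(loves'(John, semantics), hates'(Mary, semantics))
```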
There are a number of notational variants and versions of categorial
grammar. They differ in what rules of inference are allowed in the
system and in the meaning of the ‘‘slashed’’ categories and whether
these categories should be given in terms of traditional syntactic
categories or in terms of semantic types—for instance, <e> is an entity
(an NP), <t> is a truth value (an S), <e,t> is a function from an
entity to a truth value (an intransitive verb or a verb phrase). For a
relatively complete catalog of the various notations, see Wood’s (1993)
textbook.
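The semantic-type alternative just mentioned can be illustrated with a tiny type checker (the tuple encoding is my own): <e> for entities (NPs), <t> for truth values (S), and a pair for a functional type, so ("e", "t") stands for <e,t>.

```python
# A sketch of type-driven composition: a function of type <a,b> applied
# to an argument of type a yields b. Encoding choices are assumptions.

def apply_type(fn_type, arg_type):
    """Return the output type of applying <a,b> to a, or None on mismatch."""
    if isinstance(fn_type, tuple) and fn_type[0] == arg_type:
        return fn_type[1]
    return None

E, T = "e", "t"
print(apply_type((E, T), E))        # intransitive verb + subject -> t
print(apply_type((E, (E, T)), E))   # transitive verb + object -> <e,t>
```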
9.4.2 Tree-Adjoining Grammar (TAG)
Somewhat ironically, one of the theories that belongs in this chapter
about theories without constituent trees is one that is based on the idea
that parts of trees are themselves primitives. This is a particular variant
on categorial grammar known as Tree-Adjoining Grammar (TAG)
8 See, however, the discussion in Phillips (2003).
(Joshi 1985; for a more detailed description see Joshi and Schabes
(1996) or the introductory chapter in Abeillé and Rambow (2000)). In
the TAG formalism, the categories come along with trees (treelets),
which include information about the co-occurrence restrictions placed
on words. For example, the CCG category (S\NP)/NP is represented as
the lexical entry in (19b). The ↓ arrow means that the category to its left
is required.

(19) (a) NP
         └── N
             └── Mary
     (b) S
         ├── NP↓
         └── VP
             ├── V
             │   └── eats
             └── NP↓
     (c) NP
         └── N
             └── apples
The NP in (19a) substitutes for the first NP↓ in (19b) and the NP in (19c)
substitutes for the lower one, resulting in the familiar:
(20) S
     ├── NP
     │   └── N
     │       └── Mary
     └── VP
         ├── V
         │   └── eats
         └── NP
             └── N
                 └── apples
This is the operation of substitution. TAG also has an adjunction
operation that targets the middle of trees. Substitution and adjunction
are both generalized transformations (see Chapters 6 and 8).
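Substitution is easy to sketch in code. The data encoding below is my own, not TAG's official formalism: trees are (label, children) pairs, and a label ending in ↓ marks a substitution site. Substituting the treelets for Mary and apples into the treelet for eats yields tree (20).

```python
# A sketch of TAG substitution; tree encoding is an assumption for this example.

mary_tree   = ("NP", [("N", [("Mary", [])])])
apples_tree = ("NP", [("N", [("apples", [])])])
eats_tree   = ("S", [("NP↓", []), ("VP", [("V", [("eats", [])]), ("NP↓", [])])])

def substitute(tree, filler):
    """Replace the leftmost matching substitution site with the filler treelet.
    Returns (new_tree, whether a substitution happened)."""
    label, kids = tree
    if label == filler[0] + "↓" and not kids:
        return filler, True
    new_kids, done = [], False
    for kid in kids:
        if not done:
            kid, done = substitute(kid, filler)
        new_kids.append(kid)
    return (label, new_kids), done

t, _ = substitute(eats_tree, mary_tree)    # fills the subject NP↓
t, _ = substitute(t, apples_tree)          # fills the object NP↓
print(t)
```

Adjunction would need a further operation that splits a tree at an internal node; it is omitted here to keep the sketch small.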
Since these tree structures are lexicalized (that is, they represent
constituency through a structure that is specified lexically), they
provide an interesting hybrid between a categorial grammar9 and a
constituency-based grammar. They are also a step in the direction of a
construction grammar: the topic of section 9.5. First, however, we
examine the feature-based version of categorial grammar found in
HPSG, which also has (under at least one conception) constructional
properties.
9 As an aside, a second kind of tree structure can be found in TAG. This is the derivation
tree, which indicates which treelets were attached to other treelets by the combinatorial
principles. Since the properties of TAG are head-oriented, it is not surprising that these
derivation trees look very much like stemmas. See Abeillé and Rambow (2000) for more
discussion.