
7. Labels and Uniqueness

This hierarchical analysis of link-types solves another problem. One of the characteristics
of a network is that the nodes are defined only by their links to other
nodes; for instance, the word CAT is the only word that is linked to the concept
Cat, to the pronunciation /kæt/, to the word class Noun, and so on. No two nodes
have exactly the same links to exactly the same range of other nodes, because if they
did, they would by definition be the same node. As Lamb (1966, 1999: 59) points
out, one consequence of this principle is that the labels on the nodes are entirely
redundant, in contrast with non-network approaches, in which labels are the only
way to show identity. For example, if two rules both apply to the same word class,
this is shown by naming this word class in both rules; as we all know, the name
chosen does not matter, but it is important to use the same name in both rules. In a
network, on the other hand, labels only serve as mnemonics to help the analyst, and
they could (in principle) all be removed without loss of information.
If we follow the logic of this argument by removing labels from nodes, we face a
problem because the labels on links appear to carry information which is not redundant,
because the links are not distinguished in any other way. This leads to a
paradoxical situation in which the elements which traditionally are always labeled
need not be, but those which (at least in simple associative networks) are traditionally
not labeled must be labeled. The hierarchical classification of links resolves
this paradox by giving links just the same status as nodes, so that they too can be
distinguished by their relationships to other links—that is, by their place in the
overall classification of links. By definition, every distinct link must have a unique set
of properties, so a link’s properties are always sufficient to identify it, and labels are
redundant. In principle, therefore, we could remove their labels too, converting a
labeled diagram such as the simple syntactic structure in figure 19.5a into the
unlabeled one in figure 19.5b. (The direction of the arrows in 19.5b is arbitrary and
does not indicate word order, but the two intermediate arrows in this diagram must
carry distinct features, such as word order, to make each one unique.)

Figure 19.5. Distinguishing relationships with and without the use of labels
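
To make the redundancy of node labels concrete, here is a minimal Python sketch (my own illustration, not a Word Grammar formalism): every node is a bare object whose identity is just its set of links, and the mnemonic string can be discarded without losing any information.

    # Toy network: nodes are bare objects; labels are mere mnemonics.
    class Node:
        def __init__(self, mnemonic=None):
            self.mnemonic = mnemonic  # analyst-facing only, never used for identity
            self.links = {}           # link-type -> target Node

    def signature(node):
        # A node's identity is its set of (link-type, target) pairs; two
        # nodes with identical signatures would by definition be one node.
        return frozenset((rel, id(t)) for rel, t in node.links.items())

    cat_concept = Node('Cat')
    noun = Node('Noun')
    sound = Node('/kaet/')
    cat_word = Node('CAT')
    cat_word.links = {'sense': cat_concept, 'word class': noun,
                      'pronunciation': sound}

    cat_word.mnemonic = None              # remove the label ...
    assert len(signature(cat_word)) == 3  # ... identity survives in the links

Note that the link-type strings (‘sense’, ‘word class’) still do real work in this toy; the hierarchical classification of links described above is what would, in principle, allow them to be removed as well.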
8. ‘‘The Lexicon,’’ ‘‘the Grammar,’’
and Constructions

As in other Cognitive Linguistics theories, there is no distinction in Word Grammar
between the lexicon and the grammar. Instead, the ‘‘isa’’ hierarchy of words
covers the full range of concepts and facts from the most general facts to the most
specific, with no natural break in the hierarchy between ‘‘general’’ and ‘‘specific.’’ As
we have just seen, the same is true of dependency relationships, where the specific
dependencies found in individual sentences are at the bottom of the hierarchy
headed by the very general relationship ‘dependent’. There is no basis, therefore,
for distinguishing the lexicon from the grammar in terms of levels of generality,
because generality varies continuously throughout the hierarchy.
Take the sentence He likes her, diagramed above in figure 19.5a. One require-
ment for any grammar is to predict that the verb likes needs both a subject and an
object, but the rules concerned vary greatly in terms of generality. The subject is
needed because likes is a verb, whereas the object is needed because it is a form of the
lexeme LIKE; in traditional terms, one dependency is explained by ‘‘the grammar’’
while the other is a ‘‘lexical’’ fact, so different mechanisms are involved. In Word
Grammar, however, the only difference is in the generality of the ‘‘source’’ concept.
Figure 19.6 shows how likes inherits its two dependencies from the network.

Figure 19.6. Where the subject and object of likes come from
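
The inheritance just described can be sketched in a few lines of Python (the dictionary encoding of the ‘‘isa’’ chain is my own, purely illustrative): the subject requirement is stored on Verb, the object requirement on the lexeme LIKE, and the token likes inherits both.

    # Assumed encoding: each concept records what it 'isa' and which
    # dependencies it contributes; a token inherits along the isa chain.
    GRAMMAR = {
        'Verb':  {'isa': None,   'needs': ['subject']},  # very general fact
        'LIKE':  {'isa': 'Verb', 'needs': ['object']},   # lexeme-level fact
        'likes': {'isa': 'LIKE', 'needs': []},           # token in He likes her
    }

    def inherited_dependencies(concept):
        # Walk up the isa chain, collecting every requirement on the way.
        needs = []
        while concept is not None:
            needs = GRAMMAR[concept]['needs'] + needs  # general facts first
            concept = GRAMMAR[concept]['isa']
        return needs

    assert inherited_dependencies('likes') == ['subject', 'object']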
This approach to lexicogrammar solves the problem of what may be called
‘‘special constructions,’’ syntactic patterns which are fully productive but which do
not fit any of the ‘‘ordinary’’ patterns and are tied to specific lexical items (Hudson
1984: 4; Holmes and Hudson 2005). For example, the preposition WITH can be
used as the root of a sentence provided that it is preceded by a direction expression
and followed by a noun phrase.
(1) Down with the government!

(2) Off with his head!
(3) Away with you!
(4) Into the basket with the dirty washing!
This construction is not generated by the rules for normal sentences, but a
grammar/lexicon split forces an arbitrary choice between a ‘‘grammatical’’ and a
‘‘lexical’’ analysis. In Word Grammar, there is no such boundary and no problem.
The subnetwork in figure 19.7 provides the basis of an analysis.

Figure 19.7. A network for the X WITH Y construction
In words, what figure 19.7 says is that WITH-special (this special case of the
lexeme WITH) means ‘Do something to make Y go to X’, where Y is the meaning
(referent) of the object noun, and X is that of the ‘‘extracted’’ (front shifted) word.
(The relation ‘‘er’’ in the semantics stands for ‘‘go-er.’’) Given the ordinary grammar
for noun phrases and directionals, this pattern is sufficient to generate the
examples in (1)–(4), but some parts of the pattern could be omitted on the grounds
that they can be inherited from higher nodes which are partly ‘‘grammatical’’ (e.g.,
the word classes permitted as object) and partly ‘‘lexical’’ (e.g., the fact that WITH
has an obligatory object).
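
The following sketch (the representation and the crude recognizer are mine, for illustration only) shows how such a construction can live in the same network as ordinary lexical entries: WITH-special is simply an entry with unusually specific syntactic and semantic links.

    # WITH-special as just another entry in the unified lexicogrammar.
    WITH_SPECIAL = {
        'isa': 'WITH',                       # a special case of the lexeme WITH
        'extractee': 'direction expression', # Down, Off, Away, Into the basket
        'object': 'noun phrase',             # the government, his head, you, ...
        'sense': 'Do something to make Y go to X',
    }

    def analyse(sentence):
        # Crude recognizer: split 'X with Y!' into its two dependents and
        # plug their referents into the construction's sense.
        x, _, y = sentence.rstrip('!').partition(' with ')
        return WITH_SPECIAL['sense'].replace('X', x).replace('Y', y)

    print(analyse('Off with his head!'))
    # Do something to make his head go to Off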
9. Morphology

The Word Grammar treatment of morphology separates two distinct analyses:
a. The analysis of word structure in terms of morphemes or phonological
patterns
b. The linkage between word structure and lexeme or word class
For example, the recognition of a suffix in dogs is separate from the recognition that
dogs is plural. The suffix and the plurality are clearly distinct—one is a morpheme,
that is, a word-part, while the other is a word class, and either can exist without the
other (as in the plural geese or the singular news). In this sense, therefore, Word
Grammar morphology belongs firmly within the ‘‘Word-and-Paradigm’’ tradition
in which a word’s internal structure is distinguished sharply from its morphosyntactic
features (Robins [1959] 2001).
The theory of morphology raises a fundamental question about the architecture
of language: how many ‘‘levels of analysis’’ are there? This actually breaks down
into two separate questions:
a. Is there a ‘‘syntactic’’ level, at which we recognize words?
b. Is there a ‘‘morphological’’ level, at which we recognize morphemes?
Word Grammar recognizes both of these levels (Creider and Hudson 1999), so the
relation between semantic and phonological structure is quite indirect: meanings
map to words, words to morphemes, and morphemes to phonemes (or whatever
phonological patterns there are). There is a range of evidence for this view:
a. Words and morphemes are classified differently from each other and
from the meanings they signal—meanings may be things or people,
words may be verbs or nouns, and morphemes may be roots or affixes;
and morphological ‘‘declension classes’’ are distinct from morphosyntactic classes.
b. Morphological patterns are different from those of syntax; for example,
there is no syntactic equivalent of semitic interdigitation (whereby the
plural of Arabic kitaab ‘book’ is kutub), nor is there a morphological
equivalent of coordination or extraction; and many languages have free
word order, but none have free morpheme order.
c. The units of morphology need not match those of syntax; for example,
French syntax recognizes the two-word combination de le ‘of the’, which
corresponds to a single morphological unit du (see figure 19.11 below).
This position is quite controversial within linguistics in general and within
Cognitive Linguistics in particular. Cognitive Grammar at least appears to deny the
level of syntax, since its symbolic units are merely a pairing of a semantic pole with
a phonological pole (Langacker 2000: 5), so they cannot be independently categorized
(e.g., in terms of nonsemantic word classes). But even if the symbolic units
do define a level of syntax, there is certainly no independent level of morphology,
‘‘a basic claim of Cognitive Grammar, namely, that morphology and syntax form a
continuum (fully describable as assemblies of symbolic structures)’’ (25). In other
words, in contrast with the Word-and-Paradigm model, morphology is merely
syntax within the word.
In Word Grammar, then, the word is linked to its phonological realization
only via the morpheme, just as it is linked to the semantic structure only via the
single concept that acts as its sense. The pattern for the word cat (or more precisely,
the lexeme CAT) is shown in figure 19.8. We follow a fairly traditional notation for
distinguishing levels: Cat is the concept of the typical cat, CAT is the lexeme, {cat}
is the morpheme, and /kæt/ are the phonemes.

Figure 19.8. The word cat analyzed on four linguistic levels
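
The indirectness of the word-to-sound mapping can be sketched as a chain of lookups, one per level; the dictionaries below are an assumed stand-in for the network links of figure 19.8.

    sense       = {'CAT': 'Cat'}       # word -> concept
    realization = {'CAT': '{cat}'}     # word -> morpheme
    phonology   = {'{cat}': '/kaet/'}  # morpheme -> phonemes

    def pronounce(lexeme):
        # Meaning never maps straight onto sound: the route runs
        # word -> morpheme -> phonemes.
        return phonology[realization[lexeme]]

    assert pronounce('CAT') == '/kaet/'
    assert sense['CAT'] == 'Cat'   # the semantic link is entirely separate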
Morphologically complex words map onto more than one morpheme at a
time, so we need to recognize a complex unit at the level of morphology, the ‘‘word
form’’ (or ‘‘morphological word’’—Rosta 1997). For example, the word form
{{cat} + {s}} realizes the word CAT:plural (the plural of CAT), which isa both CAT
and another word type, Plural. These two word types contribute, respectively, its
stem and its suffix, so in all, there are three links between CAT:plural and its
morphology:
a. The ‘stem’ link to the stem morpheme, inherited from CAT
b. The ‘suffix’ link to the suffix morpheme, inherited from Plural
c. The ‘word form’ link to the entire word form, also inherited from Plural
These three links are called ‘‘morphological functions’’—functions from a word to
specific parts of its morphological structure (Hudson 2000c). (A slightly different
theory of this part of morphology is presented in Hudson 2007; the main difference
is the use of a ‘variant’ relation between forms instead of the ‘suffix’ relation from
word to morpheme.)
Irregular forms can be accommodated easily thanks to default inheritance, as
shown in figure 19.9. The top part of this figure (above the dotted line) represents
stored information, while the bottom part is information that can be inferred by
default inheritance. In words, a plural noun has a word form (‘‘wf’’ in the diagram)
whose first and second parts are its stem and its suffix (which is {s}). The stem of CAT
is {cat}, so the word form of CAT:plural consists of {cat} followed by {s}. Exceptionally,
the word form of PERSON:plural is stipulated as {people}; by default inheritance,
this stipulation overrides the default. As in other areas of knowledge, we
probably store some regularly inheritable information such as the plural of some
very frequent nouns as well as the unpredictable irregular ones (Bybee 1995).

Figure 19.9. Inflectional morphology for English regular and irregular plural nouns
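
A short sketch of how default inheritance yields both cats and people (the string representation of word forms is assumed; cf. the morphological functions of Hudson 2000c):

    STEM = {'CAT': '{cat}', 'PERSON': '{person}'}  # stored on each lexeme
    IRREGULAR_WF = {'PERSON': '{people}'}          # stored exceptions only

    def word_form(lexeme):
        # Default: wf = stem + suffix {s}; a stipulated irregular word
        # form overrides the default, exactly as in figure 19.9.
        if lexeme in IRREGULAR_WF:
            return IRREGULAR_WF[lexeme]
        return '{' + STEM[lexeme] + ' + {s}}'

    assert word_form('CAT') == '{{cat} + {s}}'
    assert word_form('PERSON') == '{people}'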
Derivational morphology uses the same combination of morphemes and morphological
functions, but in this case, the morphology signals a relationship between
two distinct lexemes, rather than between a lexeme and an inflectional category. For
example, take the agentive pattern in SPEAK-SPEAKER. The relationship between
these two lexemes exemplifies a more general relationship between verbs and nouns.
The relevant part of the grammar, figure 19.10, shows how lexemes which are related
in this way differ in terms of meaning, syntax, and morphology. In words, a verb
typically has an ‘‘agentive’’ (see also the sketch after this list):
a. which isa Noun,
b. whose stem consists of the verb’s stem combined with the {er} suffix, and
c. whose sense is a person who is the agent (‘er’) of an event which isa the
verb’s sense.

Figure 19.10. Derivational morphology for agentive nouns
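
A rough sketch of the pattern in (a)–(c), again with an assumed string representation for stems and senses:

    def agentive(verb_stem, verb_sense):
        # Build the related noun from a verb, following (a)-(c) above.
        return {
            'isa': 'Noun',                                      # (a)
            'stem': '{' + verb_stem + ' + {er}}',               # (b)
            'sense': "person who is the 'er' of " + verb_sense, # (c)
        }

    print(agentive('{speak}', 'Speaking'))
    # {'isa': 'Noun', 'stem': '{{speak} + {er}}',
    #  'sense': "person who is the 'er' of Speaking"}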
One of the benefits mentioned earlier of the distinction between words and their
morphological realization is the possibility of gross mismatches between them, as
discussed extensively in Sadock (1991). Figure 19.11 illustrates the familiar case from
French of du, a single morpheme which realizes two words:
a. the preposition de ‘of’ and
b. the masculine definite article which is written le.
For example, alongside de la fille ‘of the daughter’, we find du fils ‘of the son’, rather
than the expected *de le fils. One of the challenges of this construction is the
interaction between morphology and phonology, since du is not used when le is
reduced to l’ before a vowel: de l’oncle ‘of the uncle’. The analysis in figure 19.11
meets this challenge by distinguishing the ‘full stem’ and the ‘short stem’ and
applying the merger with {de} only to the former. Some other rule will prevent the
full stem from occurring before vowels, thereby explaining the facts just outlined.
The analysis also ensures that de le only collapses to du when the le depends directly
on the de, as it would in du fils.
Figure 19.11. Why French de le is realized as du
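
The merger can be sketched as a conditional realization rule (illustrative only; choosing the short stem before vowels is left to the ‘‘other rule’’ mentioned above):

    def realize(prep_stem, article_stem, article_depends_on_prep):
        # de + le collapses to du only for the article's full stem {le},
        # and only when le depends directly on de.
        if (prep_stem == '{de}' and article_stem == '{le}'
                and article_depends_on_prep):
            return '{du}'
        return prep_stem + ' ' + article_stem

    print(realize('{de}', '{le}', True))   # {du}       du fils
    print(realize('{de}', "{l'}", True))   # {de} {l'}  de l'oncle
    print(realize('{de}', '{le}', False))  # {de} {le}  le not dependent on de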
10. Syntax

Syntax is the area in which Word Grammar has been applied most thoroughly
(Hudson 1984, 1990, 1998, 1999, 2000a, 2000b, 2003a, 2003b, 2007), so the following
discussion can be quite brief.
The most distinctive characteristic of syntax in Word Grammar is that it assumes
dependency structure rather than phrase structure. The syntactic structure
of a sentence is a network of nodes, in which there is a node for each word but no
nodes for phrases; and the nodes are all connected by syntactic dependencies. For
example, in the sentence Small babies cry, the noun babies depends on the verb
cry, and the adjective small depends on babies, but there is no node for the noun
phrase which consists of babies plus its dependent. It would be easy to add phrase
nodes, because they can be read unambiguously off the dependencies, but there is
no point in doing so because they would be entirely redundant. This way of viewing
sentence structure exclusively in terms of word-word dependencies has a long
history which goes back through the medieval European and Arabic grammarians
to classical Greece, but it has recently been overshadowed by the phrase-structure
approach (Percival 1976, 1990).
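
A minimal sketch of such a structure for Small babies cry (the label ‘adjunct’ for small is my assumption; the text says only that small depends on babies). The phrase function shows why stored phrase nodes would be redundant: a phrase can be computed from the dependencies whenever it is wanted.

    sentence = ['Small', 'babies', 'cry']
    dependencies = [
        ('babies', 'subject', 'cry'),    # babies depends on the verb
        ('Small', 'adjunct', 'babies'),  # small depends on babies
    ]

    def phrase(head):
        # A word's phrase is the word plus all of its subordinates,
        # read off the dependencies on demand.
        words = [head]
        for w in words:                  # grows as dependents are found
            words += [d for d, _, parent in dependencies if parent == w]
        return sorted(words, key=sentence.index)

    assert phrase('cry') == ['Small', 'babies', 'cry']
    assert phrase('babies') == ['Small', 'babies']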
One of the advantages of the dependency approach is that grammatical functions
such as ‘subject’ and ‘object’ are subdivisions of ‘dependent’. Since relationships
are classified in just the same way as nodes, a typical dependency inherits by
default from a number of higher-level dependencies; for example, in the sentence
He likes her, the relation between likes and her inherits from ‘object’,
‘complement’, ‘valent’, ‘post-dependent’, and ‘dependent’, each of which brings
with it inheritable characteristics. (These relations are defined by the hierarchy
shown in figure 19.4.)
Syntactic structure is primarily linear, so it is important to be able to indicate
word order. A network has no inherent directionality, so linear order is shown as a
separate relationship between earlier and later, by means of an arrow which points
toward the earlier member of the pair; this arrow is labeled ‘‘<’’. (The linear order
relationship has many other applications besides word order—it orders points of
time, so within language it is also used in the semantics of tense to relate the time of
the verb’s referent to the deictic time of its utterance.) Like any other relationships,
they can be overridden in the inheritance hierarchy, so it is easy to model the idea
of a ‘‘basic’’ and ‘‘special’’ word order. For example, in English (a head-initial language)
the basic word order puts words before their dependents, but enough dependents
precede their heads to justify a general subtype ‘pre-dependent’, of which
‘subject’ is a subtype; so, exceptionally, a verb follows its subject. However, there
are also exceptions to this exception: an ‘‘inverting’’ auxiliary verb reverts to the
position before its subject.
This hierarchy is shown in figure 19.12. In words (and in the sketch after this list):
a. a typical dependent follows its parent (the word on which it depends):
likes her;
b. but a pre-dependent precedes its parent;
c. therefore, a subject (one kind of pre-dependent) precedes its parent:
He likes;
d. but the subject of an inverting auxiliary follows its parent: Does he?

Figure 19.12. Default and exceptional word orders in English
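
A sketch of this default-and-override logic (‘inverting-subject’ is my shorthand for the subject of an inverting auxiliary; only the non-default facts are stored):

    ISA = {'inverting-subject': 'subject',
           'subject': 'pre-dependent',
           'pre-dependent': 'dependent'}
    POSITION = {
        'dependent': 'after parent',          # (a) likes her
        'pre-dependent': 'before parent',     # (b)
        'inverting-subject': 'after parent',  # (d) Does he?
    }

    def position(dep_type):
        # (c) falls out for free: 'subject' stores no fact of its own and
        # inherits 'before parent' from 'pre-dependent'.
        while dep_type not in POSITION:
            dep_type = ISA[dep_type]
        return POSITION[dep_type]

    assert position('subject') == 'before parent'           # He likes
    assert position('inverting-subject') == 'after parent'  # Does he?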
A further source of flexibility in explaining word order is the fact that syntactic
structure is embedded in a network theory, which (in principle) allows unrestricted
links among nodes. This flexibility is in fact limited, but some multiple links are
permitted. Not only may one word have more than one dependent, but one word
may also have more than one parent. This is the source of most of the well-known
complications in syntax, such as raising, extraction, and extraposition. Sentence (5)
below contains examples of all three and shows how they can be explained in terms
of a tangle-free ‘‘surface’’ structure, which is displayed above the words, supplemented
by extra dependencies below the words. The Word Grammar analysis is
summarized in figure 19.13 (which ignores all the ‘‘isa’’ links to the grammar network).
Figure 19.13. An example illustrating raising, extraction, and extraposition
(5) It surprises me what she can do.
One of the attractions of this kind of grammar is that the structures combine
concreteness (there are no word orders other than the surface one) with abstractness
(the dependencies can show abstract relationships between nonadjacent words
and are generally in step with semantic relationships). This makes it relatively easy
to teach at an introductory level, where it is possible to teach a grammatical system
which can be applied to almost every word in any text (Hudson 1998). However, at
the same time, it allows relatively sophisticated analyses of most of the familiar
challenges for syntactic theory such as variable word order, coordination, and
prepositional pied-piping.
11. Lexical Semantics

According to Word Grammar, language is an area of our general conceptual network
which includes words and everything that we know about them. This area has
no natural boundaries, so there is no reason to distinguish between a word’s ‘‘truly
linguistic’’ meaning and the associated encyclopedic knowledge. For example, the
sense of the word CAT is the concept Cat, the same concept that we use in dealing
with cats in everyday life. It would be hard to justify an alternative analysis in which
either there were two ‘cat’ concepts, one for language and one for the encyclopedia,
or in which the characteristics of the Cat concept were divided into those which do
belong to language and those which do not. This general philosophy has been
applied in detail to the word CYCLE (Hudson and Holmes 2005).
In short, as in most other ‘‘cognitive’’ theories of semantics, a word’s meaning
is defined by a ‘‘frame of knowledge’’ (Fillmore 1985). In the case of Cat, the relevant
frame includes the links between this concept and other concepts such as Pet,
Mammal, Dog, Mouse, Fur, Milk, and Meowing. This frame of background knowledge
is highly relevant to the understanding of language; for example, the idea of
ownership in the concept Pet provides an easy interpretation for expressions like
our cat in contrast with, say, our mouse.
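
As a sketch, such a frame is no more than an adjacency structure over concepts; the link-type names below are my own guesses, since the text lists only the neighboring concepts.

    # The Cat frame: one concept serving language and world knowledge alike.
    CAT_FRAME = {
        'isa': ['Pet', 'Mammal'],
        'chased by': ['Dog'],
        'chases': ['Mouse'],
        'covering': ['Fur'],
        'drinks': ['Milk'],
        'characteristic sound': ['Meowing'],
    }

    # The Pet link carries the idea of ownership, which is what makes
    # 'our cat' easier to interpret than 'our mouse'.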
Moreover, any theory of language must make some attempt to formulate linking
rules which map semantic relations to syntactic relations. For instance, we must
at least be able to stipulate that with the verb HEAR, the hearer is identified by the
subject, in contrast with SOUND which links it to the prepositional object as in
That sounds good to me; and it would be even better if we could make these linkages
follow from more general facts. Word Grammar has the advantage of syntactic and
semantic structures that have very similar formal properties, so they should be relatively
easy to map onto one another. A syntactic structure consists of labeled