COGNITIVE PSYCHOLOGY

To Christine with love
(M.W.E.)
To Ruth with love all ways
(M.K.)
The only means of strengthening one’s intellect is to make up one’s mind about
nothing—to let the mind be a thoroughfare for all thoughts. Not a select party.
(John Keats)

Cognitive Psychology A Student’s
Handbook
Fourth Edition
Michael W. Eysenck
(Royal Holloway, University of London, UK)
Mark Keane
(University College Dublin, Ireland)
HOVE AND NEW YORK

First published 2000 by Psychology Press Ltd
27 Church Road, Hove, East Sussex BN3 2FA
www.psypress.co.uk
Simultaneously published in the USA and Canada
by Taylor & Francis Inc.
325 Chestnut Street, Philadelphia, PA 19106
Psychology Press is an imprint of the Taylor & Francis Group
This edition published in the Taylor & Francis e-Library, 2005.
“To purchase your own copy of this or any of Taylor & Francis or Routledge’s collection of thousands of eBooks please go to
www.eBookstore.tandf.co.uk.”


Reprinted 2000, 2001
Reprinted 2002 (twice) and 2003
by Psychology Press
27 Church Road, Hove, East Sussex BN3 2FA
29 West 35th Street, New York, NY 10001
© 2000 by Psychology Press Ltd
All rights reserved. No part of this book may be reprinted or
reproduced or utilised in any form or by any electronic,
mechanical, or other means, now known or hereafter invented,
including photocopying and recording, or in any information
storage or retrieval system, without permission in writing from
the publishers.
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
ISBN 0-203-62630-3 Master e-book ISBN
ISBN 0-203-62636-2 (Adobe eReader Format)
ISBN 0-86377-550-0 (hbk)
ISBN 0-86377-551-9 (pbk)
Cover design by Hybert Design, Waltham St Lawrence, Berkshire

Contents
Preface xii
1. Introduction 1
Cognitive psychology as a science 1
Cognitive science 5
Cognitive neuropsychology 13
Cognitive neuroscience 18
Outline of this book 25
Chapter summary 26
Further reading 27

2. Visual perception: Basic processes 28
Introduction 28
Perceptual organisation 28
Depth and size perception 34
Colour perception 43
Brain systems 48
Chapter summary 56
Further reading 57
3. Perception, movement, and action 58
Introduction 58
Constructivist theories 59
Direct perception 64
Theoretical integration 68
Motion, perception, and action 70
Visually guided action 71

Perception of object motion 79
Chapter summary 87
Further reading 89
4. Object recognition 90
Introduction 90
Pattern recognition 91
Marr’s computational theory 96
Cognitive neuropsychology approach 106
Cognitive science approach 109
Face recognition 116
Chapter summary 128
Further reading 129
5. Attention and performance limitations 130
Introduction 130

Focused auditory attention 132
Focused visual attention 136
Divided attention 147
Automatic processing 155
Action slips 160
Chapter summary 165
Further reading 166
6. Memory: Structure and processes 167
Introduction 167
The structure of memory 167
Working memory 172
Memory processes 182
Theories of forgetting 187
Theories of recall and recognition 194
Chapter summary 203
Further reading 204

7. Theories of long-term memory 205
Introduction 205
Episodic and semantic memory 205
Implicit memory 208
Implicit learning 211
Transfer appropriate processing 213
Amnesia 216
Theories of amnesia 223
Chapter summary 234
Further reading 235
8. Everyday memory 236
Introduction 236

Autobiographical memory 238
Memorable memories 245
Eyewitness testimony 249
Superior memory ability 256
Prospective memory 261
Evaluation of everyday memory research 263
Chapter summary 264
Further reading 265
9. Knowledge: Propositions and images 266
Introduction 266
What is a representation? 267
What is a proposition? 270
Propositions: Objects and relations 271
Schemata, frames, and scripts 276
What is an image? Some evidence 282
Propositions versus images 287
Kosslyn’s computational model of imagery 293
The neuropsychology of visual imagery 298

Connectionist representations 299
Chapter summary 304
Further reading 305
10. Objects, concepts, and categories 306
Introduction 306
Evidence on categories and categorisation 307
The defining-attribute view 313
The prototype view 317
The exemplar-based view 320
Explanation-based views of concepts 322

Conceptual combination 325
Concepts and similarity 326
Evaluating theories of categorisation 331
Neurological evidence on concepts 332
Chapter summary 333
Further reading 334
11. Speech perception and reading 335
Introduction 335
Listening to speech 336
Theories of word recognition 340
Cognitive neuropsychology 345
Basic reading processes 348
Word identification 352
Routes from print to sound 357
Chapter summary 365
Further reading 367
12. Language comprehension 368
Introduction 368
Sentence processing 368
Capacity theory 376

Discourse processing 379
Story processing 386
Chapter summary 397
Further reading 398
13. Language production 399
Introduction 399
Speech as communication 399
Speech production processes 401

Theories of speech production 403
Cognitive neuropsychology: Speech production 410
Cognitive neuroscience: Speech production 412
Writing: Basic processes 414
Cognitive neuropsychology: Writing 419
Speaking and writing compared 425
Language and thought 426
Chapter summary 428
Further reading 430
14. Problem solving: Puzzles, insight, and expertise 431
Introduction 431
Early research: The Gestalt school 433
Newell and Simon’s problem-space theory 438
Evaluating research on puzzles 446
Re-interpreting the Gestalt findings 449
From puzzles to expertise 452
Evaluation of expertise research 461
Learning to be an expert 461
Cognitive neuropsychology of thinking 465
Chapter summary 466
Further reading 467
15. Creativity and discovery 468

Introduction 468
Genius and talent 468
General approaches to creativity 469
Discovery using mental models 473
Discovery by analogy 476
Scientific discovery by hypothesis testing 480

Evaluating problem-solving research 483
Chapter summary 486
Further reading 487
16. Reasoning and deduction 488
Introduction 488
Theoretical approaches to reasoning 491
How people reason with conditionals 492
Abstract-rule theory 502
Mental models theory 506
Domain-specific rule theories 513
Probabilistic theory 515
Cognitive neuropsychology of reasoning 518
Rationality and evaluation of theories 519
Chapter summary 520
Further reading 521
17. Judgement and decision making 522
Introduction 522
Judgement research 523
Decision making 531
How flawed are judgement and decision making? 534
Chapter summary 535
Further reading 536
18. Cognition and emotion 537
Introduction 537

Does affect require cognition? 537
Theories of emotional processing 543
Emotion and memory 549
Emotion, attention, and perception 556

Conclusions on emotional processing 561
Chapter summary 563
Further reading 564
19. Present and future 565
Introduction 565
Experimental cognitive psychology 565
Cognitive neuropsychology 568
Cognitive science 570
Cognitive neuroscience 573
Present and future directions 575
Chapter summary 576
Further reading 577
Glossary 579
References 591
Author index 657
Subject index 680

Preface
Cognitive psychology has changed in several exciting ways in the few years since the third edition of this
textbook. Of all the changes, the most dramatic has been the huge increase in the number of studies making
use of sophisticated techniques (e.g., PET scans) to investigate human cognition. During the 1990s, such
studies probably increased tenfold, and are set to increase still further during the early years of the third
millennium. As a result, we now have four major approaches to cognitive psychology: experimental
cognitive psychology based mainly on laboratory experiments; cognitive neuropsychology, which points
up the effects of brain damage on cognition; cognitive science, with its emphasis on computational
modelling; and cognitive neuroscience, which uses a wide range of techniques to study brain functioning. It
is a worthwhile (but challenging) business to try to integrate information from these four approaches, and
that is exactly what we have tried to do in this book. As before, our busy professional lives have made it
essential for us to work hard to avoid chaos. For example, the first author wrote several parts of the book in

China, and other parts were written in Mexico, Poland, Russia, Israel, and the United States. The second
author followed Joyce’s ghost, writing parts of the book between Dublin and Trieste.
I (Michael Eysenck) would like to express my profound gratitude to my wife Christine, to whom this
book (in common with the previous edition) is appropriately dedicated. I am also very grateful to our three
children (Fleur, William, and Juliet) for their tolerance and understanding, just as was the case with the
previous edition of this book. However, when I look back to the writing of the third edition of this
textbook, it amazes me how much they have changed over the last five years.
Since I (Mark Keane) first collaborated on Cognitive Psychology: A Student’s Handbook in 1990, my
professional life has undergone considerable change, from a post-doc in psychology to Professor of
Computer Science. My original motivation in writing this text was to influence the course of cognitive
psychology as it was then developing, to encourage its extension in a computational direction. Looking back
over the last 10 years, I am struck by the slowness of change in the introduction of these ideas. The standard
psychology undergraduate degree does a very good job at giving students the tools for the empirical
exploration of the mind. However, few courses give students the tools for the theoretical elaboration of the
topic. In this respect, the discipline gets a “could do better” rather than an “excellent” on the mark sheet.
We are very grateful to several people for reading an entire draft of this book, and for offering valuable
advice on how it might be improved. They include Ruth Byrne, Liz Styles, Trevor Harley, and Robert
Logie. We would also like to thank those who commented on various chapters: John Towse, Steve
Anderson, James Hampton, Fernand Gobet, Evan Heit, Alan Parkin, David Over, Ken Manktelow, Ken
Gilhooly, Peter Ayton, Clare Harries, George Mather, Mark Georgeson, Gerry Altmann, Nick Wade, Mick
Power, David Hardman, John Richardson, Vicki Bruce, Gillian Cohen, and Jonathan St. B.T. Evans.
Michael Eysenck and Mark Keane

1
Introduction
COGNITIVE PSYCHOLOGY AS A SCIENCE
In the years leading up to the millennium, people made increased efforts to understand each other and their
own inner, mental space. This concern was marked by a tidal wave of research in the field of cognitive
psychology, and by the emergence of cognitive science as a unified programme for studying the mind.
In the popular media, there are numerous books, films, and television programmes on the more accessible

aspects of cognitive research. In scientific circles, cognitive psychology is currently a thriving area, dealing
with a bewildering diversity of phenomena, including topics like attention, perception, learning, memory,
language, emotion, concept formation, and thinking.
In spite of its diversity, cognitive psychology is unified by a common approach based on an analogy between
the mind and the digital computer; this is the information-processing approach. This approach is the
dominant paradigm or theoretical orientation (Kuhn, 1970) within cognitive psychology, and has been for
some decades.
Historical roots of cognitive psychology
The year 1956 was critical in the development of cognitive psychology. At a meeting at the Massachusetts
Institute of Technology, Chomsky gave a paper on his theory of language, George Miller presented a paper
on the magic number seven in short-term memory (Miller, 1956), and Newell and Simon discussed their
very influential computational model called the General Problem Solver (discussed in Newell, Shaw, &
Simon, 1958; see also Chapter 15). In addition, the first systematic attempt to consider concept formation
from a cognitive perspective was reported (Bruner, Goodnow, & Austin, 1956).
The field of Artificial Intelligence was also founded in 1956 at the Dartmouth Conference, which was
attended by Chomsky, McCarthy, Minsky, Newell, Simon, and Miller (see Gardner, 1985). Thus, 1956
witnessed the birth of both cognitive psychology and cognitive science as major disciplines. Books devoted
to aspects of cognitive psychology began to appear (e.g., Broadbent, 1958; Bruner et al., 1956). However, it
took several years before the entire information-processing viewpoint reached undergraduate courses
(Lachman, Lachman, & Butterfield, 1979; Lindsay & Norman, 1977).
Information processing: Consensus
Broadbent (1958) argued that much of cognition consists of a sequential series of processing stages. When a
stimulus is presented, basic perceptual processes occur, followed by attentional processes that transfer some

of the products of the initial perceptual processing to a short-term memory store. Thereafter, rehearsal
serves to maintain information in the short-term memory store, and some of the information is transferred to
a long-term memory store. Atkinson and Shiffrin (1968; see also Chapter 6) put forward one of the most
detailed theories of this type.
This theoretical approach provided a simple framework for textbook writers. The stimulus input could be
followed from the sense organs to its ultimate storage in long-term memory by successive chapters on

perception, attention, short-term memory, and long-term memory. The crucial limitation with this approach
is its assumption that stimuli impinge on an inactive and unprepared organism. In fact, processing is often
affected substantially by the individual’s past experience, expectations, and so on.
We can distinguish between bottom-up processing and top-down processing. Bottom-up or stimulus-
driven processing is directly affected by stimulus input, whereas top-down or conceptually driven
processing is affected by what the individual contributes (e.g., expectations determined by context and past
experience). As an example of top-down processing, it is easier to read the word “well” in poor handwriting
if it is presented in the sentence context, “I hope you are quite___”, than when it is presented on its own.
The sequential stage model deals primarily with bottom-up or stimulus-driven processing, and its failure to
consider top-down processing adequately is its greatest limitation.
During the 1970s, theorists such as Neisser (1976) argued that nearly all cognitive activity consists of
interactive bottom-up and top-down processes occurring together (see Chapter 4). Perception and
remembering might seem to be exceptions, because perception depends heavily on the precise stimuli
presented (and thus on bottom-up processing), and remembering depends crucially on stored information
(and thus on top-down processing). However, perception is influenced by the perceiver’s expectations about
to-be-presented stimuli (see Chapters 2, 3, and 4), and remembering is influenced by the precise
environmental cues to memory that are available (see Chapter 6).
By the end of the 1970s, most cognitive psychologists agreed that the information-processing paradigm
was the best way to study human cognition (see Lachman et al., 1979):
• People are autonomous, intentional beings interacting with the external world.
• The mind through which they interact with the world is a general-purpose, symbol-processing system
(“symbols” are patterns stored in long-term memory which “designate or ‘point to’ structures outside
themselves”; Simon & Kaplan, 1989, p. 13).
• Symbols are acted on by processes that transform them into other symbols that ultimately relate to things
in the external world.
• The aim of psychological research is to specify the symbolic processes and representations underlying
performance on all cognitive tasks.
• Cognitive processes take time, and predictions about reaction times can often be made.
• The mind is a limited-capacity processor having structural and resource limitations.
• The symbol system depends on a neurological substrate, but is not wholly constrained by it.

Many of these ideas stemmed from the view that human cognition resembles the functioning of computers.
As Herb Simon (1980, p. 45) expressed it, “It might have been necessary a decade ago to argue for the
commonality of the information processes that are employed by such disparate systems as computers and
human nervous systems. The evidence for that commonality is now overwhelming.” (See Simon, 1995, for
an update of this view.)
The information-processing framework is continually developing: the computational metaphor is extended as computer technology develops. In the 1950s and
1960s, researchers mainly used the general properties of the computer to understand the mind (e.g., that it
had a central processor and memory registers). Many different programming languages had been developed
by the 1970s, leading to various aspects of computer software and languages being used (e.g., Johnson-Laird, 1977, on analogies to language understanding). After that, as massively parallel machines were
developed, theorists returned to the notion that cognitive theories should be based on the parallel processing
capabilities of the brain (Rumelhart, McClelland, & the PDP Research Group, 1986).
Information processing: Diversity
Cognitive science is a trans-disciplinary grouping of cognitive psychology, artificial intelligence,
linguistics, philosophy, neuroscience, and anthropology. The common aim of these disciplines is the
understanding of the mind. To simplify matters, we will focus mainly on the relationship between cognitive
psychology and artificial intelligence.
At the risk of oversimplification, we can identify four major approaches within cognitive psychology:
• Experimental cognitive psychology: it follows the experimental tradition of cognitive psychology, and
involves no computational modelling.
• Cognitive science: it develops computational models to understand human cognition.
• Cognitive neuropsychology: it studies patterns of cognitive impairment shown by brain-damaged patients
to provide valuable information about normal human cognition.
• Cognitive neuroscience: it uses several techniques for studying brain functioning (e.g., brain scans) to
understand human cognition.
There are various reasons why these distinctions are less neat and tidy in reality than we have implied. First,
terms such as cognitive science and cognitive neuroscience are sometimes used in a broader and more

inclusive way than we have done. Second, there has been a rapid increase in recent years in studies that
combine elements of more than one approach. Third, some have argued that experimental cognitive
psychologists and cognitive scientists are both endangered species, given the galloping expansion of
cognitive neuropsychology and cognitive neuroscience.
In this book, we will provide a synthesis of the insights emerging from all four approaches. The approach
taken by experimental cognitive psychologists has been in existence for several decades, so we will focus
mainly on the approaches of cognitive scientists, cognitive neuropsychologists, and cognitive
neuroscientists in the following sections. Before doing so, however, we will consider some traditional ways
of obtaining evidence about human cognition.
Empirical methods
In most of the research discussed in this book, cognitive processes and structures were inferred from
participants’ behaviour (e.g., speed and/or accuracy of performance) obtained under well controlled
conditions. This approach has proved to be very useful, and the data thus obtained have been used in the
development and subsequent testing of most theories in cognitive psychology. However, there are two
major potential problems with the use of such data:
1. Measures of the speed and accuracy of performance provide only indirect information about the internal
processes and structures of central interest to cognitive psychologists.
2. Behavioural data are usually gathered in the artificial surroundings of the laboratory. The ways in
which people behave in the laboratory may differ greatly from the ways they behave in everyday life
(see Chapter 19).
Cognitive psychologists do not rely solely on behavioural data to obtain useful information from their
participants. An alternative way of studying cognitive processes is by making use of introspection, which is
defined by the Oxford English Dictionary as “examination or observation of one’s own mental processes”.
Introspection depends on conscious experience, and each individual’s conscious experience is personal and
private. In spite of this, it is often assumed that introspection can provide useful evidence about some
mental processes.
Nisbett and Wilson (1977) argued that introspection is practically worthless, supporting their argument
with examples. In one study, participants were presented with a display of five essentially identical pairs of

stockings, and decided which pair was the best. After they had made their choice, they indicated why they
had chosen that particular pair. Most participants chose the rightmost pair, and so their decisions were
actually affected by relative spatial position. However, the participants strongly denied that spatial position
had played any part in their decision, referring instead to slight differences in colour, texture, and so on
among the pairs of stockings as having been important.
Nisbett and Wilson (1977, p. 248) claimed that people are generally unaware of the processes influencing
their behaviour: “When people are asked to report how a particular stimulus influenced a particular
response, they do so not by consulting a memory of the mediating process, but by applying or generating
causal theories about the effects of that type of stimulus on that type of response.” This view was supported
by the discovery that an individual’s introspections about what is determining his or her behaviour are often
no more accurate than the guesses made by others.
The limitations of introspective evidence are becoming increasingly clear. For example, consider research
on implicit learning, which involves learning complex material without the ability to verbalise what has
been learned. There is reasonable evidence for the existence of implicit learning (see Chapter 7). There is
even stronger evidence for implicit memory, which involves memory in the absence of conscious
recollection. Normal and brain-damaged individuals can exhibit excellent memory performance even when
they show no relevant introspective evidence (see Chapter 7).
Ericsson and Simon (1980, 1984) argued that Nisbett and Wilson (1977) had overstated the case against
introspection. They proposed various criteria for distinguishing between valid and invalid uses of
introspection:
• It is preferable to obtain introspective reports during the performance of a task rather than retrospectively,
because of the fallibility of memory.
• Participants are more likely to produce accurate introspections when describing what they are attending
to, or thinking about, than when required to interpret a situation or their own thought processes.
• People cannot usefully introspect about several kinds of processes (e.g., neuronal processes; recognition
processes).
Careful consideration of the studies that Nisbett and Wilson (1977) regarded as striking evidence of the
worthlessness of introspection reveals that participants generally provided retrospective interpretations about
information that had probably never been fully attended to. Thus, their findings are consistent with the
proposed guidelines for the use of introspection (Crutcher, 1994; Ericsson & Simon, 1984).


In sum, introspection is sometimes useful, but there is no conscious awareness of many cognitive
processes or their products. This point is illustrated by the phenomena of implicit learning and implicit
memory, but numerous other examples of the limitations of introspection will be presented throughout this
book.
COGNITIVE SCIENCE
Cognitive scientists develop computational models to understand human cognition. A decent computational
model can show us that a given theory can be specified and allow us to predict behaviour in new situations.
Mathematical models were used in experimental psychology long before the emergence of the information-
processing paradigm (e.g., in IQ testing). These models can be used to make predictions, but often lack an
explanatory component. For example, committing three traffic violations is a good predictor of whether a
person is a bad risk for car insurance, but it is not clear why. One of the major benefits of the computational
models developed in cognitive science is that they can provide both an explanatory and predictive basis for
a phenomenon (e.g., Keane, Ledgeway, & Duff, 1994; Costello & Keane, 2000). We will focus on
computational models in this section, because they are the hallmark of the cognitive science approach.
FIGURE 1.1
A flowchart of a bad theory about how we understand sentences.


Computational modelling: From flowcharts to simulations
In the past, many experimental cognitive psychologists stated their theories in vague verbal statements. This
made it hard to decide whether the evidence fitted the theory. In contrast, cognitive scientists produce
computer programs to represent cognitive theories with all the details made explicit. In the 1960s and
1970s, cognitive psychologists tended to use flowcharts rather than programs to characterise their theories.
Computer scientists use flowcharts as a sort of plan or blueprint for a program, before they write the
detailed code for it. Flowcharts are more specific than verbal descriptions, but can still be underspecified if
not accompanied by a coded program.
An example of a very inadequate flowchart is shown in Figure 1.1. This is a flowchart of a bad theory

about how we understand sentences. It assumes that a sentence is encoded in some form and then stored.
After that, a decision process (indicated by a diamond) determines if the sentence is too long. If it is too
long, then it is broken up and we return to the encode stage to re-encode the sentence. If it is ambiguous,
then its two senses are distinguished, and we return to the encode stage. If it is not ambiguous, then it is
stored in long-term memory. After one sentence is stored, we return to the encode stage to consider the next
sentence.
In the days when cognitive psychologists only used flowcharts, sarcastic questions abounded, such as,
“What happens in the boxes?” or “What goes down the arrows?”. Such comments point to genuine criticisms.
We need to know what is meant by “encode sentence”, how long is “too long”, and how sentence ambiguity
is tested. For example, after deciding that only a certain length of sentence is acceptable, it may turn out that
it is impossible to decide whether the sentence portions are ambiguous without considering the entire
sentence. Thus, the boxes may look all right at a superficial glance, but real contradictions may appear when
their contents are specified.
In similar fashion, exactly what goes down the arrows is critical. If one examines all the arrows
converging on the “encode sentence” box, it is clear that more needs to be specified. There are four
different kinds of thing entering this box: an encoded sentence from the environment; a sentence that has
been broken up into bits by the “split-sentence” box; a sentence that has been broken up into several senses;
and a command to consider the next sentence. Thus, the “encode” box has to perform several specific
operations. In addition, it may have to record the fact that an item is either a sentence or a possible meaning
of a sentence. Several other complex processes have to be specified within the “encode” box to handle these
tasks, but the flowchart sadly fails to address these issues. The gaps in the flowchart show some
similarities with those in the formula shown in Figure 1.2.
Not all theories expressed as flowcharts possess the deficiencies of the one described here. However,
implementing a theory as a program is a good method for checking that it contains no hidden assumptions
or vague terms. In the previous example, this would involve specifying the form of the input sentences, the
nature of the storage mechanisms, and the various decision processes (e.g., those about sentence length and
ambiguity). These computer programs are written in artificial intelligence programming languages, usually
LISP (Norvig, 1992) or PROLOG (Shoham, 1993).
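As an illustration, the flowchart of Figure 1.1 can be rendered as a short program, and doing so immediately forces decisions the flowchart leaves open. The sketch below uses Python for readability rather than LISP or PROLOG; the word limit, the encoding scheme, and the ambiguity test are all hypothetical choices of ours, not part of the theory:

```python
# A deliberately literal rendering of the Figure 1.1 flowchart.
# MAX_WORDS, encode(), and is_ambiguous() are all hypothetical
# choices of ours: the flowchart itself never specifies them.

MAX_WORDS = 8  # how long is "too long"? The flowchart never says.

def encode(sentence):
    """The "encode sentence" box: here, simply split into words."""
    return sentence.split()

def is_ambiguous(words):
    """Stub ambiguity test: flag one known ambiguous word."""
    return "bank" in words

def understand(sentence, memory):
    units = [encode(sentence)]          # items awaiting the decision diamonds
    while units:
        words = units.pop()
        if len(words) > MAX_WORDS:      # "too long?" -> split and re-encode
            mid = len(words) // 2
            units += [words[:mid], words[mid:]]
        elif is_ambiguous(words):
            # The flowchart loops back to "encode" here, which would never
            # terminate; we store both senses directly instead.
            memory.append((" ".join(words), "sense-1"))
            memory.append((" ".join(words), "sense-2"))
        else:
            memory.append((" ".join(words), "literal"))
    return memory

print(understand("I hope you are quite well", []))
```

Even this toy version had to commit to a word limit, a splitting rule, and a termination fix for the ambiguity loop, which is precisely the discipline that writing the program imposes.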
There are many issues surrounding the use of computer simulations and the ways in which they do and do
not simulate cognitive processes (Cooper, Fox, Farrington, & Shallice, 1996; Costello & Keane, 2000; Palmer

& Kimchi, 1986). Palmer and Kimchi (1986) argued that it should be possible to decompose a theory
successively through a number of levels (from descriptive statement to flowchart to specific functions in a
program) until one reaches a written program. In addition, they argued that it should be possible to draw a
line at some level of decomposition, and say that everything above that line is psychologically plausible or
meaningful, whereas everything below it is not. This issue of separating psychological aspects of the
program from other aspects arises because there will always be parts of the program that have little to do
with the psychological theory, but which are there simply because of the particular programming language
being used and the machine on which the program is running. For example, in order to see what the program
is doing, it is necessary to have print commands in the program which show the outputs of various stages on the computer’s screen. However, no-one would argue that such print commands form part of the psychological model. Cooper et al. (1996) argue that psychological theories should not be described using natural language at all, but that a formal specification language should be used. This would be a very precise language, like a logic, that would be directly executable as a program.

Three issues surrounding computer simulation:
• Is it possible to decompose a theory until one reaches the level of a written program?
• Is it possible to separate psychological aspects of a program from other aspects?
• Are there differences in reaction time between programs and human participants?

FIGURE 1.2
The problem of being specific. Copyright © 1977 by Sidney Harris in American Scientist Magazine. Reproduced with permission of the author.

Other issues arise about the relationship between the performance of the program and the performance of human participants (Costello & Keane, 2000). For example, it is seldom meaningful to relate the speed of the program doing a simulated task to the reaction time taken by human participants, because the processing times of programs are affected by psychologically irrelevant features. Programs run faster on more powerful computers, or if the program’s code is interpreted rather than compiled. However, the various materials that are presented to the program should result in differences in program operation time that correlate closely with differences in participants’ reaction times in processing the same materials. At the very least, the program should be able to reproduce the same outputs as participants when given the same inputs.
Computational modelling techniques
The general characteristics of computational models of cognition have been discussed at some length. It is
now time to deal with some of the main types of computational model that have been used in recent years.
Three main types are outlined briefly here: semantic networks; production systems; and connectionist
networks.
Semantic networks
Consider the problem of modelling what we know about the world (see Chapter 9). There is a long
tradition from Aristotle and the British empiricist school of philosophers (Locke, Hume, Mill, Hartley,
Bain) which proposes that all knowledge is in the form of associations. Three main principles of association
have been proposed:
• Contiguity: two things become associated because they occurred together in time.
• Similarity: two things become associated because they are alike.
• Contrast: two things become associated because they are opposites.
There is a whole class of cognitive models owing their origins to these ideas; they are called associative or
semantic or declarative networks. Semantic networks have the following general characteristics:
• Concepts are represented by linked nodes that form a network.
• These links can be of various kinds; they can represent very general relations (e.g., is-associated-with or
is-similar-to), specific, simple relations like is-a (e.g., John is-a policeman), or more complex relations
like play, hit, and kick.
• The nodes themselves and the links among nodes can have various activation strengths representing the
similarity of one concept to another. Thus, for example, a dog and a cat node may be connected by a link
with an activation of 0.5, whereas a dog and a pencil may be connected by a link with a strength of 0.1.
• Learning takes the form of adding new links and nodes to the network or changing the activation values
on the links between nodes. For example, in learning that two concepts are similar, the activation of a
link between them may be increased.
• Various effects (e.g., memory effects) can be modelled by allowing activation to spread throughout the
network from a given node or set of nodes.
• The way in which activation spreads through a network can be determined by a variety of factors. For
example, it can be affected by the number of links between a given node and the point of activation, or
by the amount of time that has passed since the onset of activation.
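For readers who find a concrete sketch helpful, the characteristics listed above can be expressed in a few lines of Python. The concepts, link weights, and decay value here are invented purely for illustration, and the single-step spreading rule is a deliberate simplification of models such as Collins and Loftus (1975):

```python
# A toy semantic network: nodes, weighted links, and one step of
# spreading activation. All weights are hypothetical.
links = {
    ("dog", "cat"): 0.5,      # similar concepts share a strong link
    ("dog", "animal"): 0.8,
    ("cat", "animal"): 0.8,
    ("dog", "pencil"): 0.1,   # unrelated concepts share only a weak link
}

def neighbours(node):
    """Yield (other_node, link_strength) pairs for every link touching `node`."""
    for (a, b), weight in links.items():
        if a == node:
            yield b, weight
        elif b == node:
            yield a, weight

def spread(source, decay=0.5):
    """One step of spreading activation from a fully activated source node."""
    activation = {source: 1.0}
    for node, weight in neighbours(source):
        activation[node] = weight * decay   # stronger links pass on more activation
    return activation

act = spread("cat")
# "dog" receives far more activation from "cat" than "pencil" would,
# which is the basis of the semantic priming effect discussed below.
```

In such a scheme, learning that two concepts are similar amounts to nothing more than raising the weight on the link between their nodes.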
Part of a very simple network model is shown in Figure 1.3. It corresponds closely to the semantic network
model proposed by Collins and Loftus (1975). Such models have been successful in accounting for
various findings. Semantic priming effects in which the word “dog” is recognised more readily if it is
preceded by the word “cat” (Meyer & Schvaneveldt, 1971) can be easily modelled using such networks (see
Chapter 12). Ayers and Reder (1998) have used semantic networks to understand misinformation effects in
eyewitness testimony (see Chapter 8). At their best, semantic networks are both flexible and elegant
modelling schemes.
Production systems
Another popular approach to modelling cognition involves production systems. These are made up of
productions, where a production is an “IF… THEN” rule. These rules can take many forms, but an example
that is very useful in everyday life is, “If the green man is lit up, then cross the road”. In a typical production
system model, there is a long-term memory that contains a large set of these IF…THEN rules. There is also
a working memory (i.e., a system holding information that is currently being processed). If information from
the environment that “the green man is lit up” reaches working memory, it will match the IF-part of the rule
in long-term memory, and trigger the THEN-part of the rule (i.e., cross the road).
Production systems have the following characteristics:
• They have numerous IF…THEN rules.
• They have a working memory containing information.
• The production system operates by matching the contents of working memory against the IF-parts of the
rules and executing the THEN-parts.
• If some information in working memory matches the IF-part of many rules, there may be a conflict-
resolution strategy selecting one of these rules as the best one to be executed.
FIGURE 1.3
A schematic diagram of a simple semantic network with nodes for various concepts (e.g., dog, cat), and links
between these nodes indicating the differential similarity of these concepts to each other.

Consider a very simple production system operating on lists of letters involving As and Bs (see Figure 1.4).
The system has two rules:
1. IF a list in working memory has an A at the end THEN replace the A with AB.
2. IF a list in working memory has a B at the end THEN replace the B with an A.
If we give this system different inputs in the form of different lists of letters, then different things happen. If
we give it CCC, this will be stored in working memory but will remain unchanged, because it does not
match either of the IF-parts of the two rules. If we give it A, then it will be modified by the rules after the A
is stored in working memory. This A is a list of one item and as such it matches rule 1. Rule 1 has the effect
of replacing the A with AB, so that when the THEN-part is executed, working memory will contain an AB.
On the next cycle, AB does not match rule 1 but it does match rule 2. As a result, the B is replaced by an A,
leaving an AA in working memory. The system will next produce AAB, then AAAB, and so on.
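This two-rule system is simple enough to be implemented directly. The sketch below (in Python, as an illustration only) represents working memory as a string and applies the first rule whose IF-part matches on each recognise-act cycle:

```python
# The two-rule production system described above. Working memory holds a
# string of letters; each cycle matches the rules against it and fires one.
def cycle(wm):
    if wm.endswith("A"):      # Rule 1: IF the list ends in A THEN replace the A with AB
        return wm[:-1] + "AB"
    if wm.endswith("B"):      # Rule 2: IF the list ends in B THEN replace the B with an A
        return wm[:-1] + "A"
    return wm                 # no IF-part matches: working memory is unchanged

wm = "A"
history = [wm]
for _ in range(5):            # run five recognise-act cycles
    wm = cycle(wm)
    history.append(wm)
print(history)        # ['A', 'AB', 'AA', 'AAB', 'AAA', 'AAAB']
print(cycle("CCC"))   # 'CCC' -- stored, but never changed
```

Because rule 1 is tried first, it also acts as a trivial conflict-resolution strategy: when both rules could apply, rule 1 wins.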
Many aspects of cognition can be specified as sets of IF…THEN rules. For example, chess knowledge
can readily be represented as a set of productions based on rules such as, “If the Queen is threatened, then move
the Queen to a safe square”. In this way, people’s basic knowledge of chess can be modelled as a collection
of productions, and gaps in this knowledge as the absence of some productions. Newell and Simon (1972)
first established the usefulness of production system models in characterising cognitive processes like
problem solving and reasoning (see Chapter 14). However, these models have a wider applicability.
Anderson (1993) has modelled human learning using production systems (see Chapter 14), and others have
used them to model reinforcement behaviour in rats, and semantic memory (Holland et al., 1986).
Connectionist networks
Connectionist networks, neural networks, or parallel distributed processing models as they are variously
called, are relative newcomers to the computational modelling scene. All previous techniques were marked
by the need to program explicitly all aspects of the model, and by their use of explicit symbols to represent
concepts. Connectionist networks, on the other hand, can to some extent program themselves, in that they
can “learn” to produce specific outputs when certain inputs are given to them. Furthermore, connectionist
modellers often reject the use of explicit rules and symbols and use distributed representations, in which
concepts are characterised as patterns of activation in the network (see Chapter 9).
FIGURE 1.4
A schematic diagram of a simple production system.

Early theoretical proposals about the feasibility of learning in neural-like networks were made by
McCulloch and Pitts (1943) and by Hebb (1949). However, the first neural network models, called
Perceptrons, were shown to have several limitations (Minsky & Papert, 1988). By the late 1970s, hardware
and software developments in computing offered the possibility of constructing more complex networks
overcoming many of these original limitations (e.g., Rumelhart, McClelland, & the PDP Research Group,
1986; McClelland, Rumelhart, & the PDP Research Group, 1986).
Connectionist networks typically have the following characteristics (see Figure 1.5):
• The network consists of elementary or neuron-like units or nodes connected together so that a single unit
has many links to other units.
• Units affect other units by exciting or inhibiting them.
• A unit usually takes the weighted sum of the activation arriving on all of its input links, and produces a
single output to another unit if the weighted sum exceeds some threshold value.
• The network as a whole is characterised by the properties of the units that make it up, by the way they
are connected together, and by the rules used to change the strength of connections among units.
FIGURE 1.5
A multi-layered connectionist network with a layer of input units, a layer of internal representation units or hidden
units, and a layer of output units. Input patterns can be encoded, if there are enough hidden units, in a form that allows
the appropriate output pattern to be generated from a given input pattern. Reproduced with permission from David E.
Rumelhart & James L.McClelland, Parallel distributed processing: Explorations in the microstructure of cognition (Vol.
1), published by the MIT Press, © 1986, the Massachusetts Institute of Technology.

• Networks can have different structures or layers; they can have a layer of input units, intermediate layers
(of so-called “hidden units”), and a layer of output units.
• A representation of a concept can be stored in a distributed manner by a pattern of activation throughout
the network.
• The same network can store many patterns without them necessarily interfering with each other if they
are sufficiently distinct.
• An important learning rule used in networks is called backward propagation of errors (BackProp).
In order to understand connectionist networks fully, let us consider how individual units act when activation
impinges on them. Any given unit can be connected to several other units (see Figure 1.6). Each of these
other units can send an excitatory or an inhibitory signal to the first unit. This unit generally takes a
weighted sum of all these inputs. If this sum exceeds some threshold, it produces an output. Figure 1.6 shows
a simple diagram of just such a unit, which takes the inputs from a number of other units and sums them to
produce an output if a certain threshold is exceeded.
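The behaviour of such a unit can be captured in a couple of lines. In this illustrative Python sketch, the activation values, weights, and threshold are all invented; a negative weight plays the role of an inhibitory link:

```python
# A single neuron-like unit: it takes the weighted sum of the activation
# on its input links and fires only if that sum exceeds a threshold.
def unit_output(activations, weights, threshold=1.0):
    weighted_sum = sum(a * w for a, w in zip(activations, weights))
    return 1 if weighted_sum > threshold else 0

# Two excitatory links and one inhibitory link (a negative weight):
print(unit_output([1, 1, 1], [0.8, 0.7, -0.4]))  # sum = 1.1 > 1.0, so the unit fires: 1
print(unit_output([1, 1, 1], [0.8, 0.7, -0.6]))  # sum = 0.9 < 1.0, so it stays silent: 0
```

Note that the same inputs produce different outputs once the weights change, which is exactly the lever that learning rules exploit.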
These networks can model cognitive behaviour without recourse to the kinds of explicit rules found in
production systems. They do this by storing patterns of activation in the network that associate various
inputs with certain outputs. The models typically make use of several layers to deal with complex
behaviour. One layer consists of input units that encode a stimulus as a pattern of activation in those units.
Another layer is an output layer, which produces some response as a pattern of activation. When the
network has learned to produce a particular response at the output layer following the presentation of a
particular stimulus at the input layer, it can exhibit behaviour that looks “as if” it had learned a rule of the form
“IF such-and-such is the case THEN do so-and-so”. However, no such rules exist explicitly in the model.
Networks learn the association between different inputs and outputs by modifying the weights on the
links between units in the net. In Figure 1.6, we see that the weight on the links to a unit, as well as the
activation of other units, plays a crucial role in computing the response of that unit. Various learning rules
modify these weights in systematic ways. When we apply such learning rules to a network, the weights on
the links are modified until the net produces the required output patterns given certain input patterns.
One such learning rule is called “backward propagation of errors” or BackProp. BackProp allows a
network to learn to associate a particular input pattern with a given output pattern. At the start of the
learning period, the network is set up with random weights on the links among the units. During the early
stages of learning, after the input pattern has been presented, the output units often produce the incorrect
pattern or response. BackProp compares this imperfect pattern with the known required response, noting the
errors that occur. It then propagates these errors backwards through the network so that the weights between
the units are adjusted to produce the required pattern. This process is repeated with a particular stimulus pattern
until the network produces the required response pattern. Thus, the model can be made to learn the
behaviour with which the cognitive scientist is concerned, rather than being explicitly programmed to do so.
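The BackProp cycle just described, namely random starting weights, a forward pass, comparison with the required response, and backward adjustment of the weights, can be sketched compactly with NumPy. The task (XOR, the classic problem a single-layer Perceptron cannot solve), the network size, the learning rate, and the number of training cycles below are all arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: input patterns and the required output patterns
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, b1, W2, b2):
    H = sigmoid(X @ W1 + b1)      # hidden-layer activations
    Y = sigmoid(H @ W2 + b2)      # output-layer activations
    return H, Y

# start with random weights, as the text describes
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)   # hidden -> output

_, Y0 = forward(X, W1, b1, W2, b2)
initial_error = np.mean((Y0 - T) ** 2)   # early responses are usually wrong

lr = 0.5
for _ in range(10000):                   # repeated presentation of the patterns
    H, Y = forward(X, W1, b1, W2, b2)
    delta_out = (Y - T) * Y * (1 - Y)             # error at the output units
    delta_hid = (delta_out @ W2.T) * H * (1 - H)  # error passed backwards
    W2 -= lr * (H.T @ delta_out); b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * (X.T @ delta_hid); b1 -= lr * delta_hid.sum(axis=0)

_, Y = forward(X, W1, b1, W2, b2)
final_error = np.mean((Y - T) ** 2)
print(initial_error, final_error)   # the error shrinks as the weights are tuned
```

After training, the outputs typically approach the required pattern [0, 1, 1, 0], yet no IF…THEN rule for XOR appears anywhere in the model; the “rule” exists only implicitly in the weights.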
Networks have been used to produce very interesting results. Several examples will be discussed
throughout the text (see, for examples, Chapters 2, 10, and 16), but one concrete example will be mentioned
here. Sejnowski and Rosenberg (1987) produced a connectionist network called NETtalk, which takes an
English text as its input and produces reasonable English speech output. Even though the network is trained
on a limited set of words, it can pronounce the words from new text with about 90% accuracy. Thus, the
network seems to have learned the “rules of English pronunciation”, but it has done so without having
explicit rules that combine and encode sounds.
Connectionist models such as NETtalk have great “Wow!” value, and are the subject of much research
interest. Some researchers might object to our classification of connectionist networks as merely one among