
An Introduction to Pattern Recognition
Michael Alder
HeavenForBooks.com
This Edition © Mike Alder, 2001
Warning: This edition is not to be copied, transmitted, excerpted or printed except on terms authorised by the publisher.

An Introduction to Pattern Recognition: Statistical, Neural Net and Syntactic methods of getting robots to see and hear.
Michael D. Alder
September 19, 1997
Preface
Automation, the use of robots in industry, has not progressed with the speed that many had hoped it
would. The forecasts of twenty years ago are looking fairly silly today: the fact that they were produced
largely by journalists for the benefit of boardrooms of accountants and MBAs may have something to do
with this, but the question of why so little has been accomplished remains.
The problems were, of course, harder than they looked to naive optimists. Robots have been built that
can move around on wheels or legs; robots of a sort are used on production lines for routine tasks such as
welding. But a robot that can clear the table, throw the eggshells in with the garbage and wash up the
dishes, instead of washing up the eggshells and throwing the dishes in the garbage, is still some distance
off.
Pattern Classification, more often called Pattern Recognition, is the primary bottleneck in the task of
automation. Robots without sensors have their uses, but they are limited and dangerous. In fact one might
plausibly argue that a robot without sensors isn't a real robot at all, whatever the hardware manufacturers
may say. But equipping a robot with vision is easy only at the hardware level. It is neither expensive nor
technically difficult to connect a camera and frame grabber board to a computer, the robot's `brain'. The
problem is with the software, or more exactly with the algorithms which have to decide what the robot is
looking at; the input is an array of pixels (coloured dots), and the software has to decide whether this is an
image of an eggshell or a teacup. A task which human beings can master by age eight, when they decode
the firing of the different light receptors in the retina of the eye, is computationally very difficult, and
we have only the crudest ideas of how it is done. At the hardware level there are marked similarities
between the eye and a camera (although there are differences too). At the algorithmic level, we have only
a shallow understanding of the issues.
Human beings are very good at learning a large amount of information about the universe and how it can
be treated; transferring this information to a program tends to be slow if not impossible.
This has been apparent for some time, and a great deal of effort has been put into research into practical
methods of getting robots to recognise things in images and sounds. The Centre for Intelligent
Information Processing Systems (CIIPS), of the University of Western Australia, has been working in the
area for some years now. We have been particularly concerned with neural nets and applications to
pattern recognition in speech and vision, because adaptive or learning methods are clearly of great
potential value. The present book has been used as a postgraduate textbook at CIIPS for a Master's level
course in Pattern Recognition. The contents of the book are therefore oriented largely to image and to
some extent speech pattern recognition, with some concentration on neural net methods.
Students who did the course for which this book was originally written also completed units in
Automatic Speech Recognition Algorithms, Engineering Mathematics (covering elements of Information
Theory, Coding Theory and Linear and Multilinear algebra), Artificial Neural Nets, Image Processing,

Sensors and Instrumentation and Adaptive Filtering. There is some overlap in the material of this book
and several of the other courses, but it has been kept to a minimum. Examination for the Pattern
Recognition course consisted of a sequence of four micro-projects which together made up one
mini-project.
Since the students for whom this book was written had a variety of backgrounds, it is intended to be
accessible. Since the major obstructions to further progress seem to be fundamental, it seems pointless to
try to produce a handbook of methods without analysis. Engineering works well when it is founded on
some well understood scientific basis, and it turns into alchemy and witchcraft when this is not the case.
The situation at present in respect of our scientific basis is that it is, like the curate's egg, good in parts.
We are solidly grounded at the hardware level. On the other hand, the software tools for encoding
algorithms (C, C++, MatLab) are fairly primitive, and our grasp of what algorithms to use is negligible. I
have tried therefore to focus on the ideas and the (limited) extent to which they work, since progress is
likely to require new ideas, which in turn requires us to have a fair grasp of what the old ideas are. The
belief that engineers as a class are not intelligent enough to grasp any ideas at all, and must be trained to
jump through hoops, although common among mathematicians, is not one which attracts my sympathy.
Instead of exposing the fundamental ideas in algebra (which in these degenerate days is less intelligible
than Latin) I therefore try to make them plain in English.
There is a risk in this; the ideas of science or engineering are quite different from those of philosophy (as
practised in these degenerate days) or literary criticism (ditto). I don't mean they are about different
things, they are different in kind. Newton wrote `Hypotheses non fingo', which literally translates as `I do
not make hypotheses', which is of course quite untrue; he made up some spectacularly successful
hypotheses, such as universal gravitation. The difference between the two statements is partly in the
hypotheses and partly in the fingo. Newton's `hypotheses' could be tested by observation or calculation,
whereas the explanations of, say, optics, given in Lucretius' De Rerum Natura were recognisably
`philosophical' in the sense that they resembled the writings of many contemporary philosophers and
literary critics. They may persuade, they may give the sensation of profound insight, but they do not
reduce to some essentially prosaic routine for determining if they are actually true, or at least useful.
Newton's did. This was one of the great philosophical advances made by Newton, and it has been
underestimated by philosophers since.
The reader should therefore approach the discussion about the underlying ideas with the attitude
of irreverence and disrespect that most engineers, quite properly, bring to non-technical prose.
He should ask: what procedures does this lead to, and how may they be tested? We deal with
high-level abstractions, but they are aimed always at reducing our understanding of something
prodigiously complicated to something simple.
It is necessary to make some assumptions about the reader and only fair to say what they are.
I assume, first, that the reader has a tolerably good grasp of Linear Algebra concepts. The
concepts are more important than the techniques of matrix manipulation, because there are
excellent packages which can do the calculations if you know what to compute. There is a
splendid book on Linear Algebra available from the publisher HeavenForBooks.com
I assume, second, a moderate familiarity with elementary ideas of Statistics, and also of
contemporary Mathematical notation such as any Engineer or Scientist will have encountered in
a modern undergraduate course. I found it necessary in this book to deal with underlying ideas
of Statistics which are seldom mentioned in undergraduate courses.
I assume, finally, the kind of general exposure to computing terminology familiar to anyone
who can read, say, Byte magazine, and also that the reader can program in C or some similar
language.
I do not assume the reader is of the male sex. I usually use the pronoun `he' in referring to the
reader because it saves a letter and is the convention for the generic case. The proposition that
this will depress some women readers to the point where they will give up reading and go off
and become subservient housewives does not strike me as sufficiently plausible to be worth
considering further.
This is intended to be a happy, friendly book. It is written in an informal, one might almost say
breezy, manner, which might irritate the humourless and those possessed of a conviction that
intellectual respectability entails stuffiness. I used to believe that all academic books on difficult
subjects were obliged for some mysterious reason to be oppressive, but a survey of the better
writers of the past has shown me that this is in fact a contemporary habit and in my view a bad
one. I have therefore chosen to abandon a convention which must drive intelligent people away
from Science and Engineering in large numbers.

The book has jokes, opinionated remarks and pungent value judgments in it, which might serve
to entertain readers and keep them on their toes, so to speak. They may also irritate a few who
believe that the pretence that the writer has no opinions should be maintained even at the cost of
making the book boring. What this convention usually accomplishes is a sort of bland porridge
which discourages critical thought about fundamental assumptions, and thought about
fundamental assumptions is precisely what this area badly needs.
So I make no apology for the occasional provocative judgement; argue with me if you disagree. It is
quite easy to do that via the net, and since I enjoy arguing (it is a pleasant game), most of my
provocations are deliberate. Disagreeing with people in an amiable, friendly way, and learning something
about why people feel the way they do, is an important part of an education; merely learning the correct
things to say doesn't get you very far in Mathematics, Science or Engineering. Cultured men or women
should be able to dissent with poise, to refute the argument without losing the friend.
The judgements are, of course, my own; CIIPS and the Mathematics Department and I are not
responsible for each other. Nor is it to be expected that the University of Western Australia should ensure
that my views are politically correct. If it did that, it wouldn't be a university. In a good university, it is a
case of Quot homines, tot sententiae: there are as many opinions as people. Sometimes more!
I am most grateful to my colleagues and students at the Centre for assistance in many forms; I have
shamelessly borrowed their work as examples of the principles discussed herein. I must mention Dr.
Chris deSilva, with whom I have worked over many years; Dr. Gek Lim, whose energy and enthusiasm
for Quadratic Neural Nets has enabled them to become demonstrably useful; and Professor Yianni
Attikiouzel, director of CIIPS, without whom neither this book nor the course would have come into
existence.

Contents

● Basic Concepts
  ❍ Measurement and Representation
    ■ From objects to points in space
    ■ Telling the guys from the gals
    ■ Paradigms
  ❍ Decisions, decisions
    ■ Metric Methods
    ■ Neural Net Methods (Old Style)
    ■ Statistical Methods
      ■ Parametric
      ■ Non-parametric
      ■ CART et al
  ❍ Clustering: supervised v unsupervised learning
  ❍ Dynamic Patterns
  ❍ Structured Patterns
  ❍ Alternative Representations
    ■ Strings, propositions, predicates and logic
    ■ Fuzzy Thinking
    ■ Robots
  ❍ Summary of this chapter
  ❍ Exercises
  ❍ Bibliography
● Image Measurements
  ❍ Preliminaries
    ■ Image File Formats
  ❍ Generalities
  ❍ Image segmentation: finding the objects
    ■ Mathematical Morphology
    ■ Little Boxes
    ■ Border Tracing
    ■ Conclusions on Segmentation
  ❍ Measurement Principles
    ■ Issues and methods
    ■ Invariance in practice
  ❍ Measurement practice
    ■ Quick and Dumb
    ■ Scanline intersections and weights
    ■ Moments
    ■ Zernike moments and the FFT
      ■ Historical Note
    ■ Masks and templates
    ■ Invariants
    ■ Chaincoding
    ■ Simplifications and Complications
  ❍ Syntactic Methods
  ❍ Summary of OCR Measurement Methods
  ❍ Other Kinds of Binary Image
  ❍ Greyscale images of characters
    ■ Segmentation: Edge Detection
  ❍ Greyscale Images in general
    ■ Segmentation
    ■ Measuring Greyscale Images
    ■ Quantisation
    ■ Textures
  ❍ Colour Images
    ■ Generalities
    ■ Quantisation
    ■ Edge detection
    ■ Markov Random Fields
    ■ Measurements
  ❍ Spot counting
  ❍ IR and acoustic Images
  ❍ Quasi-Images
  ❍ Dynamic Images
  ❍ Summary of Chapter Two
  ❍ Exercises
  ❍ Bibliography
● Statistical Ideas
  ❍ History, and Deep Philosophical Stuff
    ■ The Origins of Probability: random variables
    ■ Histograms and Probability Density Functions
    ■ Models and Probabilistic Models
  ❍ Probabilistic Models as Data Compression Schemes
    ■ Models and Data: Some models are better than others
  ❍ Maximum Likelihood Models
    ■ Where do Models come from?
  ❍ Bayesian Methods
    ■ Bayes' Theorem
    ■ Bayesian Statistics
    ■ Subjective Bayesians
  ❍ Minimum Description Length Models
    ■ Codes: Information theoretic preliminaries
    ■ Compression for coin models
    ■ Compression for pdfs
    ■ Summary of Rissanen Complexity
  ❍ Summary of the chapter
  ❍ Exercises
  ❍ Bibliography
● Decisions: Statistical methods
  ❍ The view into $\mathbb{R}^n$
  ❍ Computing PDFs: Gaussians
    ■ One Gaussian per cluster
      ■ Dimension 2
    ■ Lots of Gaussians: The EM algorithm
      ■ The EM algorithm for Gaussian Mixture Modelling
    ■ Other Possibilities
  ❍ Bayesian Decision
    ■ Cost Functions
    ■ Non-parametric Bayes Decisions
    ■ Other Metrics
  ❍ How many things in the mix?
    ■ Overhead
    ■ Example
    ■ The Akaike Information Criterion
    ■ Problems with EM
  ❍ Summary of Chapter
  ❍ Exercises
  ❍ Bibliography
● Decisions: Neural Nets (Old Style)
  ❍ History: the good old days
    ■ The Dawn of Neural Nets
    ■ The death of Neural Nets
    ■ The Rebirth of Neural Nets
    ■ The End of History
  ❍ Training the Perceptron
    ■ The Perceptron Training Rule
  ❍ Committees
    ■ Committees and XOR
    ■ Training Committees
    ■ Capacities of Committees: generalised XOR
    ■ Four Layer Nets
    ■ Building up functions
  ❍ Smooth thresholding functions
    ■ Back-Propagation
    ■ Mysteries of Functional Analysis
    ■ Committees vs Back-Propagation
  ❍ Compression: is the model worth the computation?
  ❍ Other types of (Classical) net
    ■ General Issues
    ■ The Kohonen Net
    ■ Probabilistic Neural Nets
    ■ Hopfield Networks
      ■ Introduction
      ■ Network Characteristics
      ■ Network Operation
      ■ The Network Equations
      ■ Theory of the Network
      ■ Applications
    ■ The Boltzmann Machine
      ■ Introduction
      ■ Simulated Annealing
      ■ Network Characteristics
      ■ Network Operation
      ■ Theory of the Network
      ■ Applications
    ■ Bidirectional Associative Memory
      ■ Introduction
      ■ Network Characteristics
      ■ Network Operation
      ■ The Network Equations
      ■ Theory of the Network
      ■ Applications
    ■ ART
      ■ Introduction
      ■ Network Characteristics
      ■ Network Operation
      ■ Theory of the Network
      ■ Applications
    ■ Neocognitron
      ■ Introduction
      ■ Network Structure
      ■ The Network Equations
      ■ Training the Network
      ■ Applications
    ■ References
    ■ Quadratic Neural Nets: issues
  ❍ Summary of Chapter Five
  ❍ Exercises
  ❍ Bibliography
● Continuous Dynamic Patterns
  ❍ Automatic Speech Recognition
    ■ Talking into a microphone
    ■ Traditional methods: VQ and HMM
      ■ The Baum-Welch and Viterbi Algorithms for Hidden Markov Models
    ■ Network Topology and Initialisation
    ■ Invariance
    ■ Other HMM applications
    ■ Connected and Continuous Speech
  ❍ Filters
    ■ Linear Systems
    ■ Moving Average Filters
    ■ Autoregressive Time Series
    ■ Linear Predictive Coding or ARMA modelling
    ■ Into $\mathbb{R}^n$
    ■ States
    ■ Wiener Filters
    ■ Adaptive Filters, Kalman Filters
  ❍ Fundamentals of dynamic patterns
  ❍ Exercises
  ❍ Bibliography
● Discrete Dynamic Patterns
  ❍ Alphabets, Languages and Grammars
    ■ Definitions and Examples
    ■ ReWrite Grammars
    ■ Grammatical Inference
    ■ Inference of ReWrite grammars
  ❍ Streams, predictors and smoothers
  ❍ Chunking by Entropy
  ❍ Stochastic Equivalence
  ❍ Quasi-Linguistic Streams
  ❍ Graphs and Diagram Grammars
  ❍ Exercises
  ❍ Bibliography
● Syntactic Pattern Recognition
  ❍ Precursors
  ❍ Linear Images
  ❍ Curved Elements
  ❍ Parameter Regimes
  ❍ Invariance: Classifying Transformations
  ❍ Intrinsic and Extrinsic Chunking (Binding)
  ❍ Backtrack
  ❍ Occlusion and other metric matters
  ❍ Neural Modelling
    ■ Self-Tuning Neurons
    ■ Geometry and Dynamics
    ■ Extensions to Higher Order Statistics
    ■ Layering
  ❍ Summary of Chapter
  ❍ Exercises
  ❍ Bibliography
● About this document


Basic Concepts
In this chapter I survey the scene in a leisurely and informal way, outlining ideas and avoiding the
computational and the nitty-gritty until such time as they can fall into place. We are concerned in chapter
one with the overview from a great height, the synoptic perspective, the strategic issues. In other words,
this is going to be a superficial introduction; it will be sketchy, chatty and may drive the reader who is
expecting detail into frenzies of frustration. So put yourself in philosophical mode, undo your collar,
loosen your tie, take off your shoes and put your feet up. Pour yourself a drink and get ready to think in
airy generalities. The details come later.

● Measurement and Representation
  ❍ From objects to points in space
  ❍ Telling the guys from the gals
  ❍ Paradigms
● Decisions, decisions
  ❍ Metric Methods
  ❍ Neural Net Methods (Old Style)
  ❍ Statistical Methods
    ■ Parametric
    ■ Non-parametric
    ■ CART et al
● Clustering: supervised v unsupervised learning
● Dynamic Patterns
● Structured Patterns
● Alternative Representations
  ❍ Strings, propositions, predicates and logic
  ❍ Fuzzy Thinking
  ❍ Robots
● Summary of this chapter
● Exercises
● Bibliography

Measurement and Representation

● From objects to points in space
● Telling the guys from the gals
● Paradigms

From objects to points in space
If you point a video camera at the world, you get back an array of pixels, each with a particular gray level
or colour. You might get a square array of 512 by 512 such pixels, and each pixel value would, on a gray
scale, perhaps, be represented by a number between 0 (black) and 255 (white). If the image is in colour,
there will be three such numbers for each of the pixels, say the intensity of red, blue and green at the
pixel location. The numbers may change from system to system and from country to country, but you can
expect to find, in each case, that the image may be described by an array of `real' numbers, or in
mathematical terminology, a vector in $\mathbb{R}^n$ for some positive integer n. The number n, the length of the
vector, can therefore be of the order of a million. To describe the image of the screen on which I am
writing this text, which has 1024 by 1280 pixels and a lot of possible colours, I would need 3,932,160
numbers. This is rather more than the ordinary television screen, but about what High Definition
Television will require.
An image on my monitor can, therefore, be coded as a vector in $\mathbb{R}^{3932160}$. A sequence of images
such as would occur in a sixty-second commercial sequenced at 25 frames a second, is a trajectory in this
space. I don't say this is the best way to think of things; in fact it is a truly awful way (for reasons we
shall come to), but it's one way.
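
To make this coding concrete, here is a minimal C sketch (my own illustration, not from the book; the zero pixel values stand in for whatever a real frame grabber would deliver) which lays a 1024 by 1280 colour image out as a single vector of length 3,932,160:

    #include <stdio.h>
    #include <stdlib.h>

    #define ROWS     1024
    #define COLS     1280
    #define CHANNELS 3    /* red, green and blue intensities per pixel */

    int main(void)
    {
        /* n = 1024 * 1280 * 3 = 3,932,160: the dimension of the space. */
        size_t n = (size_t)ROWS * COLS * CHANNELS;
        double *x = malloc(n * sizeof *x);   /* one point in R^n */
        if (x == NULL)
            return 1;

        /* Each coordinate of the vector is one channel of one pixel;
           the values written here are placeholders. */
        for (size_t r = 0; r < ROWS; r++)
            for (size_t c = 0; c < COLS; c++)
                for (size_t ch = 0; ch < CHANNELS; ch++)
                    x[(r * COLS + c) * CHANNELS + ch] = 0.0;

        printf("image coded as a vector in R^%zu\n", n);
        free(x);
        return 0;
    }

At 25 frames a second, the sixty-second commercial above is 1500 such vectors in succession, which is what makes it a (sampled) trajectory in this space.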
More generally, when a scientist or engineer wants to say something about a physical system, he is less
inclined to launch into a haiku or sonnet than he is to clap a set of measuring instruments on it, whether it
be an electrical circuit, a steam boiler, or the solar system.
This set of instruments will usually produce a collection of numbers. In other words, the physical system
gets coded as a vector in $\mathbb{R}^n$ for some positive integer n. The nature of the coding is clearly important,
but once it has been set up, it doesn't change. By contrast, the measurements often do; we refer to this as
the system changing in time. In real life, real numbers do not actually occur: decimal strings come in
some limited length; numbers are specified to some precision. Since this precision can change, it is
inconvenient to bother about what it is in some particular case, and we talk rather sloppily of vectors of
real numbers.
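
As a toy illustration (again mine, not the author's; the three instruments and their readings are invented), a fixed set of instruments turns each moment of a system's history into a point in $\mathbb{R}^3$, and the system changing in time into a trajectory of such points:

    #include <stdio.h>

    /* One snapshot of a physical system: a point in R^3.
       The choice of instruments here is hypothetical. */
    typedef struct {
        double voltage;      /* volts */
        double temperature;  /* kelvin */
        double pressure;     /* pascals */
    } state;

    int main(void)
    {
        /* Five readings taken over time; the values are made up.
           "The system changing in time" is just this sequence. */
        state trajectory[] = {
            {11.9, 300.1, 101325.0},
            {12.0, 300.4, 101310.0},
            {12.1, 300.8, 101298.0},
            {12.0, 301.1, 101290.0},
            {11.8, 301.5, 101287.0},
        };
        int steps = (int)(sizeof trajectory / sizeof trajectory[0]);

        for (int t = 0; t < steps; t++)
            printf("t=%d: (%.1f, %.1f, %.1f)\n", t,
                   trajectory[t].voltage,
                   trajectory[t].temperature,
                   trajectory[t].pressure);
        return 0;
    }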
I have known people who have claimed that $\mathbb{R}^n$ is quite useful when n is 1, 2 or 3, but that larger values
were invented by Mathematicians only for the purpose of terrorising honest engineers and physicists, and
can safely be ignored. Follow this advice at your peril.
It is worth pointing out, perhaps, that the representation of the states of a physical system as points in $\mathbb{R}^n$