Autonomous
Mobile Robots
Sensing, Control, Decision
Making and Applications
CONTROL ENGINEERING
A Series of Reference Books and Textbooks
Editor
FRANK L. LEWIS, Ph.D.
Professor
Applied Control Engineering
University of Manchester Institute of Science and Technology
Manchester, United Kingdom
1. Nonlinear Control of Electric Machinery,
Darren M. Dawson, Jun Hu,
and Timothy C. Burg
2. Computational Intelligence in Control Engineering,
Robert E. King
3. Quantitative Feedback Theory: Fundamentals and Applications,
Constantine H. Houpis and Steven J. Rasmussen
4. Self-Learning Control of Finite Markov Chains,
A. S. Poznyak, K. Najim,
and E. Gómez-Ramírez
5. Robust Control and Filtering for Time-Delay Systems,
Magdi S. Mahmoud
6. Classical Feedback Control: With MATLAB®,
Boris J. Lurie and Paul J. Enright
7. Optimal Control of Singularly Perturbed Linear Systems
and Applications: High-Accuracy Techniques,
Zoran Gajić
and Myo-Taeg Lim
8. Engineering System Dynamics: A Unified Graph-Centered Approach,
Forbes T. Brown
9. Advanced Process Identification and Control,
Enso Ikonen
and Kaddour Najim
10. Modern Control Engineering,
P. N. Paraskevopoulos
11. Sliding Mode Control in Engineering,
edited by Wilfrid Perruquetti
and Jean-Pierre Barbot
12. Actuator Saturation Control,
edited by Vikram Kapila
and Karolos M. Grigoriadis
13. Nonlinear Control Systems,
Zoran Vukić, Ljubomir Kuljača, Dali Donlagič,
and Sejid Tesnjak
14. Linear Control System Analysis & Design: Fifth Edition,
John D’Azzo,
Constantine H. Houpis and Stuart Sheldon
15. Robot Manipulator Control: Theory & Practice, Second Edition,
Frank L. Lewis, Darren M. Dawson, and Chaouki Abdallah
16. Robust Control System Design: Advanced State Space Techniques,
Second Edition,
Chia-Chi Tsui
17. Differentially Flat Systems,
Hebertt Sira-Ramirez
and Sunil Kumar Agrawal
18. Chaos in Automatic Control,
edited by Wilfrid Perruquetti
and Jean-Pierre Barbot
19. Fuzzy Controller Design: Theory and Applications,
Zdenko Kovacic
and Stjepan Bogdan
20. Quantitative Feedback Theory: Fundamentals and Applications,
Second Edition,
Constantine H. Houpis, Steven J. Rasmussen,
and Mario Garcia-Sanz
21. Neural Network Control of Nonlinear Discrete-Time Systems,
Jagannathan Sarangapani
22. Autonomous Mobile Robots: Sensing, Control, Decision Making
and Applications,
edited by Shuzhi Sam Ge and Frank L. Lewis
Shuzhi Sam Ge
The National University of Singapore
Frank L. Lewis
Automation and Robotics Research Institute
The University of Texas at Arlington
CRC is an imprint of the Taylor & Francis Group,
an informa business
Boca Raton London New York
Autonomous
Mobile Robots
Sensing, Control, Decision
Making and Applications
MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not
warrant the accuracy of the text or exercises in this book. This book’s use or discussion of MATLAB® software
or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical
approach or particular use of the MATLAB® software.
Published in 2006 by
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2006 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group
No claim to original U.S. Government works
Printed in the United States of America on acid-free paper
10 9 8 7 6 5 4 3 2 1
International Standard Book Number-10: 0-8493-3748-8 (Hardcover)
International Standard Book Number-13: 978-0-8493-3748-2 (Hardcover)
This book contains information obtained from authentic and highly regarded sources. Reprinted material is
quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts
have been made to publish reliable data and information, but the author and the publisher cannot assume
responsibility for the validity of all materials or for the consequences of their use.
No part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic,
mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and
recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com
or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive,
Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration
for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate
system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only
for identification and explanation without intent to infringe.
Library of Congress Cataloging-in-Publication Data
Catalog record is available from the Library of Congress
Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com
and the CRC Press Web site at http://www.crcpress.com
Taylor & Francis Group is the Academic Division of Informa plc.
Preface
The creation of a truly autonomous and intelligent system — one that can sense,
learn from, and interact with its environment, one that can integrate seamlessly
into the day-to-day lives of humans — has ever been the motivating factor
behind the huge body of work on artificial intelligence, control theory and
robotics, autonomous (land, sea, and air) vehicles, and numerous other
disciplines. The technology involved is highly complex and multidisciplinary, posing
immense challenges for researchers at both the module and system integra-
tion levels. Despite the innumerable hurdles, the research community has, as a
whole, made great progress in recent years. This is evidenced by technological
leaps and innovations in the areas of sensing and sensor fusion, modeling and
control, map building and path planning, artificial intelligence and decision
making, and system architecture design, spurred on by advances in related
areas of communications, machine processing, networking, and information
technology.
Autonomous systems are gradually becoming a part of our way of life,
whether we consciously perceive it or not. The increased use of intelligent
robotic systems in current indoor and outdoor applications bears testimony
to the efforts made by researchers on all fronts. Mobile systems have greater
autonomy than before, and new applications abound — ranging from factory
transport systems, airport transport systems, and road/vehicular systems to
military applications, automated patrol systems, homeland security surveillance,
and rescue operations. While most conventional autonomous systems
are self-contained in the sense that all their sensors, actuators, and computers
are on board, it is envisioned that more and more will evolve to become open
networked systems with distributed processing power, sensors (e.g., GPS, cameras,
microphones, and landmarks), and actuators.
It is generally agreed that an autonomous system consists primarily of the
following four distinct yet interconnected modules:
(i) Sensors and Sensor Fusion
(ii) Modeling and Control
(iii) Map Building and Path Planning
(iv) Decision Making and Autonomy
These modules are integrated and influenced by the system architecture design
for different applications.
This edited book provides, for the first time, a comprehensive treatment
of autonomous mobile systems, ranging from related fundamental technical
issues to practical system integration and applications. The chapters are writ-
ten by some of the leading researchers and practitioners working in this field
today. Readers will be presented with a complete picture of autonomous mobile
systems at the systems level, and will also gain a better understanding of the
technological and theoretical aspects involved within each module that com-
poses the overall system. Five distinct parts of the book, each consisting of
several chapters, emphasize the different aspects of autonomous mobile sys-
tems, starting from sensors and control, and gradually moving up the cognitive
ladder to planning and decision making, finally ending with the integration of
the four modules in application case studies of autonomous systems.
The first part of the book is dedicated to sensors and sensor fusion. The four
chapters treat in detail the operation and uses of various sensors that are crucial
for the operation of autonomous systems. Sensors provide robots with the
capability to perceive the world, and effective utilization is of utmost importance.
The chapters also consider various state-of-the-art techniques for the fusion
and utilization of various sensing information for feature detection and position
estimation. Vision sensors, RADAR, GPS and INS, and landmarks are discussed
in detail in Chapters 1 to 4, respectively.

Modeling and control issues concerning nonholonomic systems are discussed
in the second part of the book. Real-world systems seldom present themselves
in the form amenable to analysis as holonomic systems, and the importance of
nonholonomic modeling and control is evident. The four chapters of this part,
Chapters 5 to 8, thus present novel contributions to the control of these highly
complicated systems, focusing on discontinuous control, unified neural fuzzy
control, adaptive control with actuator dynamics, and the control of car-like
vehicles for vehicle tracking maneuvers, respectively.

The third part of the book covers the map building and path planning aspects
of autonomous systems. This builds on technologies in sensing and control to
further improve the intelligence and autonomy of mobile robots. Chapter 9
discusses the specifics of building an accurate map of the environment, using
either single or multiple robots, with which localization and motion planning
can take place. Probabilistic motion planning as a robust and efficient planning
scheme is examined in Chapter 10. Action coordination and formation control
of multiple robots are investigated in Chapter 11.

Decision making and autonomy, the highest levels in the hierarchy of
abstraction, are examined in detail in the fourth part of the book. The three
chapters in this part treat in detail the issues of representing knowledge, high
level planning, and coordination mechanisms that together define the cognitive
capabilities of autonomous systems. These issues are crucial for the development
of intelligent mobile systems that are able to reason and manipulate
available information. Specifically, Chapters 12 to 14 present topics pertaining
to knowledge representation and decision making, algorithms for planning
under uncertainties, and the behavior-based coordination of multiple robots.

In the final part of the book, we present a collection of chapters that deal
with the system integration and engineering aspects of large-scale autonomous
systems. These are usually considered as necessary steps in making new
technologies operational and are relatively neglected in the academic community.
However, there is no doubt that system integration plays a vital role
in the successful development and deployment of autonomous mobile systems.
Chapters 15 and 16 examine the issues involved in the design of autonomous
commercial robots and automotive systems, respectively. Chapter 17 presents a
hierarchical system architecture that encompasses and links the various (higher
and lower level) components to form an intelligent, complex system.

We sincerely hope that this book will provide the reader with a cohesive
picture of the diverse, yet intimately related, issues involved in bringing about
truly intelligent autonomous robots. Although the treatment of the topics is
by no means exhaustive, we hope to give the readers a broad-enough view of
the various aspects involved in the development of autonomous systems. The
authors have, however, provided a splendid list of references at the end of each
chapter, and interested readers are encouraged to refer to these references for
more information. This book represents the amalgamation of the truly excellent
work and effort of all the contributing authors, and could not have come to
fruition without their contributions. Finally, we are also immensely grateful
to Marsha Pronin, Michael Slaughter, and all others at CRC Press (Taylor &
Francis Group) for their efforts in making this project a success.
Editors
Shuzhi Sam Ge, IEEE Fellow, is a full professor with the Electrical and
Computer Engineering Department at the National University of Singapore.
He earned the B.Sc. degree from the Beijing University of Aeronautics and
Astronautics (BUAA) in 1986, and the Ph.D. degree and the Diploma of
Imperial College (DIC) from the Imperial College of Science, Technology and
Medicine in 1993. His current research interests are in the control of nonlinear
systems, hybrid systems, neural/fuzzy systems, robotics, sensor fusion, and
real-time implementation. He has authored or co-authored over 200 international
journal and conference papers and 3 monographs, and has co-invented 3 patents.
He was the recipient of a number of prestigious research awards, and has been
serving as the editor and associate editor of a number of flagship international
journals. He is also serving as a technical consultant for the local industry.
Frank L. Lewis, IEEE Fellow, PE Texas, is a distinguished scholar professor
and Moncrief-O’Donnell chair at the University of Texas at Arlington. He
earned the B.Sc. degree in physics and electrical engineering and the M.S.E.E.
at Rice University, the M.S. in Aeronautical Engineering from the University
of West Florida, and the Ph.D. at the Georgia Institute of Technology. He works
in feedback control and intelligent systems. He is the author of 4 U.S. pat-
ents, 160 journal papers, 240 conference papers, and 9 books. He received the
Fulbright Research Award, the NSF Research Initiation Grant, and the ASEE
Terman Award. He was selected as Engineer of the Year in 1994 by the Fort
Worth IEEE Section and is listed in the Fort Worth Business Press Top 200
Leaders in Manufacturing. He was appointed to the NAE Committee on Space
Station in 1995. He is an elected guest consulting professor at both Shanghai
Jiao Tong University and South China University of Technology.
Contributors
Martin Adams
School of Electrical and Electronic
Engineering
Nanyang Technological University
Singapore
James S. Albus
National Institute of Standards
and Technology
Gaithersburg, Maryland
Alessandro Astolfi
Electrical and Electronics
Engineering Department
Imperial College London
London, UK
Stephen Balakirsky
Intelligent Systems Division
National Institute of Standards
and Technology
Gaithersburg, Maryland
Anthony Barbera
National Institute of Standards
and Technology
Gaithersburg, Maryland
José A. Castellanos
Instituto de Investigación en
Ingeniería de Aragón
Universidad de Zaragoza
Zaragoza, Spain
Luiz Chaimowicz
Computer Science Department
Federal University of Minas
Gerais, Brazil
Jingrong Cheng
Department of Electrical Engineering
University of California
Riverside, California
Peng Cheng
Department of Computer Science
University of Illinois
Urbana-Champaign, Illinois
Sesh Commuri
School of Electrical & Computer
Engineering
University of Oklahoma
Norman, Oklahoma
Jay A. Farrell
Department of Electrical Engineering
University of California
Riverside, California
Rafael Fierro
MARHES Laboratory
School of Electrical & Computer
Engineering
Oklahoma State University
Stillwater, Oklahoma
Shuzhi Sam Ge
Department of Electrical and
Computer Engineering
National University of Singapore
Singapore
Héctor H. González-Baños
Honda Research Institute USA, Inc.
Mountain View, California
Fan Hong
Department of Electrical and
Computer Engineering
National University of Singapore
Singapore
David Hsu
Department of Computer Science
National University of Singapore
Singapore
Huosheng Hu
Department of Computer Science
University of Essex
Colchester, UK
Chris Jones
Computer Science Department
University of Southern California
Los Angeles, California
Ebi Jose
School of Electrical and Electronic
Engineering
Nanyang Technological University
Singapore
Vijay Kumar
Department of Mechanical
Engineering and Applied
Mechanics
University of Pennsylvania
Philadelphia, Pennsylvania
Jean-Claude Latombe
Department of Computer Science
Stanford University
Palo Alto, California
Steven M. LaValle
Department of Computer Science
University of Illinois
Urbana-Champaign, Illinois
Tong Heng Lee
Department of Electrical and
Computer Engineering
National University of Singapore
Singapore
Frank L. Lewis
Automation and Robotics Research
Institute
University of Texas
Arlington, Texas
Yu Lu
Department of Electrical Engineering
University of California
Riverside, California
Maja J. Matarić
Computer Science Department
University of Southern California
Los Angeles, California
Elena Messina
Intelligent Systems Division
National Institute of Standards and
Technology
Gaithersburg, Maryland
Mario E. Munich
Evolution Robotics Inc.
Pasadena, California
José Neira
Instituto de Investigación en
Ingeniería de Aragón
Universidad de Zaragoza
Zaragoza, Spain
Jason M. O’Kane
Department of Computer Science
University of Illinois
Urbana-Champaign, Illinois
James P. Ostrowski
Evolution Robotics Inc.
Pasadena, California
Michel R. Parent
IMARA Group
INRIA-Rocquencourt
Le Chesnay, France
Stéphane R. Petti
Aisin AW Europe
Braine-L’Alleud, Belgium
Minhtuan Pham
School of Electrical and Electronic
Engineering
Nanyang Technological University
Singapore
Paolo Pirjanian
Evolution Robotics Inc.
Pasadena, California
Julian Ryde
Department of Computer Science
University of Essex
Colchester, UK
Andrew Shacklock
Singapore Institute of Manufacturing
Technology
Singapore
Jiali Shen
Department of Computer Science
University of Essex
Colchester, UK
Chun-Yi Su
Department of Mechanical
Engineering
Concordia University
Montreal, Quebec, Canada
Juan D. Tardós
Instituto de Investigación en
Ingeniería de Aragón
Universidad de Zaragoza
Zaragoza, Spain
Elmer R. Thomas
Department of Electrical Engineering
University of California
Riverside, California
Benjamín Tovar
Department of Computer Science
University of Illinois
Urbana-Champaign, Illinois
Danwei Wang
School of Electrical and Electronic
Engineering
Nanyang Technological University
Singapore
Han Wang
School of Electrical and Electronic
Engineering
Nanyang Technological University
Singapore
Zhuping Wang
Department of Electrical and
Computer Engineering
National University of Singapore
Singapore
Jian Xu
Singapore Institute of
Manufacturing Technology
Singapore
Abstract
As technology advances, it has been envisioned that in the very near future,
robotic systems will become part and parcel of our everyday lives. Even at
the current stage of development, semi-autonomous or fully automated robots
are already indispensable in a staggering number of applications. To bring
forth a generation of truly autonomous and intelligent robotic systems that will
meld effortlessly into human society involves research and development on
several levels, from robot perception, to control, to abstract reasoning.
This book provides, for the first time, a comprehensive treatment
of autonomous mobile systems, ranging from fundamental technical issues to
practical system integration and applications. The chapters are written by some
of the leading researchers and practitioners working in this field today. Readers
will be presented with a coherent picture of autonomous mobile systems at the
systems level, and will also gain a better understanding of the technological
and theoretical aspects involved within each module that composes the overall
system. Five distinct parts of the book, each consisting of several chapters,
emphasize the different aspects of autonomous mobile systems, starting from
sensors and control, and gradually moving up the cognitive ladder to planning
and decision making, finally ending with the integration of the four modules in
application case studies of autonomous systems.
This book is primarily intended for researchers, engineers, and graduate
students involved in all aspects of autonomous mobile robot systems design
and development. Undergraduate students may also find the book useful, as a
complementary reading, in providing a general outlook of the various issues
and levels involved in autonomous robotic system design.
Contents
I Sensors and Sensor Fusion 1
Chapter 1 Visual Guidance for Autonomous Vehicles:
Capability and Challenges 5
Andrew Shacklock, Jian Xu, and Han Wang
Chapter 2 Millimeter Wave RADAR Power-Range Spectra
Interpretation for Multiple Feature Detection 41
Martin Adams and Ebi Jose
Chapter 3 Data Fusion via Kalman Filter: GPS and INS 99
Jingrong Cheng, Yu Lu, Elmer R. Thomas, and
Jay A. Farrell
Chapter 4 Landmarks and Triangulation in Navigation 149
Huosheng Hu, Julian Ryde, and Jiali Shen
II Modeling and Control 187
Chapter 5 Stabilization of Nonholonomic Systems 191
Alessandro Astolfi
Chapter 6 Adaptive Neural-Fuzzy Control of Nonholonomic
Mobile Robots 229
Fan Hong, Shuzhi Sam Ge, Frank L. Lewis, and
Tong Heng Lee
Chapter 7 Adaptive Control of Mobile Robots Including
Actuator Dynamics 267
Zhuping Wang, Chun-Yi Su, and Shuzhi Sam Ge
Chapter 8 Unified Control Design for Autonomous
Car-Like Vehicle Tracking Maneuvers 295
Danwei Wang and Minhtuan Pham
III Map Building and Path Planning 331
Chapter 9 Map Building and SLAM Algorithms 335
José A. Castellanos, José Neira, and Juan D. Tardós
Chapter 10 Motion Planning: Recent Developments 373
Héctor H. González-Baños, David Hsu, and
Jean-Claude Latombe
Chapter 11 Multi-Robot Cooperation 417
Rafael Fierro, Luiz Chaimowicz, and Vijay Kumar
IV Decision Making and Autonomy 461
Chapter 12 Knowledge Representation and Decision
Making for Mobile Robots 465
Elena Messina and Stephen Balakirsky
Chapter 13 Algorithms for Planning under Uncertainty in
Prediction and Sensing 501
Jason M. O’Kane, Benjamín Tovar, Peng Cheng, and
Steven M. LaValle
Chapter 14 Behavior-Based Coordination in Multi-Robot Systems 549
Chris Jones and Maja J. Matarić
V System Integration and Applications 571
Chapter 15 Integration for Complex Consumer Robotic
Systems: Case Studies and Analysis 573
Mario E. Munich, James P. Ostrowski, and
Paolo Pirjanian
Chapter 16 Automotive Systems/Robotic Vehicles 613
Michel R. Parent and Stéphane R. Petti
Chapter 17 Intelligent Systems 655
Sesh Commuri, James S. Albus, and Anthony Barbera
I Sensors and Sensor Fusion
Mobile robots participate in meaningful and intelligent interactions with other
entities — inanimate objects, human users, or other robots — through sensing
and perception. Sensing capabilities are tightly linked to the ability to perceive,
without which sensor data will only be a collection of meaningless figures.
Sensors are crucial to the operation of autonomous mobile robots in unknown
and dynamic environments where it is impossible to have complete a priori
information that can be given to the robots before operation.
In biological systems, visual sensing offers a rich source of information to
individuals, which in turn use such information for navigation, deliberation,
and planning. The same may be said of autonomous mobile robotic systems,
where vision has become a standard sensory tool on robots. This is especially
so with the advancement of image processing techniques, which facilitates the
extraction of even more useful information from images captured from mounted
still or moving cameras. The first chapter of this part, therefore, focuses on
the use of visual sensors for guidance and navigation of unmanned vehicles.
This chapter starts with an analysis of the various requirements that the use of
unmanned vehicles poses to the visual guidance equipment. This is followed by
an analysis of the characteristics and limitations of visual perception hardware,
providing readers with an understanding of the physical constraints that must be
considered in the design of guidance systems. Various techniques currently in
use for road and vehicle following, and for obstacle detection are then reviewed.
With the wealth of information afforded by various visual sensors, sensor fusion
techniques play an important role in exploiting the available information to
further improve the perceptual capabilities of systems. This issue is discussed,
with examples on the fusion of image data with LADAR information. The
chapter concludes with a discussion on the open problems and challenges in
the area of visual perception.
Where visual sensing is insufficient, other sensors serve as additional
sources of information, and are equally important in improving the naviga-
tional and perceptual capabilities of autonomous robots. The use of millimeter
wave RADAR for performing feature detection and navigation is treated in
detail in the second chapter of this part. Millimeter wave RADAR is capable of
providing high-fidelity range information when vision sensors fail under poor
visibility conditions and is, therefore, a useful tool for robots to use in perceiving
their environment. The chapter first deals with the analysis and characterization
of noise affecting the measurements of millimeter wave RADAR. A method is
then proposed for the accurate prediction of range spectra. This is followed by
the description of a robust algorithm, based on target presence probability, to
improve feature detection in highly cluttered environments.
Aside from providing a robot with a view of the environment it is immersed
in, certain sensors also give the robot the ability to analyze and evaluate its
own state, namely, its position. Augmentation of such information with that
garnered from environmental perception further provides the robot with a clearer
picture of the condition of its environment and of its own role within
it. While visual perception may be used for localization, the use of internal
and external sensors, like the Inertial Navigation System (INS) and the Global
Positioning System (GPS), allows refinement of estimated values. The third
chapter of this part treats, in detail, the use of both INS and GPS for position
estimation. This chapter first provides a comprehensive review of the Extended
Kalman Filter (EKF), as well as the basics of GPS and INS. Detailed treat-
ment of the use of the EKF in fusing measurements from GPS and INS is
then provided, followed by a discussion of various approaches that have been
proposed for the fusion of GPS and INS.
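To give a concrete flavor of this kind of fusion, the following sketch implements a deliberately simplified one-dimensional linear Kalman filter (not the EKF formulation treated in Chapter 3): an INS-style prediction integrates measured acceleration and accumulates uncertainty, and an occasional GPS-style position fix corrects the accumulated drift. All numerical values (update period, noise levels) are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

# Simplified 1D GPS/INS fusion: state x = [position, velocity].
dt = 0.1                                  # INS update period (s), assumed
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity transition
B = np.array([0.5 * dt**2, dt])           # effect of measured acceleration
Q = np.diag([1e-4, 1e-3])                 # INS process noise, assumed
H = np.array([[1.0, 0.0]])                # GPS observes position only
R = np.array([[4.0]])                     # GPS variance (~2 m std), assumed

def ins_predict(x, P, accel):
    """Propagate the estimate with an INS acceleration measurement."""
    return F @ x + B * accel, F @ P @ F.T + Q

def gps_update(x, P, z):
    """Correct the drifting INS estimate with a GPS position fix."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for _ in range(10):                        # ten INS steps between GPS fixes
    x, P = ins_predict(x, P, accel=0.2)
x, P = gps_update(x, P, z=np.array([0.08]))
print(x, np.diag(P))                       # estimate pulled toward the fix
```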
In addition to internal and external explicit measurements, landmarks in the
environment may also be utilized by the robots to get a sense of where they
are. This may be done through triangulation techniques, which are described
in the final chapter of this part. Recognition of landmarks may be performed
by the visual sensors, and localization is achieved through the association of
landmarks with those in internal maps, thereby providing position estimates.
The chapter provides descriptions and experimental results of several different
techniques for landmark-based position estimation. Different landmarks are
used, ranging from laser beacons to visually distinct landmarks, to moveable
landmarks mounted on robots for multi-robot localization.
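To illustrate the triangulation idea in its simplest form, here is a hypothetical sketch (not code from the chapter): assuming the robot's heading is known, for instance from a compass, the bearings to two landmarks are available in the world frame, and the robot's position follows from intersecting the two bearing lines. A fielded system would additionally handle noisy bearings, landmark association, and the degenerate case of nearly collinear geometry.

```python
import numpy as np

def triangulate(l1, theta1, l2, theta2):
    """Position fix from world-frame bearings to two known landmarks.

    The robot lies on the ray from each landmark back along its bearing:
    p = l_i - t_i * d_i with d_i = (cos theta_i, sin theta_i), t_i > 0.
    Intersecting the two lines gives the fix.
    """
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    A = np.column_stack((d1, -d2))         # solve l1 - t1*d1 = l2 - t2*d2
    if abs(np.linalg.det(A)) < 1e-9:
        raise ValueError("bearings (nearly) parallel: no unique fix")
    t = np.linalg.solve(A, np.asarray(l1, float) - np.asarray(l2, float))
    return np.asarray(l1, float) - t[0] * d1

# Robot actually at (2, 1), sighting landmarks at (0, 0) and (4, 0):
p = triangulate((0, 0), np.arctan2(-1, -2), (4, 0), np.arctan2(-1, 2))
print(p)                                   # -> approximately [2. 1.]
```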
This part of the book aims to provide readers with an understanding of the
theoretical and practical issues involved in the use of sensors, and the important
role sensors play in determining (and limiting) the degree of autonomy mobile
robots possess. These sensors allow robots to obtain a basic set of observations
upon which controllers and higher-level decision-making mechanisms can act,
thus forming an indispensable link in the chain of modules that together
constitute an intelligent, autonomous robotic system.
1 Visual Guidance for Autonomous Vehicles: Capability and Challenges
Andrew Shacklock, Jian Xu, and Han Wang
CONTENTS
1.1 Introduction 6
1.1.1 Context 6
1.1.2 Classes of UGV 7
1.2 Visual Sensing Technology 8
1.2.1 Visual Sensors 8
1.2.1.1 Passive imaging 9
1.2.1.2 Active sensors 10
1.2.2 Modeling of Image Formation and Calibration 12
1.2.2.1 The ideal pinhole model 12
1.2.2.2 Calibration 13
1.3 Visual Guidance Systems 15
1.3.1 Architecture 15
1.3.2 World Model Representation 15
1.3.3 Physical Limitations 17
1.3.4 Road and Vehicle Following 19
1.3.4.1 State-of-the-art 19
1.3.4.2 A road camera model 21
1.3.5 Obstacle Detection 23
1.3.5.1 Obstacle detection using range data 23
1.3.5.2 Stereo vision 24
1.3.5.3 Application examples 26
1.3.6 Sensor Fusion 28
1.4 Challenges and Solutions 33
1.4.1 Terrain Classification 33
1.4.2 Localization and 3D Model Building from Vision 34
1.5 Conclusion 36
Acknowledgments 37
References 37
Biographies 40
1.1 INTRODUCTION
1.1.1 Context
Current efforts in the research and development of visual guidance technology
for autonomous vehicles fit into two major categories: unmanned ground
vehicles (UGVs) and intelligent transport systems (ITSs). UGVs are primarily
concerned with off-road navigation and terrain mapping whereas ITS (or auto-
mated highway systems) research is a much broader area concerned with safer
and more efficient transport in structured or urban settings. The focus of this
chapter is on visual guidance and therefore will not dwell on the definitions of
autonomous vehicles other than to examine how they set the following roles of
vision systems:
• Detection and following of a road
• Detection of obstacles
• Detection and tracking of other vehicles
• Detection and identification of landmarks
These four tasks are relevant to both UGV and ITS applications, although
the environments are quite different. Our experience is in the development
and testing of UGVs and so we concentrate on these specific problems in this
chapter. We refer to achievements in structured settings, such as road-following,
as the underlying principles are similar, and also because they are a good starting
point when facing complexity of autonomy in open terrain.
This introductory section continues with an examination of the expectations
of UGVs as laid out by the Committee on Army Unmanned Ground Vehicle
Technology in its 2002 road map [1]. Next, in Section 1.2, we give an overview
of the key technologies for visual guidance: two-dimensional (2D) passive
imaging and active scanning. The aim is to highlight the differences between various
options with regard to our task-specific requirements. Section 1.3 constitutes
the main content of this chapter; here we present a visual guidance system (VGS)
and its modules for guidance and obstacle detection. Descriptions concentrate
on pragmatic approaches adopted in light of the highly complex and uncertain
tasks which stretch the physical limitations of sensory systems. Examples
are given from stereo vision and image–ladar integration. The chapter ends
by returning to the road map in Section 1.4 and examining the potential role
of visual sensors in meeting the key challenges for autonomy in unstructured
settings: terrain classification and localization/mapping.
1.1.2 Classes of UGV
The motivation or driving force behind UGV research is military application.
This fact is made clear by examining the sources of funding behind prominent
research projects. The DARPA Grand Challenge is an immediate example at
hand [2]. An examination of military requirements is a good starting point in
an attempt to understand what a UGV is and how computer vision can play
a part in it, because the requirements are well defined. Another reason is that,
as we shall see, the scope and classification of UGVs from the U.S. military
is still quite broad and, therefore, encompasses many of the issues related to
autonomous vehicle technology. A third reason is that the requirements for
survivability in hostile environments are explicit, and therefore developers are
forced to face the toughest problems that will drive and test the efficacy of
visual perception research. These set the much needed benchmarks against
which we can assess performance and identify the most pressing problems.

The definitions of various UGVs and reviews of the state of the art are available in
the aforementioned road map [1]. This document is a valuable source for anyone
involved in autonomous vehicle research and development because the future
requirements and capability gaps are clearly set out. The report categorizes four
classes of vehicles with increasing autonomy and perception requirements:

Teleoperated Ground Vehicle (TGV). Sensors enable an operator to visualize
location and movement. No machine cognition is needed, but experience has
shown that remote driving is a difficult task, and augmentation of views with
some of the functionality of automatic vision would help the operator. Fong [3] is
a good source for the reader interested in vehicle teleoperation and collaborative
control.

Semi-Autonomous Preceder–Follower (SAP/F). These devices are envisaged
for logistics and equipment carrying. They require advanced navigation
capability to minimize operator interaction, for example, the ability to select a
traversable path in A-to-B mobility.

Platform-Centric AGV (PC-AGV). This is a system that has the autonomy
to complete a task. In addition to simple mobility, the system must include extra
terrain reasoning for survivability and self-defense.

Network-Centric AGV (NC-AGV). This refers to systems that operate as
nodes in tactical warfare. Their perception needs are similar to those of PC-AGVs
but with better cognition so that, for example, potential attackers can be
distinguished.
TABLE 1.1
Classes of UGV

Class                    kph   Capability gaps                   Perception tasks                  TRL 6
Searcher (TGV)           —     All-weather sensors               Not applicable                    2006
Donkey (SAP/F)           40    Localization and mapping          Detect static obstacles,          2009
                               algorithms                        traversable paths
Wingman (PC-AGV)         100   Long-range sensors and sensors    Terrain assessment to detect      2015
                               for classifying vegetation        potential cover
Hunter-killer (NC-AGV)   120   Multiple sensors and fusion       Identification of enemy forces,   2025
                                                                 situation awareness
The road map identifies perception as the priority area for development and
defines increasing levels of “technology readiness.” Some of the require-
ments and capability gaps for the four classes are summarized and presen-
ted in Table 1.1. Technology readiness level 6 (TRL 6) is defined as the
point when a technology component has been demonstrated in a relevant
environment.
These roles range from the rather dumb donkey-type device used to carry
equipment to autonomous lethal systems making tactical decisions in open
country. It must be remembered, as exemplified in the inaugural Grand
Challenge, that the technology readiness levels of most research are a long
way from meeting the simplest of these requirements. The Challenge is
equivalent to a simple A-to-B mobility task for the SAP/F class of UGVs. On
a more positive note, the complexity of the Grand Challenge should not be
understated, and many past research programs, such as Demo III, have demon-
strated impressive capability. Such challenges, with clearly defined objectives,
are essential for making progress as they bring critical problems to the fore and
provide a common benchmark for evaluating technology.
1.2 VISUAL SENSING TECHNOLOGY
1.2.1 Visual Sensors
We first distinguish between passive and active sensor systems: A passive sensor
system relies upon ambient radiation, whereas an active sensor system illumin-
ates the scene with radiation (often laser beams) and determines how this is
reflected by the surroundings. Active sensors offer a clear advantage in outdoor
applications; they are less sensitive to changes in ambient conditions. How-
ever, some applications preclude their use; they can be detected by the enemy
in military scenarios, or there may be too many conflicting sources in a civilian
setting. At this point we also highlight a distinction between the terms "active
vision" and "active sensors." Active vision refers to techniques in which
(passive) cameras are moved so that they can fixate on particular features [4].
These have applications in robot localization, terrain mapping, and driving in
cluttered environments.
1.2.1.1 Passive imaging
From the application and performance standpoint, our primary concern
is procuring hardware that will acquire good quality data for input to
guidance algorithms; so we now highlight some important considerations when
specifying a camera for passive imaging in outdoor environments.
The image sensor (CCD or CMOS). CMOS technology offers certain
advantages over the more familiar CCDs in that it allows direct access to indi-
vidual blocks of pixels much as would be done in reading computer memory.
This enables instantaneous viewing of regions of interest (ROI) without the
integration time, clocking, and shift registers of standard CCD sensors. A key
advantage of CMOS is that additional circuitry can be built into the silicon
which leads to improved functionality and performance: direct digital out-
put, reduced blooming, increased dynamic range, and so on. Dynamic range
becomes important when viewing outdoor scenes with varying illumination:
for example, mixed scenes of open ground and shadow.
Color or monochrome. Monochrome (B&W) cameras are widely used
in lane-following systems, but color systems are often needed in off-road
(or country track) environments where there is poor contrast in detecting
traversable terrain. Once we have captured a color image there are different methods
of representing the RGB components: for example, the RGB values can be
converted into hue, saturation, and intensity (HSI) [5]. The hue component of
a surface is effectively invariant to illumination levels, which can be important
when segmenting images with areas of shadow [6,7].
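As a concrete companion to the HSI representation just mentioned, the following sketch applies the standard geometric RGB-to-HSI conversion (a textbook formula, not code from the chapter) to a single normalized pixel. Halving the illumination of a surface lowers its intensity but leaves its hue essentially unchanged, which is the property exploited when segmenting scenes containing shadow.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert a normalized RGB triple (floats in [0, 1]) to (H, S, I)."""
    r, g, b = rgb
    i = (r + g + b) / 3.0                          # intensity
    s = 1.0 - min(r, g, b) / i if i > 0 else 0.0   # saturation
    num = 0.5 * ((r - g) + (r - b))                # geometric hue formula
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:                                   # gray pixel: hue undefined
        return 0.0, s, i
    h = np.arccos(np.clip(num / den, -1.0, 1.0))
    if b > g:
        h = 2.0 * np.pi - h
    return h, s, i

print(rgb_to_hsi((0.6, 0.4, 0.2)))   # sunlit patch
print(rgb_to_hsi((0.3, 0.2, 0.1)))   # same patch at half illumination:
                                     # same hue and saturation, lower intensity
```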
Infrared (IR). Figure 1.1 shows some views from our semi-urban scene test
circuit captured with an IR camera. The hot road surface is quite distinct, as
are metallic features such as manhole covers and lampposts. Trees similarly
contrast well against the sky, but in open country after rainfall, different types
of vegetation and ground surfaces exhibit poor contrast. The camera works on
a different transducer principle from the photosensors in CCD or CMOS chips.
Radiation from hot bodies is projected onto elements in an array that heat up,
and this temperature change is converted into an electrical signal. At present,
compared to visible light cameras, the resolution is reduced (e.g., 320 × 240
pixels) and the response is naturally slower. There are other problems to contend
with, such as calibration and drift of the sensor. IR cameras are expensive