Robust Adaptive Control

Contents
Preface xiii
List of Acronyms xvii
1 Introduction 1
1.1 Control System Design Steps . . . . . . . . . . . . . . . . . . 1
1.2 Adaptive Control . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Robust Control . . . . . . . . . . . . . . . . . . . . . . 6
1.2.2 Gain Scheduling . . . . . . . . . . . . . . . . . . . . . 7
1.2.3 Direct and Indirect Adaptive Control . . . . . . . . . 8
1.2.4 Model Reference Adaptive Control . . . . . . . . . . . 12
1.2.5 Adaptive Pole Placement Control . . . . . . . . . . . . 14
1.2.6 Design of On-Line Parameter Estimators . . . . . . . 16
1.3 A Brief History . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2 Models for Dynamic Systems 26
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.2 State-Space Models . . . . . . . . . . . . . . . . . . . . . . . 27
2.2.1 General Description . . . . . . . . . . . . . . . . . . . 27
2.2.2 Canonical State-Space Forms . . . . . . . . . . . . . . 29
2.3 Input/Output Models . . . . . . . . . . . . . . . . . . . . . . 34
2.3.1 Transfer Functions . . . . . . . . . . . . . . . . . . . . 34
2.3.2 Coprime Polynomials . . . . . . . . . . . . . . . . . . 39
2.4 Plant Parametric Models . . . . . . . . . . . . . . . . . . . . 47
2.4.1 Linear Parametric Models . . . . . . . . . . . . . . . . 49
2.4.2 Bilinear Parametric Models . . . . . . . . . . . . . . . 58
2.5 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3 Stability 66
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.2 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.2.1 Norms and Lp Spaces . . . . . . . . . . . . . . . . . . 67
3.2.2 Properties of Functions . . . . . . . . . . . . . . . . . 72
3.2.3 Positive Definite Matrices . . . . . . . . . . . . . . . . 78
3.3 Input/Output Stability . . . . . . . . . . . . . . . . . . . . . . 79
3.3.1 Lp Stability . . . . . . . . . . . . . . . . . . . . . . . . 79
3.3.2 The L2δ Norm and I/O Stability . . . . . . . . . . . . 85
3.3.3 Small Gain Theorem . . . . . . . . . . . . . . . . . . . 96
3.3.4 Bellman-Gronwall Lemma . . . . . . . . . . . . . . . . 101
3.4 Lyapunov Stability . . . . . . . . . . . . . . . . . . . . . . . . 105
3.4.1 Definition of Stability . . . . . . . . . . . . . . . . . . 105
3.4.2 Lyapunov’s Direct Method . . . . . . . . . . . . . . . 108
3.4.3 Lyapunov-Like Functions . . . . . . . . . . . . . . . . 117
3.4.4 Lyapunov’s Indirect Method . . . . . . . . . . . . . . . 119
3.4.5 Stability of Linear Systems . . . . . . . . . . . . . . . 120
3.5 Positive Real Functions and Stability . . . . . . . . . . . . . . 126
3.5.1 Positive Real and Strictly Positive Real Transfer Functions . . . 126
3.5.2 PR and SPR Transfer Function Matrices . . . . . . . 132
3.6 Stability of LTI Feedback Systems . . . . . . . . . . . . . . . 134
3.6.1 A General LTI Feedback System . . . . . . . . . . . . 134
3.6.2 Internal Stability . . . . . . . . . . . . . . . . . . . . . 135
3.6.3 Sensitivity and Complementary Sensitivity Functions . 136
3.6.4 Internal Model Principle . . . . . . . . . . . . . . . . . 137
3.7 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
4 On-Line Parameter Estimation 144

4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
4.2 Simple Examples . . . . . . . . . . . . . . . . . . . . . . . . . 146
4.2.1 Scalar Example: One Unknown Parameter . . . . . . 146
4.2.2 First-Order Example: Two Unknowns . . . . . . . . . 151
4.2.3 Vector Case . . . . . . . . . . . . . . . . . . . . . . . . 156
4.2.4 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . 161
4.3 Adaptive Laws with Normalization . . . . . . . . . . . . . . . 162
4.3.1 Scalar Example . . . . . . . . . . . . . . . . . . . . . . 162
4.3.2 First-Order Example . . . . . . . . . . . . . . . . . . . 165
4.3.3 General Plant . . . . . . . . . . . . . . . . . . . . . . . 169
4.3.4 SPR-Lyapunov Design Approach . . . . . . . . . . . . 171
4.3.5 Gradient Method . . . . . . . . . . . . . . . . . . . . . 180
4.3.6 Least-Squares . . . . . . . . . . . . . . . . . . . . . . . 192
4.3.7 Effect of Initial Conditions . . . . . . . . . . . . . . . 200
4.4 Adaptive Laws with Projection . . . . . . . . . . . . . . . . . 203
4.4.1 Gradient Algorithms with Projection . . . . . . . . . . 203
4.4.2 Least-Squares with Projection . . . . . . . . . . . . . . 206
4.5 Bilinear Parametric Model . . . . . . . . . . . . . . . . . . . . 208
4.5.1 Known Sign of ρ* . . . . . . . . . . . . . . . . . . . . . 208
4.5.2 Sign of ρ* and Lower Bound ρ0 Are Known . . . . . . 212
4.5.3 Unknown Sign of ρ* . . . . . . . . . . . . . . . . . . . 215

4.6 Hybrid Adaptive Laws . . . . . . . . . . . . . . . . . . . . . . 217
4.7 Summary of Adaptive Laws . . . . . . . . . . . . . . . . . . . 220
4.8 Parameter Convergence Proofs . . . . . . . . . . . . . . . . . 220
4.8.1 Useful Lemmas . . . . . . . . . . . . . . . . . . . . . . 220
4.8.2 Proof of Corollary 4.3.1 . . . . . . . . . . . . . . . . . 235
4.8.3 Proof of Theorem 4.3.2 (iii) . . . . . . . . . . . . . . . 236
4.8.4 Proof of Theorem 4.3.3 (iv) . . . . . . . . . . . . . . . 239
4.8.5 Proof of Theorem 4.3.4 (iv) . . . . . . . . . . . . . . . 240
4.8.6 Proof of Corollary 4.3.2 . . . . . . . . . . . . . . . . . 241
4.8.7 Proof of Theorem 4.5.1(iii) . . . . . . . . . . . . . . . 242
4.8.8 Proof of Theorem 4.6.1 (iii) . . . . . . . . . . . . . . . 243
4.9 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
5 Parameter Identifiers and Adaptive Observers 250
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
5.2 Parameter Identifiers . . . . . . . . . . . . . . . . . . . . . . . 251
5.2.1 Sufficiently Rich Signals . . . . . . . . . . . . . . . . . 252
5.2.2 Parameter Identifiers with Full-State Measurements . 258
5.2.3 Parameter Identifiers with Partial-State Measurements 260
5.3 Adaptive Observers . . . . . . . . . . . . . . . . . . . . . . . . 267
5.3.1 The Luenberger Observer . . . . . . . . . . . . . . . . 267
5.3.2 The Adaptive Luenberger Observer . . . . . . . . . . . 269
5.3.3 Hybrid Adaptive Luenberger Observer . . . . . . . . . 276
5.4 Adaptive Observer with Auxiliary Input . . . . . . . . . . . 279
5.5 Adaptive Observers for Nonminimal Plant Models . . . . . 287
5.5.1 Adaptive Observer Based on Realization 1 . . . . . . . 287
5.5.2 Adaptive Observer Based on Realization 2 . . . . . . . 292
5.6 Parameter Convergence Proofs . . . . . . . . . . . . . . . . . 297
5.6.1 Useful Lemmas . . . . . . . . . . . . . . . . . . . . . . 297
5.6.2 Proof of Theorem 5.2.1 . . . . . . . . . . . . . . . . . 301

5.6.3 Proof of Theorem 5.2.2 . . . . . . . . . . . . . . . . . 302
5.6.4 Proof of Theorem 5.2.3 . . . . . . . . . . . . . . . . . 306
5.6.5 Proof of Theorem 5.2.5 . . . . . . . . . . . . . . . . . 309
5.7 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
6 Model Reference Adaptive Control 313
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
6.2 Simple Direct MRAC Schemes . . . . . . . . . . . . . . . . . 315
6.2.1 Scalar Example: Adaptive Regulation . . . . . . . . . 315
6.2.2 Scalar Example: Adaptive Tracking . . . . . . . . . . 320
6.2.3 Vector Case: Full-State Measurement . . . . . . . . . 325
6.2.4 Nonlinear Plant . . . . . . . . . . . . . . . . . . . . . . 328
6.3 MRC for SISO Plants . . . . . . . . . . . . . . . . . . . . . . 330
6.3.1 Problem Statement . . . . . . . . . . . . . . . . . . . . 331
6.3.2 MRC Schemes: Known Plant Parameters . . . . . . . 333
6.4 Direct MRAC with Unnormalized Adaptive Laws . . . . . . . 344
6.4.1 Relative Degree n* = 1 . . . . . . . . . . . . . . . . . 345
6.4.2 Relative Degree n* = 2 . . . . . . . . . . . . . . . . . 356
6.4.3 Relative Degree n* = 3 . . . . . . . . . . . . . . . . . . 363
6.5 Direct MRAC with Normalized Adaptive Laws . . . . . . . 373
6.5.1 Example: Adaptive Regulation . . . . . . . . . . . . . 373
6.5.2 Example: Adaptive Tracking . . . . . . . . . . . . . . 380
6.5.3 MRAC for SISO Plants . . . . . . . . . . . . . . . . . 384
6.5.4 Effect of Initial Conditions . . . . . . . . . . . . . . . 396
6.6 Indirect MRAC . . . . . . . . . . . . . . . . . . . . . . . . . . 397

6.6.1 Scalar Example . . . . . . . . . . . . . . . . . . . . . . 398
6.6.2 Indirect MRAC with Unnormalized Adaptive Laws . . 402
6.6.3 Indirect MRAC with Normalized Adaptive Law . . . . 408
6.7 Relaxation of Assumptions in MRAC . . . . . . . . . . . . . . 413
6.7.1 Assumption P1: Minimum Phase . . . . . . . . . . . . 413
6.7.2 Assumption P2: Upper Bound for the Plant Order . . 414
6.7.3 Assumption P3: Known Relative Degree n* . . . . . . 415
6.7.4 Tunability . . . . . . . . . . . . . . . . . . . . . . . . . 416
6.8 Stability Proofs of MRAC Schemes . . . . . . . . . . . . . . . 418
6.8.1 Normalizing Properties of Signal mf . . . . . . . . . . 418
6.8.2 Proof of Theorem 6.5.1: Direct MRAC . . . . . . . . . 419
6.8.3 Proof of Theorem 6.6.2: Indirect MRAC . . . . . . . . 425
6.9 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
7 Adaptive Pole Placement Control 435
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
7.2 Simple APPC Schemes . . . . . . . . . . . . . . . . . . . . . . 437
7.2.1 Scalar Example: Adaptive Regulation . . . . . . . . . 437
7.2.2 Modified Indirect Adaptive Regulation . . . . . . . . . 441
7.2.3 Scalar Example: Adaptive Tracking . . . . . . . . . . 443
7.3 PPC: Known Plant Parameters . . . . . . . . . . . . . . . . . 448
7.3.1 Problem Statement . . . . . . . . . . . . . . . . . . . . 449
7.3.2 Polynomial Approach . . . . . . . . . . . . . . . . . . 450
7.3.3 State-Variable Approach . . . . . . . . . . . . . . . . . 455
7.3.4 Linear Quadratic Control . . . . . . . . . . . . . . . . 460
7.4 Indirect APPC Schemes . . . . . . . . . . . . . . . . . . . . . 467

7.4.1 Parametric Model and Adaptive Laws . . . . . . . . . 467
7.4.2 APPC Scheme: The Polynomial Approach . . . . . . . 469
7.4.3 APPC Schemes: State-Variable Approach . . . . . . . 479
7.4.4 Adaptive Linear Quadratic Control (ALQC) . . . . . 487
7.5 Hybrid APPC Schemes . . . . . . . . . . . . . . . . . . . . . 495
7.6 Stabilizability Issues and Modified APPC . . . . . . . . . . . 499
7.6.1 Loss of Stabilizability: A Simple Example . . . . . . . 500
7.6.2 Modified APPC Schemes . . . . . . . . . . . . . . . . 503
7.6.3 Switched-Excitation Approach . . . . . . . . . . . . . 507
7.7 Stability Proofs . . . . . . . . . . . . . . . . . . . . . . . . . . 514
7.7.1 Proof of Theorem 7.4.1 . . . . . . . . . . . . . . . . . 514
7.7.2 Proof of Theorem 7.4.2 . . . . . . . . . . . . . . . . . 520
7.7.3 Proof of Theorem 7.5.1 . . . . . . . . . . . . . . . . . 524
7.8 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
8 Robust Adaptive Laws 531
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
8.2 Plant Uncertainties and Robust Control . . . . . . . . . . . . 532
8.2.1 Unstructured Uncertainties . . . . . . . . . . . . . . . 533
8.2.2 Structured Uncertainties: Singular Perturbations . . . 537
8.2.3 Examples of Uncertainty Representations . . . . . . . 540
8.2.4 Robust Control . . . . . . . . . . . . . . . . . . . . . . 542
8.3 Instability Phenomena in Adaptive Systems . . . . . . . . . . 545
8.3.1 Parameter Drift . . . . . . . . . . . . . . . . . . . . . 546
8.3.2 High-Gain Instability . . . . . . . . . . . . . . . . . . 549
8.3.3 Instability Resulting from Fast Adaptation . . . . . . 550
8.3.4 High-Frequency Instability . . . . . . . . . . . . . . . 552
8.3.5 Effect of Parameter Variations . . . . . . . . . . . . . 553
8.4 Modifications for Robustness: Simple Examples . . . . . . . . 555
8.4.1 Leakage . . . . . . . . . . . . . . . . . . . . . . . . . . 557

8.4.2 Parameter Projection . . . . . . . . . . . . . . . . . . 566
8.4.3 Dead Zone . . . . . . . . . . . . . . . . . . . . . . . . 567
8.4.4 Dynamic Normalization . . . . . . . . . . . . . . . . . 572
8.5 Robust Adaptive Laws . . . . . . . . . . . . . . . . . . . . . . 576
8.5.1 Parametric Models with Modeling Error . . . . . . . . 577
8.5.2 SPR-Lyapunov Design Approach with Leakage . . . . 583
8.5.3 Gradient Algorithms with Leakage . . . . . . . . . . . 593
8.5.4 Least-Squares with Leakage . . . . . . . . . . . . . . . 603
8.5.5 Projection . . . . . . . . . . . . . . . . . . . . . . . . . 604
8.5.6 Dead Zone . . . . . . . . . . . . . . . . . . . . . . . . 607
8.5.7 Bilinear Parametric Model . . . . . . . . . . . . . . . . 614
8.5.8 Hybrid Adaptive Laws . . . . . . . . . . . . . . . . . . 617
8.5.9 Effect of Initial Conditions . . . . . . . . . . . . . . . 624
8.6 Summary of Robust Adaptive Laws . . . . . . . . . . . . . . 624
8.7 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
9 Robust Adaptive Control Schemes 635
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
9.2 Robust Identifiers and Adaptive Observers . . . . . . . . . . . 636
9.2.1 Dominantly Rich Signals . . . . . . . . . . . . . . . . . 639
9.2.2 Robust Parameter Identifiers . . . . . . . . . . . . . . 644
9.2.3 Robust Adaptive Observers . . . . . . . . . . . . . . . 649
9.3 Robust MRAC . . . . . . . . . . . . . . . . . . . . . . . . . . 651
9.3.1 MRC: Known Plant Parameters . . . . . . . . . . . . 652
9.3.2 Direct MRAC with Unnormalized Adaptive Laws . . . 657
9.3.3 Direct MRAC with Normalized Adaptive Laws . . . . 667
9.3.4 Robust Indirect MRAC . . . . . . . . . . . . . . . . . 688
9.4 Performance Improvement of MRAC . . . . . . . . . . . . . . 694
9.4.1 Modified MRAC with Unnormalized Adaptive Laws . 698
9.4.2 Modified MRAC with Normalized Adaptive Laws . . . 704

9.5 Robust APPC Schemes . . . . . . . . . . . . . . . . . . . . . 710
9.5.1 PPC: Known Parameters . . . . . . . . . . . . . . . . 711
9.5.2 Robust Adaptive Laws for APPC Schemes . . . . . . . 714
9.5.3 Robust APPC: Polynomial Approach . . . . . . . . . 716
9.5.4 Robust APPC: State Feedback Law . . . . . . . . . . 723
9.5.5 Robust LQ Adaptive Control . . . . . . . . . . . . . . 731
9.6 Adaptive Control of LTV Plants . . . . . . . . . . . . . . . . 733
9.7 Adaptive Control for Multivariable Plants . . . . . . . . . . . 735
9.7.1 Decentralized Adaptive Control . . . . . . . . . . . . . 736
9.7.2 The Command Generator Tracker Approach . . . . . 737
9.7.3 Multivariable MRAC . . . . . . . . . . . . . . . . . . . 740
9.8 Stability Proofs of Robust MRAC Schemes . . . . . . . . . . 745
9.8.1 Properties of Fictitious Normalizing Signal . . . . . . 745
9.8.2 Proof of Theorem 9.3.2 . . . . . . . . . . . . . . . . . 749
9.9 Stability Proofs of Robust APPC Schemes . . . . . . . . . . . 760
9.9.1 Proof of Theorem 9.5.2 . . . . . . . . . . . . . . . . . 760
9.9.2 Proof of Theorem 9.5.3 . . . . . . . . . . . . . . . . . 764
9.10 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
A Swapping Lemmas . . . . . . . . . . . . . . . . . . . . . . . . 775
B Optimization Techniques . . . . . . . . . . . . . . . . . . . . . 784
B.1 Notation and Mathematical Background . . . . . . . . 784
B.2 The Method of Steepest Descent (Gradient Method) . 786
B.3 Newton’s Method . . . . . . . . . . . . . . . . . . . . . 787
B.4 Gradient Projection Method . . . . . . . . . . . . . . 789
B.5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . 792
Bibliography 796
Index 819
License Agreement and Limited Warranty 822
Preface

The area of adaptive control has grown to be one of the richest in terms of
algorithms, design techniques, analytical tools, and modifications. Several
books and research monographs already exist on the topics of parameter
estimation and adaptive control.
Despite this rich literature, the field of adaptive control may easily appear
to an outsider as a collection of unrelated tricks and modifications. Students
are often overwhelmed and sometimes confused by the vast number of what
appear to be unrelated designs and analytical methods achieving similar re-
sults. Researchers concentrating on different approaches in adaptive control
often find it difficult to relate their techniques with others without additional
research efforts.
The purpose of this book is to alleviate some of the confusion and diffi-
culty in understanding the design, analysis, and robustness of a wide class
of adaptive control for continuous-time plants. The book is the outcome of
several years of research, whose main purpose was not to generate new re-
sults, but rather to unify, simplify, and present in a tutorial manner most of the
existing techniques for designing and analyzing adaptive control systems.
The book is written in a self-contained fashion to be used as a textbook
on adaptive systems at the senior undergraduate, or first and second gradu-
ate level. It is assumed that the reader is familiar with the materials taught
in undergraduate courses on linear systems, differential equations, and auto-
matic control. The book is also useful for an industrial audience where the
interest is to implement adaptive control rather than analyze its stability
properties. Tables with descriptions of adaptive control schemes presented
in the book are meant to serve this audience. The personal computer floppy
disk, included with the book, provides several examples of simple adaptive
control systems that will help the reader understand some of the implemen-
tation aspects of adaptive systems.

A significant part of the book, devoted to parameter estimation and
learning in general, provides techniques and algorithms for on-line fitting
of dynamic or static models to data generated by real systems. The tools
for design and analysis presented in the book are very valuable in under-
standing and analyzing similar parameter estimation problems that appear
in neural networks, fuzzy systems, and other universal approximators. The
book will be of great interest to the neural and fuzzy logic audience who
will benefit from the strong similarity that exists between adaptive systems,
whose stability properties are well established, and neural networks, fuzzy
logic systems where stability and convergence issues are yet to be resolved.
The book is organized as follows: Chapter 1 is used to introduce adap-
tive control as a method for controlling plants with parametric uncertainty.
It also provides some background and a brief history of the development
of adaptive control. Chapter 2 presents a review of various plant model
representations that are useful for parameter identification and control. A
considerable number of stability results that are useful in analyzing and un-
derstanding the properties of adaptive and nonlinear systems in general are
presented in Chapter 3. Chapter 4 deals with the design and analysis of on-
line parameter estimators or adaptive laws that form the backbone of every
adaptive control scheme presented in the chapters to follow. The design of
parameter identifiers and adaptive observers for stable plants is presented
in Chapter 5. Chapter 6 is devoted to the design and analysis of a wide
class of model reference adaptive controllers for minimum phase plants. The
design of adaptive control for plants that are not necessarily minimum phase
is presented in Chapter 7. These schemes are based on pole placement con-
trol strategies and are referred to as adaptive pole placement control. While
Chapters 4 through 7 deal with plant models that are free of disturbances,
unmodeled dynamics and noise, Chapters 8 and 9 deal with the robustness
issues in adaptive control when plant model uncertainties, such as bounded
disturbances and unmodeled dynamics, are present.

The book can be used in various ways. The reader who is familiar with
stability and linear systems may start from Chapter 4. An introductory
course in adaptive control could be covered in Chapters 1, 2, and 4 to 9,
by excluding the more elaborate and difficult proofs of theorems that are
presented either in the last section of chapters or in the appendices. Chapter
3 could be used for reference and for covering relevant stability results that
arise during the course. A higher-level course intended for graduate students
that are interested in a deeper understanding of adaptive control could cover
all chapters with more emphasis on the design and stability proofs. A course
for an industrial audience could contain Chapters 1, 2, and 4 to 9 with
emphasis on the design of adaptive control algorithms rather than stability
proofs and convergence.
Acknowledgments
The writing of this book has been surprisingly difficult and took a long time
to evolve to its present form. Several versions of the book were completed
only to be put aside after realizing that new results and techniques would
lead to a better version. In the meantime, both of us started our families
that soon enough expanded. If it were not for our families, we probably
could have finished the book a year or two earlier. Their love and company,
however, served as an insurance that we would finish it one day.
A long list of friends and colleagues have helped us in the preparation of
the book in many different ways. We are especially grateful to Petar
Kokotović who introduced the first author to the field of adaptive control back in
1979. Since then he has been a great advisor, friend, and colleague. His con-
tinuous enthusiasm and hard work for research has been the strongest driving
force behind our research and that of our students. We thank Brian Ander-
son, Karl Åström, Mike Athans, Bo Egardt, Graham Goodwin, Rick Johnson,
Gerhard Kreisselmeier, Yoan Landau, Lennart Ljung, David Mayne, the
late R. Monopoli, Bob Narendra, and Steve Morse for their work, interac-
tions, and continuous enthusiasm in adaptive control that helped us lay the
foundations of most parts of the book.
We would especially like to express our deepest appreciation to Laurent
Praly and Kostas Tsakalis. Laurent was the first researcher to recognize
and publicize the beneficial effects of dynamic normalization on robustness
that opened the way to a wide class of robust adaptive control algorithms
addressed in the book. His interactions with us and our students are highly
appreciated. Kostas, a former student of the first author, is responsible for
many mathematical tools and stability arguments used in Chapters 6 and
9. His continuous interactions helped us to decipher many of the cryptic
concepts and robustness properties of model reference adaptive control.
We are thankful to our former and current students and visitors who col-
laborated with us in research and contributed to this work: Farid Ahmed-
Zaid, C. C. Chien, Aniruddha Datta, Marios Polycarpou, Houmair Raza,
Alex Stotsky, Tim Sun, Hualin Tan, Gang Tao, Hui Wang, Tom Xu, and
Youping Zhang. We are grateful to many colleagues for stimulating discus-
sions at conferences, workshops, and meetings. They have helped us broaden
our understanding of the field. In particular, we would like to mention Anu
Annaswamy, Erwei Bai, Bob Bitmead, Marc Bodson, Stephen Boyd, Sara
Dasgupta, the late Howard Elliot, Li-chen Fu, Fouad Giri, David Hill, Ioan-
nis Kanellakopoulos, Pramod Khargonekar, Hassan Khalil, Bob Kosut, Jim
Krause, Miroslav Krstić, Rogelio Lozano-Leal, Iven Mareels, Rick Middle-
ton, David Mudget, Romeo Ortega, Brad Riedle, Charles Rohrs, Ali Saberi,
Shankar Sastry, Lena Valavani, Jim Winkelman, and Erik Ydstie. We would
also like to extend our thanks to our colleagues at the University of Southern
California, Wayne State University, and Ford Research Laboratory for their
friendship, support, and technical interactions. Special thanks, on behalf of

the second author, go to the members of the Control Systems Department
of Ford Research Laboratory, and Jessy Grizzle and Anna Stefanopoulou of
the University of Michigan.
Finally, we acknowledge the support of several organizations includ-
ing Ford Motor Company, General Motors Project Trilby, National Science
Foundation, Rockwell International, and Lockheed. Special thanks are due
to Bob Borcherts, Roger Fruechte, Neil Schilke, and James Rillings of for-
mer Project Trilby; Bill Powers, Mike Shulman, and Steve Eckert of Ford
Motor Company; and Bob Rooney and Houssein Youseff of Lockheed whose
support of our research made this book possible.
Petros A. Ioannou
Jing Sun
List of Acronyms
ALQC Adaptive linear quadratic control
APPC Adaptive pole placement control
B-G Bellman-Gronwall (lemma)
BIBO Bounded-input bounded-output
CEC Certainty equivalence control
I/O Input/output
LKY Lefschetz-Kalman-Yakubovich (lemma)
LQ Linear quadratic
LTI Linear time invariant
LTV Linear time varying
MIMO Multi-input multi-output
MKY Meyer-Kalman-Yakubovich (lemma)
MRAC Model reference adaptive control
MRC Model reference control
PE Persistently exciting
PI Proportional plus integral
PPC Pole placement control

PR Positive real
SISO Single input single output
SPR Strictly positive real
TV Time varying
UCO Uniformly completely observable
a.s. Asymptotically stable
e.s. Exponentially stable
m.s.s. (In the) mean square sense
u.a.s. Uniformly asymptotically stable
u.b. Uniformly bounded
u.s. Uniformly stable
u.u.b. Uniformly ultimately bounded
w.r.t. With respect to
Chapter 1
Introduction
1.1 Control System Design Steps
The design of a controller that can alter or modify the behavior and response
of an unknown plant to meet certain performance requirements can be a
tedious and challenging problem in many control applications. By plant, we
mean any process characterized by a certain number of inputs u and outputs
y, as shown in Figure 1.1.
The plant inputs u are processed to produce several plant outputs y that
represent the measured output response of the plant. The control design task
is to choose the input u so that the output response y(t) satisfies certain given
performance requirements. Because the plant process is usually complex,
i.e., it may consist of various mechanical, electronic, hydraulic parts, etc.,
the appropriate choice of u is in general not straightforward. The control
design steps often followed by most control engineers in choosing the input

u are shown in Figure 1.2 and are explained below.




[Figure 1.1 Plant representation: a plant (process) P with inputs u and
outputs y.]
Step 1. Modeling
The task of the control engineer in this step is to understand the pro-
cessing mechanism of the plant, which takes a given input signal u(t) and
produces the output response y(t), to the point that he or she can describe
it in the form of some mathematical equations. These equations constitute
the mathematical model of the plant. An exact plant model should produce
the same output response as the plant, provided the input to the model and
initial conditions are exactly the same as those of the plant. The complexity
of most physical plants, however, makes the development of such an exact
model unwarranted or even impossible. But even if the exact plant model
becomes available, its dimension is likely to be infinite, and its description

nonlinear or time varying to the point that its usefulness from the control
design viewpoint is minimal or none. This makes the task of modeling even
more difficult and challenging, because the control engineer has to come up
with a mathematical model that describes accurately the input/output be-
havior of the plant and yet is simple enough to be used for control design
purposes. A simple model usually leads to a simple controller that is easier
to understand and implement, and often more reliable for practical purposes.
A plant model may be developed by using physical laws or by processing
the plant input/output (I/O) data obtained by performing various experi-
ments. Such a model, however, may still be complicated enough from the
control design viewpoint and further simplifications may be necessary. Some
of the approaches often used to obtain a simplified model are
(i) Linearization around operating points
(ii) Model order reduction techniques
In approach (i) the plant is approximated by a linear model that is valid
around a given operating point. Different operating points may lead to
several different linear models that are used as plant models. Linearization
is achieved by using Taylor’s series expansion and approximation, fitting of
experimental data to a linear model, etc.
In approach (ii) small effects and phenomena outside the frequency range
of interest are neglected leading to a lower order and simpler plant model.
The reader is referred to references [67, 106] for more details on model re-
duction techniques and approximations.
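To make approach (i) concrete, here is a minimal numerical sketch; the
pendulum-like dynamics, operating point, and all numerical values are
hypothetical choices for illustration, not taken from the text. It approximates
a nonlinear plant ẋ = f(x, u) near an operating point (x_0, u_0) by the
Jacobians A = ∂f/∂x and B = ∂f/∂u evaluated at that point.

```python
import numpy as np

# Hypothetical nonlinear plant (a damped pendulum) used only to illustrate
# linearization around an operating point (x0, u0).
def f(x, u):
    theta, omega = x
    return np.array([omega, -9.81 * np.sin(theta) - 0.5 * omega + u])

def linearize(f, x0, u0, eps=1e-6):
    """Numerical Jacobians A = df/dx and B = df/du at the operating point."""
    n = len(x0)
    A = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    B = ((f(x0, u0 + eps) - f(x0, u0 - eps)) / (2 * eps)).reshape(n, 1)
    return A, B

x0, u0 = np.array([0.0, 0.0]), 0.0   # operating point: pendulum hanging at rest
A, B = linearize(f, x0, u0)
print("A =\n", A)
print("B =\n", B)
```

Repeating this at several operating points gives the family of linear models
mentioned above, one per operating point.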
[Figure 1.2 Control system design steps: Step 1 (Modeling) replaces the
plant P, with input u and output y, by a plant model Pm with output ŷ;
Step 2 (Controller Design) designs a controller C, driven by the input
command, for the plant model Pm combined through a summing junction
with an uncertainty block; Step 3 (Implementation) applies the controller C
to the actual plant P.]
In general, the task of modeling involves a good understanding of the

plant process and performance requirements, and may require some experi-
ence on the part of the control engineer.
Step 2. Controller Design
Once a model of the plant is available, one can proceed with the controller
design. The controller is designed to meet the performance requirements for
the plant model. If the model is a good approximation of the plant, then
one would hope that the controller performance for the plant model would
be close to that achieved when the same controller is applied to the plant.
Because the plant model is always an approximation of the plant, the
effect of any discrepancy between the plant and the model on the perfor-
mance of the controller will not be known until the controller is applied to
the plant in Step 3. One, however, can take an intermediate step and ana-
lyze the properties of the designed controller for a plant model that includes
a class of plant model uncertainties denoted by ∆ that are likely to appear
in the plant. If ∆ represents most of the unmodeled plant phenomena, its
representation in terms of mathematical equations is not possible. Its char-
acterization, however, in terms of some known bounds may be possible in
many applications. By considering the existence of a general class of uncer-
tainties  that are likely to be present in the plant, the control engineer may
be able to modify or redesign the controller to be less sensitive to uncertain-
ties, i.e., to be more robust with respect to . This robustness analysis and
redesign improves the potential for a successful implementation in Step 3.
Step 3. Implementation
In this step, a controller designed in Step 2, which is shown to meet the
performance requirements for the plant model and is robust with respect to
possible plant model uncertainties ∆, is ready to be applied to the unknown
plant. The implementation can be done using a digital computer, even
though in some applications analog computers may be used too. Issues,
such as the type of computer available, the type of interface devices between

the computer and the plant, software tools, etc., need to be considered a
priori. Computer speed and accuracy limitations may put constraints on
the complexity of the controller that may force the control engineer to go
back to Step 2 or even Step 1 to come up with a simpler controller without
violating the performance requirements.
Another important aspect of implementation is the final adjustment,
or as often called the tuning, of the controller to improve performance by
compensating for the plant model uncertainties that are not accounted for
during the design process. Tuning is often done by trial and error, and
depends very much on the experience and intuition of the control engineer.
In this book we will concentrate on Step 2. We will be dealing with
the design of control algorithms for a class of plant models described by the
linear differential equation
ẋ = Ax + Bu,   x(0) = x_0
y = C^T x + Du                                              (1.1.1)

In (1.1.1) x ∈ R^n is the state of the model, u ∈ R^r the plant input, and
y ∈ R^l the plant model output. The matrices A ∈ R^{n×n}, B ∈ R^{n×r},
C ∈ R^{n×l}, and D ∈ R^{l×r} could be constant or time varying. This class
of plant models
is quite general because it can serve as an approximation of nonlinear plants
around operating points. A controller based on the linear model (1.1.1) is
expected to be simpler and easier to understand than a controller based on
a possibly more accurate but nonlinear plant model.
The class of plant models given by (1.1.1) can be generalized further if we
allow the elements of A, B, and C to be completely unknown and changing
with time or operating conditions. The control of plant models (1.1.1) with
A, B, C, and D unknown or partially known is covered under the area of
adaptive systems and is the main topic of this book.
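As a quick illustration of the model class (1.1.1), the following sketch
simulates a small LTI plant; the particular A, B, C, D values and the step
input are arbitrary choices for illustration, not taken from the text (note
that scipy's output matrix plays the role of C^T in the notation of (1.1.1)).

```python
import numpy as np
from scipy.signal import StateSpace, lsim

# Arbitrary illustrative matrices for (1.1.1) with n = 2 states, r = 1 input,
# and l = 1 output.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])   # corresponds to C^T in (1.1.1)
D = np.array([[0.0]])

plant = StateSpace(A, B, C, D)

t = np.linspace(0.0, 10.0, 1001)
u = np.ones_like(t)                 # unit-step input
x0 = np.array([0.5, 0.0])           # initial condition x(0) = x_0

_, y, x = lsim(plant, U=u, T=t, X0=x0)
print("steady-state output y ≈", y[-1])
```

An adaptive controller for this class of models treats A, B, C, D (or an
equivalent parameterization) as unknown; they are fixed here only so that
the model itself can be exercised.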
1.2 Adaptive Control
According to Webster’s dictionary, to adapt means “to change (oneself) so
that one’s behavior will conform to new or changed circumstances.” The
words “adaptive systems” and “adaptive control” have been used as early
as 1950 [10, 27].
The design of autopilots for high-performance aircraft was one of the pri-
mary motivations for active research on adaptive control in the early 1950s.
Aircraft operate over a wide range of speeds and altitudes, and their dy-
namics are nonlinear and conceptually time varying. For a given operating
point, specified by the aircraft speed (Mach number) and altitude, the com-
plex aircraft dynamics can be approximated by a linear model of the same
form as (1.1.1). For example, for an operating point i, the linear aircraft
model has the following form [140]:
ẋ = A_i x + B_i u,   x(0) = x_0
y = C_i^T x + D_i u                                         (1.2.1)

where A_i, B_i, C_i, and D_i are functions of the operating point i. As the air-
craft goes through different flight conditions, the operating point changes
leading to different values for A_i, B_i, C_i, and D_i. Because the output
response y(t) carries information about the state x as well as the parameters,
one may argue that in principle, a sophisticated feedback controller should
be able to learn about parameter changes by processing y(t) and use the
appropriate gains to accommodate them. This argument led to a feedback
control structure on which adaptive control is based. The controller structure
consists of a feedback loop and a controller with adjustable gains as
shown in Figure 1.3. The way of changing the controller gains in response
to changes in the plant and disturbance dynamics distinguishes one scheme
from another.

[Figure 1.3 Controller structure with adjustable controller gains: a feedback
loop in which the controller, driven by the input command, applies u to the
plant P; a strategy for adjusting the controller gains processes u(t) and y(t)
and updates the controller.]
1.2.1 Robust Control
A constant gain feedback controller may be designed to cope with parameter
changes provided that such changes are within certain bounds. A block
diagram of such a controller is shown in Figure 1.4 where G(s) is the transfer
function of the plant and C(s) is the transfer function of the controller. The
transfer function from y* to y is

    y/y* = C(s)G(s) / (1 + C(s)G(s))                        (1.2.2)

where C(s) is to be chosen so that the closed-loop plant is stable, despite
parameter changes or uncertainties in G(s), and y ≈ y* within the frequency
range of interest. This latter condition can be achieved if we choose C(s)
so that the loop gain |C(jω)G(jω)| is as large as possible in the frequency
spectrum of y* provided, of course, that large loop gain does not violate
closed-loop stability requirements. The tracking and stability objectives can
be achieved through the design of C(s) provided the changes within G(s)
are within certain bounds. More details about robust control will be given
in Chapter 8.

[Figure 1.4 Constant gain feedback controller: the reference y* and the
output y are compared at a summing junction, and the resulting signal
drives the controller C(s), whose output u is applied to the plant G(s).]
Robust control is not considered to be an adaptive system even though
it can handle certain classes of parametric and dynamic uncertainties.
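The closed-loop transfer function (1.2.2) makes this idea easy to check
numerically: the closed-loop poles are the roots of den_C·den_G + num_C·num_G,
and a fixed C(s) copes with a range of plant parameters if those roots stay
in Re[s] < 0 for every parameter set in the range. The plant, controller, and
parameter values below are hypothetical.

```python
import numpy as np

def closed_loop_poles(num_C, den_C, num_G, den_G):
    """Poles of C(s)G(s)/(1 + C(s)G(s)): roots of den_C*den_G + num_C*num_G."""
    char_poly = np.polyadd(np.polymul(den_C, den_G),
                           np.polymul(num_C, num_G))
    return np.roots(char_poly)

# Hypothetical plant G(s) = 1/(s^2 + a1*s + a0) with uncertain a0, a1, and a
# fixed PI-type controller C(s) = (2s + 4)/s.
num_C, den_C = [2.0, 4.0], [1.0, 0.0]
for a1, a0 in [(3.0, 2.0), (2.5, 1.5), (3.5, 2.5)]:   # parameter sets within assumed bounds
    poles = closed_loop_poles(num_C, den_C, [1.0], [1.0, a1, a0])
    print(f"a1={a1}, a0={a0}: stable={np.all(poles.real < 0)}")
```

If the parameter changes exceed the bounds assumed at design time, a single
fixed C(s) may no longer suffice, which is the limitation the adaptive schemes
discussed next aim to overcome.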
1.2.2 Gain Scheduling
Let us consider the aircraft model (1.2.1) where for each operating point
i, i = 1, 2, . . . , N, the parameters A_i, B_i, C_i, and D_i are known. For a
given operating point i, a feedback controller with constant gains, say θ_i,
can be designed to meet the performance requirements for the corresponding
linear model. This leads to a controller, say C(θ), with a set of gains
{θ_1, θ_2, . . . , θ_i, . . . , θ_N} covering N operating points. Once the operating
point, say i, is detected the controller gains can be changed to the appropriate
value of θ_i obtained from the precomputed gain set. Transitions between different
operating points that lead to significant parameter changes may be handled
by interpolation or by increasing the number of operating points. The two
elements that are essential in implementing this approach are a look-up
table to store the values of θ_i and the plant auxiliary measurements that
correlate well with changes in the operating points. The approach is called
gain scheduling and is illustrated in Figure 1.5.
The gain scheduler consists of a look-up table and the appropriate logic
for detecting the operating point and choosing the corresponding value of
θ_i from the table. In the case of aircraft, the auxiliary measurements are
the Mach number and the dynamic pressure. With this approach plant
parameter variations can be compensated by changing the controller gains
as functions of the auxiliary measurements.

[Figure 1.5 Gain scheduling: a gain scheduler (look-up table plus logic for
detecting the operating point) uses auxiliary measurements to select the
gains θ_i supplied to the controller C(θ), which drives the plant from the
command or reference signal.]
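A gain scheduler of the kind in Figure 1.5 can be sketched as a look-up
table keyed by the auxiliary measurements. The operating points, gain
vectors, and nearest-neighbor selection rule below are hypothetical
placeholders; a practical scheduler would typically interpolate between
neighboring operating points, as noted above.

```python
import numpy as np

# Hypothetical precomputed gain schedule: each operating point i is indexed
# by the auxiliary measurements (Mach number, dynamic pressure) and stores
# the controller gain vector theta_i designed off-line for that point.
schedule = {
    # (Mach, dynamic pressure): theta_i
    (0.4, 10.0): np.array([2.0, 0.50, 0.10]),
    (0.7, 25.0): np.array([1.4, 0.30, 0.08]),
    (0.9, 40.0): np.array([1.0, 0.20, 0.05]),
}

def scheduled_gains(mach, q_dyn):
    """Return theta_i of the nearest stored operating point (no interpolation)."""
    key = min(schedule, key=lambda op: np.hypot(op[0] - mach, op[1] - q_dyn))
    return schedule[key]

theta = scheduled_gains(mach=0.65, q_dyn=22.0)
print("controller gains in use:", theta)
```

Note that the table provides no feedback about how well the selected θ_i is
actually performing, which is exactly the drawback discussed below.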
The advantage of gain scheduling is that the controller gains can be
changed as quickly as the auxiliary measurements respond to parameter
changes. Frequent and rapid changes of the controller gains, however, may

lead to instability [226]; therefore, there is a limit as to how often and how
fast the controller gains can be changed.
One of the disadvantages of gain scheduling is that the adjustment mech-
anism of the controller gains is precomputed off-line and, therefore, provides
no feedback to compensate for incorrect schedules. Unpredictable changes
in the plant dynamics may lead to deterioration of performance or even to
complete failure. Another possible drawback of gain scheduling is the high
design and implementation costs that increase with the number of operating
points.
Despite its limitations, gain scheduling is a popular method for handling
parameter variations in flight control [140, 210] and other systems [8].
1.2.3 Direct and Indirect Adaptive Control
An adaptive controller is formed by combining an on-line parameter estima-
tor, which provides estimates of unknown parameters at each instant, with
a control law that is motivated from the known parameter case. The way
the parameter estimator, also referred to as adaptive law in the book, is
combined with the control law gives rise to two different approaches. In the
first approach, referred to as indirect adaptive control, the plant parameters
are estimated on-line and used to calculate the controller parameters. This
approach has also been referred to as explicit adaptive control, because the
design is based on an explicit plant model.
In the second approach, referred to as direct adaptive control, the plant
model is parameterized in terms of the controller parameters that are esti-
mated directly without intermediate calculations involving plant parameter
estimates. This approach has also been referred to as implicit adaptive con-
trol because the design is based on the estimation of an implicit plant model.
In indirect adaptive control, the plant model P(θ*) is parameterized with
respect to some unknown parameter vector θ*. For example, for a linear
time invariant (LTI) single-input single-output (SISO) plant model, θ* may
represent the unknown coefficients of the numerator and denominator of the
plant model transfer function. An on-line parameter estimator generates
an estimate θ(t) of θ* at each time t by processing the plant input u and
output y. The parameter estimate θ(t) specifies an estimated plant model
characterized by P̂(θ(t)) that for control design purposes is treated as the
“true” plant model and is used to calculate the controller parameter or gain
vector θ_c(t) by solving a certain algebraic equation θ_c(t) = F(θ(t)) at each
time t. The form of the control law C(θ_c) and algebraic equation θ_c = F(θ)
is chosen to be the same as that of the control law C(θ*_c) and equation
θ*_c = F(θ*) that could be used to meet the performance requirements for
the plant model P(θ*) if θ* was known. It is, therefore, clear that with this
approach, C(θ_c(t)) is designed at each time t to satisfy the performance
requirements for the estimated plant model P̂(θ(t)), which may be different
from the unknown plant model P(θ*). Therefore, the principal problem in
indirect adaptive control is to choose the class of control laws C(θ_c) and the
class of parameter estimators that generate θ(t) as well as the algebraic
equation θ_c(t) = F(θ(t)) so that C(θ_c(t)) meets the performance requirements
for the plant model P(θ*) with unknown θ*. We will study this problem in
great detail in Chapters 6 and 7, and consider the robustness properties of
indirect adaptive control in Chapters 8 and 9. The block diagram of an
indirect adaptive control scheme is shown in Figure 1.6.
[Figure 1.6 Indirect adaptive control: the controller C(θ_c), driven by the
input command r, applies u to the plant P(θ*); an on-line parameter
estimator processes u and y to produce the estimate θ(t), and a calculation
block θ_c(t) = F(θ(t)) supplies the controller parameter vector θ_c to the
controller.]

In direct adaptive control, the plant model P(θ*) is parameterized in
terms of the unknown controller parameter vector θ*_c, for which C(θ*_c)
meets the performance requirements, to obtain the plant model P_c(θ*_c)
with exactly the same input/output characteristics as P(θ*).
The on-line parameter estimator is designed based on P_c(θ*_c) instead of
P(θ*) to provide direct estimates θ_c(t) of θ*_c at each time t by processing
the plant input u and output y. The estimate θ_c(t) is then used to update
the controller parameter vector θ_c without intermediate calculations. The
choice of the class of control laws C(θ_c) and parameter estimators generating
θ_c(t) for which C(θ_c(t)) meets the performance requirements for the plant
model P(θ*) is the fundamental problem in direct adaptive control. The
properties of the plant model P(θ*) are crucial in obtaining the parameterized
plant model P_c(θ*_c) that is convenient for on-line estimation. As a result,
direct adaptive control is restricted to a certain class of plant models. As we
will show in Chapter 6, a class of plant models that is suitable for direct
adaptive control consists of all SISO LTI plant models that are minimum-phase,
i.e., their zeros are located in Re[s] < 0. The block diagram of direct adaptive
control is shown in Figure 1.7.
The principle behind the design of direct and indirect adaptive control
shown in Figures 1.6 and 1.7 is conceptually simple. The design of C(θ_c)
treats the estimates θ_c(t) (in the case of direct adaptive control) or the
estimates θ(t) (in the case of indirect adaptive control) as if they were the
true parameters. This design approach is called certainty equivalence and can
be used to generate a wide class of adaptive control schemes by combining
different on-line parameter estimators with different control laws.
[Figure 1.7 Direct adaptive control: the controller C(θ_c), driven by the
input command r, applies u to the plant P(θ*) → P_c(θ*_c); an on-line
parameter estimator processes u and y to produce the estimate θ_c of θ*_c,
which directly updates the controller.]
The idea behind the certainty equivalence approach is that as the param-
eter estimates θ_c(t) and θ(t) converge to the true ones θ*_c and θ*,
respectively, the performance of the adaptive controller C(θ_c) tends to that
achieved by C(θ*_c) in the case of known parameters.
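To see the certainty equivalence idea in the simplest possible setting, the
sketch below treats an assumed scalar example (in the spirit of the simple
examples developed in later chapters, not a design taken verbatim from the
text): the plant ẋ = ax + u with unknown constant a. With a known,
u = -(a + a_m)x places the closed-loop pole at -a_m; the adaptive version
substitutes an on-line estimate â produced by a gradient-type update, which
for simplicity here assumes ẋ is available for measurement.

```python
import numpy as np

a_true, a_m, gamma = 2.0, 3.0, 5.0   # unknown plant parameter, desired pole, adaptation gain
dt, T = 1e-3, 10.0
x, a_hat = 1.0, 0.0                  # plant state and initial parameter estimate

for _ in range(int(T / dt)):
    u = -(a_hat + a_m) * x           # certainty-equivalence control law
    x_dot = a_true * x + u           # true (unknown to the controller) plant
    e = x_dot - (a_hat * x + u)      # error between plant and estimated model
    a_hat += gamma * e * x * dt      # gradient adaptive law
    x += x_dot * dt                  # Euler integration of the plant

print(f"x(T) = {x:.4e}, a_hat(T) = {a_hat:.3f}, true a = {a_true}")
```

Because x(t) is regulated to zero, the estimate â need not converge to the
true a (the regulation task does not provide a persistently exciting signal),
yet the control objective is still met; the interplay between parameter
convergence and control performance is treated at length in the chapters
that follow.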
The distinction between direct and indirect adaptive control may be con-

fusing to most readers for the following reasons: The direct adaptive control
structure shown in Figure 1.7 can be made identical to that of the indi-
rect adaptive control by including a block for calculations with an identity
transformation between updated parameters and controller parameters. In
general, for a given plant model the distinction between the direct and in-
direct approach becomes clear if we go into the details of design and anal-
ysis. For example, direct adaptive control can be shown to meet the per-
formance requirements, which involve stability and asymptotic tracking, for
a minimum-phase plant. It is still not clear how to design direct schemes
for nonminimum-phase plants. The difficulty arises from the fact that, in
general, a convenient (for the purpose of estimation) parameterization of the
plant model in terms of the desired controller parameters is not possible for
nonminimum-phase plant models.
Indirect adaptive control, on the other hand, is applicable to both
minimum- and nonminimum-phase plants. In general, however, the mapping
between θ(t) and θ_c(t), defined by the algebraic equation θ_c(t) = F(θ(t)),
cannot be guaranteed to exist at each time t giving rise to the so-called
stabilizability problem that is discussed in Chapter 7. As we will show in
