
Hans P. Geering
Optimal Control with Engineering Applications
With 12 Figures
Hans P. Geering, Ph.D.
Professor of Automatic Control and Mechatronics
Measurement and Control Laboratory
Department of Mechanical and Process Engineering
ETH Zurich
Sonneggstrasse 3
CH-8092 Zurich, Switzerland
Library of Congress Control Number: 2007920933
ISBN 978-3-540-69437-3 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material
is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broad-
casting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of
this publication or parts thereof is permitted only under the provisions of the German Copyright Law
of September 9, 1965, in its current version, and permission for use must always be obtained from
Springer. Violations are liable for prosecution under the German Copyright Law.
Springer is a part of Springer Science+Business Media
springer.com
© Springer-Verlag Berlin Heidelberg 2007
The use of general descriptive names, registered names, trademarks, etc. in this publication does not
imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
Typesetting: Camera ready by author


Production: LE-TeX Jelonek, Schmidt & Vöckler GbR, Leipzig
Cover design: eStudio Calamar S.L., F. Steinen-Broo, Girona, Spain
SPIN 11880127  7/3100/YL - 5 4 3 2 1 0
Printed on acid-free paper
Foreword
This book is based on the lecture material for a one-semester senior-year
undergraduate or first-year graduate course in optimal control which I have
taught at the Swiss Federal Institute of Technology (ETH Zurich) for more
than twenty years. The students taking this course are mostly students in
mechanical engineering and electrical engineering taking a major in control.
But there are also students in computer science and mathematics taking this
course for credit.
The only prerequisites for this book are the following: the reader should be familiar with
dynamics in general and with the state-space description of dynamic systems
in particular. Furthermore, the reader should have a fairly sound understand-
ing of differential calculus.
The text mainly covers the design of open-loop optimal controls with the help
of Pontryagin’s Minimum Principle, the conversion of optimal open-loop to
optimal closed-loop controls, and the direct design of optimal closed-loop
controls using the Hamilton-Jacobi-Bellman theory.
In these areas, the text also covers two special topics which are not usually
found in textbooks: the extension of optimal control theory to matrix-valued
performance criteria and Lukes’ method for the iterative design of approxi-
matively optimal controllers.
Furthermore, an introduction to the fantastic, but incredibly intricate, field
of differential games is given. The only reason for doing this is that
differential game theory has exactly one simple application, namely the LQ
differential game. It can be solved completely and it has a very attractive
connection to the H∞ method for the design of robust linear time-invariant
controllers for linear time-invariant plants. This route is the easiest entry
into H∞ theory. And I believe that every student majoring in control should
become an expert in H∞ control design, too.
The book contains a rather large variety of optimal control problems. Many
of these problems are solved completely and in detail in the body of the text.
Additional problems are given as exercises at the end of the chapters. The
solutions to all of these exercises are sketched in the Solution section at the
end of the book.
Acknowledgements
First, my thanks go to Michael Athans for enlightening me about the background
of optimal control in the first semester of my graduate studies at M.I.T. and
for allowing me to teach his course in my third year while he was on sabbatical
leave.
I am very grateful that Stephan A. R. Hepner pushed me from teaching the
geometric version of Pontryagin’s Minimum Principle along the lines of [2],
[20], and [14] (which almost no student understood because it is so easy, but
requires 3D vision) to teaching the variational approach as presented in this
text (which almost every student understands because it is so easy and does
not require any 3D vision).
I am indebted to Lorenz M. Schumann for his contributions to the material
on the Hamilton-Jacobi-Bellman theory and to Roberto Cirillo for explaining
Lukes’ method to me.

Furthermore, a large number of persons have supported me over the years. I
cannot mention all of them here. But certainly, I appreciate the continuous
support by Gabriel A. Dondi, Florian Herzog, Simon T. Keel, Christoph
M. Schär, Esfandiar Shafai, and Oliver Tanner over many years in all aspects
of my course on optimal control. Last but not least, I would like to mention my
secretary Brigitte Rohrbach, who has always eagle-eyed my texts for errors
and silly faults.
Finally, I thank my wife Rosmarie for not killing me or doing any other
harm to me during the very intensive phase of turning this manuscript into
a printable form.
Hans P. Geering
Fall 2006
Contents

List of Symbols  1

1  Introduction  3
1.1  Problem Statements  3
1.1.1  The Optimal Control Problem  3
1.1.2  The Differential Game Problem  4
1.2  Examples  5
1.3  Static Optimization  18
1.3.1  Unconstrained Static Optimization  18
1.3.2  Static Optimization under Constraints  19
1.4  Exercises  22

2  Optimal Control  23
2.1  Optimal Control Problems with a Fixed Final State  24
2.1.1  The Optimal Control Problem of Type A  24
2.1.2  Pontryagin's Minimum Principle  25
2.1.3  Proof  25
2.1.4  Time-Optimal, Frictionless, Horizontal Motion of a Mass Point  28
2.1.5  Fuel-Optimal, Frictionless, Horizontal Motion of a Mass Point  32
2.2  Some Fine Points  35
2.2.1  Strong Control Variation and Global Minimization of the Hamiltonian  35
2.2.2  Evolution of the Hamiltonian  36
2.2.3  Special Case: Cost Functional J(u) = ±x_i(t_b)  36
2.3  Optimal Control Problems with a Free Final State  38
2.3.1  The Optimal Control Problem of Type C  38
2.3.2  Pontryagin's Minimum Principle  38
2.3.3  Proof  39
2.3.4  The LQ Regulator Problem  41
2.4  Optimal Control Problems with a Partially Constrained Final State  43
2.4.1  The Optimal Control Problem of Type B  43
2.4.2  Pontryagin's Minimum Principle  43
2.4.3  Proof  44
2.4.4  Energy-Optimal Control  46
2.5  Optimal Control Problems with State Constraints  48
2.5.1  The Optimal Control Problem of Type D  48
2.5.2  Pontryagin's Minimum Principle  49
2.5.3  Proof  51
2.5.4  Time-Optimal, Frictionless, Horizontal Motion of a Mass Point with a Velocity Constraint  54
2.6  Singular Optimal Control  59
2.6.1  Problem Solving Technique  59
2.6.2  Goh's Fishing Problem  60
2.6.3  Fuel-Optimal Atmospheric Flight of a Rocket  62
2.7  Existence Theorems  65
2.8  Optimal Control Problems with a Non-Scalar-Valued Cost Functional  67
2.8.1  Introduction  67
2.8.2  Problem Statement  68
2.8.3  Geering's Infimum Principle  68
2.8.4  The Kalman-Bucy Filter  69
2.9  Exercises  72

3  Optimal State Feedback Control  75
3.1  The Principle of Optimality  75
3.2  Hamilton-Jacobi-Bellman Theory  78
3.2.1  Sufficient Conditions for the Optimality of a Solution  78
3.2.2  Plausibility Arguments about the HJB Theory  80
3.2.3  The LQ Regulator Problem  81
3.2.4  The Time-Invariant Case with Infinite Horizon  83
3.3  Approximatively Optimal Control  86
3.3.1  Notation  87
3.3.2  Lukes' Method  88
3.3.3  Controller with a Progressive Characteristic  92
3.3.4  LQQ Speed Control  96
3.4  Exercises  99

4  Differential Games  103
4.1  Theory  103
4.1.1  Problem Statement  104
4.1.2  The Nash-Pontryagin Minimax Principle  105
4.1.3  Proof  106
4.1.4  Hamilton-Jacobi-Isaacs Theory  107
4.2  The LQ Differential Game Problem  109
4.2.1  Solved with the Nash-Pontryagin Minimax Principle  109
4.2.2  Solved with the Hamilton-Jacobi-Isaacs Theory  111
4.3  H∞-Control via Differential Games  113

Solutions to Exercises  117
References  129
Index  131
List of Symbols
Independent Variables
t          time
t_a, t_b   initial time, final time
t_1, t_2   times in (t_a, t_b), e.g., starting and ending times of a singular arc
τ          a special time in [t_a, t_b]
Vectors and Vector Signals
u(t)      control vector, u(t) ∈ Ω ⊆ R^m
x(t)      state vector, x(t) ∈ R^n
y(t)      output vector, y(t) ∈ R^p
y_d(t)    desired output vector, y_d(t) ∈ R^p
λ(t)      costate vector, λ(t) ∈ R^n, i.e., vector of Lagrange multipliers
q         additive part of λ(t_b) = ∇_x K(x(t_b)) + q, which is involved in the transversality condition
λ_a, λ_b                   vectors of Lagrange multipliers
µ_0, ..., µ_(ℓ−1), µ(t)    scalar Lagrange multipliers
Sets
Ω ⊆ R^m                         control constraint
Ω_u ⊆ R^(m_u), Ω_v ⊆ R^(m_v)    control constraints in a differential game
Ω_x(t) ⊆ R^n                    state constraint
S ⊆ R^n                         target set for the final state x(t_b)
T(S, x) ⊆ R^n                   tangent cone of the target set S at x
T*(S, x) ⊆ R^n                  normal cone of the target set S at x
T(Ω, u) ⊆ R^m                   tangent cone of the constraint set Ω at u
T*(Ω, u) ⊆ R^m                  normal cone of the constraint set Ω at u

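To see how these symbols fit together, here is a minimal LaTeX sketch of a generic optimal control problem of the kind treated in Chapter 2. It is an illustration only, not a verbatim reproduction of the book's problem statements of Types A through D: the names f (system dynamics), L (integrand of the cost functional), K (final-state cost), H (Hamiltonian), and x_a (given initial state) are standard assumptions not defined in the excerpt above, and the condition q ∈ T*(S, x(t_b)) is the usual transversality requirement for a partially constrained final state, stated here as an assumption.

% Minimal sketch under the assumptions stated above; f, L, K, H, x_a are assumed names.
\[
  \min_{u(\cdot)} \; J(u) \;=\; K\bigl(x(t_b)\bigr)
  + \int_{t_a}^{t_b} L\bigl(x(t),u(t),t\bigr)\,dt
\]
\[
  \text{subject to}\quad
  \dot x(t) = f\bigl(x(t),u(t),t\bigr), \qquad
  x(t_a) = x_a, \qquad
  x(t_b) \in S \subseteq \mathbb{R}^n, \qquad
  u(t) \in \Omega \subseteq \mathbb{R}^m .
\]
% The costate \lambda(t) and the additive part q enter through the Hamiltonian and the
% transversality condition quoted in the definition of q above (assumed standard form):
\[
  H = L(x,u,t) + \lambda^{\top} f(x,u,t), \qquad
  \dot\lambda(t) = -\nabla_{x} H, \qquad
  \lambda(t_b) = \nabla_{x} K\bigl(x(t_b)\bigr) + q, \quad
  q \in T^{*}\bigl(S,\, x(t_b)\bigr).
\]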