
Ke-Lin Du
M.N.S. Swamy

Search and
Optimization by
Metaheuristics
Techniques
and Algorithms
Inspired by Nature





Ke-Lin Du
Xonlink Inc
Ningbo, Zhejiang
China
and
Department of Electrical and Computer Engineering
Concordia University
Montreal, QC
Canada

M.N.S. Swamy
Department of Electrical and Computer Engineering
Concordia University
Montreal, QC
Canada

ISBN 978-3-319-41191-0
ISBN 978-3-319-41192-7 (eBook)
DOI 10.1007/978-3-319-41192-7
Library of Congress Control Number: 2016943857
Mathematics Subject Classification (2010): 49-04, 68T20, 68W15
© Springer International Publishing Switzerland 2016
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or

for any errors or omissions that may have been made.
Printed on acid-free paper
This book is published under the trade name Birkhäuser
The registered company is Springer International Publishing AG Switzerland
(www.birkhauser-science.com)


To My Friends Jiabin Lu and Biaobiao Zhang
Ke-Lin Du

and

To My Parents
M.N.S. Swamy


Preface

Optimization is a branch of applied mathematics and numerical analysis. Almost
every problem in engineering, science, economics, and life can be formulated as an
optimization or a search problem. While some of these problems are simple enough
to be solved by traditional optimization methods based on mathematical analysis,
most are very hard to solve with analysis-based approaches. Fortunately, we can
tackle these hard optimization problems by drawing inspiration from nature, since
nature is a system of vast complexity that reliably generates near-optimum
solutions.
Natural computing is concerned with computing inspired by nature, as well as
with computations taking place in nature. Well-known examples of natural
computing are evolutionary computation, neural computation, cellular automata,
swarm intelligence, molecular computing, quantum computation, artificial immune
systems, and membrane computing. Together, they constitute the discipline of
computational intelligence.
Among all the nature-inspired computational paradigms, evolutionary computation
is the most influential. It is a computational method for obtaining the best
possible solutions in a huge solution space, based on Darwin's
survival-of-the-fittest principle. Evolutionary algorithms are a class of
effective global optimization techniques for many hard problems.
More and more biologically inspired methods have been proposed in the past two
decades. The most prominent are particle swarm optimization, ant colony
optimization, and immune algorithms. These methods are widely used because of
their particular strengths compared with evolutionary computation. All of these
biologically inspired methods are population-based: computation is performed by
autonomous agents, which exchange information through social behaviors. The
memetic algorithm models how knowledge propagates among animals.
There are also many other nature-inspired metaheuristics for search and
optimization. These include methods inspired by physical laws, chemical
reactions, biological phenomena, social behaviors, and animal thinking.
Metaheuristics are a class of intelligent self-learning algorithms for finding
near-optimum solutions to hard optimization problems, mimicking intelligent
processes and behaviors observed in nature, sociology, thinking, and other
disciplines. Metaheuristics may be nature-inspired paradigms, stochastic
algorithms, or probabilistic algorithms. Metaheuristics-based search and
optimization is widely used for fully automated decision-making and
problem-solving.
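Although the book's own examples are in MATLAB, the generic shape of such an algorithm, repeatedly proposing candidate solutions and accepting them under a randomized rule, can be sketched in a few lines. The following Python sketch is a minimal simulated annealing loop, the subject of Chapter 2; the objective function, step size, and cooling schedule are illustrative choices of ours, not taken from the book.

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.995,
                        iterations=2000, rng=None):
    """Minimize `objective` over the reals with a bare-bones annealing loop."""
    rng = rng or random.Random(0)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iterations):
        cand = x + rng.uniform(-step, step)   # random neighbor of current point
        fc = objective(cand)
        # Always accept improvements; accept worse moves with probability
        # exp(-delta / t), which shrinks as the temperature cools.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy objective: f(x) = (x - 3)^2, with its global minimum at x = 3.
sol, val = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=-10.0)
```

With the seeded generator above, the returned solution lands near the true minimizer even though the search starts far away, which is the behavior the preface describes: a near-optimum solution without any analysis of the objective.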
In this book, we provide a comprehensive introduction to nature-inspired
metaheuristic methods for search and optimization. While each
metaheuristics-based method has its specific strengths for particular cases,
according to the no-free-lunch theorem it performs, averaged over the entire set
of search and optimization problems, no better than random search. Thus, any
claim about the performance of an optimization method is always made relative to
benchmarking examples that represent some particular class of problems.
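The no-free-lunch argument can be made concrete with a toy experiment: a greedy hill climber beats random search on a landscape with exploitable structure, yet gains nothing on a landscape of pure noise. The Python sketch below is our own illustration, not from the book; the function names and parameters are hypothetical.

```python
import random

def hill_climb(f, x0, n, steps):
    """Greedy descent over integer points 0..n-1 (neighbors: x - 1 and x + 1)."""
    x = x0
    for _ in range(steps):
        nbrs = [nb for nb in (x - 1, x + 1) if 0 <= nb < n]
        best = min(nbrs, key=f)
        if f(best) >= f(x):      # local minimum reached: stop
            break
        x = best
    return x, f(x)

def random_search(f, n, samples, rng):
    """Evaluate uniformly random points and keep the best one seen."""
    x = min((rng.randrange(n) for _ in range(samples)), key=f)
    return x, f(x)

n = 1000
smooth = lambda x: (x - 700) ** 2           # structured landscape: one basin
rng = random.Random(7)
table = [rng.random() for _ in range(n)]    # unstructured landscape: noise
noisy = lambda x: table[x]

# Hill climbing exploits the smooth landscape's structure and walks straight
# to the basin bottom; on the noise landscape the same neighborhood
# information is worthless, and random search does just as well on average.
hc_smooth = hill_climb(smooth, x0=0, n=n, steps=n)   # reaches (700, 0)
rs_noisy = random_search(noisy, n, 200, random.Random(3))
```

On the smooth objective the climber finds the exact minimum; on the shuffled table it stalls at the first local dip, so no single strategy dominates across both problem classes.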
This book is intended as an accessible introduction to metaheuristic
optimization for a broad audience. It provides fundamental insights into
metaheuristic optimization and serves as a helpful starting point for those
interested in more in-depth studies. The computational paradigms described in
this book are general-purpose in nature. The book can be used as a textbook for
advanced undergraduate and graduate students, and all those interested in search
and optimization can benefit from it. Readers interested in a particular topic
will benefit from the appropriate chapter.
A roadmap for navigating the book is given as follows. Except for the
introductory Chapter 1, the contents of the book can be roughly divided into five
categories and an appendix.
• Evolution-based approach is covered in Chapters 3–8:
Chapter 3. Genetic Algorithms
Chapter 4. Genetic Programming
Chapter 5. Evolutionary Strategies
Chapter 6. Differential Evolution
Chapter 7. Estimation of Distribution Algorithms
Chapter 8. Topics in Evolutionary Algorithms
• Swarm intelligence-based approach is covered in Chapters 9–15:
Chapter 9. Particle Swarm Optimization
Chapter 10. Artificial Immune Systems
Chapter 11. Ant Colony Optimization
Chapter 12. Bee Metaheuristics
Chapter 13. Bacterial Foraging Algorithm
Chapter 14. Harmony Search
Chapter 15. Swarm Intelligence
• Sciences-based approach is covered in Chapters 2, 16–18:
Chapter 2. Simulated Annealing
Chapter 16. Biomolecular Computing

Chapter 17. Quantum Computing
Chapter 18. Metaheuristics Based on Sciences
• Human-based approach is covered in Chapters 19–21:



Chapter 19. Memetic Algorithms
Chapter 20. Tabu Search and Scatter Search
Chapter 21. Search Based on Human Behaviors
• General optimization problems are treated in Chapters 22–23:
Chapter 22. Dynamic, Multimodal, and Constrained Optimizations
Chapter 23. Multiobjective Optimization
• The appendix contains auxiliary benchmarks helpful to test new and existing
algorithms.
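The benchmarks in the appendix are closed-form test objectives of this kind. As one classic example (whether the appendix lists it in exactly this form is not guaranteed), the Rastrigin function is highly multimodal with a known global minimum at the origin, which makes it a convenient target for exercising a new algorithm:

```python
import math

def rastrigin(x):
    """Rastrigin function: highly multimodal; global minimum f(0, ..., 0) = 0."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

print(rastrigin([0.0, 0.0]))   # -> 0.0
```

Because the optimum and its value are known exactly, the gap between an algorithm's best found value and zero gives a direct quality measure.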
In this book, hundreds of different metaheuristic methods are introduced.
However, owing to space limitations, we give detailed descriptions of only the
most popular metaheuristic methods. Some computational examples for
representative metaheuristic methods are given, and the MATLAB codes for these
examples are available at the book website. We have also collected MATLAB codes
for some other metaheuristics. These codes are general-purpose in nature;
readers need only run them with their own objective functions.
For instructors, this book has been designed to serve as a textbook for courses
on evolutionary algorithms or nature-inspired optimization. It can be taught in
twelve two-hour sessions. We recommend that Chapters 1–11, 19, 22, and 23 be
taught. To acquire a mastery of these popular metaheuristic algorithms, some
programming exercises using the benchmark functions given in the appendix should
be assigned to the students. The MATLAB codes provided with the book are useful
for learning the algorithms.
For readers, we suggest starting with Chapter 1, which covers basic concepts in
optimization and metaheuristics. Once you have digested the basics, you can
delve into one or more specific metaheuristic paradigms that interest you or
that suit your specific problems. The MATLAB codes accompanying the book are
very useful for learning the popular algorithms, and they can be used directly
for solving your own problems. The benchmark functions are also very useful for
researchers evaluating their own algorithms.
We would like to thank Limin Meng (Zhejiang University of Technology,
China), and Yongyao Yang (SUPCON Group Inc, China) for their consistent
help. We would like to thank all the helpful and thoughtful staff at Xonlink Inc. Last
but not least, we would like to recognize the assistance of Benjamin Levitt and the
production team at Springer.
Ningbo, China
Montreal, Canada

Ke-Lin Du
M.N.S. Swamy


Contents

1  Introduction    1
   1.1  Computation Inspired by Nature    1
   1.2  Biological Processes    3
   1.3  Evolution Versus Learning    5
   1.4  Swarm Intelligence    6
        1.4.1  Group Behaviors    7
        1.4.2  Foraging Theory    8
   1.5  Heuristics, Metaheuristics, and Hyper-Heuristics    9
   1.6  Optimization    11
        1.6.1  Lagrange Multiplier Method    12
        1.6.2  Direction-Based Search and Simplex Search    13
        1.6.3  Discrete Optimization Problems    14
        1.6.4  P, NP, NP-Hard, and NP-Complete    16
        1.6.5  Multiobjective Optimization Problem    17
        1.6.6  Robust Optimization    19
   1.7  Performance Indicators    20
   1.8  No Free Lunch Theorem    22
   1.9  Outline of the Book    23
   References    25

2  Simulated Annealing    29
   2.1  Introduction    29
   2.2  Basic Simulated Annealing    30
   2.3  Variants of Simulated Annealing    33
   References    35

3  Genetic Algorithms    37
   3.1  Introduction to Evolutionary Computation    37
        3.1.1  Evolutionary Algorithms Versus Simulated Annealing    39
   3.2  Terminologies of Evolutionary Computation    39
   3.3  Encoding/Decoding    42
   3.4  Selection/Reproduction    43
   3.5  Crossover    46
   3.6  Mutation    48
   3.7  Noncanonical Genetic Operators    49
   3.8  Exploitation Versus Exploration    51
   3.9  Two-Dimensional Genetic Algorithms    55
   3.10 Real-Coded Genetic Algorithms    56
   3.11 Genetic Algorithms for Sequence Optimization    60
   References    64

4  Genetic Programming    71
   4.1  Introduction    71
   4.2  Syntax Trees    72
   4.3  Causes of Bloat    75
   4.4  Bloat Control    76
        4.4.1  Limiting on Program Size    77
        4.4.2  Penalizing the Fitness of an Individual with Large Size    77
        4.4.3  Designing Genetic Operators    77
   4.5  Gene Expression Programming    78
   References    80

5  Evolutionary Strategies    83
   5.1  Introduction    83
   5.2  Basic Algorithm    84
   5.3  Evolutionary Gradient Search and Gradient Evolution    85
   5.4  CMA Evolutionary Strategies    88
   References    90

6  Differential Evolution    93
   6.1  Introduction    93
   6.2  DE Algorithm    94
   6.3  Variants of DE    97
   6.4  Binary DE Algorithms    100
   6.5  Theoretical Analysis on DE    100
   References    101

7  Estimation of Distribution Algorithms    105
   7.1  Introduction    105
   7.2  EDA Flowchart    107
   7.3  Population-Based Incremental Learning    108
   7.4  Compact Genetic Algorithms    110
   7.5  Bayesian Optimization Algorithm    112
   7.6  Convergence Properties    112
   7.7  Other EDAs    113
        7.7.1  Probabilistic Model Building GP    115
   References    116

8  Topics in Evolutionary Algorithms    121
   8.1  Convergence of Evolutionary Algorithms    121
        8.1.1  Schema Theorem and Building-Block Hypothesis    121
        8.1.2  Finite and Infinite Population Models    123
   8.2  Random Problems and Deceptive Functions    125
   8.3  Parallel Evolutionary Algorithms    127
        8.3.1  Master–Slave Model    129
        8.3.2  Island Model    130
        8.3.3  Cellular EAs    132
        8.3.4  Cooperative Coevolution    133
        8.3.5  Cloud Computing    134
        8.3.6  GPU Computing    135
   8.4  Coevolution    136
        8.4.1  Coevolutionary Approaches    137
        8.4.2  Coevolutionary Approach for Minimax Optimization    138
   8.5  Interactive Evolutionary Computation    139
   8.6  Fitness Approximation    139
   8.7  Other Heredity-Based Algorithms    141
   8.8  Application: Optimizing Neural Networks    142
   References    146

9  Particle Swarm Optimization    153
   9.1  Introduction    153
   9.2  Basic PSO Algorithms    154
        9.2.1  Bare-Bones PSO    156
        9.2.2  PSO Variants Using Gaussian or Cauchy Distribution    157
        9.2.3  Stability Analysis of PSO    157
   9.3  PSO Variants Using Different Neighborhood Topologies    159
   9.4  Other PSO Variants    160
   9.5  PSO and EAs: Hybridization    164
   9.6  Discrete PSO    165
   9.7  Multi-swarm PSOs    166
   References    169

10 Artificial Immune Systems    175
   10.1 Introduction    175
   10.2 Immunological Theories    177
   10.3 Immune Algorithms    180
        10.3.1 Clonal Selection Algorithm    180
        10.3.2 Artificial Immune Network    184
        10.3.3 Negative Selection Algorithm    185
        10.3.4 Dendritic Cell Algorithm    186
   References    187

11 Ant Colony Optimization    191
   11.1 Introduction    191
   11.2 Ant-Colony Optimization    192
        11.2.1 Basic ACO Algorithm    194
        11.2.2 ACO for Continuous Optimization    195
   References    198

12 Bee Metaheuristics    201
   12.1 Introduction    201
   12.2 Artificial Bee Colony Algorithm    203
        12.2.1 Algorithm Flowchart    203
        12.2.2 Modifications on ABC Algorithm    207
        12.2.3 Discrete ABC Algorithms    208
   12.3 Marriage in Honeybees Optimization    209
   12.4 Bee Colony Optimization    210
   12.5 Other Bee Algorithms    211
        12.5.1 Wasp Swarm Optimization    212
   References    213

13 Bacterial Foraging Algorithm    217
   13.1 Introduction    217
   13.2 Bacterial Foraging Algorithm    219
   13.3 Algorithms Inspired by Molds, Algae, and Tumor Cells    222
   References    224

14 Harmony Search    227
   14.1 Introduction    227
   14.2 Harmony Search Algorithm    228
   14.3 Variants of Harmony Search    230
   14.4 Melody Search    233
   References    234

15 Swarm Intelligence    237
   15.1 Glowworm-Based Optimization    237
        15.1.1 Glowworm Swarm Optimization    238
        15.1.2 Firefly Algorithm    239
   15.2 Group Search Optimization    240
   15.3 Shuffled Frog Leaping    241
   15.4 Collective Animal Search    242
   15.5 Cuckoo Search    243
   15.6 Bat Algorithm    246
   15.7 Swarm Intelligence Inspired by Animal Behaviors    247
        15.7.1 Social Spider Optimization    247
        15.7.2 Fish Swarm Optimization    249
        15.7.3 Krill Herd Algorithm    250
        15.7.4 Cockroach-Based Optimization    251
        15.7.5 Seven-Spot Ladybird Optimization    252
        15.7.6 Monkey-Inspired Optimization    252
        15.7.7 Migrating-Based Algorithms    253
        15.7.8 Other Methods    254
   15.8 Plant-Based Metaheuristics    255
   15.9 Other Swarm Intelligence-Based Metaheuristics    257
   References    259

16 Biomolecular Computing    265
   16.1 Introduction    265
        16.1.1 Biochemical Networks    267
   16.2 DNA Computing    268
        16.2.1 DNA Data Embedding    271
   16.3 Membrane Computing    271
        16.3.1 Cell-Like P System    272
        16.3.2 Computing by P System    273
        16.3.3 Other P Systems    275
        16.3.4 Membrane-Based Optimization    277
   References    278

17 Quantum Computing    283
   17.1 Introduction    283
   17.2 Fundamentals    284
        17.2.1 Grover's Search Algorithm    286
   17.3 Hybrid Methods    287
        17.3.1 Quantum-Inspired EAs    287
        17.3.2 Other Quantum-Inspired Hybrid Algorithms    290
   References    291

18 Metaheuristics Based on Sciences    295
   18.1 Search Based on Newton's Laws    295
   18.2 Search Based on Electromagnetic Laws    297
   18.3 Search Based on Thermal-Energy Principles    298
   18.4 Search Based on Natural Phenomena    299
        18.4.1 Search Based on Water Flows    299
        18.4.2 Search Based on Cosmology    301
        18.4.3 Black Hole-Based Optimization    302
   18.5 Sorting    303
   18.6 Algorithmic Chemistries    304
        18.6.1 Chemical Reaction Optimization    304
   18.7 Biogeography-Based Optimization    306
   18.8 Methods Based on Mathematical Concepts    309
        18.8.1 Opposition-Based Learning    310
   References    311

19 Memetic Algorithms    315
   19.1 Introduction    315
   19.2 Cultural Algorithms    316
   19.3 Memetic Algorithms    318
        19.3.1 Simplex-Based Memetic Algorithms    320
   19.4 Application: Searching Low Autocorrelation Sequences    321
   References    324

20 Tabu Search and Scatter Search    327
   20.1 Tabu Search    327
        20.1.1 Iterative Tabu Search    330
   20.2 Scatter Search    331
   20.3 Path Relinking    333
   References    335

21 Search Based on Human Behaviors    337
   21.1 Seeker Optimization Algorithm    337
   21.2 Teaching–Learning-Based Optimization    338
   21.3 Imperialist Competitive Algorithm    340
   21.4 Several Metaheuristics Inspired by Human Behaviors    342
   References    345

22 Dynamic, Multimodal, and Constrained Optimizations    347
   22.1 Dynamic Optimization    347
        22.1.1 Memory Scheme    348
        22.1.2 Diversity Maintaining or Reinforcing    348
        22.1.3 Multiple Population Scheme    349
   22.2 Multimodal Optimization    350
        22.2.1 Crowding and Restricted Tournament Selection    351
        22.2.2 Fitness Sharing    353
        22.2.3 Speciation    354
        22.2.4 Clearing, Local Selection, and Demes    356
        22.2.5 Other Methods    357
        22.2.6 Metrics for Multimodal Optimization    359
   22.3 Constrained Optimization    359
        22.3.1 Penalty Function Method    360
        22.3.2 Using Multiobjective Optimization Techniques    363
   References    365

23 Multiobjective Optimization    371
   23.1 Introduction    371
   23.2 Multiobjective Evolutionary Algorithms    373
        23.2.1 Nondominated Sorting Genetic Algorithm II    374
        23.2.2 Strength Pareto Evolutionary Algorithm 2    377
        23.2.3 Pareto Archived Evolution Strategy (PAES)    378
        23.2.4 Pareto Envelope-Based Selection Algorithm    379
        23.2.5 MOEA Based on Decomposition (MOEA/D)    380
        23.2.6 Several MOEAs    381
        23.2.7 Nondominated Sorting    384
        23.2.8 Multiobjective Optimization Based on Differential Evolution    385
   23.3 Performance Metrics    386
   23.4 Many-Objective Optimization    389
        23.4.1 Challenges in Many-Objective Optimization    389
        23.4.2 Pareto-Based Algorithms    391
        23.4.3 Decomposition-Based Algorithms    393
   23.5 Multiobjective Immune Algorithms    394
   23.6 Multiobjective PSO    395
   23.7 Multiobjective EDAs    398
   23.8 Tabu/Scatter Search Based Multiobjective Optimization    399
   23.9 Other Methods    400
   23.10 Coevolutionary MOEAs    402
   References    403

Appendix A: Benchmarks    413

Index    431

Abbreviations

Ab           Antibody
ABC          Artificial bee colony
AbYSS        Archive-based hybrid scatter search
ACO          Ant colony optimization
ADF          Automatically defined function
AI           Artificial intelligence
aiNet        Artificial immune network
AIS          Artificial immune system
BBO          Biogeography-based optimization
BFA          Bacterial foraging algorithm
BMOA         Bayesian multiobjective optimization algorithm
CCEA         Cooperative coevolutionary algorithm
cGA          Compact GA
CLONALG      Clonal selection algorithm
CMA          Covariance matrix adaptation
C-MOGA       Cellular multiobjective GA
COMIT        Combining optimizers with mutual information trees algorithm
COP          Combinatorial optimization problem
CRO          Chemical reaction optimization
CUDA         Compute unified device architecture
DE           Differential evolution
DEMO         DE for multiobjective optimization
DMOPSO       Dynamic population multiple-swarm multiobjective PSO
DNA          Deoxyribonucleic acid
DOP          Dynamic optimization problem
DSMOPSO      Dynamic multiple swarms in multiobjective PSO
DT-MEDA      Decision-tree-based multiobjective EDA
EA           Evolutionary algorithms
EASEA        Easy specification of EA
EBNA         Estimation of Bayesian networks algorithm
EDA          Estimation of distribution algorithm
EGNA         Estimation of Gaussian networks algorithm
ELSA         Evolutionary local selection algorithm
EPUS-PSO     Efficient population utilization strategy for PSO
ES           Evolution strategy
FDR-PSO      Fitness-distance-ratio-based PSO
G3           Generalized generation gap
GA           Genetic algorithm
GEP          Gene expression programming
GP           Genetic programming
GPU          Graphics processing unit
HypE         Hypervolume-based algorithm
IDCMA        Immune dominance clonal multiobjective algorithm
IDEA         Iterated density-estimation EA
IEC          Interactive evolutionary computation
IMOEA        Incrementing MOEA
IMOGA        Incremental multiple-objective GA
LABS         Low autocorrelation binary sequences
LCSS         Longest common subsequence
LDWPSO       Linearly decreasing weight PSO
LMI          Linear matrix inequality
MCMC         Markov chain Monte Carlo
meCGA        Multiobjective extended compact GA
MIMD         Multiple instruction multiple data
MIMIC        Mutual information maximization for input clustering
MISA         Multiobjective immune system algorithm
MOEA/D       MOEA based on decomposition
MOGA         Multiobjective GA
MOGLS        Multiple-objective genetic local search
mohBOA       Multiobjective hierarchical BOA
MOP          Multiobjective optimization problem
moPGA        Multiobjective parameterless GA
MPMO         Multiple populations for multiple objectives
MST          Minimum spanning tree
MTSP         Multiple traveling salesmen problem
NetKeys      Network random keys
NMR          Nuclear magnetic resonance
NNIA         Nondominated neighbor immune algorithm
NPGA         Niched-Pareto GA
NSGA         Nondominated sorting GA
opt-aiNet    Optimized aiNet
PAES         Pareto archived ES
PBIL         Population-based incremental learning
PCB          Printed circuit board
PCSEA        Pareto corner search EA
PCX          Parent-centric recombination
PICEA        Preference-inspired coevolutionary algorithm
PIPE         Probabilistic incremental program evolution
POLE         Program optimization with linkage estimation
PSL          Peak sidelobe level
PSO          Particle swarm optimization
QAP          Quadratic assignment problem
QSO          Quantum swarm optimization
REDA         Restricted Boltzmann machine-based multiobjective EDA
RM-MEDA      Regularity model-based multiobjective EDA
SA           Simulated annealing
SAGA         Speciation adaptation GA
SAMC         Stochastic approximation Monte Carlo
SDE          Shift-based density estimation
SIMD         Single instruction multiple data
SPEA         Strength Pareto EA
SVLC         Synapsing variable-length crossover
TLBO         Teaching–learning-based optimization
TOPSIS       Technique for order preference by similarity to an ideal solution
TSP          Traveling salesman problem
TVAC         Time-varying acceleration coefficients
UMDA         Univariate marginal distribution algorithm
UNBLOX       Uniform block crossover
VEGA         Vector-evaluated GA
VIV          Virtual virus


1 Introduction

This chapter introduces background material on global optimization and the concept of metaheuristics. Basic concepts of optimization, swarm intelligence, biological processes, evolution versus learning, and the no-free-lunch theorem are described. We hope this chapter will arouse your interest in reading the other chapters.

1.1 Computation Inspired by Nature
Artificial intelligence (AI) is an old discipline concerned with making intelligent machines. Search is a key concept of AI, because it serves all its disciplines. In general, the search spaces of practical problems are so large that they cannot be enumerated, which rules out traditional calculus-based and enumeration-based methods. Computational intelligence paradigms were initiated for this purpose, and the approach mainly depends on the cooperation of agents.
Optimization is the process of searching for the optimal solution. The three search mechanisms are analytical, enumerative, and heuristic search techniques. Analytical search is calculus-based: the search may be guided by the gradient or the Hessian of the objective function, leading to a local minimum. Random search and enumeration are unguided methods that simply sample or exhaustively traverse the search space in search of the optimal solution. Heuristic search is guided search that in most cases produces high-quality solutions.
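The contrast between unguided and guided search can be sketched in a few lines of Python. The one-dimensional test function, step size, and sample budget below are illustrative choices, not taken from the text; the hill climber stands in for a simple guided heuristic.

```python
import random

def f(x):
    # A simple one-dimensional test function with its minimum at x = 3.
    return (x - 3.0) ** 2

def random_search(trials=1000):
    # Unguided search: sample the space blindly and keep the best point.
    return min((random.uniform(-10, 10) for _ in range(trials)), key=f)

def hill_climb(x=0.0, step=0.1, iters=1000):
    # Guided (heuristic) search: accept a neighbor only if it improves f.
    for _ in range(iters):
        candidate = x + random.choice([-step, step])
        if f(candidate) < f(x):
            x = candidate
    return x  # typically a value near 3.0
```

With the same evaluation budget, the guided climber homes in on the minimum, while random search merely stumbles near it; for multimodal functions, however, the climber can get trapped in a local minimum, which is exactly the limitation that metaheuristics address.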
Computational intelligence is a field of AI. It investigates adaptive mechanisms to facilitate intelligent behaviors in complex environments. Unlike classical AI, which relies on knowledge derived from human expertise, computational intelligence depends on collected numerical data. It comprises a set of nature-inspired computational paradigms. Major subjects in computational intelligence include neural networks for pattern recognition, fuzzy systems for reasoning under uncertainty, and evolutionary computation for stochastic optimization search.
© Springer International Publishing Switzerland 2016
K.-L. Du and M.N.S. Swamy, Search and Optimization by Metaheuristics,
DOI 10.1007/978-3-319-41192-7_1




Nature is the primary source of inspiration for new computational paradigms. For instance, Wiener's cybernetics was inspired by the feedback control processes observable in biological systems. Changes in nature, from the microscopic scale to the ecological scale, can be treated as computations. Natural processes tend to reach an equilibrium that is in some sense optimal. Such analogies can be exploited to find useful solutions for search and optimization. Examples of natural computing paradigms are artificial neural networks [43], simulated annealing (SA) [37], genetic algorithms [30], swarm intelligence [22], artificial immune systems [16], DNA-based molecular computing [1], quantum computing [28], membrane computing [51], and cellular automata (von Neumann 1966).
From bacteria to humans, biological entities engage in social interaction ranging from altruistic cooperation to conflict. Swarm intelligence borrows the idea of the collective behavior of biological populations. Cooperative problem-solving is an approach that achieves a certain goal through the cooperation of a group of autonomous entities. Cooperation mechanisms are common in agent-based computing paradigms, whether biologically based or not. Cooperative behavior has inspired research in biology, economics, and multi-agent systems. This approach is based on the notion of the payoffs associated with pursuing certain strategies.
Game theory studies situations of competition and cooperation between multiple parties. The discipline started with von Neumann's study of zero-sum games [48]. It has many applications in strategic warfare, economic and social problems, animal behavior, and political voting.
Evolutionary computation, DNA computing, and membrane computing depend on knowledge of the microscopic cell structure of life. Evolutionary computation evolves a population of individuals over generations, generating offspring by mutation and recombination and selecting the fittest to survive into each new generation. DNA computing and membrane computing are emerging computational paradigms at the molecular level.
Quantum computing is characterized by principles of quantum mechanics, combined with computational intelligence [46]. Quantum mechanics is a mathematical
framework or set of rules for the construction of physical theories.
All effective formal behaviors can be simulated by Turing machines. For physical devices used for computational purposes, it is widely assumed that all physical machine behaviors can be simulated by Turing machines. When a computational model computes the same class of functions as the Turing machine, but potentially faster, it is called a super-Turing model. Hypercomputation refers to computation that goes beyond the Turing limit, in the sense of super-Turing computation. While Deutsch's (1985) universal quantum computer is a super-Turing model, it is not hypercomputational. The physicality of hypercomputational behavior is considered in [55] from first principles, by showing that quantum theory can be reformulated in a way that explains why physical behaviors can be regarded as computing something in the standard computational, state-machine sense.




1.2 Biological Processes
Deoxyribonucleic acid (DNA) is the carrier of the genetic information of organisms. Nucleic acids are linear unbranched polymers, i.e., chain molecules, of nucleotides. Nucleotides are divided into purines (adenine, A; guanine, G) and pyrimidines (thymine, T; cytosine, C). DNA is organized into a double helix structure, in which complementary nucleotides (bases) are paired with each other: A with T, and G with C.
The DNA structure is shown in Figure 1.1. The double helix, composed of phosphate groups (triangles) and sugar components (squares), is the backbone of the DNA structure. The double helix is stabilized by two hydrogen bonds between A and T, and three hydrogen bonds between G and C.
A sequence of three nucleotides is a codon, or triplet. With three exceptions, each of the 4³ = 64 codons codes for one of 20 amino acids, with synonymous codons coding for identical amino acids. Proteins are polypeptide chains built from the 20 amino acids. Each amino acid consists of a carboxyl group and an amino group; amino acids differ in their side groups, which may contain the hexagonal benzene ring. The peptide bond in long polypeptide chains forms between the amino group of one molecule and the carboxyl group of its neighbor. Proteins are the basic modules of all cells and the actors of life processes. They fold into characteristic three-dimensional structures, e.g., the alpha helix.
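The complementary pairing and triplet reading described above can be illustrated in a few lines; the three-entry codon table below is a tiny excerpt of the standard genetic code, shown for illustration only (the full table maps all 64 codons to the 20 amino acids plus stop signals).

```python
# Watson-Crick complementary base pairing: A-T, G-C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    # Build the complementary strand, base by base.
    return "".join(COMPLEMENT[b] for b in strand)

# A three-entry excerpt of the standard genetic code: codon -> amino acid.
CODON_TABLE = {"ATG": "Met", "TGG": "Trp", "GGC": "Gly"}

def translate(strand):
    # Read the strand codon by codon (non-overlapping triplets of bases).
    return [CODON_TABLE.get(strand[i:i + 3], "?") for i in range(0, len(strand), 3)]

print(complement("ACTG"))      # TGAC
print(translate("ATGTGGGGC"))  # ['Met', 'Trp', 'Gly']
```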
The human genome is about 3 billion base pairs long and specifies about 20,488 genes, arranged in 23 pairs of homologous chromosomes. The DNA from a single human cell has an overall length of about 2.6 m when unraveled and stretched out, but it is compressed in the nucleus to a size of about 200 µm. Locations on these chromosomes are referred to as loci. A locus with a specific function is known as a gene. The state of the genes is called the genotype, and the observable expression of the genotype is called the phenotype. A genetic marker is a locus with a known DNA sequence that can be found in each person in the general population.
The transformation from genotype to phenotype is called gene expression. In the transcription phase, the DNA is transcribed into RNA. In the translation phase, the RNA then directs the synthesis of proteins.

Figure 1.1 The DNA structure.



Figure 1.2 A gene on a chromosome (Courtesy U.S. Department of Energy, Human Genome Program).

Figure 1.2 displays a chromosome, its DNA makeup, and identifies one gene. The genome directs the construction of a phenotype, in particular because the genes specify sequences of amino acids which, when properly folded, become proteins. The phenotype, in turn, contains the genome and provides the environment necessary for the survival, maintenance, and replication of the genome.
Heredity is relevant to information theory as a communication process [5]. The conservation of genomes over intervals at the geological timescale and the existence of mutations at shorter intervals can be reconciled by assuming that genomes possess intrinsic error-correcting codes. The constraints incurred by DNA molecules result in a nested structure. Genomic codes resemble modern codes such as low-density parity-check (LDPC) codes and turbocodes [5]. The high redundancy of genomes achieves good error-correction performance by simple means. At the same time, DNA is a cheap material.
In AI, some of the most important components comprise the processes of memory formation, filtering, and pattern recognition. In biological systems, such as the human brain, a model can be constructed of a network of neurons that fire signals with different temporal patterns for various input signals. The unit pulse is called an action potential, involving a depolarization of the cell membrane and the subsequent repolarization to the resting potential. The physical basis of this unit pulse lies in the active transport of ions by chemical pumps [29]. The learning process is achieved by exploiting the plasticity of the weights with which the neurons are connected to one another. In biological nervous systems, input data are first processed locally and then sent to the central nervous system [33]. This preprocessing partly serves to avoid overburdening the central nervous system.
Connectionist systems (neural networks) are mainly based on a single brain-like connectionist principle of information processing, in which learning and information exchange occur in the connections. In [36], the connectionist paradigm is extended to integrative connectionist learning systems that integrate, in their structure and learning algorithms, principles from different hierarchical levels of information processing in the brain, including the neuronal, genetic, and quantum levels. Spiking neural networks are used as a basic connectionist learning model.
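The connectionist principle, that learning is stored entirely in connection weights, can be sketched with a single artificial neuron trained by the classic perceptron rule; the logical-AND training task, learning rate, and epoch count are illustrative choices.

```python
def step(z):
    # Threshold activation: fire (1) only if the weighted input is positive.
    return 1 if z > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # connection weights: the only place learning is stored
    b = 0.0         # bias
    for _ in range(epochs):
        for x, target in samples:
            y = step(w[0] * x[0] + w[1] * x[1] + b)
            err = target - y
            # Weight plasticity: each connection changes in proportion
            # to the output error and its input signal.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Logical AND as a toy training task.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, the neuron reproduces the AND function purely through the values of its two weights and bias, with no stored rules or symbols, which is the point of the connectionist view.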




1.3 Evolution Versus Learning
The adaptation of creatures to their environments results from the interaction of two processes, namely, evolution and learning. Evolution is a slow stochastic process at the population level that determines the basic structures of a species. Evolution operates on biological entities, rather than on the individuals themselves. At the other extreme, learning is a process of gradually improving an individual's capability to adapt to its environment by tuning the structure of the individual.
Evolution is based on the Darwinian model, also called the principle of natural selection or survival of the fittest, while learning is based on the connectionist model of the human brain. In Darwinian evolution, knowledge acquired by an individual during its lifetime cannot be transferred into its genome and subsequently passed on to the next generation. Evolutionary algorithms (EAs) are stochastic search methods based on the Darwinian model, whereas neural networks are learning methods based on the connectionist model.
Combinations of learning and evolution, embodied by evolving neural networks, have better adaptability to a dynamic environment [39,66]. Evolution and learning can interact in the form of Lamarckian evolution or through the Baldwin effect. Both approaches use learning to accelerate evolution.
The Lamarckian strategy allows the traits acquired during an individual's life to be written back into the genetic code, so that the offspring can inherit its characteristics. Everything an individual learns during its life is encoded back into the chromosome and remains in the population. Although Lamarckian evolution is biologically implausible, EAs as artificial biological systems can benefit from the Lamarckian theory. Ideas and knowledge are passed from generation to generation, and the Lamarckian theory can be used to characterize the evolution of human cultures. Lamarckian evolution has proved effective in computer applications. Nevertheless, the Lamarckian strategy has been pointed out to distort the population so that the schema theorem no longer applies [62].
The Baldwin effect is biologically more plausible. In the Baldwin effect, learning has an indirect influence: learning makes individuals adapt better to their environments, thus increasing their reproduction probability. In effect, learning smoothes the fitness landscape and thus facilitates evolution [27]. On the other hand, learning has a cost, so there is evolutionary pressure to find instinctive replacements for learned behaviors. When a population evolves a new behavior, there is a selective pressure in favor of learning in the early phase, and a selective pressure in favor of instinct in the later phase. Strong bias is analogous to instinct, and weak bias is analogous to learning [60]. The Baldwin effect only alters the fitness landscape, and the basic evolutionary mechanism remains purely Darwinian. Thus, the schema theorem still applies under the Baldwin effect [59].
A parent cannot pass its learned traits to its offspring; instead, only the fitness after learning is retained. In other words, the learned behaviors become instinctive behaviors in subsequent generations, with no direct alteration of the genotype. The acquired traits finally come under direct genetic control after many generations, a process known as genetic assimilation. The Baldwin effect is purely Darwinian, not Lamarckian, in its mechanism, although it has consequences similar to those of Lamarckian evolution [59]. A computational model of the Baldwin effect is presented in [27].
Hybridization of EAs and local search can be based either on the Lamarckian strategy or on the Baldwin effect. Local search corresponds to phenotypic plasticity in biological evolution. Hybrid methods based on the Lamarckian strategy and the Baldwin effect have been very successful, with numerous implementations.
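The difference between the two hybridization strategies can be sketched for a single individual. The toy fitness function and the hill-climbing routine standing in for lifetime learning are illustrative assumptions, not taken from the text.

```python
def fitness(x):
    # Toy fitness: higher is better, with the maximum at x = 5.
    return -(x - 5.0) ** 2

def local_search(x, step=0.5, iters=20):
    # Simple hill climbing, standing in for lifetime learning.
    for _ in range(iters):
        for cand in (x - step, x + step):
            if fitness(cand) > fitness(x):
                x = cand
    return x

def evaluate(genotype, strategy):
    learned = local_search(genotype)
    if strategy == "lamarckian":
        # Acquired traits are written back into the genotype.
        return learned, fitness(learned)
    else:  # "baldwinian"
        # Only the improved fitness is kept; the genotype is unchanged,
        # so learning reshapes the fitness landscape, not the chromosome.
        return genotype, fitness(learned)

evaluate(1.0, "lamarckian")  # genotype moves to the learned value
evaluate(1.0, "baldwinian")  # genotype stays where it was
```

Both strategies assign the same improved fitness, but only the Lamarckian one alters the inherited genotype, which is precisely why the schema theorem survives in the Baldwinian case.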

1.4 Swarm Intelligence
The term swarm intelligence was introduced in 1989, in the context of cellular robotic systems [6]. Swarm intelligence is the collective intelligence of groups of simple agents [8]. It deals with the collective behaviors of decentralized, self-organized swarms, which result from the local interactions of individual components with one another and with their environment [8]. Although there is normally no centralized control structure dictating how individual agents should behave, local interactions among such agents often lead to the emergence of global behavior.
Most species of animals show social behaviors. Biological entities often engage in a rich repertoire of social interaction that ranges from altruistic cooperation to open conflict. Well-known examples of swarms are bird flocks, herds of quadrupeds, and fish schools among vertebrates, bacterial molds, and the colonies of social insects such as termites, ants, bees, and cockroaches, which perform collective behaviors. Through flocking, individuals gain a number of advantages, such as a reduced chance of being captured by predators, following migration routes precisely and robustly through collective sensing, improved energy efficiency during travel, and the opportunity for mating.
The concept of individual–organization [57] has been widely used to understand the collective behavior of animals. The principle of individual–organization indicates that simple repeated interactions between individuals can produce complex behavioral patterns at the group level [57]. The agents of these swarms behave without supervision, and each agent has a stochastic behavior owing to its perception of, and influence on, its neighborhood and the environment. The behaviors can be accurately described in terms of individuals following simple sets of rules. The existence of collective memory in animal groups [15] establishes that the previous history of the group structure influences the collective behavior in later stages.
Individuals in groups often have to make rapid decisions about where to move or what behavior to perform, in uncertain or dangerous environments. Groups are often composed of individuals that differ in their informational status, and individuals are usually not aware of the informational state of others. Some animal groups are based on a hierarchical structure according to a fitness principle known as dominance. The top member of the group leads all the others, e.g., in the case of lions, monkeys, and deer. Such animal behaviors lead to stable groups with better cohesion among individuals [9]. Some animals, such as birds, fish, and sheep, live in groups but have no leader. These animals have no global knowledge of their group or environment; instead, they move through the environment by exchanging information with their adjacent members.
Different swarm intelligence systems have inspired several approaches, including particle swarm optimization (PSO) [21], based on the movement of bird flocks and fish schools; the immune algorithm, inspired by the immune systems of mammals; bacterial foraging optimization [50], which models the chemotactic behavior of Escherichia coli; ant colony optimization (ACO) [17], inspired by the foraging behavior of ants; and artificial bee colony (ABC) [35], based on the foraging behavior of honeybee swarms.
Unlike EAs, which are primarily competitive within the population, PSO and ACO adopt a more cooperative strategy. They can be treated as ontogenetic, since the population resembles a multicellular organism optimizing its performance by adapting to its environment.
Many population-based metaheuristics are actually social algorithms. The cultural algorithm [53] was introduced for modeling social evolution and learning. Ant colony optimization is a metaheuristic inspired by the behavior of ant colonies in finding the shortest path to food sources. Particle swarm optimization is inspired by the social behavior and movement dynamics of insect swarms, bird flocking, and fish schooling. Artificial immune systems are inspired by biological immune systems, and exploit their characteristics of learning and memory to solve optimization problems. The society and civilization method [52] utilizes the intra- and intersociety interactions within a society and the civilization model.
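The cooperative character of these swarm algorithms can be illustrated with the canonical PSO update, in which each particle is drawn toward its own best position and the swarm's best position. The parameter values (inertia 0.7, acceleration coefficients 1.5) and the sphere test function are typical illustrative choices, not prescriptions from the text.

```python
import random

def pso(f, dim=2, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    # Canonical PSO: cooperation enters through the shared global best.
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    pbest = [x[:] for x in X]             # personal best positions
    gbest = min(pbest, key=f)[:]          # swarm's global best position
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])   # cognitive pull
                           + c2 * r2 * (gbest[d] - X[i][d]))     # social pull
                X[i][d] += V[i][d]
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
                if f(X[i]) < f(gbest):
                    gbest = X[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
# pso(sphere) returns a point near the origin, the minimum of the sphere function
```

No particle knows the landscape globally; the swarm converges because each member combines its own memory with information shared by its neighbors, mirroring the leaderless animal groups described above.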

1.4.1 Group Behaviors
In animal behavioral ecology, group living is a widespread phenomenon. Animal search behavior is an active movement by which an animal attempts to find resources such as food, mates, or oviposition or nesting sites. In nature, group members often have different search and competitive abilities. Subordinates, which are less efficient foragers than the dominant, will be dispersed from the group. Dispersed animals may adopt ranging behavior to explore and colonize new habitats.
Group search usually involves two foraging strategies within the group: producing (searching for food) and joining (scrounging). Joining is a ubiquitous trait found in most social animals, such as birds, fish, spiders, and lions. Two models for analyzing the optimal joining policy are the information-sharing model [13] and the producer–scrounger model [4]. The information-sharing model assumes that foragers search concurrently for their own resources while watching for opportunities to join. In the producer–scrounger model, foragers are assumed to use producing (finding) or joining (scrounging) strategies exclusively; they are divided into leaders and followers.
For the joining policy of ground-feeding birds, the producer–scrounger model is more plausible than the information-sharing model. In the producer–scrounger model, three basic scrounging strategies are observed in house sparrows (Passer domesticus): area copying, moving across to search in the immediate area around the producer,

