

Handbook of Virtual Humans

Edited by

N. Magnenat-Thalmann
MIRALab, University of Geneva,
Switzerland

D. Thalmann
VRlab, EPFL, Switzerland





Copyright © 2004 John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester,
West Sussex PO19 8SQ, England
Telephone (+44) 1243 779777

Email (for orders and customer service enquiries):
Visit our Home Page on www.wileyeurope.com or www.wiley.com
All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or
transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or
otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of
a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP,
UK, without the permission in writing of the Publisher. Requests to the Publisher should be addressed
to the Permissions Department, John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West
Sussex PO19 8SQ, England, or emailed to , or faxed to (+44) 1243 770571.
This publication is designed to provide accurate and authoritative information in regard to the subject
matter covered. It is sold on the understanding that the Publisher is not engaged in rendering
professional services. If professional advice or other expert assistance is required, the services of a
competent professional should be sought.
Other Wiley Editorial Offices
John Wiley & Sons Inc., 111 River Street, Hoboken, NJ 07030, USA
Jossey-Bass, 989 Market Street, San Francisco, CA 94103-1741, USA
Wiley-VCH Verlag GmbH, Boschstr. 12, D-69469 Weinheim, Germany
John Wiley & Sons Australia Ltd, 33 Park Road, Milton, Queensland 4064, Australia
John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809
John Wiley & Sons Canada Ltd, 22 Worcester Road, Etobicoke, Ontario, Canada M9W 1L1

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
ISBN 0-470-02316-3
Typeset in 10/12pt Times by Integra Software Services Pvt. Ltd, Pondicherry, India

Printed and bound in Great Britain by TJ International Ltd, Padstow, Cornwall
This book is printed on acid-free paper responsibly manufactured from sustainable forestry
in which at least two trees are planted for each one used for paper production.


Contents
Preface

List of Contributors

List of Figures

List of Tables

1 An Overview of Virtual Humans
Nadia Magnenat-Thalmann and Daniel Thalmann
1.1 Why Virtual Humans?
1.2 History of Virtual Humans
1.2.1 Early Models
1.2.2 Short Films and Demos
1.2.3 The Evolution towards Real-Time
1.3 The Applications of Virtual Humans

1.3.1 Numerous Applications
1.3.2 Virtual Presenters for TV and the Web
1.3.3 Virtual Assistants for Training in Case of Emergency
1.3.4 Virtual Ancient People in Inhabited Virtual Cultural Heritage
1.3.5 Virtual Audience for Treatment of Social Phobia
1.3.6 Virtual Mannequins for Clothing
1.3.7 Virtual Workers in Industrial Applications
1.3.8 Virtual Actors in Computer-Generated Movies
1.3.9 Virtual Characters in Video Games
1.4 The Challenges in Virtual Humans
1.4.1 A Good Representation of Faces and Bodies
1.4.2 A Flexible Motion Control
1.4.3 A High-Level Behavior
1.4.4 Emotional Behavior
1.4.5 A Realistic Appearance

1.4.6 Interacting with the Virtual World
1.4.7 Interacting with the Real World
1.5 Conclusion


2 Face Cloning and Face Motion Capture
Wonsook Lee, Taro Goto, Sumedha Kshirsagar, Tom Molet
2.1 Introduction
2.2 Feature-Based Facial Modeling
2.2.1 Facial Modeling Review and Analysis
2.2.2 Generic Human Face Structure
2.2.3 Photo-Cloning
2.2.4 Feature Location and Shape Extraction

2.2.5 Shape Modification
2.2.6 Texture Mapping
2.2.7 Face Cloning from Range Data
2.2.8 Validation of the Face Cloning Results
2.3 Facial Motion Capture
2.3.1 Motion Capture for Facial Animation
2.3.2 MPEG-4-Based Face Capture
2.3.3 Generation of Static Expressions or Key-Frames
2.3.4 Analysis of Facial Capture Data to Improve Facial Animation


3 Body Cloning and Body Motion Capture
Pascal Fua, Ralf Plaenkers, WonSook Lee, Tom Molet
3.1 Introduction
3.2 Body Models for Fitting Purposes
3.2.1 Stick Figure
3.2.2 Simple Volumetric Primitives
3.2.3 Multi-Layered Models
3.2.4 Anatomically Correct Models
3.3 Static Shape Reconstruction
3.3.1 3-D Scanners
3.3.2 Finding Structure in Scattered 3-D Data
3.3.3 Conforming Animatable Models to 3-D Scanned Data
3.3.4 Photo-Based Shape Reconstruction
3.3.5 Video-Based Shape Reconstruction
3.4 Dynamic Motion Capture
3.4.1 Early Motion Analysis
3.4.2 Electro-Magnetic and Optical Motion Capture Systems
3.4.3 Video-Based Motion Capture

3.5 Articulated Soft Objects for Shape and Motion Estimation
3.5.1 State Vector
3.5.2 Metaballs and Quadratic Distance Function
3.5.3 Optimization Framework
3.5.4 Implementation and Results
3.6 Conclusion


4 Anthropometric Body Modeling
Hyewon Seo
4.1 Introduction
4.2 Background
4.2.1 Anthropometry
4.2.2 Anthropometric Human Models in CG
4.2.3 Motivating Applications
4.2.4 Challenging Problems
4.3 Our Approaches to Anthropometric Models

4.3.1 Overview
4.3.2 Data Acquisition
4.3.3 Pre-Processing
4.3.4 Interpolator Construction
4.3.5 Results and Implementation
4.4 Conclusion


5 Body Motion Control
Ronan Boulic, Paolo Baerlocher
5.1 Introduction
5.2 State of the Art in 3-D Character Animation
5.2.1 The Levels of Abstraction of the Musculo-Skeletal System
5.2.2 Techniques for the Animation and the Control of the Multi-Body System
5.2.3 What Is Motion?
5.2.4 Background to Inverse Kinematics
5.2.5 Review of Inverse Kinematics Resolution Methods
5.2.6 Other Issues in the Production of 3-D Character Animation
5.3 The Multiple Priority Levels IK (MPL-IK)
5.3.1 Background to Numeric IK
5.3.2 Handling Two Conflicting Constraints
5.3.3 Generalizing to p Priority Levels
5.4 MPL-IK Results
5.5 Conclusion
6 Facial Deformation Models
Prem Kalra, Stephane Garchery, Sumedha Kshirsagar
6.1 Introduction
6.2 Some Preliminaries about the Anatomy of a Face

6.2.1 Skin
6.2.2 Muscles
6.2.3 Bone
6.3 Control Parameterization
6.3.1 Interpolation
6.3.2 FACS (Facial Action Coding System)
6.3.3 FAP (Facial Animation Parameters)

6.4 Facial Deformation Models
6.4.1 Shape Interpolation
6.4.2 Parametric Model
6.4.3 Muscle-Based Models
6.4.4 Finite Element Method
6.4.5 Other Models
6.4.6 MPEG-4-Based Facial Animation
6.5 Tongue, Wrinkles and Other Features

6.6 Summary
6.7 Conclusion


7 Body Deformations
Amaury Aubel
7.1 Surface Models
7.1.1 Rigid Deformations
7.1.2 Local Surface Operators
7.1.3 Skinning
7.1.4 Contour Deformation
7.1.5 Deformations by Example
7.2 Volumetric Models
7.2.1 Implicit Surfaces
7.2.2 Collision Models
7.3 Multi-Layered Models
7.3.1 Skeleton Layer
7.3.2 Muscle Layer
7.3.3 Fat Layer
7.3.4 Skin Layer

7.4 Conclusion
7.4.1 Comparative Analysis
7.4.2 Depth of Simulation
7.4.3 Future Research Directions


8 Hair Simulation
Sunil Hadap
8.1 Introduction
8.2 Hair Shape Modeling
8.3 Hair Dynamics
8.4 Hair Rendering
8.5 Summary
8.6 Static Hair Shape Modeling Based on Fluid Flow
8.6.1 Hair Shape Model
8.6.2 Interactive Hair-Styler
8.6.3 Enhancing Realism
8.7 Modeling Hair Dynamics Based on Continuum Mechanics
8.7.1 Hair as Continuum
8.7.2 Single Hair Dynamics

8.7.3 Fluid Hair Model
8.7.4 Results
8.8 Conclusion

9 Cloth Simulation
Pascal Volino, Frédéric Cordier
9.1 Introduction
9.2 Technology Summary
9.2.1 Historical Review
9.2.2 Cloth Mechanics
9.3 Mechanical Simulation of Cloth
9.3.1 Mechanical Modeling Schemes
9.3.2 A Precise Particle-Based Surface Representation
9.3.3 Numerical Integration
9.4 Collision Techniques
9.4.1 Principles of Collision Detection
9.4.2 Collision Detection for Cloth Simulation
9.4.3 Collisions on Polygonal Meshes
9.4.4 Collision Response Schemes
9.5 Enhancing Garments
9.5.1 Mesh Smoothing
9.5.2 Geometrical Wrinkles
9.5.3 Advanced Fabric Lighting
9.5.4 Toward Real-Time Garment Animation
9.6 Designing Garments
9.6.1 Garment Design Tools
9.6.2 Applications
9.7 Conclusion
10 Expressive Speech Animation and Facial Communication
Sumedha Kshirsagar, Arjan Egges, Stéphane Garchery
10.1 Introduction
10.2 Background and Review
10.3 Facial Animation Design
10.4 Parameterization
10.5 High Level Control of Animation

10.6 Speech Animation
10.6.1 Using Text-to-Speech
10.6.2 Phoneme Extraction from Natural Speech
10.6.3 Co-articulation
10.6.4 Expression Blending
10.6.5 Enhancing Realism
10.7 Facial Motion Capture and Analysis
10.7.1 Data Analysis
10.7.2 Principal Component Analysis
10.7.3 Contribution of Principal Components


10.7.4 Nature of Analysis Data and the PCs
10.7.5 Expression Blending Using PCs
10.8 Facial Communication
10.8.1 Dialogue Generation
10.8.2 Natural Language Processing and Generation
10.8.3 Emotions
10.8.4 Personality and Mood
10.9 Putting It All Together
10.10 Conclusion


11 Behavioral Animation
Jean-Sébastien Monzani, Anthony Guye-Vuilleme, Etienne de Sevin
11.1 What Is Behavioral Animation?
11.1.1 Behavior
11.1.2 Autonomy
11.2 State-of-the-Art
11.2.1 Perception and Memory
11.2.2 Defining Behaviors
11.2.3 Interactions

11.2.4 Animation
11.2.5 Applications
11.3 An Architecture for Behavioral Animation
11.3.1 Separate the Body and the Brain
11.3.2 System Design
11.3.3 Animation: A Layered Approach
11.3.4 Intelligent Virtual Agent: Simulating Autonomous Behavior
11.4 Behavioral Animation and Social Agents
11.5 Case Study
11.5.1 Storytelling
11.5.2 A Mechanism of Motivated Action Selection
11.6 Conclusion


12 Body Gesture Recognition and Action Response
Luc Emering, Bruno Herbelin
12.1 Introduction: Reality vs Virtuality
12.2 State-of-the-Art
12.3 Involved Technology
12.4 Action Recognition
12.4.1 Recognition Methods
12.4.2 Model vs Data-Oriented
12.4.3 Recognizable Actions
12.4.4 Action-Analysis Levels
12.4.5 Action Specification
12.4.6 Action Recognition Algorithm


12.5 Case Studies
12.5.1 Interactive Fighting
12.6 Discussion
12.6.1 Sense of Touching
12.6.2 Reactivity
12.6.3 Objective and Subjective Views
12.6.4 Embodiment
12.6.5 System Performance
12.6.6 Delegate Sub-Tasks
12.6.7 Semantic Modulation
12.6.8 Event/Action Associations
12.6.9 Realistic Behavior
12.6.10 Verbal Response


13 Interaction with 3-D Objects
Marcello Kallmann
13.1 Introduction
13.2 Related Work
13.2.1 Object Functionality
13.2.2 Actor Animation
13.3 Smart Objects
13.3.1 Interaction Features
13.3.2 Interpreting Interaction Features
13.4 SOMOD
13.4.1 Object Properties

13.4.2 Interaction Information
13.4.3 Behaviors
13.5 Interacting with Smart Objects
13.5.1 Interpretation of Plans
13.5.2 Manipulation Actions
13.6 Case Studies
13.6.1 Opening a Drawer
13.6.2 Interaction of Multiple Actors
13.6.3 Complex Behaviors
13.7 Remaining Problems


14 Groups and Crowd Simulation
Soraia Raupp Musse, Branislav Ulicny, Amaury Aubel
14.1 Introduction
14.1.1 Structure of this Chapter
14.2 Related Work
14.2.1 Crowd Evacuation Simulators
14.2.2 Crowd Management Training Systems
14.2.3 Sociological Models
14.2.4 Computer Graphics
14.2.5 Classification of Crowd Methods

14.3 A Hierarchical Approach to Model Crowds
14.3.1 Hierarchic Model

14.3.2 Emergent Crowds
14.4 Crowd Visualization
14.4.1 Virtual Human Model
14.4.2 Crowd Creation
14.4.3 Level of Detail
14.4.4 Animated Impostors
14.4.5 Impostor Rendering and Texture Generation
14.4.6 Texture Refreshment Approach
14.4.7 Visibility Issue
14.4.8 Factored Impostor
14.4.9 Z-Buffer Corrected Impostor
14.4.10 Results
14.5 Case Studies
14.5.1 Programmed Crowds
14.5.2 Guided Crowds
14.5.3 Autonomous Crowds
14.5.4 Interactive Crowd in Panic Situation
14.6 Conclusion


15 Rendering of Skin and Clothes
Neeharika Adabala
15.1 Introduction
15.2 Rendering of Skin
15.2.1 Texturing
15.2.2 Illumination Models
15.2.3 Interaction with the Environment
15.2.4 Temporal Features
15.2.5 Summary of Skin Rendering Techniques
15.3 Rendering of Clothes
15.3.1 Modeling Color Variation from Images
15.3.2 Illumination Models
15.3.3 Representation of Milli-Geometry
15.3.4 Rendering with Micro/Milli Geometric Detail and BRDF for Woven Clothes
15.3.5 Macro-geometry and Interaction with the Environment
15.4 Conclusion



16 Standards for Virtual Humans
Stéphane Garchery, Ronan Boulic, Tolga Capin, Prem Kalra
16.1 Introduction
16.2 The H-Anim Specification Scheme
16.2.1 The Need for a Standard Human Skeleton
16.2.2 H-Anim Skeleton Convention


16.2.3 Benefits and Limitations of the Standard Skeleton
16.2.4 Enforcing Motion Re-use
16.3 The MPEG-4 FBA Standard
16.3.1 MPEG-4 Body Animation Standard
16.3.2 MPEG-4 Face Animation Standard
16.4 What’s Next?


Appendix A: Damped Least Square Pseudo-Inverse J+

Appendix B: H-Anim Joint and Segment Topology

Appendix C: Facial Animation Parameter Set

References

Index


Preface
Scenes involving Virtual Humans pose many complex problems that researchers have
been trying to solve for more than 20 years. Today, everyone would like to simulate or
interact with believable Virtual Humans in various settings such as games, films, interactive
television, cultural heritage or web-based applications. The field of applications is enormous
and there is still a long way to go to achieve the goal of having realistic Virtual Humans
who adapt their behavior to any real human interaction and real-life situation.
This book contains fundamental research in Virtual Human technology. It provides a
state-of-the-art survey of face and body modeling and cloning, facial and body animation,
hair simulation, clothing, image-based rendering, crowd simulation, autonomous behavioral
animation, and related topics. More than a dozen PhD researchers from MIRALab, University
of Geneva, and VRlab, EPFL, Switzerland, have contributed to this book, describing their
recent work in the context of the state of the art in their fields.
We would like to thank the people who have helped in the production of this book by
editing the manuscript: Professor Prem Kalra from the Indian Institute of Technology and
Dr. Chris Joslin from MIRALab. We also thank Mrs. Zerrin Celebi from VRlab for her
editorial assistance. Finally, we are very grateful to the John Wiley editors, particularly
Ms. Laura Kempster, who have been very helpful in the production of this book.
As an extra resource, we have set up a companion website for this title containing short
movies and demos illustrating all chapters. Please use the following URL to access the site:

Nadia Magnenat-Thalmann
Daniel Thalmann



List of Contributors
Nadia Magnenat-Thalmann and Daniel Thalmann
Wonsook Lee, Taro Goto, Sumedha Kshirsagar, Tom Molet
Pascal Fua, Ralf Plaenkers, WonSook Lee, Tom Molet
Hyewon Seo
Ronan Boulic, Paolo Baerlocher
Prem Kalra, Stephane Garchery, Sumedha Kshirsagar
Amaury Aubel
Sunil Hadap
Pascal Volino, Frédéric Cordier
Sumedha Kshirsagar, Arjan Egges, Stéphane Garchery
Jean-Sébastien Monzani, Anthony Guye-Vuilleme, Etienne de Sevin
Luc Emering, Bruno Herbelin
Marcello Kallmann
Soraia Raupp Musse, Branislav Ulicny, Amaury Aubel
Neeharika Adabala
Stéphane Garchery, Ronan Boulic, Tolga Capin, Prem Kalra


List of Figures
1.1 Representation of the head and face by Fred Parke, University of Utah, 1974.
1.2 Adam the Juggler created by Triple I, 1982.
1.3 Virtual Marilyn in ‘Rendez-vous in Montreal’ by Nadia Magnenat-Thalmann and Daniel Thalmann, 1987.
1.4 Real-time Virtual Marilyn as referee in a Virtual Tennis match.
1.5 JUST immersive VR situation training of health emergency personnel.
1.6 Virtual Terracotta soldiers in Xian.
1.7 Capturing the motion for the Namaz prayer.
1.8 Revival of life in Pompeii.
1.9 Virtual classroom for assessment and treatment of childhood disorders.
1.10 Virtual Clothing in Salon Lanvin.
1.11 Virtual Clothing in Musée Picasso.
1.12 Training in industrial applications.
1.13 Lara Croft.
2.1 (a) Three images of an uncalibrated video-sequence. (b) The reconstructed 3-D model.
2.2 Control points and the generic model.
2.3 Overview of face and body photo-cloning.
2.4 Automatic 3-D facial feature detection flow.
2.5 Distribution graph of features.
2.6 Distribution graph of each feature’s distance.
2.7 (a) Detected features. (b) Modification of a generic head with feature points.
2.8 (a) A geometrical deformation for the side views to connect to the front view. (b) Before and after multi-resolution techniques.
2.9 Snapshots of a reconstructed head in several views and animation on the face.
2.10 (a) Input range data. (b) Delaunay triangles created with feature points and points collected for fine modification using Delaunay triangles. (c) The result of giving functional structure on a range data.
2.11 (a) Snapshots taken from the same angles.
2.12 Two surfaces in different colours in the same space.
2.13 2-D case to show the calculation of the distance for each point on the reconstructed curve to the original input curve by calculating the normal vector.
2.14 2-D case to get error percentage between 2-D curves.


2.15 Error distribution for the face cloning from orthogonal photograph where the bounding box of the head has the size 167.2 × 236.651 × 171.379.

2.16 Points with error bigger than (a) 15 (b) 10 and (c) 5.
2.17 Use of OPTOTRAK™ for head and facial motion data.
2.18 Feature points optical tracking.
2.19 MPEG-4 feature points.
2.20 Successive feature point displacements from a speech sequence (without
global motion compensation).
2.21 Facial captured expressions applied on various face models.
3.1 Layered body model: (a) Skeleton. (b) Volumetric primitives used to simulate
muscles and fat tissue. (c) Polygonal surface representation of the skin.
(d) Shaded rendering. (e) A cow and a horse modeled using the same technique.
3.2 (a) Cyberware Whole Body Color 3-D Scanner. (b) Polhemus mobile laser
scanner. (c) Resulting 3-D model.
3.3 Scan of a human subject. (a) Its associated axial structure and (b) a similar
scan of a different subject and the animated skeleton obtained by fitting our
model to the data.
3.4 Five input photographs of a woman, Reconstructed 3-D body.
3.5 Comparison of measurement on the actual body and reconstructed body.
3.6 How do horses trot?
3.7 Eadweard Muybridge’s photographic analysis of human and animal motion.
3.8 Magnetic (a) and optical (b) motion capture systems.
3.9 The importance of silhouette information for shape modeling. (a) One image
from a stereo pair. (b) Corresponding disparity map. (c) Fitting the model to
stereo data alone results in a recovered body model that is too far away from
the cloud. (d) Using these outlines to constrain the reconstruction results in
a more accurate model. (e) Only the dashed line is a valid silhouette that
satisfies both criteria of section 2-D Silhouette Observations.
3.10 Results using a trinocular sequence in which the subject performs complex
3-D motions with his arms, so that they occlude each other.
3.11 Results using another trinocular sequence in which the subject moves his
upper body in addition to abruptly waving his arms.

4.1 Leonardo da Vinci, a Renaissance artist, created the drawing of the Vitruvian
Man based on the ‘ideal proportions’.
4.2 Anthropometric landmarks (feature points) in H-Anim 1.1 specification.
4.3 Automatic measurement extractions from the scan data (Cyberware).
4.4 Anthropometric models for ‘Jack’.
4.5 Automatically generated ‘Anthroface’ models.
4.6 Geometrical deformation methods for differently sized body models.
4.7 Overview of the system.
4.8 A scanned model example.
4.9 The template model.
4.10 Feature points used in our system.
4.11 Skeleton fitting procedure.
4.12 Skeleton fitting.
4.13 Fine fitting.

4.14 Result models.
4.15 Cross-validation result.
5.1 The realism assessment seen from the point of view of the application field.
5.2 Levels of abstraction of the musculo-skeletal system.
5.3 Influence of the posture on the action of a muscle through its line of action.
5.4 Two priority levels IK achieved with the cascaded control.
5.5 The two chain tips have to reach conflicting positions in space, the solution provided by the weight strategy is a compromise while the solution provided by the priority strategy favors the right chain tip.
5.6 Overview of the animation production pipeline.
5.7 Simplest redundant case.
5.8 The variation space with the minimal norm solution (a) and (b) the final solution including a lower priority task projected on the kernel of J (noted N(J)).
5.9 The simplest redundant case with two conflicting tasks.
5.10 The low level task is first compensated prior to the mapping.
5.11 Interactive optimization with four priority levels.
6.1 Frontal view of facial muscles.
6.2 Muscle types.
6.3 Three-layered structure.
6.4 Rational free form deformation.
6.5 Computing weights for animation.
6.6 Facial deformation based on FAPs.
6.7 Automatic process for designing FAT with an MPEG-4 face model.
6.8 Process for designing FAT construction by experienced animator’s work.
6.9 Artistic MPEG-4 FAP high-level expressions.
7.1 Marilyn’s skin is deformed by JLDs.
7.2 Characteristic defects of the skinning algorithm.
7.3 Arm deformation with cross-sections.
7.4 Comparison on shoulder deformation between the skinning algorithm (top row) and the Pose Space Deformation algorithm (Wyvill et al. 1998) (bottom row) using two extreme key-shapes.
7.5 Shape blending of various laser scans of a real person using a template subdivision surface.
7.6 Hand modeled with convolution surfaces.
7.7 Leg deformation including contact surface using FEM.
7.8 The material depth (encoded in pseudo-color) is used for computing the contact surface when the knee is bent.
7.9 Hand deformation based on Dirichlet FFDs.
7.10 Leg muscles mimicked by deforming metaballs.
7.11 Anatomically-based modeling of the human musculature using ellipsoids.
7.12 Isotonic contraction of the arm muscles followed by an isometric contraction when the hand is clenched into a fist.
7.13 A deformed-cylinder muscle model.
7.14 Leg muscles – reconstructed from anatomical data – are deformed by action lines and muscle interaction.



7.15 Each muscle is parameterized and deformed by a set of centroid curves
(action lines).
7.16 Finite element model of the leg.
7.17 Layers in LEMAN.
7.18 Skin extraction by voxelizing the inner layers in a rest pose.
7.19 The layered model by Shen and Thalmann (1996). Ellipsoidal primitives
(b) form an implicit surface, which is sampled by a ray-casting process.
The sampling points (c) are used as control points of B-spline patches. The
B-spline surface is ultimately polygonized at the desired resolution (d and e).
7.20 Three frames of the monkey shoulder animation by Wilhelms and Van Gelder.
7.21 Elastic skin that slides over the underlying surfaces.
7.22 The skin buckles and creases in the work of Hirota et al..
8.1 Hairstyling by defining a few curves in 3-D.
8.2 Cluster hair model, by Yang et al..
8.3 Hair model based on fluid flow paradigm.
8.4 Hair animation using the explicit model, by Kurihara.
8.5 Volumetric texture rendering by Kajiya and Kay (1989).
8.6 Rendering the pipeline of the method- ‘pixel blending and shadow buffer’.
8.7 Fur using explicit hair model.
8.8 Modeling hair as streamlines of a fluid flow.
8.9 Ideal flow elements.
8.10 Linear combination of ideal flow elements.
8.11 Source panel method.

8.12 Subdivision scheme for ideal flow.
8.13 Polygon reduced geometry to define panels.
8.14 Hair growth map and normal velocity map.
8.15 Placing panel sources.
8.16 Simple hairstyles using few fluid elements.
8.17 Hair as a complex fluid flow.
8.18 Adding perturbations to a few individual hairs.
8.19 Adding overall volumetric perturbations.
8.20 Hair clumpiness.
8.21 Possibilities for enhancing realism.
8.22 Hair as a continuum.
8.23 Equation of State.
8.24 Hair strand as an oriented particle system.
8.25 Hair strand as rigid multibody serial chain.
8.26 Fluid dynamics: Eulerian and Lagrangian viewpoints.
8.27 Hair animation.
9.1 Early models of dressed virtual characters.
9.2 Different woven fabric patterns: plain, twirl, basket, satin.
9.3 A mechanical simulation carried out with particle systems and continuum
mechanics.
9.4 Using lengths or angles to measure deformations in a square particle system
grid.
9.5 Computing deformations in a fabric triangle.

9.6 Computing vertex forces for elongation, shear, bending.
9.7 Automatic hierarchization of a 50000 triangle object.
9.8 Using direction or bounding volumes to detect collisions within and between nodes, and propagation of the direction volumes up the hierarchy tree.
9.9 Hierarchical collision detection at work, showing the hierarchy domains tested.
9.10 Intersections and proximities in polygonal meshes.
9.11 Repartition of collision response on the vertices of a mesh element.
9.12 Combined corrections of position, speed and acceleration.
9.13 A rough sphere model with no smoothing, Gouraud, and Phong shading.
9.14 Polygon smoothing: Interpolation, vertex contribution and blending.
9.15 Smoothing the rough mesh of a coat, with texture mapping.
9.16 Dynamic amplitude variation of geometrical wrinkles, with respect to elongation and orientation.
9.17 An animated dress, mechanically computed as a rough mesh, and dynamically wrinkled according to the mesh deformation.
9.18 Enclosing cloth particles into spheres of dynamic radius resulting from the modeling of a catenary curve.
9.19 Building a real-time garment.
9.20 A typical framework for garment design, simulation and animation system.
9.21 Designing garments: pattern placement and seaming.
9.22 Simulating garments: draping and animation.
9.23 Fashion design: simulating animated fashion models.
9.24 Complex virtual garments.
10.1 Facial communication: a broad look.
10.2 Hierarchy of facial animation design.
10.3 Levels of parameterization.
10.4 Various envelopes for key-frame animation.
10.5 FAML syntax.
10.6 Example of expression track.
10.7 Interactive design of facial animation.
10.8 Typical speech animation system.
10.9 Two approaches to speech-driven facial animation.
10.10 Real-time phoneme extraction for speech animation.
10.11 Dominance functions for co-articulation.
10.12 FAP composition problem and solution.
10.13 Placement of cameras and facial markers.
10.14 Influence of first six principal components.
10.15 Generation of ‘happy’ speech in PC space.
10.16 Generation of ‘sad’ speech in PC space.
10.17 Speech animation using expression and viseme space.
10.18 Emotions-moods-personality layers.
10.19 Autonomous emotional dialogue system.
11.1 Results of Tyrrell’s test.
11.2 Low and high level components in the system.
11.3 Snapshot of the Agents’ Common Environment.


11.4 Verbal communication between two IVAs has to go through the low level.
11.5 Example callback for a walking task.
11.6 Reactivated tasks – Task stack for the walking tasks.
11.7 The Tasks Handler.
11.8 Inter-agent communication: the Secretary is talking to the Editor.
11.9 Simplified motivational model of action selection for Virtual Humans.
11.10 ‘Subjective’ evaluation of the motivations (solid curve) from the value of the internal variables (dashed line) with a threshold system.
11.11 A part of the hierarchical decision graph for the eat motivation.
11.12 Results of the simulation in terms of achieved behaviors.
11.13 Virtual life simulation: by default.
12.1 MAI Box: Manipulation, Activity and Impact.
12.2 Overview of the VR immersive devices.
12.3 Model-oriented simulation system.
12.4 Data-oriented simulation system.
12.5 Decoding human walking with minimal body references.
12.6 Global, floor and body-coordinate systems.
12.7 A database of pre-defined postures.
12.8 Compromise between recognition data quantity and data analysis costs.
12.9 Interactive walk-through environment driven by action recognition events.
12.10 The virtual shop.
12.11 Interactive fight training with a virtual teacher.
13.1 The choice of which interaction features to take into account is directly related to many implementation issues in the simulation system.
13.2 Defining the specific parameters of a drawer.
13.3 Positions can be defined for different purposes.
13.4 The left image shows a hand shape being interactively defined. The right image shows all used hand shapes being interactively located with manipulators.
13.5 Defining interaction plans.
13.6 For each actor performing an interaction with an object, a thread is used to interpret the selected interaction plan.
13.7 Considered phases for a manipulation instruction.
13.8 Specific constraints are used to keep the actor’s spine as straight as possible.
13.9 When the position to reach with the hand is too low, additional constraints are used in order to obtain knee flexion.
13.10 The reaching phase of a button press manipulation.
13.11 A state machine for a two-stage lift functionality.
13.12 A state machine considering intermediate states.
14.1 Helbing’s crowd dynamics simulation.
14.2 EXODUS evacuation simulator (Exodus).
14.3 Simulex, crowd evacuation system.
14.4 Legion system, analysis of Sydney stadium.
14.5 Small Unit Leader Non-Lethal Training System.
14.6 Reynolds’s flock of boids.
14.7 Bouvier’s particle systems crowd.

14.8 Hodgins’s simulation of a group of bicyclists.
14.9 Hierarchical structure of the model.
14.10 Autonomous crowd entry at the train station.
14.11 A person enters the interaction space, the virtual people react.
14.12 Emergent crowds.
14.13 Body meshes of decreasing complexity using B-spline surfaces.
14.14 A football player and its impostor.
14.15 Impostor rendering and texture generation.
14.16 Posture variation.
14.17 Virtual Human decomposed into several planes.
14.18 Actual geometry (a), single quadrilateral (b), multi-plane impostor (c), and factored impostor (d).
14.19 Actors performing a ‘wave’ motion.
14.20 Image of football stadium showing regions specified to place the crowd at the beginning of the simulation as well as the surfaces to be respected when agents pass through the doors.
14.21 Image of the simulation ‘The Crowd goes to the Football Stadium’.
14.22 Image of simulation ‘Populating the Virtual City with Crowds’.
14.23 Image of an autonomous crowd in a train station.
14.24 Crowd reacting as a function of actors’ performance.
14.25 Images show the grouping of individuals at the political demonstration.
14.26 Agents at the train station.
14.27 Crowd in the virtual park: (a) before emergency, (b) after gas leak.
15.1 Process of facial simulation and wrinkle generation.
15.2 Wrinkle generation with real-time system.
15.3 Demonstration of visualization of aging with realistic rendering of wrinkles.
15.4 Artifact due to illumination.
15.5 Rendering of cloth with color texture and bump mapping: (b) gives a zoomed-in view of (a).
15.6 Outline of algorithm.
15.7 Example color scheme of a weave pattern.
15.8 Output of procedural texture. (a) Very loosely twisted thread without shading. (b) More tightly twisted thread, noise is added to simulate the presence of fibers about the thread. (c) Thicker fibers twisted into thread. (d) Tightly twisted thread.
15.9 Example of color texture generated for the color scheme component in the tiled texture of cloth from photograph.
15.10 Versatility of the approach for generation of various weaves.
15.11 Directional dependence of the appearance of cloth when contrasting colored threads are woven together.
15.12 Illustration of ability to zoom in on detail.
16.1 Topologic relationships between the JOINT, SEGMENT and SITE node types.
16.2 Two H-Anim compliant characters.
16.3 Types of rotation distribution (for tilt, roll and torsion) over the three spine regions.

