
Max-Planck-Institut für Informatik
Computer Graphics Group
Saarbrücken, Germany
Character Animation from a Motion Capture Database
Master Thesis in Computer Science
Computer Science Department
University of Saarland
Edilson de Aguiar
Supervisors: Dipl. Inf. Christian Theobalt
Prof. Dr. Hans-Peter Seidel
Max-Planck-Institut für Informatik
Computer Graphics Group
Saarbrücken, Germany
Begin: June 1, 2003
End: November 26, 2003
Declaration under Oath

I hereby declare under oath that I have written this master's thesis independently and without outside help. I have used no aids other than those cited, and I have marked all passages taken from other sources as such.

Saarbrücken, November 26, 2003
Edilson de Aguiar
Abstract
Character Animation from a Motion Capture Database
Edilson de Aguiar
Master Thesis in Computer Science
Computer Science Department
University of Saarland
This thesis discusses methods that use information contained in a motion capture
database to assist in the creation of a realistic character animation. Starting with
an animation sketch, where only a small number of keyframes for some degrees of
freedom are set, the motion capture data is used to improve the initial motion qual-
ity. First, the multiresolution filtering technique is presented and it is shown how
this method can be used as a building block for character animation. Then, the hier-
archical fragment method is introduced, which uses signal processing techniques,
the skeleton hierarchy information and a simple matching algorithm applied to data
fragments to synthesize missing degrees of freedom in a character animation, from
a motion capture database. In a third technique, a principal component model is
fitted to the motion capture database and it is demonstrated that using the motion
principal components, a character animation can be edited and enhanced after it has
been created. After comparing these methods, a hybrid approach combining the
individual technique’s advantages is proposed, which uses a pipeline in order to
create the character animation in a simple and intuitive way. Finally, the methods
and results are reviewed and approaches for future improvements are mentioned.
Acknowledgements
First I want to thank my supervisors: Dipl. Inf. Christian Theobalt and
Prof. Dr. Hans-Peter Seidel for their help and advice during the development
of this thesis. In addition, I thank all my friends in the IMPRS and Kerstin
Meyer-Ross for her help here in Germany. To all my colleagues of the Computer
Graphics group at MPI, thank you, especially Volker Blanz for his help with the
PCA theory, and Thomas Annen and Grzegorz Krawczyk for helping me with the
LaTeX stuff.
I also wish to thank my family, who always supported me, encouraging me
and making me never give up, despite the distance. Thank you Mom, Dad, Raquel,
Rose and Enoc.
Edilson de Aguiar
Contents
1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2 Fundamentals of Character Animation 5
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Keyframing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.3 Physical Simulation . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.4 Motion Capture . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.5 Project Implementation Aspects . . . . . . . . . . . . . . . . . . 10
2.5.1 Motion Capture Database . . . . . . . . . . . . . . . . . 11
2.5.2 Skeleton Model . . . . . . . . . . . . . . . . . . . . . . . 12
3 Multiresolution Filtering Method 15
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2.1 Signal Processing Methods . . . . . . . . . . . . . . . . . 16
3.2.2 Multiresolution Methods . . . . . . . . . . . . . . . . . . 16
3.3 Multiresolution Filtering Method . . . . . . . . . . . . . . . . . . 16
3.4 Multiresolution filtering on motion data . . . . . . . . . . . . . . 18
3.5 Application to Character Animation . . . . . . . . . . . . . . . . 19

3.6 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.7 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4 Fragment Based Methods 25
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.3 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.4 Motion Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.4.1 Motion Phases . . . . . . . . . . . . . . . . . . . . . . . 31
4.4.2 Frequency Analysis . . . . . . . . . . . . . . . . . . . . . 33
4.4.3 Correlation . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.5 Motion Synthesis and Texture . . . . . . . . . . . . . . . . . . . 34
4.5.1 Fragmentation . . . . . . . . . . . . . . . . . . . . . . . 34
4.5.2 Matching . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.5.3 Joining . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.5.4 Smoothing . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.6 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.7 Hierarchical Fragment Method . . . . . . . . . . . . . . . . . . . 41
4.7.1 Skeleton Hierarchy and Correlation . . . . . . . . . . . . 43
4.7.2 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.7.3 Experiments . . . . . . . . . . . . . . . . . . . . . . . . 47
4.8 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5 Principal Component Analysis 53
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.3 Principal Component Analysis . . . . . . . . . . . . . . . . . . . 55
5.3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.3.2 PCA Theory . . . . . . . . . . . . . . . . . . . . . . . . 55

5.3.3 Data Compression . . . . . . . . . . . . . . . . . . . . . 57
5.4 PCA for motion synthesis . . . . . . . . . . . . . . . . . . . . . . 58
5.4.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . 58
5.4.2 Motion Synthesis . . . . . . . . . . . . . . . . . . . . . . 58
5.5 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
6 Hybrid Approach 67
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
6.2 Hybrid Approach . . . . . . . . . . . . . . . . . . . . . . . . . . 68
6.3 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
6.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
7 Conclusion and Future Work 77
List of Figures
2.1 Example of the motion capture session and the equipment used to
capture the motions in the database: (a) shows the camera setup
and (b) the subject performing the motion. Images
used from . . . . . . . . . . . . . . . . . 12
2.2 Example of the motion data in the database. The figure shows
respectively the z-angle values for the pelvis, hip, clavicle, forearm
and knee joints. . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3 Joints and bones forming the skeleton model used in the project. . 14
2.4 The skeleton joint hierarchy. The right side shows the lower
kinematic sub-chain and the left side the upper kinematic sub-chain. 14
3.1 Generation of the Gaussian pyramid. The value of each node in the
next row is computed as a weighted average of a sub-array
of nodes; in this example a sub-array of length five is used.
Adapted from [BA83]. . . . . . . . . . . . . . . . . . . . . . . . 17
3.2 Visualization of different frequency bands of a Gaussian pyramid
(shown only for the first 40 frames). The band g0 corresponds to
the original signal; the low-pass bands g1 and g2 correspond to the
high frequencies, g3 and g4 to the middle and g5 and g6 to the low
frequencies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.3 Visualization of different frequency bands of a Laplacian pyramid
(shown only for the first 40 frames). The band-pass bands l0 and l1
correspond to the high frequencies, l2 and l3 to the middle and l4
and l5 to the low frequencies. . . . . . . . . . . . . . . . . . . . 21
3.4 Using multiresolution to increase the gain in the middle frequencies 23
3.5 Using multiresolution to decrease the gain in the middle frequencies 24
4.1 The input for the general fragment based method: (a) keyframed
and motion capture data are decomposed into frequency bands; (b)
animators set the general method parameters. Driven and master
joints are chosen to guide the method, and a particular frequency
band of the master joint is chosen to guide the fragmentation step.
Joints in black will be textured, and joints in blue and red will be
synthesized. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.2 A fragment based method consists of four steps: fragmentation
(a), matching (b), joining (c) and smoothing. At the end, an
original keyframed character animation is enhanced by synthesis
and texturing (d). . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.3 Example of a walking animation: (a) set of four phases during a
human walking cycle; (b) the right hip z-angle values are plotted,
where it is possible to see the respective phases. . . . . . . . . . . 32
4.4 Plot of the pelvis joint angle against the hip joint angle for all ex-
amples in the database. The shape shown in red demonstrates a
good correlation between these joints. . . . . . . . . . . . . . . . 33
4.5 Example of the fragmentation step: (a) original degree of freedom;
(b) fragments created at locations where the first derivative changes
its sign. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.6 Considering one driven fragment (a), in the matching step all data
fragments are compared with the driven fragment (b) and stretched
or compressed appropriately (c). At the end, a number of good
fragments are found (d). . . . . . . . . . . . . . . . . . . . . . . 36
4.7 In the joining step the good fragments found in the matching step
are concatenated or blended (a). Three different criteria were tested
and compared: (b) best fragment; (c) cost matrix and (d) best ani-
mation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.8 In the smoothing step, the discontinuity magnitude (top left) is
multiplied with a smoothing function (top right) and the result is
added back to the original motion signal. In this way, the continu-
ous version shown on the bottom left is generated. Adapted from
[AF02]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.9 Example of the joining approaches developed. In (a) all good frag-
ments generated in the matching step for the z-angle of the pelvis
joint are shown. The final sequences generated by the three
approaches are then shown in (b), (c) and (d). . . . . . . . . . . 42
4.10 Example of possible joint correlations: (a) represents a bad cor-
relation between joints that do not belong to the same kinematic
sub-chain; (b) represents a good correlation between joints that be-
long to the same kinematic sub-chain. . . . . . . . . . . . . . . . 43
4.11 Example of the root generation stage: (a) driven joints (red) and
root joint (green) are shown; (b) the parents of the driven joints are
generated; (c) next parent joint is generated and (d) root joint is
generated. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.12 Example of the motion generation stage: (a) root joint; (b) the root
joint together with the original driven joints is used to texture or
synthesize its children; (c) the next children are generated; (d) all
joints are generated. . . . . . . . . . . . . . . . . . . . . . . . . 46
4.13 Example of the Hierarchical Fragment method being applied to a
human character: (a) shows some frames from the initial character
animation where only lower body joints are keyframed; (b) shows
the respective frames from the resulting character animation where
upper body joints are synthesized and lower body joints are textured. 50
4.14 Another example of the final method being applied to a human
character: (a) shows some frames from the initial character an-
imation where only upper body joints are keyframed; (b) shows
the respective frames from the resulting character animation where
lower body joints are synthesized and upper body joints are textured. 51
4.15 Another example of the final method being applied to a human
character: (a) shows some frames from the initial character anima-
tion where all left body side joints are keyframed; (b) shows the
respective frames from the resulting character animation where all
joints are synthesized. . . . . . . . . . . . . . . . . . . . . . . . . 52
5.1 Three approaches are investigated in order to fit a PCA model to a
keyframed and motion capture data (a). In (b) one PCA model is
created for each missing DOF. In (c) one PCA model is created for
each frequency band of a missing DOF. In (d) one PCA model is
created for each sequence position of a particular frequency band
of a missing DOF. . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.2 Applying the PCA method to a human character: (a) shows some
frames from the initial animation where the left hip, knee, elbow
and upper-arm joints are keyframed; (b) shows the respective frames
of the resulting animation where new motion for the joints is
generated from the database in order to match the keyframed DOFs. . 65
5.3 Use of the motion principal components in order to edit a motion:
(a) arm positions are modified by altering the principal components
for the right and left clavicle joints; (b) leg positions are also modi-
fied by altering the principal components for the right and left knee
joints. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
6.1 Pipeline showing the Hybrid approach . . . . . . . . . . . . . . . 68
6.2 Example of the first stage: (a) shows some frames from the initial
animation where left body side joints are keyframed; (b) shows the
respective frames after applying the PCA method; (c) and (d) show
possible editing capabilities of the PCA, where the influence of the
principal components for some DOFs is modified. . . . . . . . . 72
6.3 Example of the second stage: (a) shows some frames from the
initial animation where left body side joints are keyframed; (b)
shows the respective frames after applying the first stage (PCA);
(c) shows the same frames after applying the Multiresolution Fil-
tering method to decrease the low frequency bands, generating a
smooth acceleration of the movement. . . . . . . . . . . . . . . . 73
6.4 Example of the last stage: (a) shows some frames from the ini-
tial animation where left body side joints are keyframed; (b) shows
the respective frames after applying the first stage; (c) shows the
frames after applying the second stage; (d) shows the same frames
after applying the Hierarchical Fragment Method in order to im-
prove its realistic appearance. . . . . . . . . . . . . . . . . . . . . 74
6.5 Example of the Hybrid approach: (a) shows some frames from
the initial animation where left body side joints are keyframed; (b)
shows the respective frames from the animation generated auto-
matically by the hybrid method; (c) and (d) show the same frames
from the resulting animations generated when motion principal
components are used to change arm and leg positions by adding
or subtracting constant values. . . . . . . . . . . . . . . . . . . . 75
Chapter 1
Introduction
1.1 Motivation
Generating realistic character animation remains one of the great challenges in
computer graphics. Currently, there are three main methods by which this anima-
tion can be generated. Most commonly, keyframing is used, in which the animator
specifies important key poses for the character at specific frames, and the computer
calculates the frames in between by an interpolation technique. A second approach
uses physical simulation in order to drive the character’s motion. Although results
seem to be promising, due to lack of control, difficulty of use, instabilities and high
computation cost, this method has not been used with much success for characters.
The last approach, motion capture, has been widely used to animate characters.
The idea is to place sensors on subjects and collect data that describes
their movements while they perform the desired motion.
As the technology for motion capture has improved and the cost decreased, the
interest in using this approach for character animation has also increased. The main
challenge that an animator is confronted with is to create sufficient detail in order
to generate a character animation with a realistic appearance. Achieving detail in
a keyframed animation is extremely labor intensive. However, with motion capture
the details are immediately present. In other words, the data contains the motion
signature.
The main problem with motion capture is the lack of flexibility. For instance,
after collecting the data it is difficult to change it. Due to this problem, many
animators have little interest in using motion capture data. Although keyframing is
labor intensive, the animator can make a character do exactly what he wants it to.

Since it is often difficult to know exactly what motions are needed before enter-
ing a motion capture session, many techniques have been developed to edit motion
capture data after it has been collected (see Sec. 2.4). In motion editing, two dif-
ferent aspects should be considered: first, it is important to take care not to alter
the motion in such a way that the detail is lost. Second, the system should not just
provide editing capabilities but it should capture the essence of the motion, since
the animator may want a completely different action.
Our intent in this project is to consider the case in which an animator wants to use
a number of existing motion sequences, for instance stored in a motion capture
database, to generate new motions. The idea is to use the style and life-like qualities
of the motions in the database to add details and a particular style to an initial
keyframed animation. Then, a different approach to create a character animation
is proposed: the animator starts the animation with keyframing, a method that he
is familiar with, to create some degrees of freedom. After that, the motion capture
database is used interactively to enhance the initial keyframed animation.
At the end, using the strengths of keyframing and motion capture, the character
performs realistic motion while preserving the keyframed style and incorporating
the details of the motions in the database. As a result, the animator does not need
to spend hours defining key poses and the expensive motion capture session can be
kept at a minimum.
1.2 Goals
As mentioned in the previous section, the main goal of this work is to combine the
strengths of keyframing and motion capture in order to simplify the process of cre-
ating character animations. In recent years, different approaches have been used in
order to achieve this goal. Liu and Popovic [LP02] presented a method for proto-
typing realistic character motion using a constraint detection method that automat-
ically generates the constraints by analyzing the input motion. Tanco and Hilton
[TH00] presented a system that synthesizes motion sequences from a database of
motion capture examples using a statistical model created from the captured data.
Pullen and Bregler [PB02] described a method to enhance an initial keyframed an-
imation using motion capture data, by decomposing the data into frequency bands
and fragments.
In this project, we want to analyze important characteristics of these methods,
by implementing and comparing two different approaches (chapters 4 and 5) and
verifying the importance of some aspects of human motion: skeleton hierarchy,
joint correlation, motion frequency range and motion phases. In addition, we try to
analyze and synthesize motion by fitting a principal component model.
Another goal is to understand the details of the characteristics of human mo-
tion: variations in repetitive motions and differences in the same movement exe-
cuted by different individuals. A detailed understanding of these data is important
in fields like biomechanics and medicine, where many quantitative studies of hu-
man motion have been conducted for the purposes of treatment and prevention of
injuries [AA00]. This understanding is also interesting for character animation,
since the details of a motion usually reveal mood or personality.
Using this understanding, in the long term we intend to develop a system able to
describe and characterize character motion in a high-level way. For instance,
using simple parameters and descriptions, the system will be able to create charac-
ter animations reflecting aspects like gender, mood and personality.
1.3 Thesis Outline
Chapter 2 gives a brief review of the three main animation methods. Their advan-
tages and disadvantages are described in more detail with references to the related
work. In addition, some implementation aspects of our project are mentioned: the
skeleton model and the motion database that was used.
Chapter 3 presents the multiresolution filtering technique. Possible motion
editing capabilities of this technique are investigated and it is shown how this
method can be used as a building block for character animation.
In chapters 4 and 5 methods implemented to enhance an initial keyframed ani-
mation are described. Chapter 4 introduces the fragment based methods, showing
that they can be successfully applied to character animation. After decomposing
keyframed and captured data into frequency bands, motion phases are used to di-
vide the motion capture data into small pieces, which are used to improve the original
keyframed animation.
In chapter 5 a principal component model is fitted to the motion capture database
and it is shown that using motion principal components it is possible to create, edit
and enhance a character animation.
Using the techniques described in the previous chapters as components, chap-
ter 6 introduces a pipeline solution to interactively control the creation of a charac-
ter animation, which results in a better performance and more expressive results.
Finally, in chapter 7 all techniques and their respective results are briefly re-
viewed and approaches for future improvements are mentioned.
Chapter 2
Fundamentals of Character
Animation
2.1 Introduction
In this chapter the three main methods by which character animations are created
will be briefly described: keyframe interpolation, physical simulation and motion
capture. Each of these methods has its advantages and disadvantages and they
are appropriate in different situations. In the following sections, characteristics
of these animation methods will be reviewed in more detail on the basis of their
relevant related work.
In the last section, some important implementation aspects related to the project
are mentioned: the skeleton model and the motion capture database that was used.
2.2 Keyframing

In the traditional technique, an animator first draws the motion extremes, and then
the intermediate frames using keyframes as a guide. Nowadays, using computers,
an animator can specify keyframes by posing the model in a specific position. A
computer then calculates the remaining frames by interpolating between keyframes
in order to create the motion curves that drive the model action.
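To make the in-betweening step concrete, here is a minimal sketch of my own (not code from the thesis; `interpolate_dof` and its keyframe format are hypothetical), showing how a single degree of freedom can be interpolated between animator-set keyframes:

```python
def interpolate_dof(keyframes, frame):
    """Linearly interpolate one DOF; keyframes is a sorted list of
    (frame_number, value) pairs set by the animator."""
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    # Find the two keyframes surrounding the requested frame.
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)      # normalized position in [0, 1]
            return (1.0 - t) * v0 + t * v1

# Three key poses for one joint angle; the computer fills in the rest.
keys = [(0, 0.0), (10, 45.0), (20, 30.0)]
curve = [interpolate_dof(keys, f) for f in range(21)]
```

Production systems replace the linear blend with smooth splines, but the overall structure of the computation is the same.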
The main problem with keyframing is that this technique is time-consuming
and labor intensive. A typical articulated kinematic model, such as a humanoid
character, usually has at least 50 degrees of freedom. In keyframing, an animator
must animate all these DOFs, one at a time. To construct a more realistic model, its
complexity is increased and the animator must keyframe more degrees of freedom.
Constraints, like the positions of legs and arms at specific times, are always
a problem because all DOFs of the character must be specified to satisfy them.
One possible way to treat this problem is to use inverse kinematics to simplify
keyframe definition: by posing an arm or a leg, the angles of the parent joints
(forearm, hip and knee) can be calculated so that the character reaches the
desired point.
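As an illustration (my own sketch, with hypothetical names, not the thesis's solver), the analytic inverse kinematics of a planar two-link limb, such as a thigh and shin reaching a foot target, can be written as:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Given a target (x, y) and bone lengths l1, l2, return the two joint
    angles (radians) of a planar two-link chain based at the origin."""
    d2 = x * x + y * y
    # Law of cosines gives the inner joint (elbow/knee) angle.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)                      # "elbow-down" solution
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

# Pose the foot at (1.2, 0.5); the hip and knee angles follow automatically.
hip, knee = two_link_ik(1.2, 0.5, 1.0, 1.0)
```

Real characters need full 3D IK over longer chains, but this shows why posing only an end effector already determines the parent joints.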
Another problem with this technique is the interpolation process. Usually
interpolation is done with smooth splines, which fail to model the high-frequency
variations that real human motion has. The number of keyframes set by the
animator is also an important aspect. If too few keyframes are set, the motion may lack
the details usually seen in live motion. To overcome this problem, trained anima-
tors achieve a high level of detail by setting more and more keyframes, but in this
case, at the expense of more time and effort.
Although keyframing is extensively used by animators, nowadays it is not a
topic of much research. Recent works related to keyframing are trying to improve
its quality using noise functions to describe variations in the motion signature.
Bodenheimer et al. [BSH99] described how to introduce natural-looking variability
into cyclic human motion animation using a noise function. Perlin and Goldberg
[PG96] presented Improv, a system for scripting interactive actors in a virtual world
using Perlin noise functions [Per85] to characterize personality and mood in human
motion.
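A rough sketch of that idea follows; it is my own construction, using a cosine-interpolated value noise as a simple stand-in for the gradient noise of [Per85], and all names are hypothetical:

```python
import math, random

random.seed(7)
LATTICE = [random.uniform(-1.0, 1.0) for _ in range(16)]

def value_noise(t):
    """Smooth pseudo-random curve: interpolate random lattice values with a
    cosine ease (a simple stand-in for Perlin's gradient noise)."""
    i = int(math.floor(t)) % len(LATTICE)
    j = (i + 1) % len(LATTICE)
    f = t - math.floor(t)
    s = (1.0 - math.cos(math.pi * f)) / 2.0     # cosine ease in [0, 1]
    return LATTICE[i] * (1.0 - s) + LATTICE[j] * s

def hip_angle(frame, amplitude=30.0, period=40.0, noise_gain=2.0):
    """A perfectly cyclic joint curve plus a small noise term, so successive
    cycles are similar but never identical."""
    cycle = amplitude * math.sin(2.0 * math.pi * frame / period)
    return cycle + noise_gain * value_noise(frame / 8.0)
```

The noise term stays within `noise_gain` of the pure cycle, adding the small variability that distinguishes live motion from a mathematically exact repetition.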
2.3 Physical Simulation
This technique tries to reduce the animator’s work by using physics to determine
the motion in situations where it can be clearly specified. Although these methods
have been successfully applied to animating cloth deformation (DeRose [DKT98]),
rigid bodies (Baraff [Bar94] and Moore [MW88]) and fluids (Foster and Fedkiw
[FF01]), the application of physics based methods for the generation of character
motion remains challenging.
Most of the work in physical simulation was done by researchers seeking ac-
curate models for use in biomechanical studies, as stated in Pandy [Pan01]. Such
models were found to be very complex: more than one muscle can control one
joint, muscles exert nonlinear forces on tendons and joints usually have complex
kinematics, involving sliding and rotation about multiple axes, as mentioned in
Delp and Loan [DL00].
Clearly, this type of modeling is not practical for computer graphics, where an
animator wants to quickly compose a wide variety of motions. In fact, an animator
most certainly will not know the proper configuration of muscles and bones or the
amount of energy required in a specific movement.
An alternative way to create animations of articulated figures using physics
was introduced by Witkin and Kass [WK88] in the method called spacetime constraints.
By treating the entire animated motion as one numerical problem, in contrast to most
previous methods, which consider each frame independently, spacetime constraint
methods allow motion editing that preserves the characteristics of the original
motion, as well as animation from incomplete observations. The general
approach is to specify some physical parameters of the character, like masses of each
limb and spring constants of the joints. Then using constraints, like leg and arm
positions at a specific time, an animator can control the character key positions. At
the end, the motion is determined by solving a constrained optimization problem,
where the character energy is minimized while the constraints are satisfied.
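The following toy, my own construction and far simpler than the formulation in [WK88], shows the structure of such a solve for one DOF: an energy term (here the sum of squared frame-to-frame velocities) is minimized by relaxation while pose constraints at chosen frames are held exactly:

```python
def spacetime_toy(n_frames, constraints, sweeps=5000):
    """constraints maps frame index -> required DOF value.  Free frames are
    relaxed toward the minimum of sum((x[i+1] - x[i])**2), a crude stand-in
    for the physical energy term."""
    x = [0.0] * n_frames
    for f, v in constraints.items():
        x[f] = v                                  # constraints held exactly
    for _ in range(sweeps):
        for i in range(1, n_frames - 1):
            if i not in constraints:
                x[i] = 0.5 * (x[i - 1] + x[i + 1])  # local energy minimum
    return x

# Pin the pose at frames 0, 10 and 20; the optimizer fills in the rest.
traj = spacetime_toy(21, {0: 0.0, 10: 45.0, 20: 30.0})
```

With this simplistic energy the optimum is just linear between the pins; the real method minimizes physically meaningful energies with nonlinear solvers, which is the source of the computational cost and instability.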
However, it is normally difficult to model complex human motion and to cor-
rectly specify masses and joint spring constants. Another problem is the high
computation cost for solving the constrained optimization problem. The most suc-
cessful attempts at using physical simulation of human motion come from robotics
research. For instance, Raibert [Rai02] introduced some form of feedback control
in the robot movement instead of just solving the equations of motion and predict-
ing the proper initial conditions.
Applying the same principle to character animation, Hodgins et al. [HWBO95]
developed a method to apply control systems to virtual humans. In other work,
Raibert et al. [RH91] applied these controls to create animations of humans per-
forming athletic events like running, biking and a gymnast vaulting. Although
the characters vaguely performed the activity being simulated, the motions did not
look realistic. Another problem is that each activity was treated separately, and it
turned out that their method cannot be used in the same way for different activities.
Due to lack of control, difficulty of use, instabilities and high computation cost,
physical simulation is not yet widely used for animating complete characters.
One possible way to include physics in character animation is to combine
spacetime constraints with other techniques, as described in the next section.
2.4 Motion Capture
In this technique joint angles of a performing actor are recorded via sensors. These
values are then used to create a character motion (Menache [Men99]). In the past,
such data was extremely difficult to obtain, as the sensor technology was costly.
However, recently the technology has improved, the costs decreased and, therefore,
this technique is becoming more available for general use.

Currently the most common techniques for obtaining motion capture data are
mechanical, optical, magnetic and video-based systems. In a mechanical system,
tracking is performed by having the subject wear a mechanical device, or
exoskeleton. Angle-measuring devices are placed at the exoskeleton's joints;
by measuring these joint angles, the subject's limb orientations are obtained.
The main advantages of this system are that it is accurate and cheap and that it
allows the use of haptic devices to generate feedback reactions. However, it is not
easy to perform fast and expressive motions, due to the weight of the exoskeleton
and the limited range of the angle-measurement devices. In addition, shifts in the
position of the exoskeleton cause errors in the motion capture process.
In an optical system, retro-reflective markers are attached to the body of the
subject. Then, a set of cameras surrounds the space where a subject moves and
each camera sends out a beam of infrared light, which is reflected back from the
markers. After the marker locations are recorded as 2D positions in the
camera image planes, post-processing finds the 3D location of each marker at each
time instant and then solves for the joint configurations. The main advantages
of optical systems are the very high rates of data collection and the possibility to
capture a great range of motion in a relatively large space. However, such systems
require intensive post-processing computations and present problems with occluded
markers and with capturing the motions of more than one subject, where the markers
can overlap each other.
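The 3D reconstruction step can be sketched with the ray "midpoint" method (an illustrative simplification of mine; production systems use calibrated projective triangulation): each camera that sees a marker defines a ray from its optical center through the 2D detection, and the marker is estimated as the point closest to both rays.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate_marker(c1, d1, c2, d2):
    """c1, c2: camera centers; d1, d2: ray directions toward the observed
    2D marker.  Returns the midpoint of the closest points on the two rays."""
    r = [q - p for p, q in zip(c1, c2)]          # vector from c1 to c2
    a11, a12, a22 = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    b1, b2 = dot(r, d1), dot(r, d2)
    denom = a11 * a22 - a12 * a12                # zero only for parallel rays
    t1 = (a22 * b1 - a12 * b2) / denom
    t2 = (a12 * b1 - a11 * b2) / denom
    p1 = [p + t1 * d for p, d in zip(c1, d1)]    # closest point on ray 1
    p2 = [p + t2 * d for p, d in zip(c2, d2)]    # closest point on ray 2
    return [(u + v) / 2.0 for u, v in zip(p1, p2)]
```

With noisy detections the two rays do not intersect exactly, which is why the midpoint (or a proper least-squares triangulation) is used rather than a direct intersection.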
In a magnetic system, a known magnetic field is set up and a subject wears
sensors that detect location and orientation of each limb based on the magnetic
field. This technique allows real-time data collection and there are no problems
with occlusion. On the other hand, this method is very sensitive to metallic objects
and interference in the capture area. In addition, due to the wires that must be
attached to the sensors, many motions are awkward for the subject.
Systems of the last category, video-based systems, attempt to recover the motion
data using only a few ordinary video cameras. Although this is an attractive technique,
it is complicated. Standard computer vision techniques do not work for an artic-
ulated figure, in which many of the motions cannot be defined by a simple affine
transformation, but involve rotations about all the joints in the kinematic chain.
Currently, much research aims to make video-based motion acquisition more
accurate and, therefore, practical for motion capture.
The interest in using motion capture for creating character animation is increas-
ing. The main reason is that this technique can provide motion data for all degrees
of freedom at a high level of detail. With motion capture, all the details of a motion
are inherent in the data and thus come for free. In addition, a
transfer of data to a different character with different skeleton dimensions is also
possible.
On the other hand, this technique also has disadvantages. The main problem is
the lack of flexibility. Since it is difficult to modify a captured motion after it has
been collected, an animator should know exactly what he wants. In general this is
not the case, since the process of creating a character animation is normally
evolutionary. Usually, an animator has only a coarse impression of the desired
motion before he captures it, and minor corrections may be needed at any time
thereafter. In addition, motion capture sessions are still costly and labor-intensive,
which makes repeating them prohibitive.
As a result, a great deal of research in recent years has aimed at providing better
ways of editing motion capture data after it is collected. A general approach is to
adapt the motion to different constraints while preserving the style of the original
motion. Wiley and Hahn [WH97] developed a method providing inverse kinemat-
ics capability by mixing motions from a database to create a new animation that
matches a certain specification. Witkin and Popovic [WP95] developed a method
in which the motion data is warped between keyframe-like constraints set by the
animator; new sequences can also be composed by overlapping and blending warped
motion clips. Rose et
al. [RCB98] developed a method which uses radial basis functions and low order
polynomials to interpolate between example motions while maintaining inverse
kinematic constraints.
As mentioned in the previous section, spacetime constraints can be used in or-
der to include physics in character animation. Gleicher [Gle97] presented a method
that allows an animator to start with an initial animation and to interactively repose
the character. A spacetime constraint solver is then used to minimize the difference
between the new and old motion, subject to constraints specified by the animator.
A similar approach was also used by Gleicher [Gle98] to retarget motions to char-
acters of different dimensions. Lee and Shin [LS99] combined a hierarchical curve
fitting technique with an inverse kinematics solver to adapt the motion. Popovic
and Witkin [PW99] developed a method that uses a reduced dimensionality space
and dynamics to perform the editing process. Rose et al. [RGBC96] described
the generation of motion transitions using a combination of spacetime and inverse
kinematics constraints in order to create seamless and dynamically plausible tran-
sitions between motion segments.
A more general problem with motion capture is that it is not an intuitive way to
start a character animation. Since many factors (such as the environment) influence
the motion, the final motion sequence will not be known with all details right from
the beginning. Animators are usually trained to use keyframes. They will often
build an animation by first making an initial motion sketch with a few keyframes
and then add complexity and detail on top of it later.
Therefore, by combining the strengths of keyframing and motion capture, the
process of animating a character can be simplified. In our approach, an
animator starts a character animation with keyframing, a method that he is familiar
with, to animate some degrees of freedom. After that, he uses the motion capture
database interactively to enhance the initial animation.
2.5 Project Implementation Aspects
The methods described in the next chapters are implemented in our prototype
system using the C++ programming language on the Linux platform. To facilitate
skeleton and animation manipulation, the free open-source character animation
library CAL3D is used in the project.
The character animation library, CAL3D, is coded in C++ and uses the STL
containers to store the data. It provides basic data structures for skeleton-based
character animation: sequencing and blending of animations, handling of bones,
skeletons, materials and meshes. A character motion is composed of two kinds of
transformations: translations and rotations. Translations are represented as vectors
and rotations as quaternions. Compared with other rotation representations (e.g.
explicit matrices or Euler angles), keyframe interpolation using quaternions is
accurate and intuitive.
2.5.1 Motion Capture Database
The motion capture data used in this work was obtained from MOTEK, a motion
capture company that provides a set of motion sequences to the research community.
The company uses a VICON 8 optical motion capture system with 8 to 24
cameras for data acquisition. The cameras are placed around the capture space and
track the positions of markers attached to the body of a performer; triangulation
over the 2D observations from all cameras yields the 3D position of each marker
at every sample. For data processing, the Diva software is used, which applies
template matching algorithms to resolve occlusions and marker disappearance.
As in any optical system, the cameras send out beams of infrared light that are
reflected back by the markers attached to the body of the subject. After the marker
positions are recorded in 2D from each camera, post-processing finds the 3D
location of each marker at each point in time and then solves for the joint
configurations. At the end, the data is delivered as a set of translations and
rotations of the joints of an articulated body that corresponds to the description
of a human character. Figure 2.1 shows the motion capture session and the
equipment used to create the motions in the database.
To evaluate the methods presented in this work, all the sequences of the database
that represent variations of one particular action were considered. In our project we
chose human walking, since it is the most common action that needs to be animated
in a realistic way. Starting with an initial database of 129 different motions, each
40 frames long, we used 3D Studio MAX™ version 5.1 to increase the length of all
animations. Since walking can be considered a cyclic motion, we created each final
animation by repeating the original four times.
At the end, each animation in the database has the length of 160 frames. Using
the exporting software provided by the CAL3D library, these animations were ex-
ported to be used in our prototype as CAL3D animation files. Figure 2.2 shows the
motion data plotted over time for different degrees of freedom (i.e. joint angles)
for a particular animation sequence in the database.