
Human-Computer Interaction Series
Editors-in-chief
John Karat
IBM Thomas J. Watson Research Center (USA)
Jean Vanderdonckt
Université catholique de Louvain (Belgium)
Editorial Board
Gaëlle Calvary, LIG-University of Grenoble 1, France
John Carroll, School of Information Sciences & Technology, Penn State University, USA
Gilbert Cockton, Northumbria University, UK
Larry Constantine, University of Madeira, Portugal, and Constantine & Lockwood Ltd,
Rowley, MA, USA
Steven Feiner, Columbia University, USA
Peter Forbrig, Universität Rostock, Germany
Elizabeth Furtado, University of Fortaleza, Brazil
Hans Gellersen, Lancaster University, UK
Robert Jacob, Tufts University, USA
Hilary Johnson, University of Bath, UK
Kumiyo Nakakoji, University of Tokyo, Japan
Philippe Palanque, Université Paul Sabatier, France
Oscar Pastor, University of Valencia, Spain
Fabio Pianesi, Bruno Kessler Foundation (FBK), Italy
Costin Pribeanu, National Institute for Research & Development in Informatics, Romania
Gerd Szwillus, Universität Paderborn, Germany
Manfred Tscheligi, University of Salzburg, Austria
Gerrit van der Veer, University of Twente, The Netherlands
Shumin Zhai, IBM Almaden Research Center, USA
Thomas Ziegert, SAP Research CEC Darmstadt, Germany
Human-Computer Interaction is a multidisciplinary field focused on human aspects of the
development of computer technology. As computer-based technology becomes increasingly
pervasive – not just in developed countries, but worldwide – the need to take a
human-centered approach in the design and development of this technology becomes ever
more important. For roughly 30 years now, researchers and practitioners in computational
and behavioral sciences have worked to identify theory and practice that influences the
direction of these technologies, and this diverse work makes up the field of human-computer
interaction. Broadly speaking, it includes the study of what technology might be able to do
for people and how people might interact with the technology.
In this series, we present work which advances the science and technology of developing
systems which are both effective and satisfying for people in a wide variety of contexts. The
human-computer interaction series will focus on theoretical perspectives (such as formal
approaches drawn from a variety of behavioral sciences), practical approaches (such as the
techniques for effectively integrating user needs in system development), and social issues
(such as the determinants of utility, usability and acceptability).
For further volumes:
Emmanuel Dubois · Philip Gray · Laurence Nigay
Editors
The Engineering of Mixed
Reality Systems
Editors
Dr. Emmanuel Dubois
Université Toulouse III - Tarbes
Institut de Recherches en
Informatique de Toulouse (IRIT)
France
Pr. Laurence Nigay
Université Grenoble I
Labo. d’Informatique de
Grenoble (LIG)
France

Philip Gray
University of Glasgow
Dept. Computing Science
UK
ISSN 1571-5035
ISBN 978-1-84882-732-5 e-ISBN 978-1-84882-733-2
DOI 10.1007/978-1-84882-733-2
Springer London Dordrecht Heidelberg New York
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
Library of Congress Control Number: 2009938036
© Springer-Verlag London Limited 2010
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as
permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced,
stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers,
or in the case of reprographic reproduction in accordance with the terms of licenses issued by the
Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to
the publishers.
The use of registered names, trademarks, etc., in this publication does not imply, even in the absence of a
specific statement, that such names are exempt from the relevant laws and regulations and therefore free
for general use.
The publisher makes no representation, express or implied, with regard to the accuracy of the information
contained in this book and cannot accept any legal responsibility or liability for any errors or omissions
that may be made.
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Contents
1 Introduction 1
Emmanuel Dubois, Phil Gray, and Laurence Nigay
Part I Interaction Design

2 An Integrating Framework for Mixed Systems 9
Céline Coutrix and Laurence Nigay
3 A Holistic Approach to Design and Evaluation of Mixed
Reality Systems 33
Susanna Nilsson, Björn Johansson, and Arne Jönsson
4 Embedded Mixed Reality Environments 57
Holger Schnädelbach, Areti Galani, and Martin Flintham
5 The Semantic Environment: Heuristics for a Cross-Context
Human–Information Interaction Model 79
Andrea Resmini and Luca Rosati
6 Tangible Interaction in Mixed Reality Systems 101
Nadine Couture, Guillaume Rivière, and Patrick Reuter
7 Designing a Mixed Reality Intergenerational Entertainment
System 121
Eng Tat Khoo, Tim Merritt, and Adrian David Cheok
8 Auditory-Induced Presence in Mixed Reality Environments
and Related Technology 143
Pontus Larsson, Aleksander Väljamäe, Daniel Västfjäll,
Ana Tajadura-Jiménez, and Mendel Kleiner
9 An Exploration of Exertion in Mixed Reality Systems via
the “Table Tennis for Three” Game 165
Florian ‘Floyd’ Mueller, Martin R. Gibbs, and Frank Vetere
10 Developing Mixed Interactive Systems: A Model-Based
Process for Generating and Managing Design Solutions 183
Guillaume Gauffre, Syrine Charfi, Christophe Bortolaso,
Cédric Bach, and Emmanuel Dubois
Part II Software Design and Implementation
11 Designing Outdoor Mixed Reality Hardware Systems 211

Benjamin Avery, Ross T. Smith, Wayne Piekarski, and
Bruce H. Thomas
12 Multimodal Excitatory Interfaces with Automatic Content
Classification 233
John Williamson and Roderick Murray-Smith
13 Management of Tracking for Mixed and Augmented
Reality Systems 251
Peter Keitler, Daniel Pustka, Manuel Huber, Florian Echtler,
and Gudrun Klinker
14 Authoring Immersive Mixed Reality Experiences 275
Jan M.V. Misker and Jelle van der Ster
15 Fiia: A Model-Based Approach to Engineering
Collaborative Augmented Reality 293
Christopher Wolfe, J. David Smith, W. Greg Phillips, and
T.C. Nicholas Graham
16 A Software Engineering Method for the Design of Mixed
Reality Systems 313
S. Dupuy-Chessa, G. Godet-Bar, J L. Pérez-Medina, D. Rieu,
and D. Juras
Part III Applications of Mixed Reality
17 Enhancing Health-Care Services with Mixed Reality Systems 337
Vladimir Stantchev
18 The eXperience Induction Machine: A New Paradigm
for Mixed-Reality Interaction Design and Psychological
Experimentation 357
Ulysses Bernardet, Sergi Bermúdez i Badia, Armin Duff,
Martin Inderbitzin, Sylvain Le Groux, Jônatas Manzolli,
Zenon Mathews, Anna Mura, Aleksander Väljamäe, and
Paul F.M.J Verschure
19 MyCoach: In Situ User Evaluation of a Virtual and
Physical Coach for Running 381
Margit Biemans, Timber Haaker, and Ellen Szwajcer
20 The RoboCup Mixed Reality League – A Case Study 399
Reinhard Gerndt, Matthias Bohnen, Rodrigo da Silva Guerra,
and Minoru Asada
21 Mixed-Reality Prototypes to Support Early Creative Design 419
Stéphane Safin, Vincent Delfosse, and Pierre Leclercq
Index 447
Contributors
Minoru Asada Graduate School of Engineering, Osaka University, Osaka, Japan,

Benjamin Avery Wearable Computer Laboratory, University of South Australia,
Mawson Lakes, SA, Australia, 5095,
Cédric Bach University of Toulouse, IRIT, 118 route de Narbonne, 31062
Toulouse Cedex 9, France,
Sergi Bermúdez i Badia SPECS@IUA: Laboratory for Synthetic Perceptive,
Emotive, and Cognitive Systems, Universitat Pompeu Fabra, Tànger 135, 08018
Barcelona, Spain
Ulysses Bernardet SPECS@IUA: Laboratory for Synthetic Perceptive, Emotive,
and Cognitive Systems, Universitat Pompeu Fabra, Tànger 135, 08018 Barcelona,
Spain,
Margit Biemans Novay, 7500 AN Enschede, The Netherlands,

Matthias Bohnen University Koblenz-Landau, Koblenz, Germany,

Christophe Bortolaso University of Toulouse, IRIT, 118 route de Narbonne,
31062 Toulouse Cedex 9, France,
Syrine Charfi University of Toulouse, IRIT, 118 route de Narbonne, 31062
Toulouse Cedex 9, France, Syrine.Charfi@irit.fr

Adrian David Cheok Mixed Reality Lab, 21 Lower Kent Ridge Rd, National
University of Singapore, Singapore 119077, Singapore,

Céline Coutrix Grenoble Informatics Laboratory, 385 avenue de la Bibliothèque,
Domaine Universitaire, B.P. 53, 38 041 Grenoble cedex 9, France,

Nadine Couture ESTIA-RECHERCHE, Technopole Izarbel, 64210 Bidart,
France; LaBRI, 351, cours de la Libération, 33405 Talence, France,

Vincent Delfosse LUCID-ULg: Lab for User Cognition and Innovative Design –
University of Liège – Belgium,
Emmanuel Dubois University of Toulouse – Tarbes, IRIT, 118 route de
Narbonne, 31062 Toulouse Cedex 9, France,
Armin Duff SPECS@IUA: Laboratory for Synthetic Perceptive, Emotive, and
Cognitive Systems, Universitat Pompeu Fabra, Tànger 135, 08018 Barcelona,
Spain
Sophie Dupuy-Chessa Laboratory of Informatics of Grenoble, Grenoble
Université, 385 rue de la bibliothèque, B.P. 53, 38041 Grenoble Cedex 9, France,

Florian Echtler Technische Universität München, München, Germany
Martin Flintham Mixed Reality Lab, Computer Science, University of
Nottingham, Jubilee Campus, Nottingham, NG8 1BB, UK,
Areti Galani International Centre for Cultural and Heritage Studies, Newcastle
University, Bruce Building, Newcastle upon Tyne, NE1 7RU, UK,

Guillaume Gauffre University of Toulouse, IRIT, 118 route de Narbonne, 31062
Toulouse Cedex 9, France,
Reinhard Gerndt University of Applied Sciences Braunschweig/Wolfenbuettel,
Wolfenbuettel, Germany,
Martin R. Gibbs Interaction Design Group, Department of Information Systems,
The University of Melbourne, Parkville, VIC, Australia,

Guillaume Godet-Bar Laboratory of Informatics of Grenoble, Grenoble
Université, 385 rue de la bibliothèque, B.P. 53, 38041 Grenoble Cedex 9, France,

T.C. Nicholas Graham School of Computing, Queen’s University, Kingston,
Canada K7L3N6,
Phil Gray University of Glasgow, DCS, Glasgow G12 8QQ, Scotland, UK,

Timber Haaker Novay, 7500 AN Enschede, The Netherlands,

Manuel Huber Technische Universität München, München, Germany,

Martin Inderbitzin SPECS@IUA: Laboratory for Synthetic Perceptive, Emotive,
and Cognitive Systems, Universitat Pompeu Fabra, Tànger 135, 08018 Barcelona,
Spain
Björn Johansson Department of Computer and Information Science, Linköping
University, SaabSecurity, Santa Anna IT Research Institute AB, Linköping,
Sweden,
Arne Jönsson Department of Computer and Information Science, Linköping
University, SaabSecurity, Santa Anna IT Research Institute AB, Linköping,
Sweden,
David Juras Laboratory of Informatics of Grenoble, Grenoble Université, 385 rue
de la bibliothèque, B.P. 53, 38041 Grenoble Cedex 9, France,
Peter Keitler Technische Universität München, München, Germany,

Eng Tat Khoo Mixed Reality Lab, 21 Lower Kent Ridge Rd, National University
of Singapore, Singapore 119077, Singapore,
Mendel Kleiner Department of Applied Acoustics, Chalmers University of
Technology, SE-412 96, Göteborg, Sweden,
Gudrun Klinker Technische Universität München, München, Germany,

Pontus Larsson Department of Applied Acoustics, Chalmers University of
Technology, SE-412 96, Göteborg, Sweden; Volvo Technology Corporation,
SE-405 08 Göteborg, Sweden,
Pierre Leclercq LUCID-ULg: Lab for User Cognition and Innovative Design –
University of Liège – Belgium,
Sylvain Le Groux SPECS@IUA: Laboratory for Synthetic Perceptive, Emotive,
and Cognitive Systems, Universitat Pompeu Fabra, Tànger 135, 08018 Barcelona,
Spain
Jônatas Manzolli Interdisciplinary Nucleus for Sound Communications (NICS),
State University of Campinas, Rua da Reitoria, 165 – Cidade Universitária
"Zeferino Vaz" Campinas, São Paulo, Brasil
Zenon Mathews SPECS@IUA: Laboratory for Synthetic Perceptive, Emotive,
and Cognitive Systems, Universitat Pompeu Fabra, Tànger 135, 08018 Barcelona,
Spain
Tim Merritt Mixed Reality Lab, 21 Lower Kent Ridge Rd, National University of
Singapore, Singapore 119077, Singapore,
Jan M.V. Misker V2_ Institute for the Unstable Media, Eendrachtsstraat 10, 3012
XL, Rotterdam, The Netherlands,
Florian ‘Floyd’ Mueller Interaction Design Group, Department of Information
Systems, The University of Melbourne, Parkville, VIC, Australia,
floyd@floydmueller.com
Anna Mura SPECS@IUA: Laboratory for Synthetic Perceptive, Emotive, and
Cognitive Systems, Universitat Pompeu Fabra, Tànger 135, 08018 Barcelona,
Spain

Roderick Murray-Smith University of Glasgow, Glasgow, UK,

Laurence Nigay Grenoble Informatics Laboratory, 385 avenue de la Bibliothèque,
Domaine Universitaire, B.P. 53, 38 041 Grenoble cedex 9, France,

Susanna Nilsson Department of Computer and Information Science, Linköping
University, SaabSecurity, Santa Anna IT Research Institute AB, Linköping,
Sweden,
J L. Pérez-Medina Laboratory of Informatics of Grenoble, Grenoble Université,
385 rue de la bibliothèque, B.P. 53, 38041 Grenoble Cedex 9, France,

W. Greg Phillips Department of Electrical and Computer Engineering, Royal
Military College of Canada, Kingston, Canada K7K7B4,
Wayne Piekarski Wearable Computer Laboratory, University of South Australia,
Mawson Lakes, SA 5095, Australia,
Daniel Pustka Technische Universität München, München, Germany
Andrea Resmini University of Bologna, Via Carboni 2, 43100 Parma, Italy,

Patrick Reuter LaBRI, 351, cours de la Libération, 33405 Talence, France;
INRIA Bordeaux – Sud Ouest, 351, cours de la Libération, 33405 Talence France;
Université Bordeaux 2, 146 rue Léo Saignat, 33076 Bordeaux, France,

Dominique Rieu Laboratory of Informatics of Grenoble, Grenoble Université,
385 rue de la bibliothèque, B.P. 53, 38041 Grenoble Cedex 9, France,

Guillaume Rivière ESTIA-RECHERCHE, Technopole Izarbel, 64210 Bidart,
France; LaBRI, 351, cours de la Libération, 33405 Talence, France,

Luca Rosati University for Foreigners of Perugia, via XX settembre 25, 06124
Perugia, Italy,

Stéphane Safin LUCID-ULg: Lab for User Cognition and Innovative Design –
University of Liège – Liège, Belgium, Stephane.Safi
Holger Schnädelbach Mixed Reality Lab, Computer Science, University of
Nottingham, Jubilee Campus, Nottingham, NG8 1BB, UK,
Rodrigo da Silva Guerra Graduate School of Engineering, Osaka University,
Osaka, Japan,
J. David Smith School of Computing, Queen’s University, Kingston, Canada
K7L3N6,
Ross T. Smith Wearable Computer Laboratory, University of South Australia,
Mawson Lakes, SA 5095, Australia,
Vladimir Stantchev Public Services and SOA Research Group, Berlin Institute of
Technology and Fachhochschule für Ökonomie und Management, Berlin,
Germany,
Jelle van der Ster V2_ Institute for the Unstable Media, Eendrachtsstraat 10,
3012 XL, Rotterdam, The Netherlands,
Ellen Szwajcer Novay, 7500 AN Enschede, The Netherlands,

Ana Tajadura-Jiménez Department of Applied Acoustics, Chalmers University
of Technology, SE-412 96, Göteborg, Sweden,
Bruce H. Thomas Wearable Computer Laboratory, University of South Australia,
Mawson Lakes, SA 5095, Australia,
Aleksander Väljamäe Department of Applied Acoustics, Chalmers University of
Technology, SE-412 96 Göteborg, Sweden; Research Laboratory for Synthetic
Perceptive, Emotive and Cognitive Systems (SPECS), Institute of Audiovisual
Studies, Universitat Pompeu Fabra, Barcelona, Spain,

Daniel Västfjäll Department of Applied Acoustics, Chalmers University of
Technology, SE-412 96, Göteborg, Sweden; Department of Psychology, Göteborg
University, Göteborg, Sweden,

Paul F.M.J. Verschure SPECS@IUA: Laboratory for Synthetic Perceptive,
Emotive, and Cognitive Systems, Universitat Pompeu Fabra, Tànger 135, 08018
Barcelona, Spain; ICREA, Technology Department, Universitat Pompeu Fabra,
Tànger 135, 08018 Barcelona, Spain,
Frank Vetere Interaction Design Group, Department of Information Systems, The
University of Melbourne, Parkville, VIC, Australia,
John Williamson University of Glasgow, Glasgow, UK,
Christopher Wolfe School of Computing, Queen’s University, Kingston, Canada
K7L3N6,
Chapter 1
Introduction
Emmanuel Dubois, Phil Gray, and Laurence Nigay
1.1 Mixed Reality Systems: A Booming Domain
Human–Computer Interaction (HCI) is no longer restricted to interaction between
users and computers via keyboard and screen: Currently one of the most challenging
aspects of interactive systems is the integration of the physical and digital aspects of
interaction in a smooth and usable way. The design challenge of such mixed reality
(MR) systems lies in the fluid and harmonious fusion of the physical and digital
worlds. Examples of MR systems include tangible user interfaces, augmented real-
ity, augmented virtuality and embodied interfaces. The diversity of terms highlights
the ever-growing interest in MR systems and the very dynamic and challenging
domain they define.
1.1.1 Variety of Mixed Reality Systems
The growing interest of designers and developers in mixed reality systems is due
to the dual need of users to both benefit from computers and stay in contact with the
physical world. A first attempt to satisfy this requirement consists of augmenting
the real world with computerized information: This is the rationale for augmented
reality (AR). Another approach consists of making the interaction with the computer
more realistic. Interaction with the computer is augmented by objects and actions
in the physical world. Examples involve input modalities based on real objects,

such as bricks also called tangible user i nterfaces and more generally augmented
virtuality (AV).
E. Dubois (B)
University of Toulouse, IRIT, 118 route de Narbonne, 31062 Toulouse Cedex 9, France
e-mail:
E. Dubois et al. (eds.), The Engineering of Mixed Reality Systems, Human-Computer
Interaction Series, DOI 10.1007/978-1-84882-733-2_1,
© Springer-Verlag London Limited 2010
This anthology includes both types of mixed reality systems. For exam-
ple, Chapter 6 considers tangible interaction, while Chapter 11 focuses on
augmented reality in the case of a mobile user.
1.1.2 Variety of Application Domains
There are many application domains of mixed reality, including games, art, architec-
ture and health care. The variety of application domains makes it difficult to arrive at
a consensus definition of mixed reality systems: people with distinct
goals use the term “Mixed Reality” to mean different things.
This anthology is an attempt to highlight this variety. The last part of the
anthology is dedicated to applications of mixed reality.
1.2 Mixed Reality Engineering
In this very dynamic research and industrial context, the goal of this anthology is to
go one step further in understanding this challenging domain. The book cov-
ers topics on mixed reality engineering by addressing the design and development of
such systems: it includes chapters on conceptual modelling, design and evaluation
case studies, prototyping and development tools, and software technology.
The book brings together papers on a broad range of mixed reality engineer-
ing issues organized into three main sections:

• Interaction design
• Software design and implementation
• Applications of mixed reality.
1.2.1 Interaction Design
The first section, dedicated to interaction design, includes nine chapters.
1.2.1.1 Elements of Design
• Chapter 2: An Integrating Framework for Mixed Systems. This chapter presents a
framework for comparing mixed reality systems and exploring the design space.
It proposes the concept of a mixed object as a unified point of view on mixed systems. Based
on intrinsic and extrinsic characteristics of mixed objects, this chapter illustrates
the use of this framework on several mixed systems.
• Chapter 3: A Holistic Approach to Design and Evaluation of Mixed Reality
Systems. This chapter focuses on the usefulness of a mixed reality system, which
complements the usability of the system. As part of an iterative user-
centred design approach, this chapter highlights the value of qualitative methods
in natural settings for evaluating the usefulness of a mixed reality system. Two
case studies of mixed reality systems and their qualitative experimental results
are presented.
• Chapter 4: Embedded Mixed Reality Environments. This chapter describes,
analyses and compares the contexts of use of three different mixed reality
environments in terms of interaction space, asymmetries between the physical and
digital worlds, and the social interaction these environments trigger, before discussing
the potential impact of these dimensions on interaction with such environments.
• Chapter 5: The Semantic Environment: Heuristics for a Cross-Context Human–
Information Interaction Model. This chapter addresses the challenge of how to
support activities that involve complex coordination between digital and physical
domains, such as shopping experiences that combine both online and store-based
aspects. The authors present an information-oriented model that offers a frame-
work for the analysis of such mixed environments in terms of information and
product design, and spatial layout.
1.2.1.2 Specific Design Issues
• Chapter 6: Tangible Interaction in Mixed Reality Systems. This chapter discusses
the design of tangible interaction techniques, a particular form of mixed real-
ity environments. An engineering-oriented software/hardware co-design process
is introduced and illustrated with the development of three different tangible
user interfaces for real-world applications. Feedback collected from user studies
prompts a discussion of the use of such interaction techniques.
• Chapter 7: Designing a Mixed Reality Intergenerational Entertainment System.
This chapter focuses on the particular design issue of mixed reality for intergener-
ational family members (grandparents, parents and grandchildren simultaneously
using the mixed system). The focus is on intergenerational family entertain-
ment and the chapter presents a user-centred design process for designing a
collaborative Space Invaders mixed game.
• Chapter 8: Auditory-Induced Presence in Mixed Reality Environments and
Related Technology. This chapter explores how audio-based techniques can be
used to develop MR systems that enhance the sense of presence, one criterion for
comparing the multitude of heterogeneous MR systems.
• Chapter 9: An Exploration of Exertion in Mixed Reality Systems via the “Table
Tennis for Three” Game. This chapter explores dimensions relevant for the design
of MR systems that involve a user’s body in the interaction. A case study is used
throughout the chapter to illustrate the different dimensions of the design of such
interaction.
1.2.1.3 Structuring the Design
• Chapter 10: Developing Mixed Interactive Systems: A Model-Based Process
for Generating and Managing Design Solutions. Using a classic model-based
transformational approach, the authors demonstrate via a worked example how
the ASUR family of models can be used to support the progressive refinement of
a mixed reality system design from conceptual model to software realization.

1.2.2 Software Design and Implementation
The second section addresses technical platforms and solutions for interaction tech-
niques, development tools and a global view of the software development process.
This section includes six chapters.
1.2.2.1 Technical Solutions and Interaction Techniques
• Chapter 11: Designing Outdoor Mixed Reality Hardware Systems. This chapter
presents the design of an outdoor mixed reality system from the point of view of
the involved hardware for localization, I/O interaction, computer systems and bat-
teries, and finally presents a set of lessons learnt related to technical considerations,
computing system management and I/O interactions.
• Chapter 12: Multimodal Excitatory Interfaces with Automatic Content Classific-
ation. This chapter presents a technical solution supporting the “active percep-
tion” concept: the goal is to encode information descriptor through the use of
vibrotactile and sonic feedback in response to user’s action on a device that
contains the information.
1.2.2.2 Platform: Prototyping, Development and Authoring Tools
• Chapter 13: Management of Tracking for Mixed and Augmented Reality Systems.
This chapter describes a platform, namely trackman, built on top of the UbiTrack
middleware, which allows a constellation of sensors to be modelled in order to quickly
define a tracking setup based on a spatial relationship graph and a data flow diagram.
This is done in a graphical editor.
• Chapter 14: Authoring Immersive Mixed Reality Experiences. This chapter
presents a platform for authoring immersive mixed reality experiences for artistic
use. The platform enables artists and designers to create the content for immer-
sive mixed reality environments. An example of a production, an art game created
by an artist together with a programmer, is presented.
• Chapter 15: Fiia: A Model-Based Approach to Engineering Collaborative
Augmented Reality. This chapter presents and illustrates a platform supporting
the implementation of a collaborative and distributed augmented reality system

on the basis of a high-level notation that models the system while abstracting away
distribution and software implementation considerations.
1.2.2.3 Life Cycle
• Chapter 16: A Software Engineering Method for the Design of Mixed Reality
Systems. Like the approach described in Chapter 10, the software engineering
process model presented here proposes that mixed reality systems are developed
via the progressive refinement of a set of linked models. Here the Symphony
development method has been augmented to accommodate mixed reality con-
cerns, including the physical aspects of interaction and the richer representational
options of mixed systems.
1.2.3 Applications of Mixed Reality
The final section consists of a set of detailed case studies that highlight the appli-
cation of mixed reality in a number of fields, including emergency management,
games, and health care. The section includes five chapters.
• Chapter 17: Enhancing Health-Care Services with Mixed Reality Systems. This
chapter presents the steps of a development approach for enhancing health-care
services with mixed reality systems. These steps are
illustrated by the design and development of a location-aware system
in the surgical sector of a hospital.
• Chapter 18: The eXperience Induction Machine: A New Paradigm for Mixed-
Reality Interaction Design and Psychological Experimentation. The eXperience
Induction Machine is a novel and rich mixed reality environment that offers a test
bed for immersive applications. The authors describe the environment, present a
justification for their system in terms of its role in exploring the psychological and
social aspects of mixed reality interaction and give two examples of experiments
performed using it.
• Chapter 19: MyCoach: In Situ User Evaluation of a Virtual and Physical Coach
for Running. The work presented in this chapter is rather different from the others in
this anthology. MyCoach is a mobile system designed to work in concert with
a human coach to help amateur runners improve their
performance. The study examines the potential of such an approach compared to
a non-augmented approach that uses a conventional human coach alone.
• Chapter 20: The RoboCup Mixed Reality League – A Case Study. This chapter
describes a mixed reality platform where robots are involved. It provides a way
to tightly couple the real and virtual worlds in a two-way interaction. The case study
is a soccer game played by micro-robots on a table.
• Chapter 21: Mixed-Reality Prototypes to Support Early Creative Design. This
chapter discusses EsQUIsE, a mixed reality system that supports early stage
architectural design. The system and its rationale are presented, followed by a
description of a user-based evaluation and a discussion of its results
in terms of the system’s mixed reality features.
Chapter 2
An Integrating Framework for Mixed Systems
Céline Coutrix and Laurence Nigay
Abstract Technological advances in hardware manufacturing have led to an extended
range of possibilities for designing physical–digital objects involved in a mixed
system. Mixed systems can take various forms and include augmented reality,
augmented virtuality, and tangible systems. In this very dynamic context, it is diffi-
cult to compare existing mixed systems and to systematically explore the design
space. Addressing this design problem, this chapter presents a unified point of
view on mixed systems by focusing on mixed objects involved in interaction, i.e.,
hybrid physical–digital objects straddling physical and digital worlds. Our integrat-
ing framework is made of two complementary facets of a mixed object: we define
intrinsic as well as extrinsic characteristics of an object by considering its role in
the interaction. Such characteristics of an object are useful for comparing existing
mixed systems at a fine-grain level. The taxonomic power of these characteristics is
discussed in the context of existing mixed systems from the literature. Their gener-
ative power is illustrated by considering a system, Roam, which we designed and
developed.

Keywords Mixed systems · Mixed object · Interaction model · Characterization
space · Taxonomy
2.1 Introduction
The growing interest in mixed interactive systems is due to the dual need of
users to both benefit from computers and stay in contact with the physical world.
Mixed systems can take various forms and include augmented reality, augmented
C. Coutrix (B)
Grenoble Informatics Laboratory, 385 avenue de la Bibliothèque, Domaine Universitaire,
B.P. 53, 38 041, Grenoble cedex 9, France
e-mail:
E. Dubois et al. (eds.), The Engineering of Mixed Reality Systems, Human-Computer
Interaction Series, DOI 10.1007/978-1-84882-733-2_2,
© Springer-Verlag London Limited 2010
virtuality, and tangible systems. Although mixed systems are becoming more preva-
lent, we still do not have a clear understanding of this interaction paradigm. In
particular, we lack ways of capitalizing on our experience and of explaining
design choices to other designers. In addition, we are not
able to explore the design space in a systematic way and, as a result, quite often
find a better solution only after development is finished. Even though several con-
ceptual results exist for understanding and designing such systems, they do not
address the entire design process; they remain local in scope and are not related to each other. As a
consequence, it is difficult to compare existing mixed reality systems and explore
new designs.
Rather than presenting yet another taxonomy that would not improve the clarity
of this domain, we capitalize on existing research in our framework:
• We encapsulate related work in order to provide a coherent, integrating, and
unifying framework.
• We identify overlaps between existing studies, so that we can contribute to a
better comprehension of the domain.
• We refine existing taxonomies as well as identify new characteristics and uncover
areas to be explored in the design space.
The basis of our integrating framework is that we take the viewpoint of
the objects involved in interaction with mixed systems, namely mixed objects,
i.e., hybrid physical–digital objects straddling physical and digital worlds. Our
framework is therefore made of characteristics of mixed objects. The charac-
teristics are useful for analysis and comparison of existing systems as well
as for design: indeed the characteristics allow generation of ideas and choice
of design alternatives. Since these characteristics are also used for design, we
organized them according to two points of view of a mixed object that make
sense for design: intrinsic and extrinsic characteristics. These two sets of char-
acteristics enable designers to study the reusability of their design for differ-
ent application contexts. Indeed intrinsic characteristics of a mixed object are
not modified from one context to another, whereas extrinsic characteristics are
modified.
In this chapter, we first recall our definition of a mixed object [8, 9] and then
present the corresponding intrinsic characterization space of a mixed object while
demonstrating its taxonomic power. We then focus on interaction with mixed objects
[8]: we present the resulting extrinsic characterization space of a mixed object and
study its taxonomic power. The taxonomic power of our intrinsic and extrinsic char-
acteristic framework is studied in the light of several existing mixed systems that we
present in the following section. Finally, in the last section, we show how our char-
acteristic framework is useful for design, in the context of a new mixed system
that we designed and developed.
2.2 Illustrative Examples
For demonstrating the taxonomic power of our framework, we rely on existing
mixed systems. We purposely chose mixed systems that seemed similar at first
glance. Indeed, the selected mixed systems support interaction with objects on a
horizontal surface. These systems are from the literature (i.e., not designed using
our framework) and are therefore unbiased examples for evaluating the taxonomic
power of our framework.
NavRNA [2] is a system for interacting with RNA molecules. As shown in
Fig. 2.1 (left), biologists are gathered around a table equipped with a camera and
a projector. The camera captures the positions of the blue tokens that the users hold
and move in order to explore (i.e., move, turn, resize) the 2D view of RNA.
Fig. 2.1 NavRNA (left), the MIT Great Dome Phicon in the Tangible Geospace (center), the
reacTable (right)
Phicon [22] stands for Physical Icons. In the Tangible Geospace [22] (Fig. 2.1,
center), the Phicon is a tool shaped as the MIT Great Dome. Users hold and move
it on the table, where a map of the campus is projected. In this way, the location of
the Phicon on the table always corresponds to the location of the Dome on the map.
The reacTable [16] (Fig. 2.1, right) is used as a music synthesizer, where mixed
cubes and tokens represent the synthesizer modules. Users can directly touch the
surface with several fingers in order to interact. They can also hold and move the
objects and change their relative distance, orientation, and relations on the table in order to
control the synthesizer. When studying the reacTable, we consider only the interac-
tion with the objects. The table is augmented by a camera, which tracks the nature,
location, and orientation of the objects, and by a projector for displaying animations
corresponding to the state of the objects onto the surface.
Fig. 2.2 The music bottles (left); filling a drawing (a roof with tiles) with the Digital Desk (center);
and erasing a part of the drawing with the Digital Desk (right)
The music bottles (Fig. 2.2, left) are objects that are part of a music player system.
Each bottle contains a musical part. When a music bottle [15] is put on the table and
opened, the corresponding music part is played. In addition, rear-projected light
corresponding to pitch and volume is displayed underneath the bottle on the table.

The Digital Desk [25] is one of the first mixed systems and was partially devel-
oped. We consider the seminal drawing scenario (Fig. 2.2, center and right). The
user draws a house with a regular pen on a regular sheet of paper on a table equipped
with a camera and a projector. In Fig. 2.2 (center), the user starts drawing tiles on
the roof and then decides to use a “fill” paper button by pointing it toward the roof.
She then presses the paper button, which is sensed by the camera. Then the roof
is filled with tiles displayed by the projector. The resulting drawing is mixed, with
physical parts made by a pen and a projected digital part. In Fig. 2.2 (right), the
user erases projected tiles with a regular eraser, thanks to the camera.
Fig. 2.3 Digital pointing using an interaction device and a hand held display
The actuated workbench [19] is a table that embeds magnets. On this table, the
user or the system can manipulate pucks. The manipulation of a puck can be indirect
by using a trackball (Fig. 2.3, left) or direct by holding the puck (Fig. 2.3, center).
Fig. 2.4 The PICO system tries to have an equilateral triangle: as the user moves one puck, the
system changes its position in order to form an equilateral triangle
PICO [20] stands for “physical intervention in computational optimization.” The
system [20] is similar to the actuated workbench, with a table embedding magnets
and augmented by a camera and a projector above. The system computes the ideal
positions of the pucks on the table and the magnets automatically move them toward
these positions (Fig. 2.4). Furthermore the user can add physical constraints: For
example, in Fig. 2.3 (right), the puck cannot access the entire surface of the table.
2.3 Integrating Framework for Describing and Classifying
Mixed Systems
Focusing on the mixed objects involved in a mixed system, our integrating framework is
made of intrinsic and extrinsic characteristics of a mixed object. We first present the
modeling of a mixed object and the induced intrinsic characteristic framework. We
then place the mixed objects in their interaction context and present the modeling of
interaction with mixed objects. From this modeling of mixed interaction, we finally
describe the extrinsic characteristic framework.

2.3.1 Modeling of a Mixed Object
Mixed objects are hybrid objects that have a physical part, like a physical object, in addi-
tion to a digital part, like a digital object. For describing a mixed object, we consider
its physical and digital parts as well as the link between them. On the one hand, the
user interacts with the physical part, because users belong to the physical world. On
the other hand, the system can interact with the digital part of the object. Physical
properties are properties like shape, color, and weight, whereas digital properties are
properties like a digital image or a Boolean value.
Fig. 2.5 Our description of the music bottle
We describe the link between these properties with linking modalities and
draw the definition of a linking modality from that of an interaction modality
[24]: given that d is a physical device that acquires or delivers information and
l is an interaction language that defines a set of well-formed expressions that
convey meaning, an interaction modality is a pair (d, l), such as (camera, com-
puter vision). The two levels of abstraction (device, language) constitute the basis
of the definition of a linking modality. But in contrast to high-level interaction
modalities used by the user to interact with mixed environments, the low-level
modalities that define the link between physical and digital properties of an object
are called linking modalities. Figure 2.5 shows the two types of linking modal-
ities (i.e., input/output linking modalities) in the example of the music bottle.
An input linking modality allows the system to compute the presence and place-
ment of the bottle on the table. The musical part is made perceivable by the user,
through two combined output linking modalities. For the composition of the link-
ing modalities we reuse the CARE (complementarity, assignment, redundancy,
and equivalence [24]) properties: in the music bottle example, the composition
of the two different output linking modalities corresponds to a case of partial
redundancy.
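To make this modeling concrete, the following is a minimal sketch of how a mixed object and its linking modalities could be represented in code. It is an illustration only, not an implementation from the chapter: the class and field names (MixedObject, LinkingModality), the CARE enumeration values and the device and property strings chosen for the music bottle are assumptions of this sketch.

    from dataclasses import dataclass
    from enum import Enum
    from typing import List, Optional

    class CARE(Enum):
        # CARE composition properties for combining linking modalities [24];
        # "partial redundancy" is the case cited for the music bottle.
        COMPLEMENTARITY = "complementarity"
        ASSIGNMENT = "assignment"
        REDUNDANCY = "redundancy"
        EQUIVALENCE = "equivalence"
        PARTIAL_REDUNDANCY = "partial redundancy"

    @dataclass
    class LinkingModality:
        # A low-level (device, language) pair linking physical and digital properties.
        device: str    # physical device that acquires or delivers information
        language: str  # language giving meaning to the raw data

    @dataclass
    class MixedObject:
        # A hybrid object described by its physical part, digital part and their links.
        physical_properties: List[str]
        digital_properties: List[str]
        input_linking: List[LinkingModality]       # physical part -> digital part
        output_linking: List[LinkingModality]      # digital part -> physical part
        output_composition: Optional[CARE] = None  # how output modalities are combined

    # The music bottle of Fig. 2.5, roughly: presence and placement are acquired as
    # input; the musical part is made perceivable through two combined output modalities.
    music_bottle = MixedObject(
        physical_properties=["position on the table", "open/closed state"],
        digital_properties=["musical part", "pitch", "volume"],
        input_linking=[LinkingModality("sensing table", "presence and placement detection")],
        output_linking=[
            LinkingModality("loudspeakers", "music playback"),
            LinkingModality("projector", "light patterns for pitch and volume"),
        ],
        output_composition=CARE.PARTIAL_REDUNDANCY,
    )

With such a representation, the intrinsic characteristics discussed next can be read directly off the object description, while extrinsic characteristics depend on how the object is later placed in an interaction context.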
2.3.2 Mixed Object: Intrinsic Characterization
Based on the modeling of a mixed object, our intrinsic characteristic framework

applies to a mixed object without considering its context of use in a particular inter-
active mixed system. We consider related studies and show how our characterization
scheme unifies such approaches. Existing characteristics including affordance [18],
expected and sensed actions [4], characteristics of devices [6, 17] and languages
[5, 24], bounce-back physical properties [10], and some aspects of composition
of physical properties [13, 11] fit in our modeling of a mixed object. More inter-
estingly, this modeling leads us to identify new characteristics, such as generated
physical properties, acquired and materialized digital properties, bounce-back digi-
tal properties, and some aspects of composition of digital properties. This clearly
states our contribution: we organize various existing characteristics into a single
unifying framework and we further identify new characteristics.
Based on our modeling of a mixed object, we present our integrating framework
by starting with the characteristics of the linking modalities. We then consider the
characteristics that apply to the physical and digital properties.
2.3.2.1 Characteristics of the Linking Modalities (Devices and Languages)
As our approach capitalizes on existing studies, we reuse the results from mul-
timodal interaction studies for characterizing the two levels of abstraction of a
linking modality. Taxonomies of devices [6, 17] are applied to characterize input
and output linking devices. Frameworks described in [5, 24] can also be applied to the
linking languages: a language can be static or dynamic, linguistic or not, ana-
logue or not (similarity with the real world or not), arbitrary or not (need to be
learned or not), deformed or not (like “how r u?” as opposed to “how are you?”),
local or global (only a subset of the information or all of the information is con-
veyed). Our framework also allows study of the relationship between devices and
