SMART ENVIRONMENTS
TECHNOLOGIES, PROTOCOLS,
AND APPLICATIONS
Diane J. Cook and Sajal K. Das
SMART ENVIRONMENTS
WILEY SERIES ON PARALLEL AND DISTRIBUTED COMPUTING
SERIES EDITOR: Albert Y. Zomaya
Parallel & Distributed Simulation Systems / Richard Fujimoto
Surviving the Design of Microprocessor and Multimicroprocessor
Systems: Lessons Learned / Veljko Milutinovic
Mobile Processing in Distributed and Open Environments / Peter Sapaty
Introduction to Parallel Algorithms / C. Xavier and S.S. Iyengar
Solutions to Parallel and Distributed Computing Problems: Lessons
from Biological Sciences / Albert Y. Zomaya, Fikret Ercal,
and Stephan Olariu (Editors)
New Parallel Algorithms for Direct Solution of Linear Equations /
C. Siva Ram Murthy, K.N. Balasubramanya Murthy, and Srinivas Aluru
Practical PRAM Programming / Joerg Keller, Christoph Kessler,
and Jesper Larsson Traeff
Computational Collective Intelligence / Tadeusz M. Szuba
Parallel & Distributed Computing: A Survey of Models, Paradigms,
and Approaches / Claudia Leopold
Fundamentals of Distributed Object Systems: A CORBA
Perspective / Zahir Tari and Omran Bukhres
Pipelined Processor Farms: Structured Design for Embedded Parallel
Systems / Martin Fleury and Andrew Downton
Handbook of Wireless Networks and Mobile Computing /
Ivan Stojmenović (Editor)
Internet-Based Workflow Management: Toward a Semantic Web /
Dan C. Marinescu
Parallel Computing on Heterogeneous Networks / Alexey L. Lastovetsky
Tools and Environments for Parallel and Distributed Computing Tools /
Salim Hariri and Manish Parashar
Distributed Computing: Fundamentals, Simulations, and Advanced Topics,
Second Edition / Hagit Attiya and Jennifer Welch
Smart Environments: Technologies, Protocols, and Applications / Diane J. Cook
and Sajal K. Das (Editors)
SMART ENVIRONMENTS
TECHNOLOGIES, PROTOCOLS,
AND APPLICATIONS
Diane J. Cook and Sajal K. Das
This book is printed on acid-free paper.
Copyright © 2005 by John Wiley & Sons, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form
or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as
permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior
written permission of the Publisher, or authorization through payment of the appropriate per-copy fee
to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400,
fax 978-646-8600, or on the web at www.copyright.com. Requests to the Publisher for permission should
be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken,
NJ 07030, (201) 748-6011, fax (201) 748-6008.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts
in preparing this book, they make no representations or warranties with respect to the accuracy or
completeness of the contents of this book and specifically disclaim any implied warranties of
merchantability or fitness for a particular purpose. No warranty may be created or extended by sales
representatives or written sales materials. The advice and strategies contained herein may not be suitable
for your situation. You should consult with a professional where appropriate. Neither the publisher
nor author shall be liable for any loss of profit or any other commercial damages, including but not
limited to special, incidental, consequential, or other damages.
For general information on our other products and services please contact our Customer Care Department
within the U.S. at 877-762-2974, outside the U.S. at 317-572-3993 or fax 317-572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print,
however, may not be available in electronic format.
Library of Congress Cataloging-in-Publication Data is available.
ISBN 0-471-54448-5
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
To my parents, Gilbert and Nancy Cook, for their love, support, and
inspiration.
—Diane
To my parents, Baidyanath and Bimala Das, for their love
and passion for education.
—Sajal
CONTENTS
Contributors ix
Foreword xi
Howard E. Shrobe
Acknowledgments xvii
PART 1. INTRODUCTION 1
1. Overview 3
Diane J. Cook and Sajal K. Das
PART 2. TECHNOLOGIES FOR SMART ENVIRONMENTS 11
2. Wireless Sensor Networks 13
Frank L. Lewis
3. Power Line Communication Technologies 47
Haniph A. Latchman and Anuj V. Mundi
4. Wireless Communications and Pervasive Technology 63
Marco Conti
5. Middleware 101
G. Michael Youngblood
6. Home Networking and Appliances 129
Dave Marples and Stan Moyer
PART 3. ALGORITHMS AND PROTOCOLS FOR SMART
ENVIRONMENTS 151
7. Designing for the Human Experience in Smart Environments 153
Gregory D. Abowd and Elizabeth D. Mynatt
8. Prediction Algorithms for Smart Environments 175
Diane J. Cook
9. Location Estimation (Determination and Prediction)
Techniques in Smart Environments 193
Archan Misra and Sajal K. Das
10. Automated Decision Making 229
Manfred Huber
11. Security, Privacy and Trust Issues in Smart Environments 249
P.A. Nixon, W. Wagealla, C. English, and S. Terzis
PART 4. APPLICATIONS 271
12. Lessons from an Adaptive Home 273
Michael C. Mozer
13. Smart Rooms 295
Alvin Chen, Richard Muntz, and Mani Srivastava
14. Smart Offices 323
Christophe Le Gal
15. Perceptual Environments 345
Alex Pentland
16. Assistive Environments for Individuals with Special Needs 361
Abdelsalam Helal, William C. Mann, and Choonhwa Lee
PART 5. CONCLUSIONS 385
17. Ongoing Challenges and Future Directions 387
Sajal K. Das and Diane J. Cook
Index 393
CONTRIBUTORS
Gregory D. Abowd College of Computing and GVU Center, Georgia Institute
of Technology, 801 Atlantic Drive, Atlanta, GA 30332-0280
Alvin Chen Electrical Engineering Department, University of California at Los
Angeles, 7702-B, Boelter Hall, Box 951594, Los Angeles, CA 90095-1594
Marco Conti National Research Council, Istituto di Informatica e Telematica,
Room B.63, Via G. Moruzzi, 1, 56124 Pisa, Italy
Diane J. Cook Department of Computer Science and Engineering, The University
of Texas at Arlington, Box 19015, Arlington, TX 76019
Sajal K. Das CReWMaN, Department of Computer Science and Engineering, The
University of Texas at Arlington, Box 19015, Arlington, TX 76019
C. English Department of Computer and Information Sciences, The University of
Strathclyde, Livingstone Tower, 26 Richmond Street, Glasgow G1 1XQ, Scotland
Abdelsalam Helal CISE Department, University of Florida, 448 Computer
Science Engineering Building, Gainesville, FL 32611
Manfred Huber Department of Computer Science and Engineering, The
University of Texas at Arlington, Box 19015, Arlington, TX 76019
Haniph A. Latchman Electrical and Computer Engineering Department,
University of Florida, NEB 463—P.O. Box 116130, Gainesville, FL 32611-6130
Choonhwa Lee CISE Department, University of Florida, 448 Computer Science
Engineering Building, Gainesville, FL 32611
Christophe Le Gal PRIMA Group, GRAVIR Lab, INRIA, Joanneum Research,
Institute of Digital Image Processing, Wastiangasse 6, A-8010 Graz, Austria
Frank L. Lewis ARRI, The University of Texas at Arlington, Arlington, TX
76019
William C. Mann CISE Department, 448 Computer Science Engineering
Building, University of Florida, Gainesville, FL 32611
Dave Marples Telcordia Technologies, Inc., RRC-1A361, One Telcordia Drive,
Piscataway, NJ 08854
Archan Misra Pervasive Security and Networking Department, IBM T.J. Watson
Research Center, 19 Skyline Drive, Hawthorne, NY 10532
Stan Moyer Telcordia Technologies, Inc., RRC-1A361, One Telcordia Drive,
Piscataway, NJ 08854
Michael C. Mozer Department of Computer Science, University of Colorado,
Regent Road and Colorado Avenue, Boulder, CO 80309-0430
Anuj V. Mundi Electrical and Computer Engineering Department, University of
Florida, NEB 463—P.O. Box 116130, Gainesville, FL 32611-6130
Richard Muntz Electrical Engineering Department, University of California at
Los Angeles, 7702-B, Boelter Hall, Box 951594, Los Angeles, CA 90095-1594
Elizabeth D. Mynatt Georgia Institute of Technology, College of Computing,
801 Atlantic Drive, Atlanta, GA 30332-0280
Paddy Nixon Department of Computer and Information Sciences, The University
of Strathclyde, Livingstone Tower, 26 Richmond Street, Glasgow G1 1XQ,
Scotland
Alex Pentland The Media Laboratory, Massachusetts Institute of Technology,
Wiesner Building, 20 Ames Street, Cambridge, MA 02139-4307
Mani Srivastava Electrical Engineering Department, University of California at
Los Angeles, 7702-B, Boelter Hall, Box 951594, Los Angeles, CA 90095-1594
Howard E. Shrobe Artificial Intelligence Laboratory, Massachusetts Institute of
Technology, Cambridge, MA 02139
S. Terzis Department of Computer and Information Sciences, The University of
Strathclyde, Livingstone Tower, 26 Richmond Street, Glasgow G1 1XQ,
Scotland
W. Wagealla Department of Computer and Information Sciences, The University
of Strathclyde, Livingstone Tower, 26 Richmond Street, Glasgow G1 1XQ,
Scotland
G. Michael Youngblood Department of Computer Science and Engineering, The
University of Texas at Arlington, Box 19015, Arlington, TX 76019
FOREWORD
HOWARD E. SHROBE
MIT Computer Science and Artificial Intelligence Laboratory
In 1991, Mark Weiser described his vision of an emerging world of pervasive,
embedded computation. He predicted “a physical world that is richly and invisibly
interwoven with sensors, actuators, displays, and computational elements, embed-
ded seamlessly in the everyday objects of our lives and connected through a continu-
ous network.” This vision is becoming a reality: the ever-increasing availability of
inexpensive computation and storage has introduced computers into nearly every
facet of our everyday lives, while a revolution in communications has brought
high-bandwidth communications into our homes and offices. Wireless communi-
cations also has exploded, making digital services available nearly everywhere.
But what is the nature of this revolution in technology? How will it impact our
lives? And what new technical challenges will it present? The ubiquity of compu-
tation and communication is not the only manifestation of the revolution. Much
of this emerging computation is embedded: the processors in your phones, cars, per-
sonal digital assistants (PDAs), and home appliances. Increasingly, these embedded
computers are acting in concert with other computational elements as part of a larger
ensemble. Thus, we have processors at one end of the spectrum providing megahertz
cycle rates and a few kilobytes of memory, while at the other end we have machines
providing gigahertz cycle rates, gigabytes of primary storage, and terabytes of
persistent storage. Across every dimension of interest—processor power, primary
memory, persistent storage, communications bandwidth, and display capabili-
ties—we witness a variability of at least three orders of magnitude. This broad
span of capabilities represents a new computational framework, particularly when
we realize that the ubiquity of communications bandwidths often makes it possible
to locate computational tasks at whatever point in this hierarchy makes the most
sense. This represents a radically new framework for distributed and mobile
computation.
A second striking new feature of the emerging ubiquitous computing environ-
ment is the mobility of the user. We are already beginning to see the convergence
of a variety of technologies, all of which serve as personal computational acces-
sories: Internet-capable smart cell phones, wireless-enabled PDAs, and music
players, such as the Apple iPod, that move with the user but in one way or another
are tapped into the pervasive communications and computing environment. In the
past, the fact that most people used only a single desktop computer led to a struggle
over what would occupy that critical desktop position; now, the fact that most people
are willing to carry at most one mobile device (e.g., a PDA, cell phone, music player)
is leading to a struggle over what will occupy the critical “belt loop” position and
over what networks that single device on the user’s belt will link into the broader
computational world. However this plays out, it is still the case that in most
places where the user lives and works, far more abundant computational resources
are built in. Thus, the mobility of the user raises many more questions about how we
can dynamically link the limited computing power that travels with the user to the
much vaster computational power that is present in the environment.
A third new feature is that these systems never stop. They are not used to “run a
job” and then shut down; they are always on, always available. Our normal model of
upgrading the software in a system consists of taking a system down, installing
upgrades, and then rebooting, but this model ill fits the components of a ubiquitous
computing environment. Instead, these systems need to evolve in place, with new
software being installed while they are running. In general, ubiquitous computation
will be far more dynamic and evolutionary. As we build systems that are intended to
last for very long periods of time, it becomes necessary to recognize that we can’t
anticipate all future issues at design time, but will instead need to make many
more decisions at run-time, to allow the systems to learn from their own experience
and to adopt a philosophy of “delayed binding”.
A fourth new feature is that the computational nodes we are considering are often
equipped with sensors and effectors. They are embedded in the physical world with
which they interact constantly. Previously, this type of embedded computing was the
province of the specialized subfield of real-time controllers; indeed, many of our
current embedded computing components have emerged from the world of control
systems. But they are now being asked to perform a different role: sensing and acting
on behalf of the user and performing more human-like tasks.
Finally, the ubiquitous computing revolution involves constant human-computer
interaction. And it is this feature that is most crucial. The challenge we face is to make
this revolution wear a human face, to focus on human-centered ubiquitous comput-
ing. Michael Dertouzos reflected on the emerging ubiquitous computing paradigm
with a certain degree of horror. He observed that VCRs, cash registers, and ATMs
all represent the computerization of common, everyday tasks, but that the way in
which computers were deployed led to needless inflexibility, unintuitive interactions,
the inability of even experts to do simple things, and general dehumanization. Com-
putation is increasingly being used to eliminate certain kinds of jobs that were pre-
viously done by people with a certain degree of expertise (e.g., phone operators)
and is making those tasks part of the everyday burden on the rest of us, who must
now acquire some of that expertise (anyone who has tried to make a long-distance
call from certain foreign countries will understand this perfectly); sometimes this
makes life easier, but often it doesn’t. If the new technology fails to meet us
humans at least partway, then things get a great deal worse, as Dertouzos observed.
What if ubiquitous computing were to bring us the ubiquitous need to interact with
systems as unpleasant as early VCRs and phone menu systems?
Surely we can do better. It is worth observing that Moore’s law (that comput-
ing doubles in capability roughly every 18 months) is fixing virtually everything but
us humans; unfortunately, we don’t scale with silicon densities. This means that
increasingly the critical resources are human time, attention, and decision-making
ability. We used to think of computational resources as scarce and shaped the inter-
face to make the computer’s life easier; now we must do the reverse. We have abun-
dant computational resources; we need to shape the interface to make the human’s
jobs easier.
Research on smart environments is intended to address this issue. Smart environ-
ments combine perceptual and reasoning capabilities with the other elements of
ubiquitous computing in an attempt to create a human-centered system that is
embedded in physical spaces. Perceptual capabilities allow the system to situate
itself within the world of human discourse. Reasoning capabilities allow the
system to behave flexibly and adaptively as the context changes and as resources
become more or less available.
We can identify an agenda of challenges to be met if we are to make such intel-
ligent environments the norm:
• We must provide a comprehensive infrastructure and a firm computational foun-
dation for the style of distributed computing that ubiquitous computing
requires. This includes protocols for both wired and wireless communications
media and middleware for distributed computing, agent systems, and the like.
• We must develop frameworks that allow systems to respond adaptively to user
(and internal) requests. This would include the ability to dynamically discover
new resources; the ability to choose from among a set of alternative plans for
achieving a goal in light of the task context and the availability of critical
resources; the ability to recognize, diagnose, and recover from failures; the abil-
ity to generate new plans; and the ability of the system to learn from its experi-
ences and to improve its own performance.
• We must develop frameworks that allow us to integrate information from many
perceptual sources, to make sense of these inputs, and to do so even in the pre-
sence of sensor failures and noise.
• We must provide the systems with extensive knowledge of the human world. In
particular, these systems must be capable of reasoning about how we think of
space (e.g., that floors in a building are a significant organizational construct or
that a desk establishes a work zone separate from the space around it in an
office), organizational structures, tasks, projects, etc. This is a huge challenge,
which in its largest form constitutes teaching our systems all of commonsense
knowledge. However, in practice, we can focus on particular domains of appli-
cation, such as office environments, home environments, and cars, which are
more bounded, although still enormously challenging.
• We must develop notions of context that help the system ground its reactions to
the events going on around it. In practice, much of the research on context has
focused on location. This research has been largely motivated by a concern for
the mobile user, for whom location is indeed a strong indicator of context.
Many technologies have been developed to help a mobile computer know
where it is (e.g., the Global Positioning System in the outside environment;
badge readers, beacons, and the like for the inside environment), but there is
still much to be done in this area. Within individual spaces, location is also a
strong cue to context because where you are in a room is often a strong cue
to what you’re going to do. If I’m sitting at my desk, then I’m doing one
type of activity, while if I’m stretched out on my couch, I’m likely to be
doing another. Systems with cameras and machine vision have been developed
that can track people’s locations within an individual space and make such
inferences about what they’re doing. Finally, we note that context is a much
broader notion than simply one’s location. Task context, in particular, rep-
resents another important component of context that has been relatively little
explored. To further complicate matters, most people are in more than one
task context at any particular time. Much more needs to be done in this area.
Context plays two critical roles. First, it helps to determine how the system
should respond to an event; if a person walks into a dark room, then turning
the lights on makes sense unless there is a group of people in the room watching
a movie. Second, context establishes perceptual bias, helping to disambiguate
perceptual signals. One would make very different sense of the phonemes in
“recognize speech” if one were instead talking about environmental disasters
that could “wreck a nice beach.”
• We must develop techniques to restructure the human-computer interface along
human-centered lines. In many cases, this will mean replacing conventional
keyboard and pointer interfaces by speech recognition, machine vision, natural
language understanding, sketch recognition, and other modes of communi-
cation that are natural to people. The best interface is often the one that you
need not notice at all (as Weiser observed), so unnatural use of perceptual inter-
faces could be as bad as the thing they are meant to replace. I believe that per-
ceptual interfaces are part of the answer, but the overall goal is to make the
computers seem as natural to interact with as another person. Sometimes this
means that the system should have no interface; it should just recognize
what’s going on and do the right thing. At other times, it means that the
system should engage in a dialogue with a person. No single metaphor, such
as the desktop as a metaphor for the personal computer, governs the range of
interactions that are required. Rather, we want a system that is truly human-cen-
tered and natural to interact with; this requires not just perception but also a sig-
nificant understanding of the semantics of the everyday world and the reasoning
capabilities to use this understanding flexibly.
• Finally, we must develop techniques for providing guarantees of security and
privacy. I mention this point last because it often occurs as an afterthought in
any system design. But in the context of perceptually enabled, intelligent
environments, security and privacy are make-or-break issues. We cannot
deploy perceptually based systems broadly until we can realistically promise
people that their privacy will be respected, that the information gathered will
not be used to their detriment, and that the systems are secure against pen-
etration. The issue, however, is quite complex. Generally speaking, technol-
ogies that protect security and privacy tend to work against convenience;
even in conventional computing systems, most people don’t take even basic
precautions because the benefit doesn’t seem worth the bother. In pervasive
computing systems, the issues become even more complex because the range
of interacting parties is both extremely broad and quite dynamic. We will
need to develop techniques for structuring ubiquitous computing environments
into domains or societies that represent individual entities in the real world
(e.g., a person or a particular space) and for then clustering these into larger
aggregations reflecting social and physical organization. It will be necessary
to build access controls into the resource discovery protocols to reflect owner-
ship and control. I shouldn’t be able to discover that you have a projector in
your office and then simply allocate it for my use (which would happen in
many of our current resource discovery models). Instead, I should have to nego-
tiate with you to obtain access. In addition, we need to recognize that most
access control systems are too rigid and lack contextual sensitivity. For
example, for reasons of privacy, I might have a rule that says I don’t want
my location to be divulged; however, if someone in my family were injured,
my privacy would suddenly matter much less to me. So we will ultimately
need to treat privacy and security with the same contextual sensitivity that
we do almost all other decision making in intelligent environments.
These are significant challenges, and most of them will not be dealt with comple-
tely for many years. However, they are challenges that need to be taken up. Many of
the chapters in this book address these issues and represent significant first steps on a
march of many miles.
ACKNOWLEDGMENTS
Creating a book that describes a multidisciplinary area of rapid growth, such as
smart environments, is a challenge. The result has been a collaborative effort of academic
and industry researchers from around the world. The authors of this edited
book have been exceptional at exchanging ideas and initiating collaborations as
well as contributing chapters based on their own research efforts, and we thank
them for their fine contributions. We also would like to thank Kirsten Rohstedt
and Val Moliere of John Wiley for their assistance. We recognize the impact of
the faculty and staff at the University of Texas at Arlington on this work and
thank them for their support. We gratefully acknowledge the support of NSF ITR
grants that made our research programs so exciting. Finally, we dedicate this
book to our families, who increased in number during the creation of this book
and gave generously of their time and encouragement.
Please visit our web site.
PART 1
INTRODUCTION
CHAPTER 1
Overview
DIANE J. COOK and SAJAL K. DAS
Department of Computer Science and Engineering
The University of Texas at Arlington
This book is about technologies and standards for smart environments. Smart
environments link computers to everyday settings and commonplace tasks. The
desire to create smart environments has existed for decades, and recent advances
in such areas as pervasive computing, machine learning, and wireless and sensor net-
working now allow this dream to become a reality. In this book we introduce the
necessary technologies, architectures, algorithms, and protocols to build a smart
environment and describe a variety of existing smart environment applications.
A smart environment is a small world where all kinds of smart devices are con-
tinuously working to make inhabitants’ lives more comfortable. A definition of
smart or intelligent is the ability to autonomously acquire and apply knowledge,
while environment refers to our surroundings. We therefore define a smart environ-
ment as one that is able to acquire and apply knowledge about an environment and
also to adapt to its inhabitants in order to improve their experience in that environ-
ment. A schema of smart environments is presented in Figure 1.1.
The type of experience that individuals desire from their environment varies with the
individual and the type of environment. They may wish the environment to ensure the
safety of its inhabitants, they may want to reduce the cost of maintaining the environ-
ment, or they may want to automate tasks that are typically performed in the environ-
ment. The expectations of such environments have evolved with the history of the field.
1.1 FEATURES OF SMART ENVIRONMENTS
1.1.1 Remote Control of Devices
The most basic feature of smart environments is the ability to control devices remo-
tely or automatically. Powerline control systems have been available for decades,
and basic controls offered by X10 can be easily purchased and installed. By plugging
devices into such a controller, inhabitants of an environment can turn lights, coffee
makers, and other appliances on or off in much the same way that couch potatoes
switch television stations with a remote control (Figure 1.2). Computer software
can additionally be employed to program sequences of device activities and to cap-
ture device events executed by the powerline controllers.
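To make this concrete, the short sketch below (not taken from the book) shows how such a programmed sequence of device activities might look in Python. The PowerlineController class and its command format are hypothetical placeholders standing in for an X10-style interface; an actual installation would use the controller vendor's own protocol and addressing scheme.

import time

class PowerlineController:
    """Hypothetical stand-in for an X10-style powerline control interface."""

    def send(self, house_code, unit, command):
        # A real implementation would encode the command and write it to the
        # controller hardware (e.g., over a serial port); here we only log it.
        print(f"{house_code}{unit} -> {command}")

def evening_routine(controller):
    """A programmed sequence of device activities, as described above."""
    controller.send("A", 1, "ON")    # living room lamp on
    controller.send("A", 2, "OFF")   # coffee maker off
    time.sleep(1)                    # stagger commands on the powerline
    controller.send("B", 1, "ON")    # porch light on

if __name__ == "__main__":
    evening_routine(PowerlineController())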
With this capability, inhabitants are freed from the requirement of physical
access to devices. The individual with a disability can control devices from a dis-
tance, as can the person who realized when he got to work that he left the sprinklers
on. Automated lighting sequences can give the impression that an environment is
occupied while the inhabitants are gone, and basic routine procedures can be exe-
cuted by the environment with minimal intervention.
1.1.2 Device Communication
With the maturing of wireless technology and communication middleware, smart
environment designers and inhabitants have been able to raise their standards and
expectations. In particular, devices use these technologies to communicate with
each other, sharing data to build a more informed model of the state of the environ-
ment and the inhabitants, and retrieving information from outside sources over the
Internet or wireless communication infrastructure to respond better to current state
and needs.
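As an illustration only, the following Python sketch mimics this style of device communication with a tiny in-process publish/subscribe bus. The EventBus and EnvironmentModel names are invented for the example; a deployed smart environment would typically rely on communication middleware of the kind discussed in Chapter 5.

from collections import defaultdict

class EventBus:
    """Tiny in-process publish/subscribe hub standing in for real middleware."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:
            handler(message)

class EnvironmentModel:
    """Aggregates reports from many devices into one view of the environment."""

    def __init__(self, bus):
        self.state = {}
        bus.subscribe("sensor/update", self.on_update)

    def on_update(self, message):
        self.state[message["device"]] = message["value"]

if __name__ == "__main__":
    bus = EventBus()
    model = EnvironmentModel(bus)
    bus.publish("sensor/update", {"device": "thermostat", "value": 21.5})
    bus.publish("sensor/update", {"device": "front_door", "value": "closed"})
    print(model.state)   # {'thermostat': 21.5, 'front_door': 'closed'}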
Figure 1.1 Schematic view of smart environments.